
[Xen-changelog] [xen master] docs: Move misc README's into docs/misc/



commit c85d3d1d98a6e55a9f4bc55db03bdff3e6bbd796
Author:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
AuthorDate: Wed Aug 26 09:15:20 2015 +0000
Commit:     Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
CommitDate: Thu Aug 27 19:14:17 2015 +0100

    docs: Move misc README's into docs/misc/
    
    To live with the other documentation.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
---
 docs/misc/stubdom.txt |   93 ++++++++++++++++++++++++++++++++++++++++
 docs/misc/xenmon.txt  |  114 +++++++++++++++++++++++++++++++++++++++++++++++++
 stubdom/Makefile      |    6 +--
 stubdom/README        |   93 ----------------------------------------
 tools/xenmon/Makefile |    2 -
 tools/xenmon/README   |  114 -------------------------------------------------
 6 files changed, 208 insertions(+), 214 deletions(-)

diff --git a/docs/misc/stubdom.txt b/docs/misc/stubdom.txt
new file mode 100644
index 0000000..de7b6c7
--- /dev/null
+++ b/docs/misc/stubdom.txt
@@ -0,0 +1,93 @@
+                                IOEMU stubdom
+                                =============
+
+  This boosts HVM performance by putting ioemu in its own lightweight domain.
+
+General Configuration
+=====================
+
+Due to a race between the creation of the IOEMU stubdomain itself and the
+allocation of video memory for the HVM domain, you need to avoid any need for
+ballooning, for instance by using the hypervisor dom0_mem= option.
+
+Using with XL
+-------------
+
+To enable IOEMU stub domains, set the following in your domain
+config:
+
+    device_model_stubdomain_override = 1
+
+See xl.cfg(5) for more details of the xl domain configuration syntax
+and http://wiki.xen.org/wiki/Device_Model_Stub_Domains for more
+information on device model stub domains.
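+
+For instance, a minimal HVM guest configuration using a stub domain device
+model could look like the following sketch (guest name, memory size and disk
+path are illustrative only, not taken from this commit; see xl.cfg(5) for the
+authoritative option names):
+
+    name    = "hvm-guest"
+    builder = "hvm"
+    memory  = 1024
+    disk    = [ 'phy:/dev/vg0/hvm-guest,hda,w' ]
+    device_model_stubdomain_override = 1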
+
+
+                                   PV-GRUB
+                                   =======
+
+  This replaces pygrub to boot domU images safely: it runs the regular grub
+inside the created domain itself and uses regular domU facilities to read the
+disk, fetch files from the network, etc.; it eventually loads the PV kernel
+and chain-boots it.
+
+Configuration
+=============
+
+In your PV config,
+
+- use pv-grub.gz as kernel:
+
+kernel = "pv-grub.gz"
+
+- set the path to menu.lst, as seen from the domU, in extra:
+
+extra = "(hd0,0)/boot/grub/menu.lst"
+
+or you can provide the content of a menu.lst stored in dom0 by passing it as a
+ramdisk:
+
+ramdisk = "/boot/domU-1-menu.lst"
+
+or you can also use a tftp path (dhcp will be automatically performed):
+
+extra = "(nd)/somepath/menu.lst"
+
+or you can set it in option 150 of your dhcp server and leave extra and ramdisk
+empty (dhcp will be automatically performed).
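+
+Putting these pieces together, a complete PV-GRUB guest configuration might
+look like the following sketch (guest name, memory size and disk path are
+illustrative; it assumes the exported disk contains a partition table with
+/boot on the first partition):
+
+name   = "pv-guest"
+memory = 512
+kernel = "pv-grub.gz"
+extra  = "(hd0,0)/boot/grub/menu.lst"
+disk   = [ 'phy:/dev/vg0/pv-guest,xvda,w' ]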
+
+Limitations
+===========
+
+- You cannot boot a 64-bit kernel with a 32-bit-compiled PV-GRUB, and
+vice-versa.  To cross-compile a 32-bit PV-GRUB:
+
+export XEN_TARGET_ARCH=x86_32
+
+- bootsplash is supported, but the ioemu backend does not yet support restart
+for use by the booted kernel.
+
+- PV-GRUB doesn't support virtualized partitions. For instance:
+
+disk = [ 'phy:hda7,hda7,w' ]
+
+will be seen by PV-GRUB as (hd0), not (hd0,6), since GRUB will not see any
+partition table.
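+
+If you need PV-GRUB to see individual partitions, export the whole disk
+instead, for example:
+
+disk = [ 'phy:hda,hda,w' ]
+
+so that PV-GRUB sees the partition table and can refer to hda7 as (hd0,6).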
+
+
+                                Your own stubdom
+                                ================
+
+  By running
+
+cd stubdom/
+make c-stubdom
+
+  or
+
+cd stubdom/
+make caml-stubdom
+
+  you can compile examples of C or caml stub domain kernels.  You can use these
+and the relevant Makefile rules as a basis to build your own stub domain
+kernel.  Available libraries are libc, libxc, libxs, zlib and libpci.
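+
+As an illustration, a trivial C stub domain kernel can be little more than the
+following sketch (this is not the exact example shipped in the tree, just a
+minimal program for the Mini-OS/newlib environment that "make c-stubdom" sets
+up):
+
+#include <stdio.h>
+#include <unistd.h>
+
+int main(void)
+{
+    /* A stub domain kernel is just a program linked against Mini-OS. */
+    printf("Hello from a stub domain!\n");
+
+    /* Keep the domain alive rather than exiting immediately. */
+    for (;;)
+        sleep(2);
+}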
diff --git a/docs/misc/xenmon.txt b/docs/misc/xenmon.txt
new file mode 100644
index 0000000..3393f5b
--- /dev/null
+++ b/docs/misc/xenmon.txt
@@ -0,0 +1,114 @@
+Xen Performance Monitor
+-----------------------
+
+The xenmon tools make use of the existing xen tracing feature to provide
+fine-grained reporting of various domain-related metrics. It should be
+stressed that the xenmon.py script included here is just an example of the
+data that may be displayed. The xenbaked daemon keeps a large amount of
+history in a shared memory area that may be accessed by tools such as xenmon.
+
+For each domain, xenmon reports various metrics. One part of the display is a
+group of metrics that have been accumulated over the last second, while another
+part of the display shows data measured over 10 seconds. Other measurement
+intervals are possible, but we have just chosen 1s and 10s as an example.
+
+
+Execution Count
+---------------
+ o The number of times that a domain was scheduled to run (ie, dispatched) over
+ the measurement interval
+
+
+CPU usage
+---------
+ o Total time used over the measurement interval
+ o Usage expressed as a percentage of the measurement interval
+ o Average cpu time used during each execution of the domain
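+
+As a rough sketch (not the actual xenmon.py code), the second and third
+figures are derived from the first as:
+
+    cpu_usage_pct     = total_cpu_time / measurement_interval * 100
+    avg_time_per_exec = total_cpu_time / execution_count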
+
+
+Waiting time
+------------
+This is how much time the domain spent waiting to run, or put another way, the
+amount of time the domain spent in the "runnable" state (or on the run queue)
+but not actually running. Xenmon displays:
+
+ o Total time waiting over the measurement interval
+ o Wait time expressed as a percentage of the measurement interval
+ o Average waiting time for each execution of the domain
+
+Blocked time
+------------
+This is how much time the domain spent blocked (or sleeping); Put another way,
+the amount of time the domain spent not needing/wanting the cpu because it was
+waiting for some event (ie, I/O). Xenmon reports:
+
+ o Total time blocked over the measurement interval
+ o Blocked time expressed as a percentage of the measurement interval
+ o Blocked time per I/O (see I/O count below)
+
+Allocation time
+---------------
+This is how much cpu time was allocated to the domain by the scheduler; This is
+distinct from cpu usage since the "time slice" given to a domain is frequently
+cut short for one reason or another, ie, the domain requests I/O and blocks.
+Xenmon reports:
+
+ o Average allocation time per execution (ie, time slice)
+ o Min and Max allocation times
+
+I/O Count
+---------
+This is a rough measure of I/O requested by the domain. The number of page
+exchanges (or page "flips") between the domain and dom0 is counted. The
+number of pages exchanged may not accurately reflect the number of bytes
+transferred to/from a domain due to partial pages being used by the network
+protocols, etc. But it does give a good sense of the magnitude of I/O being
+requested by a domain. Xenmon reports:
+
+ o Total number of page exchanges during the measurement interval
+ o Average number of page exchanges per execution of the domain
+
+
+Usage Notes and issues
+----------------------
+ - Start xenmon by simply running xenmon.py; The xenbaked daemon is started
+   and stopped automatically by xenmon.
+ - To see the various options for xenmon, run xenmon -h. Ditto for xenbaked.
+ - xenmon also has an option (-n) to output log data to a file instead of the
+   curses interface.
+ - NDOMAINS is defined to be 32, but can be changed by recompiling xenbaked
+ - Xenmon.py appears to create 1-2% cpu overhead; Part of this is just the
+   overhead of the python interpreter. Part of it may be the number of trace
+   records being generated. The number of trace records generated can be
+   limited by setting the trace mask (with a dom0 Op), which controls which
+   events cause a trace record to be emitted.
+ - To exit xenmon, type 'q'
+ - To cycle the display to other physical cpu's, type 'c'
+ - The first time xenmon is run, it attempts to allocate xen trace buffers
+   using a default size. If you wish to use a non-default value for the
+   trace buffer size, run the 'setsize' program (located in tools/xentrace)
+   and specify the number of memory pages as a parameter. The default is 20;
+   see the example after this list.
+ - Not well tested with domains using more than 1 virtual cpu
+ - If you create a lot of domains, or repeatedly kill a domain and restart it,
+   and the domain id's get to be bigger than NDOMAINS, then xenmon behaves badly.
+   This is a bug that is due to xenbaked's treatment of domain id's vs. domain
+   indices in a data array. Will be fixed in a future release; Workaround:
+   Increase NDOMAINS in xenbaked and rebuild.
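+
+For example, to use trace buffers of 40 pages instead of the default 20
+(assuming the tools have been built in place in the source tree):
+
+   cd tools/xentrace
+   ./setsize 40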
+
+Future Work
+-----------
+o RPC interface to allow external entities to programmatically access processed data
+o I/O Count batching to reduce number of trace records generated
+
+Case Study
+----------
+We have written a case study which demonstrates some of the usefulness of
+this tool and the metrics reported. It is available at:
+http://www.hpl.hp.com/techreports/2005/HPL-2005-187.html
+
+Authors
+-------
+Diwaker Gupta   <diwaker.gupta@xxxxxx>
+Rob Gardner     <rob.gardner@xxxxxx>
+Lucy Cherkasova <lucy.cherkasova.hp.com>
+
diff --git a/stubdom/Makefile b/stubdom/Makefile
index faa7c21..e1359cf 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -463,15 +463,11 @@ xenstore-stubdom: mini-os-$(XEN_TARGET_ARCH)-xenstore libxc xenstore
 #########
 
 ifeq ($(STUBDOM_SUPPORTED),1)
-install: $(STUBDOMPATH) install-readme $(STUBDOM_INSTALL)
+install: $(STUBDOMPATH) $(STUBDOM_INSTALL)
 else
 install: $(STUBDOMPATH)
 endif
 
-install-readme:
-       $(INSTALL_DIR) $(DESTDIR)$(docdir)
-       $(INSTALL_DATA) README $(DESTDIR)$(docdir)/README.stubdom
-
 install-ioemu: ioemu-stubdom
        $(INSTALL_DIR) "$(DESTDIR)$(LIBEXEC_BIN)"
        $(INSTALL_PROG) stubdom-dm "$(DESTDIR)$(LIBEXEC_BIN)"
diff --git a/stubdom/README b/stubdom/README
deleted file mode 100644
index de7b6c7..0000000
--- a/stubdom/README
+++ /dev/null
@@ -1,93 +0,0 @@
-                                IOEMU stubdom
-                                =============
-
-  This boosts HVM performance by putting ioemu in its own lightweight domain.
-
-General Configuration
-=====================
-
-Due to a race between the creation of the IOEMU stubdomain itself and allocation
-of video memory for the HVM domain, you need to avoid the need for ballooning,
-by using the hypervisor dom0_mem= option for instance.
-
-Using with XL
--------------
-
-The enable IOEMU stub domains set the following in your domain
-config:
-
-    device_model_stubdomain_override = 1
-
-See xl.cfg(5) for more details of the xl domain configuration syntax
-and http://wiki.xen.org/wiki/Device_Model_Stub_Domains for more
-information on device model stub domains
-
-
-                                   PV-GRUB
-                                   =======
-
-  This replaces pygrub to boot domU images safely: it runs the regular grub
-inside the created domain itself and uses regular domU facilities to read the
-disk / fetch files from network etc. ; it eventually loads the PV kernel and
-chain-boots it.
-  
-Configuration
-=============
-
-In your PV config,
-
-- use pv-grub.gz as kernel:
-
-kernel = "pv-grub.gz"
-
-- set the path to menu.lst, as seen from the domU, in extra:
-
-extra = "(hd0,0)/boot/grub/menu.lst"
-
-or you can provide the content of a menu.lst stored in dom0 by passing it as a
-ramdisk:
-
-ramdisk = "/boot/domU-1-menu.lst"
-
-or you can also use a tftp path (dhcp will be automatically performed):
-
-extra = "(nd)/somepath/menu.lst"
-
-or you can set it in option 150 of your dhcp server and leave extra and ramdisk
-empty (dhcp will be automatically performed)
-
-Limitations
-===========
-
-- You can not boot a 64bit kernel with a 32bit-compiled PV-GRUB and vice-versa.
-To cross-compile a 32bit PV-GRUB,
-
-export XEN_TARGET_ARCH=x86_32
-
-- bootsplash is supported, but the ioemu backend does not yet support restart
-for use by the booted kernel.
-
-- PV-GRUB doesn't support virtualized partitions. For instance:
-
-disk = [ 'phy:hda7,hda7,w' ]
-
-will be seen by PV-GRUB as (hd0), not (hd0,6), since GRUB will not see any
-partition table.
-
-
-                                Your own stubdom
-                                ================
-
-  By running
-
-cd stubdom/
-make c-stubdom
-
-  or
-
-cd stubdom/
-make caml-stubdom
-
-  you can compile examples of C or caml stub domain kernels.  You can use these
-and the relevant Makefile rules as basis to build your own stub domain kernel.
-Available libraries are libc, libxc, libxs, zlib and libpci.
diff --git a/tools/xenmon/Makefile b/tools/xenmon/Makefile
index 5095682..20ea100 100644
--- a/tools/xenmon/Makefile
+++ b/tools/xenmon/Makefile
@@ -31,8 +31,6 @@ install: build
        $(INSTALL_PROG) xenbaked $(DESTDIR)$(sbindir)/xenbaked
        $(INSTALL_PROG) xentrace_setmask  $(DESTDIR)$(sbindir)/xentrace_setmask
        $(INSTALL_PROG) xenmon.py  $(DESTDIR)$(sbindir)/xenmon.py
-       $(INSTALL_DIR) $(DESTDIR)$(docdir)
-       $(INSTALL_DATA) README $(DESTDIR)$(docdir)/README.xenmon
 
 .PHONY: clean
 clean:
diff --git a/tools/xenmon/README b/tools/xenmon/README
deleted file mode 100644
index 3393f5b..0000000
--- a/tools/xenmon/README
+++ /dev/null
@@ -1,114 +0,0 @@
-Xen Performance Monitor
------------------------
-
-The xenmon tools make use of the existing xen tracing feature to provide fine
-grained reporting of various domain related metrics. It should be stressed that
-the xenmon.py script included here is just an example of the data that may be
-displayed. The xenbake demon keeps a large amount of history in a shared memory
-area that may be accessed by tools such as xenmon.
-
-For each domain, xenmon reports various metrics. One part of the display is a
-group of metrics that have been accumulated over the last second, while another
-part of the display shows data measured over 10 seconds. Other measurement
-intervals are possible, but we have just chosen 1s and 10s as an example.
-
-
-Execution Count
----------------
- o The number of times that a domain was scheduled to run (ie, dispatched) over
- the measurement interval
-
-
-CPU usage
----------
- o Total time used over the measurement interval
- o Usage expressed as a percentage of the measurement interval
- o Average cpu time used during each execution of the domain
-
-
-Waiting time
-------------
-This is how much time the domain spent waiting to run, or put another way, the
-amount of time the domain spent in the "runnable" state (or on the run queue)
-but not actually running. Xenmon displays:
-
- o Total time waiting over the measurement interval
- o Wait time expressed as a percentage of the measurement interval
- o Average waiting time for each execution of the domain
-
-Blocked time
-------------
-This is how much time the domain spent blocked (or sleeping); Put another way,
-the amount of time the domain spent not needing/wanting the cpu because it was
-waiting for some event (ie, I/O). Xenmon reports:
-
- o Total time blocked over the measurement interval
- o Blocked time expressed as a percentage of the measurement interval
- o Blocked time per I/O (see I/O count below)
-
-Allocation time
----------------
-This is how much cpu time was allocated to the domain by the scheduler; This is
-distinct from cpu usage since the "time slice" given to a domain is frequently
-cut short for one reason or another, ie, the domain requests I/O and blocks.
-Xenmon reports:
-
- o Average allocation time per execution (ie, time slice)
- o Min and Max allocation times
-
-I/O Count
----------
-This is a rough measure of I/O requested by the domain. The number of page
-exchanges (or page "flips") between the domain and dom0 are counted. The
-number of pages exchanged may not accurately reflect the number of bytes
-transferred to/from a domain due to partial pages being used by the network
-protocols, etc. But it does give a good sense of the magnitude of I/O being
-requested by a domain. Xenmon reports:
-
- o Total number of page exchanges during the measurement interval
- o Average number of page exchanges per execution of the domain
-
-
-Usage Notes and issues
-----------------------
- - Start xenmon by simply running xenmon.py; The xenbake demon is started and
-   stopped automatically by xenmon.
- - To see the various options for xenmon, run xenmon -h. Ditto for xenbaked.
- - xenmon also has an option (-n) to output log data to a file instead of the
-   curses interface.
- - NDOMAINS is defined to be 32, but can be changed by recompiling xenbaked
- - Xenmon.py appears to create 1-2% cpu overhead; Part of this is just the
-   overhead of the python interpreter. Part of it may be the number of trace
-   records being generated. The number of trace records generated can be
-   limited by setting the trace mask (with a dom0 Op), which controls which
-   events cause a trace record to be emitted.
- - To exit xenmon, type 'q'
- - To cycle the display to other physical cpu's, type 'c'
- - The first time xenmon is run, it attempts to allocate xen trace buffers
-   using a default size. If you wish to use a non-default value for the
-   trace buffer size, run the 'setsize' program (located in tools/xentrace)
-   and specify the number of memory pages as a parameter. The default is 20.
- - Not well tested with domains using more than 1 virtual cpu
- - If you create a lot of domains, or repeatedly kill a domain and restart it,
-   and the domain id's get to be bigger than NDOMAINS, then xenmon behaves badly.
-   This is a bug that is due to xenbaked's treatment of domain id's vs. domain
-   indices in a data array. Will be fixed in a future release; Workaround:
-   Increase NDOMAINS in xenbaked and rebuild.
-
-Future Work
------------
-o RPC interface to allow external entities to programmatically access processed data
-o I/O Count batching to reduce number of trace records generated
-
-Case Study
-----------
-We have written a case study which demonstrates some of the usefulness of
-this tool and the metrics reported. It is available at:
-http://www.hpl.hp.com/techreports/2005/HPL-2005-187.html
-
-Authors
--------
-Diwaker Gupta   <diwaker.gupta@xxxxxx>
-Rob Gardner     <rob.gardner@xxxxxx>
-Lucy Cherkasova <lucy.cherkasova.hp.com>
-
--
generated by git-patchbot for /home/xen/git/xen.git#master

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

