[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH DOCDAY] introduce an xl man page in pod format



On Fri, 28 Oct 2011, Ian Campbell wrote:
> On Thu, 2011-10-27 at 17:19 +0100, Stefano Stabellini wrote:
> > This is the initial version of an xl man page, based on the old xm man
> > page.
> > Almost every command implemented in xl should be present, a notable
> > exception are the tmem commands that are currently missing.
> 
> I think it's worth enumerating all the commands, even with a TBD, since
> it marks what is missing.

the only ones that are missing are the tmem commands so I am going to
add them

> > Further improvements and clarifications to this man page are very welcome.
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> >
> > diff -r 39aa9b2441da docs/man/xl.pod.1
> > --- /dev/null   Thu Jan 01 00:00:00 1970 +0000
> > +++ b/docs/man/xl.pod.1 Thu Oct 27 15:59:03 2011 +0000
> > @@ -0,0 +1,805 @@
> > +=head1 NAME
> > +
> > +XL - Xen management tool, based on LibXenlight
> > +
> > +=head1 SYNOPSIS
> > +
> > +B<xl> I<subcommand> [I<args>]
> 
> B<xl> [I<global-args>] I<subcommand> [I<args>]
> 
> The interesting global-args are -v (verbose, can be used repeatedly) and
> -N (dry-run).

OK

> > +
> > +=head1 DESCRIPTION
> > +
> > +The B<xl> program is the new tool for managing Xen guest
> > +domains. The program can be used to create, pause, and shutdown
> > +domains. It can also be used to list current domains, enable or pin
> > +VCPUs, and attach or detach virtual block devices.
> > +The old B<xm> tool is deprecated and should not be used.
> > +
> > +The basic structure of every B<xl> command is almost always:
> > +
> > +=over 2
> > +
> > +B<xl> I<subcommand> [I<OPTIONS>] I<domain-id>
> > +
> > +=back
> > +
> > +Where I<subcommand> is one of the subcommands listed below, I<domain-id>
> > +is the numeric domain id, or the domain name (which will be internally
> > +translated to domain id), and I<OPTIONS> are subcommand specific
> > +options.  There are a few exceptions to this rule in the cases where
> > +the subcommand in question acts on all domains, the entire machine,
> > +or directly on the Xen hypervisor.  Those exceptions will be clear for
> > +each of those subcommands.
> > +
> > +=head1 NOTES
> > +
> > +Most B<xl> operations rely upon B<xenstored> and B<xenconsoled>: make
> > +sure you start the script B</etc/init.d/xencommons> at boot time to
> > +initialize all the daemons needed by B<xl>.
> > +
> > +In the most common network configuration, you need to setup a bridge in 
> > dom0
> > +named B<xenbr0> in order to have a working network in the guest domains.
> > +Please refer to the documentation of your Linux distribution to know how to
> > +setup the bridge.
> > +
> > +Most B<xl> commands require root privileges to run due to the
> > +communications channels used to talk to the hypervisor.  Running as
> > +non root will return an error.
> > +
> > +=head1 DOMAIN SUBCOMMANDS
> > +
> > +The following subcommands manipulate domains directly.  As stated
> > +previously, most commands take I<domain-id> as the first parameter.
> > +
> > +=over 4
> > +
> > +=item B<create> [I<OPTIONS>] I<configfile>
> 
> The I<configfile> is optional and if it present it must come before the
> options.
> In addition to the normal --option stuff you can also pass key=value to
> provide options as if they were written in a configuration file, these
> override whatever is in the config file.

OK

> While checking this I noticed that before processing arguments
> main_create() does:
> 
>     if (argv[1] && argv[1][0] != '-' && !strchr(argv[1], '=')) {
>         filename = argv[1];
>         argc--; argv++;
>     }
> 
> that use of argv[1] without checking argc is a little dubious (ok if
> argc<1 then argc==0 and therefore argv[argc+1]==NULL, but still...).
> 
> > +
> > +The create subcommand requires a config file: see L<xldomain.cfg> for
> > +full details of that file format and possible options.
> > +
> > +I<configfile> can either be an absolute path to a file, or a relative
> > +path to a file located in /etc/xen.
> 
> This isn't actually true for xl. Arguably that's a bug in xl rather than
> this doc but I seem to recall that someone had a specific reason for not
> doing this.

OK, I am going to update the doc

> > +
> > +Create will return B<as soon> as the domain is started.  This B<does
> > +not> mean the guest OS in the domain has actually booted, or is
> > +available for input.
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item B<-q>, B<--quiet>
> > +
> > +No console output.
> > +
> > +=item B<-f=FILE>, B<--defconfig=FILE>
> > +
> > +Use the given configuration file.
> > +
> > +=item B<-n>, B<--dryrun>
> > +
> > +Dry run - prints the resulting configuration in SXP but does not create
> > +the domain.
> > +
> > +=item B<-p>
> > +
> > +Leave the domain paused after it is created.
> > +
> > +=item B<-c>
> > +
> > +Attach console to the domain as soon as it has started.  This is
> > +useful for determining issues with crashing domains.
> 
> ... and just as a general convenience since you often want to watch the
> domain boot.

OK

> > +
> > +=back
> > +
> > +B<EXAMPLES>
> > +
> > +=over 4
> > +
> > +=item I<with config file>
> > +
> > +  xl create DebianLenny
> > +
> > +This creates a domain with the file /etc/xen/DebianLenny, and returns as
> > +soon as it is run.
> > +
> > +=back
> > +
> > +=item B<console> I<domain-id>
> > +
> > +Attach to domain I<domain-id>'s console.  If you've set up your domains to
> > +have a traditional log in console this will look much like a normal
> > +text log in screen.
> > +
> > +Use the key combination Ctrl+] to detach the domain console.
> 
> This takes -t [pv|serial] and -n (num) options.

I'll add those options


> > +
> > +=item B<vncviewer> [I<OPTIONS>] I<domain-id>
> > +
> > +Attach to domain's VNC server, forking a vncviewer process.
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item I<--autopass>
> > +
> > +Pass VNC password to vncviewer via stdin.
> 
> What is the behaviour if you don't do this?

I am not sure. Maybe Ian knows.


> Are the sub-commands intended to be in some sort of order. In general
> they seem to be alphabetical but in that case vncviewer does not belong
> here.

I'll order them alphabetically.


> [...]
> > +=item B<list> [I<OPTIONS>] [I<domain-id> ...]
> > +
> > +Prints information about one or more domains.  If no domains are
> > +specified it prints out information about all domains.
> > +
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item B<-l>, B<--long>
> > +
> > +The output for B<xl list> is not the table view shown below, but
> > +instead presents the data in SXP compatible format.
> > +
> > +=item B<-Z>, B<--context>
> > +Also prints the security labels.
> > +
> > +=item B<-v>, B<--verbose>
> > +
> > +Also prints the domain UUIDs, the shutdown reason and security labels.
> > +
> > +=back
> > +
> > +B<EXAMPLE>
> > +
> > +An example format for the list is as follows:
> > +
> > +    Name                                        ID   Mem VCPUs      State  
> >  Time(s)
> > +    Domain-0                                     0   750     4     r-----  
> >  11794.3
> > +    win                                          1  1019     1     r-----  
> >      0.3
> > +    linux                                        2  2048     2     r-----  
> >   5624.2
> > +
> > +Name is the name of the domain.  ID the numeric domain id.  Mem is the
> > +desired amount of memory to allocate to the domain (although it may
> > +not be the currently allocated amount).  VCPUs is the number of
> > +virtual CPUs allocated to the domain.  State is the run state (see
> > +below).  Time is the total run time of the domain as accounted for by
> > +Xen.
> > +
> > +B<STATES>
> > +
> > +The State field lists 6 states for a Xen domain, and which ones the
> > +current domain is in.
> > +
> > +=over 4
> > +
> > +=item B<r - running>
> > +
> > +The domain is currently running on a CPU.
> > +
> > +=item B<b - blocked>
> > +
> > +The domain is blocked, and not running or runnable.  This can be caused
> > +because the domain is waiting on IO (a traditional wait state) or has
> > +gone to sleep because there was nothing else for it to do.
> > +
> > +=item B<p - paused>
> > +
> > +The domain has been paused, usually occurring through the administrator
> > +running B<xl pause>.  When in a paused state the domain will still
> > +consume allocated resources like memory, but will not be eligible for
> > +scheduling by the Xen hypervisor.
> > +
> > +=item B<s - shutdown>
> > +
> > +FIXME: Why would you ever see this state?
> 
> This is XEN_DOMINF_shutdown which just says "/* The guest OS has shut
> down. */". It is set in response to the guest calling SCHEDOP_shutdown.
> I think it corresponds to the period between the guest shutting down and
> the toolstack noticing and beginning to tear it down (when it moves to
> dying).

OK

> > +=item B<c - crashed>
> > +
> > +The domain has crashed, which is always a violent ending.  Usually
> > +this state can only occur if the domain has been configured not to
> > +restart on crash.  See L<xldomain.cfg> for more info.
> > +
> > +=item B<d - dying>
> > +
> > +The domain is in process of dying, but hasn't completely shutdown or
> > +crashed.
> > +
> > +FIXME: Is this right?
> 
> I think so. This is XEN_DOMINF_dying which says "/* Domain is scheduled
> to die. */"

OK

> > +
> > +=item B<migrate> [I<OPTIONS>] I<domain-id> I<host>
> > +
> > +Migrate a domain to another host machine. By default B<xl> relies on ssh 
> > as a
> > +transport mechanism between the two hosts.
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item B<-s> I<sshcommand>
> > +
> > +Use <sshcommand> instead of ssh.  String will be passed to sh. If empty, 
> > run
> > +<host> instead of ssh <host> xl migrate-receive [-d -e].
> > +
> > +=item B<-e>
> > +
> > +On the new host, do not wait in the background (on <host>) for the death 
> > of the
> > +domain.
> 
> Would be useful to reference the equivalent option to "xl create" here
> just to clarify that they mean the same.

Yes, good idea.

> > +=item B<reboot> [I<OPTIONS>] I<domain-id>
> > +
> > +Reboot a domain.  This acts just as if the domain had the B<reboot>
> > +command run from the console.
> 
> This relies on PV drivers, I think.

yes, I'll add that

> Not all guests have the option of typing "reboot" on the console but I
> suppose it is clear enough what you mean.
> 
> >   The command returns as soon as it has
> > +executed the reboot action, which may be significantly before the
> > +domain actually reboots.
> > +
> > +The behavior of what happens to a domain when it reboots is set by the
> > +B<on_reboot> parameter of the xldomain.cfg file when the domain was
> > +created.
> > +
> > +=item B<restore> [I<OPTIONS>] [I<ConfigFile>] I<CheckpointFile>
> > +
> > +Build a domain from an B<xl save> state file.  See B<save> for more info.
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item B<-p>
> > +
> > +Do not unpause domain after restoring it.
> > +
> > +=item B<-e>
> > +
> > +Do not wait in the background for the death of the domain on the new host.
> 
> Reference xl create?
> 

yep

> > +
> > +=item B<-d>
> > +
> > +Enable debug messages.
> > +
> > +=back
> > +
> > +=item B<save> [I<OPTIONS>] I<domain-id> I<CheckpointFile> [I<ConfigFile>]
> > +
> > +Saves a running domain to a state file so that it can be restored
> > +later.  Once saved, the domain will no longer be running on the
> > +system, unless the -c option is used.
> > +B<xl restore> restores from this checkpoint file.
> > +Passing a config file argument allows the user to manually select the VM 
> > config
> > +file used to create the domain.
> > +
> > +
> > +=over 4
> > +
> > +=item B<-c>
> > +
> > +Leave domain running after creating the snapshot.
> > +
> > +=back
> > +
> > +
> > +=item B<shutdown> [I<OPTIONS>] I<domain-id>
> > +
> > +Gracefully shuts down a domain.  This coordinates with the domain OS
> > +to perform graceful shutdown, so there is no guarantee that it will
> > +succeed, and may take a variable length of time depending on what
> > +services must be shutdown in the domain.  The command returns
> > +immediately after signally the domain unless that B<-w> flag is used.
> 
> Does this rely on pv drivers or does it inject ACPI events etc on HVM?

Yes, it requires PV drivers, I'll add that.

> > +
> > +The behavior of what happens to a domain when it reboots is set by the
>        behaviour ?
> 
> > +B<on_shutdown> parameter of the xldomain.cfg file when the domain was
> > +created.
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item B<-w>
> > +
> > +Wait for the domain to complete shutdown before returning.
> > +
> > +=back
> > +
> > +=item B<sysrq> I<domain-id> I<letter>
> > +
> > +Send a I<Magic System Request> signal to the domain.  For more
> > +information on available magic sys req operations, see sysrq.txt in
> > +your Linux Kernel sources.
> 
> It would be nice to word this in a more generic fashion and point out
> that the specific implementation on Linux behaves like sysrq. Other
> guests might do other things?
> 
> Relies on PV drivers.

OK

> > [...]
> > +
> > +=item B<vcpu-set> I<domain-id> I<vcpu-count>
> > +
> > +Enables the I<vcpu-count> virtual CPUs for the domain in question.
> > +Like mem-set, this command can only allocate up to the maximum virtual
> > +CPU count configured at boot for the domain.
> > +
> > +If the I<vcpu-count> is smaller than the current number of active
> > +VCPUs, the highest number VCPUs will be hotplug removed.  This may be
> > +important for pinning purposes.
> > +
> > +Attempting to set the VCPUs to a number larger than the initially
> > +configured VCPU count is an error.  Trying to set VCPUs to < 1 will be
> > +quietly ignored.
> > +
> > +Because this operation requires cooperation from the domain operating
> > +system, there is no guarantee that it will succeed.  This command will
> > +not work with a full virt domain.
> 
> I thought we supported some VCPU hotplug for HVM (using ACPI and such)
> these days?

Yes you are right, I'll remove it.


> [...]
> > +=item B<button-press> I<domain-id> I<button>
> > +
> > +Indicate an ACPI button press to the domain. I<button> is may be 'power' or
> > +'sleep'.
> 
> HVM only?

yes

> > +
> > +=item B<trigger> I<domain-id> I<nmi|reset|init|power|sleep> [I<VCPU>]
> > +
> > +Send a trigger to a domain, where the trigger can be: nmi, reset, init, 
> > power
> > +or sleep.  Optionally a specific vcpu number can be passed as an argument.
> 
> HVM only? nmi might work for PV, not sure about the rest.

I think the current implementation is HVM only

> > +=item B<getenforce>
> > +
> > +Returns the current enforcing mode of the Flask Xen security module.
> > +
> > +=item B<setenforce> I<1|0|Enforcing|Permissive>
> > +
> > +Sets the current enforcing mode of the Flask Xen security module
> > +
> > +=item B<loadpolicy> I<policyfile>
> > +
> > +Loads a new policy int the Flask Xen security module.
> 
> I suppose flask is something which needs to go onto the "to be
> documented" list such that we can reference it from here.

I am going to add a TO BE DOCUMENTED section at the end


> > +=back
> > +
> > +=head1 XEN HOST SUBCOMMANDS
> > +
> > +=over 4
> > +
> > +=item B<debug-keys> I<keys>
> > +
> > +Send debug I<keys> to Xen.
> 
> The same as pressing the Xen "conswitch" (Ctrl-A by default) three times
> and then pressing "keys".

I'll add that

> > +
> > +=item B<dmesg> [B<-c>]
> > +
> > +Reads the Xen message buffer, similar to dmesg on a Linux system.  The
>                                             dmesg(1)   ^Unix or ;-)
> 
> > +buffer contains informational, warning, and error messages created
> > +during Xen's boot process.  If you are having problems with Xen, this
> > +is one of the first places to look as part of problem determination.
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item B<-c>, B<--clear>
> > +
> > +Clears Xen's message buffer.
> > +
> > +=back
> > +
> > +=item B<info> [B<-n>, B<--numa>]
> > +
> > +Print information about the Xen host in I<name : value> format.  When
> > +reporting a Xen bug, please provide this information as part of the
> > +bug report.
> 
> I'm not sure this is useful people reporting bugs will look for
> information on reporting bugs (which should include this info) rather
> than scanning the xl man page for options which say "please include.."
> 
> I have added the need for this to
> http://wiki.xen.org/xenwiki/ReportingBugs

OK

> > +
> > +Sample output looks as follows (lines wrapped manually to make the man
> > +page more readable):
> 
> > +
> > + host                   : talon
> > + release                : 2.6.12.6-xen0
> 
> Heh. Perhaps a more up to date example if one is needed at all?

Good point

> > + version                : #1 Mon Nov 14 14:26:26 EST 2005
> > + machine                : i686
> > + nr_cpus                : 2
> > + nr_nodes               : 1
> > + cores_per_socket       : 1
> > + threads_per_core       : 1
> > + cpu_mhz                : 696
> > + hw_caps                : 0383fbff:00000000:00000000:00000040
> > + total_memory           : 767
> > + free_memory            : 37
> > + xen_major              : 3
> > + xen_minor              : 0
> > + xen_extra              : -devel
> > + xen_caps               : xen-3.0-x86_32
> > + xen_scheduler          : credit
> > + xen_pagesize           : 4096
> > + platform_params        : virt_start=0xfc000000
> > + xen_changeset          : Mon Nov 14 18:13:38 2005 +0100
> > +                          7793:090e44133d40
> > + cc_compiler            : gcc version 3.4.3 (Mandrakelinux
> > +                          10.2 3.4.3-7mdk)
> > + cc_compile_by          : sdague
> > + cc_compile_domain      : (none)
> > + cc_compile_date        : Mon Nov 14 14:16:48 EST 2005
> > + xend_config_format     : 4
> > +
> > +B<FIELDS>
> > +
> > +Not all fields will be explained here, but some of the less obvious
> > +ones deserve explanation:
> > +
> > +=over 4
> > +
> > +=item B<hw_caps>
> > +
> > +A vector showing what hardware capabilities are supported by your
> > +processor.  This is equivalent to, though more cryptic, the flags
> > +field in /proc/cpuinfo on a normal Linux machine.
> 
> Does this correspond to some cpuid output somewhere? That might be a
> good thing to reference.
> 
> (checks, hmm, it all very processor specific)

Yes, they do. I'll add a reference to that.

> > +=back
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item B<-n>, B<--numa>
> > +
> > +List host NUMA topology information
> > +
> > +=back
> [...]
> 
> > +=item B<pci-list-assignable-devices>
> > +
> > +List all the assignable PCI devices.
> 
> Perhaps add:
>         That is, though devices in the system which are configured to be
>         available for passthrough and are bound to a suitable PCI
>         backend driver in domain 0 rather than a real driver.

OK

> > +=head1 CPUPOOLS COMMANDS
> > +
> > +Xen can group the physical cpus of a server in cpu-pools. Each physical 
> > CPU is
> > +assigned at most to one cpu-pool. Domains are each restricted to a single
> > +cpu-pool. Scheduling does not cross cpu-pool boundaries, so each cpu-pool 
> > has
> > +an own scheduler.
> > +Physical cpus and domains can be moved from one pool to another only by an
> > +explicit command.
> > +
> > +=over 4
> > +
> > +=item B<cpupool-create> [I<OPTIONS>] I<ConfigFile>
> > +
> > +Create a cpu pool based an I<ConfigFile>.
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item B<-f=FILE>, B<--defconfig=FILE>
> > +
> > +Use the given configuration file.
> > +
> > +=item B<-n>, B<--dryrun>
> > +
> > +Dry run - prints the resulting configuration.
> 
> Is this deprecated in favour of global -N option? I think it should be.

Yeah, there is no point since we have a global option.

> > +
> > +=back
> > +
> > +=item B<cpupool-list> [I<-c|--cpus> I<cpu-pool>]
> > +
> > +List CPU pools on the host.
> > +If I<-c> is specified, B<xl> prints a list of CPUs used by I<cpu-pool>.
> 
> Is cpu-pool a name or a number, or both? (this info would be useful in
> the intro to the section I suppose).

I think it is a name, but I would need a confirmation from Juergen.

> > +
> > +=item B<cpupool-destroy> I<cpu-pool>
> > +
> > +Deactivates a cpu pool.
> > +
> > +=item B<cpupool-rename> I<cpu-pool> <newname>
> > +
> > +Renames a cpu pool to I<newname>.
> > +
> > +=item B<cpupool-cpu-add> I<cpu-pool> I<cpu-nr|node-nr>
> > +
> > +Adds a cpu or a numa node to a cpu pool.
> > +
> > +=item B<cpupool-cpu-remove> I<cpu-nr|node-nr>
> > +
> > +Removes a cpu or a numa node from a cpu pool.
> > +
> > +=item B<cpupool-migrate> I<domain-id> I<cpu-pool>
> > +
> > +Moves a domain into a cpu pool.
> > +
> > +=item B<cpupool-numa-split>
> > +
> > +Splits up the machine into one cpu pool per numa node.
> > +
> > +=back
> > +
> > +=head1 VIRTUAL DEVICE COMMANDS
> > +
> > +Most virtual devices can be added and removed while guests are
> > +running.
> 
> ... assuming the necessary support exists in the guest.
> 

OK

> >   The effect to the guest OS is much the same as any hotplug
> > +event.
> > +
> > +=head2 BLOCK DEVICES
> > +
> > +=over 4
> > +
> > +=item B<block-attach> I<domain-id> I<disc-spec-component(s)> ...
> > +
> > +Create a new virtual block device.  This will trigger a hotplug event
> > +for the guest.
> 
> Should add a reference to the docs/misc/xl-disk-configuration.txt doc to
> your SEE ALSO section.

OK

> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item I<domain-id>
> > +
> > +The domain id of the guest domain that the device will be attached to.
> > +
> > +=item I<disc-spec-component>
> > +
> > +A disc specification in the same format used for the B<disk> variable in
> > +the domain config file. See L<xldomain.cfg>.
> > +
> > +=back
> > +
> > +=item B<block-detach> I<domain-id> I<devid> [B<--force>]
> > +
> > +Detach a domain's virtual block device. I<devid> may be the symbolic
> > +name or the numeric device id given to the device by domain 0.  You
> > +will need to run B<xl block-list> to determine that number.
> > +
> > +Detaching the device requires the cooperation of the domain.  If the
> > +domain fails to release the device (perhaps because the domain is hung
> > +or is still using the device), the detach will fail.  The B<--force>
> > +parameter will forcefully detach the device, but may cause IO errors
> > +in the domain.
> > +
> > +=item B<block-list> I<domain-id>
> > +
> > +List virtual block devices for a domain.
> > +
> > +=item B<cd-insert> I<domain-id> I<VirtualDevice> I<be-dev>
> > +
> > +Insert a cdrom into a guest domain's cd drive. Only works with HVM domains.
> > +
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item I<VirtualDevice>
> > +
> > +How the device should be presented to the guest domain; for example 
> > /dev/hdc.
> > +
> > +=item I<be-dev>
> > +
> > +the device in the backend domain (usually domain 0) to be exported; it can 
> > be a
> > +path to a file (file://path/to/file.iso). See B<disk> in L<xldomain.cfg> 
> > for the
> > +details.
> > +
> > +=back
> > +
> > +=item B<cd-eject> I<domain-id> I<VirtualDevice>
> > +
> > +Eject a cdrom from a guest's cd drive. Only works with HVM domains.
> > +I<VirtualDevice> is the cdrom device in the guest to eject.
> > +
> > +=back
> > +
> > +=head2 NETWORK DEVICES
> > +
> > +=over 4
> > +
> > +=item B<network-attach> I<domain-id> I<network-device>
> > +
> > +Creates a new network device in the domain specified by I<domain-id>.
> > +I<network-device> describes the device to attach, using the same format as 
> > the
> > +B<vif> string in the domain config file. See L<xldomain.cfg> for the
> > +description.
> 
> I sent out a patch to add docs/misc/xl-network-configuration.markdown as
> well.

I'll add a reference to it

> > +
> > +=item B<network-detach> I<domain-id> I<devid|mac>
> > +
> > +Removes the network device from the domain specified by I<domain-id>.
> > +I<devid> is the virtual interface device number within the domain
> > +(i.e. the 3 in vif22.3). Alternatively the I<mac> address can be used to
> > +select the virtual interface to detach.
> > +
> > +=item B<network-list> I<domain-id>
> > +
> > +List virtual network interfaces for a domain.
> > +
> > +=back
> > +
> > +=head2 PCI PASS-THROUGH
> > +
> > +=over 4
> > +
> > +=item B<pci-attach> I<domain-id> I<BDF>
> > +
> > +Hot-plug a new pass-through pci device to the specified domain.
> > +B<BDF> is the PCI Bus/Device/Function of the physical device to 
> > pass-through.
> > +
> > +=item B<pci-detach> [I<-f>] I<domain-id> I<BDF>
> > +
> > +Hot-unplug a previously assigned pci device from a domain. B<BDF> is the 
> > PCI
> > +Bus/Device/Function of the physical device to be removed from the guest 
> > domain.
> > +
> > +If B<-f> is specified, B<xl> is going to forcefully remove the device even
> > +without guest's collaboration.
> > +
> > +=item B<pci-list> I<domain-id>
> > +
> > +List pass-through pci devices for a domain.
> > +
> > +=back
> > +
> > +=head1 SEE ALSO
> > +
> > +B<xldomain.cfg>(5), B<xentop>(1)
> > +
> > +=head1 AUTHOR
> > +
> > +  Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > +  Vincent Hanquez <vincent.hanquez@xxxxxxxxxxxxx>
> > +  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > +  Ian Campbell <Ian.Campbell@xxxxxxxxxx>
> 
> This list seems so incomplete/unlikely to be updated that it may as well
> not be included. (also I think AUTHOR in a man page refers to the author
> of the page, not the authors of the software)

OK, I'll remove it

> > +=head1 BUGS
> > +
> > +Send bugs to xen-devel@xxxxxxxxxxxxxxxxxxxx
> 
> Reference http://wiki.xen.org/xenwiki/ReportingBugs
> 
 
Sure

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.