
Re: [Xen-API] Re: [Xen-devel] Generic PV Guests on XCP?

[re-adding xen-api so it's in the archives]

Hi Phil,

The toolstacks are separate; xm works at the individual VM level and xe works 
at the resource pool level.  When you SSH into an XCP box, you are in that 
host's dom0, and xe commands work at the level of the cluster of hosts which 
are pooled together.

If you're not too familiar with Xen, it would be a lot easier to use XCP;
it takes care of things for you, like mapping storage and networking into the
guest, that often need manual configuration and/or kernel recompilation with
the Python toolstack. Some features, like high availability and the XML-RPC
XenAPI, aren't implemented fully in the Python version at all, since it is
single-host only.

The only thing I miss in XCP is an equivalent to 'xm console -c', which is
just laziness since the small script below is an adequate replacement.  If I
remember correctly, the automated Xen regression suite also uses the same
technique to stress-test Linux guests and capture the results, so it's fine to
depend on.
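
For reference, a minimal sketch of such a console helper (not the exact
script; it assumes an XCP dom0, the usual xenconsole path, and a hypothetical
helper name `running_domid` -- paths may differ by release):

```shell
#!/bin/sh
# Sketch of an 'xm console' replacement for XCP dom0.
# Assumptions: run as root in dom0; xenconsole lives under /usr/lib/xen/bin.

# Print the running domain ID for a VM uuid; fail if the VM is halted.
# ('dom-id' is a standard xe VM parameter and reads -1 when the VM is down.)
running_domid() {
    _d=$(xe vm-list uuid="$1" params=dom-id --minimal) || return 1
    [ -n "$_d" ] && [ "$_d" != "-1" ] || return 1
    echo "$_d"
}

# Attach to the guest's text console, e.g.: console_attach <vm-uuid>
console_attach() {
    _domid=$(running_domid "$1") || {
        echo "VM $1 is not running" >&2
        return 1
    }
    exec /usr/lib/xen/bin/xenconsole "$_domid"
}
```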


On 25 Mar 2010, at 21:10, Phil Winterfield (winterfi) wrote:

> Thanks for the info, Anil.  I am interested in your suggestion to do
> mini-os types of work on dom0 and the lower level of the python tool
> chain, but I am not sure how to go about doing that.  When I ssh onto
> the xcp box, am I really running on dom0 at that point? I presume so,
> but how do I get access to the old xm python interface?
> Thanks
> Phil
>> -----Original Message-----
>> From: Anil Madhavapeddy [mailto:anil@xxxxxxxxxx]
>> Sent: Wednesday, March 24, 2010 1:50 PM
>> To: Ian Campbell
>> Cc: Phil Winterfield (winterfi); xen-devel@xxxxxxxxxxxxxxxxxxx;
>> DavidScott@xxxxxxxxxxxxx; xen-api@xxxxxxxxxxxxxxxxxxx; Don Banks
>> (donbanks); IanCampbell@xxxxxxxxxxxxx; David.Cottingham@xxxxxxxxxxxxx
>> Subject: Re: [Xen-API] Re: [Xen-devel] Generic PV Guests on XCP?
>> On 24 Mar 2010, at 19:45, Ian Campbell wrote:
>>> I don't think you want to start from a template here, since none of
>>> the existing ones meet your needs/usecase. You can create a basic VM
>>> instance with "xe vm-create" and then configure that however you need
>>> by modifying the various fields on the VM object. You may choose to
>>> convert it to a template for convenience of instantiating multiple
>>> copies (using "xe vm-install").
>> I find it a lot easier to clone the "Other Install Media" template and
>> just set the 3 fields to convert it into a PV template (clear
>> HVM-boot-policy, set PV-kernel to the MiniOS file in dom0, and PV-args
>> if you need it).  I think my old blog entry on Ubuntu HVM->PV has more
>> of the gory details, but most of it isn't necessary for MiniOS:
>> http://community.citrix.com/x/4YINAg
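
Those three field changes can be sketched as xe commands (the template label
and kernel path are placeholders to adjust, and `make_pv_template` is a
hypothetical helper name, not anything shipped with XCP):

```shell
#!/bin/sh
# Sketch: clone the HVM "Other Install Media" template and set the three
# fields that turn it into a PV template. Adjust the template label for your
# XCP release and point PV-kernel at your MiniOS image in dom0.

make_pv_template() {   # usage: make_pv_template <new-label> <kernel-path> [args]
    _t=$(xe template-list name-label="Other install media" --minimal) || return 1
    _vm=$(xe vm-clone uuid="$_t" new-name-label="$1") || return 1
    xe vm-param-set uuid="$_vm" HVM-boot-policy=""   # empty => boot PV, not HVM
    xe vm-param-set uuid="$_vm" PV-kernel="$2"
    if [ -n "$3" ]; then
        xe vm-param-set uuid="$_vm" PV-args="$3"     # kernel command line
    fi
    echo "$_vm"
}
```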
>>> The consoles are exported via XenAPI in VNC format. You need a
>>> XenAPI client which is capable of attaching to these. Looks like
>>> http://www.xvpsource.org/ is the place to look?
>>> Personally I usually use /opt/xensource/debug/vncproxy (copied to my
>>> workstation) to create a local socket on my workstation which is
>>> proxied to the XCP host and then run vncviewer locally against that
>>> proxy.
>> To access the raw serial console by text (the equivalent of 'xm create
>> -c'), you first need to disable the VNC proxy from spawning (set the
>> VM's other-config:disable_pv_vnc field to something), start the VM
>> paused, connect to it directly using xenconsole from dom0, and then
>> unpause the VM.
>> I use this small script in dom0 to automate this for a given VM uuid:
>> http://github.com/avsm/mirage/blob/master/scripts/run_minios
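
The workflow described there can be sketched roughly as follows (this is a
reconstruction, not the linked script; `start_with_console` is a hypothetical
helper name, and the `paused=true` flag and xenconsole path may differ by
release):

```shell
#!/bin/sh
# Sketch: get a raw text console on a PV guest in XCP dom0.
# Steps: disable the VNC proxy, start the VM paused, attach xenconsole,
# then unpause so no early boot output is lost.

start_with_console() {   # usage: start_with_console <vm-uuid>
    # Any non-empty value in disable_pv_vnc stops the VNC proxy spawning.
    xe vm-param-set uuid="$1" other-config:disable_pv_vnc=1 || return 1
    xe vm-start uuid="$1" paused=true || return 1
    _domid=$(xe vm-list uuid="$1" params=dom-id --minimal) || return 1
    # Unpause in the background once the console has had a moment to attach.
    ( sleep 1; xe vm-unpause uuid="$1" ) &
    "${XENCONSOLE:-/usr/lib/xen/bin/xenconsole}" "$_domid"
}
```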
>> Although to be honest, for MiniOS work, I find it far more convenient
>> to use the lower-level Python toolstack, and then switch to XCP for
>> the higher-level management stuff.
>> -anil
