
Re: [Xen-devel] Creating HVM guest by libxenguest



On Thu, Oct 02, 2008 at 09:36:47AM +0100, Gihan Munasinghe wrote:
> Hi
> 
> I am new to developing for Xen. I want to create an HVM guest using
> xenctrl.h and xenguest.h. The following is code written by Mark
> McLoughlin <markmc@xxxxxxxxxx> that I found while browsing, to which I
> made some modifications.
> ============================================================================================
>  
> 
> #include <stdio.h>
> #include <xenctrl.h>
> #include <xenguest.h>
> 
> #define MEM_MB 1024
> #define IMAGE "/root/osimages/vmlinuz-2.6.24-19-xen"
> #define INITRD NULL
> #define CMDLINE ""
> #define FEATURES NULL
> #define FLAGS 0
> 
> int
> main(int argc, char **argv)
> {
> int xc_handle, ret;
> uint32_t domid;
> int store_evtchn, console_evtchn; /* signed, so the "< 0" error checks below can fire */
> unsigned long store_mfn, console_mfn;
> xen_domain_handle_t uuid = {
>   0xb2, 0x93, 0x22, 0x1f,
>   0x1a, 0xaa, 0x20, 0xac,
>   0xbe, 0x36, 0xc4, 0xd7,
>   0x6b, 0x73, 0x92, 0x1,
> };
> 
> xc_handle = xc_interface_open();
> if (xc_handle == -1)
>   {
>     fprintf(stderr, "xc_interface_open() failed\n");
>     return 1;
>   }
> 
> domid = 0;
> 
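> /*
>  * Create an empty domain (ssidref 0, no creation flags). Note: for an
>  * HVM guest the flags argument may also need XEN_DOMCTL_CDF_hvm_guest
>  * set -- check the domctl header on your tree; this passes 0 as in the
>  * original PV example.
>  */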
> ret = xc_domain_create(xc_handle, 0, uuid, 0, &domid);
> if (ret != 0)
>   {
>     fprintf(stderr, "xc_dom_linux_build() failed\n");
>     goto fail;
>   }
> 
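> /* Allocate unbound event channels for the xenstore and console rings;
>    dom0 (remote domain 0) binds the other end of each. */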
> store_evtchn = xc_evtchn_alloc_unbound(xc_handle, domid, 0);
> if (store_evtchn < 0)
>   {
>     fprintf(stderr, "Failed to allocate xenstore event channel\n");
>     goto fail;
>   }
> 
> console_evtchn = xc_evtchn_alloc_unbound(xc_handle, domid, 0);
> if (console_evtchn < 0)
>   {
>     fprintf(stderr, "Failed to allocate console event channel\n");
>     goto fail;
>   }
> 
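> /* xc_domain_setmaxmem() takes the limit in KiB, so shift MiB left by 10. */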
> ret = xc_domain_setmaxmem(xc_handle, domid, MEM_MB << 10);
> if (ret != 0)
>   {
>     fprintf(stderr, "xc_domain_set_maxmem() failed\n");
>     goto fail;
>   }
> 
> ret = xc_domain_max_vcpus(xc_handle, domid, 1);
> if (ret != 0)
>   {
>     fprintf(stderr, "xc_domain_max_vcpus() failed\n");
>     goto fail;
>   }
> 
> 
> /*  ret = xc_linux_build(xc_handle, domid,524288,
>                      IMAGE, INITRD,
>                      CMDLINE, FEATURES, FLAGS,
>                      store_evtchn, &store_mfn,
>                      console_evtchn, &console_mfn);
> */
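> /* xc_hvm_build() loads the image as HVM firmware (hvmloader in a stock
>    Xen install), not as a PV kernel -- a vmlinuz here is probably not
>    what you want. */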
> ret = xc_hvm_build(xc_handle, domid, MEM_MB, IMAGE);                    
> if (ret != 0)
>   {
>     fprintf(stderr, "xc_dom_linux_build() failed\n");
>     goto fail;
>   }
> 
> 
> xc_interface_close(xc_handle);
> return 0;
> 
> fail:
> if (domid)
>   {
>     xc_evtchn_reset(xc_handle, domid);
>     xc_domain_destroy(xc_handle, domid);
>   }
> xc_interface_close(xc_handle);
> return 1;
> }
> ===============================================================
> 
> When I run this code, I get the following error output:
> 
> VIRTUAL MEMORY ARRANGEMENT:
> Loader:        00000000c0100000->00000000c0503000
> TOTAL:         0000000000000000->0000000040000000
> ENTRY ADDRESS: 00000000c0100000
> Failed allocation for dom 12: 261952 extents of order 0
> ERROR Internal error: Could not allocate memory for HVM guest.
> (16 = Device or resource busy)
> xc_dom_linux_build() failed
> 

Looks like you don't have enough memory on your system to build a 1GB
guest domain...

It's trying to allocate 261952 4k pages (i.e. 1GB) and failing. You
could reduce MEM_MB or try running your program on a system with more
memory.
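
As a quick sanity check, you can ask the hypervisor how much memory it
actually has free before picking a guest size. Something like this (a
minimal sketch against the 3.x libxenctrl interface; I'm assuming the
total_pages/free_pages field names of xc_physinfo_t here, so check
xenctrl.h on your tree):

#include <stdio.h>
#include <xenctrl.h>

int main(void)
{
    int xc_handle;
    xc_physinfo_t info;

    xc_handle = xc_interface_open();
    if (xc_handle == -1)
        return 1;

    if (xc_physinfo(xc_handle, &info) != 0) {
        fprintf(stderr, "xc_physinfo() failed\n");
        xc_interface_close(xc_handle);
        return 1;
    }

    /* x86 pages are 4KiB, so 256 pages per MiB. */
    printf("total: %lu MiB, free: %lu MiB\n",
           (unsigned long)(info.total_pages >> 8),
           (unsigned long)(info.free_pages >> 8));

    xc_interface_close(xc_handle);
    return 0;
}

If free_pages comes back below the ~262144 pages (1GiB) your build is
asking for, reduce MEM_MB until it fits. "xm info" reports the same
figure on its free_memory line.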

Gary

> Could someone give me some insight into this? Also, is there any source
> code or documentation I can find on using the low-level Xen interfaces?
> 
> Thanks
> Gihan
> 

-- 
Gary Pennington
Solaris Core OS
Sun Microsystems
Gary.Pennington@xxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

