
Re: [Xen-API] XenServer + libvirt + Ceph


  • To: xen-api@xxxxxxxxxxxxx
  • From: Jonathan Gowar <jon@xxxxxxxxxxxxxxxx>
  • Date: Sat, 26 Oct 2013 03:08:34 +0100
  • Delivery-date: Sat, 26 Oct 2013 02:08:54 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>

On Thu, 2013-09-05 at 01:55 +0100, Jonathan Gowar wrote:
> I've gotten somewhat further, with the help of this thread:
> http://comments.gmane.org/gmane.comp.file-systems.ceph.user/2416
> 
> But, the pool will not start.
> 
> virsh # pool-dumpxml pp-ceph-1                             
> <pool type='rbd'>                                          
>   <name>pp-ceph-1</name>                                   
>   <uuid>be01cdbb-1962-024d-97f2-deec815f3848</uuid>        
>   <capacity unit='bytes'>5991852244992</capacity>          
>   <allocation unit='bytes'>2516807934</allocation>         
>   <available unit='bytes'>5986581966848</available>        
>   <source>                                                 
>     <host name='10.11.4.52' port='6789'/>                  
>     <name>rbd</name>                                       
>     <auth username='libvirt' type='ceph'>                  
>       <secret uuid='b6d26377-a700-bd77-9b1e-c9226f35d1f5'/>
>     </auth>                                                
>   </source>                                                
> </pool>                                                    
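
(An aside for anyone following the thread: before digging into libvirt,
it's worth ruling out the Ceph side. Assuming the cephx user is
client.libvirt with its keyring readable locally, and the pool really is
named rbd as in the XML above, something like this should work
independently of libvirt:

  # talk to the same monitor the pool XML points at
  rbd ls rbd --id libvirt -m 10.11.4.52
  ceph -s --id libvirt -m 10.11.4.52

If those hang or error out, the pool definition isn't the problem.)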
>                                                            
> virsh # secret-list                                        
> UUID                                 Usage                 
> -----------------------------------------------------------
> b6d26377-a700-bd77-9b1e-c9226f35d1f5 Unused                
>                                                            
> virsh # secret-dumpxml b6d26377-a700-bd77-9b1e-c9226f35d1f5
> <secret ephemeral='no' private='no'>                       
>   <uuid>b6d26377-a700-bd77-9b1e-c9226f35d1f5</uuid>        
>   <usage type='ceph'>                                      
>     <name>client.libvirt</name>                            
>   </usage>                                                 
> </secret>                                                  
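
(Worth double-checking here: defining the secret and storing a value in
it are two separate steps, and an rbd pool whose <auth> secret has no
value will fail to start. A minimal sketch, assuming the cephx user is
client.libvirt and the ceph CLI has an admin keyring available:

  # confirm whether a value is stored at all
  virsh secret-get-value b6d26377-a700-bd77-9b1e-c9226f35d1f5

  # if not, store the cephx key (already base64) in the secret
  virsh secret-set-value --secret b6d26377-a700-bd77-9b1e-c9226f35d1f5 \
      --base64 "$(ceph auth get-key client.libvirt)"
)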
> 
> virsh # pool-info pp-ceph-1                         
> Name:           pp-ceph-1                           
> UUID:           be01cdbb-1962-024d-97f2-deec815f3848
> State:          inactive                            
> Persistent:     yes                                 
> Autostart:      no                                  
>                                                     
> virsh # pool-start pp-ceph-1                      
> error: Failed to start pool pp-ceph-1             
> error: An error occurred, but the cause is unknown
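
(When virsh reports "the cause is unknown", libvirtd has usually logged
the real rados/rbd error somewhere less visible. Turning up the daemon
logging in /etc/libvirt/libvirtd.conf and restarting libvirtd tends to
surface it:

  log_level = 1
  log_outputs = "1:file:/var/log/libvirt/libvirtd.log"

The other classic cause of this symptom is a libvirt build without the
rbd storage backend compiled in.)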

Further still!

[root@pp-xen-dev ~]# TMPLUUID=$(xe template-list \
    | grep -B1 'name-label.*Red Hat.* 6.*64-bit' \
    | awk -F: '/uuid/{print $2}' | tr -d " ")
[root@pp-xen-dev ~]# VMUUID=$(xe vm-install new-name-label="CentOS6" \
    template=${TMPLUUID})
The SR has no attached PBDs
sr: aa7f4079-de6e-ffa8-ed94-d710885ca3c6 (ceph)
[root@pp-xen-dev ~]# virsh pool-info Ceph
Name:           Ceph
UUID:           b8037dda-9812-7ff3-0b39-7f2ddd450d71
State:          running
Persistent:     no
Autostart:      no
Capacity:       7.72 TiB
Allocation:     0.00 
Available:      7.72 TiB
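
A side note on the output above: Persistent is "no", so this pool was
created with pool-create and will disappear when libvirtd restarts.
Making it survive restarts would look roughly like this:

  virsh pool-dumpxml Ceph > ceph-pool.xml
  virsh pool-define ceph-pool.xml
  virsh pool-autostart Ceph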

[root@pp-xen-dev ~]# xe sr-list
uuid=aa7f4079-de6e-ffa8-ed94-d710885ca3c6 
uuid ( RO)                : aa7f4079-de6e-ffa8-ed94-d710885ca3c6
          name-label ( RW): ceph
    name-description ( RW): 
                host ( RO): pp-xen-dev
                type ( RO): libvirt
        content-type ( RO): 

So close!  Any help, please?  I don't know what that error means: "The
SR has no attached PBDs".
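
For reference: as I understand it, a PBD is the XAPI object that
connects an SR to a particular host, so the message means no host
currently has this SR plugged. If that's right, checking and fixing it
would look something like the sketch below (untested on my side, and
pbd-create may want device-config: keys for the libvirt SR type):

  # is there any PBD for the SR at all?
  xe pbd-list sr-uuid=aa7f4079-de6e-ffa8-ed94-d710885ca3c6

  # if one exists but is unplugged, plug it
  xe pbd-plug uuid=<pbd-uuid>

  # if none exists, create one for this host and plug it
  HOSTUUID=$(xe host-list name-label=pp-xen-dev --minimal)
  PBDUUID=$(xe pbd-create sr-uuid=aa7f4079-de6e-ffa8-ed94-d710885ca3c6 \
      host-uuid=${HOSTUUID})
  xe pbd-plug uuid=${PBDUUID}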

Regards,
Jon


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
