
Re: [Xen-devel] [OSSTEST Nested PATCH v11 6/7] Compose the main recipe of nested test job



On Thu, 2015-06-11 at 09:52 +0000, Pang, LongtaoX wrote:
> I have checked the nested job testids, as below:
> #./standalone run-job --simulate -h dummy test-amd64-amd64-qemuu-nested | grep testid
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 1 testid build-check(1) ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 2 testid hosts-allocate ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 3 testid host-install(3) ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 4 testid host-ping-check-native ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 5 testid xen-install ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 6 testid xen-boot ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 7 testid host-ping-check-xen ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 8 testid leak-check/basis(8) ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 9 testid debian-hvm-install/nestedl1 ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 10 testid nested-setup/nestedl1 ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 11 testid xen-install/nestedl1 ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 12 testid host-reboot/nestedl1 ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 13 testid debian-hvm-install/nestedl1/nestedl2 ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 14 testid guest-stop/nestedl1/nestedl2 ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 15 testid guest-destroy/nestedl1 ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 16 testid leak-check/check ==========
> 2015-06-11 09:46:37 Z standalone.test-amd64-amd64-qemuu-nested ========== 17 testid capture-logs(17) ==========
> 
> Sorry, I am confused: 'capture-logs' has already been added to the job; do
> we need to add it again?

It's running against the L0 though, isn't it?

I think you want both L0 and L1 hypervisors to have their logs
collected.
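
Something along these lines at the end of the recipe might do it (a sketch
only; I'm assuming ts-logs-capture can be pointed at the nestedl1 ident the
same way the other ts-* scripts in your recipe are, and the exact testid
spelling may need adjusting):

    run-ts . = ts-logs-capture + host nestedl1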

> 
> > > > I think some extra +s in the l2 install and start operations might be
> > > > useful, because the testid probably doesn't need to mention nestedl1.
> > > >
> > > I am sorry, do you mean that I should add a '+' for the l2 installation,
> > > such as 'ts-debian-hvm-install + nestedl1 + nestedl2'?
> > 
> > You can use standalone --dry-run to see all the testids generated by
> > your job and then adjust the +'s until they are as desired (in this case
> > Ian is suggesting to omit nestedl1 from the testid).
> > 
> Thanks Ian C.
> So, the expected recipe in 'sg-run-job' would be like the below, right?

I'm not sure; if that produces the correct testids then it is correct.
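
A quick way to check is the --simulate invocation you quoted above:

    ./standalone run-job --simulate -h dummy test-amd64-amd64-qemuu-nested | grep testid

and then tweak the +'s until the list of testids comes out as you want.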

> (I will try to change the idents/guest names to `l1' and `l2' later, as per
> Ian Jackson's suggestion.)
> proc need-hosts/test-nested {} {return host}
> proc run-job/test-nested {} {
>     run-ts . = ts-debian-hvm-install + host nestedl1
>     run-ts . = ts-nested-setup + host nestedl1
>     run-ts . = ts-xen-install nestedl1
>     run-ts . = ts-host-reboot nestedl1
>     run-ts . = ts-debian-hvm-install nestedl1 nestedl2
>     run-ts . = ts-guest-stop nestedl1 nestedl2
>     run-ts . = ts-guest-destroy + host nestedl1
> }
> > Ian.
> 


