
Re: [Xen-devel] [OSSTEST Nested PATCH 2/6] Add and expose some testsupport APIs



Adding xen-devel to the mail loop.

> -----Original Message-----
> From: Pang, LongtaoX
> Sent: Friday, March 20, 2015 7:59 PM
> To: 'Ian Campbell'
> Cc: Ian.Jackson@xxxxxxxxxxxxx; wei.liu2@xxxxxxxxxx; Hu, Robert
> Subject: RE: [OSSTEST Nested PATCH 2/6] Add and expose some testsupport
> APIs
> 
> 
> 
> > -----Original Message-----
> > From: Ian Campbell [mailto:ian.campbell@xxxxxxxxxx]
> > Sent: Friday, March 20, 2015 12:27 AM
> > To: Pang, LongtaoX
> > Cc: xen-devel@xxxxxxxxxxxxx; Ian.Jackson@xxxxxxxxxxxxx;
> > wei.liu2@xxxxxxxxxx; Hu, Robert
> > Subject: Re: [OSSTEST Nested PATCH 2/6] Add and expose some
> > testsupport APIs
> >
> > On Tue, 2015-03-17 at 14:16 -0400, longtao.pang wrote:
> > > From: "longtao.pang" <longtaox.pang@xxxxxxxxx>
> > >
> > > 1. Designate the vif model as 'e1000'; otherwise, with the default
> > > device model, the L1 eth0 interface disappears, so the xenbridge cannot
> > > work. Maybe this limitation can be removed later once someone fixes it.
> > > For now, we have to accommodate it.
> >
> > You have done this unconditionally, which means it affects all guests.
> > You need to make this configurable by the caller, probably by plumbing
> > it through in $xopts (a hash of extra options).
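> > What Ian suggests might look roughly like this (a sketch only; the
> > 'VifModel' key and the vif_spec helper are hypothetical, not osstest's
> > actual API):
> >
> >     # Sketch: let the caller pick the vif device model through $xopts.
> >     # The 'VifModel' key name is an assumption for illustration.
> >     sub vif_spec ($$) {
> >         my ($gho, $xopts) = @_;
> >         my $vif = "bridge=$gho->{Bridge}";
> >         # Nested-HVM jobs would pass VifModel => 'e1000'; everyone else
> >         # omits it and keeps the toolstack's default device model.
> >         $vif .= ",model=$xopts->{VifModel}" if $xopts->{VifModel};
> >         return $vif;
> >     }
> >
> > That way only the nested job opts in, and other guests are unaffected.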
> >
> > I see now you were told this last time around by Ian J. Please don't
> > just resend such things without change: either fix them, make an
> > argument for doing it your way, or ask for clarification if you don't
> > understand the requested change.
> >
> 
Thanks for your advice; I will try it. But do you have any idea about the 
issue below, which has confused me?
After the L1 Debian HVM guest boots into the Xen kernel, it fails to load the 
8139cp driver (Realtek RTL-8139), which makes the L1 guest's network 
unavailable; I have to specify 'model=e1000' to make L1's network work.
The issue does not exist on RHEL6u5 (when L0 and L1 are both RHEL6u5).
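Concretely, the workaround amounts to a guest config line along these lines (a 
sketch; the bridge name is illustrative):

```
vif = [ 'model=e1000,bridge=xenbr0' ]
```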
> 
> > > 2. Since rebooting the L1 guest VM takes more time, we increase the
> > > timeout multiplier for reboot-confirm-booted when testing a nested
> > > job, and the multiplier value is stored as a runvar in the
> > > 'ts-nested-setup' script.
> > > Added another function 'guest_editconfig_cd' and exposed it; this
> > > function basically changes the guest boot device sequence, alters its
> > > on_reboot behavior to 'restart', and enables the nestedhvm feature.
> >
> > This looks like two items run together?
> >
> > The multi_reboot_time thing sounds ok, but it should be called
> > reboot_time_factor or something like that. In fact I see that Ian
> > suggested previously that it should have the host ident in it, that makes
> sense to me.
> >
I will try it. Also, how do you handle the question below, about rebooting the 
host OS during an OSSTest job?
After finishing the L0 and L1 host installation, the OSes take a long time 
(about 150s) to start the MTA and NTP services.
As I understand it, the poll_loop timeout of 'reboot-confirm-booted' is 40s, 
which is why the timeout happens when 'host_reboot' is called after rebooting 
the host OS.
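The reboot_time_factor idea could be sketched like this (the runvar naming 
scheme with the host ident in it is an assumption, following Ian's 
suggestion, not osstest's existing code):

    # Sketch: scale the 40s reboot-confirm timeout by a per-host runvar.
    # The "${ident}_reboot_time_factor" runvar name is hypothetical.
    sub reboot_timeout ($$;$) {
        my ($r, $ho_ident, $base) = @_;
        $base //= 40;   # the poll_loop default mentioned above
        my $factor = $r->{"${ho_ident}_reboot_time_factor"} // 1;
        return $base * $factor;
    }

A nested job would then set, say, l1_reboot_time_factor=5 to get a 200s 
timeout, while ordinary hosts keep the 40s default.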

> 
> > The editconfig_cd thing -- yet another thing which Ian questioned and
> > which it was agreed you would change but you haven't.
> >
For this question, I sent a mail about it on 2015-03-04.
After finishing the L1 guest VM installation, we need to change the L1 guest 
boot sequence from the ISO image to the hard disk, i.e. modify "boot=dc", and 
also enable the 'nestedhvm' feature in the HVM config file; so we added the 
'guest_editconfig_cd' function.
'guest_editconfig_nocd' does not cover this, and changing it would affect all 
guests, which is not what we want.
+sub guest_editconfig_cd ($) {
+    my ($gho) = @_;
+    guest_editconfig($gho->{Host}, $gho, sub {
+        # Boot from hard disk before CD once installation is done
+        if (m/^\s*boot\s*=\s*'\s*dc\s*'/) {
+            s/dc/cd/;
+        }
+        # Restart (rather than destroy) the L1 guest on reboot
+        s/^on_reboot.*/on_reboot='restart'/;
+        # Uncomment the nestedhvm option
+        s/#nestedhvm/nestedhvm/;
+    });
+}
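For what it's worth, the three substitutions can be exercised on a plain 
string, independent of guest_editconfig (a sketch; the sample config text and 
the edit_l1_config name are illustrative):

    # Sketch: apply guest_editconfig_cd's three edits to a config string,
    # so the regexes can be checked in isolation.
    sub edit_l1_config ($) {
        my ($cfg) = @_;
        for ($cfg) {
            s/^(\s*boot\s*=\s*')dc(')/${1}cd$2/m;   # disk before CD
            s/^on_reboot.*/on_reboot='restart'/m;   # restart, not destroy
            s/#nestedhvm/nestedhvm/;                # uncomment nestedhvm
        }
        return $cfg;
    }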
> > I think perhaps you have accidentally resent an older version of the
> > series. If not then please go back and ensure you have addressed all
> > of the feedback given on the last iteration before sending another version.
> >
> > Ian.
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

