
Re: [Xen-devel] [PATCH 2 of 4 RFC] xl/remus: Network buffering cmdline switch, setup/teardown



On Mon, Jul 29, 2013 at 11:49 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Thu, 2013-07-25 at 00:09 -0700, Shriram Rajagopalan wrote:
>> Add appropriate code to xl_cmdline.c to setup network buffers for
>> each vif belonging to the guest.  Also provide a command line switch
>> to explicitly "enable" network buffering.
>>
>> Signed-off-by: Shriram Rajagopalan <rshriram@xxxxxxxxx>
>>
>> diff -r 3ae38cbe535c -r 3cd67f6ff63a tools/libxl/libxl_types.idl
>> --- a/tools/libxl/libxl_types.idl     Wed Jul 24 22:55:00 2013 -0700
>> +++ b/tools/libxl/libxl_types.idl     Thu Jul 25 00:02:19 2013 -0700
>> @@ -521,6 +521,7 @@ libxl_domain_remus_info = Struct("domain
>>      ("interval",     integer),
>>      ("blackhole",    bool),
>>      ("compression",  bool),
>> +    ("netbuf_iflist", libxl_string_list),
>>      ])
>>
>>  libxl_event_type = Enumeration("event_type", [
>> diff -r 3ae38cbe535c -r 3cd67f6ff63a tools/libxl/xl_cmdimpl.c
>> --- a/tools/libxl/xl_cmdimpl.c        Wed Jul 24 22:55:00 2013 -0700
>> +++ b/tools/libxl/xl_cmdimpl.c        Thu Jul 25 00:02:19 2013 -0700
>> @@ -7039,10 +7039,109 @@ done:
>>      return ret;
>>  }
>>
>> +static char **get_guest_vifnames(uint32_t domid, int *num_vifs)
>> +{
>> +    char **viflist;
>> +    libxl_device_nic *nics;
>> +    libxl_nicinfo nicinfo;
>> +    int nb, i;
>> +
>> +    nics = libxl_device_nic_list(ctx, domid, &nb);
>> +    if (!nics) { *num_vifs = 0; return NULL;}
>> +
>> +    viflist = calloc((nb + 1), sizeof(char *));
>> +    if (!viflist) {
>> +        perror("failed to allocate memory to hold vif names!");
>> +        exit(-1);
>> +    }
>> +
>> +    for (i = 0; i < nb; ++i) {
>> +        if (!libxl_device_nic_getinfo(ctx, domid, &nics[i], &nicinfo))  {
>> +            if (asprintf(&viflist[i], "vif%u.%d", domid, nicinfo.devid) < 0) {
>
> This doesn't account for the ifname field of the vif.
>

Ah, thanks, good catch. Will add a check for that.
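
Something along these lines, perhaps (a rough sketch, assuming ifname
is NULL when no explicit name was configured for the vif):

    if (nics[i].ifname) {
        /* the vif has an explicit name from the guest config */
        viflist[i] = strdup(nics[i].ifname);
        if (!viflist[i]) { perror("strdup"); exit(-1); }
    } else if (asprintf(&viflist[i], "vif%u.%d", domid,
                        nicinfo.devid) < 0) {
        ...
    }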

> Also, I'm not sure how this is supposed to work when driver domains are
> in use either,

Remus won't work with driver domains unless there are agents in each
of the driver domains that are coordinated by the memory checkpoint
code in dom0.

Irrespective of the hotplug scripts, Remus needs to control the IFB
network interface attached to the guest's vifs/tap devices. Given that
all these interfaces would be inside a driver domain, which does not
have a fast (non-xenstore) communication channel to dom0, there is no
way the memory checkpointing code can coordinate with the driver
domain to buffer/release its network packets after each checkpoint.

One alternative would be to have a network agent running inside each
of these driver domains, assuming the driver domains have network
access (practical?). The memory checkpoint code would then control
the IFB devices via these network agents.

All this is in the long run. The immediate goal is to get network/disk
buffering to work with xl.


> if this were done via libxl then it would naturally get
> incorporated into Roger's work to make hotplug scripts etc work properly
> with stub domains.
>
> Most of the other comments I had would become invalid/irrelevant when
> this was moved to libxl, since you'd naturally end up doing things
> differently anyway.
>

Even if the code moved to libxl, most of the setup would remain the
same, i.e., set up on demand using C code or something similar. It
does not make sense to set up an IFB device for every domain right
from boot; you would eventually run out of IFB devices in the system.
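
For illustration, the on-demand setup could look roughly like this
(a sketch using libnl-3's rtnl_link API from <netlink/route/link.h>;
error handling omitted, the ifb module assumed loaded, and the device
name here is just an example):

    struct rtnl_link *ifb = rtnl_link_alloc();

    rtnl_link_set_name(ifb, "ifb-vif1.0");   /* one IFB per guest vif */
    rtnl_link_set_type(ifb, "ifb");
    rtnl_link_add(sk, ifb, NLM_F_CREATE);    /* sk: NETLINK_ROUTE socket */
    rtnl_link_put(ifb);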

If you are suggesting that we invoke a hotplug script when the
"xl remus" command is issued, I don't mind doing that either. The code
in libxl (to control the plug qdisc) is not going to go away in either
case.
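
Roughly, per checkpoint, that control amounts to the following (a
sketch against libnl-3's plug qdisc helpers in
<netlink/route/qdisc/plug.h>; the qdisc/socket setup on the IFB is
omitted):

    /* start buffering a new epoch on the IFB's plug qdisc */
    rtnl_qdisc_plug_buffer(qdisc);
    rtnl_qdisc_add(sk, qdisc, NLM_F_REQUEST);

    /* ... checkpoint committed to the backup host ... */

    /* release the packets buffered during the previous epoch */
    rtnl_qdisc_plug_release_one(qdisc);
    rtnl_qdisc_add(sk, qdisc, NLM_F_REQUEST);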

shriram
