
Re: [Xen-devel] [PATCH 1/2] libxl: fix race in libxl__devices_destroy



Roger Pau Monne writes ("[PATCH 1/2] libxl: fix race in libxl__devices_destroy"):
> Don't have a fixed number of devices in the aodevs array, and instead
> size it depending on the devices present in xenstore. Also change the
> console destroy path to a switch instead of an if.
...
>      GCNEW_ARRAY(aodevs->array, aodevs->size);
>      for (int i = 0; i < aodevs->size; i++) {
> -        aodevs->array[i].aodevs = aodevs;
> -        libxl__prepare_ao_device(ao, &aodevs->array[i]);
> +        GCNEW(aodevs->array[i]);
> +        aodevs->array[i]->aodevs = aodevs;
> +        libxl__prepare_ao_device(ao, aodevs->array[i]);

I agree with Ian C's comments on this.

> +    /*
> +     * Since we have already added all the aodevs, and marked them
> +     * as active, we can mark the addition operation as finished.
> +     */
> +    aodevs->in_addition = 0;

And this is rather messy; it requires careful thought to make sure
everything is correct.

How about making libxl__prepare_ao_devices automatically create the
"we are still searching for stuff to do" aodev?

You might want a
  libxl__finished_preparing_ao_devices
to hide the "searching" aodev entirely.

If you do it this way, then the "no disks" corner case comes out in
the wash too.
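
To make the shape concrete, here is a deliberately simplified,
standalone sketch (plain C, not libxl code; the helper names merely
parallel the ones suggested above, and the real thing would of course
hang off the ao/gc machinery): the search itself is counted as an
outstanding aodev, so the completion callback cannot fire while we are
still walking xenstore, and the "no devices found" case needs no
special handling.

/* Toy model of the suggested scheme - not libxl code. */
#include <stdio.h>

struct aodevs {
    int outstanding;                    /* entries not yet completed,
                                         * including the search itself */
    void (*callback)(struct aodevs *);  /* called once everything is done */
};

/* Analogue of the proposed libxl__prepare_ao_devices: start with one
 * outstanding entry meaning "we are still searching for stuff to do". */
static void prepare_ao_devices(struct aodevs *a, void (*cb)(struct aodevs *))
{
    a->outstanding = 1;
    a->callback = cb;
}

/* Analogue of libxl__prepare_ao_device, called for each device found. */
static void prepare_ao_device(struct aodevs *a)
{
    a->outstanding++;
}

/* Called when any entry (a device operation, or the search) completes. */
static void aodev_done(struct aodevs *a)
{
    if (--a->outstanding == 0)
        a->callback(a);
}

/* Analogue of the proposed libxl__finished_preparing_ao_devices:
 * retire the "searching" placeholder. */
static void finished_preparing_ao_devices(struct aodevs *a)
{
    aodev_done(a);
}

static void all_done(struct aodevs *a)
{
    (void)a;
    printf("all device operations complete\n");
}

int main(void)
{
    struct aodevs a;

    /* Degenerate "no disks" case: nothing is ever added, yet the
     * callback still fires exactly once, when the search finishes. */
    prepare_ao_devices(&a, all_done);
    finished_preparing_ao_devices(&a);

    /* Normal case: two devices found during the search. */
    prepare_ao_devices(&a, all_done);
    prepare_ao_device(&a);
    prepare_ao_device(&a);
    finished_preparing_ao_devices(&a);
    aodev_done(&a);     /* first device's async operation completes */
    aodev_done(&a);     /* second one's; callback fires here */

    return 0;
}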

> @@ -588,21 +553,28 @@ void libxl__devices_destroy(libxl__egc *egc, libxl__devices_remove_state *drs)
...
> +                default:
> +                    GCREALLOC_ARRAY(aodevs->array, aodevs->size + 1);
> +                    GCNEW(aodev);
> +                    aodevs->array[aodevs->size] = aodev;

Open coding this is pretty ugly.  I agree with Ian C that
libxl__prepare_ao_device should do this.
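
Roughly, as an untested standalone sketch (plain calloc/realloc stand
in for GCNEW/GCREALLOC_ARRAY here, and the real helper would still take
the ao and do the existing per-aodev initialisation): have the prepare
function grow the array, allocate the entry and hand it back, so that
call sites never open-code this.

#include <stdlib.h>

struct aodev {
    struct aodevs *aodevs;
    /* ... per-device state ... */
};

struct aodevs {
    struct aodev **array;
    int size;
};

/* Sketch of what the prepare helper could do internally: append a
 * freshly allocated entry to the array and return it to the caller. */
static struct aodev *prepare_ao_device(struct aodevs *aodevs)
{
    struct aodev *aodev = calloc(1, sizeof(*aodev));
    struct aodev **grown = realloc(aodevs->array,
                                   (aodevs->size + 1) * sizeof(*grown));

    if (!aodev || !grown)
        abort();            /* the gc allocators would not return NULL */

    aodevs->array = grown;
    aodevs->array[aodevs->size++] = aodev;
    aodev->aodevs = aodevs;
    return aodev;
}

int main(void)
{
    struct aodevs aodevs = { NULL, 0 };

    /* Caller just asks for a new slot; no GCREALLOC_ARRAY/GCNEW at the
     * call site. */
    struct aodev *aodev = prepare_ao_device(&aodevs);
    (void)aodev;
    return 0;
}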

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

