
Re: [Xen-devel] [PATCH 1/2] tools: remove systemd xenstore socket definitions



On 20/07/16 12:12, Juergen Gross wrote:
> On 20/07/16 12:52, George Dunlap wrote:
>> On Wed, Jul 20, 2016 at 10:58 AM, Juergen Gross <jgross@xxxxxxxx> wrote:
>>> On 20/07/16 11:02, Ross Lagerwall wrote:
>>>> On 06/29/2016 02:44 PM, Juergen Gross wrote:
>>>>> On 29/06/16 15:31, Ross Lagerwall wrote:
>>>>>> On 06/29/2016 02:00 PM, Juergen Gross wrote:
>>>>>>> On 29/06/16 14:52, Andrew Cooper wrote:
>>>>>>>> On 29/06/16 13:44, Juergen Gross wrote:
>>>>>>>>> @@ -2068,13 +1964,6 @@ int main(int argc, char *argv[])
>>>>>>>>>      /* Tell the kernel we're up and running. */
>>>>>>>>>      xenbus_notify_running();
>>>>>>>>>
>>>>>>>>> -#if defined(XEN_SYSTEMD_ENABLED)
>>>>>>>>> -    if (systemd) {
>>>>>>>>> -        sd_notify(1, "READY=1");
>>>>>>>>> -        fprintf(stderr, SD_NOTICE "xenstored is ready\n");
>>>>>>>>> -    }
>>>>>>>>> -#endif
>>>>>>>> Getting rid of the socket configuration for systemd is ok, but we
>>>>>>>> should
>>>>>>>> keep the sd_notify() calls for when the daemon is started by systemd.
>>>>>>>>
>>>>>>>> Socket activation and sd_notify() are orthogonal, and sd_notify() is
>>>>>>>> still required if we don't want systemd to treat xenstored as a legacy
>>>>>>>> Unix daemon.
>>>>>>> So what is the downside of xenstored being treated as a legacy daemon?
>>>>>>> This question is especially interesting when patch 2 is considered:
>>>>>>> xenstored is no longer started by systemd, but by a wrapper script
>>>>>>> which might decide to start the xenstore domain instead.
>>>>>>>
>>>>>>> Another problem: today xenstored decides whether to call sd_notify()
>>>>>>> by testing whether the xenstore sockets were specified via systemd.
>>>>>>> This will no longer work. So how to do it now?
>>>>>>>
>>>>>>>
>>>>>> One problem with the patch as currently implemented is that the
>>>>>> service type is not correct when xenstored runs as a daemon. This makes
>>>>>> it difficult to manage with systemd and difficult for other services to
>>>>>> depend on it in a sensible fashion. The end result is subtle races and
>>>>>> occasional failures.
>>>>> Could you please educate me what's wrong? I'm no systemd expert.
>>>> Sorry for the late response.
>>>>
>>>> With Type=oneshot, systemd considers the service to be "started" as soon
>>>> as the ExecStart command completes. After re-reading the patches, I
>>>> realize that timeout_xenstore() should ensure that xenstored is actually
>>>> ready before systemd starts dependent services. This should prevent the
>>>> races I mentioned previously.
>>>>
>>>> Nevertheless, I feel that this patch series makes the implementation
>>>> worse for users of xenstored as a daemon:
>>>>
>>>> - Because Type=oneshot is not intended to be used for daemon processes,
>>>> systemctl status does not show the service as "running"; instead it
>>>> shows "exited". This is surprising, at the very least. I haven't tested,
>>>> but I believe it would also prevent us from someday using systemd service
>>>> management features like Restart=on-failure.
>>>>
>>>> - Removes socket activation. Services would have to depend explicitly on
>>>> xenstored.service rather than having an implicit dependency simply by
>>>> using xenstored.socket. This means configuring explicit ordering, etc.,
>>>> which is easier to get wrong. Socket activation is also generally the
>>>> most efficient way of starting a service.
>>> So you are not taking xenstore domain into account at all.
>>>
>>> There is no socket for the xenstore domain, so relying on socket
>>> activation is the wrong approach.
>>>
>>>> - Uses a poll loop to determine whether the daemon is ready or not
>>>> rather than explicit notification from the daemon itself.
>>> How to do this in the domain case?
>>>
>>>> - Uses a shell script wrapper...
>>> That's on purpose: this enables me to switch between both setups
>>> (daemon or domain) by modifying only _one_ configuration file.
>> FWIW I'm pretty sure using a shell script wrapper also clobbers the
>> SELinux context.  (I forget whether this means xenstore doesn't run or
>> xenstore runs completely unrestricted.)
> Okay, this would need to be corrected. I assume it is possible to
> address such an issue in a script?
>
>> (Ross, does XenServer use SELinux?  Any ideas here?)

XenServer does not use SELinux.


>>
>>>> I would suggest asking on the systemd-devel mailing list or the #systemd
>>>> IRC channel on freenode. They should be able to suggest the best way of
>>>> solving this.
>>> I'm sure I could tweak the whole setup more in the direction of systemd.
>>> I'm not sure the final picture would be the preferred one.
>> But as Ross said, there are advantages to using the systemd-specific
>> functionality; and it's likely that the majority of our users will be
>> running xenstored in dom0.  Doesn't it make sense to at least see if
>> there are sensible options for how we can take advantage of those
>> before we just drop it?
> I still don't see the advantages, as relying on socket activation of
> the xenstore service is not an option: as soon as you do this you drop
> the possibility of ever using the xenstore domain on such a system.
>
> To be clear: I don't want to avoid systemd by any means. I just don't
> want a complex and ugly solution with no gain just for the sake of
> doing it the systemd way.

Given the introduction of this new choice, I agree that socket
activation isn't sensible.  In the grand scheme of things it doesn't buy
you much: xenstored is a start-of-day service that runs forever, so it
does not match the intended use for socket activation (on-demand launch
of a service when something tries to use its socket).

However, socket activation and sd_notify() are entirely orthogonal, and
the removal of socket activation should not remove sd_notify().

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
