
Re: [Xen-devel] [PATCH 2/6] xenbus: implement the xenwatch multithreading framework



Hi Boris,

On 09/17/2018 05:20 AM, Boris Ostrovsky wrote:
> 
> 
> On 9/14/18 3:34 AM, Dongli Zhang wrote:
>>
>> +
>> +/* Running in the context of default xenwatch kthread. */
>> +void mtwatch_create_domain(domid_t domid)
>> +{
>> +    struct mtwatch_domain *domain;
>> +
>> +    if (!domid) {
>> +        pr_err("Default xenwatch thread is for dom0\n");
>> +        return;
>> +    }
>> +
>> +    spin_lock(&mtwatch_info->domain_lock);
>> +
>> +    domain = mtwatch_find_domain(domid);
>> +    if (domain) {
>> +        atomic_inc(&domain->refcnt);
>> +        spin_unlock(&mtwatch_info->domain_lock);
>> +        return;
>> +    }
>> +
>> +    domain = kzalloc(sizeof(*domain), GFP_ATOMIC);
> 
> Is there a reason (besides this being done under spinlock) for using 
> GFP_ATOMIC?
> If domain_lock is the only reason I'd probably drop the lock and do 
> GFP_KERNEL.

The spin_lock is the only reason for GFP_ATOMIC.

Would you prefer that I switch domain_lock to a mutex here instead?
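
Or, if you would rather keep GFP_KERNEL, one common way is to allocate
before taking domain_lock and free the allocation when another caller has
raced in first. A rough sketch only (names follow the patch; the
initialization in the middle is elided):

/* Sketch: sleepable allocation done before domain_lock is taken. */
void mtwatch_create_domain(domid_t domid)
{
	struct mtwatch_domain *domain, *new;

	if (!domid) {
		pr_err("Default xenwatch thread is for dom0\n");
		return;
	}

	/* Not under any lock yet, so GFP_KERNEL is fine. */
	new = kzalloc(sizeof(*new), GFP_KERNEL);
	if (!new) {
		pr_err("Failed to allocate memory for mtwatch thread %d\n",
		       domid);
		return;
	}

	spin_lock(&mtwatch_info->domain_lock);

	domain = mtwatch_find_domain(domid);
	if (domain) {
		/* Someone else created the entry first: reuse it. */
		atomic_inc(&domain->refcnt);
		spin_unlock(&mtwatch_info->domain_lock);
		kfree(new);
		return;
	}

	/* ... initialize @new and add it to domain_list/domain_hash ... */

	spin_unlock(&mtwatch_info->domain_lock);

	/* ... create the per-domU kthread as before ... */
}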

> 
> 
>> +    if (!domain) {
>> +        spin_unlock(&mtwatch_info->domain_lock);
>> +        pr_err("Failed to allocate memory for mtwatch thread %d\n",
>> +               domid);
>> +        return;
> 
> This needs to return an error code, I think. Or do you want to fall back to
> shared xenwatch thread?

We would fall back to the shared default xenwatch thread: as implemented in
[PATCH 3/6], an event is dispatched to the shared xenwatch thread when the
per-domU one is not available.
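
Roughly, the dispatch side ends up looking like the sketch below (the real
code is in [PATCH 3/6]; mtwatch_queue_event() is only an illustrative name
here, and it assumes mtwatch_find_domain() walks the RCU-protected hash):

/* Return false so the caller queues the event on the shared
 * watch_events list handled by the default xenwatch thread.
 */
static bool mtwatch_queue_event(struct xs_watch_event *event, domid_t domid)
{
	struct mtwatch_domain *domain;

	rcu_read_lock();

	domain = mtwatch_find_domain(domid);
	if (!domain || domain->state != MTWATCH_DOMAIN_UP) {
		rcu_read_unlock();
		return false;	/* no usable per-domU thread: fall back */
	}

	spin_lock(&domain->events_lock);
	list_add_tail(&event->list, &domain->events);
	spin_unlock(&domain->events_lock);
	wake_up(&domain->events_wq);

	rcu_read_unlock();
	return true;
}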

> 
> 
>> +    }
>> +
>> +    domain->domid = domid;
>> +    atomic_set(&domain->refcnt, 1);
>> +    mutex_init(&domain->domain_mutex);
>> +    INIT_LIST_HEAD(&domain->purge_node);
>> +
>> +    init_waitqueue_head(&domain->events_wq);
>> +    spin_lock_init(&domain->events_lock);
>> +    INIT_LIST_HEAD(&domain->events);
>> +
>> +    list_add_tail_rcu(&domain->list_node, &mtwatch_info->domain_list);
>> +
>> +    hlist_add_head_rcu(&domain->hash_node,
>> +               &mtwatch_info->domain_hash[MTWATCH_HASH(domid)]);
>> +
>> +    spin_unlock(&mtwatch_info->domain_lock);
>> +
>> +    domain->task = kthread_run(mtwatch_thread, domain,
>> +                   "xen-mtwatch-%d", domid);
>> +
>> +    if (!domain->task) {
>> +        pr_err("mtwatch kthread creation is failed\n");
>> +        domain->state = MTWATCH_DOMAIN_DOWN;
> 
> 
> Why not clean up right here?

I had thought there might be a race between mtwatch_create_domain() and
mtwatch_put_domain(), but I now realize that race is impossible. I will clean
up right here.
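
Something along these lines (sketch only; it assumes mtwatch_domain gains a
struct rcu_head member so the entry can be freed after a grace period, since
it was already published on the RCU lists). Note that kthread_run() returns
ERR_PTR() on failure rather than NULL, so the check should be IS_ERR():

	domain->task = kthread_run(mtwatch_thread, domain,
				   "xen-mtwatch-%d", domid);

	if (IS_ERR(domain->task)) {
		pr_err("Failed to create mtwatch kthread for domain %d\n",
		       domid);

		/* Undo the insertion done earlier under domain_lock. */
		spin_lock(&mtwatch_info->domain_lock);
		list_del_rcu(&domain->list_node);
		hlist_del_rcu(&domain->hash_node);
		spin_unlock(&mtwatch_info->domain_lock);

		/* Readers may still see the entry until a grace period
		 * has elapsed, so defer the free.
		 */
		kfree_rcu(domain, rcu);
		return;
	}

	domain->state = MTWATCH_DOMAIN_UP;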

> 
>> +
>> +        return;
>> +    }
>> +
>> +    domain->state = MTWATCH_DOMAIN_UP;
>> +}
>> +
> 
> 
>> +
>>   void unregister_xenbus_watch(struct xenbus_watch *watch)
>>   {
>>       struct xs_watch_event *event, *tmp;
>> @@ -831,6 +1100,9 @@ void unregister_xenbus_watch(struct xenbus_watch *watch)
>>         if (current->pid != xenwatch_pid)
>>           mutex_unlock(&xenwatch_mutex);
>> +
>> +    if (xen_mtwatch && watch->get_domid)
>> +        unregister_mtwatch(watch);
> 
> 
> I may not be understanding the logic flow here, but if we successfully
> removed/unregistered/purged the watch from mtwatch lists, do we still need to
> try removing it from watch_events list below?

The existing code earlier in unregister_xenbus_watch() has already removed the
pending events from watch_events before the newly added lines above are
reached.
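
For reference, the earlier part of unregister_xenbus_watch() already does
roughly the following for the shared list (paraphrasing the existing code in
drivers/xen/xenbus/xenbus_xs.c):

	/* Cancel pending watch events on the shared list. */
	spin_lock(&watch_events_lock);
	list_for_each_entry_safe(event, tmp, &watch_events, list) {
		if (event->handle != watch)
			continue;
		list_del(&event->list);
		kfree(event);
	}
	spin_unlock(&watch_events_lock);

The unregister_mtwatch() call added above is then meant to handle only the
events queued on the per-domU lists.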


Dongli Zhang

> 
> 
> -boris
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

