
Re: [Xen-devel] [PATCH v3 3/3] xen: optimize xenbus driver for multiple concurrent xenstore accesses

On 02/07/2017 12:51 PM, Boris Ostrovsky wrote:
> On 01/24/2017 11:23 AM, Juergen Gross wrote:
>> On 24/01/17 14:47, Boris Ostrovsky wrote:
>>> On 01/23/2017 01:59 PM, Boris Ostrovsky wrote:
>>>> On 01/23/2017 05:09 AM, Juergen Gross wrote:
>>>>> Handling of multiple concurrent Xenstore accesses through the xenbus
>>>>> driver, either from the kernel or from user land, is rather lame
>>>>> today: xenbus can have only one access active at any point in time.
>>> This patch appears to break save/restore:
>> Hmm, tried multiple times, but I can't reproduce this issue.
>> Anything special in the setup? I tried a 64 bit pv guest and did
>> "xl save".
>> Do I have to run some load in parallel?
> Any luck reproducing this? The test still fails on dumpdata, but I
> couldn't reproduce it on another system.

The problem appears to be xs_state_users being non-zero at suspend time.

From what I understand this is caused by xs_request_exit() not
decrementing it when closing a transaction. This seems to happen when
an XS_TRANSACTION_END request returns XS_ERROR (I haven't traced what
causes the error, but it doesn't appear to do any visible harm).

Does the patch below make sense?

diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
index e62cb09..ffd5fac 100644
--- a/drivers/xen/xenbus/xenbus_xs.c
+++ b/drivers/xen/xenbus/xenbus_xs.c
@@ -140,7 +140,7 @@ void xs_request_exit(struct xb_req_data *req)
         if ((req->type == XS_TRANSACTION_START && req->msg.type == XS_ERROR) ||
-            req->msg.type == XS_TRANSACTION_END)
+            req->type == XS_TRANSACTION_END)

I ran a few tests on dumpdata and they completed successfully. I'll keep
this for the overnight runs too, with a different Xen version.

