[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [Xen-devel] xs transaction


  • To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Tue, 11 Sep 2007 15:44:24 +0800
  • Delivery-date: Tue, 11 Sep 2007 00:45:13 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acf0Q9+kPHP1cHSpSZSDVBFVO42XRAAAfPCSAAAbwlAAAEOchQAAJD+A
  • Thread-topic: [Xen-devel] xs transaction

>From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx]
>Sent: 11 September 2007 15:43
>
>On 11/9/07 08:34, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
>
>> So given following nest case, I'm not sure how it's prevented:
>> Xs_transaction_start(xs_handle);
>> <- at this point, conn->transaction is NULL
>> Xs_transaction_start(xs_handle);
>
>These transactions are not nested (at least from the p.o.v. of xenstored):
>they are independent transaction contexts which cannot see one
>another's
>updates. It's not even possible to request construction of a nested
>transaction via the interfaces exposed in xs.h.
>

OK, so by "nested" you mean two transactions with a parent-child 
relationship... Then the problem still stands: the existing code doesn't 
prevent multiple transactions from being created on the same connection, 
while the message-processing logic can't handle more than one.

Say two threads each start a transaction on the same connection ID. Both 
starts are likely to succeed as long as they are issued in sequence. But 
later, once the two threads send messages to xenstore at the same time, 
one of them will trigger an assertion that xenstored can't handle.

Since we're sure the current implementation doesn't handle parallel 
transactions (even though the API level allows them), I think 
do_transaction_start should check conn->transaction_list instead of 
conn->transaction to enforce that.

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

