
[Xen-devel] [OSSTEST PATCH 1/3] README.planner: Improve internals documentation a bit



* share-flight resources may end up owned by a different task than the
  one indicated by their shareix, perhaps as a result of test database
  operations or perhaps as a result of donation with mg-allocate.  This
  should not be a problem.

* Document the xdbref task type.

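To illustrate the convention the patch describes, here is a minimal
sketch using an in-memory SQLite table.  The schema (column names
restype/resname/shareix/owntaskid) and all values are assumptions made
for illustration only, not osstest's actual database layout:

```python
# Hypothetical sketch -- schema and values are assumed, not osstest's
# real resources table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE resources (
        restype   TEXT,
        resname   TEXT,
        shareix   INTEGER,
        owntaskid INTEGER
    )
""")

# Two tasks lock the same flight via two share-flight rows; by
# convention each row's shareix equals its owning taskid.
db.execute("INSERT INTO resources VALUES ('share-flight', '71234', 501, 501)")
db.execute("INSERT INTO resources VALUES ('share-flight', '71234', 502, 502)")

# An ownership transfer (e.g. donation with mg-allocate) can break the
# convention: the shareix stays the same but the owner changes.
db.execute(
    "UPDATE resources SET owntaskid = 777"
    " WHERE restype = 'share-flight' AND shareix = 501"
)

rows = db.execute(
    "SELECT shareix, owntaskid FROM resources"
    " WHERE restype = 'share-flight' AND resname = '71234'"
    " ORDER BY shareix"
).fetchall()
print(rows)  # [(501, 777), (502, 502)]
```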
Signed-off-by: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
---
 README.planner | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/README.planner b/README.planner
index f3cab53..b3b41a9 100644
--- a/README.planner
+++ b/README.planner
@@ -133,6 +133,10 @@ Types of task
    mg-execute-flight).  They are automatically created and destroyed -
    see above.
 
+ * `xdbref' tasks.  These are used to own resources whose allocation
+   authority has been transferred to a separate database, eg a test
+   database.  The refkey is an indication of the other database.
+
  * magic task numbers with special meanings:
 
      magic/allocatable
@@ -211,10 +215,11 @@ Flights can be protected (preserved) by allocating them with
 
 Flights are represented by restype='share-flight' entries in the
 resources table.  Conventionally, the shareix is the owning taskid.
-This allows multiple tasks to lock a single flight.  There is no
-corresponding entry with restype='flight', nor a resource_sharing
-entry.  mg-allocate will create and clean up share-flight entries as
-needed.
+(This is not a constraint, because the convention can be violated by
+transfer of ownership.)  This allows multiple tasks to lock a single
+flight.  There is no corresponding entry with restype='flight', nor a
+resource_sharing entry.  mg-allocate (and other tools) will create and
+clean up share-flight entries as needed.
 
 
 DETAILED PROTOCOL NOTES
-- 
2.1.4


