
Re: [Xen-devel] [PATCH v3 MISSING/23] xsplice: Design document (v7).



On Thu, Feb 18, 2016 at 09:20:00AM -0700, Jan Beulich wrote:
> >>> On 12.02.16 at 22:57, <konrad.wilk@xxxxxxxxxx> wrote:
> > +struct xsplice_patch_func {  
> > +    const char *name;  
> > +    Elf64_Xwordnew_addr;  
> 
> Missing space.
> 
> > +    Elf64_Xword old_addr;  
> > +    Elf64_Word new_size;  
> > +    Elf64_Word long old_size;  
> 
> There are still two types left here.

That wouldn't compile very well.
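(Illustration only: once those are fixed as in the diff below, a single
populated descriptor could look like the following sketch; the symbol,
addresses, and sizes are invented for the example.)

<pre>
struct xsplice_patch_func example_func = {
    .name     = "xen_extra_version",    /* function to be patched */
    .new_addr = 0xffff82d080412000UL,   /* address of the replacement code */
    .old_addr = 0xffff82d080342efcUL,   /* address of the original function */
    .new_size = 0x30,                   /* bytes of replacement code */
    .old_size = 0x2c,                   /* bytes of the original function */
    .pad      = { 0 },
};
</pre>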
> 
> > +### XEN_SYSCTL_XSPLICE_GET (1)
> > +
> > +Retrieve an status of an specific payload. This caller provides:
> > +
> > + * A `struct xen_xsplice_name` called `name` which has the unique name.
> > + * A `struct xen_xsplice_status` structure which has all members
> > +   set to zero: That is:
> > +   * `status` *MUST* be set to zero.
> > +   * `rc` *MUST* be set to zero.
> 
> Why is this?

<scratches his head>...
It had a `_pad` entry in earlier versions. Let me remove the whole
'set to zero' wording.

> 
> > +The structure is as follow:
> > +
> > +<pre>
> > +struct xen_xsplice_status {  
> > +#define XSPLICE_STATUS_LOADED       1  
> > +#define XSPLICE_STATUS_CHECKED      2  
> > +#define XSPLICE_STATUS_APPLIED      3  
> > +    int32_t state;                  /* OUT: XSPLICE_STATE_*. IN: MUST be zero. */  
> > +    int32_t rc;                     /* OUT: 0 if no error, otherwise -XEN_EXX. */  
> > +                                    /* IN: MUST be zero. */
> > +};  
> > +
> > +struct xen_sysctl_xsplice_summary {  
> > +    xen_xsplice_name_t name;        /* IN, the name of the payload. */  
> > +    xen_xsplice_status_t status;    /* IN/OUT: status of the payload. */  
> > +};  
> 
> With the operation being named XEN_SYSCTL_XSPLICE_GET, shouldn't
> the structure tag be xen_sysctl_xsplice_get?

Yes!
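(As a rough caller-side sketch with the renamed tag; fill_xsplice_name()
and do_xsplice_sysctl() are hypothetical stand-ins for whatever marshals
the name and issues the sysctl, not part of this series:)

<pre>
struct xen_sysctl_xsplice_get get;

/* Hypothetical helper that fills get.name per struct xen_xsplice_name. */
fill_xsplice_name(&get.name, "xen_hotfix_1");

/* Per the change further down, get.status need not be zeroed on input;
 * the hypervisor overwrites it on completion. */
if ( do_xsplice_sysctl(XEN_SYSCTL_XSPLICE_GET, &get) == 0 )
    printf("state %d rc %d\n", get.status.state, get.status.rc);
</pre>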
> 
> > +### XEN_SYSCTL_XSPLICE_LIST (2)
> > +
> > +Retrieve an array of abbreviated status and names of payloads that are
> > +loaded in the hypervisor.
> > +
> > +The caller provides:
> > +
> > + * `version`. Initially (on first hypercall) *MUST* be zero.
> > + * `idx` index iterator. On first call *MUST* be zero, subsequent calls varies.
> > + * `nr` the max number of entries to populate.
> > + * `pad` - *MUST* be zero.
> > + * `status` virtual address of where to write `struct xen_xsplice_status`
> > +   structures. Caller *MUST* allocate up to `nr` of them.
> > + * `name` - virtual address of where to write the unique name of the payload.
> > +   Caller *MUST* allocate up to `nr` of them. Each *MUST* be of
> > +   **XEN_XSPLICE_NAME_SIZE** size.
> > + * `len` - virtual address of where to write the length of each unique name
> > +   of the payload. Caller *MUST* allocate up to `nr` of them. Each *MUST* be
> > +   of sizeof(uint32_t) (4 bytes).
> > +
> > +If the hypercall returns an positive number, it is the number (upto `nr`
> > +provided to the hypercall) of the payloads returned, along with `nr` updated
> > +with the number of remaining payloads, `version` updated (it may be the same
> > +across hypercalls - if it varies the data is stale and further calls could
> > +fail). The `status`, `name`, and `len`' are updated at their designed index
> > +value (`idx`) with the returned value of data.
> > +
> > +If the hypercall returns -XEN_E2BIG the `nr` is too big and should be
> > +lowered.
> > +
> > +If the hypercall returns an zero value that means there are no payloads.
> 
> Maybe worth changing to "... there are no (more) payloads",
> considering the iterative nature of the operation?

Yes.
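
(To make the iteration concrete, a hedged sketch of the caller loop;
xsplice_list_op() is a hypothetical wrapper around the sysctl, and the
xen_xsplice_status_t / XEN_XSPLICE_NAME_SIZE types come from the public
header proposed in this series:)

<pre>
#include <stdio.h>

#define CHUNK 16

/* Hypothetical wrapper: issues XEN_SYSCTL_XSPLICE_LIST, updates *version
 * and *nr as described above, and returns the number of entries written,
 * zero when there are no more payloads, or a negative -XEN_EXX value. */
int xsplice_list_op(uint32_t *version, uint32_t idx, uint32_t *nr,
                    xen_xsplice_status_t *status, char *names,
                    uint32_t *lens);

void dump_payloads(void)
{
    xen_xsplice_status_t status[CHUNK];
    char names[CHUNK][XEN_XSPLICE_NAME_SIZE];
    uint32_t lens[CHUNK];
    uint32_t version = 0, prev = 0, idx = 0, nr;
    int i, rc;

    do {
        nr = CHUNK;
        rc = xsplice_list_op(&version, idx, &nr, status,
                             (char *)names, lens);
        if ( rc < 0 )
            return;            /* e.g. -XEN_E2BIG: retry with smaller CHUNK. */
        for ( i = 0; i < rc; i++ )
            printf("%.*s: state %d rc %d\n", (int)lens[i], names[i],
                   status[i].state, status[i].rc);
        if ( idx && version != prev )
            idx = 0;           /* version changed: data is stale, restart. */
        else
            idx += rc;         /* rc entries landed at [idx, idx + rc). */
        prev = version;
    } while ( rc > 0 );
}
</pre>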

This is the change I did:


diff --git a/docs/misc/xsplice.markdown b/docs/misc/xsplice.markdown
index 9a95243..69c5176 100644
--- a/docs/misc/xsplice.markdown
+++ b/docs/misc/xsplice.markdown
@@ -289,10 +289,10 @@ describing the functions to be patched:
 <pre>
 struct xsplice_patch_func {  
     const char *name;  
-    Elf64_Xwordnew_addr;  
+    Elf64_Xword new_addr;  
     Elf64_Xword old_addr;  
     Elf64_Word new_size;  
-    Elf64_Word long old_size;  
+    Elf64_Word old_size;  
     uint8_t pad[32];  
 };  
 </pre>
@@ -425,10 +425,8 @@ struct xen_sysctl_xsplice_upload {
 Retrieve an status of an specific payload. This caller provides:
 
  * A `struct xen_xsplice_name` called `name` which has the unique name.
- * A `struct xen_xsplice_status` structure which has all members
-   set to zero: That is:
-   * `status` *MUST* be set to zero.
-   * `rc` *MUST* be set to zero.
+ * A `struct xen_xsplice_status` structure. The member values will
+   be over-written upon completion.
 
 Upon completion the `struct xen_xsplice_status` is updated.
 
@@ -473,12 +471,11 @@ struct xen_xsplice_status {
 #define XSPLICE_STATUS_LOADED       1  
 #define XSPLICE_STATUS_CHECKED      2  
 #define XSPLICE_STATUS_APPLIED      3  
-    int32_t state;                  /* OUT: XSPLICE_STATE_*. IN: MUST be zero. */  
+    int32_t state;                  /* OUT: XSPLICE_STATE_*. */  
     int32_t rc;                     /* OUT: 0 if no error, otherwise -XEN_EXX. */  
-                                    /* IN: MUST be zero. */
 };  
 
-struct xen_sysctl_xsplice_summary {  
+struct xen_sysctl_xsplice_get {  
     xen_xsplice_name_t name;        /* IN, the name of the payload. */  
     xen_xsplice_status_t status;    /* IN/OUT: status of the payload. */  
 };  
@@ -514,7 +511,7 @@ value (`idx`) with the returned value of data.
 If the hypercall returns -XEN_E2BIG the `nr` is too big and should be
 lowered.
 
-If the hypercall returns an zero value that means there are no payloads.
+If the hypercall returns a zero value there are no more payloads.
 
 Note that due to the asynchronous nature of hypercalls the control domain might
 have added or removed a number of payloads making this information stale. It is
> 
> Jan
> 
