[xen master] docs/misc: Fix a few typos
commit a29a1fb5a5b213ce972c925c84c52bebad4d34b7
Author:     Bernhard Kaindl <bernhard.kaindl@xxxxxxxxx>
AuthorDate: Wed Jan 15 16:09:04 2025 +0100
Commit:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CommitDate: Thu Jan 16 11:22:11 2025 +0000

    docs/misc: Fix a few typos

    While skimming through the misc docs, I spotted a few typos.

    Signed-off-by: Bernhard Kaindl <bernhard.kaindl@xxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
---
 docs/misc/livepatch.pandoc            | 44 +++++++++++++++++------------------
 docs/misc/netif-staging-grants.pandoc | 20 ++++++++--------
 docs/misc/printk-formats.txt          |  2 +-
 3 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/docs/misc/livepatch.pandoc b/docs/misc/livepatch.pandoc
index 4a0b4fd6d8..04dd5ed7b2 100644
--- a/docs/misc/livepatch.pandoc
+++ b/docs/misc/livepatch.pandoc
@@ -25,7 +25,7 @@ The document is split in four sections:
  * reloc - telemetries contained in the payload to construct proper trampoline.
  * hook - an auxiliary function being called before, during or after payload
    application or revert.
- * quiescing zone - period when all CPUs are lock-step with each other.
+ * quiescent zone - period when all CPUs are lock-step with each other.
 
 ## History
@@ -267,7 +267,7 @@ It may also have some architecture-specific sections. For example:
  * Relocations for each of these sections.
 
 The Xen Live Patch core code loads the payload as a standard ELF binary, relocates it
-and handles the architecture-specifc sections as needed. This process is much
+and handles the architecture-specific sections as needed. This process is much
 like what the Linux kernel module loader does.
 
 The payload contains at least three sections:
@@ -372,7 +372,7 @@ and the core code copies the data from the undo buffer (private internal copy)
 to `old_addr`.
 
 It optionally may contain the address of hooks to be called right before
-being applied and after being reverted (while all CPUs are still in quiescing
+being applied and after being reverted (while all CPUs are still in quiescent
 zone). These hooks do not have access to payload structure.
 
 * `.livepatch.hooks.load` - an array of function pointers.
@@ -380,7 +380,7 @@ zone). These hooks do not have access to payload structure.
 
 It optionally may also contain the address of pre- and post- vetoing hooks to
 be called before (pre) or after (post) apply and revert payload actions (while
-all CPUs are already released from quiescing zone). These hooks do have
+all CPUs are already released from quiescent zone). These hooks do have
 access to payload structure. The pre-apply hook can prevent from loading the
 payload if encoded in it condition is not met. Accordingly, the pre-revert hook
 can prevent from unloading the livepatch if encoded in it condition is not
@@ -392,7 +392,7 @@ met.
 
 Finally, it optionally may also contain the address of apply or revert action
 hooks to be called instead of the default apply and revert payload actions
-(while all CPUs are kept in quiescing zone). These hooks do have access to
+(while all CPUs are kept in quiescent zone). These hooks do have access to
 payload structure.
 
 * `.livepatch.hooks.{apply,revert}`
@@ -463,7 +463,7 @@ The type definition of the function are as follow:
 
 This section contains a pointer to a single function pointer to be executed
 before apply action is scheduled (and thereby before CPUs are put into
-quiescing zone). This is useful to prevent from applying a payload when
+quiescent zone). This is useful to prevent from applying a payload when
 certain expected conditions aren't met or when mutating actions implemented in
 the hook fail or cannot be executed. This type of hooks do have access to
 payload structure.
@@ -477,7 +477,7 @@ The type definition of the function are as follow:
 #### .livepatch.hooks.postapply
 
 This section contains a pointer to a single function pointer to be executed
-after apply action has finished and after all CPUs left the quiescing zone.
+after apply action has finished and after all CPUs left the quiescent zone.
 This is useful to provide an ability to follow up on actions performed by the
 preapply hook. Especially, when module application was successful or to be
 able to undo certain preparation steps of the preapply hook in case of a
@@ -495,7 +495,7 @@ The type definition of the function are as follow:
 
 This section contains a pointer to a single function pointer to be executed
 before revert action is scheduled (and thereby before CPUs are put into
-quiescing zone). This is useful to prevent from reverting a payload when
+quiescent zone). This is useful to prevent from reverting a payload when
 certain expected conditions aren't met or when mutating actions implemented in
 the hook fail or cannot be executed. This type of hooks do have access to
 payload structure.
@@ -509,7 +509,7 @@ The type definition of the function are as follow:
 #### .livepatch.hooks.postrevert
 
 This section contains a pointer to a single function pointer to be executed
-after revert action has finished and after all CPUs left the quiescing zone.
+after revert action has finished and after all CPUs left the quiescent zone.
 This is useful to provide an ability to perform cleanup of all previously
 executed mutating actions in order to restore the original system state from
 before the current payload application. The success/failure error code is
@@ -527,7 +527,7 @@ The type definition of the function are as follow:
 This section contains a pointer to a single function pointer to be executed
 instead of a default apply (or revert) action function. This is useful to
 replace or augment default behavior of the apply (or revert) action that
-requires all CPUs to be in the quiescing zone.
+requires all CPUs to be in the quiescent zone.
 This type of hooks do have access to payload structure. Each entry in this
 array is eight bytes.
@@ -539,13 +539,13 @@ The type definition of the function are as follow:
 ### .livepatch.xen_depends, .livepatch.depends and .note.gnu.build-id
 
 To support dependencies checking and safe loading (to load the
-appropiate payload against the right hypervisor) there is a need
-to embbed an build-id dependency.
+appropriate payload against the right hypervisor) there is a need
+to embed a build-id dependency.
 
 This is done by the payload containing sections `.livepatch.xen_depends`
 and `.livepatch.depends` which follow the format of an ELF Note.
 The contents of these (name, and description) are specific to the linker
-utilized to build the hypevisor and payload.
+utilized to build the hypervisor and payload.
 
 If GNU linker is used then the name is `GNU` and the description is
 a NT_GNU_BUILD_ID type ID. The description can be an SHA1
@@ -639,7 +639,7 @@ The `name` could be an UUID that stays fixed forever for a given payload.
 It can be embedded into the ELF payload at creation time and extracted by
 tools.
 
-The return value is zero if the payload was succesfully uploaded.
+The return value is zero if the payload was successfully uploaded.
 Otherwise an -XEN_EXX return value is provided. Duplicate `name` are not supported.
 
 The `payload` is the ELF payload as mentioned in the `Payload format` section.
@@ -819,7 +819,7 @@ The caller provides:
 * `cmd` The command requested:
   * *LIVEPATCH_ACTION_UNLOAD* (1) Unload the payload. Any further hypercalls
     against the `name` will result in failure unless
-    **XEN_SYSCTL_LIVEPATCH_UPLOAD** hypercall is perfomed with same `name`.
+    **XEN_SYSCTL_LIVEPATCH_UPLOAD** hypercall is performed with same `name`.
   * *LIVEPATCH_ACTION_REVERT* (2) Revert the payload. If the operation takes
     more time than the upper bound of time the `rc` in `xen_livepatch_status`
     retrieved via **XEN_SYSCTL_LIVEPATCH_GET** will be -XEN_EBUSY.
@@ -969,7 +969,7 @@ Before we call VMXResume we check whether any soft IRQs need to be executed.
 This is a good spot because all Xen stacks are effectively empty at that
 point.
 
-To randezvous all the CPUs an barrier with an maximum timeout (which
+To rendezvous all the CPUs an barrier with an maximum timeout (which
 could be adjusted), combined with forcing all other CPUs through the
 hypervisor with IPIs, can be utilized to execute lockstep instructions
 on all CPUs.
@@ -990,7 +990,7 @@ be done to the linker scripts to support this.
 
 The design of that is not discussed in this design.
 
-This is implemented in a seperate tool which lives in a seperate
+This is implemented in a separate tool which lives in a separate
 GIT repo. Currently it resides at
 https://xenbits.xen.org/git-http/livepatch-build-tools.git
@@ -1066,17 +1066,17 @@ There are the ways this can be addressed:
    grows to accumulate all the code changes.
  * Hotpatch stack - where an mechanism exists that loads the hotpatches
    in the same order they were built in. We would need an build-id
-   of the hypevisor to make sure the hot-patches are build against the
+   of the hypervisor to make sure the hot-patches are build against the
    correct build.
  * Payload containing the old code to check against that. That allows
-   the hotpatches to be loaded indepedently (if they don't overlap) - or
-   if the old code also containst previously patched code - even if they
+   the hotpatches to be loaded independently (if they don't overlap) - or
+   if the old code also contains previously patched code - even if they
    overlap.
 
 The disadvantage of the first large patch is that it can grow over time
 and not provide an bisection mechanism to identify faulty patches.
 
-The hot-patch stack puts stricts requirements on the order of the patches
+The hot-patch stack puts strict requirements on the order of the patches
 being loaded and requires an hypervisor build-id to match against.
 
 The old code allows much more flexibility and an additional guard,
@@ -1263,7 +1263,7 @@ limit that calls the next trampoline.
 
 Please note there is a small limitation for trampolines in
 function entries: The target function (+ trailing padding) must be able
-to accomodate the trampoline. On x86 with +-2 GB relative jumps,
+to accommodate the trampoline. On x86 with +-2 GB relative jumps,
 this means 5 bytes are required which means that `old_size` **MUST**
 be at least five bytes if patching in trampoline.
diff --git a/docs/misc/netif-staging-grants.pandoc b/docs/misc/netif-staging-grants.pandoc
index cb33028adc..838b115840 100644
--- a/docs/misc/netif-staging-grants.pandoc
+++ b/docs/misc/netif-staging-grants.pandoc
@@ -9,7 +9,7 @@ Architecture(s): Any
 
 # Background and Motivation
 
-At the Xen hackaton '16 networking session, we spoke about having a permanently
+At the Xen hackathon '16 networking session, we spoke about having a permanently
 mapped region to describe header/linear region of packet buffers. This document
 outlines the proposal covering motivation of this and applicability for other
 use-cases alongside the necessary changes.
@@ -174,8 +174,8 @@ boundary
 
 17) Allocate packet metadata
 
-[ *Linux specific*: This structure emcompasses a linear data region which
-generally accomodates the protocol header and such. Netback allocates up to 128
+[ *Linux specific*: This structure encompasses a linear data region which
+generally accommodates the protocol header and such. Netback allocates up to 128
 bytes for that. ]
 
 18) *Linux specific*: Setup up a `GNTTABOP_copy` to copy up to 128 bytes to this small
@@ -317,7 +317,7 @@ In essence the steps for receiving of a packet in a Linux frontend is as
    process the actual like the steps below. This thread has the purpose of
    aggregating as much copies as possible.]
 
-2) Checks if there are enough rx ring slots that can accomodate the packet.
+2) Checks if there are enough rx ring slots that can accommodate the packet.
 
 3) Gets a request from the ring for the first data slot and fetches the `gref`
   from it.
@@ -375,7 +375,7 @@ In essence the steps for receiving of a packet in a Linux frontend is as
 
 24) Call packet into the network stack.
 
-25) Allocate new pages and any necessary packet metadata strutures to new
+25) Allocate new pages and any necessary packet metadata structures to new
    requests. These requests will then be used in step 1) and so forth.
 
 26) Update the request producer index (`req_prod`)
@@ -391,7 +391,7 @@ In essence the steps for receiving of a packet in a Linux frontend is as
 
 This proposal aims at replacing step 4), 12) and 22) with memcpy if the grefs
 on the Rx ring were requested to be mapped by the guest. Frontend may use
-strategies to allow fast recycling of grants for replinishing the ring,
+strategies to allow fast recycling of grants for replenishing the ring,
 hence letting Domain-0 replace the grant copies with memcpy instead, which is
 faster.
@@ -400,8 +400,8 @@ would need to aggregate as much as grant ops as possible (step 1) and could
 transmit the packet on the transmit function (e.g. Linux ```ndo_start_xmit```)
 as previously proposed here\[[0](http://lists.xenproject.org/archives/html/xen-devel/2015-05/msg01504.html)\].
 
-This would heavily improve efficiency specifially for smaller packets. Which in
-return would decrease RTT, having data being acknoledged much quicker.
+This would heavily improve efficiency specifically for smaller packets. Which in
+return would decrease RTT, having data being acknowledged much quicker.
 
 \clearpage
@@ -467,13 +467,13 @@ The entry 'status' field determines if the entry was successfully removed.
 
 Control ring is only available after backend state is `XenbusConnected`
 therefore only on this state change can the frontend query the total amount of
 maps it can keep. It then grants N entries per queue on both TX and RX ring
-which will create the underying backend gref -> page association (e.g. stored
+which will create the underlying backend gref -> page association (e.g. stored
 in hash table). Frontend may wish to recycle these pregranted buffers or
 choose a copy approach to replace granting.
 
 On steps 19) of Guest Transmit and 3) of Guest Receive, data gref is first
 looked up in this table and uses the underlying page if it already exists a
-mapping. On the successfull cases, steps 20) 21) and 27) of Guest Transmit are
+mapping. On the successful cases, steps 20) 21) and 27) of Guest Transmit are
 skipped, with 19) being replaced with a memcpy of up to 128 bytes. On Guest
 Receive, 4) 12) and 22) are replaced with memcpy instead of a grant copy.
diff --git a/docs/misc/printk-formats.txt b/docs/misc/printk-formats.txt
index 8f666f696a..ce32829dae 100644
--- a/docs/misc/printk-formats.txt
+++ b/docs/misc/printk-formats.txt
@@ -11,7 +11,7 @@ Raw buffer as hex string:
        %*phN   000102 ... 3f
 
                Up to 64 characters. Buffer length expected via the field_width
-               paramter. i.e. printk("%*ph", 8, buffer);
+               parameter. i.e. printk("%*ph", 8, buffer);
 
 Bitmaps (e.g. cpumask/nodemask):
--
generated by git-patchbot for /home/xen/git/xen.git#master