
Re: [Xen-devel] Xen 4.7 Development Update



Wei,
and others.

> On 25 Nov 2015, at 02:17, Han, Huaitong <huaitong.han@xxxxxxxxx> wrote:
>> 
>> = Projects =
> 
> == Hypervisor ==
> === x86 ===
> *Memory protection keys for user pages
> -Huaitong Han

One thing I struggle with (and I am probably not the only one) is that it is
not always easy to find out what a specific patch does from the Development
Update mails. Obviously this is not an issue at the beginning of the cycle, but
it can become one when we start to put the release notes and PR together. In
this particular case, the use-case for the feature was described as a one-liner
elsewhere, and I am wondering whether we should start tracking the
use/use-case/... in these mails.

In other words, in this case, using the information from the thread where the
use-case was discussed would give us something like ...

== Hypervisor ==
=== x86 ===
* Memory protection keys for user pages
  (allows threads to cooperatively prevent themselves from "trampling" on each 
other, which increases robustness and is useful for debugging)
- Huaitong Han

Part of the reason why I am also looking at this is the Feature Lifecycle
Management proposal (see http://xen.markmail.org/message/uu3vifjmv2qylds4),
where we still have outstanding issues around documenting completed features.
It seems to me that there is an overlap between the Development Update mails
and recording the state of an added feature in a central file. Obviously, once
a new feature is committed to xen.git, we would then need to add an entry
describing it to the still to be defined central file. And it would probably
make sense to keep the info in the Development Update mails as close as
possible to what is in that file.
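
For illustration only (the format of that file is still to be defined, and the
field names below are just a strawman), such an entry might end up looking
something like the following, reusing the description above:

  Feature:      Memory protection keys for user pages
  Owner:        Huaitong Han
  Status:       <lifecycle state, per the Feature Lifecycle Management proposal>
  Description:  Allows threads to cooperatively prevent themselves from
                "trampling" on each other, which increases robustness and is
                useful for debugging.

That would make it fairly mechanical to keep the Development Update mails and
the central file in sync.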

Any thoughts?

Cheers
Lars 


