
Re: [Xen-devel] Xen 4.6 Development Update (two months reminder)



On 03/16/2015 06:00 PM, Lars Kurth wrote:
> 
>> On 16 Mar 2015, at 13:01, Mihai Donțu <mdontu@xxxxxxxxxxxxxxx> wrote:
>>
>> On Thu, 12 Mar 2015 10:21:56 +0000 wei.liu2@xxxxxxxxxx wrote:
>>> We are now two months into the 4.6 development window. This is an email to
>>> keep track of all the patch series I have gathered. It is by no means
>>> complete and/or accurate. Feel free to reply to this email with new projects
>>> or to correct my misunderstanding.
>>>
>>> = Timeline =
>>>
>>> We are planning on a 9-month release cycle, but we could also release a bit
>>> earlier if everything goes well (no blockers, no critical bugs).
>>>
>>> * Development start: 6 Jan 2015
>>> <=== We are here ===>
>>> * Feature Freeze: 10 Jul 2015
>>> * RCs: TBD
>>> * Release Date: 9 Oct 2015 (could release earlier)
>>>
>>> The RCs and release will of course depend on stability and bugs, and
>>> will therefore be fairly unpredictable.
>>>
>>> Bug fixes, if acked by the relevant maintainer, can go in anytime before the
>>> first RC. Later on we will need to weigh the risk of regression against the
>>> reward, to reduce the possibility of a fix introducing another bug.
>>>
>>> = Prognosis =
>>>
>>> The states are: none -> fair -> ok -> good -> done
>>>
>>> none - nothing yet
>>> fair - still working on it, patches are prototypes or RFC
>>> ok   - patches posted, acting on review
>>> good - some last minute pieces
>>> done - all done, might have bugs
>>>
>>> = Bug Fixes =
>>>
>>> Bug fixes can be checked in without a freeze exception throughout the
>>> freeze, unless the maintainer thinks they are particularly high
>>> risk. In later RCs, we may even begin rejecting bug fixes if the
>>> broken functionality is small and the risk to other functionality is
>>> high.
>>>
>>> Documentation changes can go in anytime if the maintainer is OK with them.
>>>
>>> These are guidelines and principles to give you an idea of where we're
>>> coming from; if you think there's a good reason why making an exception for
>>> you would help us make Xen better, feel free to make your case.
>>>
>>> [...]
>>
>> I have been meaning to write this email for a while now, just to let
>> everyone know we're working on a couple more patches related to VM
>> introspection. They are not as big as our initial ones, but they do
>> bring in new functionality.
> 
> Mihai,
> it would make Wei's life easier if you could provide headlines for those
> patches. That way they can be tracked before you post them.

For my part, the patches are:

1. xen: Add support for XSETBV vm_events

This is basically VMX support for sending out an event on a VMEXIT with
EXIT_REASON_XSETBV. The additional information sent out is the XCR index
(the value of ECX).
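
As a rough illustration (not the actual patch), the hypervisor-side handler
could look something like the sketch below; the reason code, union member and
helper names are assumptions made for the example:

    /* Hypothetical sketch: forward an XSETBV exit to the monitor ring.
     * VM_EVENT_REASON_XSETBV, the u.xsetbv member and monitor_traps()
     * are placeholder names, not necessarily what the series uses. */
    static void vmx_xsetbv_vm_event(struct vcpu *curr,
                                    const struct cpu_user_regs *regs)
    {
        vm_event_request_t req = {
            .reason  = VM_EVENT_REASON_XSETBV,  /* assumed reason code */
            .vcpu_id = curr->vcpu_id,
        };

        /* XSETBV takes the XCR index in ECX and the new value in EDX:EAX. */
        req.u.xsetbv.xcr   = regs->ecx;
        req.u.xsetbv.value = ((uint64_t)regs->edx << 32) | (uint32_t)regs->eax;

        /* Queue the request synchronously so the vCPU pauses until the
         * monitoring application has seen it. */
        monitor_traps(curr, 1 /* sync */, &req);
    }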

2. xen: Support hibernating guests

This patch covers two areas: A) send data (just a regular blob / buffer)
back to the HV in the vm_event response, and B) have that data returned
by the read function when emulating an instruction. Unless we do this,
monitored guests won't be able to properly wake up from hibernation.
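
As a rough monitor-side illustration of part A), the response could carry the
buffer roughly as in the sketch below; the flag and field names
(VM_EVENT_FLAG_EMULATE, emul_read_data, ...) are assumptions for the example,
not a description of the final interface:

    #include <string.h>
    /* plus the usual <xenctrl.h> / vm_event ring headers */

    /* Hypothetical: attach a data blob to the vm_event response so that the
     * subsequently emulated read returns exactly these bytes to the guest. */
    static void reply_with_read_data(vm_event_response_t *rsp,
                                     const vm_event_request_t *req,
                                     const void *buf, size_t len)
    {
        memset(rsp, 0, sizeof(*rsp));
        rsp->version = VM_EVENT_INTERFACE_VERSION;
        rsp->vcpu_id = req->vcpu_id;
        rsp->reason  = req->reason;
        rsp->flags   = req->flags | VM_EVENT_FLAG_EMULATE
                                  | VM_EVENT_FLAG_SET_EMUL_READ_DATA;

        if ( len > sizeof(rsp->data.emul_read_data.data) )
            len = sizeof(rsp->data.emul_read_data.data);
        rsp->data.emul_read_data.size = len;
        memcpy(rsp->data.emul_read_data.data, buf, len);
    }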

3. xen: Support for VMCALL-based vm_events

This is a modification of the VMCALL patch in my original RFC series,
which got dropped last year. The modification takes into account Andrew
Cooper's suggestion to just use a hypercall:

http://lists.xen.org/archives/html/xen-devel/2014-07/msg01677.html
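
For context, a guest-side hypercall issued with VMCALL looks roughly like the
sketch below. The raw inline assembly, the argument layout and the HVMOP_*
name in the final comment are assumptions for illustration only (a real
in-guest agent would normally go through the hypercall page):

    /* Hypothetical in-guest sketch (x86_64 HVM guest on VMX). */
    static inline long xen_hvm_op(unsigned long op, void *arg)
    {
        long ret;

        /* Xen x86_64 hypercall ABI: number in RAX, arguments in RDI, RSI.
         * VMCALL is the instruction VMX guests trap into Xen with. */
        asm volatile ( "vmcall"
                       : "=a" (ret)
                       : "0" (34UL /* __HYPERVISOR_hvm_op */),
                         "D" (op), "S" ((unsigned long)arg)
                       : "memory" );
        return ret;
    }

    /* The in-guest agent would then raise a vm_event with something like:
     *     xen_hvm_op(HVMOP_..._vm_event, NULL);   (sub-op name assumed) */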

4. xen: Deny MSR writes if refused by the vm_event reply

Preempt MSR writes that the monitoring application decides are evil.
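
Conceptually, the monitoring application vetoes the write by setting a deny
flag in its reply; a minimal sketch, assuming a VM_EVENT_FLAG_DENY-style flag
and the usual request/response layout (looks_malicious() is a placeholder for
the application's own policy):

    /* Hypothetical monitor-side sketch: veto an intercepted MSR write. */
    vm_event_response_t rsp;

    memset(&rsp, 0, sizeof(rsp));
    rsp.version = VM_EVENT_INTERFACE_VERSION;
    rsp.vcpu_id = req.vcpu_id;
    rsp.reason  = req.reason;                  /* the MOV-to-MSR event */
    rsp.flags   = req.flags;

    if ( looks_malicious(req.u.mov_to_msr.msr, req.u.mov_to_msr.value) )
        rsp.flags |= VM_EVENT_FLAG_DENY;       /* the WRMSR is dropped */

    /* ... then put rsp on the response ring as usual. */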

5. xen: Implement actual write of CR values on xc_vcpu_setcontext()

Although libxc's API leads one to believe that all info in the context
will be set for the guest, the CR values were actually ignored for HVM
guests. This patch addresses that problem.
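
Once that works, a libxc round-trip would look roughly like this minimal
sketch (the function name and the particular CR4 bit are just placeholders):

    /* Hypothetical: read a vCPU's context, tweak a control register and
     * write the context back with xc_vcpu_setcontext(). */
    #include <xenctrl.h>

    int set_guest_cr4_bit(uint32_t domid, uint32_t vcpu, unsigned long bit)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        vcpu_guest_context_any_t ctxt;
        int rc;

        if ( !xch )
            return -1;

        rc = xc_vcpu_getcontext(xch, domid, vcpu, &ctxt);
        if ( rc == 0 )
        {
            ctxt.c.ctrlreg[4] |= bit;            /* e.g. a CR4 feature bit */
            rc = xc_vcpu_setcontext(xch, domid, vcpu, &ctxt);
        }

        xc_interface_close(xch);
        return rc;
    }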

Hope this helps; Mihai will complete the picture with the rest.


Thanks,
Razvan


 

