
Re: [Xen-devel] [BUGFIX][PATCH 3/4] hvm_save_one: return correct data.



On 12/15/13 14:06, Andrew Cooper wrote:
On 15/12/2013 18:41, Don Slutz wrote:
On 12/15/13 13:11, Andrew Cooper wrote:
On 15/12/2013 17:42, Don Slutz wrote:


is the final part of this one.  So I do not find any code that does what you are wondering about.

   -Don


HVM_CPU_XSAVE_SIZE() changes depending on which xsave features have ever been enabled by a vcpu (the size is proportional to the contents of v->arch.xcr0_accum).  It is not guaranteed to be the same for each vcpu in a domain (although it will almost certainly be the same for any recognisable OS).
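
For reference, the size macro is roughly of this shape (a simplified sketch; the exact definition in the Xen tree may differ in detail):

    /* Simplified sketch: a vcpu's XSAVE record is a fixed header plus an
     * xsave area whose size depends on every feature the vcpu has ever
     * enabled, as tracked in v->arch.xcr0_accum.  The size can therefore
     * differ between vcpus of the same domain. */
    #define HVM_CPU_XSAVE_SIZE(xcr0) \
        (offsetof(struct hvm_hw_cpu_xsave, save_area) + \
         xstate_ctxt_size(xcr0))

So the xsave save handler emits HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum) bytes per vcpu, which only equals the maximum when xcr0_accum has every supported feature set.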

Ah, I see.

Well, hvm_save_one, hvm_save_size, and hvm_save all expect hvm_sr_handlers[typecode].size to hold the maximum size.  I do not see that being true for XSAVE.

hvm_sr_handlers[typecode].size does need to be the maximum possible size.  That does not mean the maximum amount of data will actually be written.

So long as the load on the far side can read a somewhat-shorter-than-maximum save record, it doesn't matter (except for hvm_save_one).  hvm_save_size specifically needs to return the maximum possible size, so the toolstack can allocate a big enough buffer.  xc_domain_save() does correctly deal with Xen handing back less than the maximum when actually saving the domain.
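
To make that concrete, here is a simplified sketch of how the per-handler maximum feeds into hvm_save_size() (modelled on the handler table discussed above; the real code differs in detail):

    /* Sketch: the toolstack's buffer is sized from the per-handler maximum,
     * summed over every handler (and over every vcpu for per-vcpu records).
     * The actual save pass may then emit less than this. */
    size_t hvm_save_size(struct domain *d)
    {
        struct vcpu *v;
        size_t sz = sizeof(struct hvm_save_descriptor) + HVM_SAVE_LENGTH(HEADER);
        unsigned int i;

        for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
        {
            if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
                for_each_vcpu ( d, v )
                    sz += hvm_sr_handlers[i].size;   /* worst case per vcpu */
            else
                sz += hvm_sr_handlers[i].size;       /* worst case per domain */
        }

        return sz + sizeof(struct hvm_save_descriptor);  /* END record */
    }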

Jan's new generic MSR save record will also write less than the maximum if it can.

This looks to be Jan's patch:

http://lists.xen.org/archives/html/xen-devel/2013-12/msg02061.html

It does look like it sets hvm_sr_handlers[typecode].size to the max size.

And it looks like the code I did in patch #4 would actually fix this issue, since it now uses the length stored in the save descriptor to find each instance.
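
In other words (a sketch only, with an assumed helper name, not the literal patch): step through the buffer by each descriptor's own length instead of by the handler's fixed maximum:

    /* Sketch: locate the <typecode, instance> record in a saved-context
     * blob by following each descriptor's length field, so records that
     * are shorter than hvm_sr_handlers[typecode].size are still found. */
    static int find_record(const uint8_t *buf, uint32_t len,
                           uint16_t typecode, uint16_t instance,
                           uint32_t *payload_off)
    {
        uint32_t off = 0;

        while ( off + sizeof(struct hvm_save_descriptor) <= len )
        {
            const struct hvm_save_descriptor *d =
                (const void *)(buf + off);

            if ( d->typecode == typecode && d->instance == instance )
            {
                *payload_off = off + sizeof(*d);  /* payload follows descriptor */
                return 0;
            }
            off += sizeof(*d) + d->length;        /* skip descriptor + payload */
        }

        return -ENOENT;
    }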

Jan has some questions about patch #4; so what to do about it is still pending.

Clearly I can merge #3 and #4 into 1 patch.

   -Don Slutz
~Andrew




As I said, to fix this newest problem I am experimenting with splitting the per-domain and per-vcpu save handlers, and making good progress.  It does mean that the fix for #3 would be much simpler.
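
Purely for illustration (this is not the actual RFC, just one way such a split could be typed):

    /* Hypothetical sketch: per-domain handlers take the domain, per-vcpu
     * handlers take a single vcpu, so a per-vcpu record can be produced
     * for exactly one vcpu without saving and searching the whole lot. */
    typedef int (*hvm_save_dom_handler_t)(struct domain *d,
                                          hvm_domain_context_t *h);
    typedef int (*hvm_save_vcpu_handler_t)(struct vcpu *v,
                                           hvm_domain_context_t *h);

hvm_save_one() could then invoke just the relevant per-vcpu handler for the requested vcpu.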

I shall send out a very RFC series as soon as I can.

~Andrew
Great, I look forward to seeing them.
     -Don Slutz


 

