Re: [Xen-devel] [PATCH for-4.10] x86/hvm: Don't corrupt the HVM context stream when writing the MSR record
>>> On 16.11.17 at 23:45, <andrew.cooper3@xxxxxxxxxx> wrote:
> Ever since it was introduced in c/s bd1f0b45ff, hvm_save_cpu_msrs() has had a
> bug whereby it corrupts the HVM context stream if some, but fewer than the
> maximum number of MSRs are written.
>
> _hvm_init_entry() creates an hvm_save_descriptor with length for
> msr_count_max, but in the case that we write fewer than max, h->cur only moves
> forward by the amount of space used, causing the subsequent
> hvm_save_descriptor to be written within the bounds of the previous one.
>
> To resolve this, reduce the length reported by the descriptor to match the
> actual number of bytes used.
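
For illustration, here is a minimal standalone model of the arithmetic described above. The struct layouts, field names and sizes are assumptions chosen to mirror the hvm_save_descriptor/hvm_msr records, not Xen's actual definitions:

    /* Model of the stream-corruption arithmetic; layouts are illustrative. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    struct msr_entry { uint32_t index, _rsvd; uint64_t val; };

    struct msr_record {                  /* models struct hvm_msr */
        uint32_t count;
        struct msr_entry msr[];
    };

    /* Assumed stand-in for HVM_CPU_MSR_SIZE(cnt). */
    #define MSR_RECORD_SIZE(cnt) \
        (offsetof(struct msr_record, msr) + (cnt) * sizeof(struct msr_entry))

    int main(void)
    {
        unsigned int msr_count_max = 3, actually_written = 1;

        /* The descriptor's length is set up for the worst case ... */
        size_t claimed = MSR_RECORD_SIZE(msr_count_max);      /* 56 with these sizes */
        /* ... but h->cur only advances by the space really used. */
        size_t advanced = MSR_RECORD_SIZE(actually_written);  /* 24 with these sizes */

        printf("descriptor claims %zu bytes, cursor advanced %zu bytes\n",
               claimed, advanced);
        printf("next descriptor lands %zu bytes inside the previous record\n",
               claimed - advanced);
        return 0;
    }
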
>
> A typical failure on the destination side looks like:
>
> (XEN) HVM4 restore: CPU_MSR 0
> (XEN) HVM4.0 restore: not enough data left to read 56 MSR bytes
> (XEN) HVM4 restore: failed to load entry 20/0
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1330,6 +1330,7 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
>
>      for_each_vcpu ( d, v )
>      {
> +        struct hvm_save_descriptor *d = _p(&h->data[h->cur]);
>          struct hvm_msr *ctxt;
>          unsigned int i;
>
> @@ -1348,8 +1349,13 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
>              ctxt->msr[i]._rsvd = 0;
>
>          if ( ctxt->count )
> +        {
> +            /* Rewrite length to indicate how much space we actually used. */
> +            d->length = HVM_CPU_MSR_SIZE(ctxt->count);
Would of course be nice if we had a function to do this, such that
the (sufficiently hidden) cast above also wouldn't be necessary to
open code in places like this one.
Jan
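
The kind of helper alluded to above might look roughly like the sketch below; the name, signature and placement are purely illustrative assumptions, not an existing Xen interface:

    /*
     * Hypothetical helper: shrink a previously written save record to the
     * number of payload bytes actually used, hiding the descriptor cast.
     */
    static inline void hvm_save_shrink_entry(hvm_domain_context_t *h,
                                             uint32_t desc_off,
                                             uint32_t bytes_used)
    {
        struct hvm_save_descriptor *d =
            (struct hvm_save_descriptor *)&h->data[desc_off];

        ASSERT(bytes_used <= d->length);
        d->length = bytes_used;   /* record only the space actually consumed */
    }

A caller such as hvm_save_cpu_msrs() would remember h->cur before _hvm_init_entry() writes the descriptor and, once ctxt->count is known, call hvm_save_shrink_entry(h, desc_off, HVM_CPU_MSR_SIZE(ctxt->count)) instead of open-coding the cast and length assignment.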