Re: [PATCH v3 2/6] x86/HVM: split restore state checking from state loading
On Tue, Nov 28, 2023 at 11:34:04AM +0100, Jan Beulich wrote:
> ..., at least as reasonably feasible without making a check hook
> mandatory (in particular strict vs relaxed/zero-extend length checking
> can't be done early this way).
>
> Note that only one of the two uses of hvm_load() is accompanied with
> hvm_check(). The other directly consumes hvm_save() output, which ought
> to be well-formed. This means that while input data related checks don't
> need repeating in the "load" function when already done by the "check"
> one (albeit assertions to this effect may be desirable), domain state
> related checks (e.g. has_xyz(d)) will be required in both places.
>
> Suggested-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> Do we really need all the copying involved in use of _hvm_read_entry()
> (backing hvm_load_entry())? Zero-extending loads are likely easier to
> handle that way, but for strict loads all we gain is a reduced risk of
> unaligned accesses (compared to simply pointing into h->data[]).
See below, but I wonder whether the checks could be performed as part
of hvm_load() without having to introduce a separate handler and loop
over the context entries.
> Would the hvm_sr_handlers[] better use array_access_nospec()?
Maybe? Given this is a domctl I do wonder whether a domain already
having access to such an interface won't have easier ways to leak data
from Xen. Maybe for a disaggregated setup.
> ---
> v2: New.
>
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -379,6 +379,10 @@ long arch_do_domctl(
> if ( copy_from_guest(c.data, domctl->u.hvmcontext.buffer, c.size) != 0 )
> goto sethvmcontext_out;
>
> + ret = hvm_check(d, &c);
> + if ( ret )
> + goto sethvmcontext_out;
> +
> domain_pause(d);
> ret = hvm_load(d, &c);
> domain_unpause(d);
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -30,7 +30,8 @@ static void arch_hvm_save(struct domain
> d->arch.hvm.sync_tsc = rdtsc();
> }
>
> -static int arch_hvm_load(struct domain *d, const struct hvm_save_header *hdr)
> +static int arch_hvm_check(const struct domain *d,
> + const struct hvm_save_header *hdr)
> {
> uint32_t eax, ebx, ecx, edx;
>
> @@ -55,6 +56,11 @@ static int arch_hvm_load(struct domain *
> "(%#"PRIx32") and restored on another (%#"PRIx32").\n",
> d->domain_id, hdr->cpuid, eax);
>
> + return 0;
> +}
> +
> +static void arch_hvm_load(struct domain *d, const struct hvm_save_header *hdr)
> +{
> /* Restore guest's preferred TSC frequency. */
> if ( hdr->gtsc_khz )
> d->arch.tsc_khz = hdr->gtsc_khz;
> @@ -66,13 +72,12 @@ static int arch_hvm_load(struct domain *
>
> /* VGA state is not saved/restored, so we nobble the cache. */
> d->arch.hvm.stdvga.cache = STDVGA_CACHE_DISABLED;
> -
> - return 0;
> }
>
> /* List of handlers for various HVM save and restore types */
> static struct {
> hvm_save_handler save;
> + hvm_check_handler check;
> hvm_load_handler load;
> const char *name;
> size_t size;
> @@ -88,6 +93,7 @@ void __init hvm_register_savevm(uint16_t
> {
> ASSERT(typecode <= HVM_SAVE_CODE_MAX);
> ASSERT(hvm_sr_handlers[typecode].save == NULL);
> + ASSERT(hvm_sr_handlers[typecode].check == NULL);
> ASSERT(hvm_sr_handlers[typecode].load == NULL);
> hvm_sr_handlers[typecode].save = save_state;
> hvm_sr_handlers[typecode].load = load_state;
> @@ -275,6 +281,78 @@ int hvm_save(struct domain *d, hvm_domai
> return 0;
> }
>
> +int hvm_check(const struct domain *d, hvm_domain_context_t *h)
> +{
> + const struct hvm_save_header *hdr;
> + int rc;
> +
> + if ( d->is_dying )
> + return -EINVAL;
> +
> + /* Get at the save header, which must be first. */
> + hdr = hvm_get_entry(HEADER, h);
> + if ( !hdr )
> + return -ENODATA;
> +
> + rc = arch_hvm_check(d, hdr);
> + if ( rc )
> + return rc;
> +
> + for ( ; ; )
> + {
> + const struct hvm_save_descriptor *desc;
> + hvm_check_handler handler;
> +
> + if ( h->size - h->cur < sizeof(*desc) )
> + {
> + /* Run out of data */
> + printk(XENLOG_G_ERR
> + "HVM restore %pd: save did not end with a null entry\n",
> + d);
> + return -ENODATA;
> + }
> +
> + /* Read the typecode of the next entry and check for the end-marker. */
> + desc = (const void *)&h->data[h->cur];
> + if ( desc->typecode == HVM_SAVE_CODE(END) )
> + {
> + /* Reset cursor for hvm_load(). */
> + h->cur = 0;
> + return 0;
> + }
> +
> + /* Find the handler for this entry. */
> + if ( desc->typecode >= ARRAY_SIZE(hvm_sr_handlers) ||
> + !hvm_sr_handlers[desc->typecode].name ||
> + !hvm_sr_handlers[desc->typecode].load )
> + {
> + printk(XENLOG_G_ERR "HVM restore %pd: unknown entry typecode %u\n",
> + d, desc->typecode);
> + return -EINVAL;
> + }
> +
> + /* Check the entry. */
> + handler = hvm_sr_handlers[desc->typecode].check;
> + if ( !handler )
> + {
> + if ( desc->length > h->size - h->cur - sizeof(*desc) )
> + return -ENODATA;
> + h->cur += sizeof(*desc) + desc->length;
> + }
> + else if ( (rc = handler(d, h)) )
> + {
> + printk(XENLOG_G_ERR
> + "HVM restore %pd: failed to check %s:%u rc %d\n",
> + d, hvm_sr_handlers[desc->typecode].name, desc->instance, rc);
> + return rc;
> + }
> +
> + process_pending_softirqs();
Looking at this, won't it be better to call the check() hooks inside
the hvm_load() function instead of duplicating the loop?
I realize that you only perform the checks when the state is loaded
from a domctl, but it still seems like a lot of code duplication for
little benefit.
hvm_load() could gain an extra parameter to select whether the input
must be checked or not, and that would avoid having to iterate twice
over the context.
> + }
> +
> + /* Not reached */
ASSERT_UNREACHABLE() maybe?
Thanks, Roger.