
Re: [Xen-devel] [PATCH L1TF MDS GT v2 1/2] common/grant_table: harden bound accesses


  • To: Norbert Manthey <nmanthey@xxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Thu, 11 Jul 2019 12:34:53 +0000
  • Cc: Juergen Gross <JGross@xxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, IanJackson <ian.jackson@xxxxxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Martin Pohlack <mpohlack@xxxxxxxxx>, Pawel Wieczorkiewicz <wipawel@xxxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, David Woodhouse <dwmw@xxxxxxxxxxxx>, Martin Mazein <amazein@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Bjoern Doebel <doebel@xxxxxxxxx>
  • Delivery-date: Thu, 11 Jul 2019 13:03:45 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH L1TF MDS GT v2 1/2] common/grant_table: harden bound accesses

On 10.07.2019 14:54, Norbert Manthey wrote:
> Guests can issue grant table operations and provide guest controlled
> data to them. This data is used as an index for memory loads after
> bounds checks have been done. To avoid speculative out-of-bounds
> accesses, we use the array_index_nospec macro where applicable, or
> the block_speculation macro. Note that the block_speculation macro is
> used on all paths in shared_entry_header and nr_grant_entries. This
> way, after a call to such a function, all bounds checks that happened
> before become architecturally visible, so that no additional
> protection is required for corresponding array accesses. As the way
> we introduce an lfence instruction might allow the compiler to reload
> certain values from memory multiple times, we try to avoid
> speculatively continuing execution with stale register data by moving
> the relevant data into function local variables.
> 
> Speculative execution is not blocked if one of the following
> properties holds:
>   - the path cannot be triggered by the guest
>   - the path does not return to the guest
>   - the path does not result in an out-of-bounds access
>   - the path cannot be executed repeatedly

Upon re-reading I think this last item isn't fully applicable: I think
you attach such an attribute to domain creation/destruction functions.
Those aren't executed repeatedly for a single guest, but e.g. rapid
rebooting of a guest (or of several guests) may actually be able to
train such paths.
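
For the benefit of readers following the thread: both constructs under
discussion come from xen/include/xen/nospec.h. A minimal sketch of the
array_index_nospec() pattern (the helper and its bound are made up for
illustration, not taken from the patch):

    static uint32_t read_capped(const uint32_t *arr, unsigned int idx,
                                unsigned int bound)
    {
        if ( idx >= bound )          /* architectural bounds check */
            return 0;

        /*
         * array_index_nospec() additionally clamps idx under
         * speculative execution, so a mispredicted branch cannot be
         * used to load data from out of bounds.
         */
        return arr[array_index_nospec(idx, bound)];
    }

block_speculation(), in contrast, is a bare speculation barrier (an
lfence on x86): once it sits on every path through a function like
nr_grant_entries(), any bounds check performed before calling that
function is architecturally visible afterwards, which is what the
description above relies on.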

> @@ -2091,6 +2100,7 @@ gnttab_transfer(
>       struct page_info *page;
>       int i;
>       struct gnttab_transfer gop;
> +    grant_ref_t ref;

This declaration would better live in the narrower scope it's (only)
used in.
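
That is, prefer declaring variables at the point of (first) use. As a
generic C illustration (nothing Xen-specific assumed):

    #include <stdio.h>

    int main(void)
    {
        for ( unsigned int i = 0; i < 3; i++ )
        {
            unsigned int sq = i * i;  /* scoped to the block using it */

            printf("%u\n", sq);
        }

        return 0;
    }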

> @@ -2237,8 +2247,14 @@ gnttab_transfer(
>            */
>           spin_unlock(&e->page_alloc_lock);
>           okay = gnttab_prepare_for_transfer(e, d, gop.ref);
> +        ref = gop.ref;

Unlike in the earlier cases, here you copy a variable that's already
local to the function. Is this really helpful?

Independent of this - is there a particular reason you latch the
value into the (second) local variable only after its first use? It
likely won't matter much, but it's a little puzzling nevertheless.
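
I.e. the expected order would be (sketch only):

        ref = gop.ref;
        okay = gnttab_prepare_for_transfer(e, d, ref);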

> -        if ( unlikely(!okay || assign_pages(e, page, 0, MEMF_no_refcount)) )
> +        /*
> +         * Make sure the reference bound check in gnttab_prepare_for_transfer
> +         * is respected and speculative execution is blocked accordingly
> +         */
> +        if ( unlikely(!evaluate_nospec(okay)) ||
> +            unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )

If I can trust my mail UI (which I'm not sure I can) there's an
indentation issue here.
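
If so, the continuation line wants one more leading space, aligning
with the first operand inside the parentheses, as per Xen coding
style:

        if ( unlikely(!evaluate_nospec(okay)) ||
             unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )

(evaluate_nospec(), like the macros above, comes from xen/nospec.h; it
evaluates the condition with speculation blocked, so execution can't
speculatively continue down the wrong branch.)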

> @@ -3853,7 +3879,8 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>               return -EINVAL;
>       }
>   
> -    *mfn = _mfn(virt_to_mfn(gt->status[idx]));
> +    /* Make sure idx is bounded wrt nr_status_frames */
> +    *mfn = _mfn(virt_to_mfn(gt->status[array_index_nospec(idx, nr_status_frames(gt))]));

This and ...

> @@ -3882,7 +3909,8 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>               return -EINVAL;
>       }
>   
> -    *mfn = _mfn(virt_to_mfn(gt->shared_raw[idx]));
> +    /* Make sure idx is bounded wrt nr_grant_frames */
> +    *mfn = _mfn(virt_to_mfn(gt->shared_raw[array_index_nospec(idx, nr_grant_frames(gt))]));

... this line are too long now and hence need wrapping.
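
E.g. (one possible wrapping, untested):

    *mfn = _mfn(virt_to_mfn(
               gt->status[array_index_nospec(idx, nr_status_frames(gt))]));

and similarly for the shared_raw one, with nr_grant_frames().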

Jan

 

