Re: [Xen-devel] [PATCH] xen/sched: Introduce domain_vcpu() helper
On 24/01/2019 08:35, Jan Beulich wrote:
>>>> On 23.01.19 at 18:44, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 23/01/2019 17:01, Jan Beulich wrote:
>>>>>> On 23.01.19 at 15:59, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> +static inline struct vcpu *domain_vcpu(const struct domain *d,
>>>> +                                       unsigned int vcpu_id)
>>>> +{
>>>> +    unsigned int idx = array_index_nospec(vcpu_id, d->max_vcpus);
>>>> +
>>>> +    return idx >= d->max_vcpus ? NULL : d->vcpu[idx];
>>>> +}
>>> For an out of bounds incoming vcpu_id, isn't it the case that
>>> idx then would be zero?  In which case you'd return d->vcpu[0]
>>> instead of NULL?
>> Speculatively, yes.  array_index_nospec() works by forcing speculative
>> mis-accesses to operate as if the request had been for index 0.
>>
>> What matters from a data-leaking perspective is whether d->vcpu[idx],
>> when executed speculatively, ends up being out-of-bounds or not, i.e.
>> whether it is distinguishable from a path which can architecturally be
>> taken.
> I'm afraid we're talking of different aspects.  I'm not considering
> the speculation aspect at all, but the mere base functionality.

Oops, yes.  You're right that there is a real non-speculative issue
here.  The correct code is:

{
    unsigned int idx = array_index_nospec(vcpu_id, d->max_vcpus);

    return vcpu_id >= d->max_vcpus ? NULL : d->vcpu[idx];
}

This will return a real NULL for all non-speculative out-of-bounds
requests, and will return d->vcpu[0] during incorrect speculation.

~Andrew

>> P.S. index 0 is actually better than NULL on any hardware lacking SMAP,
>> because you won't potentially use guest-controlled data from 0 during
>> the subsequent speculation.
> Is that the case in the way you describe it?

The case I had in mind was a guest which goes and mmap()'s something
real at 0.

> I thought one of the
> base issues with some of last year's speculation issues was that
> data related #PF get evaluated only at the end of the pipeline,
> when retiring insns.
That is correct for Meltdown, but you need to get a TLB hit first, so
it only applies to permission problems on the mapping.  (Also, the data
needs to be in the L1 cache to leak.)

L1TF covers the other side of things where there isn't a valid mapping;
the addresses in question are physical rather than linear.  (Also, the
data needs to be in the L1 cache to leak.)

> To me this would imply speculation through
> NULL is equally happening with SMAP.

It is the behind-the-scenes implementation of SMAP which makes it safe
on existing processors.

STAC is a TLB flush operation which flushes all user mappings (hence
its curious CPL 0 restriction for something which ostensibly just
touches EFLAGS.AC), and while AC is set, a pagewalk which results in a
user mapping won't result in a TLB fill.

Therefore, when you hit a user mapping, you start with a TLB miss
(because user mappings were previously flushed), request a pagewalk
(which resolves to a user mapping), and this mapping is deliberately
not re-inserted into the TLB, opting instead for "pagewalk resulted in
failure", which is how the #PF eventually manifests.

> Furthermore 32-bit PV guests could place a kernel mapping there.

Yes, but this is no worse than userspace mmap()'ing a page there in the
absence of SMAP.

> Of course the implication would be that avoiding to hand back
> NULL has even wider benefit.  But then the question is whether
> handing back NULL here and elsewhere shouldn't be avoided
> altogether.

It is even harder to do without compiler support than the lfence'ing
currently under question.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel