[Xen-devel] RE: [RFC] transcendent memory for Linux
> From: Jeremy Fitzhardinge [mailto:jeremy@xxxxxxxx]
> On 06/29/09 14:57, Dan Magenheimer wrote:
> > Interesting question.  But, more than the 128-bit UUID must
> > be guessed... a valid 64-bit object id and a valid 32-bit
> > page index must also be guessed (though most instances of
> > the page index are small numbers so easy to guess).  Once
> > 192 bits are guessed though, yes, the pages could be viewed
> > and modified.  I suspect there are much more easily targeted
> > security holes in most data centers than guessing 192 (or
> > even 128) bits.
>
> If its possible to verify the uuid is valid before trying to find a
> valid oid+page, then its much easier (since you can concentrate on the
> uuid first).

No, the uuid can't be verified.  Tmem gives no indication as to
whether a newly-created pool is already in use (shared) by another
guest.  So without both the 128-bit uuid and an already-in-use
64-bit object id and 32-bit page index, no data is readable or
writable by the attacker.

> You also have to consider the case of a domain which was once part of
> the ocfs cluster, but now is not - it may still know the uuid, but not
> be otherwise allowed to use the cluster.
>
> If the uuid is derived from something like the
> filesystem's uuid - which wouldn't normally be considered sensitive
> information - then its not like its a search of the full
> 128-bit space.
>
> And even if it were secret, uuids are not generally 128
> randomly chosen bits.

Hmmm... that is definitely a thornier problem.  I guess the security
angle definitely deserves more design.  But, again, this affects only
shared precache, which is not intended to be part of the proposed
initial tmem patchset, so this is a futures issue.

Thanks again for the feedback!

Dan
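
A minimal C sketch of the handle being discussed, assuming it consists
of a 128-bit pool uuid, a 64-bit object id, and a 32-bit page index as
described in the message above; the struct and names here (tmem_handle,
pool_uuid, object_id, page_index) are illustrative only, not the actual
tmem ABI:

    /* Illustrative only: shows how many bits an attacker must guess
     * before any shared-pool data is addressable.  Field names are
     * hypothetical, not taken from the tmem interface.
     */
    #include <stdint.h>
    #include <stdio.h>

    struct tmem_handle {
        uint8_t  pool_uuid[16];   /* 128-bit uuid naming the shared pool */
        uint64_t object_id;       /* 64-bit object id within the pool    */
        uint32_t page_index;      /* 32-bit index, often a small number  */
    };

    int main(void)
    {
        struct tmem_handle h;     /* used only for sizeof, never read */

        /* 128 + 64 = 192 bits that cannot be confirmed independently,
         * since pool creation gives no hint whether the uuid already
         * names a pool shared by another guest; the 32-bit page index
         * is the part the thread treats as easy to guess.
         */
        printf("uuid + object id: %zu bits, plus a %zu-bit page index\n",
               8 * (sizeof h.pool_uuid + sizeof h.object_id),
               8 * sizeof h.page_index);
        return 0;
    }

The point the sketch restates: because a get or put only touches data
when all three fields match an already-in-use page, the uuid cannot be
probed on its own and then refined with an oid+index search.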