Re: [Xen-devel] [PATCH 19/22] arch/x86: check remote MMIO remap permissions
>>> On 14.09.12 at 15:37, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx> wrote:
> On 09/14/2012 04:54 AM, Jan Beulich wrote:
>>>>> On 13.09.12 at 18:46, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx> wrote:
>>> For this example, assume domain A has access to map domain B's memory
>>> read-only. Domain B has access to a device with MMIO where reads to the
>>> device's memory cause state changes - an example of such a device is a
>>> TPM, where replies are read by repeated reads to a single 4-byte
>>> address. Domain A does not have access to this device, and would like
>>> to maliciously interfere with the device.
>>>
>>> If domain A maps the MMIO page from domain B using pg_owner == domB, the
>>> iomem_access_permitted check will be done from domain B's perspective.
>>> While this is needed when domain A is mapping pages on behalf of domB,
>>> it is not sufficient when attempting to constrain domain A's access.
>>>
>>> These checks only apply to MMIO, so the condition on line 735 will
>>> evaluate to true (!mfn_valid || real_pg_owner == dom_io).
>>>
>>> The (existing) check on domain B's MMIO access is:
>>>     if ( !iomem_access_permitted(pg_owner, mfn, mfn) )
>>>
>>> This patch adds a check on domain A:
>>>     if ( !iomem_access_permitted(curr->domain, mfn, mfn) )
>>
>> So then I think I was right suggesting that the second check
>> should be done at the same place where the first one is, not
>> outside/after the MMIO conditional.
>
> That is where I am doing the second check; it is not outside the MMIO
> conditional (which ends 8 lines after the inserted check).

Then I must have got the context wrong when looking for the insertion
place. Checking... Indeed, I didn't pay close enough attention. Sorry
for all the complaints then.

Jan
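[Editor's note: to make the placement question concrete, below is a minimal
C sketch of the structure being discussed in get_page_from_l1e() in
xen/arch/x86/mm.c. Only the condition (!mfn_valid || real_pg_owner == dom_io)
and the two iomem_access_permitted() calls are quoted in the thread; the
braces, error value, and surrounding layout are assumptions for illustration,
not the verbatim patch.]

    /*
     * Sketch, assuming the layout described above: both permission
     * checks sit inside the MMIO conditional, which continues for a
     * few lines after the newly inserted check.
     */
    if ( !mfn_valid(mfn) || real_pg_owner == dom_io )
    {
        /*
         * Existing check: may the domain the mapping is being created
         * for (pg_owner, domain B in the example) access this MFN?
         */
        if ( !iomem_access_permitted(pg_owner, mfn, mfn) )
            return -EPERM;

        /*
         * Check added by the patch: the domain actually performing the
         * update (curr->domain, domain A) must also hold access, so a
         * read-only mapping of B's memory cannot be used to trigger
         * read-sensitive MMIO such as a TPM's reply register.
         */
        if ( !iomem_access_permitted(curr->domain, mfn, mfn) )
            return -EPERM;

        /* ... remainder of the MMIO handling, in the same conditional ... */
    }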