Re: [Xen-devel] Re: [PATCH 1 of 7] x86: add _PAGE_IOMAP pte flag for IO mappings
Nick Piggin wrote:
> It complements vm_normal_page, which was there first (and coined by
> Linus). It is the opposite of normal. This question always comes up
> and my answer is always yes, if you can convince Linus to rename
> vm_normal_page to the corresponding term :)

Not really. Normal is normal, but "special" doesn't tell us what kind
of special it is.

> It's not exactly _PAGE_NOSTRUCTPAGE. There can be struct pages under
> there, but you're not to touch them.

To the extent that the struct page may as well not exist? Does it
contain any meaningful state? Are they always IO mappings? Could we
just use _PAGE_IOMAP as the name for _PAGE_SPECIAL?

>>> And not having a struct page should correspond well to a pte not
>>> requiring pfn->mfn conversion and being an I/O page.
>>>
>> But _PAGE_SPECIAL is only set in a few places. It's not set in ioremap
>> mappings and so on. Should it be?
>
> Kernel address space, you mean? No, it is only ever used on user
> addresses.

Right. But if we fold _PAGE_SPECIAL and _PAGE_IOMAP together, it would
start getting used on kernel addresses (and obviously we'd need to
rearrange _PAGE_CPA_TEST).

>> There's also the hiccup that it gets set in a pte with pte_mkspecial() -
>> but at that point it's too late, because you've already constructed the
>> pte and done the pfn->mfn conversion. _PAGE_IOMAP can only be set when
>> you initially construct the pte out of a frame number and a pgprot.
>
> I don't see how this would be any problem, because the pte is always
> constructed in a single line in both places where it is used.

OK. If we were to fold these two together, then pte_mkspecial() would
have to go, since it wouldn't be possible to use it correctly in my
use case.

    J
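To make the ordering problem concrete, here is a minimal C sketch. It
is not the real Linux/Xen code: make_pte(), pfn_to_mfn_sketch() and
the flag bit positions are assumptions for illustration only. The
point is that a flag which decides whether the pfn->mfn translation
happens at all must be supplied when the pte is first built, whereas a
pte_mkspecial()-style helper only ever sees the finished pte, after
that decision has already been taken.

    /*
     * Illustrative sketch only -- not the actual kernel code.
     * make_pte(), pfn_to_mfn_sketch() and the bit values below are
     * hypothetical stand-ins for the real Linux/Xen definitions.
     */
    #include <stdint.h>

    typedef uint64_t pteval_t;

    #define _PAGE_PRESENT  (1ULL << 0)
    #define _PAGE_SPECIAL  (1ULL << 9)   /* applied to a finished pte */
    #define _PAGE_IOMAP    (1ULL << 10)  /* needed at build time      */

    /* Stand-in for a Xen PV guest's pseudo-physical -> machine
     * frame translation (really a p2m table lookup). */
    static uint64_t pfn_to_mfn_sketch(uint64_t pfn)
    {
        return pfn + 0x100000;  /* fake offset for illustration */
    }

    /* Constructing a pte from a frame number and a pgprot: if
     * _PAGE_IOMAP is in the prot, the frame is a raw machine/IO
     * frame and must NOT be run through the p2m translation. */
    static pteval_t make_pte(uint64_t frame, pteval_t prot)
    {
        uint64_t mfn = (prot & _PAGE_IOMAP)
                           ? frame
                           : pfn_to_mfn_sketch(frame);
        return (mfn << 12) | prot | _PAGE_PRESENT;
    }

    /* pte_mkspecial()-style helper: it only sees the already-built
     * pte, so the pfn->mfn decision has long since been taken --
     * too late for a flag meant to influence that decision. */
    static pteval_t pte_mkspecial_sketch(pteval_t pte)
    {
        return pte | _PAGE_SPECIAL;
    }

    int main(void)
    {
        pteval_t io  = make_pte(0xfee00, _PAGE_IOMAP); /* IO frame */
        pteval_t ram = pte_mkspecial_sketch(make_pte(0x1234, 0));
        return (io != 0 && ram != 0) ? 0 : 1;
    }

In this sketch make_pte() is the only place the translation decision
can be influenced, which is exactly why folding the two flags together
would mean retiring pte_mkspecial() rather than reusing it.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel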