Re: [Xen-devel] [Solved] Nouveau on dom0
On Mon, Mar 8, 2010 at 11:21 PM, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> On Sun, Mar 07, 2010 at 05:26:12AM +0530, Arvind R wrote:
>> On Sun, Mar 7, 2010 at 2:29 AM, Arvind R <arvino55@xxxxxxxxx> wrote:
>> > On Sat, Mar 6, 2010 at 1:46 PM, Arvind R <arvino55@xxxxxxxxx> wrote:
>> >> On Sat, Mar 6, 2010 at 1:53 AM, Konrad Rzeszutek Wilk
>> >> <konrad.wilk@xxxxxxxxxx> wrote:
>> >>> On Fri, Mar 05, 2010 at 01:16:13PM +0530, Arvind R wrote:
>> >>>> On Thu, Mar 4, 2010 at 11:55 PM, Konrad Rzeszutek Wilk
>> >>>> <konrad.wilk@xxxxxxxxxx> wrote:
>> >>>> > On Thu, Mar 04, 2010 at 02:47:58PM +0530, Arvind R wrote:
>> >>>> >> On Wed, Mar 3, 2010 at 11:43 PM, Konrad Rzeszutek Wilk
>> >>>> >> <konrad.wilk@xxxxxxxxxx> wrote:
>>
>> >>> (FYI, look at
>> >>> http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=e84db8b7136d1b4a393dbd982201d0c5a3794333)
>>
>> THAT SOLVED THE FAULTING; OUT_RING now completes under Xen.
>
> That is great! Thanks for doing all the hard work in digging through the
> code.
>
> So this means you got graphics on the screen? Or at least that Kernel
> Mode Setting and the DRM parts show fancy graphics during boot?

AT LAST, yes! Patch (after about 600 reboots!):

diff -Naur nouveau-kernel.orig/drivers/gpu/drm/ttm/ttm_bo_vm.c nouveau-kernel.new/drivers/gpu/drm/ttm/ttm_bo_vm.c
--- nouveau-kernel.orig/drivers/gpu/drm/ttm/ttm_bo_vm.c	2010-01-27 10:19:28.000000000 +0530
+++ nouveau-kernel.new/drivers/gpu/drm/ttm/ttm_bo_vm.c	2010-03-10 17:28:59.000000000 +0530
@@ -271,7 +271,10 @@
 	 */
 	vma->vm_private_data = bo;
-	vma->vm_flags |= VM_RESERVED | VM_IO | VM_MIXEDMAP | VM_DONTEXPAND;
+	vma->vm_flags |= VM_RESERVED | VM_MIXEDMAP | VM_DONTEXPAND;
+	if (!((bo->mem.placement & TTM_PL_MASK_MEM) & TTM_PL_FLAG_TT))
+		vma->vm_flags |= VM_IO;
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 	return 0;
 out_unref:
 	ttm_bo_unref(&bo);

The previous patch worked for memory-space exported to the user via mmap.
That worked for the pushbuf, but not for mode-setting (I guess). The ensuing crashes were hard - no logs, nothing. So I had to devise ways of forcing log-writing before crashing (and praying). That located the iomem problem, and then I had to search the code for the appropriate condition. And setting the vm_page_prot IS important!

Nouveau does kernel-modesetting only. The framebuffer device uses channel 1 and is as regular a framebuffer as any other. 2D graphics operations use channel 2 (xf86-video-nouveau). 3D graphics (gallium) uses a channel for every 3D window. There are 128 channels, 0 and 127 being reserved. Every channel has a DMA engine which is user-triggered through pushbuffer rings. Every DMA has a 1MiB VRAM space which forms one of the targets of DMA ops - the other being in the opaque GPU-space. The BO encapsulates the virtual-address space of the user VM, and the GPU-DMA is provided a constructed page-table that is consistent with the kernel view of that space. The GEM_NEW ioctl sets up the whole space-management machinery, the user-space is mmapped out, and the operations are triggered through the pushbuf.

> But to answer your question, the DMA address is actually the MFN
> (machine frame number) which is bitshifted by twelve and an offset
> added. The debug patch I provided gets that from the PTE value:
>
>	if (xen_domain()) {
> +		phys = (pte_mfn(*pte) << PAGE_SHIFT) + offset;
>
> The 'phys' now has the physical address that the PCI bus (and the video
> card) would utilize to request data. Please keep in mind that
> 'pte_mfn' is a special Xen function. Normally one would do 'pte'.
>
> There is a layer of indirection in the Linux pvops kernel that makes
> this a bit funny. Mainly, most of the time you get something called a GPFN,
> which is a pseudo-physical MFN. Then there is a translation of PFN to
> MFN (or vice-versa).
> For pages that are being utilized for PCI devices
> (and that have the _PAGE_IOMAP PTE flag set), the GPFN is actually the MFN,
> while for the rest (like the pages allocated by the mmap and then
> stitched up in the ttm_bo_fault handler), it is the PFN.
>
> .. back to the DMA part. When kernel subsystems do DMA they go through the
> PCI DMA API. This API has things such as 'dma_map_page', which through
> layers of indirection calls the Xen SWIOTLB layer. The Xen SWIOTLB is
> smart enough (actually, the enlighten.c code) to distinguish whether the page has
> _PAGE_IOMAP set or not and to figure out if the PTE has an MFN or a PFN.
>
> Hopefully I've not confused the matter :-(

On the contrary, a neat essence of the matter - only wish it had been clear to me a month ago :-(

YAHOO! (just a simple shout)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel