Re: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty to track vram
>>> On 20.05.14 at 12:12, <George.Dunlap@xxxxxxxxxxxxx> wrote:
> On Tue, May 20, 2014 at 8:20 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 20.05.14 at 05:13, <yang.z.zhang@xxxxxxxxx> wrote:
>>> George Dunlap wrote on 2014-05-19:
>>>> Avoiding these by "hoping" that the guest OS doesn't DMA into a video
>>>> buffer isn't really robust enough.  I think that was Tim and Jan's
>>>
>>> Video buffer is only one case. How can we prevent DMA to other reserved
>>> regions?
>>
>> You continue to neglect the difference: Accessing VRAM this way is
>> legitimate (and potentially useful). And - as just said in the other
>> reply - ideally we'd also simply ignore accesses to reserved regions
>> (and in fact we try to, by not immediately bringing down a guest
>> device doing such).
>
> On the other hand, just to play devil's advocate here: Implementing
> separate IOMMU tables (including superpages) isn't free; it has a
> non-negligible cost, both in initial developer time, continuing
> maintenance (code complexity, fixing bugs), extra memory at run-time,
> &c.
>
> Of all the things we could invest that developer time doing, why
> should we make it possible to DMA into VRAM, rather than doing
> something else?

While I agree that the question is valid, my position really is that it
was a mistake to implement the IOMMU code without superpage support,
i.e. I view this as a shortcoming independent of the VRAM issue, and I
would want to see it fixed sooner rather than later. Had it been done
properly from the beginning (as one would expect for non-experimental
code), a lot of this discussion could have been avoided, and we wouldn't
have had to take the respective workaround close to the 4.4 release.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
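[Editor's illustration of the run-time memory point raised above, not part of the thread: the sketch below assumes an x86-style 4-level IOMMU page table with 512 eight-byte entries per 4KiB table page, and compares how many table pages are needed to map an example 4GiB guest with 4KiB leaf mappings versus 2MiB superpages. It is back-of-the-envelope arithmetic only, not Xen code.]

/*
 * Hypothetical sketch: table memory needed to map a guest with 4KiB
 * leaves vs 2MiB superpages, assuming 512 entries per 4KiB table page
 * (x86/VT-d style, 4-level). Illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

#define ENTRIES_PER_TABLE 512ULL   /* 4KiB table page / 8-byte entries */
#define PAGE_4K (4ULL << 10)
#define PAGE_2M (2ULL << 20)
#define TABLE_BYTES (4ULL << 10)   /* each table occupies one 4KiB page */

/* Table pages needed for the leaf level plus all levels above it. */
static uint64_t table_pages(uint64_t leaf_entries)
{
    uint64_t pages = 0, entries = leaf_entries;

    while (entries > 1) {
        uint64_t tables = (entries + ENTRIES_PER_TABLE - 1) / ENTRIES_PER_TABLE;
        pages += tables;
        entries = tables;  /* each table needs one entry one level up */
    }
    return pages + 1;      /* top-level table */
}

int main(void)
{
    uint64_t guest_bytes = 4ULL << 30;  /* example: 4GiB guest */
    uint64_t pages_4k = table_pages(guest_bytes / PAGE_4K);
    uint64_t pages_2m = table_pages(guest_bytes / PAGE_2M);

    /* ~8MiB of tables with 4KiB mappings vs ~24KiB with 2MiB superpages */
    printf("4KiB mappings:   %llu table pages (%llu KiB)\n",
           (unsigned long long)pages_4k,
           (unsigned long long)(pages_4k * TABLE_BYTES >> 10));
    printf("2MiB superpages: %llu table pages (%llu KiB)\n",
           (unsigned long long)pages_2m,
           (unsigned long long)(pages_2m * TABLE_BYTES >> 10));
    return 0;
}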