Re: [Xen-devel] [PATCH v4 0/7] unsafe big.LITTLE support
On 12/03/18 02:32, Peng Fan wrote:
> On Fri, Mar 09, 2018 at 02:40:25PM +0000, Julien Grall wrote:
>> Hi,
>>
>> On 09/03/18 13:30, Peng Fan wrote:
>>> Hi Julien,
>>>
>>> On Fri, Mar 09, 2018 at 10:22:09AM +0000, Julien Grall wrote:
>>>> Hi Peng,
>>>>
>>>> On 09/03/18 09:05, Peng Fan wrote:
>>>>> On Thu, Mar 08, 2018 at 03:13:50PM +0000, Julien Grall wrote:
>>>>>> On 08/03/18 12:43, Peng Fan wrote:
>>>>>>
>>>>>> There is a major difference between Dom0 and DomU in your setup. Dom0 vCPUs are pinned to a specific pCPU, so they can't move around. For DomU, each vCPU is pinned to a set of pCPUs, so they can move around. But did you check that the DomU has the workaround enabled? I am asking because it looks to me like the way to detect the workaround is based on a device (SCU) and not on the processor. So I am not convinced that DomU is actually using your workaround.
>>>>>
>>>>> Just checked this. The Xen toolstack creates the device tree with compatible = "xen,xenvm-4.10", "xen,xenvm";, but the Linux code uses "fsl,imx8qm" to detect the SoC and then calls the SCU to get the revision of the chip.
>>>>
>>>> But how does the guest call the SCU?
>>>
>>> We are doing GPU and display passthrough, as well as passthrough of some other IPs. We cannot rely entirely on Dom0 to configure the pinmux, GPIO and clocks; relying on Dom0 to do that would bring a lot of hack code into our kernel, and runtime clk rate setting in the DomU could not be done. So we expose an interface for the DomU to communicate directly with the SCU (System Control Unit).
>>
>> Do you always expect a domain to access the SCU? Even with no passthrough involved?
>
> It is only needed when a domain needs to access hardware directly.

Then your suggested workaround can't work for a guest with no device passthrough. I would recommend finding a workaround that works for every guest.

Cheers,

--
Julien Grall
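[For context, a minimal sketch of the detection issue being discussed. of_machine_is_compatible() is the real Linux device-tree helper; scu_get_soc_revision() is a hypothetical stand-in for the SCU revision query mentioned in the thread, not the actual i.MX8 code. Because the Xen toolstack gives a DomU a root compatible of "xen,xenvm-4.10", "xen,xenvm", a check gated like this never matches inside the guest, which is the gap Julien is pointing at.]

/*
 * Sketch only, under the assumptions stated above. scu_get_soc_revision()
 * is a made-up placeholder for the SCU interface; the real call is not
 * shown in the thread.
 */
#include <linux/types.h>
#include <linux/of.h>

/* Hypothetical wrapper around the SCU call; returns false when the SCU
 * cannot be reached (e.g. a DomU without the SCU interface exposed). */
static bool scu_get_soc_revision(u32 *rev)
{
	return false;
}

static bool big_little_workaround_needed(void)
{
	u32 rev;

	/*
	 * A DomU device tree generated by the Xen toolstack has
	 * compatible = "xen,xenvm-4.10", "xen,xenvm", so this match
	 * fails in the guest and the workaround is silently skipped,
	 * even though the DomU's vCPUs may migrate between big and
	 * LITTLE pCPUs.
	 */
	if (!of_machine_is_compatible("fsl,imx8qm"))
		return false;

	/* Only reachable if the domain can actually talk to the SCU. */
	if (!scu_get_soc_revision(&rev))
		return false;

	/* Revision-specific decision; the exact check is not given in the thread. */
	return true;
}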