Re: [Xen-devel] [ARM] Bash often segfaults in Dom0 with the latest Xen
On Wed, 5 Jun 2013, Christoffer Dall wrote:
> On 5 June 2013 10:53, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
> > On 06/05/2013 06:36 PM, Christoffer Dall wrote:
> >
> >>> I'm using Linaro's branch ll_20130528.0; I only have a few patches for
> >>> the dts that are not yet in the Linaro tree.
> >>>
> >>> I have the same issue with Linux 3.9-rc4 with multiple CPUs, and I can't
> >>> really go back further without carrying many Xen patches to try it.
> >>>
> >>> I have tried different configurations of the number of CPUs in Xen
> >>> (pCPU) and Linux (vCPU):
> >>>   - 2 pCPU 2 vCPU : segfaulting
> >>>   - 2 pCPU 1 vCPU : working
> >>>   - 1 pCPU 1 vCPU : working
> >>>   - 1 pCPU 2 vCPU : very slow but working
> >>>
> >> With 2 pCPU 1 vCPU, are you still compiling your dom0 as an SMP kernel
> >> but only creating 1 vCPU, or are you actually compiling the dom0 as UP?
> >
> > Yes. It's the same kernel with the same command line (i.e. without nosmp).
> > I have limited the number of dom0 vCPUs with dom0_max_vcpus=1 on the Xen
> > command line.
>
> That indicates a bug in Xen then. Curious that it only happens for user
> space in dom0, but perhaps you just haven't seen it in the kernel yet.
> Bash scripts are pretty intensive on page faults, so perhaps there's a
> synchronization issue in some of your page fault handlers.
>
> You could try to touch all the memory inside dom0 (dd to a ramfs, for
> example) and then run your bash script and see if the problem still
> occurs. That should tell you whether it's a stage-2 fault handling
> issue, though it is not a fool-proof approach. Maybe Xen can
> pre-allocate all the stage-2 entries?

Xen pre-allocates all the memory for stage-2 entries (no overcommit or
populate-on-demand by default).
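
A rough sketch of the "touch all the memory inside dom0" test suggested
above, run as root inside dom0. The mount point, the use of tmpfs rather
than ramfs, and the 512M dom0 memory figure are illustrative assumptions,
not details from this thread; adjust the sizes to most of the free memory
actually given to dom0.

    # Hypothetical mount point and size -- keep the fill file below
    # dom0's free memory or the guest will hit the OOM killer.
    mkdir -p /mnt/ramfs
    mount -t tmpfs -o size=512m tmpfs /mnt/ramfs

    # Writing a file covering most of dom0's RAM forces the guest to
    # touch nearly every page, so any lazily-populated stage-2 mapping
    # would be faulted in before the workload runs.
    dd if=/dev/zero of=/mnt/ramfs/fill bs=1M count=400
    rm /mnt/ramfs/fill

    # Then rerun the bash workload that was segfaulting; if the crashes
    # stop, stage-2 fault handling becomes the prime suspect.

As noted in the reply above, Xen already pre-allocates stage-2 entries by
default, so this test would mainly serve to rule that path out rather than
confirm it.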