
Re: [Xen-users] Debugging DomU



On 29/05/15 03:54, Chris (Christopher) Brand wrote:
> Hi Julien,
> 
>>> I hunted around quite a bit, and didn't find anything. Nothing leaps out in 
>>> the list of upstream kernel patches to mmu.c (there's a migration from 
>>> meminfo to memblock, which I tried backporting with no effect on 
>>> behaviour). In most of the reports of similar panics that I found, the 
>>> recommendation was to ensure that u-boot was disabling the L2 cache before 
>>> jumping to the kernel, which is presumably not helpful here.
>>
>> Even though the bug occurred in mmu.c, it was caused by a miscalculation 
>> in kernel/head.S.
> 
> __fixup_pv_table()?

It was one of the offending functions. I don't remember the exact problem.
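
For background, CONFIG_ARM_PATCH_PHYS_VIRT makes the kernel compute the
physical/virtual offset at boot and patch every conversion site in place.
A minimal sketch of the net effect (illustrative model only; the real
implementation is hand-written assembly in arch/arm/kernel/head.S):

/* Illustrative model, not the kernel's code: the patched instructions
 * implement a linear conversion with a boot-time offset. */
static unsigned long __pv_offset;  /* PHYS_OFFSET - PAGE_OFFSET, set at boot */

static inline unsigned long sketch_virt_to_phys(unsigned long vaddr)
{
        return vaddr + __pv_offset;
}

static inline unsigned long sketch_phys_to_virt(unsigned long paddr)
{
        return paddr - __pv_offset;
}

/* If head.S miscomputes __pv_offset, every __va()/__pa() in the kernel
 * is skewed, which is how a head.S bug surfaces later in mmu.c. */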

> Looking at "git blame" for that file upstream and in my kernel, there are 
> four patches that affect the part of the code that is conditional on 
> CONFIG_ARM_PATCH_PHYS_VIRT:
> e26a9e00afc - this sounds like just an optimisation
> 7a06192834414 - this just replaces "12" with "PAGE_SHIFT"
> e3892e9160 - this says it only affects big-endian
> 6ebbf2ce437b3 - this should just be an optimisation
> None of those sound like likely candidates.
> 
>>> Throwing some printk() calls into sanity_check_meminfo() shows that it 
>>> decides that all the memory is highmem, and so passes 0 to 
>>> memblock_set_current_limit(). That then seems to lead to the failure to 
>>> find suitable blocks of memory to allocate, and hence the panic.
>>
>> That's exactly the problem I had with some CONFIG_VMSPLIT_* options. It was 
>> related to Linux computing a wrong offset between the virtual and the 
>> physical address.
>  
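
To see how a skewed offset produces that result, here is a heavily
simplified, self-contained model of the highmem check in 3.14-era
arch/arm/mm/mmu.c:sanity_check_meminfo() (reduced stand-ins, not the
kernel's own types or values):

#include <stdio.h>

#define PAGE_OFFSET 0xC0000000UL        /* CONFIG_VMSPLIT_3G */
#define VMALLOC_MIN 0xEF800000UL        /* illustrative lowmem ceiling */

/* phys - virt delta, normally patched at boot by head.S; left at a
 * wrong value here to reproduce the "everything is highmem" symptom. */
static unsigned long pv_offset;

static unsigned long va(unsigned long pa) { return pa - pv_offset; }

struct bank { unsigned long start, size; };

int main(void)
{
        struct bank banks[] = { { 0x80000000UL, 0x10000000UL } };
        unsigned long memblock_limit = 0;

        for (unsigned int i = 0; i < sizeof(banks) / sizeof(banks[0]); i++) {
                unsigned long v = va(banks[i].start);

                /* 3.14 treats a bank as highmem when its linear-map
                 * address falls outside [PAGE_OFFSET, vmalloc_min). */
                if (v >= VMALLOC_MIN || v < PAGE_OFFSET)
                        printf("bank %u: highmem\n", i);
                else
                        memblock_limit = banks[i].start + banks[i].size;
        }

        /* With every bank highmem, this stays 0 -- the zero passed to
         * memblock_set_current_limit() in the report above. */
        printf("memblock limit = 0x%lx\n", memblock_limit);
        return 0;
}
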
>>> As an experiment, I tried changing the start of memory in the DTS away from 
>>> 0x80000000. With that change, I can get the same result with 
>>> CONFIG_VMSPLIT_3G as I got with the other configs above (PC=0xfff000c). 
>>> That seems to indicate that this is the problem you recalled, but that 
>>> there's yet another problem I'm hitting afterwards. I *think* I saw it go 
>>> from __arm_ioremap_pfn() into do_DataAbort(), but I'm far from certain.
>>
>> How did you choose 0x80000000?
> 
> That was suggested to me by somebody here. Is it arbitrary? It seems like it 
> should be.
> 
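
From the kernel's point of view, 0x80000000 is not entirely arbitrary:
whether a given RAM base works out depends on the linear-map arithmetic
for the chosen VMSPLIT. A worked example with the values above (plain
arithmetic, not kernel code):

#define PAGE_OFFSET 0xC0000000UL   /* CONFIG_VMSPLIT_3G */
#define PHYS_OFFSET 0x80000000UL   /* RAM base from the DTS */

/* linear map: virt = phys - PHYS_OFFSET + PAGE_OFFSET */
#define VA(pa) ((pa) - PHYS_OFFSET + PAGE_OFFSET)

/* VA(0x80000000) == 0xC0000000 == PAGE_OFFSET: the first byte of RAM
 * lands exactly at the bottom of lowmem, so with a correct phys/virt
 * offset the whole bank is plain lowmem. */
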
>> In a previous mail you were saying that you are using a custom kernel based 
>> on 3.14, right? I'm wondering if the kernel is trying to map a device, which 
>> it should not do.
> 
> Yes, that's correct. I've attached my dts, which is pretty minimal.

Is this the DTS for DOM0 or for the guest?

>> Can you try to apply the patch below in Xen? It will print any guest data 
>> abort not handled by Xen before injecting it into the guest.
> 
> Between that patch and more printk debugging, I know where it was dying:
> setup_arch()
>   paging_init()
>     dma_contiguous_remap()
>       iotable_init()
>         early_alloc_aligned()
>           memset(0xee7fffd0, 0, 0x30)
> The output from your patch is:
> (XEN) traps.c:2022:d8v0 HSR=0x90000046 pc=0xc025ab80 gva=0xee7fffd0
> 
> So I'm thinking that this could still be related to 
> __memblock_find_range_top_down(), if it now "succeeds" but still returns 
> something invalid...
> 
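
For reference, that HSR value decodes cleanly with the architectural
ARMv7 HSR layout. A small standalone decoder (my own debugging helper,
not code from Xen):

#include <stdio.h>

int main(void)
{
        unsigned int hsr  = 0x90000046u;
        unsigned int ec   = hsr >> 26;       /* exception class        */
        unsigned int wnr  = (hsr >> 6) & 1;  /* 1 = write, 0 = read    */
        unsigned int dfsc = hsr & 0x3f;      /* data fault status code */

        printf("EC=0x%02x   (0x24: data abort from a lower EL)\n", ec);
        printf("WnR=%u      (write)\n", wnr);
        printf("DFSC=0x%02x (0x06: translation fault, level 2)\n", dfsc);
        return 0;
}

So the guest performed a write to gva=0xee7fffd0 that faulted as a
translation fault, which is consistent with the memset() above hitting
an address the kernel believes is mapped but that is not backed by a
valid translation.
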
> I applied e26a9e00afc. It made no difference by itself. I then tried tweaking 
> the memory base address. With 0x20000000, I saw the same crash I was seeing 
> before. With 0x40000000, though, it gets much further, dying in 
> gic_init_bases():

I'm confused. Where are you tweaking the memory base address? For DOM0?
For Xen?

Modifying the base address of the guest memory will likely not work, as
the memory layout is predefined by Xen.

When you create a guest you only need to provide the kernel. The device
tree will be created by the toolstack.

Sorry if I have already asked this before. Can you summarize your status:
        - Version of Xen used and modifications you made
        - Version of Linux DOM0 used
        - Version of Linux DOMU used
        - Do you append a device tree to the DOMU?
        - xl configuration file used to create the DOMU (an illustrative
          example is sketched below).
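
For reference, a minimal ARM guest configuration looks roughly like
this (illustrative only; the name and kernel path are placeholders,
not values from this thread):

# Illustrative xl configuration for an ARM DOMU. Note there is no
# device tree entry: the toolstack generates the guest's DT itself.
name   = "domu-test"
kernel = "/root/zImage"
memory = 128
vcpus  = 1
extra  = "console=hvc0"

You would then create the guest with "xl create <file>".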

Regards,

-- 
Julien Grall

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

