
Re: [Xen-devel] Errors with Loading Xen at a Certain Address


  • To: Julien Grall <julien.grall@xxxxxxx>
  • From: Brian Woods <brian.woods@xxxxxxxxxx>
  • Date: Fri, 4 Oct 2019 08:36:55 -0700
  • Cc: Brian Woods <brian.woods@xxxxxxxxxx>, nd <nd@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 04 Oct 2019 15:37:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Oct 04, 2019 at 10:49:28AM +0100, Julien Grall wrote:
> Hi Brian,
> 
> On 04/10/2019 01:25, Brian Woods wrote:
> >
> >In the log, there's:
> >(XEN) MODULE[0]: 0000000001400000 - 00000000015328f1 Xen
> >(XEN) MODULE[1]: 00000000076d2000 - 00000000076dc080 Device Tree
> >(XEN) MODULE[2]: 00000000076df000 - 0000000007fff364 Ramdisk
> >(XEN) MODULE[3]: 0000000000080000 - 0000000003180000 Kernel
> >(XEN)  RESVD[0]: 00000000076d2000 - 00000000076dc000
> >(XEN)  RESVD[1]: 00000000076df000 - 0000000007fff364
> >
> >Linux kernel ->   8_0000 - 318_0000
> >Xen          -> 140_0000 - 153_28f1
> >
> >There's something not quite right here... I'm guessing Xen was working
> >at the address before because it was out of the "range" of the Linux
> >kernel.  Now I guess I need to look into whether it's a Xen or U-Boot issue.
> 
> The load addresses you wrote match the ones you seem to have requested in 
> U-Boot:
> 
> Filename 'yocto-Image'.
> Load address: 0x80000
> 
> Filename 'xen-custom.ub'.
> Load address: 0x1400000
> 
> But the size does not match the one you provided in the Device-Tree:
> 
> Bytes transferred = 18215424 (115f200 hex)
> 
> vs
> 
> 0x0000000003180000 - 0x0000000000080000 = 0x3100000
> 
> This is always a risk when you hard-code the size and location of the
> binaries in the Device-Tree ahead of time. If you are loading via TFTP or
> from a filesystem, it is much less risky to have a U-Boot script generate
> the Xen DT nodes.
> 
> Cheers,
> 
> -- 
> Julien Grall

Yeah.  When I went in and changed the kernel module's end address in the
device tree so it no longer covered Xen's load address, it all worked.  I'm
guessing Xen could use some warnings, or at the very least some checks, to
alert the user that the device tree may need tweaking.  It seems the blame
wasn't primarily on Xen, although staying silent about the overlap didn't
do anyone any favors.
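
For reference, here is a rough sketch of the kind of U-Boot script Julien
is describing, in case it helps anyone else.  It assumes TFTP loading; the
kernel/Xen file names and load addresses are the ones from the log above,
the DTB file name (system.dtb) is a placeholder and its address is borrowed
from the MODULE[1] line, and the /chosen module node follows Xen's
"multiboot,module" device tree binding.  Treat it as a sketch, not a
drop-in script:

  # Load the Dom0 kernel first and capture its real size; each tftpboot
  # overwrites ${filesize}, so save it straight away.
  tftpboot 0x80000 yocto-Image
  setenv kernel_size 0x${filesize}

  # Load the device tree (file name and address assumed) and grow it so
  # the new nodes fit.
  tftpboot 0x76d2000 system.dtb
  fdt addr 0x76d2000
  fdt resize 1024

  # Describe the Dom0 kernel to Xen using the size U-Boot just measured,
  # instead of a size written into the .dts ahead of time.
  fdt set /chosen \#address-cells <0x2>
  fdt set /chosen \#size-cells <0x2>
  fdt mknode /chosen module@0
  fdt set /chosen/module@0 compatible "multiboot,kernel" "multiboot,module"
  fdt set /chosen/module@0 reg <0x0 0x80000 0x0 ${kernel_size}>

  # Finally load Xen itself and boot it with the patched device tree.
  tftpboot 0x1400000 xen-custom.ub
  bootm 0x1400000 - 0x76d2000

The ramdisk would get a matching module@1 node ("multiboot,ramdisk") built
the same way, so the module regions Xen prints at boot always match what
was actually transferred.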

-- 
Brian Woods

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

