Re: [Xen-devel] xl create failure on arm64 with XEN 4.9rc6
Yap, worked like a charm. Much thanks.
Tested on Centos 7 arm64 system with 4.12rc2 64K page size kernel
(dom0 and domU) & Xen 4.9rc6
Tested-by: Feng Kan <fkan@xxxxxxx>
On Sun, May 28, 2017 at 10:12 AM, Julien Grall <julien.grall@xxxxxxx> wrote:
> Hi,
>
> On 05/26/2017 11:22 PM, Feng Kan wrote:
>> On Fri, May 26, 2017 at 5:40 AM, Julien Grall <julien.grall@xxxxxxx> wrote:
>>>
>>>
>>> On 26/05/17 01:37, Feng Kan wrote:
>>>>
>>>> On Thu, May 25, 2017 at 12:56 PM, Julien Grall <julien.grall@xxxxxxx>
>>>> wrote:
>>>>>
>>>>> (CC toolstack maintainers)
>>>>>
>>>>> On 25/05/2017 19:58, Feng Kan wrote:
>>>>>>
>>>>>>
>>>>>> Hi All:
>>>>>
>>>>>
>>>>>
>>>>> Hello,
>>>>>
>>>>>> This is not specifically against the XEN 4.9. I am using 4.12rc2
>>>>>> kernel on arm64 platform. Started dom0 fine with ACPI enabled, but
>>>>>> failed when creating the domU guest. Xen is built natively on the
>>>>>> arm64 platform. Using the same kernel and ramdisk as dom0. Any idea
>>>>>> why it is stuck here would be greatly appreciated.
>>>>>
>>>>>
>>>>>
>>>>> The first step would be to try a stable release if you can. Also, it
>>>>> would be useful if you could provide information about the guest (i.e.
>>>>> the configuration) and your .config for the kernel.
>>>>
>>>> I am using the default xen_defconfig in the arm64 directory.
>>>
>>>
>>> I am confused. There is no xen_defconfig in the arm64 directory of the
>>> kernel. So which one are you talking about?
>> Sorry, my mistake.
>>>
>>>> This is very early on in building the domain, would the guest
>>>> configuration matter?
>>>
>>>
>>> The configuration of DOM0 kernel matters when you want to build the guest.
>>> That's why I wanted to know what options you enabled.
>> I see. I am using the default centos 7.2 kernel config plus enabling
>> the XEN option. (Attached below)
>
> Looking at the .config, Linux is using 64KB page granularity.
>
> I managed to reproduce the failure (though with a different error) by using
> an initramfs > 32MB (a smaller one works). The patch below should fix the
> error; can you give it a try?
>
> commit c4684b425552a8330f00d7703f3175d721992ab0
> Author: Julien Grall <julien.grall@xxxxxxx>
> Date: Sun May 28 17:50:07 2017 +0100
>
> xen/privcmd: Support correctly 64KB page granularity when mapping memory
>
> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
> granularity" did not go far enough to support 64KB in mmap_batch_fn.
>
> The variable 'nr' is the number of 4KB chunks to map. However, when Linux
> is using 64KB page granularity the array of pages (vma->vm_private_data)
> contains one page per 64KB. Fix it by incrementing st->index correctly.
>
> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
> XEN_PAGE_SIZE.
>
> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page granularity")
> CC: stable@xxxxxxxxxxxxxxx
> Reported-by: Feng Kan <fkan@xxxxxxx>
> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
>
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 7a92a5e..38d9a43 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -345,7 +345,7 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>  	int ret;
>
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
> -		cur_pages = &pages[st->index];
> +		cur_pages = &pages[st->index / XEN_PFN_PER_PAGE];
>
>  	BUG_ON(nr < 0);
>  	ret = xen_remap_domain_gfn_array(st->vma, st->va & PAGE_MASK, gfnp, nr,
> @@ -362,7 +362,7 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>  			st->global_error = 1;
>  		}
>  	}
> -	st->va += PAGE_SIZE * nr;
> +	st->va += XEN_PAGE_SIZE * nr;
>  	st->index += nr;
>
>  	return 0;
>
> Cheers,
>
> --
> Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel