
Re: [Xen-devel] is it possible to build two privileged domain at boot time?



Hi Derek,
I have tried your two suggestions.
I removed the following lines:

   1. BUG_ON(d->domain_id != 0);
   2. discard_initial_images();

Xen still panics at the same point.
(XEN) Xen call trace:
(XEN) [<ff119d34>] elf_is_elfbinary+0x4/0x20
(XEN) [<ff11af53>] elf_init+0x13/0x520
(XEN) [<ff10ec8b>] avail_heap_pages+0x2b/0xa0
(XEN) [<ff12284c>] construct_dom0+0x13c/0x1830
(XEN) [<ff11ee10>] put_newline+0x50/0x80
(XEN) [<ff11c957>] sercon_puts+0x27/0x30
(XEN) [<ff11c9b1>] __putstr+0x51/0x60
(XEN) [<ff1971e8>] __start_xen+0xe28/0xf80
(XEN) [<ff10018a>] no_execute_disable+0x53/0x55
(XEN)
(XEN) Pagetable walk from 00c00000:
(XEN) L3[0x000] = 00000000001c8001 55555555
(XEN) L2[0x006] = 0000000000000000 ffffffff

00c00000 is the address of _image_start. The kernel image address has been
unmapped even though I disabled discard_initial_images.
So we found another place that unmaps the kernel image: zap_low_mappings.

    zap_low_mappings(l2start);
    zap_low_mappings(idle_pg_table_l2);

construct_dom0 can still use the kernel image if these calls are skipped
the first time it is called by __start_xen.
But another problem occurs: construct_dom0 will panic on the following lines:

    if ( (page = alloc_chunk(d, nr_pages - d->tot_pages)) == NULL )
        panic("Not enough RAM for DOM1 reservation.\n");

Note that dom0 can be constructed completely; the panic happens the second
time construct_dom0 is called.
Since I know little about the memory allocation of domain 0, I need your
suggestion on how to reserve memory for the second privileged domain.
Regards,
Dowen

Derek Murray wrote:
> Hi Dowen,
>
> Okay - this might prove to be an interesting effort!
>
> I assume you're seeing panics in construct_dom0, the second time that
> you call it.
>
> First of all, I presume that you've removed this line:
>
>     /* Sanity! */
>     BUG_ON(d->domain_id != 0);
>
> ...which will cause a bug in the hypervisor at the second instance of
> calling it.
>
> Your reference to the ELF image being changed might be to do with this line:
>
>     /* Free temporary buffers. */
>     discard_initial_images();
>
> ...also in construct_dom0. The second time round, these buffers will
> have been freed, which is probably why you're getting garbage for the
> second domain. So you'll want to move this out into the calling
> function.
>
> Hope this helps, and let me know how you get on.
>
> Regards,
>
> Derek.
>
> 2008/6/2 文成建 <wenchengjian@xxxxxxxxxxxxxxx>:
>   
>> Hi Derek,
>> Thank you very much for your response. Your suggestion helps me a lot.
>> What you said at the very beginning of your mail is exactly what I want
>> for my purpose.
>> Though it is not ideal method I'd like to implement it to train myself
>> on xen technology.
>> Soon later maybe I'll be able to implement the method you suggest.
>> Now I am trying to build two privileged domains, mainly by calling
>> construct_dom0 twice in the function __start_xen. I realized that the
>> reason for the panic is that the original domain 0 kernel image has
>> been changed after construct_dom0, but I can't find how it has been
>> changed. If I knew the details, I could avoid the step that changes
>> the ELF image, or I could prepare the ELF image once again. Right?
>> I need your help about it. Thanks in advance!
>>
>> Regards,
>> Dowen
>>
>>
>> Derek Murray wrote:
>>     
>>> Hi Dowen,
>>>
>>> It would be possible to create two privileged domains at boot time (by
>>> modifying the hypervisor to make it possible for the domain builder to
>>> create more than one initial domain; or you could add a privileged
>>> hypercall to make other domains privileged, and modify the domain
>>> builder in Dom0). However, I'm not sure that this is what you would
>>> want for your purposes.
>>>
>>> If Dom0 crashes, it typically brings down the whole physical machine,
>>> because Dom0 is responsible for managing various parts of the physical
>>> platform. Therefore, I don't think it would be straightforward to
>>> perform failover to a second Dom0 in the event of the primary Dom0
>>> crashing.
>>>
>>> Perhaps a better idea would be to have a stripped-down Linux that acts
>>> as Dom0 for managing the platform, but which has no user-space
>>> applications or device drivers (and therefore would be much less
>>> likely to shut down unexpectedly). Then you could use PCI device
>>> passthrough to a second privileged domain (say, Dom1), which then runs
>>> the management software and hosts the physical device drivers.
>>> Although it wouldn't be bulletproof (since a malfunctioning device
>>> driver could probably still hose the entire system, unless you use
>>> VT-d or similar), you could probably restart Dom1 if it crashed. You'd
>>> need to modify some of the tools to make things like XenStore (which
>>> holds configuration details for the domains) persist across reboots.
>>> You might also benefit from looking at the domain save/restore code so
>>> that, if Dom1 crashes, all domains would be paused while it is
>>> rebooted, and restored when it is running again.
>>>
>>> Regards,
>>>
>>> Derek Murray.
>>>
>>> On Thu, May 29, 2008 at 5:55 PM, 文成建 <wenchengjian@xxxxxxxxxxxxxxx> wrote:
>>>
>>>       
>>>> Hi All,
>>>>   I am not very familiar with xen details. Now I am thinking of
>>>> building two privileged domains (domain 0, not driver domains) at
>>>> boot time.
>>>> The other question is whether, when domain 0 is shut down
>>>> unexpectedly, another domain 0 can start running at once.
>>>> Maybe it is absurd. I am looking forward to your suggestions.
>>>>
>>>> Regards,
>>>> Dowen
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>>>> http://lists.xensource.com/xen-devel
>>>>
>>>>
>>>>         
>>>       
>>     
>
>   
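[An aside on the stripped-down-Dom0 idea quoted above: with the device hidden from Dom0 via pciback, the privileged driver domain's configuration might look roughly like the following xm-era sketch. The BDF 0000:02:00.0, kernel path, and disk path are all placeholders, not values from this thread:]

```
# Hypothetical guest config for the driver domain (xm syntax of
# that era); all device and path values are made up.
kernel = "/boot/vmlinuz-2.6.18.8-xen"
memory = 512
name   = "driver-domain"
pci    = [ '0000:02:00.0' ]
disk   = [ 'phy:/dev/vg0/driverdom,xvda,w' ]
```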


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

