Re: [Xen-devel] is it possible to build two privileged domain at boot time?
Hi Dowen,

I don't think this problem is best solved by giving the I/O ports to both domains - there is a high probability that the two domains will attempt to use the same ports, and cause some misconfiguration or confusion in the low-level hardware. The best approach is probably to look for code in the Dom0 kernel that is gated by is_initial_xendomain(). You would have to audit this to make sure that things that should only happen once on platform boot happen only in the first Dom0. Then, as long as you keep Dom1 paused while Dom0 is running, you could probably grant the same I/O ports to both domains....

Regards,

Derek Murray.

2008/6/4 Chengjian Wen <wenchengjian@xxxxxxxxxxxxxxx>:
> Hi Derek,
> Thank you for your suggestion about the amount of physical memory. When
> I adjust the amount of physical memory allocated to dom0 and dom1
> (another privileged domain), it no longer panics with "Not enough
> memory". That means construct_dom0 can succeed when called to construct
> both dom0 and dom1. But the domains still cannot boot.
>
> Soon after, we found that zap_low_mappings(l2start) should be called on
> every call to construct_dom0, while zap_low_mappings(idle_pg_table_l2)
> should be called only when constructing dom1 (first dom0, then dom1).
> With that change it all works: dom0 boots successfully and we can see
> both dom0 and dom1 in "xm list" while the domain 0 system is running.
>
> The problem is that the state of dom1 is '--p---'.
> By changing the state of dom1 we can get more debugging information:
>
> (XEN) mm.c:612:d1 Non-privileged (1) attempt to map I/O space 00000000
> (XEN) mm.c:3267:d1 ptwr_emulate: could not get_page_from_l1e()
> (XEN) Unhandled page fault in domain 1 on VCPU 0 (ec=0003)
> (XEN) Pagetable walk from c084d8b0:
> (XEN)  L3[0x003] = 000000007d7d8001 000007d8
> (XEN)  L2[0x004] = 000000007d7dd067 000007dd
> (XEN) esi: c084d8b0 edi: 0084d8b0 ebp: f5516000 esp: c03b3edc
> (XEN) cr0: 8005003b cr4: 000026f0 cr3: 7d7d4000 cr2: c084d8b0
> (XEN) ds: e021 es: e021 fs: 0000 gs: 0000 ss: e021 cs: e019
> (XEN) Guest stack trace from esp=c03b3edc:
>
> We realized that the reason for the dom1 panic is that we had not given
> it any permission to access I/O ports. We did not modify any code
> related to I/O port access in the bottom half of construct_dom0.
> At present we can get dom1 to start booting by giving it permission to
> access all I/O ports, but that causes dom0 to hang. My question is how
> I can allocate these I/O ports to the two domains. Maybe this comes
> back to the primary purpose of constructing two privileged domains.
>
> I really need your help. The result at present is that we can boot two
> privileged domain 0s, and we need suggestions about how to allocate
> these I/O ports, or anything else (maybe there are other permissions
> needed by dom0).
>
> Regards,
> Dowen
>
> Derek Murray wrote:
>> Hi Dowen,
>>
>> I think the problem here is nr_pages, which is calculated by the
>> function compute_dom0_nr_pages() in domain_build.c, and which
>> ultimately derives from dom0_nrpages (if dom0_mem is set on the
>> command line) or the total amount of memory. So it's probably trying
>> to allocate the same amount of physical memory to both Dom0 and Dom1,
>> which is causing it to fail.
>> For the moment, it might work if you set
>> dom0_mem to a small amount of memory, but you would ultimately need to
>> change this logic to ensure that both domains could coexist in the
>> default case.
>>
>> Regards,
>>
>> Derek.
>>
>> 2008/6/3 文成建 <wenchengjian@xxxxxxxxxxxxxxx>:
>>> Hi Derek,
>>> I have tried your two suggestions and removed the following lines:
>>>
>>> 1. BUG_ON(d->domain_id != 0);
>>> 2. discard_initial_images();
>>>
>>> Xen still panics at the same point:
>>>
>>> (XEN) Xen call trace:
>>> (XEN)    [<ff119d34>] elf_is_elfbinary+0x4/0x20
>>> (XEN)    [<ff11af53>] elf_init+0x13/0x520
>>> (XEN)    [<ff10ec8b>] avail_heap_pages+0x2b/0xa0
>>> (XEN)    [<ff12284c>] construct_dom0+0x13c/0x1830
>>> (XEN)    [<ff11ee10>] put_newline+0x50/0x80
>>> (XEN)    [<ff11c957>] sercon_puts+0x27/0x30
>>> (XEN)    [<ff11c9b1>] __putstr+0x51/0x60
>>> (XEN)    [<ff1971e8>] __start_xen+0xe28/0xf80
>>> (XEN)    [<ff10018a>] no_execute_disable+0x53/0x55
>>> (XEN)
>>> (XEN) Pagetable walk from 00c00000:
>>> (XEN)  L3[0x000] = 00000000001c8001 55555555
>>> (XEN)  L2[0x006] = 0000000000000000 ffffffff
>>>
>>> 00c00000 is the address of _image_start. The kernel image address has
>>> been unmapped even though I disabled discard_initial_images().
>>> So we found another place where the kernel image is unmapped -
>>> zap_low_mappings:
>>>
>>>     zap_low_mappings(l2start);
>>>     zap_low_mappings(idle_pg_table_l2);
>>>
>>> construct_dom0 can still use the kernel image if these are not called
>>> the first time it is invoked by __start_xen.
>>> But another problem occurs. construct_dom0 will panic on the
>>> following lines:
>>>
>>>     if ( (page = alloc_chunk(d, nr_pages - d->tot_pages)) == NULL )
>>>         panic("Not enough RAM for DOM1 reservation.\n");
>>>
>>> Note that dom0 can be constructed completely. The panic happens the
>>> second time construct_dom0 is called.
>>> Since I know little about the memory allocation of domain 0, I need
>>> your suggestions about how to reserve memory for the other privileged
>>> domain.
>>>
>>> Regards,
>>> Dowen
>>>
>>> Derek Murray wrote:
>>>> Hi Dowen,
>>>>
>>>> Okay - this might prove to be an interesting effort!
>>>>
>>>> I assume you're seeing panics in construct_dom0, the second time that
>>>> you call it.
>>>>
>>>> First of all, I presume that you've removed this line:
>>>>
>>>>     /* Sanity! */
>>>>     BUG_ON(d->domain_id != 0);
>>>>
>>>> ...which will cause a bug in the hypervisor at the second instance of
>>>> calling it.
>>>>
>>>> Your reference to the ELF image being changed might be to do with
>>>> this line:
>>>>
>>>>     /* Free temporary buffers. */
>>>>     discard_initial_images();
>>>>
>>>> ...also in construct_dom0. The second time round, these buffers will
>>>> have been freed, which is probably why you're getting garbage for the
>>>> second domain. So you'll want to move this out into the calling
>>>> function.
>>>>
>>>> Hope this helps, and let me know how you get on.
>>>>
>>>> Regards,
>>>>
>>>> Derek.
>>>>
>>>> 2008/6/2 文成建 <wenchengjian@xxxxxxxxxxxxxxx>:
>>>>> Hi Derek,
>>>>> Thank you very much for your response. Your suggestion helps me a
>>>>> lot. What you said at the very beginning of your mail is exactly
>>>>> what I want for my purpose. Though it is not the ideal method, I'd
>>>>> like to implement it to train myself on Xen technology. Later I may
>>>>> be able to implement the method you suggest.
>>>>> Now I am trying to build two privileged domains, mainly by calling
>>>>> construct_dom0 twice in the function __start_xen. I realized that
>>>>> the reason for the panic is that the original domain 0 kernel image
>>>>> has been changed after construct_dom0, but I can't find how it was
>>>>> changed. If I knew the details, I could avoid the step that changes
>>>>> the ELF image, or prepare the ELF image again. Right?
>>>>> I need your help with this. Thanks in advance!
>>>>>
>>>>> Regards,
>>>>> Dowen
>>>>>
>>>>> Derek Murray wrote:
>>>>>> Hi Dowen,
>>>>>>
>>>>>> It would be possible to create two privileged domains at boot time
>>>>>> (by modifying the hypervisor to make it possible for the domain
>>>>>> builder to create more than one initial domain; or you could add a
>>>>>> privileged hypercall to make other domains privileged, and modify
>>>>>> the domain builder in Dom0). However, I'm not sure that this is
>>>>>> what you would want for your purposes.
>>>>>>
>>>>>> If Dom0 crashes, it typically brings down the whole physical
>>>>>> machine, because Dom0 is responsible for managing various parts of
>>>>>> the physical platform. Therefore, I don't think it would be
>>>>>> straightforward to perform failover to a second Dom0 in the event
>>>>>> of the primary Dom0 crashing.
>>>>>>
>>>>>> Perhaps a better idea would be to have a stripped-down Linux that
>>>>>> acts as Dom0 for managing the platform, but which has no user-space
>>>>>> applications or device drivers (and therefore would be much less
>>>>>> likely to shut down unexpectedly). Then you could use PCI device
>>>>>> passthrough to a second privileged domain (say, Dom1), which then
>>>>>> runs the management software and hosts the physical device drivers.
>>>>>> Although it wouldn't be bulletproof (since a malfunctioning device
>>>>>> driver could probably still hose the entire system, unless you use
>>>>>> VT-d or similar), you could probably restart Dom1 if it crashed.
>>>>>> You'd need to modify some of the tools to make things like XenStore
>>>>>> (which holds configuration details for the domains) persist across
>>>>>> reboots. You might also benefit from looking at the domain
>>>>>> save/restore code so that, if Dom1 crashes, all domains would be
>>>>>> paused while it is rebooted, and restored when it is running again.
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Derek Murray.
>>>>>> On Thu, May 29, 2008 at 5:55 PM, 文成建 <wenchengjian@xxxxxxxxxxxxxxx>
>>>>>> wrote:
>>>>>>> Hi All,
>>>>>>> I am not very familiar with Xen details. Now I am thinking of
>>>>>>> building two privileged domains (domain 0, not driver domains) at
>>>>>>> boot time.
>>>>>>> The other question is whether, when domain 0 is shut down
>>>>>>> unexpectedly, another domain 0 can run at once.
>>>>>>> Maybe it is absurd. I am looking forward to your suggestions.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Dowen
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Xen-devel mailing list
>>>>>>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>>>>>>> http://lists.xensource.com/xen-devel