RE: [Xen-ia64-devel] Re: [Xen-devel] XenLinux/IA64 domU forwardport
Alex Williamson wrote:
> On Fri, 2008-02-15 at 00:43 +0800, Dong, Eddie wrote:
>> I agree with your categories, but I think #C is the first challenge we
>> need to address for now. #A could be a future task for performance,
>> after the pv_ops functionality is completed. I don't worry about the
>> few cycles of difference in the primitive ops right now, since we
>> already spend 500-1000 cycles to enter the C code.
>
> IMHO, #A and #C are both blockers for getting into upstream
> Linux/ia64. Upstream isn't going to accept a performance hit for a
> paravirt-enabled kernel on bare metal, so I'm not sure we should
> prioritize one over the other, especially since Isaku has already made
> such good progress on #A.

I guess we are talking from different angles, which hides the real issue.
We have multiple alternatives:

1: pv_ops
2: pv_ops + binary patching to convert the indirect function calls into
   direct function calls, as on x86
3: pure binary patching

For the community, #1 needs a lot of effort, like what Jeremy spent on the
x86 side; it could last 6-12 months. #2 is based on #1, and the additional
effort is very small, probably 2-4 weeks. #3 is not pv_ops; it may need
2-3 months of effort. From my understanding of Yamahata-san's previous
patch, it addresses part of the #3 effort, i.e. #A of #3.

What I want to suggest is #2. With pv_ops, all the instructions in A/B/C
are already replaced by source-level pv_ops code, so no binary patching is
needed there. The only patching needed in #2 is to convert the indirect
function calls into direct function calls for a few hot APIs, as x86 does
for cli/sti; the majority of pv_ops are not patched. So the #2 and #3
approaches basically conflict, and we probably need to decide which way to
go early. For the #1 effort, adopting pv_ops in the IVT code is one of the
major items, i.e. item #C in the previous email. The current progress on
#3 won't be wasted; it simplifies the debugging of #2, since it already
gets a new kernel working :)
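A minimal userspace sketch of the difference between #1 and #2 may help
here; the struct and function names below are made up for illustration
and are not the actual Linux paravirt_ops code. The source always calls
through the ops table, and only a few hot call sites would be rewritten
into direct calls at boot.

/*
 * Sketch only: pv_ops-style indirect dispatch (#1) versus the direct
 * call that boot-time patching would leave behind at a hot site (#2),
 * the way x86 patches cli/sti.  Hypothetical names throughout.
 */
#include <stdio.h>

/* Ops table: one slot per low-level primitive (approach #1). */
struct pv_irq_ops_sketch {
        void (*irq_disable)(void);
        void (*irq_enable)(void);
};

/* Bare-metal backend: on real ia64 hardware these would be rsm/ssm psr.i. */
static void native_irq_disable(void) { puts("native: rsm psr.i"); }
static void native_irq_enable(void)  { puts("native: ssm psr.i"); }

/* Xen backend: would mask/unmask event delivery via the shared page or a
 * hypercall. */
static void xen_irq_disable(void) { puts("xen: mask event delivery"); }
static void xen_irq_enable(void)  { puts("xen: unmask events, check pending"); }

/* Chosen once at boot, depending on what we are running on. */
static struct pv_irq_ops_sketch pv_irq_ops = {
        .irq_disable = native_irq_disable,
        .irq_enable  = native_irq_enable,
};

/*
 * Approach #1 call site: every local_irq_disable() is an indirect call
 * through the table.  Correct everywhere, but it costs an extra load
 * and an indirect branch on every use, even on bare metal.
 */
static void local_irq_disable_unpatched(void)
{
        pv_irq_ops.irq_disable();
}

/*
 * Approach #2: the call site above is recorded in a patch-site table
 * (think of x86's .parainstructions section).  At boot, once the
 * backend is known, the patcher overwrites the indirect call with a
 * direct call, or with the inlined instruction itself, so bare metal
 * pays nothing.  The source stays identical to #1; only a handful of
 * hot primitives are patched.  Shown here simply as the direct call
 * the patcher would produce on native hardware:
 */
static void local_irq_disable_patched_native(void)
{
        native_irq_disable();   /* direct call, no pointer load */
}

int main(void)
{
        local_irq_disable_unpatched();          /* #1: indirect dispatch */
        local_irq_disable_patched_native();     /* #2: after boot-time patching */

        /* A Xen guest would instead point (or patch) the slots at the
         * xen_* functions. */
        pv_irq_ops.irq_disable = xen_irq_disable;
        pv_irq_ops.irq_enable  = xen_irq_enable;
        local_irq_disable_unpatched();
        pv_irq_ops.irq_enable();
        return 0;
}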
>> The major challenge to #C is listed in my previous thread; it is not
>> an easy thing to address for now, especially if we need to change the
>> original IVT code a lot.
>
> The question of how to handle the IVT needs to be decided on
> linux-ia64. There are a couple of approaches we could take, but it
> really comes down to what Tony and the other developers feel is
> cleanest and most maintainable.

100% agree! I will start a thread there soon.

> I think we actually have similar issues with the C code in
> sba_iommu and swiotlb. We have paravirtualized versions of these,
> but they're very Xen-specific. I think we'll need to abstract the
> interfaces more to make the inline paravirtualization acceptable.
>
>> Another big challenge is the machine vector. I would like to start a
>> separate thread to discuss it some time later. Basically it overlaps
>> somewhat with pv_ops.
>
> We might extend the machine vector to include some PV features, but at
> the moment they seem somewhat orthogonal to me. The current xen
> machine vector helps to simplify things for an unprivileged guest, but

Yes.

> dom0 will need to use the appropriate bare metal machine vector while
> still making use of pv_ops. So we somehow need to incorporate pv_ops

Yes. Since dom0 has to see the same platform as bare metal, we need those
low-level pv_ops beneath the machine vector so that dom0 can work on
different platforms in the future, such as SGI platforms.

For an unprivileged guest, we can keep the xen machine vector, or rely
purely on pv_ops: for example, present domU a native-looking dig machine
vector with pv_ops beneath it, and see whether that simplifies the
upstream changes (a rough sketch of this layering is at the end of this
mail). My position on the machine vector for now is to leave it as it is;
we can revisit it at a later stage.

> into all the machine vectors.
>
> Thanks,
>
> Alex

thx, eddie
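The layering mentioned above could look roughly like the sketch below.
This is only an illustration under the assumptions of this thread; the
struct and function names are invented and do not match the real
arch/ia64 machvec or paravirt definitions. The machine vector stays a
platform abstraction (dig, SGI, and so on), while pv_ops sits one level
below it and abstracts the privileged operations, so dom0 keeps the
bare-metal machine vector of the box it runs on and a domU can be shown a
generic dig-like vector; only the pv layer underneath differs.

/*
 * Sketch only: a machine vector whose hooks are written on top of a
 * pv layer.  Hypothetical names, userspace C for illustration.
 */
#include <stdio.h>

/* pv layer: how the privileged part of an operation is performed. */
struct pv_ops_sketch {
        void (*send_phys_ipi)(int cpu, int vector);
};

static void native_send_phys_ipi(int cpu, int vector)
{
        printf("native: write vector %d for cpu %d to the processor SAPIC\n",
               vector, cpu);
}

static void xen_send_phys_ipi(int cpu, int vector)
{
        printf("xen: hypercall to deliver vector %d to vcpu %d\n", vector, cpu);
}

/* Chosen at boot: native_* on bare metal, xen_* when running on Xen. */
static struct pv_ops_sketch pv_ops = { .send_phys_ipi = native_send_phys_ipi };

/* Machine vector: platform-level policy, layered on top of pv_ops. */
struct machvec_sketch {
        const char *name;
        void (*send_ipi)(int cpu, int vector);
};

/* Generic dig-style platform: simply forwards to the pv layer. */
static void dig_send_ipi(int cpu, int vector)
{
        pv_ops.send_phys_ipi(cpu, vector);
}

/* An SGI-style platform could add its own routing or NUMA handling
 * here, still ending up in pv_ops for the privileged part. */
static void sgi_send_ipi(int cpu, int vector)
{
        printf("sgi: pick target node for cpu %d\n", cpu);
        pv_ops.send_phys_ipi(cpu, vector);
}

static struct machvec_sketch machvec_dig = { "dig", dig_send_ipi };
static struct machvec_sketch machvec_sgi = { "sgi", sgi_send_ipi };

int main(void)
{
        /* dom0 on an SGI box: the real platform machine vector, with
         * the pv layer underneath selected at boot. */
        machvec_sgi.send_ipi(2, 0xf0);

        /* domU: a generic dig-like vector with the xen pv layer beneath. */
        pv_ops.send_phys_ipi = xen_send_phys_ipi;
        machvec_dig.send_ipi(1, 0xf0);

        printf("machine vectors used: %s and %s\n",
               machvec_sgi.name, machvec_dig.name);
        return 0;
}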