
[Xen-ia64-devel] Re: [PATCH] xenoprof/ia64 support



Ping.
If there are no objections, please apply this patch.
Given that xenoprof/x86 support is already included,
it's reasonable to add xenoprof/ia64 support as well. The patch doesn't
break the existing code because every change is guarded by
"if (!no_xen)".

thanks,

On Mon, Dec 17, 2007 at 04:38:59PM +0900, Isaku Yamahata wrote:
> Hello.
> 
> Here is the patch for xenoprof/ia64 support.
> In fact xenoprof/ia64 has been supported for a long time, but I hesitated
> to post this patch because the results were sometimes wrong.
> That has now been fixed, and I expect the patches to be
> merged soon.
> BTW, the patch to opd_interface.h for DOMAIN_SWITCH_CODE
> has been posted. Is it possible to add a defined(__ia64__) check?
> 
> 
> Design note on xenoprof/ia64:
> In the perfmon model, performance monitoring is per-thread: the
> Linux kernel owns the PMU and manages PMU context switches.
> For system-wide profiling, a thread is created on each physical CPU,
> and that thread manages the physical PMU.
> On Xen/IA64, by contrast, the hypervisor owns the PMU, so the perfmon
> driver in the Linux kernel has been patched to issue the PERFMON
> hypercall, asking Xen/IA64 to manipulate the PMU on its behalf.
> Creating a thread on each virtual CPU therefore makes no sense for
> xenoprof. Instead, a single context is allocated and the perfmon driver
> is called only once for xenoprof system-wide profiling. (A conceptual
> sketch contrasting the two models follows the quoted patch below.)
> 
> thanks,
> 
> Signed-off-by: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
> 
> diff -r a5ef8c5f641e -r 394a663e3f71 daemon/opd_perfmon.c
> --- a/daemon/opd_perfmon.c    Mon Dec 17 17:50:15 2007 +0900
> +++ b/daemon/opd_perfmon.c    Mon Dec 17 18:15:40 2007 +0900
> @@ -380,6 +380,7 @@ static void wait_for_child(struct child 
>       close(child->up_pipe[1]);
>  }
>  
> +static struct child* xen_ctx;
>  
>  void perfmon_init(void)
>  {
> @@ -388,6 +389,24 @@ void perfmon_init(void)
>  
>       if (cpu_type == CPU_TIMER_INT)
>               return;
> +
> +     if (!no_xen) {
> +             xen_ctx = xmalloc(sizeof(struct child));
> +             xen_ctx->pid = getpid();
> +             xen_ctx->up_pipe[0] = -1;
> +             xen_ctx->up_pipe[1] = -1;
> +             xen_ctx->sigusr1 = 0;
> +             xen_ctx->sigusr2 = 0;
> +             xen_ctx->sigterm = 0;
> +
> +             create_context(xen_ctx);
> +
> +             write_pmu(xen_ctx);
> +             
> +             load_context(xen_ctx);
> +             return;
> +     }
> +     
>  
>       nr = sysconf(_SC_NPROCESSORS_ONLN);
>       if (nr == -1) {
> @@ -431,6 +450,9 @@ void perfmon_exit(void)
>       if (cpu_type == CPU_TIMER_INT)
>               return;
>  
> +     if (!no_xen)
> +             return;
> +
>       for (i = 0; i < nr_cpus; ++i) {
>               kill(children[i].pid, SIGKILL);
>               waitpid(children[i].pid, NULL, 0);
> @@ -445,6 +467,11 @@ void perfmon_start(void)
>       if (cpu_type == CPU_TIMER_INT)
>               return;
>  
> +     if (!no_xen) {
> +             perfmon_start_child(xen_ctx->ctx_fd);
> +             return;
> +     }
> +
>       for (i = 0; i < nr_cpus; ++i)
>               kill(children[i].pid, SIGUSR1);
>  }
> @@ -457,6 +484,11 @@ void perfmon_stop(void)
>       if (cpu_type == CPU_TIMER_INT)
>               return;
>  
> +     if (!no_xen) {
> +             perfmon_stop_child(xen_ctx->ctx_fd);
> +             return;
> +     }
> +     
>       for (i = 0; i < nr_cpus; ++i)
>               kill(children[i].pid, SIGUSR2);
>  }
> 
> 
> 
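
To make the design note above concrete, here is a conceptual sketch in C
contrasting the two models. It uses stand-in names only (pmu_ctx,
init_native, init_xen) and is not the real perfmon API; in the patch itself
this corresponds to the single xen_ctx allocated in perfmon_init() versus
the per-CPU children started on the native path.

#include <stdio.h>

struct pmu_ctx { int id; };	/* stand-in for a perfmon context */

/* Native system-wide profiling: one context per physical CPU, each
 * driving that CPU's PMU through the kernel perfmon driver. */
static void init_native(int nr_cpus)
{
	int cpu;
	for (cpu = 0; cpu < nr_cpus; ++cpu) {
		struct pmu_ctx ctx = { cpu };
		printf("native: context %d manages the PMU of CPU %d\n",
		       ctx.id, cpu);
	}
}

/* Xen/IA64: a single context; the kernel perfmon driver forwards the
 * request via the PERFMON hypercall and Xen programs the physical PMUs. */
static void init_xen(void)
{
	struct pmu_ctx ctx = { 0 };
	printf("xen: context %d covers the whole machine via the hypervisor\n",
	       ctx.id);
}

int main(void)
{
	init_native(4);
	init_xen();
	return 0;
}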

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

