[Xen-devel] INIT ipi under Xen
Hello,

Does anyone know whether Xen-4.0.1, after it has booted and created dom0, does anything special to prevent an interprocessor interrupt with delivery mode INIT from affecting the target processors? Judging by the Intel Software Developer's Manual there is no documented way to do this; but maybe something less official exists?

I use the following piece of code (in hypervisor context) to trigger the IPI:

    unsigned long send_init_ipi(void)
    {
        unsigned long ret;

        rdmsrl(IA32_APIC_BASE, ret);
        xen_printk("IA32_APIC_BASE=0x%lx\n", ret);
        if (ret & (1 << 8)) {   /* bit 8 of the MSR set: I am the BSP */
            /* load ICR1 (ICR low): INIT, level assert, all-excluding-self */
            *(volatile unsigned int *)(APIC_BASE + 0x300) = 0xc4500;
            asm volatile("wbinvd"); /* just in case APIC_BASE is cached WB */
        }
        return ret;
    }

This code does not seem to have any effect (even when IA32_APIC_BASE=0xfee00900 is logged): I can still observe both CPUs running, and subsequent runs of this code sometimes log IA32_APIC_BASE=0xfee00900 and sometimes 0xfee00800, i.e. it executes sometimes on the BSP and sometimes on the AP. If I send "ordinary" interrupts instead (by e.g. writing 0x4169 to ICR1) I can see the interrupt counts in /proc/interrupts increase. Moreover, if I run the above code while an HVM guest is running, the guest is killed, with a note in the Xen log that an unexpected exit_reason=3 (EXIT_REASON_INIT) was observed. So everything indicates that the above code does indeed generate an INIT, but that it is somehow ignored by the destination. Even writing 0x84500 (the all-including-self shorthand, i.e. INIT to every CPU) has no effect. Similar code run on bare-metal Linux behaves more sanely.

Can anyone offer a clue? This is on an Intel Q45 board, an E8400 CPU, and 64-bit Xen-4.0.1.
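For reference, here is how the two ICR values used above decode, following the ICR low-dword layout in the SDM. The macro names below are only illustrative (modelled on Linux's apicdef.h, not taken from Xen's headers):

    #define APIC_DM_INIT      0x00500  /* delivery mode 101b = INIT      */
    #define APIC_DM_STARTUP   0x00600  /* delivery mode 110b = SIPI      */
    #define APIC_INT_ASSERT   0x04000  /* bit 14: level = assert         */
    #define APIC_DEST_ALLINC  0x80000  /* shorthand 10b: all incl. self  */
    #define APIC_DEST_ALLBUT  0xC0000  /* shorthand 11b: all excl. self  */

    /* 0xc4500 == APIC_DEST_ALLBUT | APIC_INT_ASSERT | APIC_DM_INIT */
    /* 0x84500 == APIC_DEST_ALLINC | APIC_INT_ASSERT | APIC_DM_INIT */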
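One more data point that may be worth collecting: per the SDM, bit 12 of the ICR low dword is the delivery-status bit (1 = send pending), so it can be polled after the write to check whether the local APIC at least dispatches the INIT. A minimal sketch, under the same assumptions as the code above (APIC_BASE maps the local APIC registers uncacheably; the function name is made up, and cpu_relax() stands in for any pause/spin hint):

    static void send_init_all_but_self(void)
    {
        volatile unsigned int *icr_lo =
            (volatile unsigned int *)(APIC_BASE + 0x300);

        *icr_lo = 0xc4500;            /* INIT, assert, all-excluding-self */
        while (*icr_lo & (1u << 12))  /* delivery status: 1 = pending...  */
            cpu_relax();              /* ...spin until the IPI is sent    */
    }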
RW

PS Similarly, sending a SIPI has no effect either; but then, I would expect a SIPI to be acted upon only when the target is in the wait-for-SIPI state. However, I have not found this stated anywhere - can someone confirm this assumption more authoritatively?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel