x86: fix rdrand asm()

Just learned the hard way that at least for non-volatile asm()s gcc
indeed does what the documentation says: It may move it across jumps
(i.e. ahead of the cpu_has() check). While the documentation claims
that this can also happen for volatile asm()s, if that was the case
we'd have many more problems in our code (and e.g. Linux would too).

Signed-off-by: Jan Beulich

--- a/xen/include/asm-x86/random.h
+++ b/xen/include/asm-x86/random.h
@@ -8,7 +8,7 @@ static inline unsigned int arch_get_rand
     unsigned int val = 0;
 
     if ( cpu_has(&current_cpu_data, X86_FEATURE_RDRAND) )
-        asm ( ".byte 0x0f,0xc7,0xf0" : "+a" (val) );
+        __asm__ __volatile__ ( ".byte 0x0f,0xc7,0xf0" : "+a" (val) );
 
     return val;
 }