author	Benjamin Walsh <benjamin.walsh@windriver.com>	2016-12-02 12:18:39 -0500
committer	Kumar Gala <kumar.gala@linaro.org>	2016-12-02 18:07:29 +0000
commit	c9235e2833fbe106a082766a3df1180e4c1bbbd5 (patch)
tree	061d82e4739e4ea1ac6448af162167717f592585
parent	d97299c9fa04c1e0c8ecb3135a1823e341e09e57 (diff)
x86: fix irq_lock/unlock ordering bug
Memory accesses could be reordered before an irq_lock() or after an
irq_unlock() without the memory barriers.

See commit 15bc537712abaeb5dfbb27ced924fe6ccc1f611c for the ARM fix for a
complete description of the issue and fix.

Change-Id: Ic92a6b33f62a938d2252d68eccc55a5fb07c9114
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
(cherry picked from commit 1f8125a41683eb96aca9414616b6eaec7f05e6ac)
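For illustration only (not part of this commit), a minimal sketch of the hazard the message describes, assuming the Zephyr irq_lock()/irq_unlock() API named above; the function and variable names here are hypothetical:

	/* Hypothetical example: a counter protected by an interrupt lock.
	 * 'shared_count' and 'bump()' are illustrative names, not Zephyr code.
	 */
	static int shared_count;

	void bump(void)
	{
		unsigned int key = irq_lock();	/* interrupts disabled */

		shared_count++;			/* must stay inside the locked region */

		/* If the sti asm in irq_unlock() carries no "memory" clobber,
		 * GCC may legally sink the shared_count store below it, i.e.
		 * complete the update after interrupts are re-enabled.
		 */
		irq_unlock(key);
	}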
-rw-r--r--	include/arch/x86/asm_inline_gcc.h	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/arch/x86/asm_inline_gcc.h b/include/arch/x86/asm_inline_gcc.h
index f84ed21d7..97880c537 100644
--- a/include/arch/x86/asm_inline_gcc.h
+++ b/include/arch/x86/asm_inline_gcc.h
@@ -82,7 +82,7 @@ static ALWAYS_INLINE void _do_irq_unlock(void)
 {
 	__asm__ volatile (
 		"sti;\n\t"
-		: :
+		: : : "memory"
 		);
 }
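For reference, this is how _do_irq_unlock() reads after the patch, reconstructed from the hunk above (the ALWAYS_INLINE macro comes from the surrounding Zephyr headers):

	static ALWAYS_INLINE void _do_irq_unlock(void)
	{
		__asm__ volatile (
			"sti;\n\t"
			: : : "memory"	/* compiler barrier: GCC may not cache
					 * memory values across this asm or move
					 * loads/stores past it */
			);
	}

The "memory" clobber is a compiler-level barrier only: it emits no extra instructions, it simply stops GCC from reordering memory accesses across the asm statement, so they stay inside the region where interrupts are disabled.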