author    Peter Zijlstra <peterz@infradead.org>  2015-01-05 11:18:10 +0100
committer Alex Shi <alex.shi@linaro.org>         2015-06-12 17:48:15 +0800
commit    bed25201031a133c6b79f00f2a1aa7f5cf2e7e7d
tree      24bdbc7097df06424f2be9a44c28934889c318b4
parent    ca6b5d1973d62e466af130421349fa66df6af181
sched/core: Validate rq_clock*() serialization
rq->clock{,_task} are serialized by rq->lock, verify this.
One immediate failure is the usage in scale_rt_capacity(), so 'annotate'
that for now; there's more 'funny' stuff in there. Maybe change rq->lock
into a raw_seqlock_t?
(Only 32-bit is affected)
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20150105103554.361872747@infradead.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: umgwanakikbuti@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit cebde6d681aa45f96111cfcffc1544cf2a0454ff)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
kernel/sched/fair.c
 kernel/sched/fair.c  | 5 ++++-
 kernel/sched/sched.h | 7 +++++++
 2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1fc9994c6f7..470b4d8de03 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5663,9 +5663,12 @@ static unsigned long scale_rt_capacity(int cpu)
 	 */
 	age_stamp = ACCESS_ONCE(rq->age_stamp);
 	avg = ACCESS_ONCE(rq->rt_avg);
+	delta = __rq_clock_broken(rq) - age_stamp;
 
-	total = sched_avg_period() + (rq_clock(rq) - age_stamp);
+	if (unlikely(delta < 0))
+		delta = 0;
+	total = sched_avg_period() + delta;
 
 	used = div_u64(avg, total);
 
 	if (likely(used < SCHED_CAPACITY_SCALE))
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e8889e934c5..68e1d7d98a2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -548,13 +548,20 @@ DECLARE_PER_CPU(struct rq, runqueues);
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
 #define raw_rq()		(&__raw_get_cpu_var(runqueues))
 
+static inline u64 __rq_clock_broken(struct rq *rq)
+{
+	return ACCESS_ONCE(rq->clock);
+}
+
 static inline u64 rq_clock(struct rq *rq)
 {
+	lockdep_assert_held(&rq->lock);
 	return rq->clock;
 }
 
 static inline u64 rq_clock_task(struct rq *rq)
 {
+	lockdep_assert_held(&rq->lock);
 	return rq->clock_task;
 }