path: root/kernel/sched/sched.h
author    Paul Turner <pjt@google.com>          2012-08-23 07:14:28 -0700
committer Viresh Kumar <viresh.kumar@linaro.org> 2012-10-03 14:21:47 +0530
commit    5f8d00eec3362feb59a6847a40a3699d1ed4e6a5 (patch)
tree      8c73d2074e4b7356d8ea8400057380bf4c753bf0 /kernel/sched/sched.h
parent    2387653edf7d9e1b5c15b9c5043bb621cb529f17 (diff)
sched: account for blocked load waking back up
When a running entity blocks, we migrate its tracked load to
cfs_rq->blocked_load_avg. In the sleep case this occurs while holding
rq->lock, and so it is a natural transition. Wake-ups, however, are
potentially asynchronous in the presence of migration, so special care
must be taken.

We use an atomic counter to track such migrated load, taking care to
match it against the previously introduced decay counters so that we do
not migrate too much load.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
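The pattern the message describes is a lock-free accumulator drained under
the owner's lock: a remote CPU that migrates a waking task away records the
task's blocked-load contribution in the atomic removed_load counter, and the
next update performed while holding rq->lock exchanges the counter back to
zero and subtracts the drained amount from blocked_load_avg. Below is a
minimal user-space sketch of that pattern using C11 atomics; the struct
layout and the helper names remove_load()/sync_removed_load() are
hypothetical, chosen for illustration, and are not the kernel's API.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* Simplified stand-in for the relevant cfs_rq fields. */
    struct cfs_rq_sketch {
            uint64_t blocked_load_avg;          /* written only under rq->lock  */
            atomic_uint_fast64_t removed_load;  /* written lock-free by remote CPUs */
    };

    /*
     * Remote side (hypothetical helper): a blocked task wakes up and
     * migrates away. The source rq->lock is not held, so the load to
     * remove is only recorded in the atomic counter.
     */
    static void remove_load(struct cfs_rq_sketch *cfs_rq, uint64_t load)
    {
            atomic_fetch_add(&cfs_rq->removed_load, load);
    }

    /*
     * Local side (hypothetical helper): called with rq->lock held.
     * Drain the counter with an atomic exchange so that concurrent
     * additions are not lost, then apply the result to blocked_load_avg.
     */
    static void sync_removed_load(struct cfs_rq_sketch *cfs_rq)
    {
            uint64_t removed = atomic_exchange(&cfs_rq->removed_load, 0);

            if (removed > cfs_rq->blocked_load_avg)
                    removed = cfs_rq->blocked_load_avg; /* guard against underflow */
            cfs_rq->blocked_load_avg -= removed;
    }

    int main(void)
    {
            struct cfs_rq_sketch rq = { .blocked_load_avg = 1024 };

            atomic_init(&rq.removed_load, 0);

            remove_load(&rq, 256);   /* waking task migrates its load away   */
            sync_removed_load(&rq);  /* next update under rq->lock drains it */

            printf("blocked_load_avg = %" PRIu64 "\n", rq.blocked_load_avg);
            return 0;
    }

Note that the sketch omits the decay matching the commit message mentions:
in the real patch, the drained amount is additionally decayed according to
the cfs_rq's decay counter before being subtracted, so that a contribution
removed some time in the past is not subtracted at more than its
already-decayed value.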
Diffstat (limited to 'kernel/sched/sched.h')
 kernel/sched/sched.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d5f2a21010a..4b7bc6ecad7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -230,7 +230,7 @@ struct cfs_rq {
* the FAIR_GROUP_SCHED case).
*/
u64 runnable_load_avg, blocked_load_avg;
- atomic64_t decay_counter;
+ atomic64_t decay_counter, removed_load;
u64 last_decay;
#endif
#ifdef CONFIG_FAIR_GROUP_SCHED