author		Patrick Bellasi <patrick.bellasi@arm.com>	2015-06-26 09:55:06 +0100
committer	Punit Agrawal <punit.agrawal@arm.com>	2016-03-21 14:57:34 +0000
commit		d24dad20431ada01a0f60de20b9eabd6f2c441ac (patch)
tree		ec0e1ec363d0fa381fbf0a1ff348a8befe6be269
parent		82a3b9a3c6b40c8048b73e04aab04603cecf8ca1 (diff)
sched/fair: add boosted CPU usage
The CPU usage signal is used by the scheduler as an estimate of the
overall bandwidth currently allocated on a CPU. When SchedDVFS is in
use, this signal affects the selection of the operating point (OPP)
required to accommodate all the workload allocated to a CPU.
A convenient and minimally intrusive way to boost the performance of
tasks running on a CPU is to boost the CPU usage signal each time it
is used to select an OPP.
This patch introduces a new function:
   boosted_cpu_util(cpu)
which returns a boosted value for the usage of the specified CPU.
The margin added to the original usage is:
1. computed according to the "boosting strategy" in use
2. proportional to the system-wide boost value set via the provided
   user-space interface
The boosted signal is used transparently by SchedDVFS whenever it
needs an estimate of the capacity required by a CPU.
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
-rw-r--r--	kernel/sched/fair.c	33
1 file changed, 32 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1b6a57837433..eff548fae43e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4144,6 +4144,8 @@ static inline void hrtick_update(struct rq *rq)
 }
 #endif
 
+static inline unsigned long boosted_cpu_util(int cpu);
+
 static void update_capacity_of(int cpu)
 {
 	unsigned long req_cap;
@@ -4152,7 +4154,8 @@ static void update_capacity_of(int cpu)
 		return;
 
 	/* Convert scale-invariant capacity to cpu. */
-	req_cap = cpu_util(cpu) * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
+	req_cap = boosted_cpu_util(cpu);
+	req_cap = req_cap * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
 
 	set_cfs_cpu_capacity(cpu, true, req_cap);
 }
@@ -5110,8 +5113,36 @@ schedtune_margin(unsigned long signal, unsigned long boost)
 	return margin;
 }
 
+static inline unsigned int
+schedtune_cpu_margin(unsigned long util)
+{
+	unsigned int boost = get_sysctl_sched_cfs_boost();
+
+	if (boost == 0)
+		return 0;
+
+	return schedtune_margin(util, boost);
+}
+
+#else /* CONFIG_SCHED_TUNE */
+
+static inline unsigned int
+schedtune_cpu_margin(unsigned long util)
+{
+	return 0;
+}
+
 #endif /* CONFIG_SCHED_TUNE */
 
+static inline unsigned long
+boosted_cpu_util(int cpu)
+{
+	unsigned long util = cpu_util(cpu);
+	unsigned long margin = schedtune_cpu_margin(util);
+
+	return util + margin;
+}
+
 /*
  * find_idlest_group finds and returns the least busy CPU group within the
  * domain.