| author | Andi Kleen <ak@linux.intel.com> | 2017-03-19 21:02:22 +1100 |
|---|---|---|
| committer | Stephen Rothwell <sfr@canb.auug.org.au> | 2017-03-23 15:01:15 +1100 |
| commit | 1d7c416fe8c3f642af0495bf3c6b8e2dbcb96325 (patch) | |
| tree | 4311edd50790b818b30387a1b5e35c9511648572 /kernel | |
| parent | 0bbf6111d7722a445033577e642cce85e20410bd (diff) | |
kernel/sched/fair.c: uninline __update_load_avg()
This is a very complex function that is called from multiple places. It
is unlikely that whether or not it is inlined makes any difference to its
run time.
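As a standalone illustration of the trade-off (a minimal sketch with hypothetical names, not the kernel code itself): forcing inlining of a large function duplicates its body at every call site, while a plain `static` definition lets the compiler keep a single shared copy.

```c
/* inline_size.c - hypothetical sketch of the text-size cost of forced
 * inlining.  Build both ways and compare the object's text size:
 *   cc -O2 -c inline_size.c && size inline_size.o */
#include <stdint.h>

/* Set to 1 to mimic the pre-patch state: every caller below then gets
 * a private copy of the loop body; with 0 the compiler may emit one
 * shared out-of-line copy instead. */
#define FORCE_INLINE 0

#if FORCE_INLINE
static inline __attribute__((always_inline)) uint64_t
#else
static uint64_t
#endif
decay_load(uint64_t val, unsigned int periods)
{
	uint64_t sum = 0;
	unsigned int i;

	/* Stand-in for a large body: geometric decay with a factor of
	 * roughly 0.9785 per period (about 0.5 after 32 periods),
	 * similar in spirit to the PELT decay computed by
	 * __update_load_avg(). */
	for (i = 0; i < periods; i++) {
		val = (val * 1002) >> 10;
		sum += val;
	}
	return sum;
}

/* Three call sites, standing in for the multiple callers of
 * __update_load_avg() in kernel/sched/fair.c. */
uint64_t site_a(uint64_t v) { return decay_load(v, 32); }
uint64_t site_b(uint64_t v) { return decay_load(v, 64); }
uint64_t site_c(uint64_t v) { return decay_load(v, 128); }
```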
This saves around 13k text in my kernel (size(1) output below; a dec
delta of 25568136 - 25554310 = 13826 bytes):
| text | data | bss | dec | hex | filename |
|---|---|---|---|---|---|
| 9083992 | 5367600 | 11116544 | 25568136 | 1862388 | vmlinux-before-load-avg |
| 9070166 | 5367600 | 11116544 | 25554310 | 185ed86 | vmlinux-load-avg |
Link: http://lkml.kernel.org/r/20170315021431.13107-4-andi@firstfloor.org
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/sched/fair.c | 2 |
1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2805bd7c8994..08983eff70d0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2848,7 +2848,7 @@ static u32 __compute_runnable_contrib(u64 n)
  * load_avg = u_0` + y*(u_0 + u_1*y + u_2*y^2 + ... )
  *          = u_0 + u_1*y + u_2*y^2 + ... [re-labeling u_i --> u_{i+1}]
  */
-static __always_inline int
+static int
 __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 		  unsigned long weight, int running, struct cfs_rq *cfs_rq)
 {
```
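For context, `__always_inline` expands to the GCC attribute that forces inlining regardless of the optimizer's size heuristics; in kernels of this era it is defined in include/linux/compiler-gcc.h roughly as follows (quoted from memory, so treat as approximate):

```c
/* Approximate definition from include/linux/compiler-gcc.h: the
 * attribute makes GCC inline the function at every call site. */
#define __always_inline	inline __attribute__((always_inline))
```

With the attribute removed, the compiler is still free to inline `__update_load_avg()` at hot call sites, but it is no longer forced to duplicate the body everywhere.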