author | Kevin Hilman <khilman@linaro.org> | 2015-04-08 14:32:07 -0700
---|---|---
committer | Jon Medhurst <tixy@linaro.org> | 2015-04-14 12:08:16 +0100
commit | c1f0c1f51bf7b9111de27c3cdbea9b647351bf7b (patch) |
tree | 5b20b095079b0578b249a0e721858f83ed2fe4bf |
parent | e482d95c1d1888f34cc3f7e6778806cfda6174ff (diff) |
sched: hmp: fix spinlock recursion in active migration

Commit cd5c2cc93d3d (hmp: Remove potential for task_struct access
race) introduced a put_task_struct() to prevent races, but in doing so
introduced potential spinlock recursion. (This change was further
consolidated in commit 0baa5811bacf -- sched: hmp: unify active
migration code.)
Unfortunately, the put_task_struct() is done while the runqueue
spinlock is held, but put_task_struct() can also cause a reschedule,
which acquires the runqueue lock recursively.
To fix, move the put_task_struct() outside the runqueue spinlock.
Reported-by: Victor Lixin <victor.lixin@hisilicon.com>
Cc: Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@linaro.org>
Cc: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Jon Medhurst <tixy@linaro.org>
Reviewed-by: Alex Shi <alex.shi@linaro.org>
Reviewed-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
-rw-r--r-- | kernel/sched/fair.c | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fd57f0be5b4..22ce83eb73f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6462,10 +6462,10 @@ static int __do_active_load_balance_cpu_stop(void *data, bool check_sd_lb_flag)
 	rcu_read_unlock();
 	double_unlock_balance(busiest_rq, target_rq);
 out_unlock:
-	if (!check_sd_lb_flag)
-		put_task_struct(p);
 	busiest_rq->active_balance = 0;
 	raw_spin_unlock_irq(&busiest_rq->lock);
+	if (!check_sd_lb_flag)
+		put_task_struct(p);
 	return 0;
 }