From afe2c511fb2d75f1515081ff1be15bd79cfe722d Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Tue, 14 Dec 2010 16:21:17 +0100
Subject: workqueue: convert cancel_rearming_delayed_work[queue]() users to cancel_delayed_work_sync()

cancel_rearming_delayed_work[queue]() has been superseded by
cancel_delayed_work_sync() quite some time ago.  Convert all the
in-kernel users.  The conversions are completely equivalent and
trivial.

Signed-off-by: Tejun Heo
Acked-by: "David S. Miller"
Acked-by: Greg Kroah-Hartman
Acked-by: Evgeniy Polyakov
Cc: Jeff Garzik
Cc: Benjamin Herrenschmidt
Cc: Mauro Carvalho Chehab
Cc: netdev@vger.kernel.org
Cc: Anton Vorontsov
Cc: David Woodhouse
Cc: "J. Bruce Fields"
Cc: Neil Brown
Cc: Alex Elder
Cc: xfs-masters@oss.sgi.com
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: Andrew Morton
Cc: netfilter-devel@vger.kernel.org
Cc: Trond Myklebust
Cc: linux-nfs@vger.kernel.org
---
 mm/slab.c   | 2 +-
 mm/vmstat.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

(limited to 'mm')

diff --git a/mm/slab.c b/mm/slab.c
index b1e40dafbab..dc983867682 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1293,7 +1293,7 @@ static int __cpuinit cpuup_callback(struct notifier_block *nfb,
 		 * anything expensive but will only modify reap_work
 		 * and reschedule the timer.
 		 */
-		cancel_rearming_delayed_work(&per_cpu(slab_reap_work, cpu));
+		cancel_delayed_work_sync(&per_cpu(slab_reap_work, cpu));
 		/* Now the cache_reaper is guaranteed to be not running. */
 		per_cpu(slab_reap_work, cpu).work.func = NULL;
 		break;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 42eac4d3321..d1f3cb63a8a 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1033,7 +1033,7 @@ static int __cpuinit vmstat_cpuup_callback(struct notifier_block *nfb,
 		break;
 	case CPU_DOWN_PREPARE:
 	case CPU_DOWN_PREPARE_FROZEN:
-		cancel_rearming_delayed_work(&per_cpu(vmstat_work, cpu));
+		cancel_delayed_work_sync(&per_cpu(vmstat_work, cpu));
 		per_cpu(vmstat_work, cpu).work.func = NULL;
 		break;
 	case CPU_DOWN_FAILED:
--
cgit v1.2.3
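
For context, a minimal sketch (not part of the patch above) of the pattern being converted: a delayed work item that rearms itself from its own handler, exactly the case the old cancel_rearming_delayed_work() existed for, torn down with cancel_delayed_work_sync().  The module, work function, and interval names here (demo_*, DEMO_PERIOD_MS) are hypothetical and chosen only for illustration.

/*
 * Illustrative sketch only; names are made up.  cancel_delayed_work_sync()
 * waits for a running callback to finish and prevents it from requeueing
 * itself, which is the guarantee the slab and vmstat reapers rely on.
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

#define DEMO_PERIOD_MS	500	/* arbitrary rearm interval */

static void demo_work_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(demo_dwork, demo_work_fn);

static void demo_work_fn(struct work_struct *work)
{
	pr_info("demo work ran, rearming\n");
	/* The handler rearms itself, like the slab/vmstat reapers do. */
	schedule_delayed_work(&demo_dwork, msecs_to_jiffies(DEMO_PERIOD_MS));
}

static int __init demo_init(void)
{
	schedule_delayed_work(&demo_dwork, msecs_to_jiffies(DEMO_PERIOD_MS));
	return 0;
}

static void __exit demo_exit(void)
{
	/*
	 * Waits for any pending or running instance and blocks it from
	 * rescheduling itself, so the work is guaranteed idle afterwards.
	 */
	cancel_delayed_work_sync(&demo_dwork);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");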