author    Frederic Weisbecker <fweisbec@gmail.com>    2009-05-07 22:51:20 +0200
committer Frederic Weisbecker <fweisbec@gmail.com>    2009-09-14 07:18:14 +0200
commit    e43d3f21c502dec786f2885a75e25859f18d6ffa
tree      5a1bc7a8f997da264bb4fbefc1279896e2530261 /include/linux
parent    6e3647acb4f200add1d8e0203514f7ac925ae463
kill-the-BKL/reiserfs: add reiserfs_cond_resched()
Usually, when we call cond_resched(), we want the write lock to be released and then reacquired once we return from scheduling. Not only does this follow the previous BKL-based locking scheme, it also lets other waiters take the lock.

But if we aren't going to reschedule, as in the !TIF_NEED_RESCHED case, it is useless to release the lock. Worse, releasing and reacquiring the lock when it is not needed creates useless contention. Also, if someone takes the lock while we are modifying or reading the tree, there is a good chance we will have to retry our operation, e.g. if the block we were seeking has moved.

So this patch introduces a helper which only unlocks the write lock if we are actually going to schedule.

[ Impact: prepare to inject less lock contention and fewer tree operation attempts ]

Reported-by: Andi Kleen <andi@firstfloor.org>
Cc: Jeff Mahoney <jeffm@suse.com>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
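For context, the pattern this helper replaces looks roughly like the following open-coded sequence. This is an illustrative sketch, not code taken from this commit; it shows how the write lock is dropped and retaken around cond_resched() even when no reschedule is pending:

	/*
	 * Illustrative only: open-coded resched point that unconditionally
	 * releases the write lock around cond_resched(), creating
	 * contention even when need_resched() is false.
	 */
	reiserfs_write_unlock(s);
	cond_resched();
	reiserfs_write_lock(s);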
Diffstat (limited to 'include/linux')
-rw-r--r--  include/linux/reiserfs_fs.h  13
1 file changed, 13 insertions(+), 0 deletions(-)
diff --git a/include/linux/reiserfs_fs.h b/include/linux/reiserfs_fs.h
index fa5dbf307c4..27f4ecc2818 100644
--- a/include/linux/reiserfs_fs.h
+++ b/include/linux/reiserfs_fs.h
@@ -62,6 +62,19 @@ void reiserfs_write_unlock(struct super_block *s);
 int reiserfs_write_lock_once(struct super_block *s);
 void reiserfs_write_unlock_once(struct super_block *s, int lock_depth);
 
+/*
+ * When we schedule, we usually want to also release the write lock,
+ * according to the previous bkl based locking scheme of reiserfs.
+ */
+static inline void reiserfs_cond_resched(struct super_block *s)
+{
+	if (need_resched()) {
+		reiserfs_write_unlock(s);
+		schedule();
+		reiserfs_write_lock(s);
+	}
+}
+
 struct fid;
 
 /* in reading the #defines, it may help to understand that they employ
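
To illustrate how a call site might use the new helper, here is a hypothetical long-running tree operation. The function name and loop below are invented for illustration and are not part of this patch:

/*
 * Hypothetical caller: walk many tree items under the write lock,
 * yielding the CPU (and the lock) only when a reschedule is pending.
 */
static void example_long_tree_walk(struct super_block *s)
{
	int i;

	reiserfs_write_lock(s);
	for (i = 0; i < 10000; i++) {
		/* ... process one tree item under the write lock ... */

		/*
		 * Drop the write lock and schedule only if need_resched()
		 * is true; otherwise keep the lock and avoid needless
		 * contention and retried tree operations.
		 */
		reiserfs_cond_resched(s);
	}
	reiserfs_write_unlock(s);
}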