author	Hillf Danton <dhillf@gmail.com>	2012-01-10 15:08:30 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2012-01-10 16:30:45 -0800
commit	ea5768c74b8e0d6a866508fc6399d5ff958da5e3 (patch)
tree	c9669e800bca28d9b905b6e47c206a5ea3fe149b
parent	1ebb7044c9142c67d1d2b04d84010b4810a43fd8 (diff)
mm/hugetlb.c: avoid bogus counter of surplus huge page
If we have to hand back the newly allocated huge page to page allocator, for any reason, the changed counter should be recovered.

This affects only s390 at present.

Signed-off-by: Hillf Danton <dhillf@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--	mm/hugetlb.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bb7dc405634..ea8c3a4cd2a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -800,7 +800,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
 
 	if (page && arch_prepare_hugepage(page)) {
 		__free_pages(page, huge_page_order(h));
-		return NULL;
+		page = NULL;
 	}
 
 	spin_lock(&hugetlb_lock);
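
For context (reconstructed from the surrounding mm/hugetlb.c code of this era, which is not shown in the hunk): alloc_buddy_huge_page() increments h->nr_huge_pages and h->surplus_huge_pages speculatively before dropping hugetlb_lock, allocates the page, then re-takes the lock and rolls both counters back if page ended up NULL. The old early "return NULL" on arch_prepare_hugepage() failure (which can only happen on s390 at this point) skipped that rollback, leaving the surplus counter bogus. The following is a minimal user-space sketch of that speculative-increment/rollback pattern, not the kernel code itself; fake_alloc(), fake_arch_prepare() and alloc_surplus_page() are illustrative names only.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long nr_huge_pages;
static long surplus_huge_pages;

/* Stand-in for the buddy allocator. */
static void *fake_alloc(void)
{
	return malloc(4096);
}

/* Stand-in for arch_prepare_hugepage(): 0 = success, nonzero = failure. */
static int fake_arch_prepare(void *page)
{
	(void)page;
	return getenv("FAIL_PREPARE") != NULL;
}

static void *alloc_surplus_page(void)
{
	void *page;

	/* Speculatively account for the page we are about to allocate. */
	pthread_mutex_lock(&lock);
	nr_huge_pages++;
	surplus_huge_pages++;
	pthread_mutex_unlock(&lock);

	page = fake_alloc();

	if (page && fake_arch_prepare(page)) {
		free(page);
		/*
		 * The fix: do not return here.  Fall through with
		 * page == NULL so the rollback below still runs.
		 */
		page = NULL;
	}

	pthread_mutex_lock(&lock);
	if (!page) {
		/* Allocation (or preparation) failed: undo the accounting. */
		nr_huge_pages--;
		surplus_huge_pages--;
	}
	pthread_mutex_unlock(&lock);

	return page;
}

int main(void)
{
	void *page = alloc_surplus_page();

	printf("page=%p nr_huge_pages=%ld surplus_huge_pages=%ld\n",
	       page, nr_huge_pages, surplus_huge_pages);
	free(page);
	return 0;
}

Run normally it reports nr_huge_pages=1 and surplus_huge_pages=1; run with FAIL_PREPARE=1 it frees the page and reports 0/0, which is the invariant the early return used to break.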