author     Benjamin Herrenschmidt <benh@kernel.crashing.org>     2007-10-18 23:39:14 -0700
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-10-19 11:53:34 -0700
commit     1c7037db50ebecf3d5cfbf7082daa5d97d900fef (patch)
tree       1843c417160b79c3f79a54d546ddcf5ccdb1b44b /Documentation/cachetlb.txt
parent     22124c9999f00340b062fff740db30187bf18454 (diff)
remove unused flush_tlb_pgtables
Nobody uses flush_tlb_pgtables anymore; this patch removes all remaining
traces of it from all archs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
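For context, the hook being removed had roughly the following shape. This is an
illustrative sketch rather than code quoted from the patch: the per-arch
definitions differ in detail, and the CONFIG_EXAMPLE_NOOP_ARCH guard below is
invented purely to show both variants side by side. On most architectures
flush_tlb_pgtables() was defined away as a no-op, while architectures such as
sparc64, which cache the lowest-level software page tables in a virtually
mapped array, declared a real implementation:

    struct mm_struct;   /* forward declaration; kernel build context assumed */

    #ifdef CONFIG_EXAMPLE_NOOP_ARCH   /* hypothetical guard, not a real Kconfig symbol */
    /* Most architectures do not cache page tables in the TLB, so the hook was a no-op. */
    #define flush_tlb_pgtables(mm, start, end)  do { } while (0)
    #else
    /*
     * Architectures whose TLB miss handlers walk a virtually mapped copy of
     * the software page tables (sparc64 is the example named in cachetlb.txt)
     * provided a real flush with this prototype, invoked when parts of the
     * page table tree were freed.
     */
    void flush_tlb_pgtables(struct mm_struct *mm,
                            unsigned long start, unsigned long end);
    #endif

With no remaining callers, both variants are simply deleted; the documentation
hunk below renumbers the remaining flushing interfaces accordingly.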
Diffstat (limited to 'Documentation/cachetlb.txt')
-rw-r--r--  Documentation/cachetlb.txt  27
1 file changed, 2 insertions, 25 deletions
diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index 552cabac060..da42ab414c4 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -87,30 +87,7 @@ changes occur:
This is used primarily during fault processing.
-5) void flush_tlb_pgtables(struct mm_struct *mm,
- unsigned long start, unsigned long end)
-
- The software page tables for address space 'mm' for virtual
- addresses in the range 'start' to 'end-1' are being torn down.
-
- Some platforms cache the lowest level of the software page tables
- in a linear virtually mapped array, to make TLB miss processing
- more efficient. On such platforms, since the TLB is caching the
- software page table structure, it needs to be flushed when parts
- of the software page table tree are unlinked/freed.
-
- Sparc64 is one example of a platform which does this.
-
- Usually, when munmap()'ing an area of user virtual address
- space, the kernel leaves the page table parts around and just
- marks the individual pte's as invalid. However, if very large
- portions of the address space are unmapped, the kernel frees up
- those portions of the software page tables to prevent potential
- excessive kernel memory usage caused by erratic mmap/mmunmap
- sequences. It is at these times that flush_tlb_pgtables will
- be invoked.
-
-6) void update_mmu_cache(struct vm_area_struct *vma,
+5) void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t pte)
At the end of every page fault, this routine is invoked to
@@ -123,7 +100,7 @@ changes occur:
translations for software managed TLB configurations.
The sparc64 port currently does this.
-7) void tlb_migrate_finish(struct mm_struct *mm)
+6) void tlb_migrate_finish(struct mm_struct *mm)
This interface is called at the end of an explicit
process migration. This interface provides a hook