path: root/arch/arm64/mm
2016-02-05  Merge tag 'v3.14.60' into linux-linaro-lsk-v3.14  [lsk-v3.14-16.02]  (Mark Brown)
This is the 3.14.60 stable release # gpg: Signature made Fri 29 Jan 2016 06:00:43 GMT using RSA key ID 6092693E # gpg: Good signature from "Greg Kroah-Hartman (Linux kernel stable release signing key) <greg@kroah.com>" # gpg: WARNING: This key is not certified with a trusted signature! # gpg: There is no indication that the signature belongs to the owner. # Primary key fingerprint: 647F 2865 4894 E3BD 4571 99BE 38DB BDC8 6092 693E
2016-01-28  arm64: mm: ensure that the zero page is visible to the page table walker  (Will Deacon)
commit 32d6397805d00573ce1fa55f408ce2bca15b0ad3 upstream. In paging_init, we allocate the zero page, memset it to zero and then point TTBR0 to it in order to avoid speculative fetches through the identity mapping. In order to guarantee that the freshly zeroed page is indeed visible to the page table walker, we need to execute a dsb instruction prior to writing the TTBR. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
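For context, a minimal sketch of the ordering this enforces at the end of paging_init() in arch/arm64/mm/mmu.c; function and barrier spellings follow mainline and may differ slightly in this backport tree:

    void *zero_page = early_alloc(PAGE_SIZE);    /* comes back zeroed */

    bootmem_init();

    empty_zero_page = virt_to_page(zero_page);

    /* Ensure the zeroed page is visible to the page table walker */
    dsb(ishst);

    /*
     * TTBR0 is only used for the identity mapping at this stage. Point it
     * at the zero page to avoid speculative fetches through the idmap.
     */
    cpu_set_reserved_ttbr0();
    flush_tlb_all();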
2015-11-04  Merge tag 'v3.14.56' of git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable into linux-linaro-lsk-v3.14  (Kevin Hilman)
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable into linux-linaro-lsk-v3.14 This is the 3.14.56 stable release # gpg: Signature made Mon Oct 26 17:46:33 2015 PDT using RSA key ID 6092693E # gpg: Good signature from "Greg Kroah-Hartman (Linux kernel stable release signing key) <greg@kroah.com>" * tag 'v3.14.56' of git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable: (106 commits) Linux 3.14.56 sched/preempt: Fix cond_resched_lock() and cond_resched_softirq() sched/preempt: Rename PREEMPT_CHECK_OFFSET to PREEMPT_DISABLE_OFFSET rbd: fix double free on rbd_dev->header_name dm thin: fix missing pool reference count decrement in pool_ctr error path drm/radeon: add pm sysfs files late drm/nouveau/fbcon: take runpm reference when userspace has an open fd workqueue: make sure delayed work run in local cpu i2c: designware: Do not use parameters from ACPI on Dell Inspiron 7348 i2c: s3c2410: enable RuntimePM before registering to the core i2c: rcar: enable RuntimePM before registering to the core arm64: errata: use KBUILD_CFLAGS_MODULE for erratum #843419 btrfs: fix use after free iterating extrefs crypto: ahash - ensure statesize is non-zero crypto: sparc - initialize blkcipher.ivsize asix: Do full reset during ax88772_bind asix: Don't reset PHY on if_up for ASIX 88772 ethtool: Use kcalloc instead of kmalloc for ethtool_get_strings ppp: don't override sk->sk_state in pppoe_flush_dev() net: add pfmemalloc check in sk_add_backlog() ...
2015-10-22  arm64: readahead: fault retry breaks mmap file read random detection  (Mark Salyzyn)
commit 569ba74a7ba69f46ce2950bf085b37fea2408385 upstream. This is the arm64 portion of commit 45cac65b0fcd ("readahead: fault retry breaks mmap file read random detection"), which was absent from the initial port and has since gone unnoticed. The original commit says: > .fault now can retry. The retry can break state machine of .fault. In > filemap_fault, if page is miss, ra->mmap_miss is increased. In the second > try, since the page is in page cache now, ra->mmap_miss is decreased. And > these are done in one fault, so we can't detect random mmap file access. > > Add a new flag to indicate .fault is tried once. In the second try, skip > ra->mmap_miss decreasing. The filemap_fault state machine is ok with it. With this change, Mark reports that: > Random read improves by 250%, sequential read improves by 40%, and > random write by 400% to an eMMC device with dm crypto wrapped around it. Cc: Shaohua Li <shli@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Mark Salyzyn <salyzyn@android.com> Signed-off-by: Riley Andrews <riandrews@android.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
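The arm64 side of the fix amounts to setting the new flag on the retry path in do_page_fault() (arch/arm64/mm/fault.c); a sketch of the relevant hunk, with the surrounding accounting omitted:

    fault = handle_mm_fault(mm, vma, addr & PAGE_MASK, mm_flags);

    if ((fault & VM_FAULT_RETRY) && (mm_flags & FAULT_FLAG_ALLOW_RETRY)) {
            /*
             * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk of starvation,
             * and mark the fault as tried so that filemap_fault() skips the
             * ra->mmap_miss decrement on the second pass.
             */
            mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
            mm_flags |= FAULT_FLAG_TRIED;
            goto retry;
    }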
2015-09-14  Merge tag 'v3.14.52' of git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable into linux-linaro-lsk-v3.14  [lsk-v3.14-15.09]  (Kevin Hilman)
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable into linux-linaro-lsk-v3.14 This is the 3.14.52 stable release * tag 'v3.14.52' of git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable: (64 commits) Linux 3.14.52 arm64: KVM: Fix host crash when injecting a fault into a 32bit guest SCSI: Fix NULL pointer dereference in runtime PM arm64/mm: Remove hack in mmap randomize layout crypto: caam - fix memory corruption in ahash_final_ctx regmap: regcache-rbtree: Clean new present bits on present bitmap resize libfc: Fix fc_fcp_cleanup_each_cmd() libfc: Fix fc_exch_recv_req() error path drm/vmwgfx: Fix execbuf locking issues drm/radeon: add new OLAND pci id EDAC, ppc4xx: Access mci->csrows array elements properly localmodconfig: Use Kbuild files too dm thin metadata: delete btrees when releasing metadata snapshot perf: Fix PERF_EVENT_IOC_PERIOD migration race perf: Fix fasync handling on inherited events xen-blkfront: don't add indirect pages to list when !feature_persistent mm/hwpoison: fix page refcount of unknown non LRU page ipc/sem.c: update/correct memory barriers ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits Linux 3.14.51 ...
2015-09-13  arm64/mm: Remove hack in mmap randomize layout  (Yann Droneaud)
commit d6c763afab142a85e4770b4bc2a5f40f256d5c5d upstream. Since commit 8a0a9bd4db63 ('random: make get_random_int() more random'), get_random_int() returns a random value for each call, so the comment and hack introduced in mmap_rnd() as part of commit 1d18c47c735e ('arm64: MMU fault handling and page table management') are incorrect. Commit 1d18c47c735e seems to use the same hack introduced by commit a5adc91a4b44 ('powerpc: Ensure random space between stack and mmaps'), later copied in commit 5a0efea09f42 ('sparc64: Sharpen address space randomization calculations.'). But both architectures were cleaned up as part of commit fa8cbaaf5a68 ('powerpc+sparc64/mm: Remove hack in mmap randomize layout') as the hack is no longer needed since commit 8a0a9bd4db63. So the present patch removes the comment and the hack around get_random_int() on AArch64's mmap_rnd(). Cc: David S. Miller <davem@davemloft.net> Cc: Anton Blanchard <anton@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Dan McGee <dpmcgee@gmail.com> Signed-off-by: Yann Droneaud <ydroneaud@opteya.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: Matthias Brugger <mbrugger@suse.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-08-04  Merge tag 'v3.14.49' of git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable into linux-linaro-lsk-v3.14  (Kevin Hilman)
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable into linux-linaro-lsk-v3.14 This is the 3.14.49 stable release * tag 'v3.14.49' of git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable: (126 commits) Linux 3.14.49 MIPS: KVM: Do not sign extend on unsigned MMIO load qla2xxx: Mark port lost when we receive an RSCN for it. Fix firmware loader uevent buffer NULL pointer dereference hpfs: hpfs_error: Remove static buffer, use vsprintf extension %pV instead arm64: Don't report clear pmds and puds as huge agp/intel: Fix typo in needs_ilk_vtd_wa() rbd: use GFP_NOIO in rbd_obj_request_create() 9p: don't leave a half-initialized inode sitting around 9p: forgetting to cancel request on interrupted zero-copy RPC SUNRPC: Fix a memory leak in the backchannel code nfs: increase size of EXCHANGE_ID name string buffer fixing infinite OPEN loop in 4.0 stateid recovery NFS: Fix size of NFSACL SETACL operations watchdog: omap: assert the counter being stopped before reprogramming of: return NUMA_NO_NODE from fallback of_node_to_nid() block: Do a full clone when splitting discard bios USB: usbfs: allow URBs to be reaped after disconnection dell-laptop: Fix allocating & freeing SMI buffer page ideapad: fix software rfkill setting ...
2015-08-03  arm64: Don't report clear pmds and puds as huge  (Christoffer Dall)
commit fd28f5d439fca77348c129d5b73043a56f8a0296 upstream. The current pmd_huge() and pud_huge() functions simply check if the table bit is not set and reports the entries as huge in that case. This is counter-intuitive as a clear pmd/pud cannot also be a huge pmd/pud, and it is inconsistent with at least arm and x86. To prevent others from making the same mistake as me in looking at code that calls these functions and to fix an issue with KVM on arm64 that causes memory corruption due to incorrect page reference counting resulting from this mistake, let's change the behavior. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Steve Capper <steve.capper@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Fixes: 084bd29810a5 ("ARM64: mm: HugeTLB support.") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
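After the fix, the helpers in arch/arm64/mm/hugetlbpage.c check that the entry is non-zero as well as that the table bit is clear; roughly (treat as a sketch of the upstream change, not the exact backport):

    int pmd_huge(pmd_t pmd)
    {
            /* a clear pmd is not huge; a huge pmd has the table bit clear */
            return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
    }

    int pud_huge(pud_t pud)
    {
    #ifndef __PAGETABLE_PMD_FOLDED
            return pud_val(pud) && !(pud_val(pud) & PUD_TABLE_BIT);
    #else
            return 0;
    #endif
    }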
2015-08-03  arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP  (Dave P Martin)
commit b9bcc919931611498e856eae9bf66337330d04cc upstream. The memmap freeing code in free_unused_memmap() computes the end of each memblock by adding the memblock size onto the base. However, if SPARSEMEM is enabled then the value (start) used for the base may already have been rounded downwards to work out which memmap entries to free after the previous memblock. This may cause memmap entries that are in use to get freed. In general, you're not likely to hit this problem unless there are at least 2 memblocks and one of them is not aligned to a sparsemem section boundary. Note that carve-outs can increase the number of memblocks by splitting the regions listed in the device tree. This problem doesn't occur with SPARSEMEM_VMEMMAP, because the vmemmap code deals with freeing the unused regions of the memmap instead of requiring the arch code to do it. This patch gets the memblock base out of the memblock directly when computing the block end address to ensure the correct value is used. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
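In essence, free_unused_memmap() in arch/arm64/mm/init.c now derives each block's end from the memblock itself rather than from the possibly rounded-down start; a sketch from memory, with helper names as in mainline:

    for_each_memblock(memory, reg) {
            start = __phys_to_pfn(reg->base);

    #ifdef CONFIG_SPARSEMEM
            /* 'start' may be rounded down to the previous section here */
            start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
    #endif
            if (prev_end && prev_end < start)
                    free_memmap(prev_end, start);

            /*
             * Compute the end from reg->base, not from the rounded-down
             * 'start', so in-use memmap entries are never freed.
             */
            prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
                             MAX_ORDER_NR_PAGES);
    }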
2015-08-03  arm64: Do not attempt to use init_mm in reset_context()  (Catalin Marinas)
commit 565630d503ef24e44c252bed55571b3a0d68455f upstream. After secondary CPU boot or hotplug, the active_mm of the idle thread is &init_mm. The init_mm.pgd (swapper_pg_dir) is only meant for TTBR1_EL1 and must not be set in TTBR0_EL1. Since when active_mm == &init_mm the TTBR0_EL1 is already set to the reserved value, there is no need to perform any context reset. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
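A sketch of the early return this adds to reset_context() in arch/arm64/mm/context.c:

    static void reset_context(void *info)
    {
            struct mm_struct *mm = current->active_mm;

            /*
             * The idle thread's active_mm is &init_mm right after secondary
             * boot or hotplug; TTBR0_EL1 already holds the reserved value and
             * swapper_pg_dir must never be installed there, so bail out.
             */
            if (mm == &init_mm)
                    return;

            /* ... existing ASID/TTBR0 reset path continues here ... */
    }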
2015-07-07  Merge tag 'v3.14.47' of git://git./linux/kernel/git/stable/linux-stable into linux-linaro-lsk-v3.14  (Shannon Zhao)
Conflicts: arch/arm/include/asm/kvm_mmu.h arch/arm/kvm/mmu.c arch/arm64/include/asm/kvm_arm.h arch/arm64/include/asm/kvm_host.h arch/arm64/include/asm/kvm_mmu.h virt/kvm/arm/vgic.c
2015-07-03  arm64: dma-mapping: always clear allocated buffers  (Marek Szyprowski)
commit 6829e274a623187c24f7cfc0e3d35f25d087fcc5 upstream. Buffers allocated by dma_alloc_coherent() are always zeroed on Alpha, ARM (32bit), MIPS, PowerPC, x86/x86_64 and probably other architectures. It turned out that some drivers rely on this 'feature'. Allocated buffer might be also exposed to userspace with dma_mmap() call, so clearing it is desired from security point of view to avoid exposing random memory to userspace. This patch unifies dma_alloc_coherent() behavior on ARM64 architecture with other implementations by unconditionally zeroing allocated buffer. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> [will: ported to 3.14.y] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
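The change boils down to zeroing in the allocation paths of arch/arm64/mm/dma-mapping.c; a rough sketch of the CMA branch of the coherent allocator (error handling and the other branches omitted):

    struct page *page;
    void *addr;

    page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
                                     get_order(size));
    if (!page)
            return NULL;

    *dma_handle = phys_to_dma(dev, page_to_phys(page));
    addr = page_address(page);
    /* zero unconditionally, matching dma_alloc_coherent() on other arches */
    memset(addr, 0, size);
    return addr;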
2015-04-23  arm64: respect mem= for EFI  [lsk-v3.14-15.04]  (Mark Rutland)
When booting with EFI, we acquire the EFI memory map after parsing the early params. This unfortunately renders the mem= option useless as we call memblock_enforce_memory_limit (which uses memblock_remove_range behind the scenes) before we've added any memblocks. We end up removing nothing, then adding all of memory later when efi_init calls reserve_regions. Instead, we can log the limit and apply this later when we do the rest of the memblock work in memblock_init, which should work regardless of the presence of EFI. At the same time we may as well move the early parameter into arm64's mm/init.c, close to arm64_memblock_init. Any memory which must be mapped (e.g. for use by EFI runtime services) must be mapped explicitly rather than relying on the linear mapping, which may be truncated as a result of a mem= option passed on the kernel command line. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 6083fe74b7bfffc2c7be8c711596608bda0cda6e) Signed-off-by: Alex Shi <alex.shi@linaro.org>
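A sketch of the resulting arrangement in arch/arm64/mm/init.c, where the limit is only recorded at early-param time and enforced once the memblocks actually exist (names follow mainline):

    static phys_addr_t memory_limit = (phys_addr_t)ULLONG_MAX;

    static int __init early_mem(char *p)
    {
            if (!p)
                    return 1;

            /* only record the limit here; it is applied later */
            memory_limit = memparse(p, &p) & PAGE_MASK;
            pr_notice("Memory limited to %lldMB\n", memory_limit >> 20);
            return 0;
    }
    early_param("mem", early_mem);

    void __init arm64_memblock_init(void)
    {
            /* by now both DT- and EFI-provided memory is in memblock */
            memblock_enforce_memory_limit(memory_limit);

            /* ... reservations, dma_contiguous_reserve(), etc. ... */
    }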
2015-03-30  Merge tag 'v3.14.37' into linux-linaro-lsk-v3.14  (Alex Shi)
This is the 3.14.37 stable release
2015-03-26  arm64: Honor __GFP_ZERO in dma allocations  (Suzuki K. Poulose)
commit 7132813c384515c9dede1ae20e56f3895feb7f1e upstream. Current implementation doesn't zero out the pages allocated. Honor the __GFP_ZERO flag and zero out if set. Cc: <stable@vger.kernel.org> # v3.14+ Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-10-08  Merge branch 'lsk/v3.14/topic/kvm' into lsk/lsk-with-kvm-v3.14  (Christoffer Dall)
Conflicts: arch/arm64/include/asm/debug-monitors.h
2014-10-02  arm64: Add boot time configuration of Intermediate Physical Address size  (Radha Mohan Chintakuntla)
ARMv8 supports a range of physical address bit sizes. The PARange bits from ID_AA64MMFR0_EL1 register are read during boot-time and the intermediate physical address size bits are written in the translation control registers (TCR_EL1 and VTCR_EL2). There is no change in the VA bits and levels of translation. Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com> Reviewed-by: Will Deacon <Will.deacon@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 87366d8cf7b3f6dc34633938aa8766e5a390ce33) Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-09-13  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-misc' into linux-linaro-lsk-v3.14  (Mark Brown)
2014-09-13  arm64: add support for reserved memory defined by device tree  (Marek Szyprowski)
Enable reserved memory initialization from device tree. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Grant Likely <grant.likely@linaro.org> (cherry picked from commit 9bf14b7c540ae9ca7747af3a0c0d8470ef77b6ce) Signed-off-by: Mark Brown <broonie@kernel.org>
2014-08-11  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-misc' into linux-linaro-lsk-v3.14  (Mark Brown)
Conflicts: arch/arm64/Kconfig arch/arm64/boot/dts/apm-storm.dtsi arch/arm64/include/asm/pgtable.h mm/Kconfig
2014-08-11  arm64: mm: Optimise tlb flush logic where we have >4K granule  (Steve Capper)
The TLB maintenance functions __cpu_flush_user_tlb_range and __cpu_flush_kern_tlb_range do not take into consideration the page granule when looping through the address range, and repeatedly flush tlb entries for the same page when operating with 64K pages. This patch re-works the logic so that we instead advance the loop by 1 << (PAGE_SHIFT - 12), to avoid repeating ourselves. Also the routines have been converted from assembler to static inline functions to aid with legibility and potential compiler optimisations. The isb() has been removed from flush_tlb_kernel_range() as it is only needed when changing the execute permission of a mapping. If one needs to set an area of the kernel as execute/non-execute, an isb() must be inserted after the call to flush_tlb_kernel_range. Cc: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Steve Capper <steve.capper@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit fa48e6f780a681cdbc7820e33259edfe1a79b9e3) Signed-off-by: Mark Brown <broonie@linaro.org>
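The core of the change, sketched as the resulting inline in arch/arm64/include/asm/tlbflush.h: the TLBI operand encodes VA[55:12], so stepping by 1 << (PAGE_SHIFT - 12) advances exactly one page per iteration for any granule (barrier and macro spellings follow mainline and may differ here):

    static inline void __flush_tlb_range(struct vm_area_struct *vma,
                                         unsigned long start, unsigned long end)
    {
            unsigned long asid = (unsigned long)ASID(vma->vm_mm) << 48;
            unsigned long addr;

            start = asid | (start >> 12);
            end = asid | (end >> 12);

            dsb(ishst);
            for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
                    asm("tlbi vae1is, %0" : : "r" (addr));
            dsb(ish);
    }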
2014-08-11  arm64: Fix for the arm64 kern_addr_valid() function  (Dave Anderson)
Fix for the arm64 kern_addr_valid() function to recognize virtual addresses in the kernel logical memory map. The function fails as written because it does not check whether the addresses in that region are mapped at the pmd level to 2MB or 512MB pages, continues the page table walk to the pte level, and issues a garbage value to pfn_valid(). Tested on 4K-page and 64K-page kernels. Signed-off-by: Dave Anderson <anderson@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit da6e4cb67c6dd1f72257c0a4a97c26dc4e80d3a7) Signed-off-by: Mark Brown <broonie@linaro.org>
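The missing pmd-level check, sketched against kern_addr_valid() in arch/arm64/mm/mmu.c (assuming the usual walker shape):

    pmd = pmd_offset(pud, addr);
    if (pmd_none(*pmd))
            return 0;

    /*
     * Kernel logical-map sections (2MB with 4K pages, 512MB with 64K pages)
     * have no pte level: take the pfn straight from the pmd.
     */
    if (pmd_sect(*pmd))
            return pfn_valid(pmd_pfn(*pmd));

    pte = pte_offset_kernel(pmd, addr);
    if (pte_none(*pte))
            return 0;

    return pfn_valid(pte_pfn(*pte));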
2014-08-11  Revert "arm64: Introduce execute-only page access permissions"  (Catalin Marinas)
This reverts commit bc07c2c6e9ed125d362af0214b6313dca180cb08. While the aim is increased security for --x memory maps, it does not protect against kernel level reads. Until SECCOMP is implemented for arm64, revert this patch to avoid giving a false idea of execute-only mappings. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 5a0fdfada3a2aa50d7b947a2e958bf00cbe0d830) Signed-off-by: Mark Brown <broonie@linaro.org>
2014-08-11  arm64: Fix up earlier backport  (Mark Brown)
Crept in during a cherry-pick run; ideally should be squished down. Signed-off-by: Mark Brown <broonie@linaro.org>
2014-08-11  arm64: Clean up the default pgprot setting  (Mark Brown)
The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, it is unlikely to see UP ARMv8 hardware and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses. A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it since Device accesses are always Outer Shareable. Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> (cherry picked from commit a501e32430d4232012ab708b8f0ce841f29e0f02) Signed-off-by: Mark Brown <broonie@linaro.org> Conflicts: arch/arm64/mm/mmu.c
2014-08-11  arm64: add early_ioremap support  (Mark Salter)
Add support for early IO or memory mappings which are needed before the normal ioremap() is usable. This also adds fixmap support for permanent fixed mappings such as that used by the earlyprintk device register region. Signed-off-by: Mark Salter <msalter@redhat.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Borislav Petkov <borislav.petkov@amd.com> Cc: Dave Young <dyoung@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit bf4b558eba920a38f91beb5ee62a8ce2628c92f7) Signed-off-by: Mark Brown <broonie@linaro.org> Conflicts: Documentation/arm64/memory.txt arch/arm64/Kconfig arch/arm64/kernel/head.S arch/arm64/mm/mmu.c
2014-08-11  arm64: Introduce execute-only page access permissions  (Catalin Marinas)
The ARMv8 architecture allows execute-only user permissions by clearing the PTE_UXN and PTE_USER bits. The kernel, however, can still access such page, so execute-only page permission does not protect against read(2)/write(2) etc. accesses. Systems requiring such protection must implement/enable features like SECCOMP. This patch changes the arm64 __P100 and __S100 protection_map[] macros to the new __PAGE_EXECONLY attributes. A side effect is that pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't set. To work around this, the check is done on the PTE_NG bit via the pte_valid_ng() macro. VM_READ is also checked now for page faults. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit bc07c2c6e9ed125d362af0214b6313dca180cb08) Signed-off-by: Mark Brown <broonie@linaro.org>
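A sketch of the attribute and protection_map entries described above; the macro names come from the commit text, but the exact bit combinations shown are reconstructed and should be treated as illustrative:

    #define __PAGE_EXECONLY        __pgprot(_PAGE_DEFAULT | PTE_NG | PTE_PXN)

    /* PROT_EXEC-only mappings: neither PTE_USER nor PTE_UXN is set */
    #define __P100                 __PAGE_EXECONLY
    #define __S100                 __PAGE_EXECONLY

    /* PTE_USER is clear for exec-only, so key validity off PTE_NG instead */
    #define pte_valid_ng(pte) \
            ((pte_val(pte) & (PTE_NG | PTE_VALID)) == (PTE_NG | PTE_VALID))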
2014-08-11  arm64: Remove pgprot_dmacoherent()  (Catalin Marinas)
Since this macro is identical to pgprot_writecombine() and is only used in a single place, remove it completely to avoid confusion. On ARMv7+ processors, the coherent DMA mapping must be Normal NonCacheable (a.k.a. writecombine) to avoid mismatched hardware attribute aliases (with the kernel linear mapping as Normal Cacheable). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 196adf2f3015eacac0567278ba538e3ffdd16d0e) Signed-off-by: Mark Brown <broonie@linaro.org>
2014-08-11  arm64: initialize pgprot info earlier in boot  (Mark Salter)
Presently, paging_init() calls init_mem_pgprot() to initialize pgprot values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc. The new fixmap and early_ioremap support also needs to use these macros before paging_init() is called. This patch moves the init_mem_pgprot() call out of paging_init() and into setup_arch() so that pgprot_default gets initialized in time for fixmap and early_ioremap. Signed-off-by: Mark Salter <msalter@redhat.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Borislav Petkov <borislav.petkov@amd.com> Cc: Dave Young <dyoung@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> (cherry picked from commit 0bf757c73d6612d3d279de3f61b35062aa9c8b1d) Signed-off-by: Mark Brown <broonie@linaro.org> Conflicts: arch/arm64/include/asm/mmu.h
2014-08-11  arm64: Add function to create identity mappings  (Mark Salter)
At boot time, before switching to a virtual UEFI memory map, firmware expects UEFI memory and IO regions to be identity mapped whenever kernel makes runtime services calls. The existing early boot code creates an identity map of kernel text/data but this is not sufficient for UEFI. This patch adds a create_id_mapping() function which reuses the core code of the existing create_mapping(). Signed-off-by: Mark Salter <msalter@redhat.com> [ Fixed error message formatting (%pa). ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com> (cherry picked from commit d7ecbddf4caefbac1b99478dd2b679f83dfc2545) Signed-off-by: Mark Brown <broonie@linaro.org> Conflicts: arch/arm64/include/asm/mmu.h arch/arm64/mm/mmu.c
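The new entry point is a thin wrapper that walks idmap_pg_dir with the existing __create_mapping() core; a sketch of its likely shape (exact argument order of __create_mapping is an assumption):

    void __init create_id_mapping(phys_addr_t addr, phys_addr_t size, int map_io)
    {
            if ((addr >> PGDIR_SHIFT) >= ARRAY_SIZE(idmap_pg_dir)) {
                    pr_warn("BUG: not creating id mapping for %pa\n", &addr);
                    return;
            }
            /* identity map: virtual address == physical address */
            __create_mapping(&idmap_pg_dir[pgd_index(addr)],
                             addr, addr, size, map_io);
    }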
2014-07-23  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-misc' into linux-linaro-lsk-v3.14  [lsk-v3.14-preview-14.07]  (Mark Brown)
2014-07-23  arm64: place initial page tables above the kernel  (Mark Rutland)
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However, bootloaders may use portions of this memory below the kernel and we do not parse the memory reservation list until after the MMU has been enabled. As such we may clobber some memory a bootloader wishes to have preserved. To enable the use of all of this memory by bootloaders (when the required memory reservations are communicated to the kernel) it is necessary to move our initial page tables elsewhere. As we currently have an effectively unbound requirement for memory at the end of the kernel image for .bss, we can place the page tables here. This patch moves the initial page table to the end of the kernel image, after the BSS. As they do not consist of any initialised data they will be stripped from the kernel Image as with the BSS. The BSS clearing routine is updated to stop at __bss_stop rather than _end so as to not clobber the page tables, and memory reservations made redundant by the new organisation are removed. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Laura Abbott <lauraa@codeaurora.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit bd00cd5f8c8c3c282bb1e1eac6a6679a4f808091) Signed-off-by: Mark Brown <broonie@linaro.org> Conflicts: arch/arm64/mm/init.c
2014-07-23  arm64: Relax the kernel cache requirements for boot  (Catalin Marinas)
With system caches for the host OS or architected caches for guest OS we cannot easily guarantee that there are no dirty or stale cache lines for the areas of memory written by the kernel during boot with the MMU off (therefore non-cacheable accesses). This patch adds the necessary cache maintenance during boot and relaxes the booting requirements. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit c218bca74eeafa2f8528b6bbb34d112075fcf40a) Signed-off-by: Mark Brown <broonie@linaro.org>
2014-07-23  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-cmadma' into lsk-v3.14-arm64-misc  (Mark Brown)
2014-07-23  arm64: remove unnecessary cache flush at boot  (Mark Rutland)
Currently we flush the entire dcache at boot within __cpu_setup, but this is unnecessary as the booting protocol demands that the dcache is invalid and off upon entering the kernel. The presence of the cache flush only serves to hide bugs in bootloaders, and is not safe in the presence of SMP. In an SMP boot scenario the CPUs enter coherency outside of the kernel, and the primary CPU enables its caches before bringing up secondary CPUs. Therefore if any secondary CPU has an entry in its cache (in violation of the boot protocol), the primary CPU might snoop it even if the secondary CPU's cache is disabled. The boot-time cache flush only serves to hide a firmware bug, and slows down a cpu boot unnecessarily. This patch removes the unnecessary boot-time cache flush. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> [catalin.marinas@arm.com: make __flush_dcache_all local only] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit bff705950e2cdcf35641dee35eb14bad9ed49e8f) Signed-off-by: Mark Brown <broonie@linaro.org>
2014-07-23  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-misc' into linux-linaro-lsk-v3.14  (Mark Brown)
2014-07-23  arm64: export __cpu_{clear,copy}_user_page functions  (Mark Salter)
The __cpu_clear_user_page() and __cpu_copy_user_page() functions are not currently exported. This prevents modules from using clear_user_page() and copy_user_page(). Signed-off-by: Mark Salter <msalter@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit bec7cedc8a92bfe96d32febe72634b30c63896bd) Signed-off-by: Mark Brown <broonie@linaro.org>
2014-07-10  Merge tag 'v3.14.12' into linux-linaro-lsk-v3.14  (Alex Shi)
This is the 3.14.12 stable release
2014-07-09  arm64: mm: Make icache synchronisation logic huge page aware  (Steve Capper)
commit 923b8f5044da753e4985ab15c1374ced2cdf616c upstream. The __sync_icache_dcache routine will only flush the dcache for the first page of a compound page, potentially leading to stale icache data residing further on in a hugetlb page. This patch addresses this issue by taking into consideration the order of the page when flushing the dcache. Reported-by: Mark Brown <broonie@linaro.org> Tested-by: Mark Brown <broonie@linaro.org> Signed-off-by: Steve Capper <steve.capper@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
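The fix scales the dcache flush by the compound page order in __sync_icache_dcache() (arch/arm64/mm/flush.c); roughly, as a sketch of the upstream change:

    void __sync_icache_dcache(pte_t pte, unsigned long addr)
    {
            struct page *page = pte_page(pte);

            /* no flushing needed for anonymous pages */
            if (!page_mapping(page))
                    return;

            if (!test_and_set_bit(PG_dcache_clean, &page->flags)) {
                    /* cover every page of a huge page, not just the head */
                    __flush_dcache_area(page_address(page),
                                        PAGE_SIZE << compound_order(page));
                    __flush_icache_all();
            } else if (icache_is_aivivt()) {
                    __flush_icache_all();
            }
    }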
2014-07-01  Merge tag 'v3.14.10' into linux-linaro-lsk-v3.14  (Mark Brown)
This is the 3.14.10 stable release
2014-06-30  hugetlb: restrict hugepage_migration_support() to x86_64  (Naoya Horiguchi)
commit c177c81e09e517bbf75b67762cdab1b83aba6976 upstream. Currently hugepage migration is available for all archs which support pmd-level hugepage, but testing is done only for x86_64 and there're bugs for other archs. So to avoid breaking such archs, this patch limits the availability strictly to x86_64 until developers of other archs get interested in enabling this feature. Simply disabling hugepage migration on non-x86_64 archs is not enough to fix the reported problem where sys_move_pages() hits the BUG_ON() in follow_page(FOLL_GET), so let's fix this by checking if hugepage migration is supported in vma_migratable(). Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Reported-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Hugh Dickins <hughd@google.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Miller <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-06-09  Merge LTS 3.14.6 into linux-linaro-lsk-v3.14  (Alex Shi)
This is the 3.14.6 stable release
2014-06-07  arm64: fix pud_huge() for 2-level pagetables  (Mark Salter)
commit 4797ec2dc83a43be35bad56037d1b53db9e2b5d5 upstream. The following happens when trying to run a kvm guest on a kernel configured for 64k pages. This doesn't happen with 4k pages: BUG: failure at include/linux/mm.h:297/put_page_testzero()! Kernel panic - not syncing: BUG! CPU: 2 PID: 4228 Comm: qemu-system-aar Tainted: GF 3.13.0-0.rc7.31.sa2.k32v1.aarch64.debug #1 Call trace: [<fffffe0000096034>] dump_backtrace+0x0/0x16c [<fffffe00000961b4>] show_stack+0x14/0x1c [<fffffe000066e648>] dump_stack+0x84/0xb0 [<fffffe0000668678>] panic+0xf4/0x220 [<fffffe000018ec78>] free_reserved_area+0x0/0x110 [<fffffe000018edd8>] free_pages+0x50/0x88 [<fffffe00000a759c>] kvm_free_stage2_pgd+0x30/0x40 [<fffffe00000a5354>] kvm_arch_destroy_vm+0x18/0x44 [<fffffe00000a1854>] kvm_put_kvm+0xf0/0x184 [<fffffe00000a1938>] kvm_vm_release+0x10/0x1c [<fffffe00001edc1c>] __fput+0xb0/0x288 [<fffffe00001ede4c>] ____fput+0xc/0x14 [<fffffe00000d5a2c>] task_work_run+0xa8/0x11c [<fffffe0000095c14>] do_notify_resume+0x54/0x58 In arch/arm/kvm/mmu.c:unmap_range(), we end up doing an extra put_page() on the stage2 pgd which leads to the BUG in put_page_testzero(). This happens because a pud_huge() test in unmap_range() returns true when it should always be false with 2-level pages tables used by 64k pages. This patch removes support for huge puds if 2-level pagetables are being used. Signed-off-by: Mark Salter <msalter@redhat.com> [catalin.marinas@arm.com: removed #ifndef around PUD_SIZE check] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-05-29  arm64: Use bus notifiers to set per-device coherent DMA ops  [v3.14/topic/arm64-cmadma]  (Catalin Marinas)
Recently, the default DMA ops have been changed to non-coherent for alignment with 32-bit ARM platforms (and DT files). This patch adds bus notifiers to be able to set the coherent DMA ops (with no cache maintenance) for devices explicitly marked as coherent via the "dma-coherent" DT property. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 6ecba8eb51b7d23fda66388a5420be7d8688b186) Signed-off-by: Alex Shi <alex.shi@linaro.org>
2014-05-29  arm64: Support DMA_ATTR_WRITE_COMBINE  (Laura Abbott)
DMA_ATTR_WRITE_COMBINE is currently ignored. Set the pgprot appropriately for non-coherent operations. Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 214fdbe74a096c3aeb7af81d7900e2ab966b10d6) Signed-off-by: Alex Shi <alex.shi@linaro.org>
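A sketch of the kind of pgprot helper used for this in arch/arm64/mm/dma-mapping.c (introduced around this pair of dma-mapping patches; exact name is an assumption):

    static pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot,
                                     bool coherent)
    {
            /* non-coherent or explicitly write-combined mappings use Normal NC */
            if (!coherent || dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs))
                    return pgprot_writecombine(prot);
            return prot;
    }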
2014-05-29  arm64: Implement custom mmap functions for dma mapping  (Laura Abbott)
The current dma_ops do not specify an mmap function so mapping falls back to the default implementation. There are at least two issues with using the default implementation: 1) The pgprot is always pgprot_noncached (strongly ordered) memory even with coherent operations; 2) dma_common_mmap calls virt_to_page on the remapped non-coherent address which leads to invalid memory being mapped. Fix both of these issues by implementing a custom mmap function which correctly accounts for remapped addresses and sets vm_page_prot appropriately. Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [catalin.marinas@arm.com: replaced "arm64_" with "__" prefix for consistency] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 6e8d7968e92f7668a2a615773ad3940f0219dcbd) Signed-off-by: Alex Shi <alex.shi@linaro.org>
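A sketch of such an mmap implementation for the swiotlb ops, reusing the pgprot helper sketched above and mapping the pages that back the DMA handle rather than virt_to_page(cpu_addr); names are illustrative, not necessarily those in this tree:

    static int __swiotlb_mmap(struct device *dev, struct vm_area_struct *vma,
                              void *cpu_addr, dma_addr_t dma_addr, size_t size,
                              struct dma_attrs *attrs)
    {
            int ret = -ENXIO;
            unsigned long nr_vma_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
            unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
            unsigned long pfn = dma_to_phys(dev, dma_addr) >> PAGE_SHIFT;
            unsigned long off = vma->vm_pgoff;

            /* honour coherency and DMA_ATTR_WRITE_COMBINE, not pgprot_noncached */
            vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot, false);

            if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
                    return ret;

            /* map the pages backing the DMA handle, not the remapped VA */
            if (off < nr_pages && nr_vma_pages <= (nr_pages - off))
                    ret = remap_pfn_range(vma, vma->vm_start, pfn + off,
                                          vma->vm_end - vma->vm_start,
                                          vma->vm_page_prot);
            return ret;
    }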
2014-05-29  arm64: Make default dma_ops to be noncoherent  (Ritesh Harjani)
Currently the arm64 dma_ops are coherent by default, which is the opposite of the default policy on arm. Make the default dma_ops noncoherent (same as arm), as there are currently no dma-capable drivers which assume coherent ops. Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit c7a4a7658d689f664050c45493d79adf053f226e) Signed-off-by: Alex Shi <alex.shi@linaro.org>
2014-05-29  arm64: Use swiotlb late initialisation  (Catalin Marinas)
Since arm64 does not support ISA, there is no need for early swiotlb initialisation. This patch switches the DMA mapping code to swiotlb_late_init_with_default_size(). A side effect of this is that GFP_DMA is used for the swiotlb buffer and devices with a 32-bit coherent mask are correctly supported. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 3690951fc6d42f3a0903987677d0e592c49dd8db) Signed-off-by: Alex Shi <alex.shi@linaro.org>
2014-05-29  arm64: Replace ZONE_DMA32 with ZONE_DMA  (Catalin Marinas)
On arm64 we do not have two DMA zones, so it does not make sense to implement ZONE_DMA32. This patch replaces ZONE_DMA32 with ZONE_DMA, the latter covering the 32-bit DMA address space to honour GFP_DMA allocations. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> (cherry picked from commit 19e7640d1f2302c20df2733e3e3df49acb17189e) Signed-off-by: Alex Shi <alex.shi@linaro.org>
2014-05-29  arm64: Fix DMA range invalidation for cache line unaligned buffers  (Catalin Marinas)
If the buffer needing cache invalidation for inbound DMA does not start or end on a cache line aligned address, we need to use the non-destructive clean&invalidate operation. This issue was introduced by commit 7363590d2c46 (arm64: Implement coherent DMA API based on swiotlb). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org> (cherry picked from commit ebf81a938dade3b450eb11c57fa744cfac4b523f) Signed-off-by: Alex Shi <alex.shi@linaro.org>