path: root/arch/powerpc/kernel/head_8xx.S
2016-07-09  powerpc/8xx: add CONFIG_PIN_TLB_IMMR  (Christophe Leroy)
CONFIG_PIN_TLB maps the IMMR area and the first 24 Mbytes of memory. In some circumstances it might be more interesting not to map IMMR but to map 32 Mbytes of memory instead. Therefore we add the config option CONFIG_PIN_TLB_IMMR to select whether IMMR shall be pinned or not, hence whether we pin 24 or 32 Mbytes of RAM.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
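Illustration (not from the kernel tree): a minimal C sketch of the choice this option drives. The helper name and printout are made up; only the 24 MB vs 32 MB split follows the entry above.

    #include <stdio.h>

    /* Hypothetical sketch: with CONFIG_PIN_TLB_IMMR one pinned entry goes to
     * IMMR and only 24 MB of RAM are pinned; without it, all pinned entries
     * go to RAM, covering 32 MB. Sizes follow the commit message. */
    static void setup_pinned_tlbs(int pin_immr)
    {
        unsigned int ram_mb = pin_immr ? 24 : 32;

        if (pin_immr)
            printf("pinning one entry for IMMR\n");
        printf("pinning %u MB of RAM\n", ram_mb);
    }

    int main(void)
    {
        setup_pinned_tlbs(1);   /* as if CONFIG_PIN_TLB_IMMR=y */
        return 0;
    }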
2016-07-09powerpc/8xx: Rework CONFIG_PIN_TLB handlingChristophe Leroy
On recent kernels, with some debug options like for instance CONFIG_LOCKDEP, the BSS requires more than 8M memory, allthough the kernel code fits in the first 8M. Today, it is necessary to activate CONFIG_PIN_TLB to get more than 8M at startup, allthough pinning TLB is not necessary for that. We could have inconditionaly mapped 16 or 24M bytes at startup but some old hardware only have 8M and mapping non-existing RAM would be an issue due to speculative accesses. With the preceding patch however, the TLB entries are populated on demand. By setting up the TLB miss handler to handle up to 24M until the handler is patched for the entire memory space, it is possible to allow access up to more memory without mapping non-existing RAM. It is therefore not needed anymore to map memory data at all at startup. It will be handled by the TLB miss handler. One might still want to PIN the IMMR and the first 24M of RAM. It is now possible to do it in the C memory initialisation functions. In addition, we now know how much memory we have when we do it, so we are able to adapt the pining to the real amount of memory available. So boards with less than 24M can now also benefit from PIN_TLB. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <oss@buserror.net>
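A minimal sketch of the "adapt the pinning to the real amount of memory" idea from the entry above. The function and macro names are invented for illustration and do not claim to match the kernel's code.

    #include <stdio.h>

    #define MB(x) ((unsigned long)(x) << 20)

    /* Hypothetical helper: pin at most 24 MB, but never more than the RAM
     * actually present, so small boards (e.g. 8 MB) still benefit. */
    static unsigned long pinned_ram_size(unsigned long total_ram)
    {
        unsigned long want = MB(24);

        return total_ram < want ? total_ram : want;
    }

    int main(void)
    {
        printf("8 MB board pins %lu MB\n", pinned_ram_size(MB(8)) >> 20);
        printf("64 MB board pins %lu MB\n", pinned_ram_size(MB(64)) >> 20);
        return 0;
    }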
2016-07-09powerpc/8xx: Don't use page table for linear memory spaceChristophe Leroy
Instead of using the first level page table to define mappings for the linear memory space, we can use direct mapping from the TLB handling routines. This has several advantages: * No need to read the tables at each TLB miss * No issue in 16k pages mode where the 1st level table maps 64 Mbytes The size of the available linear space is known at system startup. In order to avoid data access at each TLB miss to know the memory size, the TLB routine is patched at startup with the proper size This patch provides a 10%-15% improvment of TLB miss handling for kernel addresses Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <oss@buserror.net>
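A rough C sketch of the boot-time patching described above, assuming nothing about the kernel's actual helpers: the bound starts at a safe provisional value and is rewritten once the real memory size is known. In the real handler the bound is an instruction immediate, not a variable load.

    #include <stdint.h>
    #include <stdio.h>

    /* Provisional bound; in the real code boot code rewrites an immediate
     * inside the DTLB miss handler rather than updating a variable. */
    static uintptr_t linear_map_top = 24u << 20;

    /* Decision the patched handler effectively makes for a kernel address. */
    static int in_linear_mapping(uintptr_t kaddr, uintptr_t page_offset)
    {
        return kaddr - page_offset < linear_map_top;
    }

    /* Called once from MMU init when the memory size is known (illustrative). */
    static void patch_linear_map_top(uintptr_t total_ram)
    {
        linear_map_top = total_ram;
    }

    int main(void)
    {
        patch_linear_map_top(64u << 20);
        printf("0xc1000000 mapped: %d\n",
               in_linear_mapping(0xc1000000u, 0xc0000000u));
        return 0;
    }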
2016-07-09  powerpc/8xx: unpin all TLBs before flushing  (Christophe Leroy)
The bootloader may have pinned some TLB entries, so the kernel must unpin them before flushing the TLBs with tlbia, otherwise the pinned TLB entries won't get flushed.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-07-09  powerpc/8xx: Map IMMR area with 512k page at a fixed address  (Christophe Leroy)
Once the linear memory space has been mapped with 8Mb pages, as seen in the related commit, we get 11 million DTLB misses during the reference 600s period. 77% of the misses are on user addresses and 23% are on kernel addresses (one fourth for the linear address space and three fourths for the virtual address space).
Traditionally, each driver manages one computer board which has its own components with its own memory maps. But on embedded chips like the MPC8xx, the SOC has all registers located in the same IO area. When looking at the ioremaps done during startup, we see that many drivers are re-mapping small parts of the IMMR for their own use and all those small pieces get their own 4k page, amplifying the number of TLB misses: in our system we get 0xff000000 mapped 31 times and 0xff003000 mapped 9 times. Even if each part of IMMR was mapped only once with 4k pages, it would still mean several small mappings towards the linear area.
This patch maps the IMMR with a single 512k page. With this patch applied, the number of DTLB misses during the 10 min period is reduced to 11.8 million for a duration of 5.8s, which represents 2% of the non-idle time, hence yet another 10% reduction.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-07-09powerpc/8xx: Fix vaddr for IMMR early remapChristophe Leroy
Memory: 124428K/131072K available (3748K kernel code, 188K rwdata, 648K rodata, 508K init, 290K bss, 6644K reserved) Kernel virtual memory layout: * 0xfffdf000..0xfffff000 : fixmap * 0xfde00000..0xfe000000 : consistent mem * 0xfddf6000..0xfde00000 : early ioremap * 0xc9000000..0xfddf6000 : vmalloc & ioremap SLUB: HWalign=16, Order=0-3, MinObjects=0, CPUs=1, Nodes=1 Today, IMMR is mapped 1:1 at startup Mapping IMMR 1:1 is just wrong because it may overlap with another area. On most mpc8xx boards it is OK as IMMR is set to 0xff000000 but for instance on EP88xC board, IMMR is at 0xfa200000 which overlaps with VM ioremap area This patch fixes the virtual address for remapping IMMR with the fixmap regardless of the value of IMMR. The size of IMMR area is 256kbytes (CPM at offset 0, security engine at offset 128k) so a 512k page is enough Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <oss@buserror.net>
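To illustrate the fixmap-based placement described above: a fixmap hands each user a fixed slot counted down from a known top address, so the IMMR virtual address becomes a constant independent of the physical IMMR base. The slot name FIX_IMMR_BASE, the top address and the arithmetic below are assumptions of this sketch, not the kernel's definitions.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative fixmap: 512k slots allocated top-down from FIXADDR_TOP. */
    #define FIXADDR_TOP   0xfffff000u
    #define SZ_512K       (512u * 1024u)

    enum fixed_slot { FIX_HOLE, FIX_IMMR_BASE /* hypothetical slot name */ };

    static uint32_t fix_to_virt_512k(enum fixed_slot slot)
    {
        /* Align the top down to 512k, then step down one 512k slot per index. */
        return (FIXADDR_TOP & ~(SZ_512K - 1)) - (uint32_t)slot * SZ_512K;
    }

    int main(void)
    {
        /* The physical IMMR base (0xff000000, 0xfa200000, ...) no longer matters. */
        printf("IMMR mapped at 0x%08x\n", (unsigned)fix_to_virt_512k(FIX_IMMR_BASE));
        return 0;
    }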
2016-03-11  powerpc/8xx: rewrite set_context() in C  (Christophe Leroy)
There is no real need to have set_context() in assembly. Now that we have mtspr() handling the CPU6 ERRATA directly, we can rewrite set_context() in C for easier maintenance.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
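A minimal sketch of what a C set_context() could look like once mtspr() hides the CPU6 workaround. The SPR number and the stubbed mtspr() are stand-ins so the example compiles on its own; they are not the kernel's definitions.

    #include <stdio.h>

    /* Stand-ins so the sketch is self-contained; real code uses the kernel's
     * mtspr() which, per the entry above, already handles the CPU6 errata. */
    #define SPRN_M_CASID 793  /* MPC8xx context-id SPR; treat the number as illustrative */
    #define mtspr(spr, val) printf("mtspr(%d, %lu)\n", (spr), (unsigned long)(val))

    /* Program the MMU context id for the next task. */
    static void set_context(unsigned long id)
    {
        mtspr(SPRN_M_CASID, id);
    }

    int main(void)
    {
        set_context(5);
        return 0;
    }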
2016-03-11  powerpc/8xx: remove special handling of CPU6 errata in set_dec()  (Christophe Leroy)
The CPU6 ERRATA is now handled directly in mtspr(), so we can use the standard set_dec() function in all cases.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2016-03-11  powerpc/8xx: Map linear kernel RAM with 8M pages  (Christophe Leroy)
On a live running system (a VoIP gateway for Air Traffic Control), over a 10 minute period (with 277s idle), we get 87 million DTLB misses and approximately 35 seconds are spent in the DTLB handler. This represents 5.8% of the overall time and even 10.8% of the non-idle time. Among those 87 million DTLB misses, 15% are on user addresses and 85% are on kernel addresses. And within the kernel addresses, 93% are on addresses from the linear address space and only 7% are on addresses from the virtual address space.
The MPC8xx has no BATs but it has an 8Mb page size. This patch implements mapping of kernel RAM using 8Mb pages, on the same model as what is done on the 40x.
In 4k pages mode, each PGD entry maps a 4Mb area: we map every two entries to the same 8Mb physical page. In each second entry, we add 4Mb to the page physical address to ease the life of the FixupDAR routine. This is just ignored by HW. (See the sketch after this entry.)
In 16k pages mode, each PGD entry maps a 64Mb area: each PGD entry will point to the first page of the area. The DTLB handler adds the 3 bits from the EPN to map the correct page.
With this patch applied, we now get only 13 million TLB misses during the 10 minute period. The idle time has increased to 313s and the overall time spent in the DTLB miss handler is 6.3s, which represents 1% of the overall time and 2.2% of the non-idle time.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
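The 4k-mode pairing described above, reduced to a toy C sketch: plain integers stand in for real PGD entries, and only the pairing and the +4Mb hint follow the entry; everything else is invented for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define PGDIR_SIZE   (4u << 20)   /* one PGD entry covers 4 MB in 4k mode */
    #define PAGE_8M      (8u << 20)

    /* Two consecutive PGD entries describe the same 8 MB physical page;
     * the second carries a +4 MB hint for FixupDAR, ignored by hardware. */
    static void map_8m_page(uint32_t *pgd, unsigned int idx, uint32_t phys)
    {
        pgd[idx]     = phys;
        pgd[idx + 1] = phys + PGDIR_SIZE;
    }

    int main(void)
    {
        uint32_t pgd[2048] = { 0 };
        uint32_t phys = 0;

        for (unsigned int idx = 0; phys < (32u << 20); idx += 2, phys += PAGE_8M)
            map_8m_page(pgd, idx, phys);

        printf("pgd[0]=0x%08x pgd[1]=0x%08x\n", (unsigned)pgd[0], (unsigned)pgd[1]);
        return 0;
    }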
2016-03-11powerpc/8xx: Save r3 all the time in DTLB miss handlerChristophe Leroy
We are spending between 40 and 160 cycles with a mean of 65 cycles in the DTLB handling routine (measured with mftbl) so make it more simple althought it adds one instruction. With this modification, we get three registers available at all time, which will help with following patch. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <oss@buserror.net>
2016-03-09  powerpc/8xx: CONFIG_DEBUG_PAGEALLOC requires ITLBmiss for kernel addresses  (Christophe Leroy)
When CONFIG_DEBUG_PAGEALLOC is activated, the initial TLB mapping gets flushed to track accesses to wrong areas. Therefore, kernel addresses will also generate ITLB misses.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
2015-06-02  powerpc/8xx: Implementation of PAGE_EXEC  (LEROY Christophe)
This patch implements the PAGE_EXEC capability on the 8xx. All pages' PP exec bits are set to 000, which means Execute for Supervisor and no Execute for User. Then we use the APG to say whether accesses are according to Page rules, "all Supervisor" rules (Exec for all) or "all User" rules (Exec for no one).
Therefore, we define 4 APG groups. The msb is _PAGE_EXEC, the lsb is _PAGE_USER. MI_AP is initialised as follows:
GP0 (00) => Not User, no exec => 11 (all accesses performed as user)
GP1 (01) => User but no exec  => 11 (all accesses performed as user)
GP2 (10) => Not User, exec    => 01 (rights according to page definition)
GP3 (11) => User, exec        => 00 (all accesses performed as supervisor)
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[scottwood: comments: s/exec/data/ on data side, and s/pages/pages'/]
Signed-off-by: Scott Wood <scottwood@freescale.com>
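A small C sketch of packing the 2-bit codes from the table above into a 32-bit MI_AP value. The field ordering (group 0 in the most significant position) and the 16-group width are assumptions of this illustration, not taken from the reference manual.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 2-bit access codes per APG group, straight from the table above. */
        const unsigned int code[4] = { 0x3, 0x3, 0x1, 0x0 };  /* GP0..GP3 */
        uint32_t mi_ap = 0;

        /* Assume 16 groups of 2 bits, group 0 in the most significant field. */
        for (int gp = 0; gp < 4; gp++)
            mi_ap |= (uint32_t)code[gp] << (2 * (15 - gp));

        /* Prints 0xf4000000 under these layout assumptions. */
        printf("MI_AP = 0x%08x\n", (unsigned)mi_ap);
        return 0;
    }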
2015-06-02  powerpc/8xx: Handle PAGE_USER via APG bits  (LEROY Christophe)
Use the APG for handling PAGE_USER. All pages' PP bits are set to either 000 or 011, which means respectively RW for Supervisor and no access for User, or RO for Supervisor and no access for User. Then we use the APG to say whether accesses are according to Page rules or "all Supervisor" rules (Access to all).
Therefore, we define 2 APG groups corresponding to _PAGE_USER. Mx_AP are initialised as follows:
GP0 => No user => 01 (all accesses performed according to page definition)
GP1 => User    => 00 (all accesses performed as supervisor according to page definition)
This removes the special 8xx handling in pte_update().
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-06-02  powerpc/8xx: Add support for TASK_SIZE greater than 0x80000000  (LEROY Christophe)
By default, TASK_SIZE is set to 0x80000000 for PPC_8xx, which is most likely sufficient for most cases. However, the kernel configuration allows setting TASK_SIZE to another value, so the 8xx shall handle it. This patch also takes into account the case of PAGE_OFFSET lower than 0x80000000, although most of the time it is equal to 0xC0000000.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-06-02  powerpc/8xx: Use SPRG2 instead of DAR for saving r3  (LEROY Christophe)
We now have SPRG2 available, as it is not used anymore for saving CR, so we don't need to clobber DAR anymore for saving r3 for the CPU6 ERRATA handling.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-06-02  powerpc/8xx: don't save CR in SCRATCH registers  (LEROY Christophe)
CR only needs to be preserved when checking if we are handling a kernel address. So we can preserve CR in a register:
- In ITLBMiss, the check is done only when CONFIG_MODULES is defined. Otherwise we don't need to do anything at all with CR.
- We use r10, then we reload SRR0/MD_EPN into r10 when CR is restored.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-06-02  powerpc/8xx: Handle CR out of exception PROLOG/EPILOG  (LEROY Christophe)
In order to be able to reduce the scope during which CR is saved, we take CR saving/restoring out of the exception PROLOG and EPILOG.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-06-02  powerpc/8xx: macro for handling CPU15 errata  (LEROY Christophe)
Having a macro will help keep the code clear.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-01-29  powerpc/8xx: Remove duplicated code in set_context()  (LEROY Christophe)
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-01-29  powerpc/8xx: Optimise access to swapper_pg_dir  (LEROY Christophe)
All accesses to PGD entries are done via 0(r11). By using the lower part of swapper_pg_dir as the load index into r11, we can remove the ori instruction.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-01-29  powerpc/8xx: Take benefit of aligned PGDIR  (LEROY Christophe)
The L1 base address is now aligned, so we can insert the L1 index into r11 directly and thereby preserve r10.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-01-29  powerpc/8xx: remove tests on PGDIR entry validity  (LEROY Christophe)
The kernel MMU handling code handles the validity of entries via _PMD_PRESENT, which corresponds to the V bit in MD_TWC and MI_TWC. When the V bit is not set, the MPC8xx triggers a TLBError exception. So we don't have to check that and branch to TLBError ourselves. We can set TLB entries with non-present entries, remove all those tests and let the 8xx handle it. This reduces the number of cycles when the entries are valid, which is the case most of the time, and doesn't significantly increase the time for handling invalid entries.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-01-29  powerpc/8xx: remove remaining unnecessary code in FixupDAR  (LEROY Christophe)
Since commit 33fb845a6f01 ("powerpc/8xx: Don't use MD_TWC for walk"), MD_EPN and MD_TWC are not written anymore in FixupDAR, so saving r3 has become useless.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2015-01-29  powerpc/8xx: use _PAGE_RO instead of _PAGE_RW  (LEROY Christophe)
On the powerpc 8xx, in TLB entries, the 0x400 bit is set to 1 for read-only pages and set to 0 for RW pages. So we should use _PAGE_RO instead of _PAGE_RW.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: Invalidate non present TLB as early as possible  (LEROY Christophe)
The 8xx sometimes needs to load invalid/non-present TLB entries in its DTLB asm handler. These must be invalidated separately, as the Linux mm doesn't do it. Commit 5efab4a02c89c252fb4cce097aafde5f8208dbfe was invalidating them in arch/powerpc/mm/fault.c. This patch does the invalidation earlier in order to free the TLB as soon as possible. This also has the advantage of removing some 8xx specific code from fault.c.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: Use DAR to save r3 for CPU6 ERRATA  (LEROY Christophe)
As we are not using DAR anymore to save registers, it is now available for saving the r3 register used for the CPU6 ERRATA handling. Therefore we can remove the major hack which was to use memory location 0 to save r3.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: Don't restore regs to save them again.  (LEROY Christophe)
There is no need to restore the r10, r11 and cr registers at the end of the ITLBMiss handler, as they are saved again to the same place in the ITLBError handler we are jumping to.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: _PMD_PRESENT already set in level 1 entries  (LEROY Christophe)
When a PMD entry is valid, _PMD_PRESENT is set. Therefore, forcing that bit during TLB loading is useless.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: set PTE bit 22 off TLBmiss  (LEROY Christophe)
No need to re-set this bit at each TLB miss. Let's set it in the PTE.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: Better readability of ERRATA CPU6 handling  (LEROY Christophe)
This patch hides the SPR address needed for the CPU6 ERRATA handling in the macro. Then we don't have to worry about this address directly in the code.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: Implement 16k pages  (LEROY Christophe)
This patch activates the handling of 16k pages on the MPC8xx.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: Const for TLB RPN forced value  (LEROY Christophe)
The value 0x00f0 is used to force bits in the TLB level 2 entry. This value is linked to the page size and will vary when we change the page size. Let's define a constant for it in order to have it in only one place.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: Use PAGE size related consts  (LEROY Christophe)
For PAGE size related operations, use the PAGE size constants in order to be able to use different page sizes in the future.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: Don't use MD_TWC for walk  (LEROY Christophe)
MD_TWC can only be used properly with 4k pages. So let's calculate the level 2 table index ourselves.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
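A toy C version of the index computation the handler from the entry above does by hand instead of relying on the MD_TWC walk assist; the shift and table-size values are illustrative for 4k pages, not taken from the kernel headers.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT    12      /* 4k pages here; a 16k-page build would use 14 */
    #define PTRS_PER_PTE  1024    /* illustrative level 2 table size */

    /* Index of the PTE for a given address within its level 2 table. */
    static unsigned int pte_index(uint32_t addr)
    {
        return (addr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
    }

    int main(void)
    {
        printf("index of 0xc0003000 = %u\n", pte_index(0xc0003000u));
        return 0;
    }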
2014-11-07  powerpc/8xx: Use M_TW instead of M_TWB  (LEROY Christophe)
Use M_TW instead of M_TWB for storing the Level 1 table address, as M_TWB requires 4k aligned tables, which is only the case with 4k pages. Consequently, we have to calculate the level 1 table index ourselves.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: No need to restore registers and save them again.  (LEROY Christophe)
In the DTLBError handler there is no need to restore the r10, r11 and cr registers after fixing DAR, as they are saved again to the same place just after.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: DataAccess exception not generated by MPC8xx  (LEROY Christophe)
The DataAccess exception is never generated by the MPC8xx, so do the job directly where it is used to avoid unnecessary branching.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-11-07  powerpc/8xx: exception InstructionAccess does not exist on MPC8xx  (LEROY Christophe)
The InstructionAccess exception does not exist on the MPC8xx, so there is no need to branch there from somewhere else. Handling can be done directly in the InstructionTLBError exception.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-09-04  powerpc/8xx: Duplicate two insns instead of branching  (LEROY Christophe)
Branching takes two cycles on the MPC8xx. Let's duplicate the two instructions and avoid the branching.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-09-04  powerpc/8xx: Optimize verification in FixupDAR  (LEROY Christophe)
By XORing the upper part of the instruction code, we get a value that can directly be verified with the second test and we can remove the first test.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-09-04  powerpc/8xx: No need to save r10 and r3 when not calling FixupDAR  (LEROY Christophe)
r10 and r3 are only used inside the FixupDAR function. So let's save them inside that function only.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-09-04  powerpc/8xx: Fix comment about DIRTY update  (LEROY Christophe)
Since commit 2321f33790a6c5b80322d907a92d5739e7521a13, dirty bit handling is not done here anymore. So we fix the comment.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-09-04  powerpc/8xx: Remove loading of r10 at end of FixupDAR  (LEROY Christophe)
Since commit 2321f33790a6c5b80322d907a92d5739e7521a13, r10 is not used anymore after FixupDAR. There is therefore no need to set it up with the value of DAR.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-09-04  powerpc/8xx: Use SCRATCH0 and SCRATCH1 also for TLB handlers  (LEROY Christophe)
SCRATCH0 and SCRATCH1 are only used in exception prologs where no other exception can happen. There is therefore no need to preserve them across the TLB handlers; we can use them there as in other exceptions. One of the advantages is that they do not suffer from the CPU6 errata, unlike the M_TW register.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-09-04  powerpc/8xx: Declare SPRG2 as a SCRATCH register  (LEROY Christophe)
Since commit 469d62be9263b92f2c3329540cbb1c076111f4f3, SPRG2 is used as a scratch register just like SPRG0 and SPRG1. So declare it as such and fix the comment, which is not valid anymore since that commit.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2013-10-28  powerpc/8xx: Fixing issue with CONFIG_PIN_TLB  (LEROY Christophe)
Activating CONFIG_PIN_TLB is supposed to pin the IMMR and the first three 8 Mbytes pages. But the setting of MD_CTR to a pinnable entry was missing before the pinning of the third 8Mb page. As the index is decremented modulo 28 (MD_RSV4D is set) after every DTLB update, the third 8 Mbytes page was not pinned.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2013-08-14  powerpc: Remove the empty giveup_fpu() function on 32bit kernel  (Kevin Hao)
Instead of implementing an empty giveup_fpu() function for each 32bit processor type, replace them with a single empty inline function.
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-03-09  powerpc: Call do_page_fault() with interrupts off  (Benjamin Herrenschmidt)
We currently turn interrupts back to their previous state before calling do_page_fault(). This can be annoying when debugging, as a bad fault will potentially have lost some processor state before getting into the debugger. We also end up calling some generic code, such as notify_page_fault(), with interrupts enabled, which could be unexpected.
This changes our code to behave more like other architectures, and makes the assembly entry code call into do_page_fault() with interrupts disabled. They are conditionally re-enabled from within do_page_fault() in the same spot x86 does it. While there, add the might_sleep() test in the case of a successful trylock of the mmap semaphore, again like x86.
Also fix a bug in the existing assembly where r12 (_MSR) could get clobbered by C calls (the DTL accounting in the exception common macro and DISABLE_INTS) in some cases.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2. Add the r12 clobber fix
2011-09-20  powerpc/32: Pass device tree address as u64 to machine_init  (Scott Wood)
u64 is used rather than phys_addr_t to keep things simple, as this is called from assembly code. Update callers to pass a 64-bit address in r3/r4. Other unused register assignments that were once parameters to machine_init are dropped.
For FSL BookE, look up the physical address of the device tree from the effective address passed in r3 by the loader. This is required for situations where memory does not start at zero (due to AMP or IOMMU-less virtualization), and thus the IMA doesn't start at zero, and thus the device tree effective address does not equal the physical address.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-11-29  powerpc: Remove second definition of STACK_FRAME_OVERHEAD  (Stephen Rothwell)
Since STACK_FRAME_OVERHEAD is defined in asm/ptrace.h and that is assembler safe, we can just include that instead of going via asm-offsets.h.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>