path: root/arch/arm
Age  Commit message  Author
2013-05-25  Automatically merging tracking-lsk-vexpress-iks into merge-linux-linaro-lsk  Andrey Konovalov
Conflicting files:
2013-05-25  Automatically merging tracking-iks into merge-linux-linaro-lsk  Andrey Konovalov
Conflicting files:
2013-05-25  Merge branch 'tracking-big-LITTLE-MP-upstream' into merge-linux-linaro-lsk  Andrey Konovalov
2013-05-25  ARM: bL_switcher: remove assumptions between logical and physical CPUs  (tags: tracking-lsk-vexpress-iks-lsk-20130527.0, tracking-lsk-vexpress-iks-lsk-20130525.1, tracking-lsk-vexpress-iks-lsk-20130525.0)  Nicolas Pitre
Up to now, the logical CPU was somehow tied to the physical CPU number within a cluster, which caused problems when forcing the boot on an A7. The pairing is now completely independent of physical CPU numbers.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-25  ARM: vexpress: default b.L config  Nicolas Pitre
2013-05-25  ARM: perf_event_cpu.c: fix memory corruption causing unpleasant effects  Nicolas Pitre
1) The memory obtained via alloc_percpu() is defined (and zeroed) only for those CPUs in cpu_possible_mask. For example, it is wrong to iterate using:
    for (i = 0; i < NR_CPUS; i++)
        per_cpu_ptr(cpu_pmus, i)->mpidr = -1;
This is guaranteed to corrupt memory for those CPU numbers not marked possible during CPU enumeration.
2) In cpu_pmu_free_irq(), an occasional cpu_pmu->mpidr of -1 (meaning uninitialized) was nevertheless passed to find_logical_cpu(), which ended up returning very creative CPU numbers. This was then used with this line:
    if (!cpumask_test_and_clear_cpu(cpu, &pmu->active_irqs))
This corrupted memory due to the pmu->active_irqs overflow, and provided rather random condition results. What made this bug even nastier is the fact that a slight change in code placement due to compiler version, kernel config options or even added debugging traces could totally change the bug symptom.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
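A minimal sketch of the corrected allocation and initialisation pattern follows; the structure and field names (cpu_pmu_data, mpidr) are illustrative stand-ins, not the exact types from the patch:
    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/percpu.h>

    /* Illustrative stand-in for the real per-CPU PMU state. */
    struct cpu_pmu_data {
        int mpidr;
    };

    static struct cpu_pmu_data __percpu *cpu_pmus;

    static int __init cpu_pmus_init(void)
    {
        int cpu;

        cpu_pmus = alloc_percpu(struct cpu_pmu_data);
        if (!cpu_pmus)
            return -ENOMEM;

        /*
         * Visit only CPUs in cpu_possible_mask: per-cpu storage is
         * defined for these alone, unlike a 0..NR_CPUS-1 loop.
         */
        for_each_possible_cpu(cpu)
            per_cpu_ptr(cpu_pmus, cpu)->mpidr = -1;

        return 0;
    }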
2013-05-25  ARM: perf: [WIP] Skip PMU register save/restore when no active counters  Dave Martin
This patch checks whether any counters are active in the PMU's per-CPU event_mask before attempting save/restore. In practice, this means that the save/restore is skipped if there is no active perf session. If there are no active counters, nothing is saved or restored. The PMU is still reset and quiesced on the restore path, as previously.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2013-05-25  ARM: perf: [WIP] Map PMU IRQ affinities correctly  Dave Martin
This patch determines the (mpidr,irq) pairings associated with each PMU from the DT at probe time, and uses this information to work out which IRQs to request for which logical CPU when enabling an event on a PMU. This patch also ensures that each PMU's init function is called on a CPU of the correct type. Previously, this was relying on luck.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2013-05-25  ARM: perf: [WIP] Initial bL switcher support  Dave Martin
This patch adds preliminary, highly experimental perf support for CONFIG_BL_SWITCHER=y. In this configuration, every PMU is registered as valid for every logical CPU, in a way which covers all the combinations which will be seen at runtime, regardless of whether the switcher is enabled or not. Tracking of which PMUs are active at a given point in time is delegated to the lower-level abstractions in perf_event_v7.c.
Warning: this patch does not handle PMU interrupt affinities correctly. Because of the way the switcher pairs up CPUs, this does not cause a problem when the switcher is active; however, interrupts may be directed to the wrong CPU when the switcher is disabled. This will result in spurious interrupts and wrong event counts.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2013-05-25  ARM: perf: [WIP] Manipulate the right shadow register for PM*CLR  Dave Martin
Where the ARM Architecture exposes PM*SET and PM*CLR, these really manipulate the same underlying register. This patch uses the PM*SET register for storing the logical state when the PMU is not active, and manipulates that state when the code attempts to access the corresponding PM*CLR register. PMOVSR is a special case: this is a reset-only register, so the logical copy of PMOVSR is always used. These changes result in a small number of unused fields in the armv7_pmu_logical_state structure. For now, this is considered to be harmless -- it may be tidied up later.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2013-05-25  ARM: perf: [WIP] Check PMU is valid for the CPU in armpmu_disable()  Dave Martin
This hack is required in order to be able to manipulate the CPU logical state safely in the absence of the b.L switcher, for test purposes. The other similar checks are already present in the b.L MP perf support patches. Normally, only the physical PMU state should be manipulated in a kernel which doesn't include the switcher, so it may be possible to remove this patch later.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2013-05-25  ARM: perf: [WIP] Add a separate cpu_init() method for ARM PMUs  Dave Martin
We need to allocate some per-CPU PMU data outside of atomic context, along with other actions required for setting up the cpu_pmu struct. This code does not need to run on any particular CPU, so we call this after the per-CPU init method is called.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2013-05-25  ARM: perf: [WIP] Add register emulation for offline ARMv7 PMUs  Dave Martin
This patch aims to provide basic register file functionality for ARMv7 CPU PMUs while the PMU is offline. It is incomplete and lacks the necessary plumbing to actually make use of this, but the extra code needed is not expected to be large or complex. Save/restore are ported over to the register emulation framework, since the offline logical state of the CPU matches exactly what needs to be captured in save/restore. Because this patch is rather invasive, it should be dropped in the future in favour of higher-level abstraction before merging upstream.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2013-05-25  ARM: perf: Allow multiple CPU PMUs per CPU  Dave Martin
In a system where Linux logical CPUs can migrate between different physical CPUs, multiple CPU PMUs can logically count events for each logical CPU, as logical CPUs migrate from one cluster to another. This patch allows multiple PMUs to be registered against each CPU. The pairing of a PMU and a CPU is represented by a struct arm_cpu_pmu, with existing per-CPU state used by perf moving into this structure. arm_cpu_pmus are per-cpu-allocated, and hang off the relevant arm_pmu structure. This arrangement allows us to find all the CPU-PMU pairings for a given PMU, but not for a given CPU. To do the latter, a list of all registered CPU PMUs is maintained, and we iterate over that when we need to find all of a CPU's CPU PMUs. This is not elegant, but it shouldn't be a heavy cost since the number of different CPU PMUs across the system is currently expected to be low (i.e., 2 or fewer). This could be improved later.
As a side-effect, the get_hw_events() method no longer has enough context to provide an answer, because there may be multiple candidate PMUs for a CPU. This patch adds the struct arm_pmu * for the relevant PMU to this interface to resolve this problem, resulting in trivial changes to various ARM PMU implementations.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
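A rough sketch of the arrangement described here; the field and list names are illustrative, not necessarily those used in the patch:
    #include <linux/list.h>
    #include <linux/percpu.h>
    #include <linux/types.h>

    struct arm_cpu_pmu {
        bool valid;                 /* is this pairing usable on this CPU? */
        /* ... per-CPU perf state that used to be global ... */
    };

    struct arm_pmu {
        struct list_head class_pmus_list;       /* links all CPU PMUs */
        struct arm_cpu_pmu __percpu *cpu_pmus;  /* one pairing per CPU */
        /* ... */
    };

    /*
     * Global list of registered CPU PMUs: finding all pairings for one
     * CPU is a walk over this list, checking each PMU's per-cpu
     * arm_cpu_pmu entry for that CPU.
     */
    static LIST_HEAD(cpu_pmus_list);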
2013-05-25  ARM: perf: save/restore pmu registers in pm notifier  Sudeep KarkadaNagesha
This adds core support for saving and restoring CPU PMU registers for suspend/resume support, i.e. deeper C-states in cpuidle terms. This patch adds save/restore support only for the ARMv7 PMU registers; it can be extended to XScale and ARMv6 if needed.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
2013-05-25  ARM: perf: set cpu affinity for the irqs correctly  Sudeep KarkadaNagesha
This patch sets the cpu affinity for the perf IRQs in the logical order within the cluster. However, interrupts are assumed to be specified in the same logical order within the cluster.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
2013-05-25  ARM: perf: set cpu affinity to support multiple PMUs  Sudeep KarkadaNagesha
In a system with multiple heterogeneous CPU PMUs, each PMU can handle events on a subset of CPUs, probably belonging to the same cluster. This patch introduces a cpumask to track which CPUs each PMU supports. It also updates armpmu_event_init to reject cpu-specific events being initialised for unsupported CPUs. Since process-specific events can be initialised for all the CPU PMUs, armpmu_start/stop/add are modified to prevent them from being added on unsupported CPUs.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
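A sketch of the described check, assuming a supported_cpus cpumask in struct arm_pmu and the usual to_arm_pmu() container_of() helper; the surrounding initialisation logic is elided:
    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <linux/perf_event.h>

    static int armpmu_event_init(struct perf_event *event)
    {
        struct arm_pmu *armpmu = to_arm_pmu(event->pmu);

        /*
         * CPU-specific events (event->cpu >= 0) must target a CPU
         * this PMU can actually count on; per-task events pass through.
         */
        if (event->cpu >= 0 &&
            !cpumask_test_cpu(event->cpu, &armpmu->supported_cpus))
            return -ENOENT;

        /* ... normal event initialisation continues here ... */
        return 0;
    }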
2013-05-25  ARM: perf: register CPU PMUs with idr types  Sudeep KarkadaNagesha
In order to support multiple, heterogeneous CPU PMUs and distinguish them, they cannot be registered as PERF_TYPE_RAW type. Instead we can get the perf core to allocate a new idr type id for each PMU. Userspace applications can refer to sysfs entries to find a PMU's type, which can then be used in tracking events on individual PMUs.
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
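For reference, a sketch of how a dynamic type id is obtained: passing -1 as the type to perf_pmu_register() asks the perf core to allocate an idr-based type rather than a fixed one such as PERF_TYPE_RAW.
    #include <linux/perf_event.h>

    static int cpu_pmu_register(struct pmu *pmu, const char *name)
    {
        /* -1: let the perf core allocate a new dynamic type id. */
        return perf_pmu_register(pmu, name, -1);
    }
Userspace can then read the allocated id from sysfs (under /sys/bus/event_source/devices/<name>/type) and place it in perf_event_attr.type when opening events.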
2013-05-25  ARM: perf: replace global CPU PMU pointer with per-cpu pointers  Sudeep KarkadaNagesha
A single global CPU PMU pointer is not useful in a system with multiple, heterogeneous CPU PMUs as we need to access the relevant PMU depending on the current CPU. This patch replaces the single global CPU PMU pointer with per-cpu pointers and changes the OProfile accessors to refer to the PMU affine to CPU0.
Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
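A minimal sketch of the replacement, with illustrative names:
    #include <linux/percpu.h>

    struct arm_pmu;

    /* Was: static struct arm_pmu *cpu_pmu; -- now one pointer per CPU. */
    static DEFINE_PER_CPU(struct arm_pmu *, cpu_pmu);

    /* OProfile only ever looks at the PMU affine to CPU0: */
    static struct arm_pmu *oprofile_arm_pmu(void)
    {
        return per_cpu(cpu_pmu, 0);
    }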
2013-05-25  ARM: bL_switcher: Add query interface to discover CPU affinities  Dave Martin
When the switcher is active, there is no straightforward way to figure out which logical CPU a given physical CPU maps to. This patch provides a function bL_switcher_get_logical_index(mpidr), which is analogous to get_logical_index(). This function returns the logical CPU on which the specified physical CPU is grouped (or -EINVAL if unknown). If the switcher is inactive or not present, -EUNATCH is returned instead.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
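A sketch of a caller under the semantics described above; mpidr_to_logical() is an illustrative wrapper, and bL_switcher_get_logical_index() is the interface this patch adds:
    #include <linux/errno.h>
    #include <linux/types.h>
    #include <asm/smp_plat.h>       /* get_logical_index() */

    static int mpidr_to_logical(u32 mpidr)
    {
        int lcpu = bL_switcher_get_logical_index(mpidr);

        /* Switcher inactive or absent: fall back to the plain mapping. */
        if (lcpu == -EUNATCH)
            return get_logical_index(mpidr);

        return lcpu;    /* logical CPU, or -EINVAL if unknown */
    }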
2013-05-25  Merge remote-tracking branch 'nico/iks' into lsk-3.9-vexpress-iks  Andrey Konovalov
Conflicts:
    arch/arm/Kconfig
    arch/arm/common/Makefile
2013-05-13  ARM: bL_switcher: add a simple /dev user interface for debugging purposes  (tags: tracking-iks-manifest-20130614.0, tracking-iks-manifest-20130613.0, tracking-iks-lsk-20130528.0, tracking-iks-lsk-20130527.0, tracking-iks-lsk-20130525.1, tracking-iks-lsk-20130525.0, tracking-iks-lsk-20130522.0, v3.9/iks)  Nicolas Pitre
Only the basics, to aid debugging. Usage:
    echo <cpuid>,<clusterid> > /dev/b.L_switcher
where <cpuid> is between 0 and 3, and <clusterid> is 0 for the A15 cluster and 1 for the A7 cluster.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
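A sketch of how such a device can be implemented as a misc character device; everything here except bL_switch_request() (the switcher core entry point) and the device name is illustrative:
    #include <linux/errno.h>
    #include <linux/fs.h>
    #include <linux/miscdevice.h>
    #include <linux/module.h>
    #include <linux/uaccess.h>

    static ssize_t bl_switcher_write(struct file *file, const char __user *buf,
                                     size_t len, loff_t *pos)
    {
        unsigned char val[3];

        if (len < 3)
            return -EINVAL;
        if (copy_from_user(val, buf, 3))
            return -EFAULT;

        /* Expect "<cpuid>,<clusterid>" with single-digit fields. */
        if (val[0] < '0' || val[0] > '3' || val[1] != ',' ||
            val[2] < '0' || val[2] > '1')
            return -EINVAL;

        bL_switch_request(val[0] - '0', val[2] - '0');
        return len;
    }

    static const struct file_operations bl_switcher_fops = {
        .owner = THIS_MODULE,
        .write = bl_switcher_write,
    };

    static struct miscdevice bl_switcher_device = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "b.L_switcher",
        .fops  = &bl_switcher_fops,
    };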
2013-05-13  ARM: bL_switcher/trace: Add kernel trace trigger interface  Dave Martin
This patch exports a bL_switcher_trace_trigger() function to provide a means for drivers using the trace events to get the current status when starting a trace session. Calling this function is equivalent to pinging the trace_trigger file in sysfs.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2013-05-13  ARM: bL_switcher/trace: Add trace trigger for trace bootstrapping  Dave Martin
When tracing switching, an external tracer needs a way to bootstrap its knowledge of the logical<->physical CPU mapping. This patch adds a sysfs attribute trace_trigger. A write to this attribute will generate a power:cpu_migrate_current event for each online CPU, indicating the current physical CPU for each logical CPU. Activating or deactivating the switcher also generates these events, so that the tracer knows about the resulting remapping of affected CPUs.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
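A sketch of the attribute wiring, reusing the bL_switcher_trace_trigger() function from the previous patch; the store-function name is illustrative, and the attribute macros are the standard kobject ones:
    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    static ssize_t bL_switcher_trace_trigger_store(struct kobject *kobj,
                                                   struct kobj_attribute *attr,
                                                   const char *buf, size_t count)
    {
        int ret = bL_switcher_trace_trigger();

        return ret ? ret : count;
    }

    /* Write-only: pinging this file emits cpu_migrate_current per CPU. */
    static struct kobj_attribute bL_switcher_trace_trigger_attr =
        __ATTR(trace_trigger, 0200, NULL, bL_switcher_trace_trigger_store);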
2013-05-13  ARM: bL_switcher: Basic trace events support  Dave Martin
This patch adds simple trace events to the b.L switcher code to allow tracing of CPU migration events. To make use of the trace events, you will need:
    CONFIG_FTRACE=y
    CONFIG_ENABLE_DEFAULT_TRACERS=y
The following events are added:
    * power:cpu_migrate_begin
    * power:cpu_migrate_finish
each with the following data:
    u64 timestamp;
    u32 cpu_hwid;
power:cpu_migrate_begin occurs immediately before the switcher-specific migration operations start. power:cpu_migrate_finish occurs immediately when migration is completed. The cpu_hwid field contains the ID fields of the MPIDR.
    * For power:cpu_migrate_begin, cpu_hwid is the ID of the outbound physical CPU (equivalent to (from_phys_cpu,from_phys_cluster)).
    * For power:cpu_migrate_finish, cpu_hwid is the ID of the inbound physical CPU (equivalent to (to_phys_cpu,to_phys_cluster)).
By design, the cpu_hwid field is masked in the same way as the device tree cpu node reg property, allowing direct correlation to the DT description of the hardware. The timestamp is added in order to minimise timing noise. An accurate system-wide clock should be used for generating this (hopefully getnstimeofday is appropriate, but it could be changed). It could be any monotonic shared clock, since the aim is to allow accurate deltas to be computed. We don't necessarily care about accurate synchronisation with wall clock time. In practice, each switch takes place on a single logical CPU, and the trace infrastructure should guarantee that events are well-ordered with respect to a single logical CPU.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
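As an illustration, one of the two events could be declared roughly as follows; this is a sketch of the usual TRACE_EVENT() pattern built from the fields described above, not necessarily the literal header from the patch:
    #include <linux/tracepoint.h>

    TRACE_EVENT(cpu_migrate_begin,

        TP_PROTO(u64 timestamp, u32 cpu_hwid),

        TP_ARGS(timestamp, cpu_hwid),

        TP_STRUCT__entry(
            __field(u64, timestamp)
            __field(u32, cpu_hwid)
        ),

        TP_fast_assign(
            __entry->timestamp = timestamp;
            __entry->cpu_hwid  = cpu_hwid;
        ),

        TP_printk("timestamp=%llu cpu_hwid=0x%08lX",
                  (unsigned long long)__entry->timestamp,
                  (unsigned long)__entry->cpu_hwid)
    );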
2013-05-13  ARM: bL_switcher: Add runtime control notifier  Dave Martin
Some subsystems will need to respond synchronously to runtime enabling and disabling of the switcher. This patch adds a dedicated notifier interface to support such subsystems. Pre- and post- enable/disable notifications are sent to registered callbacks, allowing safe transition of non-b.L-transparent subsystems across these control transitions. Notifier callbacks may veto switcher (de)activation on pre notifications only. Post notifications won't revert the action.
If enabling or disabling of the switcher fails after the pre-change notification has been sent, subsystems which have registered notifiers can be left in an inappropriate state. This patch sends a suitable post-change notification on failure, indicating that the old state has been reestablished. For example, a failed initialisation will result in the following sequence:
    BL_NOTIFY_PRE_ENABLE
    /* switcher initialisation fails */
    BL_NOTIFY_POST_DISABLE
It is the responsibility of notified subsystems to respond in an appropriate way.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
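A sketch of a client subsystem; the registration helper name, the my_subsystem_*() checks, and the disable-side constants are inferred from the description (only BL_NOTIFY_PRE_ENABLE and BL_NOTIFY_POST_DISABLE are named explicitly above):
    #include <linux/notifier.h>

    static int my_bl_notify(struct notifier_block *nb,
                            unsigned long action, void *data)
    {
        switch (action) {
        case BL_NOTIFY_PRE_ENABLE:
        case BL_NOTIFY_PRE_DISABLE:
            /* Only PRE notifications may veto the transition. */
            if (!my_subsystem_can_transition())  /* illustrative */
                return NOTIFY_BAD;
            return NOTIFY_OK;
        case BL_NOTIFY_POST_ENABLE:
        case BL_NOTIFY_POST_DISABLE:
            my_subsystem_resync();               /* illustrative */
            return NOTIFY_OK;
        default:
            return NOTIFY_DONE;
        }
    }

    static struct notifier_block my_bl_nb = {
        .notifier_call = my_bl_notify,
    };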
2013-05-13  ARM: bL_switcher: Add synchronous enable/disable interface  Dave Martin
Some subsystems will need to know for sure whether the switcher is enabled or disabled during certain critical regions. This patch provides a simple mutex-based mechanism to discover whether the switcher is enabled and temporarily lock out further enable/disable:
    * bL_switcher_get_enabled() returns true iff the switcher is enabled and temporarily inhibits enable/disable.
    * bL_switcher_put_enabled() permits enable/disable of the switcher again after a previous call to bL_switcher_get_enabled().
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
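The intended usage pattern looks roughly like this (my_do_work() is illustrative):
    static void my_critical_region(void)
    {
        /* Inhibits switcher enable/disable until the matching put. */
        bool enabled = bL_switcher_get_enabled();

        my_do_work(enabled);

        bL_switcher_put_enabled();
    }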
2013-05-13  ARM: bL_switcher: wait until inbound is alive before performing a switch  Nicolas Pitre
In some cases, a significant delay may be observed between the moment a request for a CPU to come up is made and the moment it is ready to start executing kernel code. This is especially true when a whole cluster has to be powered up, which may take on the order of milliseconds. It is therefore a good idea to let the outbound CPU continue to execute code in the meantime, and be notified when the inbound is ready before performing the actual switch. This is achieved by registering a completion block with the appropriate IPI callback, and programming the sending of an IPI by the early assembly code prior to entering the main kernel code. Once the IPI is delivered to the outbound CPU, the completion block is "completed" and the switcher thread is resumed.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: mcpm: add a simple poke mechanism to the early entry code  Nicolas Pitre
This allows a predetermined value to be poked into a specific address upon entering the early boot code in bL_head.S.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: SMP: basic IPI triggered completion support  Nicolas Pitre
We need a mechanism to let an inbound CPU signal that it is alive before even getting into the kernel environment, i.e. from early assembly code. Using an IPI is the simplest way to achieve that. This adds some basic infrastructure to register a struct completion pointer to be "completed" when the dedicated IPI for this task is received.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
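On the caller's side this is used roughly as follows; register_ipi_completion() stands for the registration interface described here, with its signature sketched from the description rather than taken from the patch:
    #include <linux/completion.h>

    static void wait_for_inbound_alive(int cpu)
    {
        struct completion inbound_alive;

        init_completion(&inbound_alive);

        /* Completed when the dedicated "alive" IPI arrives from @cpu. */
        register_ipi_completion(&inbound_alive, cpu);

        /* ... trigger the inbound CPU's power-up here ... */

        wait_for_completion(&inbound_alive);
    }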
2013-05-13  ARM: bL_switcher: synchronize the outbound with the inbound  Nicolas Pitre
Let's wait for the inbound to come up and snoop some of our cache. That should be a bit more efficient than going down right away. Monitoring the CCI event counters could be a better approach eventually.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: bL_switcher: veto CPU hotplug requests when the switcher is active  Nicolas Pitre
Trying to support both the switcher and CPU hotplug at the same time is quickly becoming very complex for little gain. So let's simply veto any hotplug requests when the switcher is active. This restriction might be loosened a bit eventually.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: bL_switcher: add kernel cmdline param to disable the switcher on boot  Nicolas Pitre
By adding no_bL_switcher to the kernel cmdline string, the switcher won't be activated automatically at boot time. It is still possible to activate it later with:
    echo 1 > /sys/kernel/bL_switcher/active
Signed-off-by: Nicolas Pitre <nico@linaro.org>
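A sketch of how such a parameter is typically wired up with __setup(); the variable and function names are illustrative:
    #include <linux/init.h>
    #include <linux/types.h>

    static bool no_bL_switcher;

    static int __init bL_switcher_cmdline_setup(char *str)
    {
        no_bL_switcher = true;  /* skip activation at boot */
        return 1;               /* option handled */
    }
    __setup("no_bL_switcher", bL_switcher_cmdline_setup);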
2013-05-13  ARM: bL_switcher: ability to enable and disable the switcher via sysfs  Nicolas Pitre
The /sys/kernel/bL_switcher/enable file allows the switcher to be enabled or disabled by writing 1 or 0 to it respectively. It is still enabled by default on boot.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: bL_switcher: do not hardcode GIC IDs in the code  Nicolas Pitre
Currently, GIC IDs are hardcoded, making the code dependent on the x4 b.L configuration. Let's allow GIC IDs to be discovered upon switcher initialization to support other b.L configurations such as the x1 one.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: bL_switcher: hot-unplug half of the available CPUs  Nicolas Pitre
With an MP kernel, all the CPUs are initially available. The switcher model always uses half of them at any time. Let's remove half of the available CPUs and make sure we still have a working switcher configuration.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: bL_switcher: simplify stack isolation  Nicolas Pitre
We now have a dedicated thread for each logical CPU. That's plenty of stack space for our needs.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: bL_switcher: move to dedicated threads rather than workqueues  Nicolas Pitre
The workqueues are problematic as they may be contended. They can't be scheduled with top priority either. Also, the optimization in bL_switch_request() to skip the workqueue entirely when the target CPU and the calling CPU were the same didn't allow bL_switch_request() to be called from atomic context, as might be the case for some cpufreq drivers. Let's move to dedicated kthreads instead.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: bL_switcher: add clockevent save/restore support  Lorenzo Pieralisi
Per-CPU timers that are shut down when a CPU is switched over must be disabled upon switching and reprogrammed on the inbound CPU by relying on the clock events management API. The save/restore sequence is executed with irqs disabled, as mandated by the clock events API. The next_event is an absolute time, hence, when the inbound CPU resumes, if the timer has expired the min delta is forced into the tick device so that it fires after a few cycles.
This patch adds switching support for clock events that are per-CPU and have to be migrated when a switch takes place; the cpumask of the clock event device is checked against the cpumask of the current cpu, and if they match, the clockevent device mode is saved and it is put in shutdown mode. Resume code reprograms the tick device accordingly.
Tested on A15/A7 fast models with architected timers.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
2013-05-13  ARM: b.L: core switcher code  Nicolas Pitre
The main entry point for a switch request is:
    void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
If the calling CPU is not the wanted one, this wrapper takes care of sending the request to the appropriate CPU with schedule_work_on(). In the future, some switching related tasks which do not require a strict CPU affinity might be moved here though.
At the moment the core switch operation is handled by bL_switch_to(), which must be called on the CPU for which a switch is requested. What this code does:
    * Return early if the current cluster is the wanted one.
    * Close the gate in the kernel entry vector for both the inbound and outbound CPUs.
    * Wake up the inbound CPU so it can perform its reset sequence in parallel up to the kernel entry vector gate.
    * Migrate all interrupts in the GIC targeting the outbound CPU interface to the inbound CPU interface, including SGIs. This is performed by gic_migrate_target() in arch/arm/common/gic.c.
    * Shut down the local timer for the outbound CPU.
    * Call cpu_pm_enter() which takes care of flushing the VFP state to RAM and saving the CPU interface config from the GIC to RAM.
    * Call cpu_suspend() which saves the CPU state (general purpose registers, page table address) onto the stack and stores the resulting stack pointer in an array indexed by processor number, then calls the provided shutdown function. This happens in arch/arm/kernel/sleep.S.
At this point, the provided shutdown function executed by the outbound CPU ungates the inbound CPU. Therefore the inbound CPU:
    * Picks up the saved stack pointer in the array indexed by processor number above. At the moment the corresponding code in arch/arm/kernel/sleep.S only looks at the CPU number field in the MPIDR, so the current code works unmodified even if the new CPU comes from a different cluster.
    * Re-enables the MMU and caches using the saved state on the provided stack, just as if this were a resume operation from a suspended state.
    * Returns from cpu_suspend(), although this is on the inbound CPU rather than the outbound CPU which called it initially.
    * Calls cpu_pm_exit(), whose effect is to restore the CPU interface state in the GIC using the state previously saved by the outbound CPU.
    * Restores the local timer on the inbound CPU.
    * Exits bL_switch_to() to resume normal kernel execution on the new CPU.
However, the outbound CPU is potentially still running in parallel while the inbound CPU is resuming normal kernel execution, hence we need per-CPU stack isolation to execute bL_do_switch(). After the outbound CPU has ungated the inbound CPU, it calls bL_cpu_power_down() to:
    * Clean its L1 cache.
    * If it is the last CPU still alive in its cluster (last man standing), also clean its L2 cache and disable cache snooping from the other cluster.
Code called from bL_do_switch() might end up referencing 'current' for some reason. However, 'current' is derived from the stack pointer. With any arbitrary stack, the returned value for 'current' and any dereferenced values through it are just random garbage, which may lead to segmentation faults.
The active page table during the execution of bL_do_switch() is also a problem. There is no guarantee that the inbound CPU won't destroy the corresponding task, which would free the attached page table while the outbound CPU is still running and relying on it.
To solve both issues, we borrow some of the task space belonging to the init/idle task which, by its nature, is lightly used and therefore is unlikely to clash with our usage. The init task is also never going away.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
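For orientation, the sequence above condenses into the following outline of bL_switch_to(); this is a paraphrase of the description, not the literal code:
    static int bL_switch_to(unsigned int new_cluster_id)
    {
        /* 1. Return early if already on the wanted cluster. */
        /* 2. Close the kernel entry vector gate for both CPUs. */
        /* 3. Wake the inbound CPU; it resets and waits at the gate. */
        /* 4. gic_migrate_target(): move IRQs and SGIs to the inbound
         *    CPU interface. */
        /* 5. Shut down the outbound CPU's local timer. */
        /* 6. cpu_pm_enter(): flush VFP state, save the GIC CPU
         *    interface config to RAM. */
        /* 7. cpu_suspend(): save CPU state on the stack, then run the
         *    shutdown function, which ungates the inbound CPU.
         *    Execution resumes here on the inbound CPU, with MMU and
         *    caches re-enabled from the saved state. */
        /* 8. cpu_pm_exit(): restore the GIC CPU interface state. */
        /* 9. Restore the local timer and resume normal execution. */
        return 0;
    }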
2013-05-09  Merge branch 'tracking-armlt-tc2-cpufreq' into lsk-3.9-vexpress  (tag: tracking-lsk-vexpress-lsk-20130512.0)  Jon Medhurst
2013-05-09  Merge branch 'tracking-armlt-tc2-psci' into lsk-3.9-vexpress  Jon Medhurst
2013-05-09  Merge branch 'tracking-armlt-tc2-pm' into lsk-3.9-vexpress  Jon Medhurst
Conflicts:
    arch/arm/mach-vexpress/Kconfig
    arch/arm/mach-vexpress/Makefile
2013-05-09  Merge branch 'tracking-armlt-dcscb' into lsk-3.9-vexpress  Jon Medhurst
2013-05-09  Merge branch 'tracking-armlt-psci' into lsk-3.9-vexpress  Jon Medhurst
2013-05-09  Merge branch 'tracking-armlt-spc' into lsk-3.9-vexpress  Jon Medhurst
2013-05-09  Merge branch 'tracking-armlt-cci' into lsk-3.9-vexpress  Jon Medhurst
2013-05-09  Merge branch 'mcpm-merge-nico' into lsk-3.9-vexpress  Jon Medhurst
2013-05-09  Merge branch 'tracking-armlt-tc2-dt' into lsk-3.9-vexpress  Jon Medhurst
Conflicts:
    arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
2013-05-09  Merge branch 'tracking-armlt-misc-fixes' into lsk-3.9-vexpress  Jon Medhurst