path: root/drivers/cpufreq/cpufreq_governor.c
2013-08-29  cpufreq: Don't use smp_processor_id() in preemptible context  (Stephen Boyd)
Workqueues are preemptible even if works are queued on them with queue_work_on(). Let's use raw_smp_processor_id() here to silence the warning.

BUG: using smp_processor_id() in preemptible [00000000] code: kworker/3:2/674
caller is gov_queue_work+0x28/0xb0
CPU: 0 PID: 674 Comm: kworker/3:2 Tainted: G W 3.10.0 #30
Workqueue: events od_dbs_timer
[<c010c178>] (unwind_backtrace+0x0/0x11c) from [<c0109dec>] (show_stack+0x10/0x14)
[<c0109dec>] (show_stack+0x10/0x14) from [<c03885a4>] (debug_smp_processor_id+0xbc/0xf0)
[<c03885a4>] (debug_smp_processor_id+0xbc/0xf0) from [<c0635864>] (gov_queue_work+0x28/0xb0)
[<c0635864>] (gov_queue_work+0x28/0xb0) from [<c0635618>] (od_dbs_timer+0x108/0x134)
[<c0635618>] (od_dbs_timer+0x108/0x134) from [<c01aa8f8>] (process_one_work+0x25c/0x444)
[<c01aa8f8>] (process_one_work+0x25c/0x444) from [<c01aaf88>] (worker_thread+0x200/0x344)
[<c01aaf88>] (worker_thread+0x200/0x344) from [<c01b03bc>] (kthread+0xa0/0xb0)
[<c01b03bc>] (kthread+0xa0/0xb0) from [<c01061b8>] (ret_from_fork+0x14/0x3c)

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
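The fix amounts to a one-line substitution in gov_queue_work(); the surrounding code below is a paraphrased sketch, not the exact hunk from the patch:

	if (!all_cpus) {
		/* The work item can be preempted and migrated, so the CPU
		 * number is only a hint about which CPU to queue the work on;
		 * the raw_ variant avoids the debug_smp_processor_id() splat. */
		__gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
	} else {
		for_each_cpu(i, policy->cpus)
			__gov_queue_work(i, dbs_data, delay);
	}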
2013-08-28  cpufreq: governor: Fix typos in comments  (Stratos Karafotis)
- 'Governer' should be 'Governor'.
- 'S' is used for Siemens (electrical conductance) in SI units, so use small 's' for seconds.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-08-28  cpufreq: Fix timer/workqueue corruption due to double queueing  (Stephen Boyd)
When a CPU is hot removed we'll cancel all the delayed work items via gov_cancel_work(). Normally this just cancels a delayed timer on each CPU that the policy is managing and the work won't run, but if the work is already running, the workqueue code will wait for the work to finish before continuing, to prevent the work items from re-queuing themselves like they normally do. This scheme will work most of the time, except for the case where the work function determines that it should adjust the delay for all other CPUs that the policy is managing. If this scenario occurs, the canceling CPU will cancel its own work but queue up the other CPUs' works to run. For example:

 CPU0                                CPU1
 ----                                ----
 cpu_down()
  ...
  __cpufreq_remove_dev()
   cpufreq_governor_dbs()
    case CPUFREQ_GOV_STOP:
     gov_cancel_work(dbs_data, policy);
      cpu0 work is canceled
       timer is canceled
      cpu1 work is canceled           <work runs>
      <waits for cpu1>                od_dbs_timer()
                                       gov_queue_work(*, *, true);
                                        cpu0 work queued
                                        cpu1 work queued
                                        cpu2 work queued
                                        ...
      cpu1 work is canceled
      cpu2 work is canceled
      ...

At the end of the GOV_STOP case cpu0 still has a work queued to run, although the code is expecting all of the works to be canceled. __cpufreq_remove_dev() will then proceed to re-initialize all the other CPUs' works except for the CPU that is going down. The CPUFREQ_GOV_START case in cpufreq_governor_dbs() will trample over the queued work and debugobjects will spit out a warning:

WARNING: at lib/debugobjects.c:260 debug_print_object+0x94/0xbc()
ODEBUG: init active (active state 0) object type: timer_list hint: delayed_work_timer_fn+0x0/0x10
Modules linked in:
CPU: 0 PID: 1491 Comm: sh Tainted: G W 3.10.0 #19
[<c010c178>] (unwind_backtrace+0x0/0x11c) from [<c0109dec>] (show_stack+0x10/0x14)
[<c0109dec>] (show_stack+0x10/0x14) from [<c01904cc>] (warn_slowpath_common+0x4c/0x6c)
[<c01904cc>] (warn_slowpath_common+0x4c/0x6c) from [<c019056c>] (warn_slowpath_fmt+0x2c/0x3c)
[<c019056c>] (warn_slowpath_fmt+0x2c/0x3c) from [<c0388a7c>] (debug_print_object+0x94/0xbc)
[<c0388a7c>] (debug_print_object+0x94/0xbc) from [<c0388e34>] (__debug_object_init+0x2d0/0x340)
[<c0388e34>] (__debug_object_init+0x2d0/0x340) from [<c019e3b0>] (init_timer_key+0x14/0xb0)
[<c019e3b0>] (init_timer_key+0x14/0xb0) from [<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8)
[<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8) from [<c06325a0>] (__cpufreq_governor+0xdc/0x1a4)
[<c06325a0>] (__cpufreq_governor+0xdc/0x1a4) from [<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434)
[<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434) from [<c08989f4>] (cpufreq_cpu_callback+0x60/0x80)
[<c08989f4>] (cpufreq_cpu_callback+0x60/0x80) from [<c08a43c0>] (notifier_call_chain+0x38/0x68)
[<c08a43c0>] (notifier_call_chain+0x38/0x68) from [<c01938e0>] (__cpu_notify+0x28/0x40)
[<c01938e0>] (__cpu_notify+0x28/0x40) from [<c0892ad4>] (_cpu_down+0x7c/0x2c0)
[<c0892ad4>] (_cpu_down+0x7c/0x2c0) from [<c0892d3c>] (cpu_down+0x24/0x40)
[<c0892d3c>] (cpu_down+0x24/0x40) from [<c0893ea8>] (store_online+0x2c/0x74)
[<c0893ea8>] (store_online+0x2c/0x74) from [<c04519d8>] (dev_attr_store+0x18/0x24)
[<c04519d8>] (dev_attr_store+0x18/0x24) from [<c02a69d4>] (sysfs_write_file+0x100/0x148)
[<c02a69d4>] (sysfs_write_file+0x100/0x148) from [<c0255c18>] (vfs_write+0xcc/0x174)
[<c0255c18>] (vfs_write+0xcc/0x174) from [<c0255f70>] (SyS_write+0x38/0x64)
[<c0255f70>] (SyS_write+0x38/0x64) from [<c0106120>] (ret_fast_syscall+0x0/0x30)

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-08-14  Merge back earlier 'pm-cpufreq' material  (Rafael J. Wysocki)
2013-08-07  cpufreq: Clean up header files included in the core  (Viresh Kumar)
This patch addresses the following issues in the header files in the cpufreq core:
- Include headers in ascending order, so that we don't add the same header twice by mistake.
- <asm/> headers must be included after <linux/> ones, so that they can override whatever they need to.
- Remove unnecessary includes.
- Don't include files already included by cpufreq.h or cpufreq_governor.h.

[rjw: Changelog]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-08-07  cpufreq: rename ignore_nice as ignore_nice_load  (Viresh Kumar)
This sysfs file was called ignore_nice_load earlier, and commit 4d5dcc4 (cpufreq: governor: Implement per policy instances of governors) changed its name to ignore_nice by mistake. Let's rename it back to its original name.

Reported-by: Martin von Gagern <Martin.vGagern@gmx.net>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 3.10+ <stable@vger.kernel.org> # 3.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-07-26  cpufreq: ondemand: Change the calculation of target frequency  (Stratos Karafotis)
The ondemand governor calculates load in terms of frequency and increases it only if load_freq is greater than up_threshold multiplied by the current or average frequency. This appears to produce oscillations of frequency between min and max because, for example, a relatively small load can easily saturate the minimum frequency and lead the CPU to the max. Then, it will decrease back to the min due to a small load_freq.

Change the calculation method of load and target frequency on the basis of the following two observations:
- Load computation should not depend on the current or average measured frequency. For example, an absolute load of 80% at 100MHz is not necessarily equivalent to 8% at 1000MHz in the next sampling interval.
- It should be possible to increase the target frequency to any value present in the frequency table, proportional to the absolute load, rather than to the max only, so that: Target frequency = C * load, where we take C = policy->cpuinfo.max_freq / 100.

Tested on Intel i7-3770 CPU @ 3.40GHz and on Quad core 1500MHz Krait. Phoronix benchmark of Linux Kernel Compilation 3.1 test shows an increase of ~1.5% in performance. cpufreq_stats (time_in_state) shows that middle frequencies are used more with this patch. Highest and lowest frequencies were used less by ~9%.

[rjw: We have run multiple other tests on kernels with this change applied and in the vast majority of cases it turns out that the resulting performance improvement also leads to reduced consumption of energy. The change is additionally justified by the overall simplification of the code in question.]
Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
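In code terms, the proportional selection boils down to something like the following sketch (the CPUFREQ_RELATION_L choice and surrounding details are illustrative, not a quote of the actual hunk):

	/* load is the absolute load, in percent, over the last sampling interval */
	unsigned int freq_next = load * policy->cpuinfo.max_freq / 100;

	/* pick the closest supported table frequency at or above freq_next;
	 * the previous code instead jumped straight to the maximum on high load */
	__cpufreq_driver_target(policy, freq_next, CPUFREQ_RELATION_L);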
2013-07-16  cpufreq: Revert commit 2f7021a8 to fix CPU hotplug regression  (Srivatsa S. Bhat)
commit 2f7021a8 "cpufreq: protect 'policy->cpus' from offlining during __gov_queue_work()" caused a regression in CPU hotplug, because it lead to a deadlock between cpufreq governor worker thread and the CPU hotplug writer task. Lockdep splat corresponding to this deadlock is shown below: [ 60.277396] ====================================================== [ 60.277400] [ INFO: possible circular locking dependency detected ] [ 60.277407] 3.10.0-rc7-dbg-01385-g241fd04-dirty #1744 Not tainted [ 60.277411] ------------------------------------------------------- [ 60.277417] bash/2225 is trying to acquire lock: [ 60.277422] ((&(&j_cdbs->work)->work)){+.+...}, at: [<ffffffff810621b5>] flush_work+0x5/0x280 [ 60.277444] but task is already holding lock: [ 60.277449] (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff81042d8b>] cpu_hotplug_begin+0x2b/0x60 [ 60.277465] which lock already depends on the new lock. [ 60.277472] the existing dependency chain (in reverse order) is: [ 60.277477] -> #2 (cpu_hotplug.lock){+.+.+.}: [ 60.277490] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200 [ 60.277503] [<ffffffff815b6157>] mutex_lock_nested+0x67/0x410 [ 60.277514] [<ffffffff81042cbc>] get_online_cpus+0x3c/0x60 [ 60.277522] [<ffffffff814b842a>] gov_queue_work+0x2a/0xb0 [ 60.277532] [<ffffffff814b7891>] cs_dbs_timer+0xc1/0xe0 [ 60.277543] [<ffffffff8106302d>] process_one_work+0x1cd/0x6a0 [ 60.277552] [<ffffffff81063d31>] worker_thread+0x121/0x3a0 [ 60.277560] [<ffffffff8106ae2b>] kthread+0xdb/0xe0 [ 60.277569] [<ffffffff815bb96c>] ret_from_fork+0x7c/0xb0 [ 60.277580] -> #1 (&j_cdbs->timer_mutex){+.+...}: [ 60.277592] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200 [ 60.277600] [<ffffffff815b6157>] mutex_lock_nested+0x67/0x410 [ 60.277608] [<ffffffff814b785d>] cs_dbs_timer+0x8d/0xe0 [ 60.277616] [<ffffffff8106302d>] process_one_work+0x1cd/0x6a0 [ 60.277624] [<ffffffff81063d31>] worker_thread+0x121/0x3a0 [ 60.277633] [<ffffffff8106ae2b>] kthread+0xdb/0xe0 [ 60.277640] [<ffffffff815bb96c>] ret_from_fork+0x7c/0xb0 [ 60.277649] -> #0 ((&(&j_cdbs->work)->work)){+.+...}: [ 60.277661] [<ffffffff810ab826>] __lock_acquire+0x1766/0x1d30 [ 60.277669] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200 [ 60.277677] [<ffffffff810621ed>] flush_work+0x3d/0x280 [ 60.277685] [<ffffffff81062d8a>] __cancel_work_timer+0x8a/0x120 [ 60.277693] [<ffffffff81062e53>] cancel_delayed_work_sync+0x13/0x20 [ 60.277701] [<ffffffff814b89d9>] cpufreq_governor_dbs+0x529/0x6f0 [ 60.277709] [<ffffffff814b76a7>] cs_cpufreq_governor_dbs+0x17/0x20 [ 60.277719] [<ffffffff814b5df8>] __cpufreq_governor+0x48/0x100 [ 60.277728] [<ffffffff814b6b80>] __cpufreq_remove_dev.isra.14+0x80/0x3c0 [ 60.277737] [<ffffffff815adc0d>] cpufreq_cpu_callback+0x38/0x4c [ 60.277747] [<ffffffff81071a4d>] notifier_call_chain+0x5d/0x110 [ 60.277759] [<ffffffff81071b0e>] __raw_notifier_call_chain+0xe/0x10 [ 60.277768] [<ffffffff815a0a68>] _cpu_down+0x88/0x330 [ 60.277779] [<ffffffff815a0d46>] cpu_down+0x36/0x50 [ 60.277788] [<ffffffff815a2748>] store_online+0x98/0xd0 [ 60.277796] [<ffffffff81452a28>] dev_attr_store+0x18/0x30 [ 60.277806] [<ffffffff811d9edb>] sysfs_write_file+0xdb/0x150 [ 60.277818] [<ffffffff8116806d>] vfs_write+0xbd/0x1f0 [ 60.277826] [<ffffffff811686fc>] SyS_write+0x4c/0xa0 [ 60.277834] [<ffffffff815bbbbe>] tracesys+0xd0/0xd5 [ 60.277842] other info that might help us debug this: [ 60.277848] Chain exists of: (&(&j_cdbs->work)->work) --> &j_cdbs->timer_mutex --> cpu_hotplug.lock [ 60.277864] Possible unsafe locking scenario: [ 60.277869] CPU0 CPU1 [ 60.277873] ---- ---- [ 
60.277877] lock(cpu_hotplug.lock); [ 60.277885] lock(&j_cdbs->timer_mutex); [ 60.277892] lock(cpu_hotplug.lock); [ 60.277900] lock((&(&j_cdbs->work)->work)); [ 60.277907] *** DEADLOCK *** [ 60.277915] 6 locks held by bash/2225: [ 60.277919] #0: (sb_writers#6){.+.+.+}, at: [<ffffffff81168173>] vfs_write+0x1c3/0x1f0 [ 60.277937] #1: (&buffer->mutex){+.+.+.}, at: [<ffffffff811d9e3c>] sysfs_write_file+0x3c/0x150 [ 60.277954] #2: (s_active#61){.+.+.+}, at: [<ffffffff811d9ec3>] sysfs_write_file+0xc3/0x150 [ 60.277972] #3: (x86_cpu_hotplug_driver_mutex){+.+...}, at: [<ffffffff81024cf7>] cpu_hotplug_driver_lock+0x17/0x20 [ 60.277990] #4: (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff815a0d32>] cpu_down+0x22/0x50 [ 60.278007] #5: (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff81042d8b>] cpu_hotplug_begin+0x2b/0x60 [ 60.278023] stack backtrace: [ 60.278031] CPU: 3 PID: 2225 Comm: bash Not tainted 3.10.0-rc7-dbg-01385-g241fd04-dirty #1744 [ 60.278037] Hardware name: Acer Aspire 5741G /Aspire 5741G , BIOS V1.20 02/08/2011 [ 60.278042] ffffffff8204e110 ffff88014df6b9f8 ffffffff815b3d90 ffff88014df6ba38 [ 60.278055] ffffffff815b0a8d ffff880150ed3f60 ffff880150ed4770 3871c4002c8980b2 [ 60.278068] ffff880150ed4748 ffff880150ed4770 ffff880150ed3f60 ffff88014df6bb00 [ 60.278081] Call Trace: [ 60.278091] [<ffffffff815b3d90>] dump_stack+0x19/0x1b [ 60.278101] [<ffffffff815b0a8d>] print_circular_bug+0x2b6/0x2c5 [ 60.278111] [<ffffffff810ab826>] __lock_acquire+0x1766/0x1d30 [ 60.278123] [<ffffffff81067e08>] ? __kernel_text_address+0x58/0x80 [ 60.278134] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200 [ 60.278142] [<ffffffff810621b5>] ? flush_work+0x5/0x280 [ 60.278151] [<ffffffff810621ed>] flush_work+0x3d/0x280 [ 60.278159] [<ffffffff810621b5>] ? flush_work+0x5/0x280 [ 60.278169] [<ffffffff810a9b14>] ? mark_held_locks+0x94/0x140 [ 60.278178] [<ffffffff81062d77>] ? __cancel_work_timer+0x77/0x120 [ 60.278188] [<ffffffff810a9cbd>] ? trace_hardirqs_on_caller+0xfd/0x1c0 [ 60.278196] [<ffffffff81062d8a>] __cancel_work_timer+0x8a/0x120 [ 60.278206] [<ffffffff81062e53>] cancel_delayed_work_sync+0x13/0x20 [ 60.278214] [<ffffffff814b89d9>] cpufreq_governor_dbs+0x529/0x6f0 [ 60.278225] [<ffffffff814b76a7>] cs_cpufreq_governor_dbs+0x17/0x20 [ 60.278234] [<ffffffff814b5df8>] __cpufreq_governor+0x48/0x100 [ 60.278244] [<ffffffff814b6b80>] __cpufreq_remove_dev.isra.14+0x80/0x3c0 [ 60.278255] [<ffffffff815adc0d>] cpufreq_cpu_callback+0x38/0x4c [ 60.278265] [<ffffffff81071a4d>] notifier_call_chain+0x5d/0x110 [ 60.278275] [<ffffffff81071b0e>] __raw_notifier_call_chain+0xe/0x10 [ 60.278284] [<ffffffff815a0a68>] _cpu_down+0x88/0x330 [ 60.278292] [<ffffffff81024cf7>] ? cpu_hotplug_driver_lock+0x17/0x20 [ 60.278302] [<ffffffff815a0d46>] cpu_down+0x36/0x50 [ 60.278311] [<ffffffff815a2748>] store_online+0x98/0xd0 [ 60.278320] [<ffffffff81452a28>] dev_attr_store+0x18/0x30 [ 60.278329] [<ffffffff811d9edb>] sysfs_write_file+0xdb/0x150 [ 60.278337] [<ffffffff8116806d>] vfs_write+0xbd/0x1f0 [ 60.278347] [<ffffffff81185950>] ? fget_light+0x320/0x4b0 [ 60.278355] [<ffffffff811686fc>] SyS_write+0x4c/0xa0 [ 60.278364] [<ffffffff815bbbbe>] tracesys+0xd0/0xd5 [ 60.280582] smpboot: CPU 1 is now offline The intention of that commit was to avoid warnings during CPU hotplug, which indicated that offline CPUs were getting IPIs from the cpufreq governor's work items. 
But the real root-cause of that problem was commit a66b2e5 (cpufreq: Preserve sysfs files across suspend/resume) because it totally skipped all the cpufreq callbacks during CPU hotplug in the suspend/resume path, and hence it never actually shut down the cpufreq governor's worker threads during CPU offline in the suspend/resume path. Reflecting back, the reason why we never suspected that commit as the root-cause earlier, was that the original issue was reported with just the halt command and nobody had brought in suspend/resume to the equation. The reason for _that_ in turn, as it turns out, is that earlier halt/shutdown was being done by disabling non-boot CPUs while tasks were frozen, just like suspend/resume.... but commit cf7df378a (reboot: migrate shutdown/reboot to boot cpu) which came somewhere along that very same time changed that logic: shutdown/halt no longer takes CPUs offline. Thus, the test-cases for reproducing the bug were vastly different and thus we went totally off the trail. Overall, it was one hell of a confusion with so many commits affecting each other and also affecting the symptoms of the problems in subtle ways. Finally, now since the original problematic commit (a66b2e5) has been completely reverted, revert this intermediate fix too (2f7021a8), to fix the CPU hotplug deadlock. Phew! Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Tested-by: Peter Wu <lekensteyn@gmail.com> Cc: 3.10+ <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-06-27  cpufreq: don't leave stale policy pointer in cdbs->cur_policy  (Jacob Shin)
Clear ->cur_policy when stopping a governor, or the ->cur_policy pointer may be stale on systems with have_governor_per_policy when a new policy is allocated due to CPU hotplug offline/online. [rjw: Changelog] Suggested-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Jacob Shin <jacob.shin@amd.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
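The fix is essentially a one-liner in the CPUFREQ_GOV_STOP path of cpufreq_governor_dbs(); a sketch of the shape of the change, with the surrounding lines paraphrased rather than quoted:

	case CPUFREQ_GOV_STOP:
		gov_cancel_work(dbs_data, policy);
		/* ... */
		cpu_cdbs->cur_policy = NULL;	/* was left dangling before this fix */
		break;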
2013-06-27  Merge branch 'pm-cpufreq-assorted' into pm-cpufreq  (Rafael J. Wysocki)
* pm-cpufreq-assorted: (21 commits)
  cpufreq: powernow-k8: call CPUFREQ_POSTCHANGE notifier in error cases
  cpufreq: pcc: call CPUFREQ_POSTCHANGE notifier in error cases
  cpufreq: e_powersaver: call CPUFREQ_POSTCHANGE notifier in error cases
  cpufreq: ACPI: call CPUFREQ_POSTCHANGE notifier in error cases
  cpufreq: make __cpufreq_notify_transition() static
  cpufreq: Fix minor formatting issues
  cpufreq: Fix governor start/stop race condition
  cpufreq: Simplify userspace governor
  cpufreq: powerpc: move cpufreq driver to drivers/cpufreq
  cpufreq: kirkwood: Select CPU_FREQ_TABLE option
  cpufreq: big.LITTLE needs cpufreq table
  cpufreq: SPEAr needs cpufreq table
  cpufreq: powerpc: Add cpufreq driver for Freescale e500mc SoCs
  cpufreq: remove unnecessary cpufreq_cpu_{get|put}() calls
  cpufreq: MAINTAINERS: Add git tree path for ARM specific updates
  cpufreq: rename index as driver_data in cpufreq_frequency_table
  cpufreq: Don't create empty /sys/devices/system/cpu/cpufreq directory
  cpufreq: Move get_cpu_idle_time() to cpufreq.c
  cpufreq: governors: Move get_governor_parent_kobj() to cpufreq.c
  cpufreq: Add EXPORT_SYMBOL_GPL for have_governor_per_policy
  ...
2013-06-05  cpufreq: protect 'policy->cpus' from offlining during __gov_queue_work()  (Michael Wang)
Jiri Kosina <jkosina@suse.cz> and Borislav Petkov <bp@alien8.de> reported the warning:

[ 51.616759] ------------[ cut here ]------------
[ 51.621460] WARNING: at arch/x86/kernel/smp.c:123 native_smp_send_reschedule+0x58/0x60()
[ 51.629638] Modules linked in: ext2 vfat fat loop snd_hda_codec_hdmi usbhid snd_hda_codec_realtek coretemp kvm_intel kvm snd_hda_intel snd_hda_codec crc32_pclmul crc32c_intel ghash_clmulni_intel snd_hwdep snd_pcm aesni_intel sb_edac aes_x86_64 ehci_pci snd_page_alloc glue_helper snd_timer xhci_hcd snd iTCO_wdt iTCO_vendor_support ehci_hcd edac_core lpc_ich acpi_cpufreq lrw gf128mul ablk_helper cryptd mperf usbcore usb_common soundcore mfd_core dcdbas evdev pcspkr processor i2c_i801 button microcode
[ 51.675581] CPU: 0 PID: 244 Comm: kworker/1:1 Tainted: G W 3.10.0-rc1+ #10
[ 51.683407] Hardware name: Dell Inc. Precision T3600/0PTTT9, BIOS A08 01/24/2013
[ 51.690901] Workqueue: events od_dbs_timer
[ 51.695069] 0000000000000009 ffff88043a2f5b68 ffffffff8161441c ffff88043a2f5ba8
[ 51.702602] ffffffff8103e540 0000000000000033 0000000000000001 ffff88043d5f8000
[ 51.710136] 00000000ffff0ce1 0000000000000001 ffff88044fc4fc08 ffff88043a2f5bb8
[ 51.717691] Call Trace:
[ 51.720191] [<ffffffff8161441c>] dump_stack+0x19/0x1b
[ 51.725396] [<ffffffff8103e540>] warn_slowpath_common+0x70/0xa0
[ 51.731473] [<ffffffff8103e58a>] warn_slowpath_null+0x1a/0x20
[ 51.737378] [<ffffffff81025628>] native_smp_send_reschedule+0x58/0x60
[ 51.744013] [<ffffffff81072cfd>] wake_up_nohz_cpu+0x2d/0xa0
[ 51.749745] [<ffffffff8104f6bf>] add_timer_on+0x8f/0x110
[ 51.755214] [<ffffffff8105f6fe>] __queue_delayed_work+0x16e/0x1a0
[ 51.761470] [<ffffffff8105f251>] ? try_to_grab_pending+0xd1/0x1a0
[ 51.767724] [<ffffffff8105f78a>] mod_delayed_work_on+0x5a/0xa0
[ 51.773719] [<ffffffff814f6b5d>] gov_queue_work+0x4d/0xc0
[ 51.779271] [<ffffffff814f60cb>] od_dbs_timer+0xcb/0x170
[ 51.784734] [<ffffffff8105e75d>] process_one_work+0x1fd/0x540
[ 51.790634] [<ffffffff8105e6f2>] ? process_one_work+0x192/0x540
[ 51.796711] [<ffffffff8105ef22>] worker_thread+0x122/0x380
[ 51.802350] [<ffffffff8105ee00>] ? rescuer_thread+0x320/0x320
[ 51.808264] [<ffffffff8106634a>] kthread+0xea/0xf0
[ 51.813200] [<ffffffff81066260>] ? flush_kthread_worker+0x150/0x150
[ 51.819644] [<ffffffff81623d5c>] ret_from_fork+0x7c/0xb0
[ 51.918165] nouveau E[ DRM] GPU lockup - switching to software fbcon
[ 51.930505] [<ffffffff81066260>] ? flush_kthread_worker+0x150/0x150
[ 51.936994] ---[ end trace f419538ada83b5c5 ]---

It was caused by policy->cpus changing while __gov_queue_work() was in progress, in other words, a CPU went offline. Use get/put_online_cpus() to prevent the offline from happening while __gov_queue_work() is running.

[rjw: The problem has been present since recent commit 031299b (cpufreq: governors: Avoid unnecessary per cpu timer interrupts)]
References: https://lkml.org/lkml/2013/6/5/88
Reported-by: Borislav Petkov <bp@alien8.de>
Reported-and-tested-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
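The change itself was small: bracket the queuing with the CPU hotplug read lock so that policy->cpus cannot lose a CPU mid-iteration. A sketch of the resulting gov_queue_work() (paraphrased; as the 2013-07-16 entry above notes, this approach was later reverted in favour of fixing the real root cause):

void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
		    unsigned int delay, bool all_cpus)
{
	int i;

	get_online_cpus();	/* keep policy->cpus stable while queuing */
	if (!all_cpus) {
		__gov_queue_work(smp_processor_id(), dbs_data, delay);
	} else {
		for_each_cpu(i, policy->cpus)
			__gov_queue_work(i, dbs_data, delay);
	}
	put_online_cpus();
}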
2013-05-27  cpufreq: Don't create empty /sys/devices/system/cpu/cpufreq directory  (Viresh Kumar)
When we don't have any file in the cpu/cpufreq directory we shouldn't create it. Especially with the introduction of the per-policy governor instances patchset, even governors are moved to the cpu/cpu*/cpufreq/governor-name directory, so this directory is just not required. Let's have it only when required.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-05-27  cpufreq: Move get_cpu_idle_time() to cpufreq.c  (Viresh Kumar)
Governors other than ondemand and conservative can also use get_cpu_idle_time(), and they shouldn't be required to compile cpufreq_governor.c just for it. So, move these independent routines to cpufreq.c instead.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-05-27  cpufreq: governors: Move get_governor_parent_kobj() to cpufreq.c  (Viresh Kumar)
get_governor_parent_kobj() can be used by any governor, generic cpufreq governors or platform specific ones and so must be present in cpufreq.c instead of cpufreq_governor.c. This patch moves it to cpufreq.c. This also adds EXPORT_SYMBOL_GPL(get_governor_parent_kobj) so that modules can use this function too. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
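For reference, the helper is small; roughly the following sketch, based on the description above (cpufreq_global_kobject is assumed here as the fallback parent for the non-per-policy case):

struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy)
{
	if (have_governor_per_policy())
		return &policy->kobj;	/* per-policy governor directory */

	return cpufreq_global_kobject;	/* single system-wide directory */
}
EXPORT_SYMBOL_GPL(get_governor_parent_kobj);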
2013-05-12  cpufreq: governors: Fix CPUFREQ_GOV_POLICY_{INIT|EXIT} notifiers  (Viresh Kumar)
There are two types of INIT/EXIT activities that we need to do for governors:
- Done only once per governor (doesn't depend on how many instances of the governor there are), e.g. cpufreq_register_notifier() for the conservative governor.
- Done per governor instance, e.g. sysfs_{create|remove}_group().

There were some corner cases where the current code wasn't able to handle them separately and so failed for some test cases. We now use two separate variables for keeping track of the above two requirements:
- governor->initialized for the first one
- dbs_data->usage_count for per-governor-instance accounting

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-04-01  cpufreq: governors: Calculate iowait time only when necessary  (Stratos Karafotis)
Currently we always calculate the CPU iowait time and add it to idle time. If we are in ondemand and we use io_is_busy, we re-calculate iowait time and subtract it from idle time. With this patch, iowait time is calculated only when necessary, avoiding the double call to get_cpu_iowait_time_us(). We use a parameter in get_cpu_idle_time() to decide whether the iowait time is added to idle time or not, without the need to keep prev_io_wait.

Signed-off-by: Stratos Karafotis <stratosk@semaphore.gr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
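A sketch of the resulting helper: the io_busy flag decides whether iowait is folded into idle time, so get_cpu_iowait_time_us() is only consulted when actually needed. Names follow the cpufreq core of that period; treat the exact body as illustrative rather than a quote of the patch:

u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy)
{
	u64 idle_time = get_cpu_idle_time_us(cpu, io_busy ? wall : NULL);

	if (idle_time == -1ULL)
		/* NO_HZ disabled: fall back to the jiffies-based estimate */
		return get_cpu_idle_time_jiffy(cpu, wall);
	else if (!io_busy)
		/* count iowait as idle only when io_is_busy is not set */
		idle_time += get_cpu_iowait_time_us(cpu, wall);

	return idle_time;
}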
2013-04-01  cpufreq: governors: Avoid unnecessary per cpu timer interrupts  (Viresh Kumar)
The following patch introduced per-CPU timers/works for the ondemand and conservative governors:

commit 2abfa876f1117b0ab45f191fb1f82c41b1cbc8fe
Author: Rickard Andersson <rickard.andersson@stericsson.com>
Date: Thu Dec 27 14:55:38 2012 +0000

    cpufreq: handle SW coordinated CPUs

This causes additional unnecessary interrupts on all CPUs when the load has recently been evaluated by any other CPU, i.e. when the load has recently been evaluated by CPU x, we don't really need any other CPU to evaluate it again for the next sampling_rate period. Some code is present to avoid that, but we still get timer interrupts on all CPUs. A good way of avoiding this is to modify the delays for all CPUs (policy->cpus) whenever any CPU has evaluated the load. This patch does this change and some related code cleanup.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
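The mechanism, roughly: when one CPU of a policy has just taken a sample, re-arm the delayed work on every CPU of that policy with the full sampling delay, so the others do not wake up only to discover the sample is fresh. A sketch of the call shape inside the governor timer handler (the sample_taken flag is an illustrative name, not from the patch):

	/* after deciding whether this invocation actually evaluated the load */
	if (sample_taken)
		/* push the new delay to every CPU in the policy ... */
		gov_queue_work(dbs_data, policy, delay, true);
	else
		/* ... or only re-arm this CPU's own work */
		gov_queue_work(dbs_data, policy, delay, false);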
2013-04-01  cpufreq: governor: Implement per policy instances of governors  (Viresh Kumar)
Currently, there can't be multiple instances of a single governor type. If we have a multi-package system, where we have multiple instances of struct policy (one per package), we can't have multiple instances of the same governor, i.e. we can't have multiple instances of the ondemand governor for multiple packages. The governor's directory in sysfs is created at /sys/devices/system/cpu/cpufreq/governor-name/, which again reflects that there can be only one instance of a governor type in the system. This is a bottleneck for multi-cluster systems, where we want different packages to use the same governor type, but with different tunables. This patch uses the infrastructure provided by the earlier patch and implements init/exit routines for the ondemand and conservative governors.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
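Schematically, per-policy instances move the governor's tunables directory under each policy (layout as described here and in the 2013-05-27 "Don't create empty ... directory" entry below; cpuN stands for any CPU of the policy):

Before (one instance per governor type):
	/sys/devices/system/cpu/cpufreq/ondemand/<tunables>
After, with have_governor_per_policy():
	/sys/devices/system/cpu/cpuN/cpufreq/ondemand/<tunables>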
2013-02-09  cpufreq: governors: Fix WARN_ON() for multi-policy platforms  (Viresh Kumar)
On multi-policy systems there is a single instance of a governor for both policies (if the same governor is chosen for both policies). With the code updates from the following patches:

8eeed09 cpufreq: governors: Get rid of dbs_data->enable field
b394058 cpufreq: governors: Reset tunables only for cpufreq_unregister_governor()

we are creating/removing the governor's sysfs directory for every call to GOV_START and GOV_STOP. This would fail on a multi-policy system, as there is a per-policy call to START/STOP. This patch reuses the governor->initialized variable to count the total users of the governor.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-02-09  cpufreq: Don't check cpu_online(policy->cpu)  (Viresh Kumar)
policy->cpu or cpus in policy->cpus can't be offline anymore. And so we don't need to check if they are online or not. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-02-02  cpufreq: governors: Reset tunables only for cpufreq_unregister_governor()  (Viresh Kumar)
Currently, whenever governor->governor() is called for the CPUFREQ_GOV_START event we reset a few tunables of the governor, which isn't correct, as this routine is called for every CPU hot-[un]plugging event. We should actually be resetting these only when the governor module is removed and re-installed.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-02-02  cpufreq: governors: Remove code redundancy between governors  (Viresh Kumar)
With the inclusion of the following patches:

9f4eb10 cpufreq: conservative: call dbs_check_cpu only when necessary
772b4b1 cpufreq: ondemand: call dbs_check_cpu only when necessary

code redundancy between the conservative and ondemand governors is introduced again, so get rid of it.

[rjw: Changelog]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-02-02  cpufreq: governors: Get rid of dbs_data->enable field  (Viresh Kumar)
CPUFREQ_GOV_START/STOP are called only once for all policy->cpus, and hence we don't need to adapt the cpufreq_governor_dbs() routine for multiple calls. So, this patch removes the dbs_data->enable field entirely and rearranges the code a bit.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-02-02  cpufreq: governors: fix misuse of cdbs.cpu  (Fabio Baltieri)
Fix governor code to set each CPU's cdbs->cpu to the actual CPU id, and use cur_policy->cpu instead of cdbs->cpu to track the current governor's leader CPU.

Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-02-02  cpufreq: governors: implement generic policy_is_shared  (Fabio Baltieri)
Implement a generic helper function policy_is_shared() to replace the current dbs_sw_coordinated_cpus() at cpufreq level, so that it can be used by code other than cpufreq governors. Suggested-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
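The helper itself is a one-liner; a sketch of what "a policy is shared" means in terms of the policy's CPU mask:

static inline bool policy_is_shared(struct cpufreq_policy *policy)
{
	/* more than one CPU is governed by this policy */
	return cpumask_weight(policy->cpus) > 1;
}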
2013-02-02  cpufreq: governors: clean timer init and exit code  (Fabio Baltieri)
Drop unused arguments from dbs_timer_init() and clean up dbs_timer_exit() and cpufreq_governor_dbs() to remove unnecessary special cases.

Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-02-02  cpufreq: ondemand: call dbs_check_cpu only when necessary  (Fabio Baltieri)
Modify ondemand timer to not resample CPU utilization if recently sampled from another SW coordinated core. Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2013-02-02  cpufreq: handle SW coordinated CPUs  (Rickard Andersson)
This patch fixes a bug that occurred when we had load on a secondary CPU and the primary CPU was sleeping. Only one sampling timer was spawned, and it was spawned as a deferred timer on the primary CPU, so when a secondary CPU had a change in load this was not detected by the cpufreq governor (both ondemand and conservative). This patch makes sure that deferred timers are run on all CPUs in the case of software-controlled CPUs that run at the same frequency.

Signed-off-by: Rickard Andersson <rickard.andersson@stericsson.com>
Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2012-11-24  cpufreq: governors: Fix jiffies/cputime mixup (revisited)  (Rafael J. Wysocki)
This change was made by commit 8636fd2 (cpufreq: fix jiffies/cputime mixup in conservative/ondemand governors) before, but it was then inadvertently reverted by commit 4471a34 (cpufreq: governors: remove redundant code). The changelog of commit 8636fd2 says:

The function get_cpu_idle_time_jiffy in both the conservative and ondemand governors use jiffies_to_usecs to convert a cputime value to usecs which gives the wrong value on architectures where cputime and jiffies use different units. Only matters if NO_HZ is disabled, since otherwise get_cpu_idle_time_us should already return a valid value, and get_cpu_idle_time_jiffy isn't actually called.

Since now we have only one common get_cpu_idle_time_jiffy() used by both governors in question, modify it along the lines of commit 8636fd2 to restore the correct behavior.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
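The restored conversion, roughly: do all of the arithmetic in cputime units and convert with cputime_to_usecs() at the end, instead of feeding a cputime value to jiffies_to_usecs(). The sketch below is along the lines of commit 8636fd2, with field names per the kcpustat interface of that era; treat it as illustrative, not the literal hunk:

static u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall)
{
	u64 cur_wall_time, busy_time, idle_time;

	cur_wall_time = jiffies64_to_cputime64(get_jiffies_64());

	busy_time  = kcpustat_cpu(cpu).cpustat[CPUTIME_USER];
	busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_SYSTEM];
	busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_IRQ];
	busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_SOFTIRQ];
	busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_STEAL];
	busy_time += kcpustat_cpu(cpu).cpustat[CPUTIME_NICE];

	idle_time = cur_wall_time - busy_time;
	if (wall)
		*wall = cputime_to_usecs(cur_wall_time);

	/* previously jiffies_to_usecs(idle_time), which is wrong whenever
	 * cputime units differ from jiffies */
	return cputime_to_usecs(idle_time);
}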
2012-11-15  cpufreq: Fix sparse warnings by updating cputime64_t to u64  (Viresh Kumar)
There were a few sparse warnings due to type mismatches in function arguments. Two types were used, u64 and cputime64_t. Both are actually u64, so use u64 only.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2012-11-15  cpufreq: governors: remove redundant code  (Viresh Kumar)
Initially the ondemand governor was written, and the conservative governor was then written using its code. It reused a lot of code from the ondemand governor, but the code was copied instead of sharing the same routines between the two governors, which increased code redundancy that is difficult to manage. This patch is an attempt to move the common part of both governors to the cpufreq_governor.c file to overcome the above-mentioned issues. This shouldn't change anything from a functionality point of view.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2012-11-15  cpufreq: fix jiffies/cputime mixup in conservative/ondemand governors  (Andreas Schwab)
The function get_cpu_idle_time_jiffy in both the conservative and ondemand governors use jiffies_to_usecs to convert a cputime value to usecs which gives the wrong value on architectures where cputime and jiffies use different units. Only matters if NO_HZ is disabled, since otherwise get_cpu_idle_time_us should already return a valid value, and get_cpu_idle_time_jiffy isn't actually called. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2012-11-15  cpufreq: Move common part from governors to separate file, v2  (viresh kumar)
Multiple cpufreq governors have defined similar get_cpu_idle_time_***() routines. These routines must be moved to some common place, so that all governors can use them. So move them to cpufreq_governor.c, which seems to be a better place for keeping these routines.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>