author		Yvan Roux <yvan.roux@linaro.org>	2014-04-07 14:01:27 +0000
committer	Yvan Roux <yvan.roux@linaro.org>	2014-04-07 14:01:27 +0000
commit		e44ee981122cca3bd321a9e60c6fa8a165454800 (patch)
tree		92a16a669ae4780c108779fbc20fa2fa44332a7d
parent		3b84be736b763024ee89517526862b3c5f3936d7 (diff)
gcc/
2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r205105
	2013-11-20  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64.md: Remove "mode" and "mode2" attributes
	from all insns.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r205050
	2013-11-19  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/arm.md (zero_extend<mode>di2): Add type attribute.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r204852
	2013-11-19  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64.md: Remove v8type from all insns.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r204852
	2013-11-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64-simd.md: Remove simd_type from all patterns.
	* config/aarch64/aarch64.md: Likewise, correct "type" attribute
	where it is incorrect or missing.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r204784
	2013-11-14  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64-cores.def (example-1): Remove.
	(example-2): Likewise.
	* config/aarch64/aarch64-tune.md: Regenerate.
	* config/aarch64/aarch64.md: Do not include "large.md" or "small.md".
	(generic_sched): Remove "large", "small".
	* config/aarch64/large.md: Delete.
	* config/aarch64/small.md: Delete.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r204783
	2013-11-14  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64-cores.def (cortex-a57): Tune for cortexa15.
	* config/aarch64/aarch64-tune.md: Regenerate.
	* config/aarch64/aarch64.md: Include cortex-a15 pipeline model.
	(generic_sched): "no" if we are tuning for cortexa15.
	* config/arm/cortex-a15.md: Include cortex-a15-neon.md by
	relative path.
2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r204782
	2013-11-14  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64-arches.def (armv8-a): Tune for cortex-a53.
	* config/aarch64/aarch64.md: Do not include aarch64-generic.md.
	* config/aarch64/aarch64.c (aarch64_tune): Initialize to cortexa53.
	(all_cores): Use cortexa53 when tuning for "generic".
	(aarch64_override_options): Fix comment.
	* config/aarch64/aarch64.h (TARGET_CPU_DEFAULT): Set to cortexa53.
	* config/aarch64/aarch64-generic.md: Delete.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r204575
	2013-11-08  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/aarch-common.c (search_term): New typedef.
	(shift_rtx_costs): New array.
	(arm_rtx_shift_left_p): New.
	(arm_find_sub_rtx_with_search_term): Likewise.
	(arm_find_sub_rtx_with_code): Likewise.
	(arm_early_load_addr_dep): Add sanity checking.
	(arm_no_early_alu_shift_dep): Likewise.
	(arm_no_early_alu_shift_value_dep): Likewise.
	(arm_no_early_mul_dep): Likewise.
	(arm_no_early_store_addr_dep): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203621
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/neon-schedgen.ml: Remove.
	* config/arm/cortex-a9-neon.md: Remove comment regarding
	neon-schedgen.ml.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203620
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/types.md: Remove old neon types.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203619
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/cortex-a7.md (cortex_a7_neon_type): New.
	(cortex_a7_neon_mul): Update for new types.
	(cortex_a7_neon_mla): Likewise.
	(cortex_a7_neon): Likewise.
2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203618
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/cortex-a15-neon.md (cortex_a15_neon_type): New.
	(cortex_a15_neon_int_1): Remove.
	(cortex_a15_neon_int_2): Likewise.
	(cortex_a15_neon_int_3): Likewise.
	(cortex_a15_neon_int_4): Likewise.
	(cortex_a15_neon_int_5): Likewise.
	(cortex_a15_neon_vqneg_vqabs): Likewise.
	(cortex_a15_neon_vmov): Likewise.
	(cortex_a15_neon_vaba): Likewise.
	(cortex_a15_neon_vaba_qqq): Likewise.
	(cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
	(cortex_a15_neon_mul_qqq_8_16_32_ddd_32): Likewise.
	(cortex_a15_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar): Likewise.
	(cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
	(cortex_a15_neon_mla_qqq_8_16): Likewise.
	(cortex_a15_neon_mla_ddd_32_qqd_16_ddd_32_scalar): Likewise.
	(cortex_a15_neon_mla_qqq_32_qqd_32_scalar): Likewise.
	(cortex_a15_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
	(cortex_a15_neon_mul_qqd_32_scalar): Likewise.
	(cortex_a15_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
	(cortex_a15_neon_shift_1): Likewise.
	(cortex_a15_neon_shift_2): Likewise.
	(cortex_a15_neon_shift_3): Likewise.
	(cortex_a15_neon_vshl_ddd): Likewise.
	(cortex_a15_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
	(cortex_a15_neon_vsra_vrsra): Likewise.
	(cortex_a15_neon_fp_vmla_ddd_scalar): Likewise.
	(cortex_a15_neon_fp_vmla_qqq_scalar): Likewise.
	(cortex_a15_neon_bp_3cycle): Likewise.
	(cortex_a15_neon_ldm_2): Likewise.
	(cortex_a15_neon_stm_2): Likewise.
	(cortex_a15_neon_mcr): Likewise.
	(cortex_a15_neon_mrc): Likewise.
	(cortex_a15_neon_fp_vadd_ddd_vabs_dd): Likewise.
	(cortex_a15_neon_fp_vadd_qqq_vabs_qq): Likewise.
	(cortex_a15_neon_fp_vmul_ddd): Likewise.
	(cortex_a15_neon_fp_vmul_qqd): Likewise.
	(cortex_a15_neon_fp_vmla_ddd): Likewise.
	(cortex_a15_neon_fp_vmla_qqq): Likewise.
	(cortex_a15_neon_fp_vmla_ddd_scalar): Likewise.
	(cortex_a15_neon_fp_vmla_qqq_scalar): Likewise.
	(cortex_a15_neon_fp_vrecps_vrsqrts_ddd): Likewise.
	(cortex_a15_neon_fp_vrecps_vrsqrts_qqq): Likewise.
	(cortex_a15_neon_bp_simple): Likewise.
	(cortex_a15_neon_bp_2cycle): Likewise.
	(cortex_a15_neon_bp_3cycle): Likewise.
	(cortex_a15_neon_vld1_1_2_regs): Likewise.
	(cortex_a15_neon_vld1_3_4_regs): Likewise.
	(cortex_a15_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
	(cortex_a15_neon_vld2_4_regs): Likewise.
	(cortex_a15_neon_vld3_vld4): Likewise.
	(cortex_a15_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
	(cortex_a15_neon_vst1_3_4_regs): Likewise.
	(cortex_a15_neon_vst2_4_regs_vst3_vst4): Rename to...
	(cortex_a15_neon_vst2_4_regs_vst3): ...This, update for new attributes.
	(cortex_a15_neon_vst3_vst4): Rename to...
	(cortex_a15_neon_vst4): ...This, update for new attributes.
	(cortex_a15_neon_vld1_vld2_lane): Update for new attributes.
	(cortex_a15_neon_vld3_vld4_lane): Likewise.
	(cortex_a15_neon_vst1_vst2_lane): Likewise.
	(cortex_a15_neon_vst3_vst4_lane): Likewise.
	(cortex_a15_neon_vld3_vld4_all_lanes): Likewise.
	(cortex_a15_neon_ldm_2): Likewise.
	(cortex_a15_neon_stm_2): Likewise.
	(cortex_a15_neon_mcr): Likewise.
	(cortex_a15_neon_mcr_2_mcrr): Likewise.
	(cortex_a15_neon_mrc): Likewise.
	(cortex_a15_neon_mrrc): Likewise.
	(cortex_a15_neon_abd): New.
	(cortex_a15_neon_abd_q): Likewise.
	(cortex_a15_neon_aba): Likewise.
	(cortex_a15_neon_aba_q): Likewise.
	(cortex_a15_neon_acc): Likewise.
	(cortex_a15_neon_acc_q): Likewise.
	(cortex_a15_neon_arith_basic): Likewise.
	(cortex_a15_neon_arith_complex): Likewise.
	(cortex_a15_neon_multiply): Likewise.
	(cortex_a15_neon_multiply_q): Likewise.
	(cortex_a15_neon_mla): Likewise.
	(cortex_a15_neon_mla_q): Likewise.
	(cortex_a15_neon_sat_mla_long): Likewise.
	(cortex_a15_neon_shift_acc): Likewise.
	(cortex_a15_neon_shift_imm_basic): Likewise.
	(cortex_a15_neon_shift_imm_complex): Likewise.
	(cortex_a15_neon_shift_reg_basic): Likewise.
	(cortex_a15_neon_shift_reg_basic_q): Likewise.
	(cortex_a15_neon_shift_reg_complex): Likewise.
	(cortex_a15_neon_shift_reg_complex_q): Likewise.
	(cortex_a15_neon_fp_negabs): Likewise.
	(cortex_a15_neon_fp_arith): Likewise.
	(cortex_a15_neon_fp_arith_q): Likewise.
	(cortex_a15_neon_fp_cvt_int): Likewise.
	(cortex_a15_neon_fp_cvt_int_q): Likewise.
	(cortex_a15_neon_fp_cvt_16): Likewise.
	(cortex_a15_neon_fp_mul): Likewise.
	(cortex_a15_neon_fp_mul_q): Likewise.
	(cortex_a15_neon_fp_mla): Likewise.
	(cortex_a15_neon_fp_mla_q): Likewise.
	(cortex_a15_neon_fp_recps_rsqrte): Likewise.
	(cortex_a15_neon_fp_recps_rsqrte_q): Likewise.
	(cortex_a15_neon_bitops): Likewise.
	(cortex_a15_neon_bitops_q): Likewise.
	(cortex_a15_neon_from_gp): Likewise.
	(cortex_a15_neon_from_gp_q): Likewise.
	(cortex_a15_neon_tbl3_tbl4): Likewise.
	(cortex_a15_neon_zip_q): Likewise.
	(cortex_a15_neon_to_gp): Likewise.
	(cortex_a15_neon_load_a): Likewise.
	(cortex_a15_neon_load_b): Likewise.
	(cortex_a15_neon_load_c): Likewise.
	(cortex_a15_neon_load_d): Likewise.
	(cortex_a15_neon_load_e): Likewise.
	(cortex_a15_neon_load_f): Likewise.
	(cortex_a15_neon_store_a): Likewise.
	(cortex_a15_neon_store_b): Likewise.
	(cortex_a15_neon_store_c): Likewise.
	(cortex_a15_neon_store_d): Likewise.
	(cortex_a15_neon_store_e): Likewise.
	(cortex_a15_neon_store_f): Likewise.
	(cortex_a15_neon_store_g): Likewise.
	(cortex_a15_neon_store_h): Likewise.
	(cortex_a15_vfp_to_from_gp): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203617
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/cortex-a9-neon.md (cortex_a9_neon_type): New.
	(cortex_a9_neon_vshl_ddd): Remove.
	(cortex_a9_neon_vst3_vst4): Likewise.
	(cortex_a9_neon_vld3_vld4_all_lanes): Likewise.
	(cortex_a9_neon_bit_ops_q): New.
	(cortex_a9_neon_int_1): Use cortex_a9_neon_type.
	(cortex_a9_neon_int_2): Likewise.
	(cortex_a9_neon_int_3): Likewise.
	(cortex_a9_neon_int_4): Likewise.
	(cortex_a9_neon_int_5): Likewise.
	(cortex_a9_neon_vqneg_vqabs): Likewise.
	(cortex_a9_neon_vmov): Likewise.
	(cortex_a9_neon_vaba): Likewise.
	(cortex_a9_neon_vaba_qqq): Likewise.
	(cortex_a9_neon_shift_1): Likewise.
	(cortex_a9_neon_shift_2): Likewise.
	(cortex_a9_neon_shift_3): Likewise.
	(cortex_a9_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
	(cortex_a9_neon_vsra_vrsra): Likewise.
	(cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
	(cortex_a9_neon_mul_qqq_8_16_32_ddd_32): Likewise.
	(cortex_a9_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar): Likewise.
	(cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
	(cortex_a9_neon_mla_qqq_8_16): Likewise.
	(cortex_a9_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long): Likewise.
	(cortex_a9_neon_mla_qqq_32_qqd_32_scalar): Likewise.
	(cortex_a9_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
	(cortex_a9_neon_mul_qqd_32_scalar): Likewise.
	(cortex_a9_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
	(cortex_a9_neon_fp_vadd_ddd_vabs_dd): Likewise.
	(cortex_a9_neon_fp_vadd_qqq_vabs_qq): Likewise.
	(cortex_a9_neon_fp_vsum): Likewise.
	(cortex_a9_neon_fp_vmul_ddd): Likewise.
	(cortex_a9_neon_fp_vmul_qqd): Likewise.
	(cortex_a9_neon_fp_vmla_ddd): Likewise.
	(cortex_a9_neon_fp_vmla_qqq): Likewise.
	(cortex_a9_neon_fp_vmla_ddd_scalar): Likewise.
	(cortex_a9_neon_fp_vmla_qqq_scalar): Likewise.
	(cortex_a9_neon_fp_vrecps_vrsqrts_ddd): Likewise.
	(cortex_a9_neon_fp_vrecps_vrsqrts_qqq): Likewise.
	(cortex_a9_neon_bp_simple): Likewise.
	(cortex_a9_neon_bp_2cycle): Likewise.
	(cortex_a9_neon_bp_3cycle): Likewise.
	(cortex_a9_neon_ldr): Likewise.
	(cortex_a9_neon_str): Likewise.
	(cortex_a9_neon_vld1_1_2_regs): Likewise.
	(cortex_a9_neon_vld1_3_4_regs): Likewise.
	(cortex_a9_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
	(cortex_a9_neon_vld2_4_regs): Likewise.
	(cortex_a9_neon_vld3_vld4): Likewise.
	(cortex_a9_neon_vld1_vld2_lane): Likewise.
	(cortex_a9_neon_vld3_vld4_lane): Likewise.
	(cortex_a9_neon_vld3_vld4_all_lanes): Likewise.
	(cortex_a9_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
	(cortex_a9_neon_vst1_3_4_regs): Likewise.
	(cortex_a9_neon_vst2_4_regs_vst3_vst4): Likewise.
	(cortex_a9_neon_vst1_vst2_lane): Likewise.
	(cortex_a9_neon_vst3_vst4_lane): Likewise.
	(cortex_a9_neon_mcr): Likewise.
	(cortex_a9_neon_mcr_2_mcrr): Likewise.
	(cortex_a9_neon_mrc): Likewise.
	(cortex_a9_neon_mrrc): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203616
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/cortex-a8-neon.md (cortex_a8_neon_type): New.
	(cortex_a8_neon_vshl_ddd): Remove.
	(cortex_a8_neon_vst3_vst4): Likewise.
	(cortex_a8_neon_vld3_vld4_all_lanes): Likewise.
	(cortex_a8_neon_bit_ops_q): New.
	(cortex_a8_neon_int_1): Use cortex_a8_neon_type.
	(cortex_a8_neon_int_2): Likewise.
	(cortex_a8_neon_int_3): Likewise.
	(cortex_a8_neon_int_5): Likewise.
	(cortex_a8_neon_vqneg_vqabs): Likewise.
	(cortex_a8_neon_int_4): Likewise.
	(cortex_a8_neon_vaba): Likewise.
	(cortex_a8_neon_vaba_qqq): Likewise.
	(cortex_a8_neon_shift_1): Likewise.
	(cortex_a8_neon_shift_2): Likewise.
	(cortex_a8_neon_shift_3): Likewise.
	(cortex_a8_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
	(cortex_a8_neon_vsra_vrsra): Likewise.
	(cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
	(cortex_a8_neon_mul_qqq_8_16_32_ddd_32): Likewise.
	(cortex_a8_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar): Likewise.
	(cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
	(cortex_a8_neon_mla_qqq_8_16): Likewise.
	(cortex_a8_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long): Likewise.
	(cortex_a8_neon_mla_qqq_32_qqd_32_scalar): Likewise.
	(cortex_a8_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
	(cortex_a8_neon_mul_qqd_32_scalar): Likewise.
	(cortex_a8_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
	(cortex_a8_neon_fp_vadd_ddd_vabs_dd): Likewise.
	(cortex_a8_neon_fp_vadd_qqq_vabs_qq): Likewise.
	(cortex_a8_neon_fp_vsum): Likewise.
	(cortex_a8_neon_fp_vmul_ddd): Likewise.
	(cortex_a8_neon_fp_vmul_qqd): Likewise.
	(cortex_a8_neon_fp_vmla_ddd): Likewise.
	(cortex_a8_neon_fp_vmla_qqq): Likewise.
	(cortex_a8_neon_fp_vmla_ddd_scalar): Likewise.
	(cortex_a8_neon_fp_vmla_qqq_scalar): Likewise.
	(cortex_a8_neon_fp_vrecps_vrsqrts_ddd): Likewise.
	(cortex_a8_neon_fp_vrecps_vrsqrts_qqq): Likewise.
	(cortex_a8_neon_bp_simple): Likewise.
	(cortex_a8_neon_bp_2cycle): Likewise.
	(cortex_a8_neon_bp_3cycle): Likewise.
	(cortex_a8_neon_ldr): Likewise.
	(cortex_a8_neon_str): Likewise.
	(cortex_a8_neon_vld1_1_2_regs): Likewise.
	(cortex_a8_neon_vld1_3_4_regs): Likewise.
	(cortex_a8_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
	(cortex_a8_neon_vld2_4_regs): Likewise.
	(cortex_a8_neon_vld3_vld4): Likewise.
	(cortex_a8_neon_vld1_vld2_lane): Likewise.
	(cortex_a8_neon_vld3_vld4_lane): Likewise.
	(cortex_a8_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
	(cortex_a8_neon_vst1_3_4_regs): Likewise.
	(cortex_a8_neon_vst2_4_regs_vst3_vst4): Likewise.
	(cortex_a8_neon_vst1_vst2_lane): Likewise.
	(cortex_a8_neon_vst3_vst4_lane): Likewise.
	(cortex_a8_neon_mcr): Likewise.
	(cortex_a8_neon_mcr_2_mcrr): Likewise.
	(cortex_a8_neon_mrc): Likewise.
	(cortex_a8_neon_mrrc): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203614
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/iterators.md (Vetype): Add SF and DF modes.
	(fp): New.
	* config/aarch64/aarch64-simd.md (neon_type): Remove.
	(aarch64_simd_dup<mode>): Add "type" attribute.
	(aarch64_dup_lane<mode>): Likewise.
	(aarch64_dup_lane_<vswap_width_name><mode>): Likewise.
	(*aarch64_simd_mov<mode>): Likewise.
	(aarch64_simd_mov_from_<mode>low): Likewise.
	(aarch64_simd_mov_from_<mode>high): Likewise.
	(orn<mode>3): Likewise.
	(bic<mode>3): Likewise.
	(add<mode>3): Likewise.
	(sub<mode>3): Likewise.
	(mul<mode>3): Likewise.
	(*aarch64_mul3_elt<mode>): Likewise.
	(*aarch64_mul3_elt_<vswap_width_name><mode>): Likewise.
	(*aarch64_mul3_elt_to_128df): Likewise.
	(*aarch64_mul3_elt_to_64v2df): Likewise.
	(neg<mode>2): Likewise.
	(abs<mode>2): Likewise.
	(abd<mode>_3): Likewise.
	(aba<mode>_3): Likewise.
	(fabd<mode>_3): Likewise.
	(*fabd_scalar<mode>3): Likewise.
	(and<mode>3): Likewise.
	(ior<mode>3): Likewise.
	(xor<mode>3): Likewise.
	(one_cmpl<mode>2): Likewise.
	(aarch64_simd_vec_set<mode>): Likewise.
	(aarch64_simd_lshr<mode>): Likewise.
	(aarch64_simd_ashr<mode>): Likewise.
	(aarch64_simd_imm_shl<mode>): Likewise.
	(aarch64_simd_reg_sshl<mode>): Likewise.
	(aarch64_simd_reg_shl<mode>_unsigned): Likewise.
	(aarch64_simd_reg_shl<mode>_signed): Likewise.
	(aarch64_simd_vec_setv2di): Likewise.
	(aarch64_simd_vec_set<mode>): Likewise.
	(aarch64_mla<mode>): Likewise.
	(*aarch64_mla_elt<mode>): Likewise.
	(*aarch64_mla_elt_<vswap_width_name><mode>): Likewise.
	(aarch64_mls<mode>): Likewise.
	(*aarch64_mls_elt<mode>): Likewise.
	(*aarch64_mls_elt_<vswap_width_name><mode>): Likewise.
	(<su><maxmin><mode>3): Likewise.
	(move_lo_quad_<mode>): Likewise.
	(aarch64_simd_move_hi_quad_<mode>): Likewise.
	(aarch64_simd_vec_pack_trunc_<mode>): Likewise.
	(vec_pack_trunc_<mode>): Likewise.
	(aarch64_simd_vec_unpack<su>_lo_<mode>): Likewise.
	(aarch64_simd_vec_unpack<su>_hi_<mode>): Likewise.
	(*aarch64_<su>mlal_lo<mode>): Likewise.
	(*aarch64_<su>mlal_hi<mode>): Likewise.
	(*aarch64_<su>mlsl_lo<mode>): Likewise.
	(*aarch64_<su>mlsl_hi<mode>): Likewise.
	(*aarch64_<su>mlal<mode>): Likewise.
	(*aarch64_<su>mlsl<mode>): Likewise.
	(aarch64_simd_vec_<su>mult_lo_<mode>): Likewise.
	(aarch64_simd_vec_<su>mult_hi_<mode>): Likewise.
	(add<mode>3): Likewise.
	(sub<mode>3): Likewise.
	(mul<mode>3): Likewise.
	(div<mode>3): Likewise.
	(neg<mode>2): Likewise.
	(abs<mode>2): Likewise.
	(fma<mode>4): Likewise.
	(*aarch64_fma4_elt<mode>): Likewise.
	(*aarch64_fma4_elt_<vswap_width_name><mode>): Likewise.
	(*aarch64_fma4_elt_to_128df): Likewise.
	(*aarch64_fma4_elt_to_64v2df): Likewise.
	(fnma<mode>4): Likewise.
	(*aarch64_fnma4_elt<mode>): Likewise.
	(*aarch64_fnma4_elt_<vswap_width_name><mode>): Likewise.
	(*aarch64_fnma4_elt_to_128df): Likewise.
	(*aarch64_fnma4_elt_to_64v2df): Likewise.
	(<frint_pattern><mode>2): Likewise.
	(l<fcvt_pattern><su_optab><VDQF:mode><fcvt_target>2): Likewise.
	(<optab><fcvt_target><VDQF:VDQF:mode>2): Likewise.
	(vec_unpacks_lo_v4sf): Likewise.
	(aarch64_float_extend_lo_v2df): Likewise.
	(vec_unpacks_hi_v4sf): Likewise.
	(aarch64_float_truncate_lo_v2sf): Likewise.
	(aarch64_float_truncate_hi_v4sf): Likewise.
	(aarch64_vmls<mode>): Likewise.
	(<su><maxmin><mode>3): Likewise.
	(<maxmin_uns><mode>3): Likewise.
	(reduc_<sur>plus_<mode>): Likewise.
	(reduc_<sur>plus_v2di): Likewise.
	(reduc_<sur>plus_v2si): Likewise.
	(reduc_<sur>plus_<mode>): Likewise.
	(aarch64_addpv4sf): Likewise.
	(clz<mode>2): Likewise.
	(reduc_<maxmin_uns>_<mode>): Likewise.
	(reduc_<maxmin_uns>_v2di): Likewise.
	(reduc_<maxmin_uns>_v2si): Likewise.
	(reduc_<maxmin_uns>_<mode>): Likewise.
	(reduc_<maxmin_uns>_v4sf): Likewise.
	(aarch64_simd_bsl<mode>_internal): Likewise.
	(*aarch64_get_lane_extend<GPI:mode><VDQQH:mode>): Likewise.
	(*aarch64_get_lane_zero_extendsi<mode>): Likewise.
	(aarch64_get_lane<mode>): Likewise.
	(*aarch64_combinez<mode>): Likewise.
	(aarch64_combine<mode>): Likewise.
	(aarch64_simd_combine<mode>): Likewise.
	(aarch64_<ANY_EXTEND:su><ADDSUB:optab>l<mode>_hi_internal): Likewise.
	(aarch64_<ANY_EXTEND:su><ADDSUB:optab>l<mode>_lo_internal): Likewise.
	(aarch64_<ANY_EXTEND:su><ADDSUB:optab>l<mode>): Likewise.
	(aarch64_<ANY_EXTEND:su><ADDSUB:optab>w<mode>): Likewise.
	(aarch64_<ANY_EXTEND:su><ADDSUB:optab>w2<mode>_internal): Likewise.
	(aarch64_<sur>h<addsub><mode>): Likewise.
	(aarch64_<sur><addsub>hn<mode>): Likewise.
	(aarch64_<sur><addsub>hn2<mode>): Likewise.
	(aarch64_pmul<mode>): Likewise.
	(aarch64_<su_optab><optab><mode>): Likewise.
	(aarch64_<sur>qadd<mode>): Likewise.
	(aarch64_sqmovun<mode>): Likewise.
	(aarch64_<sur>qmovn<mode>): Likewise.
	(aarch64_s<optab><mode>): Likewise.
	(aarch64_sq<r>dmulh<mode>): Likewise.
	(aarch64_sq<r>dmulh_lane<mode>): Likewise.
	(aarch64_sq<r>dmulh_laneq<mode>): Likewise.
	(aarch64_sq<r>dmulh_lane<mode>): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l<mode>): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l_lane<mode>_internal): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l_lane<mode>_internal): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l_n<mode>): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l2<mode>_internal): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l2_lane<mode>_internal): Likewise.
	(aarch64_sqdml<SBINQOPS:as>l2_n<mode>_internal): Likewise.
	(aarch64_sqdmull<mode>): Likewise.
	(aarch64_sqdmull_lane<mode>_internal): Likewise.
	(aarch64_sqdmull_n<mode>): Likewise.
	(aarch64_sqdmull2<mode>_internal): Likewise.
	(aarch64_sqdmull2_lane<mode>_internal): Likewise.
	(aarch64_sqdmull2_n<mode>_internal): Likewise.
	(aarch64_<sur>shl<mode>): Likewise.
	(aarch64_<sur>q<r>shl<mode>): Likewise.
	(aarch64_<sur>shll_n<mode>): Likewise.
	(aarch64_<sur>shll2_n<mode>): Likewise.
	(aarch64_<sur>shr_n<mode>): Likewise.
	(aarch64_<sur>sra_n<mode>): Likewise.
	(aarch64_<sur>s<lr>i_n<mode>): Likewise.
	(aarch64_<sur>qshl<u>_n<mode>): Likewise.
	(aarch64_<sur>q<r>shr<u>n_n<mode>): Likewise.
	(aarch64_cm<optab><mode>): Likewise.
	(aarch64_cm<optab>di): Likewise.
	(aarch64_cm<optab><mode>): Likewise.
	(aarch64_cm<optab>di): Likewise.
	(aarch64_cmtst<mode>): Likewise.
	(aarch64_cmtstdi): Likewise.
	(aarch64_cm<optab><mode>): Likewise.
	(*aarch64_fac<optab><mode>): Likewise.
	(aarch64_addp<mode>): Likewise.
	(aarch64_addpdi): Likewise.
	(sqrt<mode>2): Likewise.
	(vec_load_lanesoi<mode>): Likewise.
	(vec_store_lanesoi<mode>): Likewise.
	(vec_load_lanesci<mode>): Likewise.
	(vec_store_lanesci<mode>): Likewise.
	(vec_load_lanesxi<mode>): Likewise.
	(vec_store_lanesxi<mode>): Likewise.
	(*aarch64_mov<mode>): Likewise.
	(aarch64_ld2<mode>_dreg): Likewise.
	(aarch64_ld2<mode>_dreg): Likewise.
	(aarch64_ld3<mode>_dreg): Likewise.
	(aarch64_ld3<mode>_dreg): Likewise.
	(aarch64_ld4<mode>_dreg): Likewise.
	(aarch64_ld4<mode>_dreg): Likewise.
	(aarch64_tbl1<mode>): Likewise.
	(aarch64_tbl2v16qi): Likewise.
	(aarch64_combinev16qi): Likewise.
	(aarch64_<PERMUTE:perm_insn><PERMUTE:perm_hilo><mode>): Likewise.
	(aarch64_st2<mode>_dreg): Likewise.
	(aarch64_st2<mode>_dreg): Likewise.
	(aarch64_st3<mode>_dreg): Likewise.
	(aarch64_st3<mode>_dreg): Likewise.
	(aarch64_st4<mode>_dreg): Likewise.
	(aarch64_st4<mode>_dreg): Likewise.
	(*aarch64_simd_ld1r<mode>): Likewise.
	(aarch64_frecpe<mode>): Likewise.
	(aarch64_frecp<FRECP:frecp_suffix><mode>): Likewise.
	(aarch64_frecps<mode>): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203613
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/iterators.md (V_elem_ch): New.
	(q): Likewise.
	(VQH_type): Likewise.
	* config/arm/arm.md (is_neon_type): New.
	(conds): Use is_neon_type.
	(anddi3_insn): Update type attribute.
	(xordi3_insn): Likewise.
	(one_cmpldi2): Likewise.
	* config/arm/vfp.md (movhf_vfp_neon): Update type attribute.
	* config/arm/neon.md (neon_mov): Update type attribute.
	(*movmisalign<mode>_neon_store): Likewise.
	(*movmisalign<mode>_neon_load): Likewise.
	(vec_set<mode>_internal): Likewise.
	(vec_set<mode>_internal): Likewise.
	(vec_setv2di_internal): Likewise.
	(vec_extract<mode>): Likewise.
	(vec_extract<mode>): Likewise.
	(vec_extractv2di): Likewise.
	(*add<mode>3_neon): Likewise.
	(adddi3_neon): Likewise.
	(*sub<mode>3_neon): Likewise.
	(subdi3_neon): Likewise.
	(fma<VCVTF:mode>4): Likewise.
	(fma<VCVTF:mode>4_intrinsic): Likewise.
	(*fmsub<VCVTF:mode>4): Likewise.
	(fmsub<VCVTF:mode>4_intrinsic): Likewise.
	(neon_vrint<NEON_VRINT:nvrint_variant><VCVTF:mode>): Likewise.
	(ior<mode>3): Likewise.
	(and<mode>3): Likewise.
	(orn<mode>3_neon): Likewise.
	(orndi3_neon): Likewise.
	(bic<mode>3_neon): Likewise.
	(bicdi3_neon): Likewise.
	(xor<mode>3): Likewise.
	(one_cmpl<mode>2): Likewise.
	(abs<mode>2): Likewise.
	(neg<mode>2): Likewise.
	(negdi2_neon): Likewise.
	(*umin<mode>3_neon): Likewise.
	(*umax<mode>3_neon): Likewise.
	(*smin<mode>3_neon): Likewise.
	(*smax<mode>3_neon): Likewise.
	(vashl<mode>3): Likewise.
	(vashr<mode>3_imm): Likewise.
	(vlshr<mode>3_imm): Likewise.
	(ashl<mode>3_signed): Likewise.
	(ashl<mode>3_unsigned): Likewise.
	(neon_load_count): Likewise.
	(ashldi3_neon_noclobber): Likewise.
	(ashldi3_neon): Likewise.
	(signed_shift_di3_neon): Likewise.
	(unsigned_shift_di3_neon): Likewise.
	(ashrdi3_neon_imm_noclobber): Likewise.
	(lshrdi3_neon_imm_noclobber): Likewise.
	(<shift>di3_neon): Likewise.
	(widen_ssum<mode>3): Likewise.
	(widen_usum<mode>3): Likewise.
	(quad_halves_<code>v4si): Likewise.
	(quad_halves_<code>v4sf): Likewise.
	(quad_halves_<code>v8hi): Likewise.
	(quad_halves_<code>v16qi): Likewise.
	(reduc_splus_v2di): Likewise.
	(neon_vpadd_internal<mode>): Likewise.
	(neon_vpsmin<mode>): Likewise.
	(neon_vpsmax<mode>): Likewise.
	(neon_vpumin<mode>): Likewise.
	(neon_vpumax<mode>): Likewise.
	(*ss_add<mode>_neon): Likewise.
	(*us_add<mode>_neon): Likewise.
	(*ss_sub<mode>_neon): Likewise.
	(*us_sub<mode>_neon): Likewise.
	(neon_vadd<mode>_unspec): Likewise.
	(neon_vaddl<mode>): Likewise.
	(neon_vaddw<mode>): Likewise.
	(neon_vhadd<mode>): Likewise.
	(neon_vqadd<mode>): Likewise.
	(neon_vaddhn<mode>): Likewise.
	(neon_vmul<mode>): Likewise.
	(neon_vfms<VCVTF:mode>): Likewise.
	(neon_vmlal<mode>): Likewise.
	(neon_vmls<mode>): Likewise.
	(neon_vmlsl<mode>): Likewise.
	(neon_vqdmulh<mode>): Likewise.
	(neon_vqdmlal<mode>): Likewise.
	(neon_vqdmlsl<mode>): Likewise.
	(neon_vmull<mode>): Likewise.
	(neon_vqdmull<mode>): Likewise.
	(neon_vsub<mode>_unspec): Likewise.
	(neon_vsubl<mode>): Likewise.
	(neon_vsubw<mode>): Likewise.
	(neon_vqsub<mode>): Likewise.
	(neon_vhsub<mode>): Likewise.
	(neon_vsubhn<mode>): Likewise.
	(neon_vceq<mode>): Likewise.
	(neon_vcge<mode>): Likewise.
	(neon_vcgeu<mode>): Likewise.
	(neon_vcgt<mode>): Likewise.
	(neon_vcgtu<mode>): Likewise.
	(neon_vcle<mode>): Likewise.
	(neon_vclt<mode>): Likewise.
	(neon_vcage<mode>): Likewise.
	(neon_vcagt<mode>): Likewise.
	(neon_vtst<mode>): Likewise.
	(neon_vabd<mode>): Likewise.
	(neon_vabdl<mode>): Likewise.
	(neon_vaba<mode>): Likewise.
	(neon_vabal<mode>): Likewise.
	(neon_vmax<mode>): Likewise.
	(neon_vmin<mode>): Likewise.
	(neon_vpaddl<mode>): Likewise.
	(neon_vpadal<mode>): Likewise.
	(neon_vpmax<mode>): Likewise.
	(neon_vpmin<mode>): Likewise.
	(neon_vrecps<mode>): Likewise.
	(neon_vrsqrts<mode>): Likewise.
	(neon_vqabs<mode>): Likewise.
	(neon_vqneg<mode>): Likewise.
	(neon_vcls<mode>): Likewise.
	(clz<mode>2): Likewise.
	(popcount<mode>2): Likewise.
	(neon_vrecpe<mode>): Likewise.
	(neon_vrsqrte<mode>): Likewise.
	(neon_vget_lane<mode>_sext_internal): Likewise.
	(neon_vget_lane<mode>_zext_internal): Likewise.
	(neon_vdup_n<mode>): Likewise.
	(neon_vdup_n<mode>): Likewise.
	(neon_vdup_nv2di): Likewise.
	(neon_vdup_lane<mode>_interal): Likewise.
	(*neon_vswp<mode>): Likewise.
	(neon_vcombine<mode>): Likewise.
	(float<mode><V_cvtto>2): Likewise.
	(floatuns<mode><V_cvtto>2): Likewise.
	(fix_trunc<mode><V_cvtto>2): Likewise.
	(fixuns_trunc<mode><V_cvtto>2): Likewise.
	(neon_vcvt<mode>): Likewise.
	(neon_vcvt<mode>): Likewise.
	(neon_vcvtv4sfv4hf): Likewise.
	(neon_vcvtv4hfv4sf): Likewise.
	(neon_vcvt_n<mode>): Likewise.
	(neon_vcvt_n<mode>): Likewise.
	(neon_vmovn<mode>): Likewise.
	(neon_vqmovn<mode>): Likewise.
	(neon_vqmovun<mode>): Likewise.
	(neon_vmovl<mode>): Likewise.
	(neon_vmul_lane<mode>): Likewise.
	(neon_vmul_lane<mode>): Likewise.
	(neon_vmull_lane<mode>): Likewise.
	(neon_vqdmull_lane<mode>): Likewise.
	(neon_vqdmulh_lane<mode>): Likewise.
	(neon_vqdmulh_lane<mode>): Likewise.
	(neon_vmla_lane<mode>): Likewise.
	(neon_vmla_lane<mode>): Likewise.
	(neon_vmlal_lane<mode>): Likewise.
	(neon_vqdmlal_lane<mode>): Likewise.
	(neon_vmls_lane<mode>): Likewise.
	(neon_vmls_lane<mode>): Likewise.
	(neon_vmlsl_lane<mode>): Likewise.
	(neon_vqdmlsl_lane<mode>): Likewise.
	(neon_vext<mode>): Likewise.
	(neon_vrev64<mode>): Likewise.
	(neon_vrev32<mode>): Likewise.
	(neon_vrev16<mode>): Likewise.
	(neon_vbsl<mode>_internal): Likewise.
	(neon_vshl<mode>): Likewise.
	(neon_vqshl<mode>): Likewise.
	(neon_vshr_n<mode>): Likewise.
	(neon_vshrn_n<mode>): Likewise.
	(neon_vqshrn_n<mode>): Likewise.
	(neon_vqshrun_n<mode>): Likewise.
	(neon_vshl_n<mode>): Likewise.
	(neon_vqshl_n<mode>): Likewise.
	(neon_vqshlu_n<mode>): Likewise.
	(neon_vshll_n<mode>): Likewise.
	(neon_vsra_n<mode>): Likewise.
	(neon_vsri_n<mode>): Likewise.
	(neon_vsli_n<mode>): Likewise.
	(neon_vtbl1v8qi): Likewise.
	(neon_vtbl2v8qi): Likewise.
	(neon_vtbl3v8qi): Likewise.
	(neon_vtbl4v8qi): Likewise.
	(neon_vtbl1v16qi): Likewise.
	(neon_vtbl2v16qi): Likewise.
	(neon_vcombinev16qi): Likewise.
	(neon_vtbx1v8qi): Likewise.
	(neon_vtbx2v8qi): Likewise.
	(neon_vtbx3v8qi): Likewise.
	(neon_vtbx4v8qi): Likewise.
	(*neon_vtrn<mode>_insn): Likewise.
	(*neon_vzip<mode>_insn): Likewise.
	(*neon_vuzp<mode>_insn): Likewise.
	(neon_vld1<mode>): Likewise.
	(neon_vld1_lane<mode>): Likewise.
	(neon_vld1_lane<mode>): Likewise.
	(neon_vld1_dup<mode>): Likewise.
	(neon_vld1_dup<mode>): Likewise.
	(neon_vld1_dupv2di): Likewise.
	(neon_vst1<mode>): Likewise.
	(neon_vst1_lane<mode>): Likewise.
	(neon_vst1_lane<mode>): Likewise.
	(neon_vld2<mode>): Likewise.
	(neon_vld2<mode>): Likewise.
	(neon_vld2_lane<mode>): Likewise.
	(neon_vld2_lane<mode>): Likewise.
	(neon_vld2_dup<mode>): Likewise.
	(neon_vst2<mode>): Likewise.
	(neon_vst2<mode>): Likewise.
	(neon_vst2_lane<mode>): Likewise.
	(neon_vst2_lane<mode>): Likewise.
	(neon_vld3<mode>): Likewise.
	(neon_vld3qa<mode>): Likewise.
	(neon_vld3qb<mode>): Likewise.
	(neon_vld3_lane<mode>): Likewise.
	(neon_vld3_lane<mode>): Likewise.
	(neon_vld3_dup<mode>): Likewise.
	(neon_vst3<mode>): Likewise.
	(neon_vst3qa<mode>): Likewise.
	(neon_vst3qb<mode>): Likewise.
	(neon_vst3_lane<mode>): Likewise.
	(neon_vst3_lane<mode>): Likewise.
	(neon_vld4<mode>): Likewise.
	(neon_vld4qa<mode>): Likewise.
	(neon_vld4qb<mode>): Likewise.
	(neon_vld4_lane<mode>): Likewise.
	(neon_vld4_lane<mode>): Likewise.
	(neon_vld4_dup<mode>): Likewise.
	(neon_vst4<mode>): Likewise.
	(neon_vst4qa<mode>): Likewise.
	(neon_vst4qb<mode>): Likewise.
	(neon_vst4_lane<mode>): Likewise.
	(neon_vst4_lane<mode>): Likewise.
	(neon_vec_unpack<US>_lo_<mode>): Likewise.
	(neon_vec_unpack<US>_hi_<mode>): Likewise.
	(neon_vec_<US>mult_lo_<mode>): Likewise.
	(neon_vec_<US>mult_hi_<mode>): Likewise.
	(neon_vec_<US>shiftl_<mode>): Likewise.
	(neon_unpack<US>_<mode>): Likewise.
	(neon_vec_<US>mult_<mode>): Likewise.
	(vec_pack_trunc_<mode>): Likewise.
	(neon_vec_pack_trunc_<mode>): Likewise.
	(neon_vabd<mode>_2): Likewise.
	(neon_vabd<mode>_3): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203612
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64.md (movtf_aarch64): Update type attribute.
	(load_pair): Update type attribute.
	(store_pair): Update type attribute.
	* config/aarch64/iterators.md (q): New.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203611
	2013-10-15  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/types.md: Add new types for Neon insns.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r203241
	2013-10-07  Renlin Li  <Renlin.Li@arm.com>

	* config/arm/arm-cores.def (cortex-a53): Use cortex tuning.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r202560
	2013-09-13  Kyrylo Tkachov  <kyrylo.tkachov@arm.com>

	* config/arm/arm.md (arm_cmpsi_insn): Split rI alternative.
	Set type attribute correctly.  Set predicable_short_it attribute.
	(cmpsi_shiftsi): Remove %? from output template.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r202448
	2013-09-10  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64.md (generic_sched): New.
	* config/aarch64/aarch64-generic.md (load): Make conditional
	on generic_sched attribute.
	(nonload): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r202334
	2013-09-06  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64.md (*movtf_aarch64): Use neon_<ls>dm_2
	as type where v8type is fpsimd_<load/store>2.
	(load_pair<mode>): Likewise.
	(store_pair<mode>): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r202333
	2013-09-06  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/types.md (type): Add "mrs" type.
	* config/aarch64/aarch64.md (aarch64_load_tp_hard): Make type "mrs".
	* config/arm/arm.md (load_tp_hard): Make type "mrs".
	* config/arm/cortex-a15.md: Update with new attributes.
	* config/arm/cortex-a5.md: Update with new attributes.
	* config/arm/cortex-a53.md: Update with new attributes.
	* config/arm/cortex-a7.md: Update with new attributes.
	* config/arm/cortex-a8.md: Update with new attributes.
	* config/arm/cortex-a9.md: Update with new attributes.
	* config/arm/cortex-m4.md: Update with new attributes.
	* config/arm/cortex-r4.md: Update with new attributes.
	* config/arm/fa526.md: Update with new attributes.
	* config/arm/fa606te.md: Update with new attributes.
	* config/arm/fa626te.md: Update with new attributes.
	* config/arm/fa726te.md: Update with new attributes.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r202332
	2013-09-06  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/aarch64/aarch64.md (*movti_aarch64): Use "multiple"
	for type where v8type is "move2".
	(*movtf_aarch64): Likewise.
	* config/arm/arm.md (thumb1_movdi_insn): Use "multiple" for type
	where more than one instruction is used for a move.
	(*arm32_movhf): Likewise.
	(*thumb_movdf_insn): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r202331
	2013-09-06  James Greenhalgh  <james.greenhalgh@arm.com>

	* config/arm/types.md (type): Rename fcpys to fmov.
	* config/arm/vfp.md (*arm_movsi_vfp): Rename type fcpys as fmov.
	(*thumb2_movsi_vfp): Likewise.
	(*movhf_vfp_neon): Likewise.
	(*movhf_vfp): Likewise.
	(*movsf_vfp): Likewise.
	(*thumb2_movsf_vfp): Likewise.
	(*movsfcc_vfp): Likewise.
	(*thumb2_movsfcc_vfp): Likewise.
	* config/aarch64/aarch64-simd.md (move_lo_quad_<mode>): Replace
	type mov_reg with fmovs.
* config/aarch64/aarch64.md (*movsi_aarch64): Replace type mov_reg with fmovs. (*movdi_aarch64): Likewise (*movsf_aarch64): Likewise (*movdf_aarch64): Likewise * config/arm/arm.c (cortexa7_older_only): Rename TYPE_FCPYS to TYPE_FMOV. * config/arm/iwmmxt.md (*iwmmxt_movsi_insn): Rename type fcpys as fmov. * config/arm/arm1020e.md: Update with new attributes. * config/arm/cortex-a15-neon.md: Update with new attributes. * config/arm/cortex-a5.md: Update with new attributes. * config/arm/cortex-a53.md: Update with new attributes. * config/arm/cortex-a7.md: Update with new attributes. * config/arm/cortex-a8-neon.md: Update with new attributes. * config/arm/cortex-a9.md: Update with new attributes. * config/arm/cortex-m4-fpu.md: Update with new attributes. * config/arm/cortex-r4f.md: Update with new attributes. * config/arm/marvell-pj4.md: Update with new attributes. * config/arm/vfp11.md: Update with new attributes. 2014-04-07 Michael Collison <michael.collison@linaro.org> Backport from trunk r202330 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com> * config/aarch64/aarch64.md (*madd<mode>): Fix type attribute. (*maddsi_uxtw): Likewise. (*msub<mode>): Likewise. (*msubsi_uxtw): Likewise. (<su_optab>maddsidi4): Likewise. (<su_optab>msubsidi4): Likewise. 2014-04-07 Michael Collison <michael.collison@linaro.org> Backport from trunk r202329 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com> * config/arm/types.md: Split fdiv<sd> as fsqrt<sd>, fdiv<sd>. * config/arm/arm.md (core_cycles): Remove fdiv. * config/arm/vfp.md: (*sqrtsf2_vfp): Update for attribute changes. (*sqrtdf2_vfp): Likewise. * config/aarch64/aarch64.md: (sqrt<mode>2): Update for attribute changes. * config/arm/arm1020e.md: Update with new attributes. * config/arm/cortex-a15-neon.md: Update with new attributes. * config/arm/cortex-a5.md: Update with new attributes. * config/arm/cortex-a53.md: Update with new attributes. * config/arm/cortex-a7.md: Update with new attributes. 
* config/arm/cortex-a8-neon.md: Update with new attributes. * config/arm/cortex-a9.md: Update with new attributes. * config/arm/cortex-m4-fpu.md: Update with new attributes. * config/arm/cortex-r4f.md: Update with new attributes. * config/arm/marvell-pj4.md: Update with new attributes. * config/arm/vfp11.md: Update with new attributes. 2014-04-07 Michael Collison <michael.collison@linaro.org> Backport from trunk r202328 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com> * config/arm/types.md (type): Split f_cvt as f_cvt, f_cvtf2i, f_cvti2f. * config/aarch64/aarch64.md (l<fcvt_pattern><su_optab><GPF:mode><GPI:mode>2): Update with new attributes. (fix_trunc<GPF:mode><GPI:mode>2): Likewise. (fixuns_trunc<GPF:mode><GPI:mode>2): Likewise. (float<GPI:mode><GPF:mode>2): Likewise. * config/arm/vfp.md (*truncsisf2_vfp): Update with new attributes. (*truncsidf2_vfp): Likewise. (fixuns_truncsfsi2): Likewise. (fixuns_truncdfsi2): Likewise. (*floatsisf2_vfp): Likewise. (*floatsidf2_vfp): Likewise. (floatunssisf2): Likewise. (floatunssidf2): Likewise. (*combine_vcvt_f32_<FCVTI32typename>): Likewise. (*combine_vcvt_f64_<FCVTI32typename>): Likewise. * config/arm/arm1020e.md: Update with new attributes. * config/arm/cortex-a15-neon.md: Update with new attributes. * config/arm/cortex-a5.md: Update with new attributes. * config/arm/cortex-a53.md: Update with new attributes. * config/arm/cortex-a7.md: Update with new attributes. * config/arm/cortex-a8-neon.md: Update with new attributes. * config/arm/cortex-a9.md: Update with new attributes. * config/arm/cortex-m4-fpu.md: Update with new attributes. * config/arm/cortex-r4f.md: Update with new attributes. * config/arm/marvell-pj4.md: Update with new attributes. * config/arm/vfp11.md: Update with new attributes. 2014-04-07 Michael Collison <michael.collison@linaro.org> Backport from trunk r202323 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com> * config/arm/types.md: Add "no_insn", "multiple" and "untyped" types. 
* config/arm/arm-fixed.md: Add type attribute to all insn patterns. (add<mode>3): Add type attribute. (add<mode>3): Likewise. (usadd<mode>3): Likewise. (ssadd<mode>3): Likewise. (sub<mode>3): Likewise. (sub<mode>3): Likewise. (ussub<mode>3): Likewise. (sssub<mode>3): Likewise. (ssmulsa3): Likewise. (usmulusa3): Likewise. (arm_usatsihi): Likewise. * config/arm/vfp.md (*movdi_vfp): Add types for all instructions. (*movdi_vfp_cortexa8): Likewise. (*movhf_vfp_neon): Likewise. (*movhf_vfp): Likewise. (*movdf_vfp): Likewise. (*thumb2_movdf_vfp): Likewise. (*thumb2_movdfcc_vfp): Likewise. * config/arm/arm.md: Add type attribute to all insn patterns. (*thumb1_adddi3): Add type attribute. (*arm_adddi3): Likewise. (*adddi_sesidi_di): Likewise. (*adddi_zesidi_di): Likewise. (*thumb1_addsi3): Likewise. (addsi3_compare0): Likewise. (*addsi3_compare0_scratch): Likewise. (*compare_negsi_si): Likewise. (cmpsi2_addneg): Likewise. (*addsi3_carryin_<optab>): Likewise. (*addsi3_carryin_alt2_<optab>): Likewise. (*addsi3_carryin_clobercc_<optab>): Likewise. (*subsi3_carryin): Likewise. (*subsi3_carryin_const): Likewise. (*subsi3_carryin_compare): Likewise. (*subsi3_carryin_compare_const): Likewise. (*arm_subdi3): Likewise. (*thumb_subdi3): Likewise. (*subdi_di_zesidi): Likewise. (*subdi_di_sesidi): Likewise. (*subdi_zesidi_di): Likewise. (*subdi_sesidi_di): Likewise. (*subdi_zesidi_ze): Likewise. (thumb1_subsi3_insn): Likewise. (*arm_subsi3_insn): Likewise. (*anddi3_insn): Likewise. (*anddi_zesidi_di): Likewise. (*anddi_sesdi_di): Likewise. (*ne_zeroextracts): Likewise. (*ne_zeroextracts): Likewise. (*ite_ne_zeroextr): Likewise. (*ite_ne_zeroextr): Likewise. (*anddi_notdi_di): Likewise. (*anddi_notzesidi): Likewise. (*anddi_notsesidi): Likewise. (andsi_notsi_si): Likewise. (thumb1_bicsi3): Likewise. (*iordi3_insn): Likewise. (*iordi_zesidi_di): Likewise. (*iordi_sesidi_di): Likewise. (*thumb1_iorsi3_insn): Likewise. (*xordi3_insn): Likewise. (*xordi_zesidi_di): Likewise. 
(*xordi_sesidi_di): Likewise. (*arm_xorsi3): Likewise. (*andsi_iorsi3_no): Likewise. (*smax_0): Likewise. (*smax_m1): Likewise. (*arm_smax_insn): Likewise. (*smin_0): Likewise. (*arm_smin_insn): Likewise. (*arm_umaxsi3): Likewise. (*arm_uminsi3): Likewise. (*minmax_arithsi): Likewise. (*minmax_arithsi_): Likewise. (*satsi_<SAT:code>): Likewise. (arm_ashldi3_1bit): Likewise. (arm_ashrdi3_1bit): Likewise. (arm_lshrdi3_1bit): Likewise. (*arm_negdi2): Likewise. (*thumb1_negdi2): Likewise. (*arm_negsi2): Likewise. (*thumb1_negsi2): Likewise. (*negdi_extendsid): Likewise. (*negdi_zero_extend): Likewise. (*arm_abssi2): Likewise. (*thumb1_abssi2): Likewise. (*arm_neg_abssi2): Likewise. (*thumb1_neg_abss): Likewise. (one_cmpldi2): Likewise. (extend<mode>di2): Likewise. (*compareqi_eq0): Likewise. (*arm_extendhisi2addsi): Likewise. (*arm_movdi): Likewise. (*thumb1_movdi_insn): Likewise. (*arm_movt): Likewise. (*thumb1_movsi_insn): Likewise. (pic_add_dot_plus_four): Likewise. (pic_add_dot_plus_eight): Likewise. (tls_load_dot_plus_eight): Likewise. (*thumb1_movhi_insn): Likewise. (*thumb1_movsf_insn): Likewise. (*movdf_soft_insn): Likewise. (*thumb_movdf_insn): Likewise. (cbranchsi4_insn): Likewise. (cbranchsi4_scratch): Likewise. (*negated_cbranchsi4): Likewise. (*tbit_cbranch): Likewise. (*tlobits_cbranch): Likewise. (*tstsi3_cbranch): Likewise. (*cbranchne_decr1): Likewise. (*addsi3_cbranch): Likewise. (*addsi3_cbranch_scratch): Likewise. (*arm_cmpdi_insn): Likewise. (*arm_cmpdi_unsig): Likewise. (*arm_cmpdi_zero): Likewise. (*thumb_cmpdi_zero): Likewise. (*deleted_compare): Likewise. (*mov_scc): Likewise. (*mov_negscc): Likewise. (*mov_notscc): Likewise. (*cstoresi_eq0_thumb1_insn): Likewise. (cstoresi_nltu_thumb1): Likewise. (cstoresi_ltu_thu): Likewise. (thumb1_addsi3_addgeu): Likewise. (*arm_jump): Likewise. (*thumb_jump): Likewise. (*check_arch2): Likewise. (arm_casesi_internal): Likewise. (thumb1_casesi_dispatch): Likewise. (*arm_indirect_jump): Likewise. 
(*thumb1_indirect_jump): Likewise. (nop): Likewise. (*and_scc): Likewise. (*ior_scc): Likewise. (*compare_scc): Likewise. (*cond_move): Likewise. (*cond_arith): Likewise. (*cond_sub): Likewise. (*cmp_ite0): Likewise. (*cmp_ite1): Likewise. (*cmp_and): Likewise. (*cmp_ior): Likewise. (*ior_scc_scc): Likewise. (*ior_scc_scc_cmp): Likewise. (*and_scc_scc): Likewise. (*and_scc_scc_cmp): Likewise. (*and_scc_scc_nod): Likewise. (*negscc): Likewise. (movcond_addsi): Likewise. (movcond): Likewise. (*ifcompare_plus_move): Likewise. (*if_plus_move): Likewise. (*ifcompare_move_plus): Likewise. (*if_move_plus): Likewise. (*ifcompare_arith_arith): Likewise. (*if_arith_arith): Likewise. (*ifcompare_arith_move): Likewise. (*if_arith_move): Likewise. (*ifcompare_move_arith): Likewise. (*if_move_arith): Likewise. (*ifcompare_move_not): Likewise. (*if_move_not): Likewise. (*ifcompare_not_move): Likewise. (*if_not_move): Likewise. (*ifcompare_shift_move): Likewise. (*if_shift_move): Likewise. (*ifcompare_move_shift): Likewise. (*if_move_shift): Likewise. (*ifcompare_shift_shift): Likewise. (*ifcompare_not_arith): Likewise. (*ifcompare_arith_not): Likewise. (*if_arith_not): Likewise. (*ifcompare_neg_move): Likewise. (*if_neg_move): Likewise. (*ifcompare_move_neg): Likewise. (*if_move_neg): Likewise. (prologue_thumb1_interwork): Likewise. (*cond_move_not): Likewise. (*sign_extract_onebit): Likewise. (*not_signextract_onebit): Likewise. (stack_tie): Likewise. (align_4): Likewise. (align_8): Likewise. (consttable_end): Likewise. (consttable_1): Likewise. (consttable_2): Likewise. (consttable_4): Likewise. (consttable_8): Likewise. (consttable_16): Likewise. (*thumb1_tablejump): Likewise. (prefetch): Likewise. (force_register_use): Likewise. (thumb_eh_return): Likewise. (load_tp_hard): Likewise. (load_tp_soft): Likewise. (tlscall): Likewise. (*arm_movtas_ze): Likewise. (*arm_rev): Likewise. (*arm_revsh): Likewise. (*arm_rev16): Likewise. * config/arm/thumb2.md (*thumb2_smaxsi3): Likewise. 
(*thumb2_sminsi3): Likewise. (*thumb32_umaxsi3): Likewise. (*thumb2_uminsi3): Likewise. (*thumb2_negdi2): Likewise. (*thumb2_abssi2): Likewise. (*thumb2_neg_abss): Likewise. (*thumb2_movsi_insn): Likewise. (tls_load_dot_plus_four): Likewise. (*thumb2_movhi_insn): Likewise. (*thumb2_mov_scc): Likewise. (*thumb2_mov_negs): Likewise. (*thumb2_mov_negs): Likewise. (*thumb2_mov_nots): Likewise. (*thumb2_mov_nots): Likewise. (*thumb2_movsicc_): Likewise. (*thumb2_movsfcc_soft_insn): Likewise. (*thumb2_indirect_jump): Likewise. (*thumb2_and_scc): Likewise. (*thumb2_ior_scc): Likewise. (*thumb2_ior_scc_strict_it): Likewise. (*thumb2_cond_move): Likewise. (*thumb2_cond_arith): Likewise. (*thumb2_cond_ari): Likewise. (*thumb2_cond_sub): Likewise. (*thumb2_negscc): Likewise. (*thumb2_movcond): Likewise. (thumb2_casesi_internal): Likewise. (thumb2_casesi_internal_pic): Likewise. (*thumb2_alusi3_short): Likewise. (*thumb2_mov<mode>_shortim): Likewise. (*thumb2_addsi_short): Likewise. (*thumb2_subsi_short): Likewise. (thumb2_addsi3_compare0): Likewise. (*thumb2_cbz): Likewise. (*thumb2_cbnz): Likewise. (*thumb2_one_cmplsi2_short): Likewise. (*thumb2_negsi2_short): Likewise. (*orsi_notsi_si): Likewise. * config/arm/arm1020e.md: Update with new attributes. * config/arm/arm1026ejs.md: Update with new attributes. * config/arm/arm1136jfs.md: Update with new attributes. * config/arm/arm926ejs.md: Update with new attributes. * config/arm/cortex-a15.md: Update with new attributes. * config/arm/cortex-a5.md: Update with new attributes. * config/arm/cortex-a53.md: Update with new attributes. * config/arm/cortex-a7.md: Update with new attributes. * config/arm/cortex-a8.md: Update with new attributes. * config/arm/cortex-a9.md: Update with new attributes. * config/arm/cortex-m4.md: Update with new attributes. * config/arm/cortex-r4.md: Update with new attributes. * config/arm/fa526.md: Update with new attributes. * config/arm/fa606te.md: Update with new attributes. 
* config/arm/fa626te.md: Update with new attributes. * config/arm/fa726te.md: Update with new attributes. 2014-04-07 Michael Collison <michael.collison@linaro.org> Backport from trunk r202292 2013-09-05 James Greenhalgh <james.greenhalgh@arm.com> * config/aarch64/aarch64.md (type): Remove frecpe, frecps, frecpx. (aarch64_frecp<FRECP:frecp_suffix><mode>): Move to aarch64-simd.md, fix to be a TARGET_SIMD instruction. (aarch64_frecps): Remove. * config/aarch64/aarch64-simd.md (aarch64_frecp<FRECP:frecp_suffix><mode>): New, moved from aarch64.md. (aarch64_frecps<mode>): Handle all float/vector of float modes. 2014-04-07 Michael Collison <michael.collison@linaro.org> Backport from trunk r202291 2013-09-05 James Greenhalgh <james.greenhalgh@arm.com> Sofiane Naci <sofiane.naci@arm.com> * config/arm/types.md (define_attr "type"): Expand "arlo_imm" into "adr", "alu_imm", "alus_imm", "logic_imm", "logics_imm". Expand "arlo_reg" into "adc_reg", "adc_imm", "adcs_reg", "adcs_imm", "alu_ext", "alu_reg", "alus_ext", "alus_reg", "bfm", "csel", "logic_reg", "logics_reg", "rev". Expand "arlo_shift" into "alu_shift_imm", "alus_shift_imm", "logic_shift_imm", "logics_shift_imm". Expand "arlo_shift_reg" into "alu_shift_reg", "alus_shift_reg", "logic_shift_reg", "logics_shift_reg". Expand "clz" into "clz", "rbit". Rename "shift" to "shift_imm". * config/arm/arm.md (define_attr "core_cycles"): Update for attribute changes. Update for attribute changes all occurrences of arlo_* and shift* types. * config/arm/arm-fixed.md: Update for attribute changes all occurrences of arlo_* types. * config/arm/thumb2.md: Update for attribute changes all occurrences of arlo_* types. * config/arm/arm.c (xscale_sched_adjust_cost): Update for attribute changes. (cortexa7_older_only): Likewise. (cortexa7_younger): Likewise. * config/arm/arm1020e.md (1020alu_op): Update for attribute changes. (1020alu_shift_op): Likewise. (1020alu_shift_reg_op): Likewise. * config/arm/arm1026ejs.md (alu_op): Update for attribute changes. 
(alu_shift_op): Likewise. (alu_shift_reg_op): Likewise. * config/arm/arm1136jfs.md (11_alu_op): Update for attribute changes. (11_alu_shift_op): Likewise. (11_alu_shift_reg_op): Likewise. * config/arm/arm926ejs.md (9_alu_op): Update for attribute changes. (9_alu_shift_reg_op): Likewise. * config/arm/cortex-a15.md (cortex_a15_alu): Update for attribute changes. (cortex_a15_alu_shift): Likewise. (cortex_a15_alu_shift_reg): Likewise. * config/arm/cortex-a5.md (cortex_a5_alu): Update for attribute changes. (cortex_a5_alu_shift): Likewise. * config/arm/cortex-a53.md (cortex_a53_alu): Update for attribute changes. (cortex_a53_alu_shift): Likewise. * config/arm/cortex-a7.md (cortex_a7_alu_imm): Update for attribute changes. (cortex_a7_alu_reg): Likewise. (cortex_a7_alu_shift): Likewise. * config/arm/cortex-a8.md (cortex_a8_alu): Update for attribute changes. (cortex_a8_alu_shift): Likewise. (cortex_a8_alu_shift_reg): Likewise. * config/arm/cortex-a9.md (cortex_a9_dp): Update for attribute changes. (cortex_a9_dp_shift): Likewise. * config/arm/cortex-m4.md (cortex_m4_alu): Update for attribute changes. * config/arm/cortex-r4.md (cortex_r4_alu): Update for attribute changes. (cortex_r4_mov): Likewise. (cortex_r4_alu_shift_reg): Likewise. * config/arm/fa526.md (526_alu_op): Update for attribute changes. (526_alu_shift_op): Likewise. * config/arm/fa606te.md (606te_alu_op): Update for attribute changes. * config/arm/fa626te.md (626te_alu_op): Update for attribute changes. (626te_alu_shift_op): Likewise. * config/arm/fa726te.md (726te_alu_op): Update for attribute changes. (726te_alu_shift_op): Likewise. (726te_alu_shift_reg_op): Likewise. * config/arm/fmp626.md (mp626_alu_op): Update for attribute changes. (mp626_alu_shift_op): Likewise. * config/arm/marvell-pj4.md (pj4_alu): Update for attribute changes. (pj4_alu_conds): Likewise. (pj4_shift): Likewise. (pj4_shift_conds): Likewise. (pj4_alu_shift): Likewise. (pj4_alu_shift_conds): Likewise. 
* config/aarch64/aarch64.md: Update for attribute change all occurrences of arlo_* and shift* types. 2014-04-07 Michael Collison <michael.collison@linaro.org> Backport from trunk r202272 2013-08-02 James Greenhalgh <james.greenhalgh@arm.com> Sofiane Naci <sofiane.naci@arm.com> * config/aarch64/aarch64.md (*movti_aarch64): Rename r_2_f and f_2_r. (*movsf_aarch64): Likewise. (*movdf_aarch64): Likewise. (*movtf_aarch64): Likewise. (aarch64_movdi_<mode>low): Likewise. (aarch64_movdi_<mode>high): Likewise. (aarch64_mov<mode>high_di): Likewise. (aarch64_mov<mode>low_di): Likewise. (aarch64_movtilow_tilow): Likewise. * config/arm/arm.md (attribute "neon_type"): Delete. Move attribute values to config/arm/types.md. (attribute "conds"): Update for attribute change. (anddi3_insn): Likewise. (iordi3_insn): Likewise. (xordi3_insn): Likewise. (one_cmpldi2): Likewise. * config/arm/types.md (type): Add Neon types. * config/arm/neon.md (neon_mov<mode>): Remove "neon_type" attribute, use "type" attribute. (movmisalign<mode>_neon_store): Likewise. (movmisalign<mode>_neon_load): Likewise. (vec_set<mode>_internal): Likewise. (vec_setv2di_internal): Likewise. (vec_extract<mode>): Likewise. (vec_extractv2di): Likewise. (add<mode>3_neon): Likewise. (adddi3_neon): Likewise. (sub<mode>3_neon): Likewise. (subdi3_neon): Likewise. (mul<mode>3_neon): Likewise. (mul<mode>3add<mode>_neon): Likewise. (mul<mode>3neg<mode>add<mode>_neon): Likewise. (fma<VCVTF:mode>4): Likewise. (fma<VCVTF:mode>4_intrinsic): Likewise. (fmsub<VCVTF:mode>4): Likewise. (fmsub<VCVTF:mode>4_intrinsic): Likewise. (neon_vrint<NEON_VRINT:nvrint_variant><VCVTF:mode>): Likewise. (ior<mode>3): Likewise. (and<mode>3): Likewise. (anddi3_neon): Likewise. (orn<mode>3_neon): Likewise. (orndi3_neon): Likewise. (bic<mode>3_neon): Likewise. (bicdi3_neon): Likewise. (xor<mode>3): Likewise. (one_cmpl<mode>2): Likewise. (abs<mode>2): Likewise. (neg<mode>2): Likewise. (umin<mode>3_neon): Likewise. (umax<mode>3_neon): Likewise. 
(smin<mode>3_neon): Likewise. (smax<mode>3_neon): Likewise. (vashl<mode>3): Likewise. (vashr<mode>3_imm): Likewise. (vlshr<mode>3_imm): Likewise. (ashl<mode>3_signed): Likewise. (ashl<mode>3_unsigned): Likewise. (neon_load_count): Likewise. (ashldi3_neon_noclobber): Likewise. (signed_shift_di3_neon): Likewise. (unsigned_shift_di3_neon): Likewise. (ashrdi3_neon_imm_noclobber): Likewise. (lshrdi3_neon_imm_noclobber): Likewise. (widen_ssum<mode>3): Likewise. (widen_usum<mode>3): Likewise. (quad_halves_<code>v4si): Likewise. (quad_halves_<code>v4sf): Likewise. (quad_halves_<code>v8hi): Likewise. (quad_halves_<code>v16qi): Likewise. (reduc_splus_v2di): Likewise. (neon_vpadd_internal<mode>): Likewise. (neon_vpsmin<mode>): Likewise. (neon_vpsmax<mode>): Likewise. (neon_vpumin<mode>): Likewise. (neon_vpumax<mode>): Likewise. (ss_add<mode>_neon): Likewise. (us_add<mode>_neon): Likewise. (ss_sub<mode>_neon): Likewise. (us_sub<mode>_neon): Likewise. (neon_vadd<mode>_unspec): Likewise. (neon_vaddl<mode>): Likewise. (neon_vaddw<mode>): Likewise. (neon_vhadd<mode>): Likewise. (neon_vqadd<mode>): Likewise. (neon_vaddhn<mode>): Likewise. (neon_vmul<mode>): Likewise. (neon_vmla<mode>): Likewise. (neon_vmlal<mode>): Likewise. (neon_vmls<mode>): Likewise. (neon_vmlsl<mode>): Likewise. (neon_vqdmulh<mode>): Likewise. (neon_vqdmlal<mode>): Likewise. (neon_vqdmlsl<mode>): Likewise. (neon_vmull<mode>): Likewise. (neon_vqdmull<mode>): Likewise. (neon_vsub<mode>_unspec): Likewise. (neon_vsubl<mode>): Likewise. (neon_vsubw<mode>): Likewise. (neon_vqsub<mode>): Likewise. (neon_vhsub<mode>): Likewise. (neon_vsubhn<mode>): Likewise. (neon_vceq<mode>): Likewise. (neon_vcge<mode>): Likewise. (neon_vcgeu<mode>): Likewise. (neon_vcgt<mode>): Likewise. (neon_vcgtu<mode>): Likewise. (neon_vcle<mode>): Likewise. (neon_vclt<mode>): Likewise. (neon_vcage<mode>): Likewise. (neon_vcagt<mode>): Likewise. (neon_vtst<mode>): Likewise. (neon_vabd<mode>): Likewise. (neon_vabdl<mode>): Likewise. 
(neon_vaba<mode>): Likewise. (neon_vabal<mode>): Likewise. (neon_vmax<mode>): Likewise. (neon_vmin<mode>): Likewise. (neon_vpaddl<mode>): Likewise. (neon_vpadal<mode>): Likewise. (neon_vpmax<mode>): Likewise. (neon_vpmin<mode>): Likewise. (neon_vrecps<mode>): Likewise. (neon_vrsqrts<mode>): Likewise. (neon_vqabs<mode>): Likewise. (neon_vqneg<mode>): Likewise. (neon_vcls<mode>): Likewise. (clz<mode>2): Likewise. (popcount<mode>2): Likewise. (neon_vrecpe): Likewise. (neon_vrsqrte): Likewise. (neon_vget_lane<mode>_sext_internal): Likewise. (neon_vget_lane<mode>_zext_internal): Likewise. (neon_vdup_n<mode>): Likewise. (neon_vdup_nv2di): Likewise. (neon_vdup_lane<mode>_internal): Likewise. (neon_vswp<mode>): Likewise. (float<mode><V_cvtto>2): Likewise. (floatuns<mode><V_cvtto>2): Likewise. (fix_trunc<mode><V_cvtto>2): Likewise. (fixuns_trunc<mode><V_cvtto>2): Likewise. (neon_vcvt<mode>): Likewise. (neon_vcvtv4sfv4hf): Likewise. (neon_vcvtv4hfv4sf): Likewise. (neon_vcvt_n<mode>): Likewise. (neon_vmovn<mode>): Likewise. (neon_vqmovn<mode>): Likewise. (neon_vqmovun<mode>): Likewise. (neon_vmovl<mode>): Likewise. (neon_vmul_lane<mode>): Likewise. (neon_vmull_lane<mode>): Likewise. (neon_vqdmull_lane<mode>): Likewise. (neon_vqdmulh_lane<mode>): Likewise. (neon_vmla_lane<mode>): Likewise. (neon_vmlal_lane<mode>): Likewise. (neon_vqdmlal_lane<mode>): Likewise. (neon_vmls_lane<mode>): Likewise. (neon_vmlsl_lane<mode>): Likewise. (neon_vqdmlsl_lane<mode>): Likewise. (neon_vext<mode>): Likewise. (neon_vrev64<mode>): Likewise. (neon_vrev32<mode>): Likewise. (neon_vrev16<mode>): Likewise. (neon_vbsl<mode>_internal): Likewise. (neon_vshl<mode>): Likewise. (neon_vqshl<mode>): Likewise. (neon_vshr_n<mode>): Likewise. (neon_vshrn_n<mode>): Likewise. (neon_vqshrn_n<mode>): Likewise. (neon_vqshrun_n<mode>): Likewise. (neon_vshl_n<mode>): Likewise. (neon_vqshl_n<mode>): Likewise. (neon_vqshlu_n<mode>): Likewise. (neon_vshll_n<mode>): Likewise. (neon_vsra_n<mode>): Likewise. 
(neon_vsri_n<mode>): Likewise. (neon_vsli_n<mode>): Likewise. (neon_vtbl1v8qi): Likewise. (neon_vtbl2v8qi): Likewise. (neon_vtbl3v8qi): Likewise. (neon_vtbl4v8qi): Likewise. (neon_vtbx1v8qi): Likewise. (neon_vtbx2v8qi): Likewise. (neon_vtbx3v8qi): Likewise. (neon_vtbx4v8qi): Likewise. (neon_vtrn<mode>_internal): Likewise. (neon_vzip<mode>_internal): Likewise. (neon_vuzp<mode>_internal): Likewise. (neon_vld1<mode>): Likewise. (neon_vld1_lane<mode>): Likewise. (neon_vld1_dup<mode>): Likewise. (neon_vld1_dupv2di): Likewise. (neon_vst1<mode>): Likewise. (neon_vst1_lane<mode>): Likewise. (neon_vld2<mode>): Likewise. (neon_vld2_lane<mode>): Likewise. (neon_vld2_dup<mode>): Likewise. (neon_vst2<mode>): Likewise. (neon_vst2_lane<mode>): Likewise. (neon_vld3<mode>): Likewise. (neon_vld3qa<mode>): Likewise. (neon_vld3qb<mode>): Likewise. (neon_vld3_lane<mode>): Likewise. (neon_vld3_dup<mode>): Likewise. (neon_vst3<mode>): Likewise. (neon_vst3qa<mode>): Likewise. (neon_vst3qb<mode>): Likewise. (neon_vst3_lane<mode>): Likewise. (neon_vld4<mode>): Likewise. (neon_vld4qa<mode>): Likewise. (neon_vld4qb<mode>): Likewise. (neon_vld4_lane<mode>): Likewise. (neon_vld4_dup<mode>): Likewise. (neon_vst4<mode>): Likewise. (neon_vst4qa<mode>): Likewise. (neon_vst4qb<mode>): Likewise. (neon_vst4_lane<mode>): Likewise. (neon_vec_unpack<US>_lo_<mode>): Likewise. (neon_vec_unpack<US>_hi_<mode>): Likewise. (neon_vec_<US>mult_lo_<mode>): Likewise. (neon_vec_<US>mult_hi_<mode>): Likewise. (neon_vec_<US>shiftl_<mode>): Likewise. (neon_unpack<US>_<mode>): Likewise. (neon_vec_<US>mult_<mode>): Likewise. (vec_pack_trunc_<mode>): Likewise. (neon_vec_pack_trunc_<mode>): Likewise. (neon_vabd<mode>_2): Likewise. (neon_vabd<mode>_3): Likewise. * config/arm/vfp.md (arm_movsi_vfp): Update for attribute changes. (thumb2_movsi_vfp): Likewise. (movdi_vfp): Likewise. (movdi_vfp_cortexa8): Likewise. (movhf_vfp_neon): Likewise. (movhf_vfp): Likewise. (movsf_vfp): Likewise. (thumb2_movsf_vfp): Likewise. 
(movdf_vfp): Likewise. (thumb2_movdf_vfp): Likewise. (movsfcc_vfp): Likewise. (thumb2_movsfcc_vfp): Likewise. (movdfcc_vfp): Likewise. (thumb2_movdfcc_vfp): Likewise. * config/arm/arm.c (cortexa7_older_only): Update for attribute change. * config/arm/arm1020e.md (v10_c2v): Update for attribute change. (v10_v2c): Likewise. * config/arm/cortex-a15-neon.md (cortex_a15_neon_int_1): Update for attribute change. (cortex_a15_neon_int_2): Likewise. (cortex_a15_neon_int_3): Likewise. (cortex_a15_neon_int_4): Likewise. (cortex_a15_neon_int_5): Likewise. (cortex_a15_neon_vqneg_vqabs): Likewise. (cortex_a15_neon_vmov): Likewise. (cortex_a15_neon_vaba): Likewise. (cortex_a15_neon_vaba_qqq): Likewise. (cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise. (cortex_a15_neon_mul_qqq_8_16_32_ddd_32): Likewise. (cortex_a15_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar): Likewise. (cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise. (cortex_a15_neon_mla_qqq_8_16): Likewise. (cortex_a15_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long): Likewise. (cortex_a15_neon_mla_qqq_32_qqd_32_scalar): Likewise. (cortex_a15_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise. (cortex_a15_neon_mul_qqd_32_scalar): Likewise. (cortex_a15_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise. (cortex_a15_neon_shift_1): Likewise. (cortex_a15_neon_shift_2): Likewise. (cortex_a15_neon_shift_3): Likewise. (cortex_a15_neon_vshl_ddd): Likewise. (cortex_a15_neon_vqshl_vrshl_vqrshl_qqq): Likewise. (cortex_a15_neon_vsra_vrsra): Likewise. (cortex_a15_neon_fp_vadd_ddd_vabs_dd): Likewise. (cortex_a15_neon_fp_vadd_qqq_vabs_qq): Likewise. (cortex_a15_neon_fp_vmul_ddd): Likewise. (cortex_a15_neon_fp_vmul_qqd): Likewise. (cortex_a15_neon_fp_vmla_ddd): Likewise. (cortex_a15_neon_fp_vmla_qqq): Likewise. (cortex_a15_neon_fp_vmla_ddd_scalar): Likewise. (cortex_a15_neon_fp_vmla_qqq_scalar): Likewise. (cortex_a15_neon_fp_vrecps_vrsqrts_ddd): Likewise. 
(cortex_a15_neon_fp_vrecps_vrsqrts_qqq): Likewise. (cortex_a15_neon_bp_simple): Likewise. (cortex_a15_neon_bp_2cycle): Likewise. (cortex_a15_neon_bp_3cycle): Likewise. (cortex_a15_neon_vld1_1_2_regs): Likewise. (cortex_a15_neon_vld1_3_4_regs): Likewise. (cortex_a15_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise. (cortex_a15_neon_vld2_4_regs): Likewise. (cortex_a15_neon_vld3_vld4): Likewise. (cortex_a15_neon_vst1_1_2_regs_vst2_2_regs): Likewise. (cortex_a15_neon_vst1_3_4_regs): Likewise. (cortex_a15_neon_vst2_4_regs_vst3_vst4): Likewise. (cortex_a15_neon_vst3_vst4): Likewise. (cortex_a15_neon_vld1_vld2_lane): Likewise. (cortex_a15_neon_vld3_vld4_lane): Likewise. (cortex_a15_neon_vst1_vst2_lane): Likewise. (cortex_a15_neon_vst3_vst4_lane): Likewise. (cortex_a15_neon_vld3_vld4_all_lanes): Likewise. (cortex_a15_neon_ldm_2): Likewise. (cortex_a15_neon_stm_2): Likewise. (cortex_a15_neon_mcr): Likewise. (cortex_a15_neon_mcr_2_mcrr): Likewise. (cortex_a15_neon_mrc): Likewise. (cortex_a15_neon_mrrc): Likewise. * config/arm/cortex-a15.md (cortex_a15_alu): Update for attribute change. (cortex_a15_alu_shift): Likewise. (cortex_a15_alu_shift_reg): Likewise. (cortex_a15_mult32): Likewise. (cortex_a15_mult64): Likewise. (cortex_a15_block): Likewise. (cortex_a15_branch): Likewise. (cortex_a15_load1): Likewise. (cortex_a15_load3): Likewise. (cortex_a15_store1): Likewise. (cortex_a15_store3): Likewise. (cortex_a15_call): Likewise. * config/arm/cortex-a5.md (cortex_a5_r2f): Update for attribute change. (cortex_a5_f2r): Likewise. * config/arm/cortex-a53.md (cortex_a53_r2f): Update for attribute change. (cortex_a53_f2r): Likewise. * config/arm/cortex-a7.md (cortex_a7_branch): Update for attribute change. (cortex_a7_call): Likewise. (cortex_a7_alu_imm): Likewise. (cortex_a7_alu_reg): Likewise. (cortex_a7_alu_shift): Likewise. (cortex_a7_mul): Likewise. (cortex_a7_load1): Likewise. (cortex_a7_store1): Likewise. (cortex_a7_load2): Likewise. (cortex_a7_store2): Likewise. 
(cortex_a7_load3): Likewise. (cortex_a7_store3): Likewise. (cortex_a7_load4): Likewise. (cortex_a7_store4): Likewise. (cortex_a7_fpalu): Likewise. (cortex_a7_fconst): Likewise. (cortex_a7_fpmuls): Likewise. (cortex_a7_neon_mul): Likewise. (cortex_a7_fpmacs): Likewise. (cortex_a7_neon_mla): Likewise. (cortex_a7_fpmuld): Likewise. (cortex_a7_fpmacd): Likewise. (cortex_a7_fpfmad): Likewise. (cortex_a7_fdivs): Likewise. (cortex_a7_fdivd): Likewise. (cortex_a7_r2f): Likewise. (cortex_a7_f2r): Likewise. (cortex_a7_f_flags): Likewise. (cortex_a7_f_loads): Likewise. (cortex_a7_f_loadd): Likewise. (cortex_a7_f_stores): Likewise. (cortex_a7_f_stored): Likewise. (cortex_a7_neon): Likewise. * config/arm/cortex-a8-neon.md (cortex_a8_neon_mrc): Update for attribute change. (cortex_a8_neon_mrrc): Likewise. (cortex_a8_neon_int_1): Likewise. (cortex_a8_neon_int_2): Likewise. (cortex_a8_neon_int_3): Likewise. (cortex_a8_neon_int_4): Likewise. (cortex_a8_neon_int_5): Likewise. (cortex_a8_neon_vqneg_vqabs): Likewise. (cortex_a8_neon_vmov): Likewise. (cortex_a8_neon_vaba): Likewise. (cortex_a8_neon_vaba_qqq): Likewise. (cortex_a8_neon_vsma): Likewise. (cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise. (cortex_a8_neon_mul_qqq_8_16_32_ddd_32): Likewise. (cortex_a8_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar): Likewise. (cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise. (cortex_a8_neon_mla_qqq_8_16): Likewise. (cortex_a8_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long): Likewise. (cortex_a8_neon_mla_qqq_32_qqd_32_scalar): Likewise. (cortex_a8_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise. (cortex_a8_neon_mul_qqd_32_scalar): Likewise. (cortex_a8_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise. (cortex_a8_neon_shift_1): Likewise. (cortex_a8_neon_shift_2): Likewise. (cortex_a8_neon_shift_3): Likewise. (cortex_a8_neon_vshl_ddd): Likewise. (cortex_a8_neon_vqshl_vrshl_vqrshl_qqq): Likewise. 
	(cortex_a8_neon_vsra_vrsra): Likewise.
	(cortex_a8_neon_fp_vadd_ddd_vabs_dd): Likewise.
	(cortex_a8_neon_fp_vadd_qqq_vabs_qq): Likewise.
	(cortex_a8_neon_fp_vsum): Likewise.
	(cortex_a8_neon_fp_vmul_ddd): Likewise.
	(cortex_a8_neon_fp_vmul_qqd): Likewise.
	(cortex_a8_neon_fp_vmla_ddd): Likewise.
	(cortex_a8_neon_fp_vmla_qqq): Likewise.
	(cortex_a8_neon_fp_vmla_ddd_scalar): Likewise.
	(cortex_a8_neon_fp_vmla_qqq_scalar): Likewise.
	(cortex_a8_neon_fp_vrecps_vrsqrts_ddd): Likewise.
	(cortex_a8_neon_fp_vrecps_vrsqrts_qqq): Likewise.
	(cortex_a8_neon_bp_simple): Likewise.
	(cortex_a8_neon_bp_2cycle): Likewise.
	(cortex_a8_neon_bp_3cycle): Likewise.
	(cortex_a8_neon_ldr): Likewise.
	(cortex_a8_neon_str): Likewise.
	(cortex_a8_neon_vld1_1_2_regs): Likewise.
	(cortex_a8_neon_vld1_3_4_regs): Likewise.
	(cortex_a8_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
	(cortex_a8_neon_vld2_4_regs): Likewise.
	(cortex_a8_neon_vld3_vld4): Likewise.
	(cortex_a8_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
	(cortex_a8_neon_vst1_3_4_regs): Likewise.
	(cortex_a8_neon_vst2_4_regs_vst3_vst4): Likewise.
	(cortex_a8_neon_vst3_vst4): Likewise.
	(cortex_a8_neon_vld1_vld2_lane): Likewise.
	(cortex_a8_neon_vld3_vld4_lane): Likewise.
	(cortex_a8_neon_vst1_vst2_lane): Likewise.
	(cortex_a8_neon_vst3_vst4_lane): Likewise.
	(cortex_a8_neon_vld3_vld4_all_lanes): Likewise.
	(cortex_a8_neon_mcr): Likewise.
	(cortex_a8_neon_mcr_2_mcrr): Likewise.
	* config/arm/cortex-a8.md (cortex_a8_alu): Update for attribute change.
	* config/arm/cortex-a9-neon.md (ca9_neon_mrc): Update for attribute change.
	(ca9_neon_mrrc): Likewise.
	(cortex_a9_neon_int_1): Likewise.
	(cortex_a9_neon_int_2): Likewise.
	(cortex_a9_neon_int_3): Likewise.
	(cortex_a9_neon_int_4): Likewise.
	(cortex_a9_neon_int_5): Likewise.
	(cortex_a9_neon_vqneg_vqabs): Likewise.
	(cortex_a9_neon_vmov): Likewise.
	(cortex_a9_neon_vaba): Likewise.
	(cortex_a9_neon_vaba_qqq): Likewise.
	(cortex_a9_neon_vsma): Likewise.
	(cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
	(cortex_a9_neon_mul_qqq_8_16_32_ddd_32): Likewise.
	(cortex_a9_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar): Likewise.
	(cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
	(cortex_a9_neon_mla_qqq_8_16): Likewise.
	(cortex_a9_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long): Likewise.
	(cortex_a9_neon_mla_qqq_32_qqd_32_scalar): Likewise.
	(cortex_a9_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
	(cortex_a9_neon_mul_qqd_32_scalar): Likewise.
	(cortex_a9_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
	(cortex_a9_neon_shift_1): Likewise.
	(cortex_a9_neon_shift_2): Likewise.
	(cortex_a9_neon_shift_3): Likewise.
	(cortex_a9_neon_vshl_ddd): Likewise.
	(cortex_a9_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
	(cortex_a9_neon_vsra_vrsra): Likewise.
	(cortex_a9_neon_fp_vadd_ddd_vabs_dd): Likewise.
	(cortex_a9_neon_fp_vadd_qqq_vabs_qq): Likewise.
	(cortex_a9_neon_fp_vsum): Likewise.
	(cortex_a9_neon_fp_vmul_ddd): Likewise.
	(cortex_a9_neon_fp_vmul_qqd): Likewise.
	(cortex_a9_neon_fp_vmla_ddd): Likewise.
	(cortex_a9_neon_fp_vmla_qqq): Likewise.
	(cortex_a9_neon_fp_vmla_ddd_scalar): Likewise.
	(cortex_a9_neon_fp_vmla_qqq_scalar): Likewise.
	(cortex_a9_neon_fp_vrecps_vrsqrts_ddd): Likewise.
	(cortex_a9_neon_fp_vrecps_vrsqrts_qqq): Likewise.
	(cortex_a9_neon_bp_simple): Likewise.
	(cortex_a9_neon_bp_2cycle): Likewise.
	(cortex_a9_neon_bp_3cycle): Likewise.
	(cortex_a9_neon_ldr): Likewise.
	(cortex_a9_neon_str): Likewise.
	(cortex_a9_neon_vld1_1_2_regs): Likewise.
	(cortex_a9_neon_vld1_3_4_regs): Likewise.
	(cortex_a9_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
	(cortex_a9_neon_vld2_4_regs): Likewise.
	(cortex_a9_neon_vld3_vld4): Likewise.
	(cortex_a9_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
	(cortex_a9_neon_vst1_3_4_regs): Likewise.
	(cortex_a9_neon_vst2_4_regs_vst3_vst4): Likewise.
	(cortex_a9_neon_vst3_vst4): Likewise.
	(cortex_a9_neon_vld1_vld2_lane): Likewise.
	(cortex_a9_neon_vld3_vld4_lane): Likewise.
	(cortex_a9_neon_vst1_vst2_lane): Likewise.
	(cortex_a9_neon_vst3_vst4_lane): Likewise.
	(cortex_a9_neon_vld3_vld4_all_lanes): Likewise.
	(cortex_a9_neon_mcr): Likewise.
	(cortex_a9_neon_mcr_2_mcrr): Likewise.
	* config/arm/cortex-a9.md (cortex_a9_dp): Update for attribute change.
	(cortex_a9_fps): Likewise.
	* config/arm/cortex-m4-fpu.md (cortex_m4_vmov_2): Update for attribute change.
	(cortex_m4_fmuls): Likewise.
	* config/arm/cortex-r4f.md (cortex_r4_mcr): Update for attribute change.
	(cortex_r4_mrc): Likewise.
	* config/arm/iterators.md: Update comment referring to neon_type.
	* config/arm/iwmmxt.md (iwmmxt_arm_movdi): Update for attribute change.
	(iwmmxt_movsi_insn): Likewise.
	* config/arm/marvell-pj4.md (pj4_vfp_to_core): Update for attribute change.
	(pj4_core_to_vfp): Likewise.
	* config/arm/neon-schedgen.ml (emit_insn_reservations): Update for attribute change.
	* config/arm/vfp11.md (vfp_fload): Update for attribute change.
	(vfp_fstore): Likewise.
	* doc/md.texi: Change references to neon_type to refer to type.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r201436
	2013-08-02  Sofiane Naci  <sofiane.naci@arm.com>

	* config/arm/types.md (define_attr "type"): Add "load_acq" and "store_rel".
	* config/arm/cortex-a53.md (cortex_a53_load1): Update for attribute changes.
	(cortex_a53_store1): Likewise.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r201400
	2013-08-01  Sofiane Naci  <sofiane.naci@arm.com>

	* config.gcc (aarch64*-*-*): Add aarch-common.o to extra_objs.
	Add aarch-common-protos.h to extra_headers.
	(aarch64*-*-*): Add arm/aarch-common-protos.h to tm_p_file.
	* config/aarch64/aarch64.md: Include "../arm/cortex-a53.md".
	* config/aarch64/t-aarch64 (aarch-common.o): Define.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r201399
	2013-08-01  Sofiane Naci  <sofiane.naci@arm.com>

	* config/aarch64/aarch64.md (define_attr "type"): Delete.
	Include "../arm/types.md".
	Define "type" attribute for all patterns.
	* config/aarch64/aarch64-simd.md (move_lo_quad_<mode>): Update for
	attribute changes.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r201376
	2013-07-31  Sofiane Naci  <sofiane.naci@arm.com>

	* config.gcc (arm*-*-*): Add aarch-common.o to extra_objs.
	Add aarch-common-protos.h to extra_headers.
	(arm*-*-*): Add arm/aarch-common-protos.h to tm_p_file.
	* config/arm/arm.c (arm_early_load_addr_dep): Move from here to ...
	(arm_early_store_addr_dep): Likewise.
	(arm_no_early_alu_shift_dep): Likewise.
	(arm_no_early_alu_shift_value_dep): Likewise.
	(arm_no_early_mul_dep): Likewise.
	(arm_no_early_store_addr_dep): Likewise.
	(arm_mac_accumulator_is_mul_result): Likewise.
	(arm_mac_accumulator_is_result): Likewise.
	* config/arm/aarch-common.c: ... here.  New file.
	* config/arm/arm-protos.h (arm_early_load_addr_dep): Move from here to ...
	(arm_early_store_addr_dep): Likewise.
	(arm_no_early_alu_shift_dep): Likewise.
	(arm_no_early_alu_shift_value_dep): Likewise.
	(arm_no_early_mul_dep): Likewise.
	(arm_no_early_store_addr_dep): Likewise.
	(arm_mac_accumulator_is_mul_result): Likewise.
	(arm_mac_accumulator_is_result): Likewise.
	* config/arm/aarch-common-protos.h: ... here.  New file.
	* config/arm/t-arm (aarch-common.o): Define.

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r201375
	2013-07-31  Sofiane Naci  <sofiane.naci@arm.com>

	* config/arm/arm.md: Include new file "types.md".
	(define_attr "type"): Move from here to ...
	(define_attr "mul32"): Likewise.
	(define_attr "mul64"): Likewise.
	* config/arm/types.md: ... here.  New file.

testsuite/

2014-04-07  Michael Collison  <michael.collison@linaro.org>

	Backport from trunk r204784
	2013-11-14  James Greenhalgh  <james.greenhalgh@arm.com>

	* gcc.target/aarch64/cpu-diagnostics-2.c: Change "-mcpu=" to "cortex-a53".
	* gcc.target/aarch64/cpu-diagnostics-3.c: Change "-mcpu=" to "cortex-a53".
git-svn-id: https://gcc.gnu.org/svn/gcc/branches/linaro/gcc-4_8-branch@209189 138bc75d-0d04-0410-961f-82ee72b054a4
-rw-r--r--  gcc/ChangeLog.linaro | 1985
-rw-r--r--  gcc/config.gcc | 12
-rw-r--r--  gcc/config/aarch64/aarch64-arches.def | 2
-rw-r--r--  gcc/config/aarch64/aarch64-cores.def | 4
-rw-r--r--  gcc/config/aarch64/aarch64-generic.md | 38
-rw-r--r--  gcc/config/aarch64/aarch64-simd.md | 876
-rw-r--r--  gcc/config/aarch64/aarch64-tune.md | 2
-rw-r--r--  gcc/config/aarch64/aarch64.c | 6
-rw-r--r--  gcc/config/aarch64/aarch64.h | 4
-rw-r--r--  gcc/config/aarch64/aarch64.md | 925
-rw-r--r--  gcc/config/aarch64/iterators.md | 19
-rw-r--r--  gcc/config/aarch64/large.md | 312
-rw-r--r--  gcc/config/aarch64/small.md | 287
-rw-r--r--  gcc/config/aarch64/t-aarch64 | 7
-rw-r--r--  gcc/config/arm/aarch-common-protos.h | 36
-rw-r--r--  gcc/config/arm/aarch-common.c | 356
-rw-r--r--  gcc/config/arm/arm-cores.def | 2
-rw-r--r--  gcc/config/arm/arm-fixed.md | 31
-rw-r--r--  gcc/config/arm/arm-protos.h | 8
-rw-r--r--  gcc/config/arm/arm.c | 278
-rw-r--r--  gcc/config/arm/arm.md | 1227
-rw-r--r--  gcc/config/arm/arm1020e.md | 29
-rw-r--r--  gcc/config/arm/arm1026ejs.md | 17
-rw-r--r--  gcc/config/arm/arm1136jfs.md | 17
-rw-r--r--  gcc/config/arm/arm926ejs.md | 15
-rw-r--r--  gcc/config/arm/cortex-a15-neon.md | 1246
-rw-r--r--  gcc/config/arm/cortex-a15.md | 52
-rw-r--r--  gcc/config/arm/cortex-a5.md | 26
-rw-r--r--  gcc/config/arm/cortex-a53.md | 33
-rw-r--r--  gcc/config/arm/cortex-a7.md | 190
-rw-r--r--  gcc/config/arm/cortex-a8-neon.md | 478
-rw-r--r--  gcc/config/arm/cortex-a8.md | 19
-rw-r--r--  gcc/config/arm/cortex-a9-neon.md | 432
-rw-r--r--  gcc/config/arm/cortex-a9.md | 30
-rw-r--r--  gcc/config/arm/cortex-m4-fpu.md | 8
-rw-r--r--  gcc/config/arm/cortex-m4.md | 14
-rw-r--r--  gcc/config/arm/cortex-r4.md | 15
-rw-r--r--  gcc/config/arm/cortex-r4f.md | 12
-rw-r--r--  gcc/config/arm/crypto.md | 12
-rw-r--r--  gcc/config/arm/fa526.md | 15
-rw-r--r--  gcc/config/arm/fa606te.md | 14
-rw-r--r--  gcc/config/arm/fa626te.md | 15
-rw-r--r--  gcc/config/arm/fa726te.md | 13
-rw-r--r--  gcc/config/arm/fmp626.md | 10
-rw-r--r--  gcc/config/arm/iterators.md | 34
-rw-r--r--  gcc/config/arm/iwmmxt.md | 6
-rw-r--r--  gcc/config/arm/marvell-pj4.md | 47
-rw-r--r--  gcc/config/arm/neon-schedgen.ml | 543
-rw-r--r--  gcc/config/arm/neon.md | 1008
-rw-r--r--  gcc/config/arm/t-arm | 5
-rw-r--r--  gcc/config/arm/thumb2.md | 125
-rw-r--r--  gcc/config/arm/types.md | 1076
-rw-r--r--  gcc/config/arm/vfp.md | 70
-rw-r--r--  gcc/config/arm/vfp11.md | 13
-rw-r--r--  gcc/doc/md.texi | 6
-rw-r--r--  gcc/testsuite/ChangeLog.linaro | 10
-rw-r--r--  gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-2.c | 2
-rw-r--r--  gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-3.c | 2
58 files changed, 6530 insertions, 5546 deletions
diff --git a/gcc/ChangeLog.linaro b/gcc/ChangeLog.linaro
index d2e1d219ed7..cd9964efc93 100644
--- a/gcc/ChangeLog.linaro
+++ b/gcc/ChangeLog.linaro
@@ -1,5 +1,1990 @@
2014-04-07 Michael Collison <michael.collison@linaro.org>
+ Backport from trunk r205105
+ 2013-11-20 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64.md: Remove "mode" and "mode2" attributes
+ from all insns.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r205050
+ 2013-11-19 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/arm.md (zero_extend<mode>di2): Add type attribute.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r204852
+ 2013-11-19 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64.md: Remove v8type from all insns.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r204852
+ 2013-11-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64-simd.md: Remove simd_type from all
+ patterns.
+ * config/aarch64/aarch64.md: Likewise, correct "type" attribute
+ where it is incorrect or missing.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r204784
+ 2013-11-14 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64-cores.def (example-1): Remove.
+ (example-2): Likewise.
+ * config/aarch64/aarch64-tune.md: Regenerate.
+ * config/aarch64/aarch64.md: Do not include "large.md" or "small.md".
+ (generic_sched): Remove "large", "small".
+ * config/aarch64/large.md: Delete.
+ * config/aarch64/small.md: Delete.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r204783
+ 2013-11-14 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64-cores.def (cortex-a57): Tune for cortexa15.
+ * config/aarch64/aarch64-tune.md: Regenerate.
+ * config/aarch64/aarch64.md: Include cortex-a15 pipeline model.
+ (generic_sched): "no" if we are tuning for cortexa15.
+ * config/arm/cortex-a15.md: Include cortex-a15-neon.md by
+ relative path.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r204782
+ 2013-11-14 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64-arches.def (armv8-a): Tune for cortex-a53.
+ * config/aarch64/aarch64.md: Do not include aarch64-generic.md.
+ * config/aarch64/aarch64.c (aarch64_tune): Initialize to cortexa53.
+ (all_cores): Use cortexa53 when tuning for "generic".
+ (aarch64_override_options): Fix comment.
+ * config/aarch64/aarch64.h (TARGET_CPU_DEFAULT): Set to cortexa53.
+ * config/aarch64/aarch64-generic.md: Delete.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r204575
+ 2013-11-08 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/aarch-common.c
+ (search_term): New typedef.
+ (shift_rtx_costs): New array.
+ (arm_rtx_shift_left_p): New.
+ (arm_find_sub_rtx_with_search_term): Likewise.
+ (arm_find_sub_rtx_with_code): Likewise.
+ (arm_early_load_addr_dep): Add sanity checking.
+ (arm_no_early_alu_shift_dep): Likewise.
+ (arm_no_early_alu_shift_value_dep): Likewise.
+ (arm_no_early_mul_dep): Likewise.
+ (arm_no_early_store_addr_dep): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203621
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/neon-schedgen.ml: Remove.
+ * config/arm/cortex-a9-neon.md: Remove comment regarding
+ neon-schedgen.ml.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203620
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/types.md: Remove old neon types.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203619
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/cortex-a7.md
+ (cortex_a7_neon_type): New.
+ (cortex_a7_neon_mul): Update for new types.
+ (cortex_a7_neon_mla): Likewise.
+ (cortex_a7_neon): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203618
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/cortex-a15-neon.md
+ (cortex_a15_neon_type): New.
+
+ (cortex_a15_neon_int_1): Remove.
+ (cortex_a15_neon_int_2): Likewise.
+ (cortex_a15_neon_int_3): Likewise.
+ (cortex_a15_neon_int_4): Likewise.
+ (cortex_a15_neon_int_5): Likewise.
+ (cortex_a15_neon_vqneg_vqabs): Likewise.
+ (cortex_a15_neon_vmov): Likewise.
+ (cortex_a15_neon_vaba): Likewise.
+ (cortex_a15_neon_vaba_qqq): Likewise.
+ (cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a15_neon_mul_qqq_8_16_32_ddd_32): Likewise.
+ (cortex_a15_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar):
+ Likewise.
+ (cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a15_neon_mla_qqq_8_16): Likewise.
+ (cortex_a15_neon_mla_ddd_32_qqd_16_ddd_32_scalar): Likewise.
+ (cortex_a15_neon_mla_qqq_32_qqd_32_scalar): Likewise.
+ (cortex_a15_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
+ (cortex_a15_neon_mul_qqd_32_scalar): Likewise.
+ (cortex_a15_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
+ (cortex_a15_neon_shift_1): Likewise.
+ (cortex_a15_neon_shift_2): Likewise.
+ (cortex_a15_neon_shift_3): Likewise.
+ (cortex_a15_neon_vshl_ddd): Likewise.
+ (cortex_a15_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
+ (cortex_a15_neon_vsra_vrsra): Likewise.
+ (cortex_a15_neon_fp_vmla_ddd_scalar): Likewise.
+ (cortex_a15_neon_fp_vmla_qqq_scalar): Likewise.
+ (cortex_a15_neon_bp_3cycle): Likewise.
+ (cortex_a15_neon_ldm_2): Likewise.
+ (cortex_a15_neon_stm_2): Likewise.
+ (cortex_a15_neon_mcr): Likewise.
+ (cortex_a15_neon_mrc): Likewise.
+ (cortex_a15_neon_fp_vadd_ddd_vabs_dd): Likewise.
+ (cortex_a15_neon_fp_vadd_qqq_vabs_qq): Likewise.
+ (cortex_a15_neon_fp_vmul_ddd): Likewise.
+ (cortex_a15_neon_fp_vmul_qqd): Likewise.
+ (cortex_a15_neon_fp_vmla_ddd): Likewise.
+ (cortex_a15_neon_fp_vmla_qqq): Likewise.
+ (cortex_a15_neon_fp_vmla_ddd_scalar): Likewise.
+ (cortex_a15_neon_fp_vmla_qqq_scalar): Likewise.
+ (cortex_a15_neon_fp_vrecps_vrsqrts_ddd): Likewise.
+ (cortex_a15_neon_fp_vrecps_vrsqrts_qqq): Likewise.
+ (cortex_a15_neon_bp_simple): Likewise.
+ (cortex_a15_neon_bp_2cycle): Likewise.
+ (cortex_a15_neon_bp_3cycle): Likewise.
+ (cortex_a15_neon_vld1_1_2_regs): Likewise.
+ (cortex_a15_neon_vld1_3_4_regs): Likewise.
+ (cortex_a15_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
+ (cortex_a15_neon_vld2_4_regs): Likewise.
+ (cortex_a15_neon_vld3_vld4): Likewise.
+ (cortex_a15_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
+ (cortex_a15_neon_vst1_3_4_regs): Likewise.
+ (cortex_a15_neon_vst2_4_regs_vst3_vst4): Rename to...
+ (cortex_a15_neon_vst2_4_regs_vst3): ...This, update for new attributes.
+ (cortex_a15_neon_vst3_vst4): Rename to...
+ (cortex_a15_neon_vst4): ...This, update for new attributes.
+ (cortex_a15_neon_vld1_vld2_lane): Update for new attributes.
+ (cortex_a15_neon_vld3_vld4_lane): Likewise.
+ (cortex_a15_neon_vst1_vst2_lane): Likewise.
+ (cortex_a15_neon_vst3_vst4_lane): Likewise.
+ (cortex_a15_neon_vld3_vld4_all_lanes): Likewise.
+ (cortex_a15_neon_ldm_2): Likewise.
+ (cortex_a15_neon_stm_2): Likewise.
+ (cortex_a15_neon_mcr): Likewise.
+ (cortex_a15_neon_mcr_2_mcrr): Likewise.
+ (cortex_a15_neon_mrc): Likewise.
+ (cortex_a15_neon_mrrc): Likewise.
+
+ (cortex_a15_neon_abd): New.
+ (cortex_a15_neon_abd_q): Likewise.
+ (cortex_a15_neon_aba): Likewise.
+ (cortex_a15_neon_aba_q): Likewise.
+ (cortex_a15_neon_acc): Likewise.
+ (cortex_a15_neon_acc_q): Likewise.
+ (cortex_a15_neon_arith_basic): Likewise.
+ (cortex_a15_neon_arith_complex): Likewise.
+ (cortex_a15_neon_multiply): Likewise.
+ (cortex_a15_neon_multiply_q): Likewise.
+ (cortex_a15_neon_mla): Likewise.
+ (cortex_a15_neon_mla_q): Likewise.
+ (cortex_a15_neon_sat_mla_long): Likewise.
+ (cortex_a15_neon_shift_acc): Likewise.
+ (cortex_a15_neon_shift_imm_basic): Likewise.
+ (cortex_a15_neon_shift_imm_complex): Likewise.
+ (cortex_a15_neon_shift_reg_basic): Likewise.
+ (cortex_a15_neon_shift_reg_basic_q): Likewise.
+ (cortex_a15_neon_shift_reg_complex): Likewise.
+ (cortex_a15_neon_shift_reg_complex_q): Likewise.
+ (cortex_a15_neon_fp_negabs): Likewise.
+ (cortex_a15_neon_fp_arith): Likewise.
+ (cortex_a15_neon_fp_arith_q): Likewise.
+ (cortex_a15_neon_fp_cvt_int): Likewise.
+ (cortex_a15_neon_fp_cvt_int_q): Likewise.
+ (cortex_a15_neon_fp_cvt_16): Likewise.
+ (cortex_a15_neon_fp_mul): Likewise.
+ (cortex_a15_neon_fp_mul_q): Likewise.
+ (cortex_a15_neon_fp_mla): Likewise.
+ (cortex_a15_neon_fp_mla_q): Likewise.
+ (cortex_a15_neon_fp_recps_rsqrte): Likewise.
+ (cortex_a15_neon_fp_recps_rsqrte_q): Likewise.
+ (cortex_a15_neon_bitops): Likewise.
+ (cortex_a15_neon_bitops_q): Likewise.
+ (cortex_a15_neon_from_gp): Likewise.
+ (cortex_a15_neon_from_gp_q): Likewise.
+ (cortex_a15_neon_tbl3_tbl4): Likewise.
+ (cortex_a15_neon_zip_q): Likewise.
+ (cortex_a15_neon_to_gp): Likewise.
+ (cortex_a15_neon_load_a): Likewise.
+ (cortex_a15_neon_load_b): Likewise.
+ (cortex_a15_neon_load_c): Likewise.
+ (cortex_a15_neon_load_d): Likewise.
+ (cortex_a15_neon_load_e): Likewise.
+ (cortex_a15_neon_load_f): Likewise.
+ (cortex_a15_neon_store_a): Likewise.
+ (cortex_a15_neon_store_b): Likewise.
+ (cortex_a15_neon_store_c): Likewise.
+ (cortex_a15_neon_store_d): Likewise.
+ (cortex_a15_neon_store_e): Likewise.
+ (cortex_a15_neon_store_f): Likewise.
+ (cortex_a15_neon_store_g): Likewise.
+ (cortex_a15_neon_store_h): Likewise.
+ (cortex_a15_vfp_to_from_gp): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203617
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/cortex-a9-neon.md (cortex_a9_neon_type): New.
+
+ (cortex_a9_neon_vshl_ddd): Remove.
+ (cortex_a9_neon_vst3_vst4): Likewise.
+ (cortex_a9_neon_vld3_vld4_all_lanes): Likewise.
+
+ (cortex_a9_neon_bit_ops_q): New.
+
+ (cortex_a9_neon_int_1): Use cortex_a9_neon_type.
+ (cortex_a9_neon_int_2): Likewise.
+ (cortex_a9_neon_int_3): Likewise.
+ (cortex_a9_neon_int_4): Likewise.
+ (cortex_a9_neon_int_5): Likewise.
+ (cortex_a9_neon_vqneg_vqabs): Likewise.
+ (cortex_a9_neon_vmov): Likewise.
+ (cortex_a9_neon_vaba): Likewise.
+ (cortex_a9_neon_vaba_qqq): Likewise.
+ (cortex_a9_neon_shift_1): Likewise.
+ (cortex_a9_neon_shift_2): Likewise.
+ (cortex_a9_neon_shift_3): Likewise.
+ (cortex_a9_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
+ (cortex_a9_neon_vsra_vrsra): Likewise.
+ (cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a9_neon_mul_qqq_8_16_32_ddd_32): Likewise.
+ (cortex_a9_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar):
+ Likewise.
+ (cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a9_neon_mla_qqq_8_16): Likewise.
+ (cortex_a9_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long):
+ Likewise.
+ (cortex_a9_neon_mla_qqq_32_qqd_32_scalar): Likewise.
+ (cortex_a9_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
+ (cortex_a9_neon_mul_qqd_32_scalar): Likewise.
+ (cortex_a9_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
+ (cortex_a9_neon_fp_vadd_ddd_vabs_dd): Likewise.
+ (cortex_a9_neon_fp_vadd_qqq_vabs_qq): Likewise.
+ (cortex_a9_neon_fp_vsum): Likewise.
+ (cortex_a9_neon_fp_vmul_ddd): Likewise.
+ (cortex_a9_neon_fp_vmul_qqd): Likewise.
+ (cortex_a9_neon_fp_vmla_ddd): Likewise.
+ (cortex_a9_neon_fp_vmla_qqq): Likewise.
+ (cortex_a9_neon_fp_vmla_ddd_scalar): Likewise.
+ (cortex_a9_neon_fp_vmla_qqq_scalar): Likewise.
+ (cortex_a9_neon_fp_vrecps_vrsqrts_ddd): Likewise.
+ (cortex_a9_neon_fp_vrecps_vrsqrts_qqq): Likewise.
+ (cortex_a9_neon_bp_simple): Likewise.
+ (cortex_a9_neon_bp_2cycle): Likewise.
+ (cortex_a9_neon_bp_3cycle): Likewise.
+ (cortex_a9_neon_ldr): Likewise.
+ (cortex_a9_neon_str): Likewise.
+ (cortex_a9_neon_vld1_1_2_regs): Likewise.
+ (cortex_a9_neon_vld1_3_4_regs): Likewise.
+ (cortex_a9_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
+ (cortex_a9_neon_vld2_4_regs): Likewise.
+ (cortex_a9_neon_vld3_vld4): Likewise.
+ (cortex_a9_neon_vld1_vld2_lane): Likewise.
+ (cortex_a9_neon_vld3_vld4_lane): Likewise.
+ (cortex_a9_neon_vld3_vld4_all_lanes): Likewise.
+ (cortex_a9_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
+ (cortex_a9_neon_vst1_3_4_regs): Likewise.
+ (cortex_a9_neon_vst2_4_regs_vst3_vst4): Likewise.
+ (cortex_a9_neon_vst1_vst2_lane): Likewise.
+ (cortex_a9_neon_vst3_vst4_lane): Likewise.
+ (cortex_a9_neon_mcr): Likewise.
+ (cortex_a9_neon_mcr_2_mcrr): Likewise.
+ (cortex_a9_neon_mrc): Likewise.
+ (cortex_a9_neon_mrrc): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203616
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/cortex-a8-neon.md (cortex_a8_neon_type): New.
+
+ (cortex_a8_neon_vshl_ddd): Remove.
+ (cortex_a8_neon_vst3_vst4): Likewise.
+ (cortex_a8_neon_vld3_vld4_all_lanes): Likewise.
+
+ (cortex_a8_neon_bit_ops_q): New.
+
+ (cortex_a8_neon_int_1): Use cortex_a8_neon_type.
+ (cortex_a8_neon_int_2): Likewise.
+ (cortex_a8_neon_int_3): Likewise.
+ (cortex_a8_neon_int_5): Likewise.
+ (cortex_a8_neon_vqneg_vqabs): Likewise.
+ (cortex_a8_neon_int_4): Likewise.
+ (cortex_a8_neon_vaba): Likewise.
+ (cortex_a8_neon_vaba_qqq): Likewise.
+ (cortex_a8_neon_shift_1): Likewise.
+ (cortex_a8_neon_shift_2): Likewise.
+ (cortex_a8_neon_shift_3): Likewise.
+ (cortex_a8_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
+ (cortex_a8_neon_vsra_vrsra): Likewise.
+ (cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a8_neon_mul_qqq_8_16_32_ddd_32): Likewise.
+ (cortex_a8_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar):
+ Likewise.
+ (cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a8_neon_mla_qqq_8_16): Likewise.
+ (cortex_a8_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long):
+ Likewise.
+ (cortex_a8_neon_mla_qqq_32_qqd_32_scalar): Likewise.
+ (cortex_a8_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
+ (cortex_a8_neon_mul_qqd_32_scalar): Likewise.
+ (cortex_a8_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
+ (cortex_a8_neon_fp_vadd_ddd_vabs_dd): Likewise.
+ (cortex_a8_neon_fp_vadd_qqq_vabs_qq): Likewise.
+ (cortex_a8_neon_fp_vsum): Likewise.
+ (cortex_a8_neon_fp_vmul_ddd): Likewise.
+ (cortex_a8_neon_fp_vmul_qqd): Likewise.
+ (cortex_a8_neon_fp_vmla_ddd): Likewise.
+ (cortex_a8_neon_fp_vmla_qqq): Likewise.
+ (cortex_a8_neon_fp_vmla_ddd_scalar): Likewise.
+ (cortex_a8_neon_fp_vmla_qqq_scalar): Likewise.
+ (cortex_a8_neon_fp_vrecps_vrsqrts_ddd): Likewise.
+ (cortex_a8_neon_fp_vrecps_vrsqrts_qqq): Likewise.
+ (cortex_a8_neon_bp_simple): Likewise.
+ (cortex_a8_neon_bp_2cycle): Likewise.
+ (cortex_a8_neon_bp_3cycle): Likewise.
+ (cortex_a8_neon_ldr): Likewise.
+ (cortex_a8_neon_str): Likewise.
+ (cortex_a8_neon_vld1_1_2_regs): Likewise.
+ (cortex_a8_neon_vld1_3_4_regs): Likewise.
+ (cortex_a8_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
+ (cortex_a8_neon_vld2_4_regs): Likewise.
+ (cortex_a8_neon_vld3_vld4): Likewise.
+ (cortex_a8_neon_vld1_vld2_lane): Likewise.
+ (cortex_a8_neon_vld3_vld4_lane): Likewise.
+ (cortex_a8_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
+ (cortex_a8_neon_vst1_3_4_regs): Likewise.
+ (cortex_a8_neon_vst2_4_regs_vst3_vst4): Likewise.
+ (cortex_a8_neon_vst1_vst2_lane): Likewise.
+ (cortex_a8_neon_vst3_vst4_lane): Likewise.
+ (cortex_a8_neon_mcr): Likewise.
+ (cortex_a8_neon_mcr_2_mcrr): Likewise.
+ (cortex_a8_neon_mrc): Likewise.
+ (cortex_a8_neon_mrrc): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203614
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/iterators.md (Vetype): Add SF and DF modes.
+ (fp): New.
+ * config/aarch64/aarch64-simd.md (neon_type): Remove.
+ (aarch64_simd_dup<mode>): Add "type" attribute.
+ (aarch64_dup_lane<mode>): Likewise.
+ (aarch64_dup_lane_<vswap_width_name><mode>): Likewise.
+ (*aarch64_simd_mov<mode>): Likewise.
+ (aarch64_simd_mov_from_<mode>low): Likewise.
+ (aarch64_simd_mov_from_<mode>high): Likewise.
+ (orn<mode>3): Likewise.
+ (bic<mode>3): Likewise.
+ (add<mode>3): Likewise.
+ (sub<mode>3): Likewise.
+ (mul<mode>3): Likewise.
+ (*aarch64_mul3_elt<mode>): Likewise.
+ (*aarch64_mul3_elt_<vswap_width_name><mode>): Likewise.
+ (*aarch64_mul3_elt_to_128df): Likewise.
+ (*aarch64_mul3_elt_to_64v2df): Likewise.
+ (neg<mode>2): Likewise.
+ (abs<mode>2): Likewise.
+ (abd<mode>_3): Likewise.
+ (aba<mode>_3): Likewise.
+ (fabd<mode>_3): Likewise.
+ (*fabd_scalar<mode>3): Likewise.
+ (and<mode>3): Likewise.
+ (ior<mode>3): Likewise.
+ (xor<mode>3): Likewise.
+ (one_cmpl<mode>2): Likewise.
+ (aarch64_simd_vec_set<mode>): Likewise.
+ (aarch64_simd_lshr<mode>): Likewise.
+ (aarch64_simd_ashr<mode>): Likewise.
+ (aarch64_simd_imm_shl<mode>): Likewise.
+ (aarch64_simd_reg_sshl<mode>): Likewise.
+ (aarch64_simd_reg_shl<mode>_unsigned): Likewise.
+ (aarch64_simd_reg_shl<mode>_signed): Likewise.
+ (aarch64_simd_vec_setv2di): Likewise.
+ (aarch64_simd_vec_set<mode>): Likewise.
+ (aarch64_mla<mode>): Likewise.
+ (*aarch64_mla_elt<mode>): Likewise.
+ (*aarch64_mla_elt_<vswap_width_name><mode>): Likewise.
+ (aarch64_mls<mode>): Likewise.
+ (*aarch64_mls_elt<mode>): Likewise.
+ (*aarch64_mls_elt_<vswap_width_name><mode>): Likewise.
+ (<su><maxmin><mode>3): Likewise.
+ (move_lo_quad_<mode>): Likewise.
+ (aarch64_simd_move_hi_quad_<mode>): Likewise.
+ (aarch64_simd_vec_pack_trunc_<mode>): Likewise.
+ (vec_pack_trunc_<mode>): Likewise.
+ (aarch64_simd_vec_unpack<su>_lo_<mode>): Likewise.
+ (aarch64_simd_vec_unpack<su>_hi_<mode>): Likewise.
+ (*aarch64_<su>mlal_lo<mode>): Likewise.
+ (*aarch64_<su>mlal_hi<mode>): Likewise.
+ (*aarch64_<su>mlsl_lo<mode>): Likewise.
+ (*aarch64_<su>mlsl_hi<mode>): Likewise.
+ (*aarch64_<su>mlal<mode>): Likewise.
+ (*aarch64_<su>mlsl<mode>): Likewise.
+ (aarch64_simd_vec_<su>mult_lo_<mode>): Likewise.
+ (aarch64_simd_vec_<su>mult_hi_<mode>): Likewise.
+ (add<mode>3): Likewise.
+ (sub<mode>3): Likewise.
+ (mul<mode>3): Likewise.
+ (div<mode>3): Likewise.
+ (neg<mode>2): Likewise.
+ (abs<mode>2): Likewise.
+ (fma<mode>4): Likewise.
+ (*aarch64_fma4_elt<mode>): Likewise.
+ (*aarch64_fma4_elt_<vswap_width_name><mode>): Likewise.
+ (*aarch64_fma4_elt_to_128df): Likewise.
+ (*aarch64_fma4_elt_to_64v2df): Likewise.
+ (fnma<mode>4): Likewise.
+ (*aarch64_fnma4_elt<mode>): Likewise.
+ (*aarch64_fnma4_elt_<vswap_width_name><mode>): Likewise.
+ (*aarch64_fnma4_elt_to_128df): Likewise.
+ (*aarch64_fnma4_elt_to_64v2df): Likewise.
+ (<frint_pattern><mode>2): Likewise.
+ (l<fcvt_pattern><su_optab><VDQF:mode><fcvt_target>2): Likewise.
+ (<optab><fcvt_target><VDQF:VDQF:mode>2): Likewise.
+ (vec_unpacks_lo_v4sf): Likewise.
+ (aarch64_float_extend_lo_v2df): Likewise.
+ (vec_unpacks_hi_v4sf): Likewise.
+ (aarch64_float_truncate_lo_v2sf): Likewise.
+ (aarch64_float_truncate_hi_v4sf): Likewise.
+ (aarch64_vmls<mode>): Likewise.
+ (<su><maxmin><mode>3): Likewise.
+ (<maxmin_uns><mode>3): Likewise.
+ (reduc_<sur>plus_<mode>): Likewise.
+ (reduc_<sur>plus_v2di): Likewise.
+ (reduc_<sur>plus_v2si): Likewise.
+ (reduc_<sur>plus_<mode>): Likewise.
+ (aarch64_addpv4sf): Likewise.
+ (clz<mode>2): Likewise.
+ (reduc_<maxmin_uns>_<mode>): Likewise.
+ (reduc_<maxmin_uns>_v2di): Likewise.
+ (reduc_<maxmin_uns>_v2si): Likewise.
+ (reduc_<maxmin_uns>_<mode>): Likewise.
+ (reduc_<maxmin_uns>_v4sf): Likewise.
+ (aarch64_simd_bsl<mode>_internal): Likewise.
+ (*aarch64_get_lane_extend<GPI:mode><VDQQH:mode>): Likewise.
+ (*aarch64_get_lane_zero_extendsi<mode>): Likewise.
+ (aarch64_get_lane<mode>): Likewise.
+ (*aarch64_combinez<mode>): Likewise.
+ (aarch64_combine<mode>): Likewise.
+ (aarch64_simd_combine<mode>): Likewise.
+ (aarch64_<ANY_EXTEND:su><ADDSUB:optab>l<mode>_hi_internal): Likewise.
+ (aarch64_<ANY_EXTEND:su><ADDSUB:optab>l<mode>_lo_internal): Likewise.
+ (aarch64_<ANY_EXTEND:su><ADDSUB:optab>l<mode>): Likewise.
+ (aarch64_<ANY_EXTEND:su><ADDSUB:optab>w<mode>): Likewise.
+ (aarch64_<ANY_EXTEND:su><ADDSUB:optab>w2<mode>_internal): Likewise.
+ (aarch64_<sur>h<addsub><mode>): Likewise.
+ (aarch64_<sur><addsub>hn<mode>): Likewise.
+ (aarch64_<sur><addsub>hn2<mode>): Likewise.
+ (aarch64_pmul<mode>): Likewise.
+ (aarch64_<su_optab><optab><mode>): Likewise.
+ (aarch64_<sur>qadd<mode>): Likewise.
+ (aarch64_sqmovun<mode>): Likewise.
+ (aarch64_<sur>qmovn<mode>): Likewise.
+ (aarch64_s<optab><mode>): Likewise.
+ (aarch64_sq<r>dmulh<mode>): Likewise.
+ (aarch64_sq<r>dmulh_lane<mode>): Likewise.
+ (aarch64_sq<r>dmulh_laneq<mode>): Likewise.
+ (aarch64_sq<r>dmulh_lane<mode>): Likewise.
+ (aarch64_sqdml<SBINQOPS:as>l<mode>): Likewise.
+ (aarch64_sqdml<SBINQOPS:as>l_lane<mode>_internal): Likewise.
+ (aarch64_sqdml<SBINQOPS:as>l_lane<mode>_internal): Likewise.
+ (aarch64_sqdml<SBINQOPS:as>l_n<mode>): Likewise.
+ (aarch64_sqdml<SBINQOPS:as>l2<mode>_internal): Likewise.
+ (aarch64_sqdml<SBINQOPS:as>l2_lane<mode>_internal): Likewise.
+ (aarch64_sqdml<SBINQOPS:as>l2_n<mode>_internal): Likewise.
+ (aarch64_sqdmull<mode>): Likewise.
+ (aarch64_sqdmull_lane<mode>_internal): Likewise.
+ (aarch64_sqdmull_n<mode>): Likewise.
+ (aarch64_sqdmull2<mode>_internal): Likewise.
+ (aarch64_sqdmull2_lane<mode>_internal): Likewise.
+ (aarch64_sqdmull2_n<mode>_internal): Likewise.
+ (aarch64_<sur>shl<mode>): Likewise.
+ (aarch64_<sur>q<r>shl<mode>): Likewise.
+ (aarch64_<sur>shll_n<mode>): Likewise.
+ (aarch64_<sur>shll2_n<mode>): Likewise.
+ (aarch64_<sur>shr_n<mode>): Likewise.
+ (aarch64_<sur>sra_n<mode>): Likewise.
+ (aarch64_<sur>s<lr>i_n<mode>): Likewise.
+ (aarch64_<sur>qshl<u>_n<mode>): Likewise.
+ (aarch64_<sur>q<r>shr<u>n_n<mode>): Likewise.
+ (aarch64_cm<optab><mode>): Likewise.
+ (aarch64_cm<optab>di): Likewise.
+ (aarch64_cm<optab><mode>): Likewise.
+ (aarch64_cm<optab>di): Likewise.
+ (aarch64_cmtst<mode>): Likewise.
+ (aarch64_cmtstdi): Likewise.
+ (aarch64_cm<optab><mode>): Likewise.
+ (*aarch64_fac<optab><mode>): Likewise.
+ (aarch64_addp<mode>): Likewise.
+ (aarch64_addpdi): Likewise.
+ (sqrt<mode>2): Likewise.
+ (vec_load_lanesoi<mode>): Likewise.
+ (vec_store_lanesoi<mode>): Likewise.
+ (vec_load_lanesci<mode>): Likewise.
+ (vec_store_lanesci<mode>): Likewise.
+ (vec_load_lanesxi<mode>): Likewise.
+ (vec_store_lanesxi<mode>): Likewise.
+ (*aarch64_mov<mode>): Likewise.
+ (aarch64_ld2<mode>_dreg): Likewise.
+ (aarch64_ld2<mode>_dreg): Likewise.
+ (aarch64_ld3<mode>_dreg): Likewise.
+ (aarch64_ld3<mode>_dreg): Likewise.
+ (aarch64_ld4<mode>_dreg): Likewise.
+ (aarch64_ld4<mode>_dreg): Likewise.
+ (aarch64_tbl1<mode>): Likewise.
+ (aarch64_tbl2v16qi): Likewise.
+ (aarch64_combinev16qi): Likewise.
+ (aarch64_<PERMUTE:perm_insn><PERMUTE:perm_hilo><mode>): Likewise.
+ (aarch64_st2<mode>_dreg): Likewise.
+ (aarch64_st2<mode>_dreg): Likewise.
+ (aarch64_st3<mode>_dreg): Likewise.
+ (aarch64_st3<mode>_dreg): Likewise.
+ (aarch64_st4<mode>_dreg): Likewise.
+ (aarch64_st4<mode>_dreg): Likewise.
+ (*aarch64_simd_ld1r<mode>): Likewise.
+ (aarch64_frecpe<mode>): Likewise.
+ (aarch64_frecp<FRECP:frecp_suffix><mode>): Likewise.
+ (aarch64_frecps<mode>): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203613
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/iterators.md (V_elem_ch): New.
+ (q): Likewise.
+ (VQH_type): Likewise.
+ * config/arm/arm.md (is_neon_type): New.
+ (conds): Use is_neon_type.
+ (anddi3_insn): Update type attribute.
+ (xordi3_insn): Likewise.
+ (one_cmpldi2): Likewise.
+ * gcc/config/arm/vfp.md (movhf_vfp_neon): Update type attribute.
+ * gcc/config/arm/neon.md (neon_mov): Update type attribute.
+ (*movmisalign<mode>_neon_store): Likewise.
+ (*movmisalign<mode>_neon_load): Likewise.
+ (vec_set<mode>_internal): Likewise.
+ (vec_set<mode>_internal): Likewise.
+ (vec_setv2di_internal): Likewise.
+ (vec_extract<mode>): Likewise.
+ (vec_extract<mode>): Likewise.
+ (vec_extractv2di): Likewise.
+ (*add<mode>3_neon): Likewise.
+ (adddi3_neon): Likewise.
+ (*sub<mode>3_neon): Likewise.
+ (subdi3_neon): Likewise.
+ (fma<VCVTF:mode>4): Likewise.
+ (fma<VCVTF:mode>4_intrinsic): Likewise.
+ (*fmsub<VCVTF:mode>4): Likewise.
+ (fmsub<VCVTF:mode>4_intrinsic): Likewise.
+ (neon_vrint<NEON_VRINT:nvrint_variant><VCVTF:mode>): Likewise.
+ (ior<mode>3): Likewise.
+ (and<mode>3): Likewise.
+ (orn<mode>3_neon): Likewise.
+ (orndi3_neon): Likewise.
+ (bic<mode>3_neon): Likewise.
+ (bicdi3_neon): Likewise.
+ (xor<mode>3): Likewise.
+ (one_cmpl<mode>2): Likewise.
+ (abs<mode>2): Likewise.
+ (neg<mode>2): Likewise.
+ (negdi2_neon): Likewise.
+ (*umin<mode>3_neon): Likewise.
+ (*umax<mode>3_neon): Likewise.
+ (*smin<mode>3_neon): Likewise.
+ (*smax<mode>3_neon): Likewise.
+ (vashl<mode>3): Likewise.
+ (vashr<mode>3_imm): Likewise.
+ (vlshr<mode>3_imm): Likewise.
+ (ashl<mode>3_signed): Likewise.
+ (ashl<mode>3_unsigned): Likewise.
+ (neon_load_count): Likewise.
+ (ashldi3_neon_noclobber): Likewise.
+ (ashldi3_neon): Likewise.
+ (signed_shift_di3_neon): Likewise.
+ (unsigned_shift_di3_neon): Likewise.
+ (ashrdi3_neon_imm_noclobber): Likewise.
+ (lshrdi3_neon_imm_noclobber): Likewise.
+ (<shift>di3_neon): Likewise.
+ (widen_ssum<mode>3): Likewise.
+ (widen_usum<mode>3): Likewise.
+ (quad_halves_<code>v4si): Likewise.
+ (quad_halves_<code>v4sf): Likewise.
+ (quad_halves_<code>v8hi): Likewise.
+ (quad_halves_<code>v16qi): Likewise.
+ (reduc_splus_v2di): Likewise.
+ (neon_vpadd_internal<mode>): Likewise.
+ (neon_vpsmin<mode>): Likewise.
+ (neon_vpsmax<mode>): Likewise.
+ (neon_vpumin<mode>): Likewise.
+ (neon_vpumax<mode>): Likewise.
+ (*ss_add<mode>_neon): Likewise.
+ (*us_add<mode>_neon): Likewise.
+ (*ss_sub<mode>_neon): Likewise.
+ (*us_sub<mode>_neon): Likewise.
+ (neon_vadd<mode>_unspec): Likewise.
+ (neon_vaddl<mode>): Likewise.
+ (neon_vaddw<mode>): Likewise.
+ (neon_vhadd<mode>): Likewise.
+ (neon_vqadd<mode>): Likewise.
+ (neon_vaddhn<mode>): Likewise.
+ (neon_vmul<mode>): Likewise.
+ (neon_vfms<VCVTF:mode>): Likewise.
+ (neon_vmlal<mode>): Likewise.
+ (neon_vmls<mode>): Likewise.
+ (neon_vmlsl<mode>): Likewise.
+ (neon_vqdmulh<mode>): Likewise.
+ (neon_vqdmlal<mode>): Likewise.
+ (neon_vqdmlsl<mode>): Likewise.
+ (neon_vmull<mode>): Likewise.
+ (neon_vqdmull<mode>): Likewise.
+ (neon_vsub<mode>_unspec): Likewise.
+ (neon_vsubl<mode>): Likewise.
+ (neon_vsubw<mode>): Likewise.
+ (neon_vqsub<mode>): Likewise.
+ (neon_vhsub<mode>): Likewise.
+ (neon_vsubhn<mode>): Likewise.
+ (neon_vceq<mode>): Likewise.
+ (neon_vcge<mode>): Likewise.
+ (neon_vcgeu<mode>): Likewise.
+ (neon_vcgt<mode>): Likewise.
+ (neon_vcgtu<mode>): Likewise.
+ (neon_vcle<mode>): Likewise.
+ (neon_vclt<mode>): Likewise.
+ (neon_vcage<mode>): Likewise.
+ (neon_vcagt<mode>): Likewise.
+ (neon_vtst<mode>): Likewise.
+ (neon_vabd<mode>): Likewise.
+ (neon_vabdl<mode>): Likewise.
+ (neon_vaba<mode>): Likewise.
+ (neon_vabal<mode>): Likewise.
+ (neon_vmax<mode>): Likewise.
+ (neon_vmin<mode>): Likewise.
+ (neon_vpaddl<mode>): Likewise.
+ (neon_vpadal<mode>): Likewise.
+ (neon_vpmax<mode>): Likewise.
+ (neon_vpmin<mode>): Likewise.
+ (neon_vrecps<mode>): Likewise.
+ (neon_vrsqrts<mode>): Likewise.
+ (neon_vqabs<mode>): Likewise.
+ (neon_vqneg<mode>): Likewise.
+ (neon_vcls<mode>): Likewise.
+ (clz<mode>2): Likewise.
+ (popcount<mode>2): Likewise.
+ (neon_vrecpe<mode>): Likewise.
+ (neon_vrsqrte<mode>): Likewise.
+ (neon_vget_lane<mode>_sext_internal): Likewise.
+ (neon_vget_lane<mode>_zext_internal): Likewise.
+ (neon_vdup_n<mode>): Likewise.
+ (neon_vdup_n<mode>): Likewise.
+ (neon_vdup_nv2di): Likewise.
+ (neon_vdup_lane<mode>_internal): Likewise.
+ (*neon_vswp<mode>): Likewise.
+ (neon_vcombine<mode>): Likewise.
+ (float<mode><V_cvtto>2): Likewise.
+ (floatuns<mode><V_cvtto>2): Likewise.
+ (fix_trunc<mode><V_cvtto>2): Likewise.
+ (fixuns_trunc<mode><V_cvtto>2): Likewise.
+ (neon_vcvt<mode>): Likewise.
+ (neon_vcvt<mode>): Likewise.
+ (neon_vcvtv4sfv4hf): Likewise.
+ (neon_vcvtv4hfv4sf): Likewise.
+ (neon_vcvt_n<mode>): Likewise.
+ (neon_vcvt_n<mode>): Likewise.
+ (neon_vmovn<mode>): Likewise.
+ (neon_vqmovn<mode>): Likewise.
+ (neon_vqmovun<mode>): Likewise.
+ (neon_vmovl<mode>): Likewise.
+ (neon_vmul_lane<mode>): Likewise.
+ (neon_vmul_lane<mode>): Likewise.
+ (neon_vmull_lane<mode>): Likewise.
+ (neon_vqdmull_lane<mode>): Likewise.
+ (neon_vqdmulh_lane<mode>): Likewise.
+ (neon_vqdmulh_lane<mode>): Likewise.
+ (neon_vmla_lane<mode>): Likewise.
+ (neon_vmla_lane<mode>): Likewise.
+ (neon_vmlal_lane<mode>): Likewise.
+ (neon_vqdmlal_lane<mode>): Likewise.
+ (neon_vmls_lane<mode>): Likewise.
+ (neon_vmls_lane<mode>): Likewise.
+ (neon_vmlsl_lane<mode>): Likewise.
+ (neon_vqdmlsl_lane<mode>): Likewise.
+ (neon_vext<mode>): Likewise.
+ (neon_vrev64<mode>): Likewise.
+ (neon_vrev32<mode>): Likewise.
+ (neon_vrev16<mode>): Likewise.
+ (neon_vbsl<mode>_internal): Likewise.
+ (neon_vshl<mode>): Likewise.
+ (neon_vqshl<mode>): Likewise.
+ (neon_vshr_n<mode>): Likewise.
+ (neon_vshrn_n<mode>): Likewise.
+ (neon_vqshrn_n<mode>): Likewise.
+ (neon_vqshrun_n<mode>): Likewise.
+ (neon_vshl_n<mode>): Likewise.
+ (neon_vqshl_n<mode>): Likewise.
+ (neon_vqshlu_n<mode>): Likewise.
+ (neon_vshll_n<mode>): Likewise.
+ (neon_vsra_n<mode>): Likewise.
+ (neon_vsri_n<mode>): Likewise.
+ (neon_vsli_n<mode>): Likewise.
+ (neon_vtbl1v8qi): Likewise.
+ (neon_vtbl2v8qi): Likewise.
+ (neon_vtbl3v8qi): Likewise.
+ (neon_vtbl4v8qi): Likewise.
+ (neon_vtbl1v16qi): Likewise.
+ (neon_vtbl2v16qi): Likewise.
+ (neon_vcombinev16qi): Likewise.
+ (neon_vtbx1v8qi): Likewise.
+ (neon_vtbx2v8qi): Likewise.
+ (neon_vtbx3v8qi): Likewise.
+ (neon_vtbx4v8qi): Likewise.
+ (*neon_vtrn<mode>_insn): Likewise.
+ (*neon_vzip<mode>_insn): Likewise.
+ (*neon_vuzp<mode>_insn): Likewise.
+ (neon_vld1<mode>): Likewise.
+ (neon_vld1_lane<mode>): Likewise.
+ (neon_vld1_lane<mode>): Likewise.
+ (neon_vld1_dup<mode>): Likewise.
+ (neon_vld1_dup<mode>): Likewise.
+ (neon_vld1_dupv2di): Likewise.
+ (neon_vst1<mode>): Likewise.
+ (neon_vst1_lane<mode>): Likewise.
+ (neon_vst1_lane<mode>): Likewise.
+ (neon_vld2<mode>): Likewise.
+ (neon_vld2<mode>): Likewise.
+ (neon_vld2_lane<mode>): Likewise.
+ (neon_vld2_lane<mode>): Likewise.
+ (neon_vld2_dup<mode>): Likewise.
+ (neon_vst2<mode>): Likewise.
+ (neon_vst2<mode>): Likewise.
+ (neon_vst2_lane<mode>): Likewise.
+ (neon_vst2_lane<mode>): Likewise.
+ (neon_vld3<mode>): Likewise.
+ (neon_vld3qa<mode>): Likewise.
+ (neon_vld3qb<mode>): Likewise.
+ (neon_vld3_lane<mode>): Likewise.
+ (neon_vld3_lane<mode>): Likewise.
+ (neon_vld3_dup<mode>): Likewise.
+ (neon_vst3<mode>): Likewise.
+ (neon_vst3qa<mode>): Likewise.
+ (neon_vst3qb<mode>): Likewise.
+ (neon_vst3_lane<mode>): Likewise.
+ (neon_vst3_lane<mode>): Likewise.
+ (neon_vld4<mode>): Likewise.
+ (neon_vld4qa<mode>): Likewise.
+ (neon_vld4qb<mode>): Likewise.
+ (neon_vld4_lane<mode>): Likewise.
+ (neon_vld4_lane<mode>): Likewise.
+ (neon_vld4_dup<mode>): Likewise.
+ (neon_vst4<mode>): Likewise.
+ (neon_vst4qa<mode>): Likewise.
+ (neon_vst4qb<mode>): Likewise.
+ (neon_vst4_lane<mode>): Likewise.
+ (neon_vst4_lane<mode>): Likewise.
+ (neon_vec_unpack<US>_lo_<mode>): Likewise.
+ (neon_vec_unpack<US>_hi_<mode>): Likewise.
+ (neon_vec_<US>mult_lo_<mode>): Likewise.
+ (neon_vec_<US>mult_hi_<mode>): Likewise.
+ (neon_vec_<US>shiftl_<mode>): Likewise.
+ (neon_unpack<US>_<mode>): Likewise.
+ (neon_vec_<US>mult_<mode>): Likewise.
+ (vec_pack_trunc_<mode>): Likewise.
+ (neon_vec_pack_trunc_<mode>): Likewise.
+ (neon_vabd<mode>_2): Likewise.
+ (neon_vabd<mode>_3): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203612
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64.md (movtf_aarch64): Update type attribute.
+ (load_pair): Update type attribute.
+ (store_pair): Update type attribute.
+ * config/aarch64/iterators.md (q): New.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203611
+ 2013-10-15 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/types.md: Add new types for Neon insns.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r203241
+ 2013-10-07 Renlin Li <Renlin.Li@arm.com>
+
+ * config/arm/arm-cores.def (cortex-a53): Use cortex tuning.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202560
+ 2013-09-13 Kyrylo Tkachov <kyrylo.tkachov@arm.com>
+
+ * config/arm/arm.md (arm_cmpsi_insn): Split rI alternative.
+ Set type attribute correctly. Set predicable_short_it attribute.
+ (cmpsi_shiftsi): Remove %? from output template.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202448
+ 2013-09-10 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64.md (generic_sched): New.
+ * config/aarch64/aarch64-generic.md (load): Make conditional
+ on generic_sched attribute.
+ (nonload): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202334
+ 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64.md
+ (*movtf_aarch64): Use neon_<ls>dm_2 as type where v8type
+ is fpsimd_<load/store>2.
+ (load_pair<mode>): Likewise.
+ (store_pair<mode>): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202333
+ 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/types.md (type): Add "mrs" type.
+ * config/aarch64/aarch64.md
+ (aarch64_load_tp_hard): Make type "mrs".
+ * config/arm/arm.md
+ (load_tp_hard): Make type "mrs".
+ * config/arm/cortex-a15.md: Update with new attributes.
+ * config/arm/cortex-a5.md: Update with new attributes.
+ * config/arm/cortex-a53.md: Update with new attributes.
+ * config/arm/cortex-a7.md: Update with new attributes.
+ * config/arm/cortex-a8.md: Update with new attributes.
+ * config/arm/cortex-a9.md: Update with new attributes.
+ * config/arm/cortex-m4.md: Update with new attributes.
+ * config/arm/cortex-r4.md: Update with new attributes.
+ * config/arm/fa526.md: Update with new attributes.
+ * config/arm/fa606te.md: Update with new attributes.
+ * config/arm/fa626te.md: Update with new attributes.
+ * config/arm/fa726te.md: Update with new attributes.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202332
+ 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64.md
+ (*movti_aarch64): Use "multiple" for type where v8type is "move2".
+ (*movtf_aarch64): Likewise.
+ * config/arm/arm.md
+ (thumb1_movdi_insn): Use "multiple" for type where more than one
+ instruction is used for a move.
+ (*arm32_movhf): Likewise.
+ (*thumb_movdf_insn): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202331
+ 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/types.md (type): Rename fcpys to fmov.
+ * config/arm/vfp.md
+ (*arm_movsi_vfp): Rename type fcpys as fmov.
+ (*thumb2_movsi_vfp): Likewise.
+ (*movhf_vfp_neon): Likewise.
+ (*movhf_vfp): Likewise.
+ (*movsf_vfp): Likewise.
+ (*thumb2_movsf_vfp): Likewise.
+ (*movsfcc_vfp): Likewise.
+ (*thumb2_movsfcc_vfp): Likewise.
+ * config/aarch64/aarch64-simd.md
+ (move_lo_quad_<mode>): Replace type mov_reg with fmovs.
+ * config/aarch64/aarch64.md
+ (*movsi_aarch64): Replace type mov_reg with fmovs.
+ (*movdi_aarch64): Likewise.
+ (*movsf_aarch64): Likewise.
+ (*movdf_aarch64): Likewise.
+ * config/arm/arm.c
+ (cortexa7_older_only): Rename TYPE_FCPYS to TYPE_FMOV.
+ * config/arm/iwmmxt.md
+ (*iwmmxt_movsi_insn): Rename type fcpys as fmov.
+ * config/arm/arm1020e.md: Update with new attributes.
+ * config/arm/cortex-a15-neon.md: Update with new attributes.
+ * config/arm/cortex-a5.md: Update with new attributes.
+ * config/arm/cortex-a53.md: Update with new attributes.
+ * config/arm/cortex-a7.md: Update with new attributes.
+ * config/arm/cortex-a8-neon.md: Update with new attributes.
+ * config/arm/cortex-a9.md: Update with new attributes.
+ * config/arm/cortex-m4-fpu.md: Update with new attributes.
+ * config/arm/cortex-r4f.md: Update with new attributes.
+ * config/arm/marvell-pj4.md: Update with new attributes.
+ * config/arm/vfp11.md: Update with new attributes.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202330
+ 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64.md
+ (*madd<mode>): Fix type attribute.
+ (*maddsi_uxtw): Likewise.
+ (*msub<mode>): Likewise.
+ (*msubsi_uxtw): Likewise.
+ (<su_optab>maddsidi4): Likewise.
+ (<su_optab>msubsidi4): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202329
+ 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/types.md: Split fdiv<sd> as fsqrt<sd>, fdiv<sd>.
+ * config/arm/arm.md (core_cycles): Remove fdiv.
+ * config/arm/vfp.md:
+ (*sqrtsf2_vfp): Update for attribute changes.
+ (*sqrtdf2_vfp): Likewise.
+ * config/aarch64/aarch64.md:
+ (sqrt<mode>2): Update for attribute changes.
+ * config/arm/arm1020e.md: Update with new attributes.
+ * config/arm/cortex-a15-neon.md: Update with new attributes.
+ * config/arm/cortex-a5.md: Update with new attributes.
+ * config/arm/cortex-a53.md: Update with new attributes.
+ * config/arm/cortex-a7.md: Update with new attributes.
+ * config/arm/cortex-a8-neon.md: Update with new attributes.
+ * config/arm/cortex-a9.md: Update with new attributes.
+ * config/arm/cortex-m4-fpu.md: Update with new attributes.
+ * config/arm/cortex-r4f.md: Update with new attributes.
+ * config/arm/marvell-pj4.md: Update with new attributes.
+ * config/arm/vfp11.md: Update with new attributes.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202328
+ 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/types.md
+ (type): Split f_cvt as f_cvt, f_cvtf2i, f_cvti2f.
+ * config/aarch64/aarch64.md
+ (l<fcvt_pattern><su_optab><GPF:mode><GPI:mode>2): Update with
+ new attributes.
+ (fix_trunc<GPF:mode><GPI:mode>2): Likewise.
+ (fixuns_trunc<GPF:mode><GPI:mode>2): Likewise.
+ (float<GPI:mode><GPF:mode>2): Likewise.
+ * config/arm/vfp.md
+ (*truncsisf2_vfp): Update with new attributes.
+ (*truncsidf2_vfp): Likewise.
+ (fixuns_truncsfsi2): Likewise.
+ (fixuns_truncdfsi2): Likewise.
+ (*floatsisf2_vfp): Likewise.
+ (*floatsidf2_vfp): Likewise.
+ (floatunssisf2): Likewise.
+ (floatunssidf2): Likewise.
+ (*combine_vcvt_f32_<FCVTI32typename>): Likewise.
+ (*combine_vcvt_f64_<FCVTI32typename>): Likewise.
+ * config/arm/arm1020e.md: Update with new attributes.
+ * config/arm/cortex-a15-neon.md: Update with new attributes.
+ * config/arm/cortex-a5.md: Update with new attributes.
+ * config/arm/cortex-a53.md: Update with new attributes.
+ * config/arm/cortex-a7.md: Update with new attributes.
+ * config/arm/cortex-a8-neon.md: Update with new attributes.
+ * config/arm/cortex-a9.md: Update with new attributes.
+ * config/arm/cortex-m4-fpu.md: Update with new attributes.
+ * config/arm/cortex-r4f.md: Update with new attributes.
+ * config/arm/marvell-pj4.md: Update with new attributes.
+ * config/arm/vfp11.md: Update with new attributes.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202323
+ 2013-09-06 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/arm/types.md: Add "no_insn", "multiple" and "untyped"
+ types.
+ * config/arm/arm-fixed.md: Add type attribute to all insn
+ patterns.
+ (add<mode>3): Add type attribute.
+ (add<mode>3): Likewise.
+ (usadd<mode>3): Likewise.
+ (ssadd<mode>3): Likewise.
+ (sub<mode>3): Likewise.
+ (sub<mode>3): Likewise.
+ (ussub<mode>3): Likewise.
+ (sssub<mode>3): Likewise.
+ (ssmulsa3): Likewise.
+ (usmulusa3): Likewise.
+ (arm_usatsihi): Likewise.
+ * config/arm/vfp.md
+ (*movdi_vfp): Add types for all instructions.
+ (*movdi_vfp_cortexa8): Likewise.
+ (*movhf_vfp_neon): Likewise.
+ (*movhf_vfp): Likewise.
+ (*movdf_vfp): Likewise.
+ (*thumb2_movdf_vfp): Likewise.
+ (*thumb2_movdfcc_vfp): Likewise.
+ * config/arm/arm.md: Add type attribute to all insn patterns.
+ (*thumb1_adddi3): Add type attribute.
+ (*arm_adddi3): Likewise.
+ (*adddi_sesidi_di): Likewise.
+ (*adddi_zesidi_di): Likewise.
+ (*thumb1_addsi3): Likewise.
+ (addsi3_compare0): Likewise.
+ (*addsi3_compare0_scratch): Likewise.
+ (*compare_negsi_si): Likewise.
+ (cmpsi2_addneg): Likewise.
+ (*addsi3_carryin_<optab>): Likewise.
+ (*addsi3_carryin_alt2_<optab>): Likewise.
+ (*addsi3_carryin_clobercc_<optab>): Likewise.
+ (*subsi3_carryin): Likewise.
+ (*subsi3_carryin_const): Likewise.
+ (*subsi3_carryin_compare): Likewise.
+ (*subsi3_carryin_compare_const): Likewise.
+ (*arm_subdi3): Likewise.
+ (*thumb_subdi3): Likewise.
+ (*subdi_di_zesidi): Likewise.
+ (*subdi_di_sesidi): Likewise.
+ (*subdi_zesidi_di): Likewise.
+ (*subdi_sesidi_di): Likewise.
+ (*subdi_zesidi_ze): Likewise.
+ (thumb1_subsi3_insn): Likewise.
+ (*arm_subsi3_insn): Likewise.
+ (*anddi3_insn): Likewise.
+ (*anddi_zesidi_di): Likewise.
+ (*anddi_sesdi_di): Likewise.
+ (*ne_zeroextracts): Likewise.
+ (*ne_zeroextracts): Likewise.
+ (*ite_ne_zeroextr): Likewise.
+ (*ite_ne_zeroextr): Likewise.
+ (*anddi_notdi_di): Likewise.
+ (*anddi_notzesidi): Likewise.
+ (*anddi_notsesidi): Likewise.
+ (andsi_notsi_si): Likewise.
+ (thumb1_bicsi3): Likewise.
+ (*iordi3_insn): Likewise.
+ (*iordi_zesidi_di): Likewise.
+ (*iordi_sesidi_di): Likewise.
+ (*thumb1_iorsi3_insn): Likewise.
+ (*xordi3_insn): Likewise.
+ (*xordi_zesidi_di): Likewise.
+ (*xordi_sesidi_di): Likewise.
+ (*arm_xorsi3): Likewise.
+ (*andsi_iorsi3_no): Likewise.
+ (*smax_0): Likewise.
+ (*smax_m1): Likewise.
+ (*arm_smax_insn): Likewise.
+ (*smin_0): Likewise.
+ (*arm_smin_insn): Likewise.
+ (*arm_umaxsi3): Likewise.
+ (*arm_uminsi3): Likewise.
+ (*minmax_arithsi): Likewise.
+ (*minmax_arithsi_): Likewise.
+ (*satsi_<SAT:code>): Likewise.
+ (arm_ashldi3_1bit): Likewise.
+ (arm_ashrdi3_1bit): Likewise.
+ (arm_lshrdi3_1bit): Likewise.
+ (*arm_negdi2): Likewise.
+ (*thumb1_negdi2): Likewise.
+ (*arm_negsi2): Likewise.
+ (*thumb1_negsi2): Likewise.
+ (*negdi_extendsid): Likewise.
+ (*negdi_zero_extend): Likewise.
+ (*arm_abssi2): Likewise.
+ (*thumb1_abssi2): Likewise.
+ (*arm_neg_abssi2): Likewise.
+ (*thumb1_neg_abss): Likewise.
+ (one_cmpldi2): Likewise.
+ (extend<mode>di2): Likewise.
+ (*compareqi_eq0): Likewise.
+ (*arm_extendhisi2addsi): Likewise.
+ (*arm_movdi): Likewise.
+ (*thumb1_movdi_insn): Likewise.
+ (*arm_movt): Likewise.
+ (*thumb1_movsi_insn): Likewise.
+ (pic_add_dot_plus_four): Likewise.
+ (pic_add_dot_plus_eight): Likewise.
+ (tls_load_dot_plus_eight): Likewise.
+ (*thumb1_movhi_insn): Likewise.
+ (*thumb1_movsf_insn): Likewise.
+ (*movdf_soft_insn): Likewise.
+ (*thumb_movdf_insn): Likewise.
+ (cbranchsi4_insn): Likewise.
+ (cbranchsi4_scratch): Likewise.
+ (*negated_cbranchsi4): Likewise.
+ (*tbit_cbranch): Likewise.
+ (*tlobits_cbranch): Likewise.
+ (*tstsi3_cbranch): Likewise.
+ (*cbranchne_decr1): Likewise.
+ (*addsi3_cbranch): Likewise.
+ (*addsi3_cbranch_scratch): Likewise.
+ (*arm_cmpdi_insn): Likewise.
+ (*arm_cmpdi_unsig): Likewise.
+ (*arm_cmpdi_zero): Likewise.
+ (*thumb_cmpdi_zero): Likewise.
+ (*deleted_compare): Likewise.
+ (*mov_scc): Likewise.
+ (*mov_negscc): Likewise.
+ (*mov_notscc): Likewise.
+ (*cstoresi_eq0_thumb1_insn): Likewise.
+ (cstoresi_nltu_thumb1): Likewise.
+ (cstoresi_ltu_thu): Likewise.
+ (thumb1_addsi3_addgeu): Likewise.
+ (*arm_jump): Likewise.
+ (*thumb_jump): Likewise.
+ (*check_arch2): Likewise.
+ (arm_casesi_internal): Likewise.
+ (thumb1_casesi_dispatch): Likewise.
+ (*arm_indirect_jump): Likewise.
+ (*thumb1_indirect_jump): Likewise.
+ (nop): Likewise.
+ (*and_scc): Likewise.
+ (*ior_scc): Likewise.
+ (*compare_scc): Likewise.
+ (*cond_move): Likewise.
+ (*cond_arith): Likewise.
+ (*cond_sub): Likewise.
+ (*cmp_ite0): Likewise.
+ (*cmp_ite1): Likewise.
+ (*cmp_and): Likewise.
+ (*cmp_ior): Likewise.
+ (*ior_scc_scc): Likewise.
+ (*ior_scc_scc_cmp): Likewise.
+ (*and_scc_scc): Likewise.
+ (*and_scc_scc_cmp): Likewise.
+ (*and_scc_scc_nod): Likewise.
+ (*negscc): Likewise.
+ (movcond_addsi): Likewise.
+ (movcond): Likewise.
+ (*ifcompare_plus_move): Likewise.
+ (*if_plus_move): Likewise.
+ (*ifcompare_move_plus): Likewise.
+ (*if_move_plus): Likewise.
+ (*ifcompare_arith_arith): Likewise.
+ (*if_arith_arith): Likewise.
+ (*ifcompare_arith_move): Likewise.
+ (*if_arith_move): Likewise.
+ (*ifcompare_move_arith): Likewise.
+ (*if_move_arith): Likewise.
+ (*ifcompare_move_not): Likewise.
+ (*if_move_not): Likewise.
+ (*ifcompare_not_move): Likewise.
+ (*if_not_move): Likewise.
+ (*ifcompare_shift_move): Likewise.
+ (*if_shift_move): Likewise.
+ (*ifcompare_move_shift): Likewise.
+ (*if_move_shift): Likewise.
+ (*ifcompare_shift_shift): Likewise.
+ (*ifcompare_not_arith): Likewise.
+ (*ifcompare_arith_not): Likewise.
+ (*if_arith_not): Likewise.
+ (*ifcompare_neg_move): Likewise.
+ (*if_neg_move): Likewise.
+ (*ifcompare_move_neg): Likewise.
+ (*if_move_neg): Likewise.
+ (prologue_thumb1_interwork): Likewise.
+ (*cond_move_not): Likewise.
+ (*sign_extract_onebit): Likewise.
+ (*not_signextract_onebit): Likewise.
+ (stack_tie): Likewise.
+ (align_4): Likewise.
+ (align_8): Likewise.
+ (consttable_end): Likewise.
+ (consttable_1): Likewise.
+ (consttable_2): Likewise.
+ (consttable_4): Likewise.
+ (consttable_8): Likewise.
+ (consttable_16): Likewise.
+ (*thumb1_tablejump): Likewise.
+ (prefetch): Likewise.
+ (force_register_use): Likewise.
+ (thumb_eh_return): Likewise.
+ (load_tp_hard): Likewise.
+ (load_tp_soft): Likewise.
+ (tlscall): Likewise.
+ (*arm_movtas_ze): Likewise.
+ (*arm_rev): Likewise.
+ (*arm_revsh): Likewise.
+ (*arm_rev16): Likewise.
+ * config/arm/thumb2.md
+ (*thumb2_smaxsi3): Likewise.
+ (*thumb2_sminsi3): Likewise.
+ (*thumb32_umaxsi3): Likewise.
+ (*thumb2_uminsi3): Likewise.
+ (*thumb2_negdi2): Likewise.
+ (*thumb2_abssi2): Likewise.
+ (*thumb2_neg_abss): Likewise.
+ (*thumb2_movsi_insn): Likewise.
+ (tls_load_dot_plus_four): Likewise.
+ (*thumb2_movhi_insn): Likewise.
+ (*thumb2_mov_scc): Likewise.
+ (*thumb2_mov_negs): Likewise.
+ (*thumb2_mov_negs): Likewise.
+ (*thumb2_mov_nots): Likewise.
+ (*thumb2_mov_nots): Likewise.
+ (*thumb2_movsicc_): Likewise.
+ (*thumb2_movsfcc_soft_insn): Likewise.
+ (*thumb2_indirect_jump): Likewise.
+ (*thumb2_and_scc): Likewise.
+ (*thumb2_ior_scc): Likewise.
+ (*thumb2_ior_scc_strict_it): Likewise.
+ (*thumb2_cond_move): Likewise.
+ (*thumb2_cond_arith): Likewise.
+ (*thumb2_cond_ari): Likewise.
+ (*thumb2_cond_sub): Likewise.
+ (*thumb2_negscc): Likewise.
+ (*thumb2_movcond): Likewise.
+ (thumb2_casesi_internal): Likewise.
+ (thumb2_casesi_internal_pic): Likewise.
+ (*thumb2_alusi3_short): Likewise.
+ (*thumb2_mov<mode>_shortim): Likewise.
+ (*thumb2_addsi_short): Likewise.
+ (*thumb2_subsi_short): Likewise.
+ (thumb2_addsi3_compare0): Likewise.
+ (*thumb2_cbz): Likewise.
+ (*thumb2_cbnz): Likewise.
+ (*thumb2_one_cmplsi2_short): Likewise.
+ (*thumb2_negsi2_short): Likewise.
+ (*orsi_notsi_si): Likewise.
+ * config/arm/arm1020e.md: Update with new attributes.
+ * config/arm/arm1026ejs.md: Update with new attributes.
+ * config/arm/arm1136jfs.md: Update with new attributes.
+ * config/arm/arm926ejs.md: Update with new attributes.
+ * config/arm/cortex-a15.md: Update with new attributes.
+ * config/arm/cortex-a5.md: Update with new attributes.
+ * config/arm/cortex-a53.md: Update with new attributes.
+ * config/arm/cortex-a7.md: Update with new attributes.
+ * config/arm/cortex-a8.md: Update with new attributes.
+ * config/arm/cortex-a9.md: Update with new attributes.
+ * config/arm/cortex-m4.md: Update with new attributes.
+ * config/arm/cortex-r4.md: Update with new attributes.
+ * config/arm/fa526.md: Update with new attributes.
+ * config/arm/fa606te.md: Update with new attributes.
+ * config/arm/fa626te.md: Update with new attributes.
+ * config/arm/fa726te.md: Update with new attributes.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202292
+ 2013-09-05 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * config/aarch64/aarch64.md
+ (type): Remove frecpe, frecps, frecpx.
+ (aarch64_frecp<FRECP:frecp_suffix><mode>): Move to aarch64-simd.md,
+ fix to be a TARGET_SIMD instruction.
+ (aarch64_frecps): Remove.
+ * config/aarch64/aarch64-simd.md
+ (aarch64_frecp<FRECP:frecp_suffix><mode>): New, moved from aarch64.md.
+ (aarch64_frecps<mode>): Handle all float/vector of float modes.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202291
+ 2013-09-05 James Greenhalgh <james.greenhalgh@arm.com>
+ Sofiane Naci <sofiane.naci@arm.com>
+
+ * config/arm/types.md (define_attr "type"):
+ Expand "arlo_imm"
+ into "adr", "alu_imm", "alus_imm", "logic_imm", "logics_imm".
+ Expand "arlo_reg"
+ into "adc_reg", "adc_imm", "adcs_reg", "adcs_imm", "alu_ext",
+ "alu_reg", "alus_ext", "alus_reg", "bfm", "csel", "logic_reg",
+ "logics_reg", "rev".
+ Expand "arlo_shift"
+ into "alu_shift_imm", "alus_shift_imm", "logic_shift_imm",
+ "logics_shift_imm".
+ Expand "arlo_shift_reg"
+ into "alu_shift_reg", "alus_shift_reg", "logic_shift_reg",
+ "logics_shift_reg".
+ Expand "clz" into "clz", "rbit".
+ Rename "shift" to "shift_imm".
+ * config/arm/arm.md (define_attr "core_cycles"): Update for attribute
+ changes.
+ Update for attribute changes all occurrences of arlo_* and
+ shift* types.
+ * config/arm/arm-fixed.md: Update for attribute changes
+ all occurrences of arlo_* types.
+ * config/arm/thumb2.md: Update for attribute changes all occurrences
+ of arlo_* types.
+ * config/arm/arm.c (xscale_sched_adjust_cost): Update for
+ attribute changes.
+ (cortexa7_older_only): Likewise.
+ (cortexa7_younger): Likewise.
+ * config/arm/arm1020e.md (1020alu_op): Update for attribute changes.
+ (1020alu_shift_op): Likewise.
+ (1020alu_shift_reg_op): Likewise.
+ * config/arm/arm1026ejs.md (alu_op): Update for attribute changes.
+ (alu_shift_op): Likewise.
+ (alu_shift_reg_op): Likewise.
+ * config/arm/arm1136jfs.md (11_alu_op): Update for
+ attribute changes.
+ (11_alu_shift_op): Likewise.
+ (11_alu_shift_reg_op): Likewise.
+ * config/arm/arm926ejs.md (9_alu_op): Update for attribute changes.
+ (9_alu_shift_reg_op): Likewise.
+ * config/arm/cortex-a15.md (cortex_a15_alu): Update for
+ attribute changes.
+ (cortex_a15_alu_shift): Likewise.
+ (cortex_a15_alu_shift_reg): Likewise.
+ * config/arm/cortex-a5.md (cortex_a5_alu): Update for
+ attribute changes.
+ (cortex_a5_alu_shift): Likewise.
+ * config/arm/cortex-a53.md
+ (cortex_a53_alu): Update for attribute changes.
+ (cortex_a53_alu_shift): Likewise.
+ * config/arm/cortex-a7.md
+ (cortex_a7_alu_imm): Update for attribute changes.
+ (cortex_a7_alu_reg): Likewise.
+ (cortex_a7_alu_shift): Likewise.
+ * config/arm/cortex-a8.md
+ (cortex_a8_alu): Update for attribute changes.
+ (cortex_a8_alu_shift): Likewise.
+ (cortex_a8_alu_shift_reg): Likewise.
+ * config/arm/cortex-a9.md
+ (cortex_a9_dp): Update for attribute changes.
+ (cortex_a9_dp_shift): Likewise.
+ * config/arm/cortex-m4.md
+ (cortex_m4_alu): Update for attribute changes.
+ * config/arm/cortex-r4.md
+ (cortex_r4_alu): Update for attribute changes.
+ (cortex_r4_mov): Likewise.
+ (cortex_r4_alu_shift_reg): Likewise.
+ * config/arm/fa526.md
+ (526_alu_op): Update for attribute changes.
+ (526_alu_shift_op): Likewise.
+ * config/arm/fa606te.md
+ (606te_alu_op): Update for attribute changes.
+ * config/arm/fa626te.md
+ (626te_alu_op): Update for attribute changes.
+ (626te_alu_shift_op): Likewise.
+ * config/arm/fa726te.md
+ (726te_alu_op): Update for attribute changes.
+ (726te_alu_shift_op): Likewise.
+ (726te_alu_shift_reg_op): Likewise.
+ * config/arm/fmp626.md (mp626_alu_op): Update for attribute changes.
+ (mp626_alu_shift_op): Likewise.
+ * config/arm/marvell-pj4.md (pj4_alu): Update for attribute changes.
+ (pj4_alu_conds): Likewise.
+ (pj4_shift): Likewise.
+ (pj4_shift_conds): Likewise.
+ (pj4_alu_shift): Likewise.
+ (pj4_alu_shift_conds): Likewise.
+ * config/aarch64/aarch64.md: Update for attribute change
+ all occurrences of arlo_* and shift* types.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r202272
+ 2013-08-02 James Greenhalgh <james.greenhalgh@arm.com>
+ Sofiane Naci <sofiane.naci@arm.com>
+
+ * config/aarch64/aarch64.md
+ (*movti_aarch64): Rename r_2_f and f_2_r.
+ (*movsf_aarch64): Likewise.
+ (*movdf_aarch64): Likewise.
+ (*movtf_aarch64): Likewise.
+ (aarch64_movdi_<mode>low): Likewise.
+ (aarch64_movdi_<mode>high): Likewise.
+ (aarch64_mov<mode>high_di): Likewise.
+ (aarch64_mov<mode>low_di): Likewise.
+ (aarch64_movtilow_tilow): Likewise.
+ * config/arm/arm.md (attribute "neon_type"): Delete. Move attribute
+ values to config/arm/types.md.
+ (attribute "conds"): Update for attribute change.
+ (anddi3_insn): Likewise.
+ (iordi3_insn): Likewise.
+ (xordi3_insn): Likewise.
+ (one_cmpldi2): Likewise.
+ * config/arm/types.md (type): Add Neon types.
+ * config/arm/neon.md (neon_mov<mode>): Remove "neon_type" attribute,
+ use "type" attribute.
+ (movmisalign<mode>_neon_store): Likewise.
+ (movmisalign<mode>_neon_load): Likewise.
+ (vec_set<mode>_internal): Likewise.
+ (vec_setv2di_internal): Likewise.
+ (vec_extract<mode>): Likewise.
+ (vec_extractv2di): Likewise.
+ (add<mode>3_neon): Likewise.
+ (adddi3_neon): Likewise.
+ (sub<mode>3_neon): Likewise.
+ (subdi3_neon): Likewise.
+ (mul<mode>3_neon): Likewise.
+ (mul<mode>3add<mode>_neon): Likewise.
+ (mul<mode>3neg<mode>add<mode>_neon): Likewise.
+ (fma<VCVTF:mode>4): Likewise.
+ (fma<VCVTF:mode>4_intrinsic): Likewise.
+ (fmsub<VCVTF:mode>4): Likewise.
+ (fmsub<VCVTF:mode>4_intrinsic): Likewise.
+ (neon_vrint<NEON_VRINT:nvrint_variant><VCVTF:mode>): Likewise.
+ (ior<mode>3): Likewise.
+ (and<mode>3): Likewise.
+ (anddi3_neon): Likewise.
+ (orn<mode>3_neon): Likewise.
+ (orndi3_neon): Likewise.
+ (bic<mode>3_neon): Likewise.
+ (bicdi3_neon): Likewise.
+ (xor<mode>3): Likewise.
+ (one_cmpl<mode>2): Likewise.
+ (abs<mode>2): Likewise.
+ (neg<mode>2): Likewise.
+ (umin<mode>3_neon): Likewise.
+ (umax<mode>3_neon): Likewise.
+ (smin<mode>3_neon): Likewise.
+ (smax<mode>3_neon): Likewise.
+ (vashl<mode>3): Likewise.
+ (vashr<mode>3_imm): Likewise.
+ (vlshr<mode>3_imm): Likewise.
+ (ashl<mode>3_signed): Likewise.
+ (ashl<mode>3_unsigned): Likewise.
+ (neon_load_count): Likewise.
+ (ashldi3_neon_noclobber): Likewise.
+ (signed_shift_di3_neon): Likewise.
+ (unsigned_shift_di3_neon): Likewise.
+ (ashrdi3_neon_imm_noclobber): Likewise.
+ (lshrdi3_neon_imm_noclobber): Likewise.
+ (widen_ssum<mode>3): Likewise.
+ (widen_usum<mode>3): Likewise.
+ (quad_halves_<code>v4si): Likewise.
+ (quad_halves_<code>v4sf): Likewise.
+ (quad_halves_<code>v8hi): Likewise.
+ (quad_halves_<code>v16qi): Likewise.
+ (reduc_splus_v2di): Likewise.
+ (neon_vpadd_internal<mode>): Likewise.
+ (neon_vpsmin<mode>): Likewise.
+ (neon_vpsmax<mode>): Likewise.
+ (neon_vpumin<mode>): Likewise.
+ (neon_vpumax<mode>): Likewise.
+ (ss_add<mode>_neon): Likewise.
+ (us_add<mode>_neon): Likewise.
+ (ss_sub<mode>_neon): Likewise.
+ (us_sub<mode>_neon): Likewise.
+ (neon_vadd<mode>_unspec): Likewise.
+ (neon_vaddl<mode>): Likewise.
+ (neon_vaddw<mode>): Likewise.
+ (neon_vhadd<mode>): Likewise.
+ (neon_vqadd<mode>): Likewise.
+ (neon_vaddhn<mode>): Likewise.
+ (neon_vmul<mode>): Likewise.
+ (neon_vmla<mode>): Likewise.
+ (neon_vmlal<mode>): Likewise.
+ (neon_vmls<mode>): Likewise.
+ (neon_vmlsl<mode>): Likewise.
+ (neon_vqdmulh<mode>): Likewise.
+ (neon_vqdmlal<mode>): Likewise.
+ (neon_vqdmlsl<mode>): Likewise.
+ (neon_vmull<mode>): Likewise.
+ (neon_vqdmull<mode>): Likewise.
+ (neon_vsub<mode>_unspec): Likewise.
+ (neon_vsubl<mode>): Likewise.
+ (neon_vsubw<mode>): Likewise.
+ (neon_vqsub<mode>): Likewise.
+ (neon_vhsub<mode>): Likewise.
+ (neon_vsubhn<mode>): Likewise.
+ (neon_vceq<mode>): Likewise.
+ (neon_vcge<mode>): Likewise.
+ (neon_vcgeu<mode>): Likewise.
+ (neon_vcgt<mode>): Likewise.
+ (neon_vcgtu<mode>): Likewise.
+ (neon_vcle<mode>): Likewise.
+ (neon_vclt<mode>): Likewise.
+ (neon_vcage<mode>): Likewise.
+ (neon_vcagt<mode>): Likewise.
+ (neon_vtst<mode>): Likewise.
+ (neon_vabd<mode>): Likewise.
+ (neon_vabdl<mode>): Likewise.
+ (neon_vaba<mode>): Likewise.
+ (neon_vabal<mode>): Likewise.
+ (neon_vmax<mode>): Likewise.
+ (neon_vmin<mode>): Likewise.
+ (neon_vpaddl<mode>): Likewise.
+ (neon_vpadal<mode>): Likewise.
+ (neon_vpmax<mode>): Likewise.
+ (neon_vpmin<mode>): Likewise.
+ (neon_vrecps<mode>): Likewise.
+ (neon_vrsqrts<mode>): Likewise.
+ (neon_vqabs<mode>): Likewise.
+ (neon_vqneg<mode>): Likewise.
+ (neon_vcls<mode>): Likewise.
+ (clz<mode>2): Likewise.
+ (popcount<mode>2): Likewise.
+ (neon_vrecpe): Likewise.
+ (neon_vrsqrte): Likewise.
+ (neon_vget_lane<mode>_sext_internal): Likewise.
+ (neon_vget_lane<mode>_zext_internal): Likewise.
+ (neon_vdup_n<mode>): Likewise.
+ (neon_vdup_nv2di): Likewise.
+ (neon_vdup_lane<mode>_internal): Likewise.
+ (neon_vswp<mode>): Likewise.
+ (float<mode><V_cvtto>2): Likewise.
+ (floatuns<mode><V_cvtto>2): Likewise.
+ (fix_trunc<mode><V_cvtto>2): Likewise.
+ (fixuns_trunc<mode><V_cvtto>2): Likewise.
+ (neon_vcvt<mode>): Likewise.
+ (neon_vcvtv4sfv4hf): Likewise.
+ (neon_vcvtv4hfv4sf): Likewise.
+ (neon_vcvt_n<mode>): Likewise.
+ (neon_vmovn<mode>): Likewise.
+ (neon_vqmovn<mode>): Likewise.
+ (neon_vqmovun<mode>): Likewise.
+ (neon_vmovl<mode>): Likewise.
+ (neon_vmul_lane<mode>): Likewise.
+ (neon_vmull_lane<mode>): Likewise.
+ (neon_vqdmull_lane<mode>): Likewise.
+ (neon_vqdmulh_lane<mode>): Likewise.
+ (neon_vmla_lane<mode>): Likewise.
+ (neon_vmlal_lane<mode>): Likewise.
+ (neon_vqdmlal_lane<mode>): Likewise.
+ (neon_vmls_lane<mode>): Likewise.
+ (neon_vmlsl_lane<mode>): Likewise.
+ (neon_vqdmlsl_lane<mode>): Likewise.
+ (neon_vext<mode>): Likewise.
+ (neon_vrev64<mode>): Likewise.
+ (neon_vrev32<mode>): Likewise.
+ (neon_vrev16<mode>): Likewise.
+ (neon_vbsl<mode>_internal): Likewise.
+ (neon_vshl<mode>): Likewise.
+ (neon_vqshl<mode>): Likewise.
+ (neon_vshr_n<mode>): Likewise.
+ (neon_vshrn_n<mode>): Likewise.
+ (neon_vqshrn_n<mode>): Likewise.
+ (neon_vqshrun_n<mode>): Likewise.
+ (neon_vshl_n<mode>): Likewise.
+ (neon_vqshl_n<mode>): Likewise.
+ (neon_vqshlu_n<mode>): Likewise.
+ (neon_vshll_n<mode>): Likewise.
+ (neon_vsra_n<mode>): Likewise.
+ (neon_vsri_n<mode>): Likewise.
+ (neon_vsli_n<mode>): Likewise.
+ (neon_vtbl1v8qi): Likewise.
+ (neon_vtbl2v8qi): Likewise.
+ (neon_vtbl3v8qi): Likewise.
+ (neon_vtbl4v8qi): Likewise.
+ (neon_vtbx1v8qi): Likewise.
+ (neon_vtbx2v8qi): Likewise.
+ (neon_vtbx3v8qi): Likewise.
+ (neon_vtbx4v8qi): Likewise.
+ (neon_vtrn<mode>_internal): Likewise.
+ (neon_vzip<mode>_internal): Likewise.
+ (neon_vuzp<mode>_internal): Likewise.
+ (neon_vld1<mode>): Likewise.
+ (neon_vld1_lane<mode>): Likewise.
+ (neon_vld1_dup<mode>): Likewise.
+ (neon_vld1_dupv2di): Likewise.
+ (neon_vst1<mode>): Likewise.
+ (neon_vst1_lane<mode>): Likewise.
+ (neon_vld2<mode>): Likewise.
+ (neon_vld2_lane<mode>): Likewise.
+ (neon_vld2_dup<mode>): Likewise.
+ (neon_vst2<mode>): Likewise.
+ (neon_vst2_lane<mode>): Likewise.
+ (neon_vld3<mode>): Likewise.
+ (neon_vld3qa<mode>): Likewise.
+ (neon_vld3qb<mode>): Likewise.
+ (neon_vld3_lane<mode>): Likewise.
+ (neon_vld3_dup<mode>): Likewise.
+ (neon_vst3<mode>): Likewise.
+ (neon_vst3qa<mode>): Likewise.
+ (neon_vst3qb<mode>): Likewise.
+ (neon_vst3_lane<mode>): Likewise.
+ (neon_vld4<mode>): Likewise.
+ (neon_vld4qa<mode>): Likewise.
+ (neon_vld4qb<mode>): Likewise.
+ (neon_vld4_lane<mode>): Likewise.
+ (neon_vld4_dup<mode>): Likewise.
+ (neon_vst4<mode>): Likewise.
+ (neon_vst4qa<mode>): Likewise.
+ (neon_vst4qb<mode>): Likewise.
+ (neon_vst4_lane<mode>): Likewise.
+ (neon_vec_unpack<US>_lo_<mode>): Likewise.
+ (neon_vec_unpack<US>_hi_<mode>): Likewise.
+ (neon_vec_<US>mult_lo_<mode>): Likewise.
+ (neon_vec_<US>mult_hi_<mode>): Likewise.
+ (neon_vec_<US>shiftl_<mode>): Likewise.
+ (neon_unpack<US>_<mode>): Likewise.
+ (neon_vec_<US>mult_<mode>): Likewise.
+ (vec_pack_trunc_<mode>): Likewise.
+ (neon_vec_pack_trunc_<mode>): Likewise.
+ (neon_vabd<mode>_2): Likewise.
+ (neon_vabd<mode>_3): Likewise.
+ * config/arm/vfp.md (arm_movsi_vfp): Update for attribute changes.
+ (thumb2_movsi_vfp): Likewise.
+ (movdi_vfp): Likewise.
+ (movdi_vfp_cortexa8): Likewise.
+ (movhf_vfp_neon): Likewise.
+ (movhf_vfp): Likewise.
+ (movsf_vfp): Likewise.
+ (thumb2_movsf_vfp): Likewise.
+ (movdf_vfp): Likewise.
+ (thumb2_movdf_vfp): Likewise.
+ (movsfcc_vfp): Likewise.
+ (thumb2_movsfcc_vfp): Likewise.
+ (movdfcc_vfp): Likewise.
+ (thumb2_movdfcc_vfp): Likewise.
+ * config/arm/arm.c (cortexa7_older_only): Update for attribute change.
+ * config/arm/arm1020e.md (v10_c2v): Update for attribute change.
+ (v10_v2c): Likewise.
+ * config/arm/cortex-a15-neon.md (cortex_a15_neon_int_1): Update for
+ attribute change.
+ (cortex_a15_neon_int_2): Likewise.
+ (cortex_a15_neon_int_3): Likewise.
+ (cortex_a15_neon_int_4): Likewise.
+ (cortex_a15_neon_int_5): Likewise.
+ (cortex_a15_neon_vqneg_vqabs): Likewise.
+ (cortex_a15_neon_vmov): Likewise.
+ (cortex_a15_neon_vaba): Likewise.
+ (cortex_a15_neon_vaba_qqq): Likewise.
+ (cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a15_neon_mul_qqq_8_16_32_ddd_32): Likewise.
+ (cortex_a15_neon_mul_qdd_64_32_long_qqd_16_ddd_32_\
+ scalar_64_32_long_scalar): Likewise.
+ (cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a15_neon_mla_qqq_8_16): Likewise.
+ (cortex_a15_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_\
+ lotype_qdd_64_32_long): Likewise.
+ (cortex_a15_neon_mla_qqq_32_qqd_32_scalar): Likewise.
+ (cortex_a15_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
+ (cortex_a15_neon_mul_qqd_32_scalar): Likewise.
+ (cortex_a15_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
+ (cortex_a15_neon_shift_1): Likewise.
+ (cortex_a15_neon_shift_2): Likewise.
+ (cortex_a15_neon_shift_3): Likewise.
+ (cortex_a15_neon_vshl_ddd): Likewise.
+ (cortex_a15_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
+ (cortex_a15_neon_vsra_vrsra): Likewise.
+ (cortex_a15_neon_fp_vadd_ddd_vabs_dd): Likewise.
+ (cortex_a15_neon_fp_vadd_qqq_vabs_qq): Likewise.
+ (cortex_a15_neon_fp_vmul_ddd): Likewise.
+ (cortex_a15_neon_fp_vmul_qqd): Likewise.
+ (cortex_a15_neon_fp_vmla_ddd): Likewise.
+ (cortex_a15_neon_fp_vmla_qqq): Likewise.
+ (cortex_a15_neon_fp_vmla_ddd_scalar): Likewise.
+ (cortex_a15_neon_fp_vmla_qqq_scalar): Likewise.
+ (cortex_a15_neon_fp_vrecps_vrsqrts_ddd): Likewise.
+ (cortex_a15_neon_fp_vrecps_vrsqrts_qqq): Likewise.
+ (cortex_a15_neon_bp_simple): Likewise.
+ (cortex_a15_neon_bp_2cycle): Likewise.
+ (cortex_a15_neon_bp_3cycle): Likewise.
+ (cortex_a15_neon_vld1_1_2_regs): Likewise.
+ (cortex_a15_neon_vld1_3_4_regs): Likewise.
+ (cortex_a15_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
+ (cortex_a15_neon_vld2_4_regs): Likewise.
+ (cortex_a15_neon_vld3_vld4): Likewise.
+ (cortex_a15_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
+ (cortex_a15_neon_vst1_3_4_regs): Likewise.
+ (cortex_a15_neon_vst2_4_regs_vst3_vst4): Likewise.
+ (cortex_a15_neon_vst3_vst4): Likewise.
+ (cortex_a15_neon_vld1_vld2_lane): Likewise.
+ (cortex_a15_neon_vld3_vld4_lane): Likewise.
+ (cortex_a15_neon_vst1_vst2_lane): Likewise.
+ (cortex_a15_neon_vst3_vst4_lane): Likewise.
+ (cortex_a15_neon_vld3_vld4_all_lanes): Likewise.
+ (cortex_a15_neon_ldm_2): Likewise.
+ (cortex_a15_neon_stm_2): Likewise.
+ (cortex_a15_neon_mcr): Likewise.
+ (cortex_a15_neon_mcr_2_mcrr): Likewise.
+ (cortex_a15_neon_mrc): Likewise.
+ (cortex_a15_neon_mrrc): Likewise.
+ * config/arm/cortex-a15.md (cortex_a15_alu): Update for attribute
+ change.
+ (cortex_a15_alu_shift): Likewise.
+ (cortex_a15_alu_shift_reg): Likewise.
+ (cortex_a15_mult32): Likewise.
+ (cortex_a15_mult64): Likewise.
+ (cortex_a15_block): Likewise.
+ (cortex_a15_branch): Likewise.
+ (cortex_a15_load1): Likewise.
+ (cortex_a15_load3): Likewise.
+ (cortex_a15_store1): Likewise.
+ (cortex_a15_store3): Likewise.
+ (cortex_a15_call): Likewise.
+ * config/arm/cortex-a5.md (cortex_a5_r2f): Update for attribute
+ change.
+ (cortex_a5_f2r): Likewise.
+ * config/arm/cortex-a53.md (cortex_a53_r2f): Update for attribute
+ change.
+ (cortex_a53_f2r): Likewise.
+ * config/arm/cortex-a7.md
+ (cortex_a7_branch): Update for attribute change.
+ (cortex_a7_call): Likewise.
+ (cortex_a7_alu_imm): Likewise.
+ (cortex_a7_alu_reg): Likewise.
+ (cortex_a7_alu_shift): Likewise.
+ (cortex_a7_mul): Likewise.
+ (cortex_a7_load1): Likewise.
+ (cortex_a7_store1): Likewise.
+ (cortex_a7_load2): Likewise.
+ (cortex_a7_store2): Likewise.
+ (cortex_a7_load3): Likewise.
+ (cortex_a7_store3): Likewise.
+ (cortex_a7_load4): Likewise.
+ (cortex_a7_store4): Likewise.
+ (cortex_a7_fpalu): Likewise.
+ (cortex_a7_fconst): Likewise.
+ (cortex_a7_fpmuls): Likewise.
+ (cortex_a7_neon_mul): Likewise.
+ (cortex_a7_fpmacs): Likewise.
+ (cortex_a7_neon_mla): Likewise.
+ (cortex_a7_fpmuld): Likewise.
+ (cortex_a7_fpmacd): Likewise.
+ (cortex_a7_fpfmad): Likewise.
+ (cortex_a7_fdivs): Likewise.
+ (cortex_a7_fdivd): Likewise.
+ (cortex_a7_r2f): Likewise.
+ (cortex_a7_f2r): Likewise.
+ (cortex_a7_f_flags): Likewise.
+ (cortex_a7_f_loads): Likewise.
+ (cortex_a7_f_loadd): Likewise.
+ (cortex_a7_f_stores): Likewise.
+ (cortex_a7_f_stored): Likewise.
+ (cortex_a7_neon): Likewise.
+ * config/arm/cortex-a8-neon.md
+ (cortex_a8_neon_mrc): Update for attribute change.
+ (cortex_a8_neon_mrrc): Likewise.
+ (cortex_a8_neon_int_1): Likewise.
+ (cortex_a8_neon_int_2): Likewise.
+ (cortex_a8_neon_int_3): Likewise.
+ (cortex_a8_neon_int_4): Likewise.
+ (cortex_a8_neon_int_5): Likewise.
+ (cortex_a8_neon_vqneg_vqabs): Likewise.
+ (cortex_a8_neon_vmov): Likewise.
+ (cortex_a8_neon_vaba): Likewise.
+ (cortex_a8_neon_vaba_qqq): Likewise.
+ (cortex_a8_neon_vsma): Likewise.
+ (cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a8_neon_mul_qqq_8_16_32_ddd_32): Likewise.
+ (cortex_a8_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar):
+ Likewise.
+ (cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a8_neon_mla_qqq_8_16): Likewise.
+ (cortex_a8_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_\
+ long_scalar_qdd_64_32_long): Likewise.
+ (cortex_a8_neon_mla_qqq_32_qqd_32_scalar): Likewise.
+ (cortex_a8_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
+ (cortex_a8_neon_mul_qqd_32_scalar): Likewise.
+ (cortex_a8_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
+ (cortex_a8_neon_shift_1): Likewise.
+ (cortex_a8_neon_shift_2): Likewise.
+ (cortex_a8_neon_shift_3): Likewise.
+ (cortex_a8_neon_vshl_ddd): Likewise.
+ (cortex_a8_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
+ (cortex_a8_neon_vsra_vrsra): Likewise.
+ (cortex_a8_neon_fp_vadd_ddd_vabs_dd): Likewise.
+ (cortex_a8_neon_fp_vadd_qqq_vabs_qq): Likewise.
+ (cortex_a8_neon_fp_vsum): Likewise.
+ (cortex_a8_neon_fp_vmul_ddd): Likewise.
+ (cortex_a8_neon_fp_vmul_qqd): Likewise.
+ (cortex_a8_neon_fp_vmla_ddd): Likewise.
+ (cortex_a8_neon_fp_vmla_qqq): Likewise.
+ (cortex_a8_neon_fp_vmla_ddd_scalar): Likewise.
+ (cortex_a8_neon_fp_vmla_qqq_scalar): Likewise.
+ (cortex_a8_neon_fp_vrecps_vrsqrts_ddd): Likewise.
+ (cortex_a8_neon_fp_vrecps_vrsqrts_qqq): Likewise.
+ (cortex_a8_neon_bp_simple): Likewise.
+ (cortex_a8_neon_bp_2cycle): Likewise.
+ (cortex_a8_neon_bp_3cycle): Likewise.
+ (cortex_a8_neon_ldr): Likewise.
+ (cortex_a8_neon_str): Likewise.
+ (cortex_a8_neon_vld1_1_2_regs): Likewise.
+ (cortex_a8_neon_vld1_3_4_regs): Likewise.
+ (cortex_a8_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
+ (cortex_a8_neon_vld2_4_regs): Likewise.
+ (cortex_a8_neon_vld3_vld4): Likewise.
+ (cortex_a8_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
+ (cortex_a8_neon_vst1_3_4_regs): Likewise.
+ (cortex_a8_neon_vst2_4_regs_vst3_vst4): Likewise.
+ (cortex_a8_neon_vst3_vst4): Likewise.
+ (cortex_a8_neon_vld1_vld2_lane): Likewise.
+ (cortex_a8_neon_vld3_vld4_lane): Likewise.
+ (cortex_a8_neon_vst1_vst2_lane): Likewise.
+ (cortex_a8_neon_vst3_vst4_lane): Likewise.
+ (cortex_a8_neon_vld3_vld4_all_lanes): Likewise.
+ (cortex_a8_neon_mcr): Likewise.
+ (cortex_a8_neon_mcr_2_mcrr): Likewise.
+ * config/arm/cortex-a8.md (cortex_a8_alu): Update for attribute
+ change.
+ * config/arm/cortex-a9-neon.md (ca9_neon_mrc): Update for attribute
+ change.
+ (ca9_neon_mrrc): Likewise.
+ (cortex_a9_neon_int_1): Likewise.
+ (cortex_a9_neon_int_2): Likewise.
+ (cortex_a9_neon_int_3): Likewise.
+ (cortex_a9_neon_int_4): Likewise.
+ (cortex_a9_neon_int_5): Likewise.
+ (cortex_a9_neon_vqneg_vqabs): Likewise.
+ (cortex_a9_neon_vmov): Likewise.
+ (cortex_a9_neon_vaba): Likewise.
+ (cortex_a9_neon_vaba_qqq): Likewise.
+ (cortex_a9_neon_vsma): Likewise.
+ (cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a9_neon_mul_qqq_8_16_32_ddd_32): Likewise.
+ (cortex_a9_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar):
+ Likewise.
+ (cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long): Likewise.
+ (cortex_a9_neon_mla_qqq_8_16): Likewise.
+ (cortex_a9_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_\
+ long_scalar_qdd_64_32_long): Likewise.
+ (cortex_a9_neon_mla_qqq_32_qqd_32_scalar): Likewise.
+ (cortex_a9_neon_mul_ddd_16_scalar_32_16_long_scalar): Likewise.
+ (cortex_a9_neon_mul_qqd_32_scalar): Likewise.
+ (cortex_a9_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar): Likewise.
+ (cortex_a9_neon_shift_1): Likewise.
+ (cortex_a9_neon_shift_2): Likewise.
+ (cortex_a9_neon_shift_3): Likewise.
+ (cortex_a9_neon_vshl_ddd): Likewise.
+ (cortex_a9_neon_vqshl_vrshl_vqrshl_qqq): Likewise.
+ (cortex_a9_neon_vsra_vrsra): Likewise.
+ (cortex_a9_neon_fp_vadd_ddd_vabs_dd): Likewise.
+ (cortex_a9_neon_fp_vadd_qqq_vabs_qq): Likewise.
+ (cortex_a9_neon_fp_vsum): Likewise.
+ (cortex_a9_neon_fp_vmul_ddd): Likewise.
+ (cortex_a9_neon_fp_vmul_qqd): Likewise.
+ (cortex_a9_neon_fp_vmla_ddd): Likewise.
+ (cortex_a9_neon_fp_vmla_qqq): Likewise.
+ (cortex_a9_neon_fp_vmla_ddd_scalar): Likewise.
+ (cortex_a9_neon_fp_vmla_qqq_scalar): Likewise.
+ (cortex_a9_neon_fp_vrecps_vrsqrts_ddd): Likewise.
+ (cortex_a9_neon_fp_vrecps_vrsqrts_qqq): Likewise.
+ (cortex_a9_neon_bp_simple): Likewise.
+ (cortex_a9_neon_bp_2cycle): Likewise.
+ (cortex_a9_neon_bp_3cycle): Likewise.
+ (cortex_a9_neon_ldr): Likewise.
+ (cortex_a9_neon_str): Likewise.
+ (cortex_a9_neon_vld1_1_2_regs): Likewise.
+ (cortex_a9_neon_vld1_3_4_regs): Likewise.
+ (cortex_a9_neon_vld2_2_regs_vld1_vld2_all_lanes): Likewise.
+ (cortex_a9_neon_vld2_4_regs): Likewise.
+ (cortex_a9_neon_vld3_vld4): Likewise.
+ (cortex_a9_neon_vst1_1_2_regs_vst2_2_regs): Likewise.
+ (cortex_a9_neon_vst1_3_4_regs): Likewise.
+ (cortex_a9_neon_vst2_4_regs_vst3_vst4): Likewise.
+ (cortex_a9_neon_vst3_vst4): Likewise.
+ (cortex_a9_neon_vld1_vld2_lane): Likewise.
+ (cortex_a9_neon_vld3_vld4_lane): Likewise.
+ (cortex_a9_neon_vst1_vst2_lane): Likewise.
+ (cortex_a9_neon_vst3_vst4_lane): Likewise.
+ (cortex_a9_neon_vld3_vld4_all_lanes): Likewise.
+ (cortex_a9_neon_mcr): Likewise.
+ (cortex_a9_neon_mcr_2_mcrr): Likewise.
+ * config/arm/cortex-a9.md (cortex_a9_dp): Update for attribute change.
+ (cortex_a9_fps): Likewise.
+ * config/arm/cortex-m4-fpu.md (cortex_m4_vmov_2): Update for attribute
+ change.
+ (cortex_m4_fmuls): Likewise.
+ * config/arm/cortex-r4f.md (cortex_r4_mcr): Update for attribute
+ change.
+ (cortex_r4_mrc): Likewise.
+ * config/arm/iterators.md: Update comment referring to neon_type.
+ * config/arm/iwmmxt.md
+ (iwmmxt_arm_movdi): Update for attribute change.
+ (iwmmxt_movsi_insn): Likewise.
+ * config/arm/marvell-pj4.md
+ (pj4_vfp_to_core): Update for attribute change.
+ (pj4_core_to_vfp): Likewise.
+ * config/arm/neon-schedgen.ml (emit_insn_reservations): Update for
+ attribute change.
+ * config/arm/vfp11.md (vfp_fload): Update for attribute change.
+ (vfp_fstore): Likewise.
+ * doc/md.texi: Change references to neon_type to refer to type.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r201436
+ 2013-08-02 Sofiane Naci <sofiane.naci@arm.com>
+
+ * config/arm/types.md (define_attr "type"): Add "load_acq" and "store_rel".
+ * config/arm/cortex-a53.md (cortex_a53_load1): Update for attribute
+ changes.
+ (cortex_a53_store1): Likewise.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r201400
+ 2013-08-01 Sofiane Naci <sofiane.naci@arm.com>
+
+ * config.gcc (aarch64*-*-*): Add aarch-common.o to extra_objs. Add
+ aarch-common-protos.h to extra_headers.
+ (aarch64*-*-*): Add arm/aarch-common-protos.h to tm_p_file.
+ * config/aarch64/aarch64.md: Include "../arm/cortex-a53.md".
+ * config/aarch64/t-aarch64 (aarch-common.o): Define.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r201399
+ 2013-08-01 Sofiane Naci <sofiane.naci@arm.com>
+
+ * config/aarch64/aarch64.md (define_attr "type"): Delete.
+ Include "../arm/types.md". Define "type" attribute for all patterns.
+ * config/aarch64/aarch64-simd.md (move_lo_quad_<mode>): Update for
+ attribute changes.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r201376
+ 2013-07-31 Sofiane Naci <sofiane.naci@arm.com>
+
+ * config.gcc (arm*-*-*): Add aarch-common.o to extra_objs. Add
+ aarch-common-protos.h to extra_headers.
+ (arm*-*-*): Add arm/aarch-common-protos.h to tm_p_file.
+ * config/arm/arm.c (arm_early_load_addr_dep): Move from here to ...
+ (arm_early_store_addr_dep): Likewise.
+ (arm_no_early_alu_shift_dep): Likewise.
+ (arm_no_early_alu_shift_value_dep): Likewise.
+ (arm_no_early_mul_dep): Likewise.
+ (arm_no_early_store_addr_dep): Likewise.
+ (arm_mac_accumulator_is_mul_result): Likewise.
+ (arm_mac_accumulator_is_result): Likewise.
+ * config/arm/aarch-common.c: ... here. New file.
+ * config/arm/arm-protos.h (arm_early_load_addr_dep): Move from here to ...
+ (arm_early_store_addr_dep): Likewise.
+ (arm_no_early_alu_shift_dep): Likewise.
+ (arm_no_early_alu_shift_value_dep): Likewise.
+ (arm_no_early_mul_dep): Likewise.
+ (arm_no_early_store_addr_dep): Likewise.
+ (arm_mac_accumulator_is_mul_result): Likewise.
+ (arm_mac_accumulator_is_result): Likewise.
+ * config/arm/aarch-common-protos.h: ... here. New file.
+ * config/arm/t-arm (aarch-common.o): Define.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
+ Backport from trunk r201375
+ 2013-07-31 Sofiane Naci <sofiane.naci@arm.com>
+
+ * config/arm/arm.md: Include new file "types.md".
+ (define_attr "type"): Move from here to ...
+ (define_attr "mul32"): Likewise.
+ (define_attr "mul64"): Likewise.
+ * config/arm/types.md: ... here. New file.
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
Backport from trunk r202663
2013-09-17 Cong Hou <congh@google.com>
diff --git a/gcc/config.gcc b/gcc/config.gcc
index c1012637b47..89637861652 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -312,7 +312,7 @@ aarch64*-*-*)
cpu_type=aarch64
need_64bit_hwint=yes
extra_headers="arm_neon.h"
- extra_objs="aarch64-builtins.o"
+ extra_objs="aarch64-builtins.o aarch-common.o"
target_has_targetm_common=yes
;;
alpha*-*-*)
@@ -326,6 +326,7 @@ am33_2.0-*-linux*)
arm*-*-*)
cpu_type=arm
extra_headers="mmintrin.h arm_neon.h arm_acle.h"
+ extra_objs="aarch-common.o"
target_type_format_char='%'
c_target_objs="arm-c.o"
cxx_target_objs="arm-c.o"
@@ -498,6 +499,9 @@ then
fi
case ${target} in
+aarch64*-*-*)
+ tm_p_file="${tm_p_file} arm/aarch-common-protos.h"
+ ;;
i[34567]86-*-*)
if test "x$with_abi" != x; then
echo "This target does not support --with-abi."
@@ -538,7 +542,11 @@ x86_64-*-*)
fi
tm_file="vxworks-dummy.h ${tm_file}"
;;
-arm*-*-* | mips*-*-* | sh*-*-* | sparc*-*-*)
+arm*-*-*)
+ tm_p_file="${tm_p_file} arm/aarch-common-protos.h"
+ tm_file="vxworks-dummy.h ${tm_file}"
+ ;;
+mips*-*-* | sh*-*-* | sparc*-*-*)
tm_file="vxworks-dummy.h ${tm_file}"
;;
esac
diff --git a/gcc/config/aarch64/aarch64-arches.def b/gcc/config/aarch64/aarch64-arches.def
index b66e33ec932..683c34c1ec4 100644
--- a/gcc/config/aarch64/aarch64-arches.def
+++ b/gcc/config/aarch64/aarch64-arches.def
@@ -26,4 +26,4 @@
this architecture. ARCH is the architecture revision. FLAGS are
the flags implied by the architecture. */
-AARCH64_ARCH("armv8-a", generic, 8, AARCH64_FL_FOR_ARCH8)
+AARCH64_ARCH("armv8-a", cortexa53, 8, AARCH64_FL_FOR_ARCH8)
diff --git a/gcc/config/aarch64/aarch64-cores.def b/gcc/config/aarch64/aarch64-cores.def
index c840aa016e8..51c1ff803b0 100644
--- a/gcc/config/aarch64/aarch64-cores.def
+++ b/gcc/config/aarch64/aarch64-cores.def
@@ -35,6 +35,4 @@
therefore serves as a template for adding more CPUs in the future. */
AARCH64_CORE("cortex-a53", cortexa53, 8, AARCH64_FL_FPSIMD, generic)
-AARCH64_CORE("cortex-a57", cortexa57, 8, AARCH64_FL_FPSIMD, generic)
-AARCH64_CORE("example-1", large, 8, AARCH64_FL_FPSIMD, generic)
-AARCH64_CORE("example-2", small, 8, AARCH64_FL_FPSIMD, generic)
+AARCH64_CORE("cortex-a57", cortexa15, 8, AARCH64_FL_FPSIMD, generic)
diff --git a/gcc/config/aarch64/aarch64-generic.md b/gcc/config/aarch64/aarch64-generic.md
deleted file mode 100644
index cbb75600389..00000000000
--- a/gcc/config/aarch64/aarch64-generic.md
+++ /dev/null
@@ -1,38 +0,0 @@
-;; Machine description for AArch64 architecture.
-;; Copyright (C) 2009-2013 Free Software Foundation, Inc.
-;; Contributed by ARM Ltd.
-;;
-;; This file is part of GCC.
-;;
-;; GCC is free software; you can redistribute it and/or modify it
-;; under the terms of the GNU General Public License as published by
-;; the Free Software Foundation; either version 3, or (at your option)
-;; any later version.
-;;
-;; GCC is distributed in the hope that it will be useful, but
-;; WITHOUT ANY WARRANTY; without even the implied warranty of
-;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-;; General Public License for more details.
-;;
-;; You should have received a copy of the GNU General Public License
-;; along with GCC; see the file COPYING3. If not see
-;; <http://www.gnu.org/licenses/>.
-
-;; Generic scheduler
-
-(define_automaton "aarch64")
-
-(define_cpu_unit "core" "aarch64")
-
-(define_attr "is_load" "yes,no"
- (if_then_else (eq_attr "v8type" "fpsimd_load,fpsimd_load2,load1,load2")
- (const_string "yes")
- (const_string "no")))
-
-(define_insn_reservation "load" 2
- (eq_attr "is_load" "yes")
- "core")
-
-(define_insn_reservation "nonload" 1
- (eq_attr "is_load" "no")
- "core")
diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 6caac8f351c..332daae7098 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -18,307 +18,6 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
-
-; Main data types used by the insntructions
-
-(define_attr "simd_mode" "unknown,none,V8QI,V16QI,V4HI,V8HI,V2SI,V4SI,V2DI,V2SF,V4SF,V2DF,OI,CI,XI,TI,DI,DF,SI,SF,HI,QI"
- (const_string "unknown"))
-
-
-; Classification of AdvSIMD instructions for scheduling purposes.
-; Do not set this attribute and the "v8type" attribute together in
-; any instruction pattern.
-
-; simd_abd integer absolute difference and accumulate.
-; simd_abdl integer absolute difference and accumulate (long).
-; simd_adal integer add and accumulate (long).
-; simd_add integer addition/subtraction.
-; simd_addl integer addition/subtraction (long).
-; simd_addlv across lanes integer sum (long).
-; simd_addn integer addition/subtraction (narrow).
-; simd_addn2 integer addition/subtraction (narrow, high).
-; simd_addv across lanes integer sum.
-; simd_cls count leading sign/zero bits.
-; simd_cmp compare / create mask.
-; simd_cnt population count.
-; simd_dup duplicate element.
-; simd_dupgp duplicate general purpose register.
-; simd_ext bitwise extract from pair.
-; simd_fabd floating point absolute difference.
-; simd_fadd floating point add/sub.
-; simd_fcmp floating point compare.
-; simd_fcvti floating point convert to integer.
-; simd_fcvtl floating-point convert upsize.
-; simd_fcvtn floating-point convert downsize (narrow).
-; simd_fcvtn2 floating-point convert downsize (narrow, high).
-; simd_fdiv floating point division.
-; simd_fminmax floating point min/max.
-; simd_fminmaxv across lanes floating point min/max.
-; simd_fmla floating point multiply-add.
-; simd_fmla_elt floating point multiply-add (by element).
-; simd_fmul floating point multiply.
-; simd_fmul_elt floating point multiply (by element).
-; simd_fnegabs floating point neg/abs.
-; simd_frecpe floating point reciprocal estimate.
-; simd_frecps floating point reciprocal step.
-; simd_frecpx floating point reciprocal exponent.
-; simd_frint floating point round to integer.
-; simd_fsqrt floating point square root.
-; simd_icvtf integer convert to floating point.
-; simd_ins insert element.
-; simd_insgp insert general purpose register.
-; simd_load1 load multiple structures to one register (LD1).
-; simd_load1r load single structure to all lanes of one register (LD1R).
-; simd_load1s load single structure to one lane of one register (LD1 [index]).
-; simd_load2 load multiple structures to two registers (LD1, LD2).
-; simd_load2r load single structure to all lanes of two registers (LD1R, LD2R).
-; simd_load2s load single structure to one lane of two registers (LD2 [index]).
-; simd_load3 load multiple structures to three registers (LD1, LD3).
-; simd_load3r load single structure to all lanes of three registers (LD3R).
-; simd_load3s load single structure to one lane of three registers (LD3 [index]).
-; simd_load4 load multiple structures to four registers (LD1, LD2, LD4).
-; simd_load4r load single structure to all lanes of four registers (LD4R).
-; simd_load4s load single structure to one lane of four registers (LD4 [index]).
-; simd_logic logical operation.
-; simd_logic_imm logcial operation (immediate).
-; simd_minmax integer min/max.
-; simd_minmaxv across lanes integer min/max,
-; simd_mla integer multiply-accumulate.
-; simd_mla_elt integer multiply-accumulate (by element).
-; simd_mlal integer multiply-accumulate (long).
-; simd_mlal_elt integer multiply-accumulate (by element, long).
-; simd_move move register.
-; simd_move_imm move immediate.
-; simd_movgp move element to general purpose register.
-; simd_mul integer multiply.
-; simd_mul_elt integer multiply (by element).
-; simd_mull integer multiply (long).
-; simd_mull_elt integer multiply (by element, long).
-; simd_negabs integer negate/absolute.
-; simd_rbit bitwise reverse.
-; simd_rcpe integer reciprocal estimate.
-; simd_rcps integer reciprocal square root.
-; simd_rev element reverse.
-; simd_sat_add integer saturating addition/subtraction.
-; simd_sat_mlal integer saturating multiply-accumulate (long).
-; simd_sat_mlal_elt integer saturating multiply-accumulate (by element, long).
-; simd_sat_mul integer saturating multiply.
-; simd_sat_mul_elt integer saturating multiply (by element).
-; simd_sat_mull integer saturating multiply (long).
-; simd_sat_mull_elt integer saturating multiply (by element, long).
-; simd_sat_negabs integer saturating negate/absolute.
-; simd_sat_shift integer saturating shift.
-; simd_sat_shift_imm integer saturating shift (immediate).
-; simd_sat_shiftn_imm integer saturating shift (narrow, immediate).
-; simd_sat_shiftn2_imm integer saturating shift (narrow, high, immediate).
-; simd_shift shift register/vector.
-; simd_shift_acc shift accumulate.
-; simd_shift_imm shift immediate.
-; simd_shift_imm_acc shift immediate and accumualte.
-; simd_shiftl shift register/vector (long).
-; simd_shiftl_imm shift register/vector (long, immediate).
-; simd_shiftn_imm shift register/vector (narrow, immediate).
-; simd_shiftn2_imm shift register/vector (narrow, high, immediate).
-; simd_store1 store multiple structures from one register (ST1).
-; simd_store1s store single structure from one lane of one register (ST1 [index]).
-; simd_store2 store multiple structures from two registers (ST1, ST2).
-; simd_store2s store single structure from one lane of two registers (ST2 [index]).
-; simd_store3 store multiple structures from three registers (ST1, ST3).
-; simd_store3s store single structure from one lane of three register (ST3 [index]).
-; simd_store4 store multiple structures from four registers (ST1, ST2, ST4).
-; simd_store4s store single structure from one lane for four registers (ST4 [index]).
-; simd_tbl table lookup.
-; simd_trn transpose.
-; simd_uzp unzip.
-; simd_zip zip.
-
-(define_attr "simd_type"
- "simd_abd,\
- simd_abdl,\
- simd_adal,\
- simd_add,\
- simd_addl,\
- simd_addlv,\
- simd_addn,\
- simd_addn2,\
- simd_addv,\
- simd_cls,\
- simd_cmp,\
- simd_cnt,\
- simd_dup,\
- simd_dupgp,\
- simd_ext,\
- simd_fabd,\
- simd_fadd,\
- simd_fcmp,\
- simd_fcvti,\
- simd_fcvtl,\
- simd_fcvtn,\
- simd_fcvtn2,\
- simd_fdiv,\
- simd_fminmax,\
- simd_fminmaxv,\
- simd_fmla,\
- simd_fmla_elt,\
- simd_fmul,\
- simd_fmul_elt,\
- simd_fnegabs,\
- simd_frecpe,\
- simd_frecps,\
- simd_frecpx,\
- simd_frint,\
- simd_fsqrt,\
- simd_icvtf,\
- simd_ins,\
- simd_insgp,\
- simd_load1,\
- simd_load1r,\
- simd_load1s,\
- simd_load2,\
- simd_load2r,\
- simd_load2s,\
- simd_load3,\
- simd_load3r,\
- simd_load3s,\
- simd_load4,\
- simd_load4r,\
- simd_load4s,\
- simd_logic,\
- simd_logic_imm,\
- simd_minmax,\
- simd_minmaxv,\
- simd_mla,\
- simd_mla_elt,\
- simd_mlal,\
- simd_mlal_elt,\
- simd_movgp,\
- simd_move,\
- simd_move_imm,\
- simd_mul,\
- simd_mul_d_long,\
- simd_mul_elt,\
- simd_mull,\
- simd_mull_elt,\
- simd_negabs,\
- simd_rbit,\
- simd_rcpe,\
- simd_rcps,\
- simd_rev,\
- simd_sat_add,\
- simd_sat_mlal,\
- simd_sat_mlal_elt,\
- simd_sat_mul,\
- simd_sat_mul_elt,\
- simd_sat_mull,\
- simd_sat_mull_elt,\
- simd_sat_negabs,\
- simd_sat_shift,\
- simd_sat_shift_imm,\
- simd_sat_shiftn_imm,\
- simd_sat_shiftn2_imm,\
- simd_shift,\
- simd_shift_acc,\
- simd_shift_imm,\
- simd_shift_imm_acc,\
- simd_shiftl,\
- simd_shiftl_imm,\
- simd_shiftn_imm,\
- simd_shiftn2_imm,\
- simd_store1,\
- simd_store1s,\
- simd_store2,\
- simd_store2s,\
- simd_store3,\
- simd_store3s,\
- simd_store4,\
- simd_store4s,\
- simd_tbl,\
- simd_trn,\
- simd_uzp,\
- simd_zip,\
- simd_crypto_aes,\
- simd_crypto_sha1_xor,\
- simd_crypto_sha1_fast,\
- simd_crypto_sha1_slow,\
- simd_crypto_sha256_fast,\
- simd_crypto_sha256_slow,\
- none"
- (const_string "none"))
-
-
-; The "neon_type" attribute is used by the AArch32 backend. Below is a mapping
-; from "simd_type" to "neon_type".
-
-(define_attr "neon_type"
- "neon_int_1,neon_int_2,neon_int_3,neon_int_4,neon_int_5,neon_vqneg_vqabs,
- neon_vmov,neon_vaba,neon_vsma,neon_vaba_qqq,
- neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,neon_mul_qqq_8_16_32_ddd_32,
- neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar,
- neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,neon_mla_qqq_8_16,
- neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long,
- neon_mla_qqq_32_qqd_32_scalar,neon_mul_ddd_16_scalar_32_16_long_scalar,
- neon_mul_qqd_32_scalar,neon_mla_ddd_16_scalar_qdd_32_16_long_scalar,
- neon_shift_1,neon_shift_2,neon_shift_3,neon_vshl_ddd,
- neon_vqshl_vrshl_vqrshl_qqq,neon_vsra_vrsra,neon_fp_vadd_ddd_vabs_dd,
- neon_fp_vadd_qqq_vabs_qq,neon_fp_vsum,neon_fp_vmul_ddd,neon_fp_vmul_qqd,
- neon_fp_vmla_ddd,neon_fp_vmla_qqq,neon_fp_vmla_ddd_scalar,
- neon_fp_vmla_qqq_scalar,neon_fp_vrecps_vrsqrts_ddd,
- neon_fp_vrecps_vrsqrts_qqq,neon_bp_simple,neon_bp_2cycle,neon_bp_3cycle,
- neon_ldr,neon_str,neon_vld1_1_2_regs,neon_vld1_3_4_regs,
- neon_vld2_2_regs_vld1_vld2_all_lanes,neon_vld2_4_regs,neon_vld3_vld4,
- neon_vst1_1_2_regs_vst2_2_regs,neon_vst1_3_4_regs,
- neon_vst2_4_regs_vst3_vst4,neon_vst3_vst4,neon_vld1_vld2_lane,
- neon_vld3_vld4_lane,neon_vst1_vst2_lane,neon_vst3_vst4_lane,
- neon_vld3_vld4_all_lanes,neon_mcr,neon_mcr_2_mcrr,neon_mrc,neon_mrrc,
- neon_ldm_2,neon_stm_2,none,unknown"
- (cond [
- (eq_attr "simd_type" "simd_dup") (const_string "neon_bp_simple")
- (eq_attr "simd_type" "simd_movgp") (const_string "neon_bp_simple")
- (eq_attr "simd_type" "simd_add,simd_logic,simd_logic_imm") (const_string "neon_int_1")
- (eq_attr "simd_type" "simd_negabs,simd_addlv") (const_string "neon_int_3")
- (eq_attr "simd_type" "simd_addn,simd_addn2,simd_addl,simd_sat_add,simd_sat_negabs") (const_string "neon_int_4")
- (eq_attr "simd_type" "simd_move") (const_string "neon_vmov")
- (eq_attr "simd_type" "simd_ins") (const_string "neon_mcr")
- (and (eq_attr "simd_type" "simd_mul,simd_sat_mul") (eq_attr "simd_mode" "V8QI,V4HI")) (const_string "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long")
- (and (eq_attr "simd_type" "simd_mul,simd_sat_mul") (eq_attr "simd_mode" "V2SI,V8QI,V16QI,V2SI")) (const_string "neon_mul_qqq_8_16_32_ddd_32")
- (and (eq_attr "simd_type" "simd_mull,simd_sat_mull") (eq_attr "simd_mode" "V8QI,V16QI,V4HI,V8HI")) (const_string "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long")
- (and (eq_attr "simd_type" "simd_mull,simd_sat_mull") (eq_attr "simd_mode" "V2SI,V4SI,V2DI")) (const_string "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")
- (and (eq_attr "simd_type" "simd_mla,simd_sat_mlal") (eq_attr "simd_mode" "V8QI,V4HI")) (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (and (eq_attr "simd_type" "simd_mla,simd_sat_mlal") (eq_attr "simd_mode" "V2SI")) (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")
- (and (eq_attr "simd_type" "simd_mla,simd_sat_mlal") (eq_attr "simd_mode" "V16QI,V8HI")) (const_string "neon_mla_qqq_8_16")
- (and (eq_attr "simd_type" "simd_mla,simd_sat_mlal") (eq_attr "simd_mode" "V4SI")) (const_string "neon_mla_qqq_32_qqd_32_scalar")
- (and (eq_attr "simd_type" "simd_mlal") (eq_attr "simd_mode" "V8QI,V16QI,V4HI,V8HI")) (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (and (eq_attr "simd_type" "simd_mlal") (eq_attr "simd_mode" "V2SI,V4SI,V2DI")) (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")
- (and (eq_attr "simd_type" "simd_fmla") (eq_attr "simd_mode" "V2SF")) (const_string "neon_fp_vmla_ddd")
- (and (eq_attr "simd_type" "simd_fmla") (eq_attr "simd_mode" "V4SF,V2DF")) (const_string "neon_fp_vmla_qqq")
- (and (eq_attr "simd_type" "simd_fmla_elt") (eq_attr "simd_mode" "V2SF")) (const_string "neon_fp_vmla_ddd_scalar")
- (and (eq_attr "simd_type" "simd_fmla_elt") (eq_attr "simd_mode" "V4SF,V2DF")) (const_string "neon_fp_vmla_qqq_scalar")
- (and (eq_attr "simd_type" "simd_fmul,simd_fmul_elt,simd_fdiv,simd_fsqrt") (eq_attr "simd_mode" "V2SF")) (const_string "neon_fp_vmul_ddd")
- (and (eq_attr "simd_type" "simd_fmul,simd_fmul_elt,simd_fdiv,simd_fsqrt") (eq_attr "simd_mode" "V4SF,V2DF")) (const_string "neon_fp_vmul_qqd")
- (and (eq_attr "simd_type" "simd_fadd") (eq_attr "simd_mode" "V2SF")) (const_string "neon_fp_vadd_ddd_vabs_dd")
- (and (eq_attr "simd_type" "simd_fadd") (eq_attr "simd_mode" "V4SF,V2DF")) (const_string "neon_fp_vadd_qqq_vabs_qq")
- (and (eq_attr "simd_type" "simd_fnegabs,simd_fminmax,simd_fminmaxv") (eq_attr "simd_mode" "V2SF")) (const_string "neon_fp_vadd_ddd_vabs_dd")
- (and (eq_attr "simd_type" "simd_fnegabs,simd_fminmax,simd_fminmaxv") (eq_attr "simd_mode" "V4SF,V2DF")) (const_string "neon_fp_vadd_qqq_vabs_qq")
- (and (eq_attr "simd_type" "simd_shift,simd_shift_acc") (eq_attr "simd_mode" "V8QI,V4HI,V2SI")) (const_string "neon_vshl_ddd")
- (and (eq_attr "simd_type" "simd_shift,simd_shift_acc") (eq_attr "simd_mode" "V16QI,V8HI,V4SI,V2DI")) (const_string "neon_shift_3")
- (eq_attr "simd_type" "simd_minmax,simd_minmaxv") (const_string "neon_int_5")
- (eq_attr "simd_type" "simd_shiftn_imm,simd_shiftn2_imm,simd_shiftl_imm,") (const_string "neon_shift_1")
- (eq_attr "simd_type" "simd_load1,simd_load2") (const_string "neon_vld1_1_2_regs")
- (eq_attr "simd_type" "simd_load3,simd_load3") (const_string "neon_vld1_3_4_regs")
- (eq_attr "simd_type" "simd_load1r,simd_load2r,simd_load3r,simd_load4r") (const_string "neon_vld2_2_regs_vld1_vld2_all_lanes")
- (eq_attr "simd_type" "simd_load1s,simd_load2s") (const_string "neon_vld1_vld2_lane")
- (eq_attr "simd_type" "simd_load3s,simd_load4s") (const_string "neon_vld3_vld4_lane")
- (eq_attr "simd_type" "simd_store1,simd_store2") (const_string "neon_vst1_1_2_regs_vst2_2_regs")
- (eq_attr "simd_type" "simd_store3,simd_store4") (const_string "neon_vst1_3_4_regs")
- (eq_attr "simd_type" "simd_store1s,simd_store2s") (const_string "neon_vst1_vst2_lane")
- (eq_attr "simd_type" "simd_store3s,simd_store4s") (const_string "neon_vst3_vst4_lane")
- (and (eq_attr "simd_type" "simd_frecpe,simd_frecps") (eq_attr "simd_mode" "V2SF")) (const_string "neon_fp_vrecps_vrsqrts_ddd")
- (and (eq_attr "simd_type" "simd_frecpe,simd_frecps") (eq_attr "simd_mode" "V4SF,V2DF")) (const_string "neon_fp_vrecps_vrsqrts_qqq")
- (eq_attr "simd_type" "none") (const_string "none")
- ]
- (const_string "unknown")))
-
-
(define_expand "mov<mode>"
[(set (match_operand:VALL 0 "aarch64_simd_nonimmediate_operand" "")
(match_operand:VALL 1 "aarch64_simd_general_operand" ""))]
@@ -347,8 +46,7 @@
(vec_duplicate:VDQ (match_operand:<VEL> 1 "register_operand" "r")))]
"TARGET_SIMD"
"dup\\t%0.<Vtype>, %<vw>1"
- [(set_attr "simd_type" "simd_dupgp")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_from_gp<q>")]
)
(define_insn "aarch64_dup_lane<mode>"
@@ -360,8 +58,7 @@
)))]
"TARGET_SIMD"
"dup\\t%<v>0<Vmtype>, %1.<Vetype>[%2]"
- [(set_attr "simd_type" "simd_dup")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_dup<q>")]
)
(define_insn "aarch64_simd_dup<mode>"
@@ -369,8 +66,7 @@
(vec_duplicate:VDQF (match_operand:<VEL> 1 "register_operand" "w")))]
"TARGET_SIMD"
"dup\\t%0.<Vtype>, %1.<Vetype>[0]"
- [(set_attr "simd_type" "simd_dup")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_dup<q>")]
)
(define_insn "*aarch64_simd_mov<mode>"
@@ -396,8 +92,9 @@
default: gcc_unreachable ();
}
}
- [(set_attr "simd_type" "simd_load1,simd_store1,simd_move,simd_movgp,simd_insgp,simd_move,simd_move_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_load1_1reg<q>, neon_store1_1reg<q>,\
+ neon_logic<q>, neon_to_gp<q>, neon_from_gp<q>,\
+ mov_reg, neon_move<q>")]
)
(define_insn "*aarch64_simd_mov<mode>"
@@ -427,8 +124,9 @@
gcc_unreachable ();
}
}
- [(set_attr "simd_type" "simd_load1,simd_store1,simd_move,simd_movgp,simd_insgp,simd_move,simd_move_imm")
- (set_attr "simd_mode" "<MODE>")
+ [(set_attr "type" "neon_load1_1reg<q>, neon_store1_1reg<q>,\
+ neon_logic<q>, multiple, multiple, multiple,\
+ neon_move<q>")
(set_attr "length" "4,4,4,8,8,8,4")]
)
@@ -507,8 +205,7 @@
(match_operand:VQ 2 "vect_par_cnst_lo_half" "")))]
"TARGET_SIMD && reload_completed"
"umov\t%0, %1.d[0]"
- [(set_attr "simd_type" "simd_movgp")
- (set_attr "simd_mode" "<MODE>")
+ [(set_attr "type" "neon_to_gp<q>")
(set_attr "length" "4")
])
@@ -519,8 +216,7 @@
(match_operand:VQ 2 "vect_par_cnst_hi_half" "")))]
"TARGET_SIMD && reload_completed"
"umov\t%0, %1.d[1]"
- [(set_attr "simd_type" "simd_movgp")
- (set_attr "simd_mode" "<MODE>")
+ [(set_attr "type" "neon_to_gp<q>")
(set_attr "length" "4")
])
@@ -530,8 +226,7 @@
(match_operand:VDQ 2 "register_operand" "w")))]
"TARGET_SIMD"
"orn\t%0.<Vbtype>, %2.<Vbtype>, %1.<Vbtype>"
- [(set_attr "simd_type" "simd_logic")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_logic<q>")]
)
(define_insn "bic<mode>3"
@@ -540,8 +235,7 @@
(match_operand:VDQ 2 "register_operand" "w")))]
"TARGET_SIMD"
"bic\t%0.<Vbtype>, %2.<Vbtype>, %1.<Vbtype>"
- [(set_attr "simd_type" "simd_logic")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_logic<q>")]
)
(define_insn "add<mode>3"
@@ -550,8 +244,7 @@
(match_operand:VDQ 2 "register_operand" "w")))]
"TARGET_SIMD"
"add\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_add")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_add<q>")]
)
(define_insn "sub<mode>3"
@@ -560,8 +253,7 @@
(match_operand:VDQ 2 "register_operand" "w")))]
"TARGET_SIMD"
"sub\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_add")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sub<q>")]
)
(define_insn "mul<mode>3"
@@ -570,8 +262,7 @@
(match_operand:VDQM 2 "register_operand" "w")))]
"TARGET_SIMD"
"mul\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mul_<Vetype><q>")]
)
(define_insn "neg<mode>2"
@@ -579,8 +270,7 @@
(neg:VDQ (match_operand:VDQ 1 "register_operand" "w")))]
"TARGET_SIMD"
"neg\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_negabs")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_neg<q>")]
)
(define_insn "abs<mode>2"
@@ -588,8 +278,7 @@
(abs:VDQ (match_operand:VDQ 1 "register_operand" "w")))]
"TARGET_SIMD"
"abs\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_negabs")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_abs<q>")]
)
(define_insn "abd<mode>_3"
@@ -599,8 +288,7 @@
(match_operand:VDQ_BHSI 2 "register_operand" "w"))))]
"TARGET_SIMD"
"sabd\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_abd")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_abd<q>")]
)
(define_insn "aba<mode>_3"
@@ -611,8 +299,7 @@
(match_operand:VDQ_BHSI 3 "register_operand" "0")))]
"TARGET_SIMD"
"saba\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_abd")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_arith_acc<q>")]
)
(define_insn "fabd<mode>_3"
@@ -622,8 +309,7 @@
(match_operand:VDQF 2 "register_operand" "w"))))]
"TARGET_SIMD"
"fabd\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_fabd")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_abd_<Vetype><q>")]
)
(define_insn "*fabd_scalar<mode>3"
@@ -633,8 +319,7 @@
(match_operand:GPF 2 "register_operand" "w"))))]
"TARGET_SIMD"
"fabd\t%<s>0, %<s>1, %<s>2"
- [(set_attr "simd_type" "simd_fabd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_abd_<Vetype><q>")]
)
(define_insn "and<mode>3"
@@ -643,8 +328,7 @@
(match_operand:VDQ 2 "register_operand" "w")))]
"TARGET_SIMD"
"and\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>"
- [(set_attr "simd_type" "simd_logic")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_logic<q>")]
)
(define_insn "ior<mode>3"
@@ -653,8 +337,7 @@
(match_operand:VDQ 2 "register_operand" "w")))]
"TARGET_SIMD"
"orr\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>"
- [(set_attr "simd_type" "simd_logic")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_logic<q>")]
)
(define_insn "xor<mode>3"
@@ -663,8 +346,7 @@
(match_operand:VDQ 2 "register_operand" "w")))]
"TARGET_SIMD"
"eor\t%0.<Vbtype>, %1.<Vbtype>, %2.<Vbtype>"
- [(set_attr "simd_type" "simd_logic")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_logic<q>")]
)
(define_insn "one_cmpl<mode>2"
@@ -672,8 +354,7 @@
(not:VDQ (match_operand:VDQ 1 "register_operand" "w")))]
"TARGET_SIMD"
"not\t%0.<Vbtype>, %1.<Vbtype>"
- [(set_attr "simd_type" "simd_logic")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_logic<q>")]
)
(define_insn "aarch64_simd_vec_set<mode>"
@@ -685,8 +366,7 @@
(match_operand:SI 2 "immediate_operand" "i")))]
"TARGET_SIMD"
"ins\t%0.<Vetype>[%p2], %w1";
- [(set_attr "simd_type" "simd_insgp")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_from_gp<q>")]
)
(define_insn "aarch64_simd_lshr<mode>"
@@ -695,8 +375,7 @@
(match_operand:VDQ 2 "aarch64_simd_rshift_imm" "Dr")))]
"TARGET_SIMD"
"ushr\t%0.<Vtype>, %1.<Vtype>, %2"
- [(set_attr "simd_type" "simd_shift_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_imm<q>")]
)
(define_insn "aarch64_simd_ashr<mode>"
@@ -705,8 +384,7 @@
(match_operand:VDQ 2 "aarch64_simd_rshift_imm" "Dr")))]
"TARGET_SIMD"
"sshr\t%0.<Vtype>, %1.<Vtype>, %2"
- [(set_attr "simd_type" "simd_shift_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_imm<q>")]
)
(define_insn "aarch64_simd_imm_shl<mode>"
@@ -715,8 +393,7 @@
(match_operand:VDQ 2 "aarch64_simd_lshift_imm" "Dl")))]
"TARGET_SIMD"
"shl\t%0.<Vtype>, %1.<Vtype>, %2"
- [(set_attr "simd_type" "simd_shift_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_imm<q>")]
)
(define_insn "aarch64_simd_reg_sshl<mode>"
@@ -725,8 +402,7 @@
(match_operand:VDQ 2 "register_operand" "w")))]
"TARGET_SIMD"
"sshl\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_shift")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_reg<q>")]
)
(define_insn "aarch64_simd_reg_shl<mode>_unsigned"
@@ -736,8 +412,7 @@
UNSPEC_ASHIFT_UNSIGNED))]
"TARGET_SIMD"
"ushl\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_shift")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_reg<q>")]
)
(define_insn "aarch64_simd_reg_shl<mode>_signed"
@@ -747,8 +422,7 @@
UNSPEC_ASHIFT_SIGNED))]
"TARGET_SIMD"
"sshl\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_shift")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_reg<q>")]
)
(define_expand "ashl<mode>3"
@@ -954,8 +628,7 @@
(match_operand:SI 2 "immediate_operand" "i")))]
"TARGET_SIMD"
"ins\t%0.d[%p2], %1";
- [(set_attr "simd_type" "simd_insgp")
- (set_attr "simd_mode" "V2DI")]
+ [(set_attr "type" "neon_from_gp")]
)
(define_expand "vec_setv2di"
@@ -980,8 +653,7 @@
(match_operand:SI 2 "immediate_operand" "i")))]
"TARGET_SIMD"
"ins\t%0.<Vetype>[%p2], %1.<Vetype>[0]";
- [(set_attr "simd_type" "simd_ins")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_ins<q>")]
)
(define_expand "vec_set<mode>"
@@ -1005,8 +677,7 @@
(match_operand:VQ_S 1 "register_operand" "0")))]
"TARGET_SIMD"
"mla\t%0.<Vtype>, %2.<Vtype>, %3.<Vtype>"
- [(set_attr "simd_type" "simd_mla")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mla_<Vetype><q>")]
)
(define_insn "aarch64_mls<mode>"
@@ -1016,8 +687,7 @@
(match_operand:VQ_S 3 "register_operand" "w"))))]
"TARGET_SIMD"
"mls\t%0.<Vtype>, %2.<Vtype>, %3.<Vtype>"
- [(set_attr "simd_type" "simd_mla")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mla_<Vetype><q>")]
)
;; Max/Min operations.
@@ -1027,8 +697,7 @@
(match_operand:VQ_S 2 "register_operand" "w")))]
"TARGET_SIMD"
"<su><maxmin>\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_minmax")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_minmax<q>")]
)
;; Move into low-half clearing high half to 0.
@@ -1043,9 +712,7 @@
dup\\t%d0, %1.d[0]
fmov\\t%d0, %1
dup\\t%d0, %1"
- [(set_attr "v8type" "*,fmov,*")
- (set_attr "simd_type" "simd_dup,*,simd_dup")
- (set_attr "simd_mode" "<MODE>")
+ [(set_attr "type" "neon_dup<q>,fmov,neon_dup<q>")
(set_attr "simd" "yes,*,yes")
(set_attr "fp" "*,yes,*")
(set_attr "length" "4")]
@@ -1064,8 +731,7 @@
"@
ins\\t%0.d[1], %1.d[0]
ins\\t%0.d[1], %1"
- [(set_attr "simd_type" "simd_ins,simd_ins")
- (set_attr "simd_mode" "<MODE>")
+ [(set_attr "type" "neon_ins")
(set_attr "length" "4")]
)
@@ -1088,8 +754,7 @@
(truncate:<VNARROWQ> (match_operand:VQN 1 "register_operand" "w")))]
"TARGET_SIMD"
"xtn\\t%0.<Vntype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_shiftn_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_imm_narrow_q")]
)
(define_expand "vec_pack_trunc_<mode>"
@@ -1115,8 +780,7 @@
(truncate:<VNARROWQ> (match_operand:VQN 2 "register_operand" "w"))))]
"TARGET_SIMD"
"xtn\\t%0.<Vntype>, %1.<Vtype>\;xtn2\\t%0.<V2ntype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_shiftn2_imm")
- (set_attr "simd_mode" "<MODE>")
+ [(set_attr "type" "multiple")
(set_attr "length" "8")]
)
@@ -1130,8 +794,7 @@
)))]
"TARGET_SIMD"
"<su>shll %0.<Vwtype>, %1.<Vhalftype>, 0"
- [(set_attr "simd_type" "simd_shiftl_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_imm_long")]
)
(define_insn "aarch64_simd_vec_unpack<su>_hi_<mode>"
@@ -1142,8 +805,7 @@
)))]
"TARGET_SIMD"
"<su>shll2 %0.<Vwtype>, %1.<Vtype>, 0"
- [(set_attr "simd_type" "simd_shiftl_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_imm_long")]
)
(define_expand "vec_unpack<su>_hi_<mode>"
@@ -1185,8 +847,7 @@
(match_operand:<VWIDE> 1 "register_operand" "0")))]
"TARGET_SIMD"
"<su>mlal\t%0.<Vwtype>, %2.<Vhalftype>, %4.<Vhalftype>"
- [(set_attr "simd_type" "simd_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mla_<Vetype>_long")]
)
(define_insn "*aarch64_<su>mlal_hi<mode>"
@@ -1202,8 +863,7 @@
(match_operand:<VWIDE> 1 "register_operand" "0")))]
"TARGET_SIMD"
"<su>mlal2\t%0.<Vwtype>, %2.<Vtype>, %4.<Vtype>"
- [(set_attr "simd_type" "simd_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mla_<Vetype>_long")]
)
(define_insn "*aarch64_<su>mlsl_lo<mode>"
@@ -1219,8 +879,7 @@
(match_dup 3))))))]
"TARGET_SIMD"
"<su>mlsl\t%0.<Vwtype>, %2.<Vhalftype>, %4.<Vhalftype>"
- [(set_attr "simd_type" "simd_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mla_<Vetype>_long")]
)
(define_insn "*aarch64_<su>mlsl_hi<mode>"
@@ -1236,8 +895,7 @@
(match_dup 3))))))]
"TARGET_SIMD"
"<su>mlsl2\t%0.<Vwtype>, %2.<Vtype>, %4.<Vtype>"
- [(set_attr "simd_type" "simd_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mla_<Vetype>_long")]
)
(define_insn "*aarch64_<su>mlal<mode>"
@@ -1251,8 +909,7 @@
(match_operand:<VWIDE> 3 "register_operand" "0")))]
"TARGET_SIMD"
"<su>mlal\t%0.<Vwtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mla_<Vetype>_long")]
)
(define_insn "*aarch64_<su>mlsl<mode>"
@@ -1266,8 +923,7 @@
(match_operand:VDW 3 "register_operand" "w")))))]
"TARGET_SIMD"
"<su>mlsl\t%0.<Vwtype>, %2.<Vtype>, %3.<Vtype>"
- [(set_attr "simd_type" "simd_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mla_<Vetype>_long")]
)
(define_insn "aarch64_simd_vec_<su>mult_lo_<mode>"
@@ -1280,8 +936,7 @@
(match_dup 3)))))]
"TARGET_SIMD"
"<su>mull\\t%0.<Vwtype>, %1.<Vhalftype>, %2.<Vhalftype>"
- [(set_attr "simd_type" "simd_mull")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mul_<Vetype>_long")]
)
(define_expand "vec_widen_<su>mult_lo_<mode>"
@@ -1308,8 +963,7 @@
(match_dup 3)))))]
"TARGET_SIMD"
"<su>mull2\\t%0.<Vwtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_mull")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mul_<Vetype>_long")]
)
(define_expand "vec_widen_<su>mult_hi_<mode>"
@@ -1358,8 +1012,7 @@
(match_operand:VDQF 2 "register_operand" "w")))]
"TARGET_SIMD"
"fadd\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_fadd")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_addsub_<Vetype><q>")]
)
(define_insn "sub<mode>3"
@@ -1368,8 +1021,7 @@
(match_operand:VDQF 2 "register_operand" "w")))]
"TARGET_SIMD"
"fsub\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_fadd")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_addsub_<Vetype><q>")]
)
(define_insn "mul<mode>3"
@@ -1378,8 +1030,7 @@
(match_operand:VDQF 2 "register_operand" "w")))]
"TARGET_SIMD"
"fmul\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_fmul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_mul_<Vetype><q>")]
)
(define_insn "div<mode>3"
@@ -1388,8 +1039,7 @@
(match_operand:VDQF 2 "register_operand" "w")))]
"TARGET_SIMD"
"fdiv\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_fdiv")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_div_<Vetype><q>")]
)
(define_insn "neg<mode>2"
@@ -1397,8 +1047,7 @@
(neg:VDQF (match_operand:VDQF 1 "register_operand" "w")))]
"TARGET_SIMD"
"fneg\\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_fnegabs")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_neg_<Vetype><q>")]
)
(define_insn "abs<mode>2"
@@ -1406,8 +1055,7 @@
(abs:VDQF (match_operand:VDQF 1 "register_operand" "w")))]
"TARGET_SIMD"
"fabs\\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_fnegabs")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_abs_<Vetype><q>")]
)
(define_insn "fma<mode>4"
@@ -1417,8 +1065,7 @@
(match_operand:VDQF 3 "register_operand" "0")))]
"TARGET_SIMD"
"fmla\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_fmla")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_mla_<Vetype><q>")]
)
;; Vector versions of the floating-point frint patterns.
@@ -1429,8 +1076,7 @@
FRINT))]
"TARGET_SIMD"
"frint<frint_suffix>\\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_frint")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_round_<Vetype><q>")]
)
;; Vector versions of the fcvt standard patterns.
@@ -1442,8 +1088,7 @@
FCVT)))]
"TARGET_SIMD"
"fcvt<frint_suffix><su>\\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_fcvti")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_to_int_<Vetype><q>")]
)
(define_expand "<optab><VDQF:mode><fcvt_target>2"
@@ -1475,8 +1120,7 @@
(match_operand:<FCVT_TARGET> 1 "register_operand" "w")))]
"TARGET_SIMD"
"<su_optab>cvtf\\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_icvtf")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_int_to_fp_<Vetype><q>")]
)
;; Conversions between vectors of floats and doubles.
@@ -1494,8 +1138,7 @@
)))]
"TARGET_SIMD"
"fcvtl\\t%0.2d, %1.2s"
- [(set_attr "simd_type" "simd_fcvtl")
- (set_attr "simd_mode" "V2DF")]
+ [(set_attr "type" "neon_fp_cvt_widen_s")]
)
(define_insn "aarch64_float_extend_lo_v2df"
@@ -1504,8 +1147,7 @@
(match_operand:V2SF 1 "register_operand" "w")))]
"TARGET_SIMD"
"fcvtl\\t%0.2d, %1.2s"
- [(set_attr "simd_type" "simd_fcvtl")
- (set_attr "simd_mode" "V2DF")]
+ [(set_attr "type" "neon_fp_cvt_widen_s")]
)
(define_insn "vec_unpacks_hi_v4sf"
@@ -1517,8 +1159,7 @@
)))]
"TARGET_SIMD"
"fcvtl2\\t%0.2d, %1.4s"
- [(set_attr "simd_type" "simd_fcvtl")
- (set_attr "simd_mode" "V2DF")]
+ [(set_attr "type" "neon_fp_cvt_widen_s")]
)
;; Float narrowing operations.
@@ -1529,8 +1170,7 @@
(match_operand:V2DF 1 "register_operand" "w")))]
"TARGET_SIMD"
"fcvtn\\t%0.2s, %1.2d"
- [(set_attr "simd_type" "simd_fcvtl")
- (set_attr "simd_mode" "V2SF")]
+ [(set_attr "type" "neon_fp_cvt_narrow_d_q")]
)
(define_insn "aarch64_float_truncate_hi_v4sf"
@@ -1541,8 +1181,7 @@
(match_operand:V2DF 2 "register_operand" "w"))))]
"TARGET_SIMD"
"fcvtn2\\t%0.4s, %2.2d"
- [(set_attr "simd_type" "simd_fcvtl")
- (set_attr "simd_mode" "V4SF")]
+ [(set_attr "type" "neon_fp_cvt_narrow_d_q")]
)
(define_expand "vec_pack_trunc_v2df"
@@ -1588,8 +1227,7 @@
(match_operand:VDQF 3 "register_operand" "w"))))]
"TARGET_SIMD"
"fmls\\t%0.<Vtype>, %2.<Vtype>, %3.<Vtype>"
- [(set_attr "simd_type" "simd_fmla")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_mla_<Vetype>_scalar<q>")]
)
;; FP Max/Min
@@ -1612,8 +1250,7 @@
(match_operand:VDQF 2 "register_operand" "w")))]
"TARGET_SIMD"
"f<maxmin>nm\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_fminmax")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_minmax_<Vetype><q>")]
)
(define_insn "<maxmin_uns><mode>3"
@@ -1623,8 +1260,7 @@
FMAXMIN_UNS))]
"TARGET_SIMD"
"<maxmin_uns_op>\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_fminmax")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_minmax_<Vetype><q>")]
)
;; 'across lanes' add.
@@ -1635,8 +1271,7 @@
SUADDV))]
"TARGET_SIMD"
"addv\\t%<Vetype>0, %1.<Vtype>"
- [(set_attr "simd_type" "simd_addv")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_reduc_add<q>")]
)
(define_insn "reduc_<sur>plus_v2di"
@@ -1645,8 +1280,7 @@
SUADDV))]
"TARGET_SIMD"
"addp\\t%d0, %1.2d"
- [(set_attr "simd_type" "simd_addv")
- (set_attr "simd_mode" "V2DI")]
+ [(set_attr "type" "neon_reduc_add_q")]
)
(define_insn "reduc_<sur>plus_v2si"
@@ -1655,8 +1289,7 @@
SUADDV))]
"TARGET_SIMD"
"addp\\t%0.2s, %1.2s, %1.2s"
- [(set_attr "simd_type" "simd_addv")
- (set_attr "simd_mode" "V2SI")]
+ [(set_attr "type" "neon_reduc_add")]
)
(define_insn "reduc_<sur>plus_<mode>"
@@ -1665,8 +1298,7 @@
SUADDV))]
"TARGET_SIMD"
"faddp\\t%<Vetype>0, %1.<Vtype>"
- [(set_attr "simd_type" "simd_fadd")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_reduc_add_<Vetype><q>")]
)
(define_insn "aarch64_addpv4sf"
@@ -1675,8 +1307,7 @@
UNSPEC_FADDV))]
"TARGET_SIMD"
"faddp\\t%0.4s, %1.4s, %1.4s"
- [(set_attr "simd_type" "simd_fadd")
- (set_attr "simd_mode" "V4SF")]
+ [(set_attr "type" "neon_fp_reduc_add_s_q")]
)
(define_expand "reduc_<sur>plus_v4sf"
@@ -1696,8 +1327,7 @@
(clz:VDQ_BHSI (match_operand:VDQ_BHSI 1 "register_operand" "w")))]
"TARGET_SIMD"
"clz\\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_cls")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_cls<q>")]
)
;; 'across lanes' max and min ops.
@@ -1708,8 +1338,7 @@
MAXMINV))]
"TARGET_SIMD"
"<maxmin_uns_op>v\\t%<Vetype>0, %1.<Vtype>"
- [(set_attr "simd_type" "simd_minmaxv")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_reduc_minmax<q>")]
)
(define_insn "reduc_<maxmin_uns>_v2di"
@@ -1718,8 +1347,7 @@
MAXMINV))]
"TARGET_SIMD"
"<maxmin_uns_op>p\\t%d0, %1.2d"
- [(set_attr "simd_type" "simd_minmaxv")
- (set_attr "simd_mode" "V2DI")]
+ [(set_attr "type" "neon_reduc_minmax_q")]
)
(define_insn "reduc_<maxmin_uns>_v2si"
@@ -1728,8 +1356,7 @@
MAXMINV))]
"TARGET_SIMD"
"<maxmin_uns_op>p\\t%0.2s, %1.2s, %1.2s"
- [(set_attr "simd_type" "simd_minmaxv")
- (set_attr "simd_mode" "V2SI")]
+ [(set_attr "type" "neon_reduc_minmax")]
)
(define_insn "reduc_<maxmin_uns>_<mode>"
@@ -1738,8 +1365,7 @@
FMAXMINV))]
"TARGET_SIMD"
"<maxmin_uns_op>p\\t%<Vetype>0, %1.<Vtype>"
- [(set_attr "simd_type" "simd_fminmaxv")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_reduc_minmax_<Vetype><q>")]
)
(define_insn "reduc_<maxmin_uns>_v4sf"
@@ -1748,8 +1374,7 @@
FMAXMINV))]
"TARGET_SIMD"
"<maxmin_uns_op>v\\t%s0, %1.4s"
- [(set_attr "simd_type" "simd_fminmaxv")
- (set_attr "simd_mode" "V4SF")]
+ [(set_attr "type" "neon_fp_reduc_minmax_s_q")]
)
;; aarch64_simd_bsl may compile to any of bsl/bif/bit depending on register
@@ -1784,6 +1409,7 @@
bsl\\t%0.<Vbtype>, %2.<Vbtype>, %3.<Vbtype>
bit\\t%0.<Vbtype>, %2.<Vbtype>, %1.<Vbtype>
bif\\t%0.<Vbtype>, %3.<Vbtype>, %1.<Vbtype>"
+ [(set_attr "type" "neon_bsl<q>")]
)
(define_expand "aarch64_simd_bsl<mode>"
@@ -2148,8 +1774,7 @@
(parallel [(match_operand:SI 2 "immediate_operand" "i")]))))]
"TARGET_SIMD"
"smov\\t%<GPI:w>0, %1.<VDQQH:Vetype>[%2]"
- [(set_attr "simd_type" "simd_movgp")
- (set_attr "simd_mode" "<VDQQH:MODE>")]
+ [(set_attr "type" "neon_to_gp<q>")]
)
(define_insn "*aarch64_get_lane_zero_extendsi<mode>"
@@ -2160,8 +1785,7 @@
(parallel [(match_operand:SI 2 "immediate_operand" "i")]))))]
"TARGET_SIMD"
"umov\\t%w0, %1.<Vetype>[%2]"
- [(set_attr "simd_type" "simd_movgp")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_to_gp<q>")]
)
;; Lane extraction of a value, neither sign nor zero extension
@@ -2175,8 +1799,7 @@
"@
umov\\t%<vwcore>0, %1.<Vetype>[%2]
dup\\t%<Vetype>0, %1.<Vetype>[%2]"
- [(set_attr "simd_type" "simd_movgp, simd_dup")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_to_gp<q>, neon_dup<q>")]
)
(define_expand "aarch64_get_lanedi"
@@ -2299,8 +1922,7 @@
(match_operand:VDIC 2 "aarch64_simd_imm_zero" "Dz")))]
"TARGET_SIMD"
"mov\\t%0.8b, %1.8b"
- [(set_attr "simd_type" "simd_move")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_move<q>")]
)
(define_insn_and_split "aarch64_combine<mode>"
@@ -2314,7 +1936,9 @@
{
aarch64_split_simd_combine (operands[0], operands[1], operands[2]);
DONE;
-})
+}
+[(set_attr "type" "multiple")]
+)
(define_expand "aarch64_simd_combine<mode>"
[(set (match_operand:<VDBL> 0 "register_operand" "=&w")
@@ -2325,7 +1949,9 @@
emit_insn (gen_move_lo_quad_<Vdbl> (operands[0], operands[1]));
emit_insn (gen_move_hi_quad_<Vdbl> (operands[0], operands[2]));
DONE;
- })
+ }
+[(set_attr "type" "multiple")]
+)
;; <su><addsub>l<q>.
@@ -2339,8 +1965,7 @@
(match_dup 3)))))]
"TARGET_SIMD"
"<ANY_EXTEND:su><ADDSUB:optab>l2 %0.<Vwtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_addl")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_<ADDSUB:optab>_long")]
)
(define_expand "aarch64_saddl2<mode>"
@@ -2399,8 +2024,7 @@
(match_operand:VDW 2 "register_operand" "w"))))]
"TARGET_SIMD"
"<ANY_EXTEND:su><ADDSUB:optab>l %0.<Vwtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_addl")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_<ADDSUB:optab>_long")]
)
;; <su><addsub>w<q>.
@@ -2412,8 +2036,7 @@
(match_operand:VDW 2 "register_operand" "w"))))]
"TARGET_SIMD"
"<ANY_EXTEND:su><ADDSUB:optab>w\\t%0.<Vwtype>, %1.<Vwtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_addl")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_<ADDSUB:optab>_widen")]
)
(define_insn "aarch64_<ANY_EXTEND:su><ADDSUB:optab>w2<mode>_internal"
@@ -2425,8 +2048,7 @@
(match_operand:VQW 3 "vect_par_cnst_hi_half" "")))))]
"TARGET_SIMD"
"<ANY_EXTEND:su><ADDSUB:optab>w2\\t%0.<Vwtype>, %1.<Vwtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_addl")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_<ADDSUB:optab>_widen")]
)
(define_expand "aarch64_saddw2<mode>"
@@ -2487,8 +2109,7 @@
HADDSUB))]
"TARGET_SIMD"
"<sur>h<addsub>\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_add")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_<addsub>_halve<q>")]
)
;; <r><addsub>hn<q>.
@@ -2500,8 +2121,7 @@
ADDSUBHN))]
"TARGET_SIMD"
"<sur><addsub>hn\\t%0.<Vntype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_addn")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_<addsub>_halve_narrow_q")]
)
(define_insn "aarch64_<sur><addsub>hn2<mode>"
@@ -2512,8 +2132,7 @@
ADDSUBHN2))]
"TARGET_SIMD"
"<sur><addsub>hn2\\t%0.<V2ntype>, %2.<Vtype>, %3.<Vtype>"
- [(set_attr "simd_type" "simd_addn2")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_<addsub>_halve_narrow_q")]
)
;; pmul.
@@ -2525,8 +2144,7 @@
UNSPEC_PMUL))]
"TARGET_SIMD"
"pmul\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_mul_<Vetype><q>")]
)
;; <su>q<addsub>
@@ -2537,8 +2155,7 @@
(match_operand:VSDQ_I 2 "register_operand" "w")))]
"TARGET_SIMD"
"<su_optab><optab>\\t%<v>0<Vmtype>, %<v>1<Vmtype>, %<v>2<Vmtype>"
- [(set_attr "simd_type" "simd_add")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_<optab><q>")]
)
;; suqadd and usqadd
@@ -2550,8 +2167,7 @@
USSUQADD))]
"TARGET_SIMD"
"<sur>qadd\\t%<v>0<Vmtype>, %<v>2<Vmtype>"
- [(set_attr "simd_type" "simd_sat_add")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_qadd<q>")]
)
;; sqmovun
@@ -2562,8 +2178,7 @@
UNSPEC_SQXTUN))]
"TARGET_SIMD"
"sqxtun\\t%<vn2>0<Vmntype>, %<v>1<Vmtype>"
- [(set_attr "simd_type" "simd_sat_shiftn_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_shift_imm_narrow_q")]
)
;; sqmovn and uqmovn
@@ -2574,8 +2189,7 @@
SUQMOVN))]
"TARGET_SIMD"
"<sur>qxtn\\t%<vn2>0<Vmntype>, %<v>1<Vmtype>"
- [(set_attr "simd_type" "simd_sat_shiftn_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_shift_imm_narrow_q")]
)
;; <su>q<absneg>
@@ -2586,8 +2200,7 @@
(match_operand:VSDQ_I_BHSI 1 "register_operand" "w")))]
"TARGET_SIMD"
"s<optab>\\t%<v>0<Vmtype>, %<v>1<Vmtype>"
- [(set_attr "simd_type" "simd_sat_negabs")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_<optab><q>")]
)
;; sq<r>dmulh.
@@ -2600,8 +2213,7 @@
VQDMULH))]
"TARGET_SIMD"
"sq<r>dmulh\\t%<v>0<Vmtype>, %<v>1<Vmtype>, %<v>2<Vmtype>"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype><q>")]
)
;; sq<r>dmulh_lane
@@ -2618,8 +2230,7 @@
"*
aarch64_simd_lane_bounds (operands[3], 0, GET_MODE_NUNITS (<VCOND>mode));
return \"sq<r>dmulh\\t%0.<Vtype>, %1.<Vtype>, %2.<Vetype>[%3]\";"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]
)
(define_insn "aarch64_sq<r>dmulh_laneq<mode>"
@@ -2634,8 +2245,7 @@
"*
aarch64_simd_lane_bounds (operands[3], 0, GET_MODE_NUNITS (<VCONQ>mode));
return \"sq<r>dmulh\\t%0.<Vtype>, %1.<Vtype>, %2.<Vetype>[%3]\";"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]
)
(define_insn "aarch64_sq<r>dmulh_lane<mode>"
@@ -2650,8 +2260,7 @@
"*
aarch64_simd_lane_bounds (operands[3], 0, GET_MODE_NUNITS (<VCONQ>mode));
return \"sq<r>dmulh\\t%<v>0, %<v>1, %2.<v>[%3]\";"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_scalar<q>")]
)
;; vqdml[sa]l
@@ -2669,8 +2278,7 @@
(const_int 1))))]
"TARGET_SIMD"
"sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %<v>3<Vmtype>"
- [(set_attr "simd_type" "simd_sat_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mla_<Vetype>_long")]
)
;; vqdml[sa]l_lane
@@ -2692,8 +2300,7 @@
(const_int 1))))]
"TARGET_SIMD"
"sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]"
- [(set_attr "simd_type" "simd_sat_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mla_<Vetype>_scalar_long")]
)
(define_insn "aarch64_sqdml<SBINQOPS:as>l_lane<mode>_internal"
@@ -2712,8 +2319,7 @@
(const_int 1))))]
"TARGET_SIMD"
"sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]"
- [(set_attr "simd_type" "simd_sat_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mla_<Vetype>_scalar_long")]
)
(define_expand "aarch64_sqdmlal_lane<mode>"
@@ -2792,8 +2398,7 @@
(const_int 1))))]
"TARGET_SIMD"
"sqdml<SBINQOPS:as>l\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[0]"
- [(set_attr "simd_type" "simd_sat_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mla_<Vetype>_scalar_long")]
)
;; sqdml[as]l2
@@ -2815,8 +2420,7 @@
(const_int 1))))]
"TARGET_SIMD"
"sqdml<SBINQOPS:as>l2\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %<v>3<Vmtype>"
- [(set_attr "simd_type" "simd_sat_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mla_<Vetype>_scalar_long")]
)
(define_expand "aarch64_sqdmlal2<mode>"
@@ -2866,8 +2470,7 @@
(const_int 1))))]
"TARGET_SIMD"
"sqdml<SBINQOPS:as>l2\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[%4]"
- [(set_attr "simd_type" "simd_sat_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mla_<Vetype>_scalar_long")]
)
(define_expand "aarch64_sqdmlal2_lane<mode>"
@@ -2950,8 +2553,7 @@
(const_int 1))))]
"TARGET_SIMD"
"sqdml<SBINQOPS:as>l2\\t%<vw2>0<Vmwtype>, %<v>2<Vmtype>, %3.<Vetype>[0]"
- [(set_attr "simd_type" "simd_sat_mlal")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mla_<Vetype>_scalar_long")]
)
(define_expand "aarch64_sqdmlal2_n<mode>"
@@ -2995,8 +2597,7 @@
(const_int 1)))]
"TARGET_SIMD"
"sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %<v>2<Vmtype>"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_long")]
)
;; vqdmull_lane
@@ -3016,8 +2617,7 @@
(const_int 1)))]
"TARGET_SIMD"
"sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
)
(define_insn "aarch64_sqdmull_lane<mode>_internal"
@@ -3034,8 +2634,7 @@
(const_int 1)))]
"TARGET_SIMD"
"sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
)
(define_expand "aarch64_sqdmull_lane<mode>"
@@ -3079,8 +2678,7 @@
(const_int 1)))]
"TARGET_SIMD"
"sqdmull\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[0]"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
)
;; vqdmull2
@@ -3103,8 +2701,7 @@
(const_int 1)))]
"TARGET_SIMD"
"sqdmull2\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %<v>2<Vmtype>"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
)
(define_expand "aarch64_sqdmull2<mode>"
@@ -3138,8 +2735,7 @@
(const_int 1)))]
"TARGET_SIMD"
"sqdmull2\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[%3]"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
)
(define_expand "aarch64_sqdmull2_lane<mode>"
@@ -3189,8 +2785,7 @@
(const_int 1)))]
"TARGET_SIMD"
"sqdmull2\\t%<vw2>0<Vmwtype>, %<v>1<Vmtype>, %2.<Vetype>[0]"
- [(set_attr "simd_type" "simd_sat_mul")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_mul_<Vetype>_scalar_long")]
)
(define_expand "aarch64_sqdmull2_n<mode>"
@@ -3215,8 +2810,7 @@
VSHL))]
"TARGET_SIMD"
"<sur>shl\\t%<v>0<Vmtype>, %<v>1<Vmtype>, %<v>2<Vmtype>";
- [(set_attr "simd_type" "simd_shift")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_reg<q>")]
)
@@ -3230,8 +2824,7 @@
VQSHL))]
"TARGET_SIMD"
"<sur>q<r>shl\\t%<v>0<Vmtype>, %<v>1<Vmtype>, %<v>2<Vmtype>";
- [(set_attr "simd_type" "simd_sat_shift")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_shift_reg<q>")]
)
;; vshll_n
@@ -3252,8 +2845,7 @@
else {
return \"<sur>shll\\t%0.<Vwtype>, %1.<Vtype>, %2\";
}"
- [(set_attr "simd_type" "simd_shift_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_imm_long")]
)
;; vshll_high_n
@@ -3274,8 +2866,7 @@
else {
return \"<sur>shll2\\t%0.<Vwtype>, %1.<Vtype>, %2\";
}"
- [(set_attr "simd_type" "simd_shift_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_imm_long")]
)
;; vrshr_n
@@ -3290,8 +2881,7 @@
int bit_width = GET_MODE_UNIT_SIZE (<MODE>mode) * BITS_PER_UNIT;
aarch64_simd_const_bounds (operands[2], 1, bit_width + 1);
return \"<sur>shr\\t%<v>0<Vmtype>, %<v>1<Vmtype>, %2\";"
- [(set_attr "simd_type" "simd_shift_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_shift_imm<q>")]
)
;; v(r)sra_n
@@ -3307,8 +2897,7 @@
int bit_width = GET_MODE_UNIT_SIZE (<MODE>mode) * BITS_PER_UNIT;
aarch64_simd_const_bounds (operands[3], 1, bit_width + 1);
return \"<sur>sra\\t%<v>0<Vmtype>, %<v>2<Vmtype>, %3\";"
- [(set_attr "simd_type" "simd_shift_imm_acc")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_acc<q>")]
)
;; vs<lr>i_n
@@ -3325,8 +2914,7 @@
aarch64_simd_const_bounds (operands[3], 1 - <VSLRI:offsetlr>,
bit_width - <VSLRI:offsetlr> + 1);
return \"s<lr>i\\t%<v>0<Vmtype>, %<v>2<Vmtype>, %3\";"
- [(set_attr "simd_type" "simd_shift_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_shift_imm<q>")]
)
;; vqshl(u)
@@ -3341,8 +2929,7 @@
int bit_width = GET_MODE_UNIT_SIZE (<MODE>mode) * BITS_PER_UNIT;
aarch64_simd_const_bounds (operands[2], 0, bit_width);
return \"<sur>qshl<u>\\t%<v>0<Vmtype>, %<v>1<Vmtype>, %2\";"
- [(set_attr "simd_type" "simd_sat_shift_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_shift_imm<q>")]
)
@@ -3358,8 +2945,7 @@
int bit_width = GET_MODE_UNIT_SIZE (<MODE>mode) * BITS_PER_UNIT;
aarch64_simd_const_bounds (operands[2], 1, bit_width + 1);
return \"<sur>q<r>shr<u>n\\t%<vn2>0<Vmntype>, %<v>1<Vmtype>, %2\";"
- [(set_attr "simd_type" "simd_sat_shiftn_imm")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_sat_shift_imm_narrow_q")]
)
@@ -3378,8 +2964,7 @@
"@
cm<n_optab>\t%<v>0<Vmtype>, %<v><cmp_1><Vmtype>, %<v><cmp_2><Vmtype>
cm<optab>\t%<v>0<Vmtype>, %<v>1<Vmtype>, #0"
- [(set_attr "simd_type" "simd_cmp")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_compare<q>, neon_compare_zero<q>")]
)
(define_insn_and_split "aarch64_cm<optab>di"
@@ -3408,8 +2993,7 @@
emit_insn (gen_cstoredi_neg (operands[0], comparison, cc_reg));
DONE;
}
- [(set_attr "simd_type" "simd_cmp")
- (set_attr "simd_mode" "DI")]
+ [(set_attr "type" "neon_compare, neon_compare_zero, multiple")]
)
;; cm(hs|hi)
@@ -3423,8 +3007,7 @@
)))]
"TARGET_SIMD"
"cm<n_optab>\t%<v>0<Vmtype>, %<v><cmp_1><Vmtype>, %<v><cmp_2><Vmtype>"
- [(set_attr "simd_type" "simd_cmp")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_compare<q>")]
)
(define_insn_and_split "aarch64_cm<optab>di"
@@ -3452,8 +3035,7 @@
emit_insn (gen_cstoredi_neg (operands[0], comparison, cc_reg));
DONE;
}
- [(set_attr "simd_type" "simd_cmp")
- (set_attr "simd_mode" "DI")]
+ [(set_attr "type" "neon_compare, neon_compare_zero")]
)
;; cmtst
@@ -3468,8 +3050,7 @@
(vec_duplicate:<V_cmp_result> (const_int 0)))))]
"TARGET_SIMD"
"cmtst\t%<v>0<Vmtype>, %<v>1<Vmtype>, %<v>2<Vmtype>"
- [(set_attr "simd_type" "simd_cmp")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_tst<q>")]
)
(define_insn_and_split "aarch64_cmtstdi"
@@ -3499,8 +3080,7 @@
emit_insn (gen_cstoredi_neg (operands[0], comparison, cc_reg));
DONE;
}
- [(set_attr "simd_type" "simd_cmp")
- (set_attr "simd_mode" "DI")]
+ [(set_attr "type" "neon_tst")]
)
;; fcm(eq|ge|gt|le|lt)
@@ -3516,8 +3096,7 @@
"@
fcm<n_optab>\t%<v>0<Vmtype>, %<v><cmp_1><Vmtype>, %<v><cmp_2><Vmtype>
fcm<optab>\t%<v>0<Vmtype>, %<v>1<Vmtype>, 0"
- [(set_attr "simd_type" "simd_fcmp")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_compare_<Vetype><q>")]
)
;; fac(ge|gt)
@@ -3533,8 +3112,7 @@
)))]
"TARGET_SIMD"
"fac<n_optab>\t%<v>0<Vmtype>, %<v><cmp_1><Vmtype>, %<v><cmp_2><Vmtype>"
- [(set_attr "simd_type" "simd_fcmp")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_compare_<Vetype><q>")]
)
;; addp
@@ -3547,8 +3125,7 @@
UNSPEC_ADDP))]
"TARGET_SIMD"
"addp\t%<v>0<Vmtype>, %<v>1<Vmtype>, %<v>2<Vmtype>"
- [(set_attr "simd_type" "simd_add")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_reduc_add<q>")]
)
(define_insn "aarch64_addpdi"
@@ -3558,8 +3135,7 @@
UNSPEC_ADDP))]
"TARGET_SIMD"
"addp\t%d0, %1.2d"
- [(set_attr "simd_type" "simd_add")
- (set_attr "simd_mode" "DI")]
+ [(set_attr "type" "neon_reduc_add")]
)
;; sqrt
@@ -3569,8 +3145,7 @@
(sqrt:VDQF (match_operand:VDQF 1 "register_operand" "w")))]
"TARGET_SIMD"
"fsqrt\\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_fsqrt")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_sqrt_<Vetype><q>")]
)
;; Patterns for vector struct loads and stores.
@@ -3582,8 +3157,8 @@
UNSPEC_LD2))]
"TARGET_SIMD"
"ld2\\t{%S0.<Vtype> - %T0.<Vtype>}, %1"
- [(set_attr "simd_type" "simd_load2")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load2_2reg<q>")]
+)
(define_insn "vec_store_lanesoi<mode>"
[(set (match_operand:OI 0 "aarch64_simd_struct_operand" "=Utv")
@@ -3592,8 +3167,8 @@
UNSPEC_ST2))]
"TARGET_SIMD"
"st2\\t{%S1.<Vtype> - %T1.<Vtype>}, %0"
- [(set_attr "simd_type" "simd_store2")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_store2_2reg<q>")]
+)
(define_insn "vec_load_lanesci<mode>"
[(set (match_operand:CI 0 "register_operand" "=w")
@@ -3602,8 +3177,8 @@
UNSPEC_LD3))]
"TARGET_SIMD"
"ld3\\t{%S0.<Vtype> - %U0.<Vtype>}, %1"
- [(set_attr "simd_type" "simd_load3")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load3_3reg<q>")]
+)
(define_insn "vec_store_lanesci<mode>"
[(set (match_operand:CI 0 "aarch64_simd_struct_operand" "=Utv")
@@ -3612,8 +3187,8 @@
UNSPEC_ST3))]
"TARGET_SIMD"
"st3\\t{%S1.<Vtype> - %U1.<Vtype>}, %0"
- [(set_attr "simd_type" "simd_store3")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_store3_3reg<q>")]
+)
(define_insn "vec_load_lanesxi<mode>"
[(set (match_operand:XI 0 "register_operand" "=w")
@@ -3622,8 +3197,8 @@
UNSPEC_LD4))]
"TARGET_SIMD"
"ld4\\t{%S0.<Vtype> - %V0.<Vtype>}, %1"
- [(set_attr "simd_type" "simd_load4")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load4_4reg<q>")]
+)
(define_insn "vec_store_lanesxi<mode>"
[(set (match_operand:XI 0 "aarch64_simd_struct_operand" "=Utv")
@@ -3632,8 +3207,8 @@
UNSPEC_ST4))]
"TARGET_SIMD"
"st4\\t{%S1.<Vtype> - %V1.<Vtype>}, %0"
- [(set_attr "simd_type" "simd_store4")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_store4_4reg<q>")]
+)
;; Reload patterns for AdvSIMD register list operands.
@@ -3665,9 +3240,10 @@
default: gcc_unreachable ();
}
}
- [(set_attr "simd_type" "simd_move,simd_store<nregs>,simd_load<nregs>")
- (set (attr "length") (symbol_ref "aarch64_simd_attr_length_move (insn)"))
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_move,neon_store<nregs>_<nregs>reg_q,\
+ neon_load<nregs>_<nregs>reg_q")
+ (set (attr "length") (symbol_ref "aarch64_simd_attr_length_move (insn)"))]
+)
(define_split
[(set (match_operand:OI 0 "register_operand" "")
@@ -3749,8 +3325,8 @@
(vec_duplicate:VD (const_int 0)))) 0))]
"TARGET_SIMD"
"ld2\\t{%S0.<Vtype> - %T0.<Vtype>}, %1"
- [(set_attr "simd_type" "simd_load2")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load2_2reg<q>")]
+)
(define_insn "aarch64_ld2<mode>_dreg"
[(set (match_operand:OI 0 "register_operand" "=w")
@@ -3766,8 +3342,8 @@
(const_int 0))) 0))]
"TARGET_SIMD"
"ld1\\t{%S0.1d - %T0.1d}, %1"
- [(set_attr "simd_type" "simd_load2")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load1_2reg<q>")]
+)
(define_insn "aarch64_ld3<mode>_dreg"
[(set (match_operand:CI 0 "register_operand" "=w")
@@ -3788,8 +3364,8 @@
(vec_duplicate:VD (const_int 0)))) 0))]
"TARGET_SIMD"
"ld3\\t{%S0.<Vtype> - %U0.<Vtype>}, %1"
- [(set_attr "simd_type" "simd_load3")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load3_3reg<q>")]
+)
(define_insn "aarch64_ld3<mode>_dreg"
[(set (match_operand:CI 0 "register_operand" "=w")
@@ -3810,8 +3386,8 @@
(const_int 0))) 0))]
"TARGET_SIMD"
"ld1\\t{%S0.1d - %U0.1d}, %1"
- [(set_attr "simd_type" "simd_load3")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load1_3reg<q>")]
+)
(define_insn "aarch64_ld4<mode>_dreg"
[(set (match_operand:XI 0 "register_operand" "=w")
@@ -3837,8 +3413,8 @@
(vec_duplicate:VD (const_int 0))))) 0))]
"TARGET_SIMD"
"ld4\\t{%S0.<Vtype> - %V0.<Vtype>}, %1"
- [(set_attr "simd_type" "simd_load4")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load4_4reg<q>")]
+)
(define_insn "aarch64_ld4<mode>_dreg"
[(set (match_operand:XI 0 "register_operand" "=w")
@@ -3864,8 +3440,8 @@
(const_int 0)))) 0))]
"TARGET_SIMD"
"ld1\\t{%S0.1d - %V0.1d}, %1"
- [(set_attr "simd_type" "simd_load4")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load1_4reg<q>")]
+)
(define_expand "aarch64_ld<VSTRUCT:nregs><VDC:mode>"
[(match_operand:VSTRUCT 0 "register_operand" "=w")
@@ -3979,8 +3555,7 @@
UNSPEC_TBL))]
"TARGET_SIMD"
"tbl\\t%0.<Vtype>, {%1.16b}, %2.<Vtype>"
- [(set_attr "simd_type" "simd_tbl")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_tbl1<q>")]
)
;; Two source registers.
@@ -3992,8 +3567,7 @@
UNSPEC_TBL))]
"TARGET_SIMD"
"tbl\\t%0.16b, {%S1.16b - %T1.16b}, %2.16b"
- [(set_attr "simd_type" "simd_tbl")
- (set_attr "simd_mode" "V16QI")]
+ [(set_attr "type" "neon_tbl2_q")]
)
(define_insn_and_split "aarch64_combinev16qi"
@@ -4008,7 +3582,9 @@
{
aarch64_split_combinev16qi (operands);
DONE;
-})
+}
+[(set_attr "type" "multiple")]
+)
(define_insn "aarch64_<PERMUTE:perm_insn><PERMUTE:perm_hilo><mode>"
[(set (match_operand:VALL 0 "register_operand" "=w")
@@ -4017,8 +3593,7 @@
PERMUTE))]
"TARGET_SIMD"
"<PERMUTE:perm_insn><PERMUTE:perm_hilo>\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_<PERMUTE:perm_insn>")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_permute<q>")]
)
(define_insn "aarch64_st2<mode>_dreg"
@@ -4028,8 +3603,8 @@
UNSPEC_ST2))]
"TARGET_SIMD"
"st2\\t{%S1.<Vtype> - %T1.<Vtype>}, %0"
- [(set_attr "simd_type" "simd_store2")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_store2_2reg")]
+)
(define_insn "aarch64_st2<mode>_dreg"
[(set (match_operand:TI 0 "aarch64_simd_struct_operand" "=Utv")
@@ -4038,8 +3613,8 @@
UNSPEC_ST2))]
"TARGET_SIMD"
"st1\\t{%S1.1d - %T1.1d}, %0"
- [(set_attr "simd_type" "simd_store2")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_store1_2reg")]
+)
(define_insn "aarch64_st3<mode>_dreg"
[(set (match_operand:EI 0 "aarch64_simd_struct_operand" "=Utv")
@@ -4048,8 +3623,8 @@
UNSPEC_ST3))]
"TARGET_SIMD"
"st3\\t{%S1.<Vtype> - %U1.<Vtype>}, %0"
- [(set_attr "simd_type" "simd_store3")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_store3_3reg")]
+)
(define_insn "aarch64_st3<mode>_dreg"
[(set (match_operand:EI 0 "aarch64_simd_struct_operand" "=Utv")
@@ -4058,8 +3633,8 @@
UNSPEC_ST3))]
"TARGET_SIMD"
"st1\\t{%S1.1d - %U1.1d}, %0"
- [(set_attr "simd_type" "simd_store3")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_store1_3reg")]
+)
(define_insn "aarch64_st4<mode>_dreg"
[(set (match_operand:OI 0 "aarch64_simd_struct_operand" "=Utv")
@@ -4068,8 +3643,8 @@
UNSPEC_ST4))]
"TARGET_SIMD"
"st4\\t{%S1.<Vtype> - %V1.<Vtype>}, %0"
- [(set_attr "simd_type" "simd_store4")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_store4_4reg")]
+)
(define_insn "aarch64_st4<mode>_dreg"
[(set (match_operand:OI 0 "aarch64_simd_struct_operand" "=Utv")
@@ -4078,8 +3653,8 @@
UNSPEC_ST4))]
"TARGET_SIMD"
"st1\\t{%S1.1d - %V1.1d}, %0"
- [(set_attr "simd_type" "simd_store4")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_store1_4reg")]
+)
(define_expand "aarch64_st<VSTRUCT:nregs><VDC:mode>"
[(match_operand:DI 0 "register_operand" "r")
@@ -4157,8 +3732,8 @@
(match_operand:<VEL> 1 "aarch64_simd_struct_operand" "Utv")))]
"TARGET_SIMD"
"ld1r\\t{%0.<Vtype>}, %1"
- [(set_attr "simd_type" "simd_load1r")
- (set_attr "simd_mode" "<MODE>")])
+ [(set_attr "type" "neon_load1_all_lanes")]
+)
(define_insn "aarch64_frecpe<mode>"
[(set (match_operand:VDQF 0 "register_operand" "=w")
@@ -4166,19 +3741,26 @@
UNSPEC_FRECPE))]
"TARGET_SIMD"
"frecpe\\t%0.<Vtype>, %1.<Vtype>"
- [(set_attr "simd_type" "simd_frecpe")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "neon_fp_recpe_<Vetype><q>")]
+)
+
+(define_insn "aarch64_frecp<FRECP:frecp_suffix><mode>"
+ [(set (match_operand:GPF 0 "register_operand" "=w")
+ (unspec:GPF [(match_operand:GPF 1 "register_operand" "w")]
+ FRECP))]
+ "TARGET_SIMD"
+ "frecp<FRECP:frecp_suffix>\\t%<s>0, %<s>1"
+ [(set_attr "type" "neon_fp_recp<FRECP:frecp_suffix>_<GPF:Vetype><GPF:q>")]
)
(define_insn "aarch64_frecps<mode>"
- [(set (match_operand:VDQF 0 "register_operand" "=w")
- (unspec:VDQF [(match_operand:VDQF 1 "register_operand" "w")
- (match_operand:VDQF 2 "register_operand" "w")]
+ [(set (match_operand:VALLF 0 "register_operand" "=w")
+ (unspec:VALLF [(match_operand:VALLF 1 "register_operand" "w")
+ (match_operand:VALLF 2 "register_operand" "w")]
UNSPEC_FRECPS))]
"TARGET_SIMD"
- "frecps\\t%0.<Vtype>, %1.<Vtype>, %2.<Vtype>"
- [(set_attr "simd_type" "simd_frecps")
- (set_attr "simd_mode" "<MODE>")]
+ "frecps\\t%<v>0<Vmtype>, %<v>1<Vmtype>, %<v>2<Vmtype>"
+ [(set_attr "type" "neon_fp_recps_<Vetype><q>")]
)
;; aes
@@ -4190,8 +3772,8 @@
CRYPTO_AES))]
"TARGET_SIMD && TARGET_CRYPTO"
"aes<aes_op>\\t%0.16b, %2.16b"
- [(set_attr "simd_type" "simd_crypto_aes")
- (set_attr "simd_mode" "V16QI")])
+ [(set_attr "type" "crypto_aes")]
+)
(define_insn "aarch64_crypto_aes<aesmc_op>v16qi"
[(set (match_operand:V16QI 0 "register_operand" "=w")
@@ -4199,8 +3781,8 @@
CRYPTO_AESMC))]
"TARGET_SIMD && TARGET_CRYPTO"
"aes<aesmc_op>\\t%0.16b, %1.16b"
- [(set_attr "simd_type" "simd_crypto_aes")
- (set_attr "simd_mode" "V16QI")])
+ [(set_attr "type" "crypto_aes")]
+)
;; sha1
@@ -4211,8 +3793,8 @@
UNSPEC_SHA1H))]
"TARGET_SIMD && TARGET_CRYPTO"
"sha1h\\t%s0, %s1"
- [(set_attr "simd_type" "simd_crypto_sha1_fast")
- (set_attr "simd_mode" "SI")])
+ [(set_attr "type" "crypto_sha1_fast")]
+)
(define_insn "aarch64_crypto_sha1su1v4si"
[(set (match_operand:V4SI 0 "register_operand" "=w")
@@ -4221,8 +3803,8 @@
UNSPEC_SHA1SU1))]
"TARGET_SIMD && TARGET_CRYPTO"
"sha1su1\\t%0.4s, %2.4s"
- [(set_attr "simd_type" "simd_crypto_sha1_fast")
- (set_attr "simd_mode" "V4SI")])
+ [(set_attr "type" "crypto_sha1_fast")]
+)
(define_insn "aarch64_crypto_sha1<sha1_op>v4si"
[(set (match_operand:V4SI 0 "register_operand" "=w")
@@ -4232,8 +3814,8 @@
CRYPTO_SHA1))]
"TARGET_SIMD && TARGET_CRYPTO"
"sha1<sha1_op>\\t%q0, %s2, %3.4s"
- [(set_attr "simd_type" "simd_crypto_sha1_slow")
- (set_attr "simd_mode" "V4SI")])
+ [(set_attr "type" "crypto_sha1_slow")]
+)
(define_insn "aarch64_crypto_sha1su0v4si"
[(set (match_operand:V4SI 0 "register_operand" "=w")
@@ -4243,9 +3825,8 @@
UNSPEC_SHA1SU0))]
"TARGET_SIMD && TARGET_CRYPTO"
"sha1su0\\t%0.4s, %2.4s, %3.4s"
- [(set_attr "simd_type" "simd_crypto_sha1_xor")
- (set_attr "simd_mode" "V4SI")])
-
+ [(set_attr "type" "crypto_sha1_xor")]
+)
;; sha256
@@ -4257,8 +3838,8 @@
CRYPTO_SHA256))]
"TARGET_SIMD && TARGET_CRYPTO"
"sha256h<sha256_op>\\t%q0, %q2, %3.4s"
- [(set_attr "simd_type" "simd_crypto_sha256_slow")
- (set_attr "simd_mode" "V4SI")])
+ [(set_attr "type" "crypto_sha256_slow")]
+)
(define_insn "aarch64_crypto_sha256su0v4si"
[(set (match_operand:V4SI 0 "register_operand" "=w")
@@ -4267,8 +3848,8 @@
UNSPEC_SHA256SU0))]
"TARGET_SIMD &&TARGET_CRYPTO"
"sha256su0\\t%0.4s, %2.4s"
- [(set_attr "simd_type" "simd_crypto_sha256_fast")
- (set_attr "simd_mode" "V4SI")])
+ [(set_attr "type" "crypto_sha256_fast")]
+)
(define_insn "aarch64_crypto_sha256su1v4si"
[(set (match_operand:V4SI 0 "register_operand" "=w")
@@ -4278,9 +3859,8 @@
UNSPEC_SHA256SU1))]
"TARGET_SIMD &&TARGET_CRYPTO"
"sha256su1\\t%0.4s, %2.4s, %3.4s"
- [(set_attr "simd_type""simd_crypto_sha256_slow")
- (set_attr "simd_mode" "V4SI")])
-
+ [(set_attr "type" "crypto_sha256_slow")]
+)
;; pmull
@@ -4291,8 +3871,8 @@
UNSPEC_PMULL))]
"TARGET_SIMD && TARGET_CRYPTO"
"pmull\\t%0.1q, %1.1d, %2.1d"
- [(set_attr "simd_type" "simd_mul_d_long")
- (set_attr "simd_mode" "TI")])
+ [(set_attr "type" "neon_mul_d_long")]
+)
(define_insn "aarch64_crypto_pmullv2di"
[(set (match_operand:TI 0 "register_operand" "=w")
@@ -4301,5 +3881,5 @@
UNSPEC_PMULL2))]
"TARGET_SIMD && TARGET_CRYPTO"
"pmull2\\t%0.1q, %1.2d, %2.2d"
- [(set_attr "simd_type" "simd_mul_d_long")
- (set_attr "simd_mode" "TI")]) \ No newline at end of file
+ [(set_attr "type" "neon_mul_d_long")]
+)
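The aarch64-simd.md hunks above all apply one mechanical rewrite: each pair of coarse `simd_type`/`simd_mode` attributes is collapsed into a single `type` attribute drawn from the classification shared with the AArch32 backend. A minimal sketch of the rewrite shape (the pattern name and type value here are illustrative, not taken from the patch):

```lisp
;; Before: separate coarse classification ("simd_type") and mode
;; ("simd_mode") attributes on a SIMD pattern.
(define_insn "aarch64_example<mode>"
  ;; ... RTL template and condition elided ...
  [(set_attr "simd_type" "simd_add")
   (set_attr "simd_mode" "<MODE>")]
)

;; After: a single "type" attribute taken from ../arm/types.md; the
;; <q> mode attribute distinguishes 64-bit from 128-bit ("quad")
;; vector forms, so one pattern covers both widths.
(define_insn "aarch64_example<mode>"
  ;; ... RTL template and condition elided ...
  [(set_attr "type" "neon_add<q>")]
)
```

This is what lets the AArch64 port reuse the Cortex-A53 and Cortex-A15 pipeline descriptions included later in the patch, since those schedulers key off the shared `type` values.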
diff --git a/gcc/config/aarch64/aarch64-tune.md b/gcc/config/aarch64/aarch64-tune.md
index 02699e35c3f..84081d1ba57 100644
--- a/gcc/config/aarch64/aarch64-tune.md
+++ b/gcc/config/aarch64/aarch64-tune.md
@@ -1,5 +1,5 @@
;; -*- buffer-read-only: t -*-
;; Generated automatically by gentune.sh from aarch64-cores.def
(define_attr "tune"
- "cortexa53,cortexa57,large,small"
+ "cortexa53,cortexa15"
(const (symbol_ref "((enum attr_tune) aarch64_tune)")))
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 31bb3f8dfcb..5e8e6efae96 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -123,7 +123,7 @@ static bool aarch64_vectorize_vec_perm_const_ok (enum machine_mode vmode,
const unsigned char *sel);
/* The processor for which instructions should be scheduled. */
-enum aarch64_processor aarch64_tune = generic;
+enum aarch64_processor aarch64_tune = cortexa53;
/* The current tuning set. */
const struct tune_params *aarch64_tune_params;
@@ -236,7 +236,7 @@ static const struct processor all_cores[] =
{NAME, IDENT, #ARCH, FLAGS | AARCH64_FL_FOR_ARCH##ARCH, &COSTS##_tunings},
#include "aarch64-cores.def"
#undef AARCH64_CORE
- {"generic", generic, "8", AARCH64_FL_FPSIMD | AARCH64_FL_FOR_ARCH8, &generic_tunings},
+ {"generic", cortexa53, "8", AARCH64_FL_FPSIMD | AARCH64_FL_FOR_ARCH8, &generic_tunings},
{NULL, aarch64_none, NULL, 0, NULL}
};
@@ -5072,7 +5072,7 @@ aarch64_override_options (void)
/* If the user did not specify a processor, choose the default
one for them. This will be the CPU set during configuration using
- --with-cpu, otherwise it is "generic". */
+ --with-cpu, otherwise it is "cortex-a53". */
if (!selected_cpu)
{
selected_cpu = &all_cores[TARGET_CPU_DEFAULT & 0x3f];
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index 99fcce5662d..57e6df10f65 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -465,10 +465,10 @@ enum target_cpus
TARGET_CPU_generic
};
-/* If there is no CPU defined at configure, use "generic" as default. */
+/* If there is no CPU defined at configure, use "cortex-a53" as default. */
#ifndef TARGET_CPU_DEFAULT
#define TARGET_CPU_DEFAULT \
- (TARGET_CPU_generic | (AARCH64_CPU_DEFAULT_FLAGS << 6))
+ (TARGET_CPU_cortexa53 | (AARCH64_CPU_DEFAULT_FLAGS << 6))
#endif
/* The processor for which instructions should be scheduled. */
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 16d586c0596..dea03118bf7 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -112,208 +112,9 @@
;; Instruction types and attributes
;; -------------------------------------------------------------------
-;; Main data types used by the insntructions
-
-(define_attr "mode" "unknown,none,QI,HI,SI,DI,TI,SF,DF,TF"
- (const_string "unknown"))
-
-(define_attr "mode2" "unknown,none,QI,HI,SI,DI,TI,SF,DF,TF"
- (const_string "unknown"))
-
-; The "v8type" attribute is used to for fine grained classification of
-; AArch64 instructions. This table briefly explains the meaning of each type.
-
-; adc add/subtract with carry.
-; adcs add/subtract with carry (setting condition flags).
-; adr calculate address.
-; alu simple alu instruction (no memory or fp regs access).
-; alu_ext simple alu instruction (sign/zero-extended register).
-; alu_shift simple alu instruction, with a source operand shifted by a constant.
-; alus simple alu instruction (setting condition flags).
-; alus_ext simple alu instruction (sign/zero-extended register, setting condition flags).
-; alus_shift simple alu instruction, with a source operand shifted by a constant (setting condition flags).
-; bfm bitfield move operation.
-; branch branch.
-; call subroutine call.
-; ccmp conditional compare.
-; clz count leading zeros/sign bits.
-; csel conditional select.
-; dmb data memory barrier.
-; extend sign/zero-extend (specialised bitfield move).
-; extr extract register-sized bitfield encoding.
-; fpsimd_load load single floating point / simd scalar register from memory.
-; fpsimd_load2 load pair of floating point / simd scalar registers from memory.
-; fpsimd_store store single floating point / simd scalar register to memory.
-; fpsimd_store2 store pair floating point / simd scalar registers to memory.
-; fadd floating point add/sub.
-; fccmp floating point conditional compare.
-; fcmp floating point comparison.
-; fconst floating point load immediate.
-; fcsel floating point conditional select.
-; fcvt floating point convert (float to float).
-; fcvtf2i floating point convert (float to integer).
-; fcvti2f floating point convert (integer to float).
-; fdiv floating point division operation.
-; ffarith floating point abs, neg or cpy.
-; fmadd floating point multiply-add/sub.
-; fminmax floating point min/max.
-; fmov floating point move (float to float).
-; fmovf2i floating point move (float to integer).
-; fmovi2f floating point move (integer to float).
-; fmul floating point multiply.
-; frint floating point round to integral.
-; fsqrt floating point square root.
-; load_acq load-acquire.
-; load load single general register from memory
-; load2 load pair of general registers from memory
-; logic logical operation (register).
-; logic_imm and/or/xor operation (immediate).
-; logic_shift logical operation with shift.
-; logics logical operation (register, setting condition flags).
-; logics_imm and/or/xor operation (immediate, setting condition flags).
-; logics_shift logical operation with shift (setting condition flags).
-; madd integer multiply-add/sub.
-; maddl widening integer multiply-add/sub.
-; misc miscellaneous - any type that doesn't fit into the rest.
-; move integer move operation.
-; move2 double integer move operation.
-; movk move 16-bit immediate with keep.
-; movz move 16-bit immmediate with zero/one.
-; mrs system/special register move.
-; mulh 64x64 to 128-bit multiply (high part).
-; mull widening multiply.
-; mult integer multiply instruction.
-; prefetch memory prefetch.
-; rbit reverse bits.
-; rev reverse bytes.
-; sdiv integer division operation (signed).
-; shift variable shift operation.
-; shift_imm immediate shift operation (specialised bitfield move).
-; store_rel store-release.
-; store store single general register to memory.
-; store2 store pair of general registers to memory.
-; udiv integer division operation (unsigned).
-
-(define_attr "v8type"
- "adc,\
- adcs,\
- adr,\
- alu,\
- alu_ext,\
- alu_shift,\
- alus,\
- alus_ext,\
- alus_shift,\
- bfm,\
- branch,\
- call,\
- ccmp,\
- clz,\
- csel,\
- dmb,\
- div,\
- div64,\
- extend,\
- extr,\
- fpsimd_load,\
- fpsimd_load2,\
- fpsimd_store2,\
- fpsimd_store,\
- fadd,\
- fccmp,\
- fcvt,\
- fcvtf2i,\
- fcvti2f,\
- fcmp,\
- fconst,\
- fcsel,\
- fdiv,\
- ffarith,\
- fmadd,\
- fminmax,\
- fmov,\
- fmovf2i,\
- fmovi2f,\
- fmul,\
- frecpe,\
- frecps,\
- frecpx,\
- frint,\
- fsqrt,\
- load_acq,\
- load1,\
- load2,\
- logic,\
- logic_imm,\
- logic_shift,\
- logics,\
- logics_imm,\
- logics_shift,\
- madd,\
- maddl,\
- misc,\
- move,\
- move2,\
- movk,\
- movz,\
- mrs,\
- mulh,\
- mull,\
- mult,\
- prefetch,\
- rbit,\
- rev,\
- sdiv,\
- shift,\
- shift_imm,\
- store_rel,\
- store1,\
- store2,\
- udiv"
- (const_string "alu"))
-
-
-; The "type" attribute is used by the AArch32 backend. Below is a mapping
-; from "v8type" to "type".
-
-(define_attr "type"
- "alu,alu_shift,block,branch,call,f_2_r,f_cvt,f_flag,f_loads,
- f_loadd,f_stored,f_stores,faddd,fadds,fcmpd,fcmps,fconstd,fconsts,
- fcpys,fdivd,fdivs,ffarithd,ffariths,fmacd,fmacs,fmuld,fmuls,load_byte,
- load1,load2,mult,r_2_f,store1,store2"
- (cond [
- (eq_attr "v8type" "alu_shift,alus_shift,logic_shift,logics_shift") (const_string "alu_shift")
- (eq_attr "v8type" "branch") (const_string "branch")
- (eq_attr "v8type" "call") (const_string "call")
- (eq_attr "v8type" "fmovf2i") (const_string "f_2_r")
- (eq_attr "v8type" "fcvt,fcvtf2i,fcvti2f") (const_string "f_cvt")
- (and (eq_attr "v8type" "fpsimd_load") (eq_attr "mode" "SF")) (const_string "f_loads")
- (and (eq_attr "v8type" "fpsimd_load") (eq_attr "mode" "DF")) (const_string "f_loadd")
- (and (eq_attr "v8type" "fpsimd_store") (eq_attr "mode" "SF")) (const_string "f_stores")
- (and (eq_attr "v8type" "fpsimd_store") (eq_attr "mode" "DF")) (const_string "f_stored")
- (and (eq_attr "v8type" "fadd,fminmax") (eq_attr "mode" "DF")) (const_string "faddd")
- (and (eq_attr "v8type" "fadd,fminmax") (eq_attr "mode" "SF")) (const_string "fadds")
- (and (eq_attr "v8type" "fcmp,fccmp") (eq_attr "mode" "DF")) (const_string "fcmpd")
- (and (eq_attr "v8type" "fcmp,fccmp") (eq_attr "mode" "SF")) (const_string "fcmps")
- (and (eq_attr "v8type" "fconst") (eq_attr "mode" "DF")) (const_string "fconstd")
- (and (eq_attr "v8type" "fconst") (eq_attr "mode" "SF")) (const_string "fconsts")
- (and (eq_attr "v8type" "fdiv,fsqrt") (eq_attr "mode" "DF")) (const_string "fdivd")
- (and (eq_attr "v8type" "fdiv,fsqrt") (eq_attr "mode" "SF")) (const_string "fdivs")
- (and (eq_attr "v8type" "ffarith") (eq_attr "mode" "DF")) (const_string "ffarithd")
- (and (eq_attr "v8type" "ffarith") (eq_attr "mode" "SF")) (const_string "ffariths")
- (and (eq_attr "v8type" "fmadd") (eq_attr "mode" "DF")) (const_string "fmacd")
- (and (eq_attr "v8type" "fmadd") (eq_attr "mode" "SF")) (const_string "fmacs")
- (and (eq_attr "v8type" "fmul") (eq_attr "mode" "DF")) (const_string "fmuld")
- (and (eq_attr "v8type" "fmul") (eq_attr "mode" "SF")) (const_string "fmuls")
- (and (eq_attr "v8type" "load1") (eq_attr "mode" "QI,HI")) (const_string "load_byte")
- (and (eq_attr "v8type" "load1") (eq_attr "mode" "SI,DI,TI")) (const_string "load1")
- (eq_attr "v8type" "load2") (const_string "load2")
- (and (eq_attr "v8type" "mulh,mult,mull,madd,sdiv,udiv") (eq_attr "mode" "SI")) (const_string "mult")
- (eq_attr "v8type" "fmovi2f") (const_string "r_2_f")
- (eq_attr "v8type" "store1") (const_string "store1")
- (eq_attr "v8type" "store2") (const_string "store2")
- ]
- (const_string "alu")))
+; The "type" attribute is included here from the AArch32 backend to be able
+; to share pipeline descriptions.
+(include "../arm/types.md")
;; Attribute that specifies whether or not the instruction touches fp
;; registers.
@@ -345,10 +146,17 @@
;; Processor types.
(include "aarch64-tune.md")
+;; True if the generic scheduling description should be used.
+
+(define_attr "generic_sched" "yes,no"
+ (const (if_then_else
+ (eq_attr "tune" "cortexa53,cortexa15")
+ (const_string "no")
+ (const_string "yes"))))
+
;; Scheduling
-(include "aarch64-generic.md")
-(include "large.md")
-(include "small.md")
+(include "../arm/cortex-a53.md")
+(include "../arm/cortex-a15.md")
;; -------------------------------------------------------------------
;; Jumps and other miscellaneous insns
@@ -358,14 +166,14 @@
[(set (pc) (match_operand:DI 0 "register_operand" "r"))]
""
"br\\t%0"
- [(set_attr "v8type" "branch")]
+ [(set_attr "type" "branch")]
)
(define_insn "jump"
[(set (pc) (label_ref (match_operand 0 "" "")))]
""
"b\\t%l0"
- [(set_attr "v8type" "branch")]
+ [(set_attr "type" "branch")]
)
(define_expand "cbranch<mode>4"
@@ -403,7 +211,7 @@
(pc)))]
""
"b%m0\\t%l2"
- [(set_attr "v8type" "branch")]
+ [(set_attr "type" "branch")]
)
(define_expand "casesi"
@@ -467,14 +275,14 @@
return aarch64_output_casesi (operands);
"
[(set_attr "length" "16")
- (set_attr "v8type" "branch")]
+ (set_attr "type" "branch")]
)
(define_insn "nop"
[(unspec[(const_int 0)] UNSPEC_NOP)]
""
"nop"
- [(set_attr "v8type" "misc")]
+ [(set_attr "type" "no_insn")]
)
(define_expand "prologue"
@@ -508,7 +316,7 @@
[(return)]
""
"ret"
- [(set_attr "v8type" "branch")]
+ [(set_attr "type" "branch")]
)
(define_insn "eh_return"
@@ -516,7 +324,8 @@
UNSPECV_EH_RETURN)]
""
"#"
- [(set_attr "v8type" "branch")]
+ [(set_attr "type" "branch")]
+
)
(define_split
@@ -536,7 +345,8 @@
(pc)))]
""
"<cbz>\\t%<w>0, %l1"
- [(set_attr "v8type" "branch")]
+ [(set_attr "type" "branch")]
+
)
(define_insn "*tb<optab><mode>1"
@@ -554,8 +364,7 @@
return \"ubfx\\t%<w>3, %<w>0, %1, #1\;<cbz>\\t%<w>3, %l2\";
return \"<tbz>\\t%<w>0, %1, %l2\";
"
- [(set_attr "v8type" "branch")
- (set_attr "mode" "<MODE>")
+ [(set_attr "type" "branch")
(set (attr "length")
(if_then_else (and (ge (minus (match_dup 2) (pc)) (const_int -32768))
(lt (minus (match_dup 2) (pc)) (const_int 32764)))
@@ -575,8 +384,7 @@
return \"ubfx\\t%<w>2, %<w>0, <sizem1>, #1\;<cbz>\\t%<w>2, %l1\";
return \"<tbz>\\t%<w>0, <sizem1>, %l1\";
"
- [(set_attr "v8type" "branch")
- (set_attr "mode" "<MODE>")
+ [(set_attr "type" "branch")
(set (attr "length")
(if_then_else (and (ge (minus (match_dup 1) (pc)) (const_int -32768))
(lt (minus (match_dup 1) (pc)) (const_int 32764)))
@@ -620,7 +428,7 @@
(clobber (reg:DI LR_REGNUM))]
""
"blr\\t%0"
- [(set_attr "v8type" "call")]
+ [(set_attr "type" "call")]
)
(define_insn "*call_symbol"
@@ -631,7 +439,7 @@
"GET_CODE (operands[0]) == SYMBOL_REF
&& !aarch64_is_long_call_p (operands[0])"
"bl\\t%a0"
- [(set_attr "v8type" "call")]
+ [(set_attr "type" "call")]
)
(define_expand "call_value"
@@ -668,7 +476,8 @@
(clobber (reg:DI LR_REGNUM))]
""
"blr\\t%1"
- [(set_attr "v8type" "call")]
+ [(set_attr "type" "call")]
+
)
(define_insn "*call_value_symbol"
@@ -680,7 +489,7 @@
"GET_CODE (operands[1]) == SYMBOL_REF
&& !aarch64_is_long_call_p (operands[1])"
"bl\\t%a1"
- [(set_attr "v8type" "call")]
+ [(set_attr "type" "call")]
)
(define_expand "sibcall"
@@ -715,7 +524,8 @@
(use (match_operand 2 "" ""))]
"GET_CODE (operands[0]) == SYMBOL_REF"
"b\\t%a0"
- [(set_attr "v8type" "branch")]
+ [(set_attr "type" "branch")]
+
)
(define_insn "*sibcall_value_insn"
@@ -726,7 +536,7 @@
(use (match_operand 3 "" ""))]
"GET_CODE (operands[1]) == SYMBOL_REF"
"b\\t%a1"
- [(set_attr "v8type" "branch")]
+ [(set_attr "type" "branch")]
)
;; Call subroutine returning any type.
@@ -803,11 +613,9 @@
gcc_unreachable ();
}
}
- [(set_attr "v8type" "move,alu,alu,load1,load1,store1,store1,*,*,*")
- (set_attr "simd_type" "*,*,simd_move_imm,*,*,*,*,simd_movgp,simd_dupgp,simd_dup")
- (set_attr "simd" "*,*,yes,*,*,*,*,yes,yes,yes")
- (set_attr "mode" "<MODE>")
- (set_attr "simd_mode" "<MODE>")]
+ [(set_attr "type" "mov_reg,mov_imm,mov_imm,load1,load1,store1,store1,\
+ neon_from_gp<q>,neon_from_gp<q>,neon_dup")
+ (set_attr "simd" "*,*,yes,*,*,*,*,yes,yes,yes")]
)
(define_expand "mov<mode>"
@@ -841,8 +649,7 @@
fmov\\t%s0, %w1
fmov\\t%w0, %s1
fmov\\t%s0, %s1"
- [(set_attr "v8type" "move,alu,load1,load1,store1,store1,fmov,fmov,fmov")
- (set_attr "mode" "SI")
+ [(set_attr "type" "mov_reg,mov_imm,load1,load1,store1,store1,fmov,fmov,fmov")
(set_attr "fp" "*,*,*,yes,*,yes,yes,yes,yes")]
)
@@ -866,8 +673,8 @@
fmov\\t%x0, %d1
fmov\\t%d0, %d1
movi\\t%d0, %1"
- [(set_attr "v8type" "move,move,move,alu,load1,load1,store1,store1,adr,adr,fmov,fmov,fmov,fmov")
- (set_attr "mode" "DI")
+ [(set_attr "type" "mov_reg,mov_reg,mov_reg,mov_imm,load1,load1,store1,store1,\
+ adr,adr,fmov,fmov,fmov,fmov")
(set_attr "fp" "*,*,*,*,*,yes,*,yes,*,*,yes,yes,yes,*")
(set_attr "simd" "*,*,*,*,*,*,*,*,*,*,*,*,*,yes")]
)
@@ -880,8 +687,7 @@
"UINTVAL (operands[1]) < GET_MODE_BITSIZE (<MODE>mode)
&& UINTVAL (operands[1]) % 16 == 0"
"movk\\t%<w>0, %X2, lsl %1"
- [(set_attr "v8type" "movk")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "mov_imm")]
)
(define_expand "movti"
@@ -911,13 +717,12 @@
stp\\txzr, xzr, %0
ldr\\t%q0, %1
str\\t%q1, %0"
- [(set_attr "v8type" "move2,fmovi2f,fmovf2i,*, \
- load2,store2,store2,fpsimd_load,fpsimd_store")
- (set_attr "simd_type" "*,*,*,simd_move,*,*,*,*,*")
- (set_attr "mode" "DI,DI,DI,TI,DI,DI,DI,TI,TI")
+ [(set_attr "type" "multiple,f_mcr,f_mrc,neon_logic_q, \
+ load2,store2,store2,f_loadd,f_stored")
(set_attr "length" "8,8,8,4,4,4,4,4,4")
- (set_attr "fp" "*,*,*,*,*,*,*,yes,yes")
- (set_attr "simd" "*,*,*,yes,*,*,*,*,*")])
+ (set_attr "simd" "*,*,*,yes,*,*,*,*,*")
+ (set_attr "fp" "*,*,*,*,*,*,*,yes,yes")]
+)
;; Split a TImode register-register or register-immediate move into
;; its component DImode pieces, taking care to handle overlapping
@@ -963,10 +768,8 @@
ldr\\t%w0, %1
str\\t%w1, %0
mov\\t%w0, %w1"
- [(set_attr "v8type" "fmovi2f,fmovf2i,\
- fmov,fconst,fpsimd_load,\
- fpsimd_store,fpsimd_load,fpsimd_store,fmov")
- (set_attr "mode" "SF")]
+ [(set_attr "type" "f_mcr,f_mrc,fmov,fconsts,\
+ f_loads,f_stores,f_loads,f_stores,fmov")]
)
(define_insn "*movdf_aarch64"
@@ -984,10 +787,8 @@
ldr\\t%x0, %1
str\\t%x1, %0
mov\\t%x0, %x1"
- [(set_attr "v8type" "fmovi2f,fmovf2i,\
- fmov,fconst,fpsimd_load,\
- fpsimd_store,fpsimd_load,fpsimd_store,move")
- (set_attr "mode" "DF")]
+ [(set_attr "type" "f_mcr,f_mrc,fmov,fconstd,\
+ f_loadd,f_stored,f_loadd,f_stored,mov_reg")]
)
(define_expand "movtf"
@@ -1024,8 +825,8 @@
str\\t%q1, %0
ldp\\t%0, %H0, %1
stp\\t%1, %H1, %0"
- [(set_attr "v8type" "logic,move2,fmovi2f,fmovf2i,fconst,fconst,fpsimd_load,fpsimd_store,fpsimd_load2,fpsimd_store2")
- (set_attr "mode" "DF,DF,DF,DF,DF,DF,TF,TF,DF,DF")
+ [(set_attr "type" "logic_reg,multiple,f_mcr,f_mrc,fconstd,fconstd,\
+ f_loadd,f_stored,neon_load1_2reg,neon_store1_2reg")
(set_attr "length" "4,8,8,8,4,4,4,4,4,4")
(set_attr "fp" "*,*,yes,yes,*,yes,yes,yes,*,*")
(set_attr "simd" "yes,*,*,*,yes,*,*,*,*,*")]
@@ -1054,8 +855,7 @@
XEXP (operands[1], 0),
GET_MODE_SIZE (<MODE>mode)))"
"ldp\\t%<w>0, %<w>2, %1"
- [(set_attr "v8type" "load2")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "load2")]
)
;; Operands 0 and 2 are tied together by the final condition; so we allow
@@ -1070,8 +870,7 @@
XEXP (operands[0], 0),
GET_MODE_SIZE (<MODE>mode)))"
"stp\\t%<w>1, %<w>3, %0"
- [(set_attr "v8type" "store2")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "store2")]
)
;; Operands 1 and 3 are tied together by the final condition; so we allow
@@ -1086,8 +885,7 @@
XEXP (operands[1], 0),
GET_MODE_SIZE (<MODE>mode)))"
"ldp\\t%<w>0, %<w>2, %1"
- [(set_attr "v8type" "fpsimd_load2")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "neon_load1_2reg<q>")]
)
;; Operands 0 and 2 are tied together by the final condition; so we allow
@@ -1102,8 +900,7 @@
XEXP (operands[0], 0),
GET_MODE_SIZE (<MODE>mode)))"
"stp\\t%<w>1, %<w>3, %0"
- [(set_attr "v8type" "fpsimd_load2")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "neon_store1_2reg<q>")]
)
;; Load pair with writeback. This is primarily used in function epilogues
@@ -1121,8 +918,7 @@
(match_operand:PTR 5 "const_int_operand" "n"))))])]
"INTVAL (operands[5]) == INTVAL (operands[4]) + GET_MODE_SIZE (<GPI:MODE>mode)"
"ldp\\t%<w>2, %<w>3, [%1], %4"
- [(set_attr "v8type" "load2")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "load2")]
)
;; Store pair with writeback. This is primarily used in function prologues
@@ -1140,8 +936,7 @@
(match_operand:GPI 3 "register_operand" "r"))])]
"INTVAL (operands[5]) == INTVAL (operands[4]) + GET_MODE_SIZE (<GPI:MODE>mode)"
"stp\\t%<w>2, %<w>3, [%0, %4]!"
- [(set_attr "v8type" "store2")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "store2")]
)
;; -------------------------------------------------------------------
@@ -1161,8 +956,7 @@
"@
sxtw\t%0, %w1
ldrsw\t%0, %1"
- [(set_attr "v8type" "extend,load1")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "extend,load1")]
)
(define_insn "*zero_extendsidi2_aarch64"
@@ -1172,8 +966,7 @@
"@
uxtw\t%0, %w1
ldr\t%w0, %1"
- [(set_attr "v8type" "extend,load1")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "extend,load1")]
)
(define_expand "<ANY_EXTEND:optab><SHORT:mode><GPI:mode>2"
@@ -1189,8 +982,7 @@
"@
sxt<SHORT:size>\t%<GPI:w>0, %w1
ldrs<SHORT:size>\t%<GPI:w>0, %1"
- [(set_attr "v8type" "extend,load1")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "extend,load1")]
)
(define_insn "*zero_extend<SHORT:mode><GPI:mode>2_aarch64"
@@ -1201,8 +993,7 @@
uxt<SHORT:size>\t%<GPI:w>0, %w1
ldr<SHORT:size>\t%w0, %1
ldr\t%<SHORT:size>0, %1"
- [(set_attr "v8type" "extend,load1,load1")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "extend,load1,load1")]
)
(define_expand "<optab>qihi2"
@@ -1218,8 +1009,7 @@
"@
<su>xtb\t%w0, %w1
<ldrxt>b\t%w0, %1"
- [(set_attr "v8type" "extend,load1")
- (set_attr "mode" "HI")]
+ [(set_attr "type" "extend,load1")]
)
;; -------------------------------------------------------------------
@@ -1262,8 +1052,7 @@
add\\t%w0, %w1, %2
add\\t%w0, %w1, %w2
sub\\t%w0, %w1, #%n2"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_imm,alu_reg,alu_imm")]
)
;; zero_extend version of above
@@ -1278,8 +1067,7 @@
add\\t%w0, %w1, %2
add\\t%w0, %w1, %w2
sub\\t%w0, %w1, #%n2"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_imm,alu_reg,alu_imm")]
)
(define_insn "*adddi3_aarch64"
@@ -1294,42 +1082,41 @@
add\\t%x0, %x1, %x2
sub\\t%x0, %x1, #%n2
add\\t%d0, %d1, %d2"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "DI")
+ [(set_attr "type" "alu_imm,alu_reg,alu_imm,alu_reg")
(set_attr "simd" "*,*,*,yes")]
)
(define_insn "*add<mode>3_compare0"
[(set (reg:CC_NZ CC_REGNUM)
(compare:CC_NZ
- (plus:GPI (match_operand:GPI 1 "register_operand" "%r,r")
- (match_operand:GPI 2 "aarch64_plus_operand" "rI,J"))
+ (plus:GPI (match_operand:GPI 1 "register_operand" "%r,r,r")
+ (match_operand:GPI 2 "aarch64_plus_operand" "r,I,J"))
(const_int 0)))
- (set (match_operand:GPI 0 "register_operand" "=r,r")
+ (set (match_operand:GPI 0 "register_operand" "=r,r,r")
(plus:GPI (match_dup 1) (match_dup 2)))]
""
"@
adds\\t%<w>0, %<w>1, %<w>2
+ adds\\t%<w>0, %<w>1, %<w>2
subs\\t%<w>0, %<w>1, #%n2"
- [(set_attr "v8type" "alus")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_reg,alus_imm,alus_imm")]
)
;; zero_extend version of above
(define_insn "*addsi3_compare0_uxtw"
[(set (reg:CC_NZ CC_REGNUM)
(compare:CC_NZ
- (plus:SI (match_operand:SI 1 "register_operand" "%r,r")
- (match_operand:SI 2 "aarch64_plus_operand" "rI,J"))
+ (plus:SI (match_operand:SI 1 "register_operand" "%r,r,r")
+ (match_operand:SI 2 "aarch64_plus_operand" "r,I,J"))
(const_int 0)))
- (set (match_operand:DI 0 "register_operand" "=r,r")
+ (set (match_operand:DI 0 "register_operand" "=r,r,r")
(zero_extend:DI (plus:SI (match_dup 1) (match_dup 2))))]
""
"@
adds\\t%w0, %w1, %w2
+ adds\\t%w0, %w1, %w2
subs\\t%w0, %w1, #%n2"
- [(set_attr "v8type" "alus")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alus_reg,alus_imm,alus_imm")]
)
(define_insn "*adds_mul_imm_<mode>"
@@ -1345,8 +1132,7 @@
(match_dup 3)))]
""
"adds\\t%<w>0, %<w>3, %<w>1, lsl %p2"
- [(set_attr "v8type" "alus_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_shift_imm")]
)
(define_insn "*subs_mul_imm_<mode>"
@@ -1362,8 +1148,7 @@
(mult:GPI (match_dup 2) (match_dup 3))))]
""
"subs\\t%<w>0, %<w>1, %<w>2, lsl %p3"
- [(set_attr "v8type" "alus_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_shift_imm")]
)
(define_insn "*adds_<optab><ALLX:mode>_<GPI:mode>"
@@ -1377,8 +1162,7 @@
(plus:GPI (ANY_EXTEND:GPI (match_dup 1)) (match_dup 2)))]
""
"adds\\t%<GPI:w>0, %<GPI:w>2, %<GPI:w>1, <su>xt<ALLX:size>"
- [(set_attr "v8type" "alus_ext")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "alus_ext")]
)
(define_insn "*subs_<optab><ALLX:mode>_<GPI:mode>"
@@ -1392,8 +1176,7 @@
(minus:GPI (match_dup 1) (ANY_EXTEND:GPI (match_dup 2))))]
""
"subs\\t%<GPI:w>0, %<GPI:w>1, %<GPI:w>2, <su>xt<ALLX:size>"
- [(set_attr "v8type" "alus_ext")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "alus_ext")]
)
(define_insn "*adds_<optab><mode>_multp2"
@@ -1413,8 +1196,7 @@
(match_dup 4)))]
"aarch64_is_extend_from_extract (<MODE>mode, operands[2], operands[3])"
"adds\\t%<w>0, %<w>4, %<w>1, <su>xt%e3 %p2"
- [(set_attr "v8type" "alus_ext")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_ext")]
)
(define_insn "*subs_<optab><mode>_multp2"
@@ -1434,22 +1216,21 @@
(const_int 0))))]
"aarch64_is_extend_from_extract (<MODE>mode, operands[2], operands[3])"
"subs\\t%<w>0, %<w>4, %<w>1, <su>xt%e3 %p2"
- [(set_attr "v8type" "alus_ext")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_ext")]
)
(define_insn "*add<mode>3nr_compare0"
[(set (reg:CC_NZ CC_REGNUM)
(compare:CC_NZ
- (plus:GPI (match_operand:GPI 0 "register_operand" "%r,r")
- (match_operand:GPI 1 "aarch64_plus_operand" "rI,J"))
+ (plus:GPI (match_operand:GPI 0 "register_operand" "%r,r,r")
+ (match_operand:GPI 1 "aarch64_plus_operand" "r,I,J"))
(const_int 0)))]
""
"@
cmn\\t%<w>0, %<w>1
+ cmn\\t%<w>0, %<w>1
cmp\\t%<w>0, #%n1"
- [(set_attr "v8type" "alus")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_reg,alus_imm,alus_imm")]
)
(define_insn "*compare_neg<mode>"
@@ -1459,8 +1240,7 @@
(match_operand:GPI 1 "register_operand" "r")))]
""
"cmn\\t%<w>1, %<w>0"
- [(set_attr "v8type" "alus")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_reg")]
)
(define_insn "*add_<shift>_<mode>"
@@ -1470,8 +1250,7 @@
(match_operand:GPI 3 "register_operand" "r")))]
""
"add\\t%<w>0, %<w>3, %<w>1, <shift> %2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_shift_imm")]
)
;; zero_extend version of above
@@ -1483,8 +1262,7 @@
(match_operand:SI 3 "register_operand" "r"))))]
""
"add\\t%w0, %w3, %w1, <shift> %2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_shift_imm")]
)
(define_insn "*add_mul_imm_<mode>"
@@ -1494,8 +1272,7 @@
(match_operand:GPI 3 "register_operand" "r")))]
""
"add\\t%<w>0, %<w>3, %<w>1, lsl %p2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_shift_imm")]
)
(define_insn "*add_<optab><ALLX:mode>_<GPI:mode>"
@@ -1504,8 +1281,7 @@
(match_operand:GPI 2 "register_operand" "r")))]
""
"add\\t%<GPI:w>0, %<GPI:w>2, %<GPI:w>1, <su>xt<ALLX:size>"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "alu_ext")]
)
;; zero_extend version of above
@@ -1516,8 +1292,7 @@
(match_operand:GPI 2 "register_operand" "r"))))]
""
"add\\t%w0, %w2, %w1, <su>xt<SHORT:size>"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_ext")]
)
(define_insn "*add_<optab><ALLX:mode>_shft_<GPI:mode>"
@@ -1528,8 +1303,7 @@
(match_operand:GPI 3 "register_operand" "r")))]
""
"add\\t%<GPI:w>0, %<GPI:w>3, %<GPI:w>1, <su>xt<ALLX:size> %2"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "alu_ext")]
)
;; zero_extend version of above
@@ -1542,8 +1316,7 @@
(match_operand:SI 3 "register_operand" "r"))))]
""
"add\\t%w0, %w3, %w1, <su>xt<SHORT:size> %2"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_ext")]
)
(define_insn "*add_<optab><ALLX:mode>_mult_<GPI:mode>"
@@ -1554,8 +1327,7 @@
(match_operand:GPI 3 "register_operand" "r")))]
""
"add\\t%<GPI:w>0, %<GPI:w>3, %<GPI:w>1, <su>xt<ALLX:size> %p2"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "alu_ext")]
)
;; zero_extend version of above
@@ -1567,8 +1339,7 @@
(match_operand:SI 3 "register_operand" "r"))))]
""
"add\\t%w0, %w3, %w1, <su>xt<SHORT:size> %p2"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_ext")]
)
(define_insn "*add_<optab><mode>_multp2"
@@ -1581,8 +1352,7 @@
(match_operand:GPI 4 "register_operand" "r")))]
"aarch64_is_extend_from_extract (<MODE>mode, operands[2], operands[3])"
"add\\t%<w>0, %<w>4, %<w>1, <su>xt%e3 %p2"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_ext")]
)
;; zero_extend version of above
@@ -1597,8 +1367,7 @@
(match_operand:SI 4 "register_operand" "r"))))]
"aarch64_is_extend_from_extract (SImode, operands[2], operands[3])"
"add\\t%w0, %w4, %w1, <su>xt%e3 %p2"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_ext")]
)
(define_insn "*add<mode>3_carryin"
@@ -1610,8 +1379,7 @@
(match_operand:GPI 2 "register_operand" "r"))))]
""
"adc\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "adc_reg")]
)
;; zero_extend version of above
@@ -1625,8 +1393,7 @@
(match_operand:SI 2 "register_operand" "r")))))]
""
"adc\\t%w0, %w1, %w2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "adc_reg")]
)
(define_insn "*add<mode>3_carryin_alt1"
@@ -1638,8 +1405,7 @@
(geu:GPI (reg:CC CC_REGNUM) (const_int 0))))]
""
"adc\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "adc_reg")]
)
;; zero_extend version of above
@@ -1653,8 +1419,7 @@
(geu:SI (reg:CC CC_REGNUM) (const_int 0)))))]
""
"adc\\t%w0, %w1, %w2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "adc_reg")]
)
(define_insn "*add<mode>3_carryin_alt2"
@@ -1666,8 +1431,7 @@
(match_operand:GPI 2 "register_operand" "r")))]
""
"adc\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "adc_reg")]
)
;; zero_extend version of above
@@ -1681,8 +1445,7 @@
(match_operand:SI 2 "register_operand" "r"))))]
""
"adc\\t%w0, %w1, %w2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "adc_reg")]
)
(define_insn "*add<mode>3_carryin_alt3"
@@ -1694,8 +1457,7 @@
(match_operand:GPI 1 "register_operand" "r")))]
""
"adc\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "adc_reg")]
)
;; zero_extend version of above
@@ -1709,8 +1471,7 @@
(match_operand:SI 1 "register_operand" "r"))))]
""
"adc\\t%w0, %w1, %w2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "adc_reg")]
)
(define_insn "*add_uxt<mode>_multp2"
@@ -1725,8 +1486,7 @@
operands[3] = GEN_INT (aarch64_uxt_size (exact_log2 (INTVAL (operands[2])),
INTVAL (operands[3])));
return \"add\t%<w>0, %<w>4, %<w>1, uxt%e3 %p2\";"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_ext")]
)
;; zero_extend version of above
@@ -1743,8 +1503,7 @@
operands[3] = GEN_INT (aarch64_uxt_size (exact_log2 (INTVAL (operands[2])),
INTVAL (operands[3])));
return \"add\t%w0, %w4, %w1, uxt%e3 %p2\";"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_ext")]
)
(define_insn "subsi3"
@@ -1753,8 +1512,7 @@
(match_operand:SI 2 "register_operand" "r")))]
""
"sub\\t%w0, %w1, %w2"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_reg")]
)
;; zero_extend version of above
@@ -1765,8 +1523,7 @@
(match_operand:SI 2 "register_operand" "r"))))]
""
"sub\\t%w0, %w1, %w2"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_reg")]
)
(define_insn "subdi3"
@@ -1777,8 +1534,7 @@
"@
sub\\t%x0, %x1, %x2
sub\\t%d0, %d1, %d2"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "DI")
+ [(set_attr "type" "alu_reg,neon_sub")
(set_attr "simd" "*,yes")]
)
@@ -1792,8 +1548,7 @@
(minus:GPI (match_dup 1) (match_dup 2)))]
""
"subs\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "alus")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_reg")]
)
;; zero_extend version of above
@@ -1806,8 +1561,7 @@
(zero_extend:DI (minus:SI (match_dup 1) (match_dup 2))))]
""
"subs\\t%w0, %w1, %w2"
- [(set_attr "v8type" "alus")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alus_reg")]
)
(define_insn "*sub_<shift>_<mode>"
@@ -1818,8 +1572,7 @@
(match_operand:QI 2 "aarch64_shift_imm_<mode>" "n"))))]
""
"sub\\t%<w>0, %<w>3, %<w>1, <shift> %2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_shift_imm")]
)
;; zero_extend version of above
@@ -1832,8 +1585,7 @@
(match_operand:QI 2 "aarch64_shift_imm_si" "n")))))]
""
"sub\\t%w0, %w3, %w1, <shift> %2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_shift_imm")]
)
(define_insn "*sub_mul_imm_<mode>"
@@ -1844,8 +1596,7 @@
(match_operand:QI 2 "aarch64_pwr_2_<mode>" "n"))))]
""
"sub\\t%<w>0, %<w>3, %<w>1, lsl %p2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_shift_imm")]
)
;; zero_extend version of above
@@ -1858,8 +1609,7 @@
(match_operand:QI 2 "aarch64_pwr_2_si" "n")))))]
""
"sub\\t%w0, %w3, %w1, lsl %p2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_shift_imm")]
)
(define_insn "*sub_<optab><ALLX:mode>_<GPI:mode>"
@@ -1869,8 +1619,7 @@
(match_operand:ALLX 2 "register_operand" "r"))))]
""
"sub\\t%<GPI:w>0, %<GPI:w>1, %<GPI:w>2, <su>xt<ALLX:size>"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "alu_ext")]
)
;; zero_extend version of above
@@ -1882,8 +1631,7 @@
(match_operand:SHORT 2 "register_operand" "r")))))]
""
"sub\\t%w0, %w1, %w2, <su>xt<SHORT:size>"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_ext")]
)
(define_insn "*sub_<optab><ALLX:mode>_shft_<GPI:mode>"
@@ -1894,8 +1642,7 @@
(match_operand 3 "aarch64_imm3" "Ui3"))))]
""
"sub\\t%<GPI:w>0, %<GPI:w>1, %<GPI:w>2, <su>xt<ALLX:size> %3"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "alu_ext")]
)
;; zero_extend version of above
@@ -1908,8 +1655,7 @@
(match_operand 3 "aarch64_imm3" "Ui3")))))]
""
"sub\\t%w0, %w1, %w2, <su>xt<SHORT:size> %3"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_ext")]
)
(define_insn "*sub_<optab><mode>_multp2"
@@ -1922,8 +1668,7 @@
(const_int 0))))]
"aarch64_is_extend_from_extract (<MODE>mode, operands[2], operands[3])"
"sub\\t%<w>0, %<w>4, %<w>1, <su>xt%e3 %p2"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_ext")]
)
;; zero_extend version of above
@@ -1938,8 +1683,7 @@
(const_int 0)))))]
"aarch64_is_extend_from_extract (SImode, operands[2], operands[3])"
"sub\\t%w0, %w4, %w1, <su>xt%e3 %p2"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_ext")]
)
(define_insn "*sub<mode>3_carryin"
@@ -1951,8 +1695,7 @@
(match_operand:GPI 2 "register_operand" "r")))]
""
"sbc\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "adc_reg")]
)
;; zero_extend version of the above
@@ -1966,8 +1709,7 @@
(match_operand:SI 2 "register_operand" "r"))))]
""
"sbc\\t%w0, %w1, %w2"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "adc_reg")]
)
(define_insn "*sub_uxt<mode>_multp2"
@@ -1982,8 +1724,7 @@
operands[3] = GEN_INT (aarch64_uxt_size (exact_log2 (INTVAL (operands[2])),
INTVAL (operands[3])));
return \"sub\t%<w>0, %<w>4, %<w>1, uxt%e3 %p2\";"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_ext")]
)
;; zero_extend version of above
@@ -2000,8 +1741,7 @@
operands[3] = GEN_INT (aarch64_uxt_size (exact_log2 (INTVAL (operands[2])),
INTVAL (operands[3])));
return \"sub\t%w0, %w4, %w1, uxt%e3 %p2\";"
- [(set_attr "v8type" "alu_ext")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_ext")]
)
(define_insn_and_split "absdi2"
@@ -2032,8 +1772,7 @@
GEN_INT (63)))));
DONE;
}
- [(set_attr "v8type" "alu")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "alu_reg")]
)
(define_insn "neg<mode>2"
@@ -2041,8 +1780,8 @@
(neg:GPI (match_operand:GPI 1 "register_operand" "r")))]
""
"neg\\t%<w>0, %<w>1"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_reg")
+ (set_attr "simd" "*")]
)
;; zero_extend version of above
@@ -2051,8 +1790,7 @@
(zero_extend:DI (neg:SI (match_operand:SI 1 "register_operand" "r"))))]
""
"neg\\t%w0, %w1"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_reg")]
)
(define_insn "*ngc<mode>"
@@ -2061,8 +1799,7 @@
(match_operand:GPI 1 "register_operand" "r")))]
""
"ngc\\t%<w>0, %<w>1"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "adc_reg")]
)
(define_insn "*ngcsi_uxtw"
@@ -2072,8 +1809,7 @@
(match_operand:SI 1 "register_operand" "r"))))]
""
"ngc\\t%w0, %w1"
- [(set_attr "v8type" "adc")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "adc_reg")]
)
(define_insn "*neg<mode>2_compare0"
@@ -2084,8 +1820,7 @@
(neg:GPI (match_dup 1)))]
""
"negs\\t%<w>0, %<w>1"
- [(set_attr "v8type" "alus")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_reg")]
)
;; zero_extend version of above
@@ -2097,8 +1832,7 @@
(zero_extend:DI (neg:SI (match_dup 1))))]
""
"negs\\t%w0, %w1"
- [(set_attr "v8type" "alus")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alus_reg")]
)
(define_insn "*neg_<shift><mode>3_compare0"
@@ -2112,8 +1846,7 @@
(neg:GPI (ASHIFT:GPI (match_dup 1) (match_dup 2))))]
""
"negs\\t%<w>0, %<w>1, <shift> %2"
- [(set_attr "v8type" "alus_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_shift_imm")]
)
(define_insn "*neg_<shift>_<mode>2"
@@ -2123,8 +1856,7 @@
(match_operand:QI 2 "aarch64_shift_imm_<mode>" "n"))))]
""
"neg\\t%<w>0, %<w>1, <shift> %2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_shift_imm")]
)
;; zero_extend version of above
@@ -2136,8 +1868,7 @@
(match_operand:QI 2 "aarch64_shift_imm_si" "n")))))]
""
"neg\\t%w0, %w1, <shift> %2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_shift_imm")]
)
(define_insn "*neg_mul_imm_<mode>2"
@@ -2147,8 +1878,7 @@
(match_operand:QI 2 "aarch64_pwr_2_<mode>" "n"))))]
""
"neg\\t%<w>0, %<w>1, lsl %p2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alu_shift_imm")]
)
;; zero_extend version of above
@@ -2160,8 +1890,7 @@
(match_operand:QI 2 "aarch64_pwr_2_si" "n")))))]
""
"neg\\t%w0, %w1, lsl %p2"
- [(set_attr "v8type" "alu_shift")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "alu_shift_imm")]
)
(define_insn "mul<mode>3"
@@ -2170,8 +1899,7 @@
(match_operand:GPI 2 "register_operand" "r")))]
""
"mul\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "mult")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "mul")]
)
;; zero_extend version of above
@@ -2182,8 +1910,7 @@
(match_operand:SI 2 "register_operand" "r"))))]
""
"mul\\t%w0, %w1, %w2"
- [(set_attr "v8type" "mult")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "mul")]
)
(define_insn "*madd<mode>"
@@ -2193,8 +1920,7 @@
(match_operand:GPI 3 "register_operand" "r")))]
""
"madd\\t%<w>0, %<w>1, %<w>2, %<w>3"
- [(set_attr "v8type" "madd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "mla")]
)
;; zero_extend version of above
@@ -2206,8 +1932,7 @@
(match_operand:SI 3 "register_operand" "r"))))]
""
"madd\\t%w0, %w1, %w2, %w3"
- [(set_attr "v8type" "madd")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "mla")]
)
(define_insn "*msub<mode>"
@@ -2218,8 +1943,7 @@
""
"msub\\t%<w>0, %<w>1, %<w>2, %<w>3"
- [(set_attr "v8type" "madd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "mla")]
)
;; zero_extend version of above
@@ -2232,8 +1956,7 @@
""
"msub\\t%w0, %w1, %w2, %w3"
- [(set_attr "v8type" "madd")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "mla")]
)
(define_insn "*mul<mode>_neg"
@@ -2243,8 +1966,7 @@
""
"mneg\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "mult")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "mul")]
)
;; zero_extend version of above
@@ -2256,8 +1978,7 @@
""
"mneg\\t%w0, %w1, %w2"
- [(set_attr "v8type" "mult")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "mul")]
)
(define_insn "<su_optab>mulsidi3"
@@ -2266,8 +1987,7 @@
(ANY_EXTEND:DI (match_operand:SI 2 "register_operand" "r"))))]
""
"<su>mull\\t%0, %w1, %w2"
- [(set_attr "v8type" "mull")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "<su>mull")]
)
(define_insn "<su_optab>maddsidi4"
@@ -2278,8 +1998,7 @@
(match_operand:DI 3 "register_operand" "r")))]
""
"<su>maddl\\t%0, %w1, %w2, %3"
- [(set_attr "v8type" "maddl")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "<su>mlal")]
)
(define_insn "<su_optab>msubsidi4"
@@ -2291,8 +2010,7 @@
(match_operand:SI 2 "register_operand" "r")))))]
""
"<su>msubl\\t%0, %w1, %w2, %3"
- [(set_attr "v8type" "maddl")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "<su>mlal")]
)
(define_insn "*<su_optab>mulsidi_neg"
@@ -2302,8 +2020,7 @@
(ANY_EXTEND:DI (match_operand:SI 2 "register_operand" "r"))))]
""
"<su>mnegl\\t%0, %w1, %w2"
- [(set_attr "v8type" "mull")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "<su>mull")]
)
(define_insn "<su>muldi3_highpart"
@@ -2316,8 +2033,7 @@
(const_int 64))))]
""
"<su>mulh\\t%0, %1, %2"
- [(set_attr "v8type" "mulh")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "<su>mull")]
)
(define_insn "<su_optab>div<mode>3"
@@ -2326,8 +2042,7 @@
(match_operand:GPI 2 "register_operand" "r")))]
""
"<su>div\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "<su>div")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "<su>div")]
)
;; zero_extend version of above
@@ -2338,8 +2053,7 @@
(match_operand:SI 2 "register_operand" "r"))))]
""
"<su>div\\t%w0, %w1, %w2"
- [(set_attr "v8type" "<su>div")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "<su>div")]
)
;; -------------------------------------------------------------------
@@ -2348,14 +2062,14 @@
(define_insn "*cmp<mode>"
[(set (reg:CC CC_REGNUM)
- (compare:CC (match_operand:GPI 0 "register_operand" "r,r")
- (match_operand:GPI 1 "aarch64_plus_operand" "rI,J")))]
+ (compare:CC (match_operand:GPI 0 "register_operand" "r,r,r")
+ (match_operand:GPI 1 "aarch64_plus_operand" "r,I,J")))]
""
"@
cmp\\t%<w>0, %<w>1
+ cmp\\t%<w>0, %<w>1
cmn\\t%<w>0, #%n1"
- [(set_attr "v8type" "alus")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_reg,alus_imm,alus_imm")]
)
(define_insn "*cmp<mode>"
@@ -2366,8 +2080,7 @@
"@
fcmp\\t%<s>0, #0.0
fcmp\\t%<s>0, %<s>1"
- [(set_attr "v8type" "fcmp")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fcmp<s>")]
)
(define_insn "*cmpe<mode>"
@@ -2378,8 +2091,7 @@
"@
fcmpe\\t%<s>0, #0.0
fcmpe\\t%<s>0, %<s>1"
- [(set_attr "v8type" "fcmp")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fcmp<s>")]
)
(define_insn "*cmp_swp_<shift>_reg<mode>"
@@ -2390,8 +2102,7 @@
(match_operand:GPI 2 "aarch64_reg_or_zero" "rZ")))]
""
"cmp\\t%<w>2, %<w>0, <shift> %1"
- [(set_attr "v8type" "alus_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "alus_shift_imm")]
)
(define_insn "*cmp_swp_<optab><ALLX:mode>_reg<GPI:mode>"
@@ -2401,8 +2112,7 @@
(match_operand:GPI 1 "register_operand" "r")))]
""
"cmp\\t%<GPI:w>1, %<GPI:w>0, <su>xt<ALLX:size>"
- [(set_attr "v8type" "alus_ext")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "alus_ext")]
)
(define_insn "*cmp_swp_<optab><ALLX:mode>_shft_<GPI:mode>"
@@ -2414,8 +2124,7 @@
(match_operand:GPI 2 "register_operand" "r")))]
""
"cmp\\t%<GPI:w>2, %<GPI:w>0, <su>xt<ALLX:size> %1"
- [(set_attr "v8type" "alus_ext")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "alus_ext")]
)
;; -------------------------------------------------------------------
@@ -2454,8 +2163,7 @@
[(match_operand 2 "cc_register" "") (const_int 0)]))]
""
"cset\\t%<w>0, %m1"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "csel")]
)
;; zero_extend version of the above
@@ -2466,8 +2174,7 @@
[(match_operand 2 "cc_register" "") (const_int 0)])))]
""
"cset\\t%w0, %m1"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "csel")]
)
(define_insn "cstore<mode>_neg"
@@ -2476,8 +2183,7 @@
[(match_operand 2 "cc_register" "") (const_int 0)])))]
""
"csetm\\t%<w>0, %m1"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "csel")]
)
;; zero_extend version of the above
@@ -2488,8 +2194,7 @@
[(match_operand 2 "cc_register" "") (const_int 0)]))))]
""
"csetm\\t%w0, %m1"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "csel")]
)
(define_expand "cmov<mode>6"
@@ -2542,8 +2247,7 @@
csinc\\t%<w>0, %<w>4, <w>zr, %M1
mov\\t%<w>0, -1
mov\\t%<w>0, 1"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "csel")]
)
;; zero_extend version of above
@@ -2566,8 +2270,7 @@
csinc\\t%w0, %w4, wzr, %M1
mov\\t%w0, -1
mov\\t%w0, 1"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "csel")]
)
(define_insn "*cmov<mode>_insn"
@@ -2579,8 +2282,7 @@
(match_operand:GPF 4 "register_operand" "w")))]
"TARGET_FLOAT"
"fcsel\\t%<s>0, %<s>3, %<s>4, %m1"
- [(set_attr "v8type" "fcsel")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fcsel")]
)
(define_expand "mov<mode>cc"
@@ -2628,8 +2330,8 @@
(match_operand:GPI 1 "register_operand" "r")))]
""
"csinc\\t%<w>0, %<w>1, %<w>1, %M2"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "csel")]
+)
(define_insn "csinc3<mode>_insn"
[(set (match_operand:GPI 0 "register_operand" "=r")
@@ -2641,8 +2343,7 @@
(match_operand:GPI 4 "aarch64_reg_or_zero" "rZ")))]
""
"csinc\\t%<w>0, %<w>4, %<w>3, %M1"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "csel")]
)
(define_insn "*csinv3<mode>_insn"
@@ -2654,8 +2355,8 @@
(match_operand:GPI 4 "aarch64_reg_or_zero" "rZ")))]
""
"csinv\\t%<w>0, %<w>4, %<w>3, %M1"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "csel")]
+)
(define_insn "*csneg3<mode>_insn"
[(set (match_operand:GPI 0 "register_operand" "=r")
@@ -2666,8 +2367,8 @@
(match_operand:GPI 4 "aarch64_reg_or_zero" "rZ")))]
""
"csneg\\t%<w>0, %<w>4, %<w>3, %M1"
- [(set_attr "v8type" "csel")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "csel")]
+)
;; -------------------------------------------------------------------
;; Logical operations
@@ -2679,8 +2380,8 @@
(match_operand:GPI 2 "aarch64_logical_operand" "r,<lconst>")))]
""
"<logical>\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "logic,logic_imm")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logic_reg,logic_imm")]
+)
;; zero_extend version of above
(define_insn "*<optab>si3_uxtw"
@@ -2690,8 +2391,8 @@
(match_operand:SI 2 "aarch64_logical_operand" "r,K"))))]
""
"<logical>\\t%w0, %w1, %w2"
- [(set_attr "v8type" "logic,logic_imm")
- (set_attr "mode" "SI")])
+ [(set_attr "type" "logic_reg,logic_imm")]
+)
(define_insn "*and<mode>3_compare0"
[(set (reg:CC_NZ CC_REGNUM)
@@ -2703,8 +2404,7 @@
(and:GPI (match_dup 1) (match_dup 2)))]
""
"ands\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "logics,logics_imm")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "logics_reg,logics_imm")]
)
;; zero_extend version of above
@@ -2718,8 +2418,7 @@
(zero_extend:DI (and:SI (match_dup 1) (match_dup 2))))]
""
"ands\\t%w0, %w1, %w2"
- [(set_attr "v8type" "logics,logics_imm")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "logics_reg,logics_imm")]
)
(define_insn "*and_<SHIFT:optab><mode>3_compare0"
@@ -2734,8 +2433,7 @@
(and:GPI (SHIFT:GPI (match_dup 1) (match_dup 2)) (match_dup 3)))]
""
"ands\\t%<w>0, %<w>3, %<w>1, <SHIFT:shift> %2"
- [(set_attr "v8type" "logics_shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "logics_shift_imm")]
)
;; zero_extend version of above
@@ -2752,8 +2450,7 @@
(match_dup 3))))]
""
"ands\\t%w0, %w3, %w1, <SHIFT:shift> %2"
- [(set_attr "v8type" "logics_shift")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "logics_shift_imm")]
)
(define_insn "*<LOGICAL:optab>_<SHIFT:optab><mode>3"
@@ -2764,8 +2461,8 @@
(match_operand:GPI 3 "register_operand" "r")))]
""
"<LOGICAL:logical>\\t%<w>0, %<w>3, %<w>1, <SHIFT:shift> %2"
- [(set_attr "v8type" "logic_shift")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logic_shift_imm")]
+)
;; zero_extend version of above
(define_insn "*<LOGICAL:optab>_<SHIFT:optab>si3_uxtw"
@@ -2777,16 +2474,16 @@
(match_operand:SI 3 "register_operand" "r"))))]
""
"<LOGICAL:logical>\\t%w0, %w3, %w1, <SHIFT:shift> %2"
- [(set_attr "v8type" "logic_shift")
- (set_attr "mode" "SI")])
+ [(set_attr "type" "logic_shift_imm")]
+)
(define_insn "one_cmpl<mode>2"
[(set (match_operand:GPI 0 "register_operand" "=r")
(not:GPI (match_operand:GPI 1 "register_operand" "r")))]
""
"mvn\\t%<w>0, %<w>1"
- [(set_attr "v8type" "logic")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logic_reg")]
+)
(define_insn "*one_cmpl_<optab><mode>2"
[(set (match_operand:GPI 0 "register_operand" "=r")
@@ -2794,8 +2491,8 @@
(match_operand:QI 2 "aarch64_shift_imm_<mode>" "n"))))]
""
"mvn\\t%<w>0, %<w>1, <shift> %2"
- [(set_attr "v8type" "logic_shift")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logic_shift_imm")]
+)
(define_insn "*<LOGICAL:optab>_one_cmpl<mode>3"
[(set (match_operand:GPI 0 "register_operand" "=r")
@@ -2804,8 +2501,8 @@
(match_operand:GPI 2 "register_operand" "r")))]
""
"<LOGICAL:nlogical>\\t%<w>0, %<w>2, %<w>1"
- [(set_attr "v8type" "logic")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logic_reg")]
+)
(define_insn "*and_one_cmpl<mode>3_compare0"
[(set (reg:CC_NZ CC_REGNUM)
@@ -2818,8 +2515,8 @@
(and:GPI (not:GPI (match_dup 1)) (match_dup 2)))]
""
"bics\\t%<w>0, %<w>2, %<w>1"
- [(set_attr "v8type" "logics")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logics_reg")]
+)
;; zero_extend version of above
(define_insn "*and_one_cmplsi3_compare0_uxtw"
@@ -2833,8 +2530,8 @@
(zero_extend:DI (and:SI (not:SI (match_dup 1)) (match_dup 2))))]
""
"bics\\t%w0, %w2, %w1"
- [(set_attr "v8type" "logics")
- (set_attr "mode" "SI")])
+ [(set_attr "type" "logics_reg")]
+)
(define_insn "*<LOGICAL:optab>_one_cmpl_<SHIFT:optab><mode>3"
[(set (match_operand:GPI 0 "register_operand" "=r")
@@ -2845,8 +2542,8 @@
(match_operand:GPI 3 "register_operand" "r")))]
""
"<LOGICAL:nlogical>\\t%<w>0, %<w>3, %<w>1, <SHIFT:shift> %2"
- [(set_attr "v8type" "logic_shift")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logic_shift_imm")]
+)
(define_insn "*and_one_cmpl_<SHIFT:optab><mode>3_compare0"
[(set (reg:CC_NZ CC_REGNUM)
@@ -2863,8 +2560,8 @@
(match_dup 1) (match_dup 2))) (match_dup 3)))]
""
"bics\\t%<w>0, %<w>3, %<w>1, <SHIFT:shift> %2"
- [(set_attr "v8type" "logics_shift")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logics_shift_imm")]
+)
;; zero_extend version of above
(define_insn "*and_one_cmpl_<SHIFT:optab>si3_compare0_uxtw"
@@ -2882,16 +2579,16 @@
(SHIFT:SI (match_dup 1) (match_dup 2))) (match_dup 3))))]
""
"bics\\t%w0, %w3, %w1, <SHIFT:shift> %2"
- [(set_attr "v8type" "logics_shift")
- (set_attr "mode" "SI")])
+ [(set_attr "type" "logics_shift_imm")]
+)
(define_insn "clz<mode>2"
[(set (match_operand:GPI 0 "register_operand" "=r")
(clz:GPI (match_operand:GPI 1 "register_operand" "r")))]
""
"clz\\t%<w>0, %<w>1"
- [(set_attr "v8type" "clz")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "clz")]
+)
(define_expand "ffs<mode>2"
[(match_operand:GPI 0 "register_operand")
@@ -2913,16 +2610,16 @@
(unspec:GPI [(match_operand:GPI 1 "register_operand" "r")] UNSPEC_CLS))]
""
"cls\\t%<w>0, %<w>1"
- [(set_attr "v8type" "clz")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "clz")]
+)
(define_insn "rbit<mode>2"
[(set (match_operand:GPI 0 "register_operand" "=r")
(unspec:GPI [(match_operand:GPI 1 "register_operand" "r")] UNSPEC_RBIT))]
""
"rbit\\t%<w>0, %<w>1"
- [(set_attr "v8type" "rbit")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "rbit")]
+)
(define_expand "ctz<mode>2"
[(match_operand:GPI 0 "register_operand")
@@ -2943,8 +2640,8 @@
(const_int 0)))]
""
"tst\\t%<w>0, %<w>1"
- [(set_attr "v8type" "logics")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logics_reg")]
+)
(define_insn "*and_<SHIFT:optab><mode>3nr_compare0"
[(set (reg:CC_NZ CC_REGNUM)
@@ -2956,8 +2653,8 @@
(const_int 0)))]
""
"tst\\t%<w>2, %<w>0, <SHIFT:shift> %1"
- [(set_attr "v8type" "logics_shift")
- (set_attr "mode" "<MODE>")])
+ [(set_attr "type" "logics_shift_imm")]
+)
;; -------------------------------------------------------------------
;; Shifts
@@ -3053,8 +2750,7 @@
(match_operand:QI 2 "aarch64_reg_or_shift_imm_<mode>" "rUs<cmode>")))]
""
"<shift>\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "shift_reg")]
)
;; zero_extend version of above
@@ -3065,8 +2761,7 @@
(match_operand:QI 2 "aarch64_reg_or_shift_imm_si" "rUss"))))]
""
"<shift>\\t%w0, %w1, %w2"
- [(set_attr "v8type" "shift")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "shift_reg")]
)
(define_insn "*ashl<mode>3_insn"
@@ -3075,8 +2770,7 @@
(match_operand:QI 2 "aarch64_reg_or_shift_imm_si" "rUss")))]
""
"lsl\\t%<w>0, %<w>1, %<w>2"
- [(set_attr "v8type" "shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "shift_reg")]
)
(define_insn "*<optab><mode>3_insn"
@@ -3088,8 +2782,7 @@
operands[3] = GEN_INT (<sizen> - UINTVAL (operands[2]));
return "<bfshift>\t%w0, %w1, %2, %3";
}
- [(set_attr "v8type" "bfm")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "bfm")]
)
(define_insn "*extr<mode>5_insn"
@@ -3101,8 +2794,7 @@
"UINTVAL (operands[3]) < GET_MODE_BITSIZE (<MODE>mode) &&
(UINTVAL (operands[3]) + UINTVAL (operands[4]) == GET_MODE_BITSIZE (<MODE>mode))"
"extr\\t%<w>0, %<w>1, %<w>2, %4"
- [(set_attr "v8type" "shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "shift_imm")]
)
;; zero_extend version of the above
@@ -3116,8 +2808,7 @@
"UINTVAL (operands[3]) < 32 &&
(UINTVAL (operands[3]) + UINTVAL (operands[4]) == 32)"
"extr\\t%w0, %w1, %w2, %4"
- [(set_attr "v8type" "shift")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "shift_imm")]
)
(define_insn "*ror<mode>3_insn"
@@ -3129,8 +2820,7 @@
operands[3] = GEN_INT (<sizen> - UINTVAL (operands[2]));
return "ror\\t%<w>0, %<w>1, %3";
}
- [(set_attr "v8type" "shift")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "shift_imm")]
)
;; zero_extend version of the above
@@ -3144,8 +2834,7 @@
operands[3] = GEN_INT (32 - UINTVAL (operands[2]));
return "ror\\t%w0, %w1, %3";
}
- [(set_attr "v8type" "shift")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "shift_imm")]
)
(define_insn "*<ANY_EXTEND:optab><GPI:mode>_ashl<SHORT:mode>"
@@ -3158,8 +2847,7 @@
operands[3] = GEN_INT (<SHORT:sizen> - UINTVAL (operands[2]));
return "<su>bfiz\t%<GPI:w>0, %<GPI:w>1, %2, %3";
}
- [(set_attr "v8type" "bfm")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "bfm")]
)
(define_insn "*zero_extend<GPI:mode>_lshr<SHORT:mode>"
@@ -3172,8 +2860,7 @@
operands[3] = GEN_INT (<SHORT:sizen> - UINTVAL (operands[2]));
return "ubfx\t%<GPI:w>0, %<GPI:w>1, %2, %3";
}
- [(set_attr "v8type" "bfm")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "bfm")]
)
(define_insn "*extend<GPI:mode>_ashr<SHORT:mode>"
@@ -3186,8 +2873,7 @@
operands[3] = GEN_INT (<SHORT:sizen> - UINTVAL (operands[2]));
return "sbfx\\t%<GPI:w>0, %<GPI:w>1, %2, %3";
}
- [(set_attr "v8type" "bfm")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "bfm")]
)
;; -------------------------------------------------------------------
@@ -3210,8 +2896,7 @@
(match_operand 3 "const_int_operand" "n")))]
""
"<su>bfx\\t%<w>0, %<w>1, %3, %2"
- [(set_attr "v8type" "bfm")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "bfm")]
)
;; Bitfield Insert (insv)
@@ -3254,8 +2939,7 @@
|| (UINTVAL (operands[2]) + UINTVAL (operands[1])
> GET_MODE_BITSIZE (<MODE>mode)))"
"bfi\\t%<w>0, %<w>3, %2, %1"
- [(set_attr "v8type" "bfm")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "bfm")]
)
(define_insn "*extr_insv_lower_reg<mode>"
@@ -3269,8 +2953,7 @@
|| (UINTVAL (operands[3]) + UINTVAL (operands[1])
> GET_MODE_BITSIZE (<MODE>mode)))"
"bfxil\\t%<w>0, %<w>2, %3, %1"
- [(set_attr "v8type" "bfm")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "bfm")]
)
(define_insn "*<optab><ALLX:mode>_shft_<GPI:mode>"
@@ -3285,8 +2968,7 @@
: GEN_INT (<GPI:sizen> - UINTVAL (operands[2]));
return "<su>bfiz\t%<GPI:w>0, %<GPI:w>1, %2, %3";
}
- [(set_attr "v8type" "bfm")
- (set_attr "mode" "<GPI:MODE>")]
+ [(set_attr "type" "bfm")]
)
;; XXX We should match (any_extend (ashift)) here, like (and (ashift)) below
@@ -3299,8 +2981,7 @@
"exact_log2 ((INTVAL (operands[3]) >> INTVAL (operands[2])) + 1) >= 0
&& (INTVAL (operands[3]) & ((1 << INTVAL (operands[2])) - 1)) == 0"
"ubfiz\\t%<w>0, %<w>1, %2, %P3"
- [(set_attr "v8type" "bfm")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "bfm")]
)
(define_insn "bswap<mode>2"
@@ -3308,8 +2989,7 @@
(bswap:GPI (match_operand:GPI 1 "register_operand" "r")))]
""
"rev\\t%<w>0, %<w>1"
- [(set_attr "v8type" "rev")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "rev")]
)
(define_insn "bswaphi2"
@@ -3317,8 +2997,7 @@
(bswap:HI (match_operand:HI 1 "register_operand" "r")))]
""
"rev16\\t%w0, %w1"
- [(set_attr "v8type" "rev")
- (set_attr "mode" "HI")]
+ [(set_attr "type" "rev")]
)
;; zero_extend version of above
@@ -3327,8 +3006,7 @@
(zero_extend:DI (bswap:SI (match_operand:SI 1 "register_operand" "r"))))]
""
"rev\\t%w0, %w1"
- [(set_attr "v8type" "rev")
- (set_attr "mode" "SI")]
+ [(set_attr "type" "rev")]
)
;; -------------------------------------------------------------------
@@ -3344,8 +3022,7 @@
FRINT))]
"TARGET_FLOAT"
"frint<frint_suffix>\\t%<s>0, %<s>1"
- [(set_attr "v8type" "frint")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "f_rint<s>")]
)
;; frcvt floating-point round to integer and convert standard patterns.
@@ -3356,9 +3033,7 @@
FCVT)))]
"TARGET_FLOAT"
"fcvt<frint_suffix><su>\\t%<GPI:w>0, %<GPF:s>1"
- [(set_attr "v8type" "fcvtf2i")
- (set_attr "mode" "<GPF:MODE>")
- (set_attr "mode2" "<GPI:MODE>")]
+ [(set_attr "type" "f_cvtf2i")]
)
;; fma - no throw
@@ -3370,8 +3045,7 @@
(match_operand:GPF 3 "register_operand" "w")))]
"TARGET_FLOAT"
"fmadd\\t%<s>0, %<s>1, %<s>2, %<s>3"
- [(set_attr "v8type" "fmadd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fmac<s>")]
)
(define_insn "fnma<mode>4"
@@ -3381,8 +3055,7 @@
(match_operand:GPF 3 "register_operand" "w")))]
"TARGET_FLOAT"
"fmsub\\t%<s>0, %<s>1, %<s>2, %<s>3"
- [(set_attr "v8type" "fmadd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fmac<s>")]
)
(define_insn "fms<mode>4"
@@ -3392,8 +3065,7 @@
(neg:GPF (match_operand:GPF 3 "register_operand" "w"))))]
"TARGET_FLOAT"
"fnmsub\\t%<s>0, %<s>1, %<s>2, %<s>3"
- [(set_attr "v8type" "fmadd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fmac<s>")]
)
(define_insn "fnms<mode>4"
@@ -3403,8 +3075,7 @@
(neg:GPF (match_operand:GPF 3 "register_operand" "w"))))]
"TARGET_FLOAT"
"fnmadd\\t%<s>0, %<s>1, %<s>2, %<s>3"
- [(set_attr "v8type" "fmadd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fmac<s>")]
)
;; If signed zeros are ignored, -(a * b + c) = -a * b - c.
@@ -3415,8 +3086,7 @@
(match_operand:GPF 3 "register_operand" "w"))))]
"!HONOR_SIGNED_ZEROS (<MODE>mode) && TARGET_FLOAT"
"fnmadd\\t%<s>0, %<s>1, %<s>2, %<s>3"
- [(set_attr "v8type" "fmadd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fmac<s>")]
)
;; -------------------------------------------------------------------
@@ -3428,9 +3098,7 @@
(float_extend:DF (match_operand:SF 1 "register_operand" "w")))]
"TARGET_FLOAT"
"fcvt\\t%d0, %s1"
- [(set_attr "v8type" "fcvt")
- (set_attr "mode" "DF")
- (set_attr "mode2" "SF")]
+ [(set_attr "type" "f_cvt")]
)
(define_insn "truncdfsf2"
@@ -3438,9 +3106,7 @@
(float_truncate:SF (match_operand:DF 1 "register_operand" "w")))]
"TARGET_FLOAT"
"fcvt\\t%s0, %d1"
- [(set_attr "v8type" "fcvt")
- (set_attr "mode" "SF")
- (set_attr "mode2" "DF")]
+ [(set_attr "type" "f_cvt")]
)
(define_insn "fix_trunc<GPF:mode><GPI:mode>2"
@@ -3448,9 +3114,7 @@
(fix:GPI (match_operand:GPF 1 "register_operand" "w")))]
"TARGET_FLOAT"
"fcvtzs\\t%<GPI:w>0, %<GPF:s>1"
- [(set_attr "v8type" "fcvtf2i")
- (set_attr "mode" "<GPF:MODE>")
- (set_attr "mode2" "<GPI:MODE>")]
+ [(set_attr "type" "f_cvtf2i")]
)
(define_insn "fixuns_trunc<GPF:mode><GPI:mode>2"
@@ -3458,9 +3122,7 @@
(unsigned_fix:GPI (match_operand:GPF 1 "register_operand" "w")))]
"TARGET_FLOAT"
"fcvtzu\\t%<GPI:w>0, %<GPF:s>1"
- [(set_attr "v8type" "fcvtf2i")
- (set_attr "mode" "<GPF:MODE>")
- (set_attr "mode2" "<GPI:MODE>")]
+ [(set_attr "type" "f_cvtf2i")]
)
(define_insn "float<GPI:mode><GPF:mode>2"
@@ -3468,9 +3130,7 @@
(float:GPF (match_operand:GPI 1 "register_operand" "r")))]
"TARGET_FLOAT"
"scvtf\\t%<GPF:s>0, %<GPI:w>1"
- [(set_attr "v8type" "fcvti2f")
- (set_attr "mode" "<GPF:MODE>")
- (set_attr "mode2" "<GPI:MODE>")]
+ [(set_attr "type" "f_cvti2f")]
)
(define_insn "floatuns<GPI:mode><GPF:mode>2"
@@ -3478,9 +3138,7 @@
(unsigned_float:GPF (match_operand:GPI 1 "register_operand" "r")))]
"TARGET_FLOAT"
"ucvtf\\t%<GPF:s>0, %<GPI:w>1"
- [(set_attr "v8type" "fcvt")
- (set_attr "mode" "<GPF:MODE>")
- (set_attr "mode2" "<GPI:MODE>")]
+ [(set_attr "type" "f_cvt")]
)
;; -------------------------------------------------------------------
@@ -3494,8 +3152,7 @@
(match_operand:GPF 2 "register_operand" "w")))]
"TARGET_FLOAT"
"fadd\\t%<s>0, %<s>1, %<s>2"
- [(set_attr "v8type" "fadd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fadd<s>")]
)
(define_insn "sub<mode>3"
@@ -3505,8 +3162,7 @@
(match_operand:GPF 2 "register_operand" "w")))]
"TARGET_FLOAT"
"fsub\\t%<s>0, %<s>1, %<s>2"
- [(set_attr "v8type" "fadd")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fadd<s>")]
)
(define_insn "mul<mode>3"
@@ -3516,8 +3172,7 @@
(match_operand:GPF 2 "register_operand" "w")))]
"TARGET_FLOAT"
"fmul\\t%<s>0, %<s>1, %<s>2"
- [(set_attr "v8type" "fmul")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fmul<s>")]
)
(define_insn "*fnmul<mode>3"
@@ -3527,8 +3182,7 @@
(match_operand:GPF 2 "register_operand" "w")))]
"TARGET_FLOAT"
"fnmul\\t%<s>0, %<s>1, %<s>2"
- [(set_attr "v8type" "fmul")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fmul<s>")]
)
(define_insn "div<mode>3"
@@ -3538,8 +3192,7 @@
(match_operand:GPF 2 "register_operand" "w")))]
"TARGET_FLOAT"
"fdiv\\t%<s>0, %<s>1, %<s>2"
- [(set_attr "v8type" "fdiv")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fdiv<s>")]
)
(define_insn "neg<mode>2"
@@ -3547,8 +3200,7 @@
(neg:GPF (match_operand:GPF 1 "register_operand" "w")))]
"TARGET_FLOAT"
"fneg\\t%<s>0, %<s>1"
- [(set_attr "v8type" "ffarith")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "ffarith<s>")]
)
(define_insn "sqrt<mode>2"
@@ -3556,8 +3208,7 @@
(sqrt:GPF (match_operand:GPF 1 "register_operand" "w")))]
"TARGET_FLOAT"
"fsqrt\\t%<s>0, %<s>1"
- [(set_attr "v8type" "fsqrt")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "fsqrt<s>")]
)
(define_insn "abs<mode>2"
@@ -3565,8 +3216,7 @@
(abs:GPF (match_operand:GPF 1 "register_operand" "w")))]
"TARGET_FLOAT"
"fabs\\t%<s>0, %<s>1"
- [(set_attr "v8type" "ffarith")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "ffarith<s>")]
)
;; Given that smax/smin do not specify the result when either input is NaN,
@@ -3579,8 +3229,7 @@
(match_operand:GPF 2 "register_operand" "w")))]
"TARGET_FLOAT"
"fmaxnm\\t%<s>0, %<s>1, %<s>2"
- [(set_attr "v8type" "fminmax")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "f_minmax<s>")]
)
(define_insn "smin<mode>3"
@@ -3589,29 +3238,7 @@
(match_operand:GPF 2 "register_operand" "w")))]
"TARGET_FLOAT"
"fminnm\\t%<s>0, %<s>1, %<s>2"
- [(set_attr "v8type" "fminmax")
- (set_attr "mode" "<MODE>")]
-)
-
-(define_insn "aarch64_frecp<FRECP:frecp_suffix><mode>"
- [(set (match_operand:GPF 0 "register_operand" "=w")
- (unspec:GPF [(match_operand:GPF 1 "register_operand" "w")]
- FRECP))]
- "TARGET_FLOAT"
- "frecp<FRECP:frecp_suffix>\\t%<s>0, %<s>1"
- [(set_attr "v8type" "frecp<FRECP:frecp_suffix>")
- (set_attr "mode" "<MODE>")]
-)
-
-(define_insn "aarch64_frecps<mode>"
- [(set (match_operand:GPF 0 "register_operand" "=w")
- (unspec:GPF [(match_operand:GPF 1 "register_operand" "w")
- (match_operand:GPF 2 "register_operand" "w")]
- UNSPEC_FRECPS))]
- "TARGET_FLOAT"
- "frecps\\t%<s>0, %<s>1, %<s>2"
- [(set_attr "v8type" "frecps")
- (set_attr "mode" "<MODE>")]
+ [(set_attr "type" "f_minmax<s>")]
)
;; -------------------------------------------------------------------
@@ -3675,8 +3302,7 @@
(truncate:DI (match_operand:TX 1 "register_operand" "w")))]
"reload_completed || reload_in_progress"
"fmov\\t%x0, %d1"
- [(set_attr "v8type" "fmovf2i")
- (set_attr "mode" "DI")
+ [(set_attr "type" "f_mrc")
(set_attr "length" "4")
])
@@ -3687,8 +3313,7 @@
(const_int 64))))]
"reload_completed || reload_in_progress"
"fmov\\t%x0, %1.d[1]"
- [(set_attr "v8type" "fmovf2i")
- (set_attr "mode" "DI")
+ [(set_attr "type" "f_mrc")
(set_attr "length" "4")
])
@@ -3698,8 +3323,7 @@
(zero_extend:TX (match_operand:DI 1 "register_operand" "r")))]
"reload_completed || reload_in_progress"
"fmov\\t%0.d[1], %x1"
- [(set_attr "v8type" "fmovi2f")
- (set_attr "mode" "DI")
+ [(set_attr "type" "f_mcr")
(set_attr "length" "4")
])
@@ -3708,8 +3332,7 @@
(zero_extend:TX (match_operand:DI 1 "register_operand" "r")))]
"reload_completed || reload_in_progress"
"fmov\\t%d0, %x1"
- [(set_attr "v8type" "fmovi2f")
- (set_attr "mode" "DI")
+ [(set_attr "type" "f_mcr")
(set_attr "length" "4")
])
@@ -3719,8 +3342,7 @@
(truncate:DI (match_operand:TI 1 "register_operand" "w"))))]
"reload_completed || reload_in_progress"
"fmov\\t%d0, %d1"
- [(set_attr "v8type" "fmovi2f")
- (set_attr "mode" "DI")
+ [(set_attr "type" "f_mcr")
(set_attr "length" "4")
])
@@ -3735,9 +3357,7 @@
(match_operand 2 "aarch64_valid_symref" "S")))]
""
"add\\t%0, %1, :lo12:%a2"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "DI")]
-
+ [(set_attr "type" "alu_reg")]
)
(define_insn "ldr_got_small"
@@ -3748,8 +3368,7 @@
UNSPEC_GOTSMALLPIC))]
""
"ldr\\t%0, [%1, #:got_lo12:%a2]"
- [(set_attr "v8type" "load1")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "load1")]
)
(define_insn "ldr_got_tiny"
@@ -3758,8 +3377,7 @@
UNSPEC_GOTTINYPIC))]
""
"ldr\\t%0, %L1"
- [(set_attr "v8type" "load1")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "load1")]
)
(define_insn "aarch64_load_tp_hard"
@@ -3767,8 +3385,7 @@
(unspec:DI [(const_int 0)] UNSPEC_TLS))]
""
"mrs\\t%0, tpidr_el0"
- [(set_attr "v8type" "mrs")
- (set_attr "mode" "DI")]
+ [(set_attr "type" "mrs")]
)
;; The TLS ABI specifically requires that the compiler does not schedule
@@ -3792,7 +3409,7 @@
]
""
"adrp\\tx0, %A1\;add\\tx0, x0, %L1\;bl\\t%2\;nop"
- [(set_attr "v8type" "call")
+ [(set_attr "type" "call")
(set_attr "length" "16")])
(define_insn "tlsie_small"
@@ -3801,8 +3418,7 @@
UNSPEC_GOTSMALLTLS))]
""
"adrp\\t%0, %A1\;ldr\\t%0, [%0, #%L1]"
- [(set_attr "v8type" "load1")
- (set_attr "mode" "DI")
+ [(set_attr "type" "load1")
(set_attr "length" "8")]
)
@@ -3813,8 +3429,7 @@
UNSPEC_GOTSMALLTLS))]
""
"add\\t%0, %1, #%G2\;add\\t%0, %0, #%L2"
- [(set_attr "v8type" "alu")
- (set_attr "mode" "DI")
+ [(set_attr "type" "alu_reg")
(set_attr "length" "8")]
)
@@ -3826,7 +3441,7 @@
(clobber (match_scratch:DI 1 "=r"))]
"TARGET_TLS_DESC"
"adrp\\tx0, %A0\;ldr\\t%1, [x0, #%L0]\;add\\tx0, x0, %L0\;.tlsdesccall\\t%0\;blr\\t%1"
- [(set_attr "v8type" "call")
+ [(set_attr "type" "call")
(set_attr "length" "16")])
(define_insn "stack_tie"
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index 737ed68cbb6..007c19f4f4a 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -346,6 +346,7 @@
(V2SI "s") (V4SI "s")
(V2DI "d") (V2SF "s")
(V4SF "s") (V2DF "d")
+ (SF "s") (DF "d")
(QI "b") (HI "h")
(SI "s") (DI "d")])
@@ -531,6 +532,24 @@
(define_mode_attr fcvt_target [(V2DF "v2di") (V4SF "v4si") (V2SF "v2si")])
(define_mode_attr FCVT_TARGET [(V2DF "V2DI") (V4SF "V4SI") (V2SF "V2SI")])
+;; Defined to '_fp' for types whose element type is a float type.
+(define_mode_attr fp [(V8QI "") (V16QI "")
+ (V4HI "") (V8HI "")
+ (V2SI "") (V4SI "")
+ (DI "") (V2DI "")
+ (V2SF "_fp") (V4SF "_fp")
+ (V2DF "_fp") (DF "_fp")
+ (SF "_fp")])
+
+;; Defined to '_q' for 128-bit types.
+(define_mode_attr q [(V8QI "") (V16QI "_q")
+ (V4HI "") (V8HI "_q")
+ (V2SI "") (V4SI "_q")
+ (DI "") (V2DI "_q")
+ (V2SF "") (V4SF "_q")
+ (V2DF "_q")
+ (QI "") (HI "") (SI "") (SF "") (DF "")])
+
;; -------------------------------------------------------------------
;; Code Iterators
;; -------------------------------------------------------------------
diff --git a/gcc/config/aarch64/large.md b/gcc/config/aarch64/large.md
deleted file mode 100644
index 4316cc7dfaf..00000000000
--- a/gcc/config/aarch64/large.md
+++ /dev/null
@@ -1,312 +0,0 @@
-;; Copyright (C) 2012-2013 Free Software Foundation, Inc.
-;;
-;; Contributed by ARM Ltd.
-;;
-;; This file is part of GCC.
-;;
-;; GCC is free software; you can redistribute it and/or modify it
-;; under the terms of the GNU General Public License as published by
-;; the Free Software Foundation; either version 3, or (at your option)
-;; any later version.
-;;
-;; GCC is distributed in the hope that it will be useful, but
-;; WITHOUT ANY WARRANTY; without even the implied warranty of
-;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-;; General Public License for more details.
-;;
-;; You should have received a copy of the GNU General Public License
-;; along with GCC; see the file COPYING3. If not see
-;; <http://www.gnu.org/licenses/>.
-
-;; In the absence of any ARMv8-A implementations, two examples derived
-;; from ARM's most recent ARMv7-A cores (Cortex-A7 and Cortex-A15) are
-;; included by way of example. This is a temporary measure.
-
-;; Example pipeline description for an example 'large' core
-;; implementing AArch64
-
-;;-------------------------------------------------------
-;; General Description
-;;-------------------------------------------------------
-
-(define_automaton "large_cpu")
-
-;; The core is modelled as a triple issue pipeline that has
-;; the following dispatch units.
-;; 1. Two pipelines for simple integer operations: int1, int2
-;; 2. Two pipelines for SIMD and FP data-processing operations: fpsimd1, fpsimd2
-;; 3. One pipeline for branch operations: br
-;; 4. One pipeline for integer multiply and divide operations: multdiv
-;; 5. Two pipelines for load and store operations: ls1, ls2
-;;
-;; We can issue into three pipelines per-cycle.
-;;
-;; We assume that where we have unit pairs xxx1 is always filled before xxx2.
-
-;;-------------------------------------------------------
-;; CPU Units and Reservations
-;;-------------------------------------------------------
-
-;; The three issue units
-(define_cpu_unit "large_cpu_unit_i1, large_cpu_unit_i2, large_cpu_unit_i3" "large_cpu")
-
-(define_reservation "large_cpu_resv_i1"
- "(large_cpu_unit_i1 | large_cpu_unit_i2 | large_cpu_unit_i3)")
-
-(define_reservation "large_cpu_resv_i2"
- "((large_cpu_unit_i1 + large_cpu_unit_i2) | (large_cpu_unit_i2 + large_cpu_unit_i3))")
-
-(define_reservation "large_cpu_resv_i3"
- "(large_cpu_unit_i1 + large_cpu_unit_i2 + large_cpu_unit_i3)")
-
-(final_presence_set "large_cpu_unit_i2" "large_cpu_unit_i1")
-(final_presence_set "large_cpu_unit_i3" "large_cpu_unit_i2")
-
-;; The main dispatch units
-(define_cpu_unit "large_cpu_unit_int1, large_cpu_unit_int2" "large_cpu")
-(define_cpu_unit "large_cpu_unit_fpsimd1, large_cpu_unit_fpsimd2" "large_cpu")
-(define_cpu_unit "large_cpu_unit_ls1, large_cpu_unit_ls2" "large_cpu")
-(define_cpu_unit "large_cpu_unit_br" "large_cpu")
-(define_cpu_unit "large_cpu_unit_multdiv" "large_cpu")
-
-(define_reservation "large_cpu_resv_ls" "(large_cpu_unit_ls1 | large_cpu_unit_ls2)")
-
-;; The extended load-store pipeline
-(define_cpu_unit "large_cpu_unit_load, large_cpu_unit_store" "large_cpu")
-
-;; The extended ALU pipeline
-(define_cpu_unit "large_cpu_unit_int1_alu, large_cpu_unit_int2_alu" "large_cpu")
-(define_cpu_unit "large_cpu_unit_int1_shf, large_cpu_unit_int2_shf" "large_cpu")
-(define_cpu_unit "large_cpu_unit_int1_sat, large_cpu_unit_int2_sat" "large_cpu")
-
-
-;;-------------------------------------------------------
-;; Simple ALU Instructions
-;;-------------------------------------------------------
-
-;; Simple ALU operations without shift
-(define_insn_reservation "large_cpu_alu" 2
- (and (eq_attr "tune" "large") (eq_attr "v8type" "adc,alu,alu_ext"))
- "large_cpu_resv_i1, \
- (large_cpu_unit_int1, large_cpu_unit_int1_alu) |\
- (large_cpu_unit_int2, large_cpu_unit_int2_alu)")
-
-(define_insn_reservation "large_cpu_logic" 2
- (and (eq_attr "tune" "large") (eq_attr "v8type" "logic,logic_imm"))
- "large_cpu_resv_i1, \
- (large_cpu_unit_int1, large_cpu_unit_int1_alu) |\
- (large_cpu_unit_int2, large_cpu_unit_int2_alu)")
-
-(define_insn_reservation "large_cpu_shift" 2
- (and (eq_attr "tune" "large") (eq_attr "v8type" "shift,shift_imm"))
- "large_cpu_resv_i1, \
- (large_cpu_unit_int1, large_cpu_unit_int1_shf) |\
- (large_cpu_unit_int2, large_cpu_unit_int2_shf)")
-
-;; Simple ALU operations with immediate shift
-(define_insn_reservation "large_cpu_alu_shift" 3
- (and (eq_attr "tune" "large") (eq_attr "v8type" "alu_shift"))
- "large_cpu_resv_i1, \
- (large_cpu_unit_int1,
- large_cpu_unit_int1 + large_cpu_unit_int1_shf, large_cpu_unit_int1_alu) | \
- (large_cpu_unit_int2,
- large_cpu_unit_int2 + large_cpu_unit_int2_shf, large_cpu_unit_int2_alu)")
-
-(define_insn_reservation "large_cpu_logic_shift" 3
- (and (eq_attr "tune" "large") (eq_attr "v8type" "logic_shift"))
- "large_cpu_resv_i1, \
- (large_cpu_unit_int1, large_cpu_unit_int1_alu) |\
- (large_cpu_unit_int2, large_cpu_unit_int2_alu)")
-
-
-;;-------------------------------------------------------
-;; Multiplication/Division
-;;-------------------------------------------------------
-
-;; Simple multiplication
-(define_insn_reservation "large_cpu_mult_single" 3
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "mult,madd") (eq_attr "mode" "SI")))
- "large_cpu_resv_i1, large_cpu_unit_multdiv")
-
-(define_insn_reservation "large_cpu_mult_double" 4
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "mult,madd") (eq_attr "mode" "DI")))
- "large_cpu_resv_i1, large_cpu_unit_multdiv")
-
-;; 64-bit multiplication
-(define_insn_reservation "large_cpu_mull" 4
- (and (eq_attr "tune" "large") (eq_attr "v8type" "mull,mulh,maddl"))
- "large_cpu_resv_i1, large_cpu_unit_multdiv * 2")
-
-;; Division
-(define_insn_reservation "large_cpu_udiv_single" 9
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "udiv") (eq_attr "mode" "SI")))
- "large_cpu_resv_i1, large_cpu_unit_multdiv")
-
-(define_insn_reservation "large_cpu_udiv_double" 18
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "udiv") (eq_attr "mode" "DI")))
- "large_cpu_resv_i1, large_cpu_unit_multdiv")
-
-(define_insn_reservation "large_cpu_sdiv_single" 10
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "sdiv") (eq_attr "mode" "SI")))
- "large_cpu_resv_i1, large_cpu_unit_multdiv")
-
-(define_insn_reservation "large_cpu_sdiv_double" 20
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "sdiv") (eq_attr "mode" "DI")))
- "large_cpu_resv_i1, large_cpu_unit_multdiv")
-
-
-;;-------------------------------------------------------
-;; Branches
-;;-------------------------------------------------------
-
-;; Branches take one issue slot.
-;; No latency as there is no result
-(define_insn_reservation "large_cpu_branch" 0
- (and (eq_attr "tune" "large") (eq_attr "v8type" "branch"))
- "large_cpu_resv_i1, large_cpu_unit_br")
-
-
-;; Calls take up all issue slots, and form a block in the
-;; pipeline. The result however is available the next cycle.
-;; Addition of new units requires this to be updated.
-(define_insn_reservation "large_cpu_call" 1
- (and (eq_attr "tune" "large") (eq_attr "v8type" "call"))
- "large_cpu_resv_i3 | large_cpu_resv_i2, \
- large_cpu_unit_int1 + large_cpu_unit_int2 + large_cpu_unit_br + \
- large_cpu_unit_multdiv + large_cpu_unit_fpsimd1 + large_cpu_unit_fpsimd2 + \
- large_cpu_unit_ls1 + large_cpu_unit_ls2,\
- large_cpu_unit_int1_alu + large_cpu_unit_int1_shf + large_cpu_unit_int1_sat + \
- large_cpu_unit_int2_alu + large_cpu_unit_int2_shf + \
- large_cpu_unit_int2_sat + large_cpu_unit_load + large_cpu_unit_store")
-
-
-;;-------------------------------------------------------
-;; Load/Store Instructions
-;;-------------------------------------------------------
-
-;; Loads of up to two words.
-(define_insn_reservation "large_cpu_load1" 4
- (and (eq_attr "tune" "large") (eq_attr "v8type" "load_acq,load1,load2"))
- "large_cpu_resv_i1, large_cpu_resv_ls, large_cpu_unit_load, nothing")
-
-;; Stores of up to two words.
-(define_insn_reservation "large_cpu_store1" 0
- (and (eq_attr "tune" "large") (eq_attr "v8type" "store_rel,store1,store2"))
- "large_cpu_resv_i1, large_cpu_resv_ls, large_cpu_unit_store")
-
-
-;;-------------------------------------------------------
-;; Floating-point arithmetic.
-;;-------------------------------------------------------
-
-(define_insn_reservation "large_cpu_fpalu" 4
- (and (eq_attr "tune" "large")
- (eq_attr "v8type" "ffarith,fadd,fccmp,fcvt,fcmp"))
- "large_cpu_resv_i1 + large_cpu_unit_fpsimd1")
-
-(define_insn_reservation "large_cpu_fconst" 3
- (and (eq_attr "tune" "large")
- (eq_attr "v8type" "fconst"))
- "large_cpu_resv_i1 + large_cpu_unit_fpsimd1")
-
-(define_insn_reservation "large_cpu_fpmuls" 4
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "fmul,fmadd") (eq_attr "mode" "SF")))
- "large_cpu_resv_i1 + large_cpu_unit_fpsimd1")
-
-(define_insn_reservation "large_cpu_fpmuld" 7
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "fmul,fmadd") (eq_attr "mode" "DF")))
- "large_cpu_resv_i1 + large_cpu_unit_fpsimd1, large_cpu_unit_fpsimd1 * 2,\
- large_cpu_resv_i1 + large_cpu_unit_fpsimd1")
-
-
-;;-------------------------------------------------------
-;; Floating-point Division
-;;-------------------------------------------------------
-
-;; Single-precision divide takes 14 cycles to complete, and this
-;; includes the time taken for the special instruction used to collect the
-;; result to travel down the multiply pipeline.
-
-(define_insn_reservation "large_cpu_fdivs" 14
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "fdiv,fsqrt") (eq_attr "mode" "SF")))
- "large_cpu_resv_i1, large_cpu_unit_fpsimd1 * 13")
-
-(define_insn_reservation "large_cpu_fdivd" 29
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "fdiv,fsqrt") (eq_attr "mode" "DF")))
- "large_cpu_resv_i1, large_cpu_unit_fpsimd1 * 28")
-
-
-
-;;-------------------------------------------------------
-;; Floating-point Transfers
-;;-------------------------------------------------------
-
-(define_insn_reservation "large_cpu_i2f" 4
- (and (eq_attr "tune" "large")
- (eq_attr "v8type" "fmovi2f"))
- "large_cpu_resv_i1")
-
-(define_insn_reservation "large_cpu_f2i" 2
- (and (eq_attr "tune" "large")
- (eq_attr "v8type" "fmovf2i"))
- "large_cpu_resv_i1")
-
-
-;;-------------------------------------------------------
-;; Floating-point Load/Store
-;;-------------------------------------------------------
-
-(define_insn_reservation "large_cpu_floads" 4
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "fpsimd_load,fpsimd_load2") (eq_attr "mode" "SF")))
- "large_cpu_resv_i1")
-
-(define_insn_reservation "large_cpu_floadd" 5
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "fpsimd_load,fpsimd_load2") (eq_attr "mode" "DF")))
- "large_cpu_resv_i1 + large_cpu_unit_br, large_cpu_resv_i1")
-
-(define_insn_reservation "large_cpu_fstores" 0
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "fpsimd_store,fpsimd_store2") (eq_attr "mode" "SF")))
- "large_cpu_resv_i1")
-
-(define_insn_reservation "large_cpu_fstored" 0
- (and (eq_attr "tune" "large")
- (and (eq_attr "v8type" "fpsimd_store,fpsimd_store2") (eq_attr "mode" "DF")))
- "large_cpu_resv_i1 + large_cpu_unit_br, large_cpu_resv_i1")
-
-
-;;-------------------------------------------------------
-;; Bypasses
-;;-------------------------------------------------------
-
-(define_bypass 1 "large_cpu_alu, large_cpu_logic, large_cpu_shift"
- "large_cpu_alu, large_cpu_alu_shift, large_cpu_logic, large_cpu_logic_shift, large_cpu_shift")
-
-(define_bypass 2 "large_cpu_alu_shift, large_cpu_logic_shift"
- "large_cpu_alu, large_cpu_alu_shift, large_cpu_logic, large_cpu_logic_shift, large_cpu_shift")
-
-(define_bypass 1 "large_cpu_alu, large_cpu_logic, large_cpu_shift" "large_cpu_load1")
-
-(define_bypass 2 "large_cpu_alu_shift, large_cpu_logic_shift" "large_cpu_load1")
-
-(define_bypass 2 "large_cpu_floads"
- "large_cpu_fpalu, large_cpu_fpmuld,\
- large_cpu_fdivs, large_cpu_fdivd,\
- large_cpu_f2i")
-
-(define_bypass 3 "large_cpu_floadd"
- "large_cpu_fpalu, large_cpu_fpmuld,\
- large_cpu_fdivs, large_cpu_fdivd,\
- large_cpu_f2i")
diff --git a/gcc/config/aarch64/small.md b/gcc/config/aarch64/small.md
deleted file mode 100644
index a19083ccff2..00000000000
--- a/gcc/config/aarch64/small.md
+++ /dev/null
@@ -1,287 +0,0 @@
-;; Copyright (C) 2012-2013 Free Software Foundation, Inc.
-;;
-;; Contributed by ARM Ltd.
-;;
-;; This file is part of GCC.
-;;
-;; GCC is free software; you can redistribute it and/or modify it
-;; under the terms of the GNU General Public License as published by
-;; the Free Software Foundation; either version 3, or (at your option)
-;; any later version.
-;;
-;; GCC is distributed in the hope that it will be useful, but
-;; WITHOUT ANY WARRANTY; without even the implied warranty of
-;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-;; General Public License for more details.
-;;
-;; You should have received a copy of the GNU General Public License
-;; along with GCC; see the file COPYING3. If not see
-;; <http://www.gnu.org/licenses/>.
-
-;; In the absence of any ARMv8-A implementations, two examples derived
-;; from ARM's most recent ARMv7-A cores (Cortex-A7 and Cortex-A15) are
-;; included by way of example. This is a temporary measure.
-
-;; Example pipeline description for an example 'small' core
-;; implementing AArch64
-
-;;-------------------------------------------------------
-;; General Description
-;;-------------------------------------------------------
-
-(define_automaton "small_cpu")
-
-;; The core is modelled as a single issue pipeline with the following
-;; dispatch units.
-;; 1. One pipeline for simple instructions.
-;; 2. One pipeline for branch instructions.
-;;
-;; There are five pipeline stages.
-;; The decode/issue stages operate the same for all instructions.
-;; Instructions always advance one stage per cycle in order.
-;; Only branch instructions may dual-issue with other instructions, except
-;; when those instructions take multiple cycles to issue.
-
-
-;;-------------------------------------------------------
-;; CPU Units and Reservations
-;;-------------------------------------------------------
-
-(define_cpu_unit "small_cpu_unit_i" "small_cpu")
-(define_cpu_unit "small_cpu_unit_br" "small_cpu")
-
-;; Pseudo-unit for blocking the multiply pipeline when a double-precision
-;; multiply is in progress.
-(define_cpu_unit "small_cpu_unit_fpmul_pipe" "small_cpu")
-
-;; The floating-point add pipeline, used to model the usage
-;; of the add pipeline by fp alu instructions.
-(define_cpu_unit "small_cpu_unit_fpadd_pipe" "small_cpu")
-
-;; Floating-point division pipeline (long latency, out-of-order completion).
-(define_cpu_unit "small_cpu_unit_fpdiv" "small_cpu")
-
-
-;;-------------------------------------------------------
-;; Simple ALU Instructions
-;;-------------------------------------------------------
-
-;; Simple ALU operations without shift
-(define_insn_reservation "small_cpu_alu" 2
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "adc,alu,alu_ext"))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_logic" 2
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "logic,logic_imm"))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_shift" 2
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "shift,shift_imm"))
- "small_cpu_unit_i")
-
-;; Simple ALU operations with immediate shift
-(define_insn_reservation "small_cpu_alu_shift" 2
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "alu_shift"))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_logic_shift" 2
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "logic_shift"))
- "small_cpu_unit_i")
-
-
-;;-------------------------------------------------------
-;; Multiplication/Division
-;;-------------------------------------------------------
-
-;; Simple multiplication
-(define_insn_reservation "small_cpu_mult_single" 2
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "mult,madd") (eq_attr "mode" "SI")))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_mult_double" 3
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "mult,madd") (eq_attr "mode" "DI")))
- "small_cpu_unit_i")
-
-;; 64-bit multiplication
-(define_insn_reservation "small_cpu_mull" 3
- (and (eq_attr "tune" "small") (eq_attr "v8type" "mull,mulh,maddl"))
- "small_cpu_unit_i * 2")
-
-;; Division
-(define_insn_reservation "small_cpu_udiv_single" 5
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "udiv") (eq_attr "mode" "SI")))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_udiv_double" 10
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "udiv") (eq_attr "mode" "DI")))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_sdiv_single" 6
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "sdiv") (eq_attr "mode" "SI")))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_sdiv_double" 12
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "sdiv") (eq_attr "mode" "DI")))
- "small_cpu_unit_i")
-
-
-;;-------------------------------------------------------
-;; Load/Store Instructions
-;;-------------------------------------------------------
-
-(define_insn_reservation "small_cpu_load1" 2
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "load_acq,load1"))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_store1" 0
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "store_rel,store1"))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_load2" 3
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "load2"))
- "small_cpu_unit_i + small_cpu_unit_br, small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_store2" 0
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "store2"))
- "small_cpu_unit_i + small_cpu_unit_br, small_cpu_unit_i")
-
-
-;;-------------------------------------------------------
-;; Branches
-;;-------------------------------------------------------
-
-;; Direct branches are the only instructions that can dual-issue.
-;; The latency here represents when the branch actually takes place.
-
-(define_insn_reservation "small_cpu_unit_br" 3
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "branch,call"))
- "small_cpu_unit_br")
-
-
-;;-------------------------------------------------------
-;; Floating-point arithmetic.
-;;-------------------------------------------------------
-
-(define_insn_reservation "small_cpu_fpalu" 4
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "ffarith,fadd,fccmp,fcvt,fcmp"))
- "small_cpu_unit_i + small_cpu_unit_fpadd_pipe")
-
-(define_insn_reservation "small_cpu_fconst" 3
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "fconst"))
- "small_cpu_unit_i + small_cpu_unit_fpadd_pipe")
-
-(define_insn_reservation "small_cpu_fpmuls" 4
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "fmul") (eq_attr "mode" "SF")))
- "small_cpu_unit_i + small_cpu_unit_fpmul_pipe")
-
-(define_insn_reservation "small_cpu_fpmuld" 7
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "fmul") (eq_attr "mode" "DF")))
- "small_cpu_unit_i + small_cpu_unit_fpmul_pipe, small_cpu_unit_fpmul_pipe * 2,\
- small_cpu_unit_i + small_cpu_unit_fpmul_pipe")
-
-
-;;-------------------------------------------------------
-;; Floating-point Division
-;;-------------------------------------------------------
-
-;; Single-precision divide takes 14 cycles to complete, and this
-;; includes the time taken for the special instruction used to collect the
-;; result to travel down the multiply pipeline.
-
-(define_insn_reservation "small_cpu_fdivs" 14
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "fdiv,fsqrt") (eq_attr "mode" "SF")))
- "small_cpu_unit_i, small_cpu_unit_fpdiv * 13")
-
-(define_insn_reservation "small_cpu_fdivd" 29
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "fdiv,fsqrt") (eq_attr "mode" "DF")))
- "small_cpu_unit_i, small_cpu_unit_fpdiv * 28")
-
-
-;;-------------------------------------------------------
-;; Floating-point Transfers
-;;-------------------------------------------------------
-
-(define_insn_reservation "small_cpu_i2f" 4
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "fmovi2f"))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_f2i" 2
- (and (eq_attr "tune" "small")
- (eq_attr "v8type" "fmovf2i"))
- "small_cpu_unit_i")
-
-
-;;-------------------------------------------------------
-;; Floating-point Load/Store
-;;-------------------------------------------------------
-
-(define_insn_reservation "small_cpu_floads" 4
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "fpsimd_load") (eq_attr "mode" "SF")))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_floadd" 5
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "fpsimd_load") (eq_attr "mode" "DF")))
- "small_cpu_unit_i + small_cpu_unit_br, small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_fstores" 0
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "fpsimd_store") (eq_attr "mode" "SF")))
- "small_cpu_unit_i")
-
-(define_insn_reservation "small_cpu_fstored" 0
- (and (eq_attr "tune" "small")
- (and (eq_attr "v8type" "fpsimd_store") (eq_attr "mode" "DF")))
- "small_cpu_unit_i + small_cpu_unit_br, small_cpu_unit_i")
-
-
-;;-------------------------------------------------------
-;; Bypasses
-;;-------------------------------------------------------
-
-;; Forwarding path for unshifted operands.
-
-(define_bypass 1 "small_cpu_alu, small_cpu_alu_shift"
- "small_cpu_alu, small_cpu_alu_shift, small_cpu_logic, small_cpu_logic_shift, small_cpu_shift")
-
-(define_bypass 1 "small_cpu_logic, small_cpu_logic_shift"
- "small_cpu_alu, small_cpu_alu_shift, small_cpu_logic, small_cpu_logic_shift, small_cpu_shift")
-
-(define_bypass 1 "small_cpu_shift"
- "small_cpu_alu, small_cpu_alu_shift, small_cpu_logic, small_cpu_logic_shift, small_cpu_shift")
-
-;; Load-to-use for floating-point values has a penalty of one cycle.
-
-(define_bypass 2 "small_cpu_floads"
- "small_cpu_fpalu, small_cpu_fpmuld,\
- small_cpu_fdivs, small_cpu_fdivd,\
- small_cpu_f2i")
-
-(define_bypass 3 "small_cpu_floadd"
- "small_cpu_fpalu, small_cpu_fpmuld,\
- small_cpu_fdivs, small_cpu_fdivd,\
- small_cpu_f2i")
diff --git a/gcc/config/aarch64/t-aarch64 b/gcc/config/aarch64/t-aarch64
index 4c265ebba7b..28b8a11d25f 100644
--- a/gcc/config/aarch64/t-aarch64
+++ b/gcc/config/aarch64/t-aarch64
@@ -34,3 +34,10 @@ aarch64-builtins.o: $(srcdir)/config/aarch64/aarch64-builtins.c $(CONFIG_H) \
$(srcdir)/config/aarch64/aarch64-simd-builtins.def
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
$(srcdir)/config/aarch64/aarch64-builtins.c
+
+aarch-common.o: $(srcdir)/config/arm/aarch-common.c $(CONFIG_H) $(SYSTEM_H) \
+ coretypes.h $(TM_H) $(TM_P_H) $(RTL_H) $(TREE_H) output.h $(C_COMMON_H)
+ $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+ $(srcdir)/config/arm/aarch-common.c
+
+
diff --git a/gcc/config/arm/aarch-common-protos.h b/gcc/config/arm/aarch-common-protos.h
new file mode 100644
index 00000000000..97768fce0ca
--- /dev/null
+++ b/gcc/config/arm/aarch-common-protos.h
@@ -0,0 +1,36 @@
+/* Function prototypes for instruction scheduling dependency routines,
+ defined in aarch-common.c
+
+ Copyright (C) 1991-2013 Free Software Foundation, Inc.
+ Contributed by ARM Ltd.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published
+ by the Free Software Foundation; either version 3, or (at your
+ option) any later version.
+
+ GCC is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
+ License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3. If not see
+ <http://www.gnu.org/licenses/>. */
+
+
+#ifndef GCC_AARCH_COMMON_PROTOS_H
+#define GCC_AARCH_COMMON_PROTOS_H
+
+extern int arm_early_load_addr_dep (rtx, rtx);
+extern int arm_early_store_addr_dep (rtx, rtx);
+extern int arm_mac_accumulator_is_mul_result (rtx, rtx);
+extern int arm_mac_accumulator_is_result (rtx, rtx);
+extern int arm_no_early_alu_shift_dep (rtx, rtx);
+extern int arm_no_early_alu_shift_value_dep (rtx, rtx);
+extern int arm_no_early_mul_dep (rtx, rtx);
+extern int arm_no_early_store_addr_dep (rtx, rtx);
+
+#endif /* GCC_AARCH_COMMON_PROTOS_H */
diff --git a/gcc/config/arm/aarch-common.c b/gcc/config/arm/aarch-common.c
new file mode 100644
index 00000000000..78816625017
--- /dev/null
+++ b/gcc/config/arm/aarch-common.c
@@ -0,0 +1,356 @@
+/* Dependency checks for instruction scheduling, shared between ARM and
+ AARCH64.
+
+ Copyright (C) 1991-2013 Free Software Foundation, Inc.
+ Contributed by ARM Ltd.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published
+ by the Free Software Foundation; either version 3, or (at your
+ option) any later version.
+
+ GCC is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
+ License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3. If not see
+ <http://www.gnu.org/licenses/>. */
+
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tm_p.h"
+#include "rtl.h"
+#include "tree.h"
+#include "c-family/c-common.h"
+
+typedef struct
+{
+ rtx_code search_code;
+ rtx search_result;
+ bool find_any_shift;
+} search_term;
+
+/* Return TRUE if X is either an arithmetic shift left, or
+ is a multiplication by a power of two. */
+static bool
+arm_rtx_shift_left_p (rtx x)
+{
+ enum rtx_code code = GET_CODE (x);
+
+ if (code == MULT && CONST_INT_P (XEXP (x, 1))
+ && exact_log2 (INTVAL (XEXP (x, 1))) > 0)
+ return true;
+
+ if (code == ASHIFT)
+ return true;
+
+ return false;
+}
+
+static rtx_code shift_rtx_codes[] =
+ { ASHIFT, ROTATE, ASHIFTRT, LSHIFTRT,
+ ROTATERT, ZERO_EXTEND, SIGN_EXTEND };
+
+/* Callback function for arm_find_sub_rtx_with_code.
+ DATA is safe to treat as a SEARCH_TERM, ST. This will
+ hold a SEARCH_CODE. PATTERN is checked to see if it is an
+ RTX with that code. If it is, write SEARCH_RESULT in ST
+ and return 1. Otherwise, or if we have been passed a NULL_RTX
+ return 0. If ST.FIND_ANY_SHIFT then we are interested in
+ anything which can reasonably be described as a SHIFT RTX. */
+static int
+arm_find_sub_rtx_with_search_term (rtx *pattern, void *data)
+{
+ search_term *st = (search_term *) data;
+ rtx_code pattern_code;
+ int found = 0;
+
+ gcc_assert (pattern);
+ gcc_assert (st);
+
+ /* Poorly formed patterns can really ruin our day. */
+ if (*pattern == NULL_RTX)
+ return 0;
+
+ pattern_code = GET_CODE (*pattern);
+
+ if (st->find_any_shift)
+ {
+ unsigned i = 0;
+
+ /* Left shifts might have been canonicalized to a MULT of some
+ power of two. Make sure we catch them. */
+ if (arm_rtx_shift_left_p (*pattern))
+ found = 1;
+ else
+ for (i = 0; i < ARRAY_SIZE (shift_rtx_codes); i++)
+ if (pattern_code == shift_rtx_codes[i])
+ found = 1;
+ }
+
+ if (pattern_code == st->search_code)
+ found = 1;
+
+ if (found)
+ st->search_result = *pattern;
+
+ return found;
+}
+
+/* Traverse PATTERN looking for a sub-rtx with RTX_CODE CODE. */
+static rtx
+arm_find_sub_rtx_with_code (rtx pattern, rtx_code code, bool find_any_shift)
+{
+ search_term st;
+ int result = 0;
+
+ gcc_assert (pattern != NULL_RTX);
+ st.search_code = code;
+ st.search_result = NULL_RTX;
+ st.find_any_shift = find_any_shift;
+ result = for_each_rtx (&pattern, arm_find_sub_rtx_with_search_term, &st);
+ if (result)
+ return st.search_result;
+ else
+ return NULL_RTX;
+}
+
+/* Traverse PATTERN looking for any sub-rtx which looks like a shift. */
+static rtx
+arm_find_shift_sub_rtx (rtx pattern)
+{
+ return arm_find_sub_rtx_with_code (pattern, ASHIFT, true);
+}
+
+/* PRODUCER and CONSUMER are two potentially dependent RTX. PRODUCER
+ (possibly) contains a SET which will provide a result we can access
+ using the SET_DEST macro. We will place the RTX which would be
+ written by PRODUCER in SET_SOURCE.
+ Similarly, CONSUMER (possibly) contains a SET which has an operand
+ we can access using SET_SRC. We place this operand in
+ SET_DESTINATION.
+
+ Return nonzero if we found the SET RTX we expected. */
+static int
+arm_get_set_operands (rtx producer, rtx consumer,
+ rtx *set_source, rtx *set_destination)
+{
+ rtx set_producer = arm_find_sub_rtx_with_code (producer, SET, false);
+ rtx set_consumer = arm_find_sub_rtx_with_code (consumer, SET, false);
+
+ if (set_producer && set_consumer)
+ {
+ *set_source = SET_DEST (set_producer);
+ *set_destination = SET_SRC (set_consumer);
+ return 1;
+ }
+ return 0;
+}
+
+/* Return nonzero if the CONSUMER instruction (a load) does need
+ PRODUCER's value to calculate the address. */
+int
+arm_early_load_addr_dep (rtx producer, rtx consumer)
+{
+ rtx value, addr;
+
+ if (!arm_get_set_operands (producer, consumer, &value, &addr))
+ return 0;
+
+ return reg_overlap_mentioned_p (value, addr);
+}
+
+/* Return nonzero if the CONSUMER instruction (an ALU op) does not
+ have an early register shift value or amount dependency on the
+ result of PRODUCER. */
+int
+arm_no_early_alu_shift_dep (rtx producer, rtx consumer)
+{
+ rtx value, op;
+ rtx early_op;
+
+ if (!arm_get_set_operands (producer, consumer, &value, &op))
+ return 0;
+
+ if ((early_op = arm_find_shift_sub_rtx (op)))
+ {
+ if (REG_P (early_op))
+ early_op = op;
+
+ return !reg_overlap_mentioned_p (value, early_op);
+ }
+
+ return 0;
+}
+
+/* Return nonzero if the CONSUMER instruction (an ALU op) does not
+ have an early register shift value dependency on the result of
+ PRODUCER. */
+int
+arm_no_early_alu_shift_value_dep (rtx producer, rtx consumer)
+{
+ rtx value, op;
+ rtx early_op;
+
+ if (!arm_get_set_operands (producer, consumer, &value, &op))
+ return 0;
+
+ if ((early_op = arm_find_shift_sub_rtx (op)))
+ /* We want to check the value being shifted. */
+ if (!reg_overlap_mentioned_p (value, XEXP (early_op, 0)))
+ return 1;
+
+ return 0;
+}
+
+/* Return nonzero if the CONSUMER (a mul or mac op) does not
+ have an early register mult dependency on the result of
+ PRODUCER. */
+int
+arm_no_early_mul_dep (rtx producer, rtx consumer)
+{
+ rtx value, op;
+
+ if (!arm_get_set_operands (producer, consumer, &value, &op))
+ return 0;
+
+ if (GET_CODE (op) == PLUS || GET_CODE (op) == MINUS)
+ {
+ if (GET_CODE (XEXP (op, 0)) == MULT)
+ return !reg_overlap_mentioned_p (value, XEXP (op, 0));
+ else
+ return !reg_overlap_mentioned_p (value, XEXP (op, 1));
+ }
+
+ return 0;
+}
+
+/* Return nonzero if the CONSUMER instruction (a store) does not need
+ PRODUCER's value to calculate the address. */
+
+int
+arm_no_early_store_addr_dep (rtx producer, rtx consumer)
+{
+ rtx value = arm_find_sub_rtx_with_code (producer, SET, false);
+ rtx addr = arm_find_sub_rtx_with_code (consumer, SET, false);
+
+ if (value)
+ value = SET_DEST (value);
+
+ if (addr)
+ addr = SET_DEST (addr);
+
+ if (!value || !addr)
+ return 0;
+
+ return !reg_overlap_mentioned_p (value, addr);
+}
+
+/* Return nonzero if the CONSUMER instruction (a store) does need
+ PRODUCER's value to calculate the address. */
+
+int
+arm_early_store_addr_dep (rtx producer, rtx consumer)
+{
+ return !arm_no_early_store_addr_dep (producer, consumer);
+}
+
+/* Return non-zero iff the consumer (a multiply-accumulate or a
+   multiply-subtract instruction) has an accumulator dependency on the
+   result of the producer and no other dependency on that result. It
+   does not check whether the producer is a multiply-accumulate
+   instruction. */
+int
+arm_mac_accumulator_is_result (rtx producer, rtx consumer)
+{
+ rtx result;
+ rtx op0, op1, acc;
+
+ producer = PATTERN (producer);
+ consumer = PATTERN (consumer);
+
+ if (GET_CODE (producer) == COND_EXEC)
+ producer = COND_EXEC_CODE (producer);
+ if (GET_CODE (consumer) == COND_EXEC)
+ consumer = COND_EXEC_CODE (consumer);
+
+ if (GET_CODE (producer) != SET)
+ return 0;
+
+ result = XEXP (producer, 0);
+
+ if (GET_CODE (consumer) != SET)
+ return 0;
+
+ /* Check that the consumer is of the form
+ (set (...) (plus (mult ...) (...)))
+ or
+ (set (...) (minus (...) (mult ...))). */
+ if (GET_CODE (XEXP (consumer, 1)) == PLUS)
+ {
+ if (GET_CODE (XEXP (XEXP (consumer, 1), 0)) != MULT)
+ return 0;
+
+ op0 = XEXP (XEXP (XEXP (consumer, 1), 0), 0);
+ op1 = XEXP (XEXP (XEXP (consumer, 1), 0), 1);
+ acc = XEXP (XEXP (consumer, 1), 1);
+ }
+ else if (GET_CODE (XEXP (consumer, 1)) == MINUS)
+ {
+ if (GET_CODE (XEXP (XEXP (consumer, 1), 1)) != MULT)
+ return 0;
+
+ op0 = XEXP (XEXP (XEXP (consumer, 1), 1), 0);
+ op1 = XEXP (XEXP (XEXP (consumer, 1), 1), 1);
+ acc = XEXP (XEXP (consumer, 1), 0);
+ }
+ else
+ return 0;
+
+ return (reg_overlap_mentioned_p (result, acc)
+ && !reg_overlap_mentioned_p (result, op0)
+ && !reg_overlap_mentioned_p (result, op1));
+}
+
+/* Return non-zero if the consumer (a multiply-accumulate instruction)
+ has an accumulator dependency on the result of the producer (a
+ multiplication instruction) and no other dependency on that result. */
+int
+arm_mac_accumulator_is_mul_result (rtx producer, rtx consumer)
+{
+ rtx mul = PATTERN (producer);
+ rtx mac = PATTERN (consumer);
+ rtx mul_result;
+ rtx mac_op0, mac_op1, mac_acc;
+
+ if (GET_CODE (mul) == COND_EXEC)
+ mul = COND_EXEC_CODE (mul);
+ if (GET_CODE (mac) == COND_EXEC)
+ mac = COND_EXEC_CODE (mac);
+
+ /* Check that mul is of the form (set (...) (mult ...))
+ and mla is of the form (set (...) (plus (mult ...) (...))). */
+ if ((GET_CODE (mul) != SET || GET_CODE (XEXP (mul, 1)) != MULT)
+ || (GET_CODE (mac) != SET || GET_CODE (XEXP (mac, 1)) != PLUS
+ || GET_CODE (XEXP (XEXP (mac, 1), 0)) != MULT))
+ return 0;
+
+ mul_result = XEXP (mul, 0);
+ mac_op0 = XEXP (XEXP (XEXP (mac, 1), 0), 0);
+ mac_op1 = XEXP (XEXP (XEXP (mac, 1), 0), 1);
+ mac_acc = XEXP (XEXP (mac, 1), 1);
+
+ return (reg_overlap_mentioned_p (mul_result, mac_acc)
+ && !reg_overlap_mentioned_p (mul_result, mac_op0)
+ && !reg_overlap_mentioned_p (mul_result, mac_op1));
+}
diff --git a/gcc/config/arm/arm-cores.def b/gcc/config/arm/arm-cores.def
index 3d59fa6f5a9..17c9bf3255a 100644
--- a/gcc/config/arm/arm-cores.def
+++ b/gcc/config/arm/arm-cores.def
@@ -129,7 +129,7 @@ ARM_CORE("cortex-a7", cortexa7, 7A, FL_LDSCHED | FL_THUMB_DIV | FL_ARM_DIV
ARM_CORE("cortex-a8", cortexa8, 7A, FL_LDSCHED, cortex)
ARM_CORE("cortex-a9", cortexa9, 7A, FL_LDSCHED, cortex_a9)
ARM_CORE("cortex-a15", cortexa15, 7A, FL_LDSCHED | FL_THUMB_DIV | FL_ARM_DIV, cortex_a15)
-ARM_CORE("cortex-a53", cortexa53, 8A, FL_LDSCHED, cortex_a5)
+ARM_CORE("cortex-a53", cortexa53, 8A, FL_LDSCHED, cortex)
ARM_CORE("cortex-r4", cortexr4, 7R, FL_LDSCHED, cortex)
ARM_CORE("cortex-r4f", cortexr4f, 7R, FL_LDSCHED, cortex)
ARM_CORE("cortex-r5", cortexr5, 7R, FL_LDSCHED | FL_ARM_DIV, cortex)
diff --git a/gcc/config/arm/arm-fixed.md b/gcc/config/arm/arm-fixed.md
index dc8e7ac8c14..3972a850990 100644
--- a/gcc/config/arm/arm-fixed.md
+++ b/gcc/config/arm/arm-fixed.md
@@ -25,7 +25,8 @@
"TARGET_32BIT"
"add%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "yes,no")])
+ (set_attr "predicable_short_it" "yes,no")
+ (set_attr "type" "alu_reg")])
(define_insn "add<mode>3"
[(set (match_operand:ADDSUB 0 "s_register_operand" "=r")
@@ -34,7 +35,8 @@
"TARGET_INT_SIMD"
"sadd<qaddsub_suf>%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")])
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "alu_reg")])
(define_insn "usadd<mode>3"
[(set (match_operand:UQADDSUB 0 "s_register_operand" "=r")
@@ -43,7 +45,8 @@
"TARGET_INT_SIMD"
"uqadd<qaddsub_suf>%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")])
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "alu_reg")])
(define_insn "ssadd<mode>3"
[(set (match_operand:QADDSUB 0 "s_register_operand" "=r")
@@ -52,7 +55,8 @@
"TARGET_INT_SIMD"
"qadd<qaddsub_suf>%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")])
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "alu_reg")])
(define_insn "sub<mode>3"
[(set (match_operand:FIXED 0 "s_register_operand" "=l,r")
@@ -61,7 +65,8 @@
"TARGET_32BIT"
"sub%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "yes,no")])
+ (set_attr "predicable_short_it" "yes,no")
+ (set_attr "type" "alu_reg")])
(define_insn "sub<mode>3"
[(set (match_operand:ADDSUB 0 "s_register_operand" "=r")
@@ -70,7 +75,8 @@
"TARGET_INT_SIMD"
"ssub<qaddsub_suf>%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")])
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "alu_reg")])
(define_insn "ussub<mode>3"
[(set (match_operand:UQADDSUB 0 "s_register_operand" "=r")
@@ -80,7 +86,8 @@
"TARGET_INT_SIMD"
"uqsub<qaddsub_suf>%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")])
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "alu_reg")])
(define_insn "sssub<mode>3"
[(set (match_operand:QADDSUB 0 "s_register_operand" "=r")
@@ -89,7 +96,8 @@
"TARGET_INT_SIMD"
"qsub<qaddsub_suf>%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")])
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "alu_reg")])
;; Fractional multiplies.
@@ -246,6 +254,7 @@
return "";
}
[(set_attr "conds" "clob")
+ (set_attr "type" "multiple")
(set (attr "length")
(if_then_else (eq_attr "is_thumb" "yes")
(if_then_else (match_test "arm_restrict_it")
@@ -305,6 +314,7 @@
return "";
}
[(set_attr "conds" "clob")
+ (set_attr "type" "multiple")
(set (attr "length")
(if_then_else (eq_attr "is_thumb" "yes")
(if_then_else (match_test "arm_restrict_it")
@@ -406,7 +416,7 @@
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
(set_attr "shift" "1")
- (set_attr "type" "arlo_shift")])
+ (set_attr "type" "alu_shift_imm")])
(define_insn "arm_usatsihi"
[(set (match_operand:HI 0 "s_register_operand" "=r")
@@ -414,5 +424,6 @@
"TARGET_INT_SIMD"
"usat%?\\t%0, #16, %1"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "alu_imm")]
)
diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
index 5ec83b349cd..8a8e36dac4e 100644
--- a/gcc/config/arm/arm-protos.h
+++ b/gcc/config/arm/arm-protos.h
@@ -97,14 +97,6 @@ extern bool arm_tls_referenced_p (rtx);
extern int arm_coproc_mem_operand (rtx, bool);
extern int neon_vector_mem_operand (rtx, int);
extern int neon_struct_mem_operand (rtx);
-extern int arm_no_early_store_addr_dep (rtx, rtx);
-extern int arm_early_store_addr_dep (rtx, rtx);
-extern int arm_early_load_addr_dep (rtx, rtx);
-extern int arm_no_early_alu_shift_dep (rtx, rtx);
-extern int arm_no_early_alu_shift_value_dep (rtx, rtx);
-extern int arm_no_early_mul_dep (rtx, rtx);
-extern int arm_mac_accumulator_is_result (rtx, rtx);
-extern int arm_mac_accumulator_is_mul_result (rtx, rtx);
extern int tls_mentioned_p (rtx);
extern int symbol_mentioned_p (rtx);
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index c3eacdd21bf..4c9a39700a9 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -8670,8 +8670,14 @@ xscale_sched_adjust_cost (rtx insn, rtx link, rtx dep, int * cost)
instruction we depend on is another ALU instruction, then we may
have to account for an additional stall. */
if (shift_opnum != 0
- && (attr_type == TYPE_ARLO_SHIFT
- || attr_type == TYPE_ARLO_SHIFT_REG
+ && (attr_type == TYPE_ALU_SHIFT_IMM
+ || attr_type == TYPE_ALUS_SHIFT_IMM
+ || attr_type == TYPE_LOGIC_SHIFT_IMM
+ || attr_type == TYPE_LOGICS_SHIFT_IMM
+ || attr_type == TYPE_ALU_SHIFT_REG
+ || attr_type == TYPE_ALUS_SHIFT_REG
+ || attr_type == TYPE_LOGIC_SHIFT_REG
+ || attr_type == TYPE_LOGICS_SHIFT_REG
|| attr_type == TYPE_MOV_SHIFT
|| attr_type == TYPE_MVN_SHIFT
|| attr_type == TYPE_MOV_SHIFT_REG
@@ -8958,9 +8964,17 @@ cortexa7_older_only (rtx insn)
switch (get_attr_type (insn))
{
- case TYPE_ARLO_REG:
+ case TYPE_ALU_REG:
+ case TYPE_ALUS_REG:
+ case TYPE_LOGIC_REG:
+ case TYPE_LOGICS_REG:
+ case TYPE_ADC_REG:
+ case TYPE_ADCS_REG:
+ case TYPE_ADR:
+ case TYPE_BFM:
+ case TYPE_REV:
case TYPE_MVN_REG:
- case TYPE_SHIFT:
+ case TYPE_SHIFT_IMM:
case TYPE_SHIFT_REG:
case TYPE_LOAD_BYTE:
case TYPE_LOAD1:
@@ -8969,7 +8983,7 @@ cortexa7_older_only (rtx insn)
case TYPE_FADDS:
case TYPE_FFARITHD:
case TYPE_FADDD:
- case TYPE_FCPYS:
+ case TYPE_FMOV:
case TYPE_F_CVT:
case TYPE_FCMPS:
case TYPE_FCMPD:
@@ -8981,7 +8995,8 @@ cortexa7_older_only (rtx insn)
case TYPE_FMACD:
case TYPE_FDIVS:
case TYPE_FDIVD:
- case TYPE_F_2_R:
+ case TYPE_F_MRC:
+ case TYPE_F_MRRC:
case TYPE_F_FLAG:
case TYPE_F_LOADS:
case TYPE_F_STORES:
@@ -9004,7 +9019,10 @@ cortexa7_younger (FILE *file, int verbose, rtx insn)
switch (get_attr_type (insn))
{
- case TYPE_ARLO_IMM:
+ case TYPE_ALU_IMM:
+ case TYPE_ALUS_IMM:
+ case TYPE_LOGIC_IMM:
+ case TYPE_LOGICS_IMM:
case TYPE_EXTEND:
case TYPE_MVN_IMM:
case TYPE_MOV_IMM:
@@ -25759,163 +25777,6 @@ arm_setup_incoming_varargs (cumulative_args_t pcum_v,
*pretend_size = (NUM_ARG_REGS - nregs) * UNITS_PER_WORD;
}
-/* Return nonzero if the CONSUMER instruction (a store) does not need
- PRODUCER's value to calculate the address. */
-
-int
-arm_no_early_store_addr_dep (rtx producer, rtx consumer)
-{
- rtx value = PATTERN (producer);
- rtx addr = PATTERN (consumer);
-
- if (GET_CODE (value) == COND_EXEC)
- value = COND_EXEC_CODE (value);
- if (GET_CODE (value) == PARALLEL)
- value = XVECEXP (value, 0, 0);
- value = XEXP (value, 0);
- if (GET_CODE (addr) == COND_EXEC)
- addr = COND_EXEC_CODE (addr);
- if (GET_CODE (addr) == PARALLEL)
- addr = XVECEXP (addr, 0, 0);
- addr = XEXP (addr, 0);
-
- return !reg_overlap_mentioned_p (value, addr);
-}
-
-/* Return nonzero if the CONSUMER instruction (a store) does need
- PRODUCER's value to calculate the address. */
-
-int
-arm_early_store_addr_dep (rtx producer, rtx consumer)
-{
- return !arm_no_early_store_addr_dep (producer, consumer);
-}
-
-/* Return nonzero if the CONSUMER instruction (a load) does need
- PRODUCER's value to calculate the address. */
-
-int
-arm_early_load_addr_dep (rtx producer, rtx consumer)
-{
- rtx value = PATTERN (producer);
- rtx addr = PATTERN (consumer);
-
- if (GET_CODE (value) == COND_EXEC)
- value = COND_EXEC_CODE (value);
- if (GET_CODE (value) == PARALLEL)
- value = XVECEXP (value, 0, 0);
- value = XEXP (value, 0);
- if (GET_CODE (addr) == COND_EXEC)
- addr = COND_EXEC_CODE (addr);
- if (GET_CODE (addr) == PARALLEL)
- {
- if (GET_CODE (XVECEXP (addr, 0, 0)) == RETURN)
- addr = XVECEXP (addr, 0, 1);
- else
- addr = XVECEXP (addr, 0, 0);
- }
- addr = XEXP (addr, 1);
-
- return reg_overlap_mentioned_p (value, addr);
-}
-
-/* Return nonzero if the CONSUMER instruction (an ALU op) does not
- have an early register shift value or amount dependency on the
- result of PRODUCER. */
-
-int
-arm_no_early_alu_shift_dep (rtx producer, rtx consumer)
-{
- rtx value = PATTERN (producer);
- rtx op = PATTERN (consumer);
- rtx early_op;
-
- if (GET_CODE (value) == COND_EXEC)
- value = COND_EXEC_CODE (value);
- if (GET_CODE (value) == PARALLEL)
- value = XVECEXP (value, 0, 0);
- value = XEXP (value, 0);
- if (GET_CODE (op) == COND_EXEC)
- op = COND_EXEC_CODE (op);
- if (GET_CODE (op) == PARALLEL)
- op = XVECEXP (op, 0, 0);
- op = XEXP (op, 1);
-
- early_op = XEXP (op, 0);
- /* This is either an actual independent shift, or a shift applied to
- the first operand of another operation. We want the whole shift
- operation. */
- if (REG_P (early_op))
- early_op = op;
-
- return !reg_overlap_mentioned_p (value, early_op);
-}
-
-/* Return nonzero if the CONSUMER instruction (an ALU op) does not
- have an early register shift value dependency on the result of
- PRODUCER. */
-
-int
-arm_no_early_alu_shift_value_dep (rtx producer, rtx consumer)
-{
- rtx value = PATTERN (producer);
- rtx op = PATTERN (consumer);
- rtx early_op;
-
- if (GET_CODE (value) == COND_EXEC)
- value = COND_EXEC_CODE (value);
- if (GET_CODE (value) == PARALLEL)
- value = XVECEXP (value, 0, 0);
- value = XEXP (value, 0);
- if (GET_CODE (op) == COND_EXEC)
- op = COND_EXEC_CODE (op);
- if (GET_CODE (op) == PARALLEL)
- op = XVECEXP (op, 0, 0);
- op = XEXP (op, 1);
-
- early_op = XEXP (op, 0);
-
- /* This is either an actual independent shift, or a shift applied to
- the first operand of another operation. We want the value being
- shifted, in either case. */
- if (!REG_P (early_op))
- early_op = XEXP (early_op, 0);
-
- return !reg_overlap_mentioned_p (value, early_op);
-}
-
-/* Return nonzero if the CONSUMER (a mul or mac op) does not
- have an early register mult dependency on the result of
- PRODUCER. */
-
-int
-arm_no_early_mul_dep (rtx producer, rtx consumer)
-{
- rtx value = PATTERN (producer);
- rtx op = PATTERN (consumer);
-
- if (GET_CODE (value) == COND_EXEC)
- value = COND_EXEC_CODE (value);
- if (GET_CODE (value) == PARALLEL)
- value = XVECEXP (value, 0, 0);
- value = XEXP (value, 0);
- if (GET_CODE (op) == COND_EXEC)
- op = COND_EXEC_CODE (op);
- if (GET_CODE (op) == PARALLEL)
- op = XVECEXP (op, 0, 0);
- op = XEXP (op, 1);
-
- if (GET_CODE (op) == PLUS || GET_CODE (op) == MINUS)
- {
- if (GET_CODE (XEXP (op, 0)) == MULT)
- return !reg_overlap_mentioned_p (value, XEXP (op, 0));
- else
- return !reg_overlap_mentioned_p (value, XEXP (op, 1));
- }
-
- return 0;
-}
-
/* We can't rely on the caller doing the proper promotion when
using APCS or ATPCS. */
@@ -25965,95 +25826,6 @@ arm_cxx_guard_type (void)
return TARGET_AAPCS_BASED ? integer_type_node : long_long_integer_type_node;
}
-/* Return non-zero iff the consumer (a multiply-accumulate or a
- multiple-subtract instruction) has an accumulator dependency on the
- result of the producer and no other dependency on that result. It
- does not check if the producer is multiply-accumulate instruction. */
-int
-arm_mac_accumulator_is_result (rtx producer, rtx consumer)
-{
- rtx result;
- rtx op0, op1, acc;
-
- producer = PATTERN (producer);
- consumer = PATTERN (consumer);
-
- if (GET_CODE (producer) == COND_EXEC)
- producer = COND_EXEC_CODE (producer);
- if (GET_CODE (consumer) == COND_EXEC)
- consumer = COND_EXEC_CODE (consumer);
-
- if (GET_CODE (producer) != SET)
- return 0;
-
- result = XEXP (producer, 0);
-
- if (GET_CODE (consumer) != SET)
- return 0;
-
- /* Check that the consumer is of the form
- (set (...) (plus (mult ...) (...)))
- or
- (set (...) (minus (...) (mult ...))). */
- if (GET_CODE (XEXP (consumer, 1)) == PLUS)
- {
- if (GET_CODE (XEXP (XEXP (consumer, 1), 0)) != MULT)
- return 0;
-
- op0 = XEXP (XEXP (XEXP (consumer, 1), 0), 0);
- op1 = XEXP (XEXP (XEXP (consumer, 1), 0), 1);
- acc = XEXP (XEXP (consumer, 1), 1);
- }
- else if (GET_CODE (XEXP (consumer, 1)) == MINUS)
- {
- if (GET_CODE (XEXP (XEXP (consumer, 1), 1)) != MULT)
- return 0;
-
- op0 = XEXP (XEXP (XEXP (consumer, 1), 1), 0);
- op1 = XEXP (XEXP (XEXP (consumer, 1), 1), 1);
- acc = XEXP (XEXP (consumer, 1), 0);
- }
- else
- return 0;
-
- return (reg_overlap_mentioned_p (result, acc)
- && !reg_overlap_mentioned_p (result, op0)
- && !reg_overlap_mentioned_p (result, op1));
-}
-
-/* Return non-zero if the consumer (a multiply-accumulate instruction)
- has an accumulator dependency on the result of the producer (a
- multiplication instruction) and no other dependency on that result. */
-int
-arm_mac_accumulator_is_mul_result (rtx producer, rtx consumer)
-{
- rtx mul = PATTERN (producer);
- rtx mac = PATTERN (consumer);
- rtx mul_result;
- rtx mac_op0, mac_op1, mac_acc;
-
- if (GET_CODE (mul) == COND_EXEC)
- mul = COND_EXEC_CODE (mul);
- if (GET_CODE (mac) == COND_EXEC)
- mac = COND_EXEC_CODE (mac);
-
- /* Check that mul is of the form (set (...) (mult ...))
- and mla is of the form (set (...) (plus (mult ...) (...))). */
- if ((GET_CODE (mul) != SET || GET_CODE (XEXP (mul, 1)) != MULT)
- || (GET_CODE (mac) != SET || GET_CODE (XEXP (mac, 1)) != PLUS
- || GET_CODE (XEXP (XEXP (mac, 1), 0)) != MULT))
- return 0;
-
- mul_result = XEXP (mul, 0);
- mac_op0 = XEXP (XEXP (XEXP (mac, 1), 0), 0);
- mac_op1 = XEXP (XEXP (XEXP (mac, 1), 0), 1);
- mac_acc = XEXP (XEXP (mac, 1), 1);
-
- return (reg_overlap_mentioned_p (mul_result, mac_acc)
- && !reg_overlap_mentioned_p (mul_result, mac_op0)
- && !reg_overlap_mentioned_p (mul_result, mac_op1));
-}
-
/* The EABI says test the least significant bit of a guard variable. */
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index fbf285e150a..e6fb0e119b5 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -226,418 +226,109 @@
(set_attr "length" "4")
(set_attr "pool_range" "250")])
-; TYPE attribute is used to classify instructions for use in scheduling.
-;
-; Instruction classification:
-;
-; arlo_imm any arithmetic or logical instruction that doesn't have
-; a shifted operand and has an immediate operand. This
-; excludes MOV, MVN and RSB(S) immediate.
-; arlo_reg any arithmetic or logical instruction that doesn't have
-; a shifted or an immediate operand. This excludes
-; MOV and MVN but includes MOVT. This is also the default.
-; arlo_shift any arithmetic or logical instruction that has a source
-; operand shifted by a constant. This excludes
-; simple shifts.
-; arlo_shift_reg as arlo_shift, with the shift amount specified in a
-; register.
-; block blockage insn, this blocks all functional units.
-; branch branch.
-; call subroutine call.
-; clz count leading zeros (CLZ).
-; extend extend instruction (SXTB, SXTH, UXTB, UXTH).
-; f_2_r transfer from float to core (no memory needed).
-; f_cvt conversion between float and integral.
-; f_flag transfer of co-processor flags to the CPSR.
-; f_load[d,s] double/single load from memory. Used for VFP unit.
-; f_minmax[d,s] double/single floating point minimum/maximum.
-; f_rint[d,s] double/single floating point round to integral.
-; f_sel[d,s] double/single floating byte select.
-; f_store[d,s] double/single store to memory. Used for VFP unit.
-; fadd[d,s] double/single floating-point scalar addition.
-; fcmp[d,s] double/single floating-point compare.
-; fconst[d,s] double/single load immediate.
-; fcpys single precision floating point cpy.
-; fdiv[d,s] double/single precision floating point division.
-; ffarith[d,s] double/single floating point abs/neg/cpy.
-; ffma[d,s] double/single floating point fused multiply-accumulate.
-; float floating point arithmetic operation.
-; fmac[d,s] double/single floating point multiply-accumulate.
-; fmul[d,s] double/single floating point multiply.
-; load_byte load byte(s) from memory to arm registers.
-; load1 load 1 word from memory to arm registers.
-; load2 load 2 words from memory to arm registers.
-; load3 load 3 words from memory to arm registers.
-; load4 load 4 words from memory to arm registers.
-; mla integer multiply accumulate.
-; mlas integer multiply accumulate, flag setting.
-; mov_imm simple MOV instruction that moves an immediate to
-; register. This includes MOVW, but not MOVT.
-; mov_reg simple MOV instruction that moves a register to another
-; register. This includes MOVW, but not MOVT.
-; mov_shift simple MOV instruction, shifted operand by a constant.
-; mov_shift_reg simple MOV instruction, shifted operand by a register.
-; mul integer multiply.
-; muls integer multiply, flag setting.
-; mvn_imm inverting move instruction, immediate.
-; mvn_reg inverting move instruction, register.
-; mvn_shift inverting move instruction, shifted operand by a constant.
-; mvn_shift_reg inverting move instruction, shifted operand by a register.
-; r_2_f transfer from core to float.
-; sdiv signed division.
-; shift simple shift operation (LSL, LSR, ASR, ROR) with an
-; immediate.
-; shift_reg simple shift by a register.
-; smlad signed multiply accumulate dual.
-; smladx signed multiply accumulate dual reverse.
-; smlal signed multiply accumulate long.
-; smlald signed multiply accumulate long dual.
-; smlals signed multiply accumulate long, flag setting.
-; smlalxy signed multiply accumulate, 16x16-bit, 64-bit accumulate.
-; smlawx signed multiply accumulate, 32x16-bit, 32-bit accumulate.
-; smlawy signed multiply accumulate wide, 32x16-bit,
-; 32-bit accumulate.
-; smlaxy signed multiply accumulate, 16x16-bit, 32-bit accumulate.
-; smlsd signed multiply subtract dual.
-; smlsdx signed multiply subtract dual reverse.
-; smlsld signed multiply subtract long dual.
-; smmla signed most significant word multiply accumulate.
-; smmul signed most significant word multiply.
-; smmulr signed most significant word multiply, rounded.
-; smuad signed dual multiply add.
-; smuadx signed dual multiply add reverse.
-; smull signed multiply long.
-; smulls signed multiply long, flag setting.
-; smulwy signed multiply wide, 32x16-bit, 32-bit accumulate.
-; smulxy signed multiply, 16x16-bit, 32-bit accumulate.
-; smusd signed dual multiply subtract.
-; smusdx signed dual multiply subtract reverse.
-; store1 store 1 word to memory from arm registers.
-; store2 store 2 words to memory from arm registers.
-; store3 store 3 words to memory from arm registers.
-; store4 store 4 (or more) words to memory from arm registers.
-; udiv unsigned division.
-; umaal unsigned multiply accumulate accumulate long.
-; umlal unsigned multiply accumulate long.
-; umlals unsigned multiply accumulate long, flag setting.
-; umull unsigned multiply long.
-; umulls unsigned multiply long, flag setting.
-;
-; The classification below is for instructions used by the Wireless MMX
-; Technology. Each attribute value is used to classify an instruction of the
-; same name or family.
-;
-; wmmx_tandc
-; wmmx_tbcst
-; wmmx_textrc
-; wmmx_textrm
-; wmmx_tinsr
-; wmmx_tmcr
-; wmmx_tmcrr
-; wmmx_tmia
-; wmmx_tmiaph
-; wmmx_tmiaxy
-; wmmx_tmrc
-; wmmx_tmrrc
-; wmmx_tmovmsk
-; wmmx_torc
-; wmmx_torvsc
-; wmmx_wabs
-; wmmx_wdiff
-; wmmx_wacc
-; wmmx_wadd
-; wmmx_waddbhus
-; wmmx_waddsubhx
-; wmmx_waligni
-; wmmx_walignr
-; wmmx_wand
-; wmmx_wandn
-; wmmx_wavg2
-; wmmx_wavg4
-; wmmx_wcmpeq
-; wmmx_wcmpgt
-; wmmx_wmac
-; wmmx_wmadd
-; wmmx_wmax
-; wmmx_wmerge
-; wmmx_wmiawxy
-; wmmx_wmiaxy
-; wmmx_wmin
-; wmmx_wmov
-; wmmx_wmul
-; wmmx_wmulw
-; wmmx_wldr
-; wmmx_wor
-; wmmx_wpack
-; wmmx_wqmiaxy
-; wmmx_wqmulm
-; wmmx_wqmulwm
-; wmmx_wror
-; wmmx_wsad
-; wmmx_wshufh
-; wmmx_wsll
-; wmmx_wsra
-; wmmx_wsrl
-; wmmx_wstr
-; wmmx_wsub
-; wmmx_wsubaddhx
-; wmmx_wunpckeh
-; wmmx_wunpckel
-; wmmx_wunpckih
-; wmmx_wunpckil
-; wmmx_wxor
-
-(define_attr "type"
- "arlo_imm,\
- arlo_reg,\
- arlo_shift,\
- arlo_shift_reg,\
- block,\
- branch,\
- call,\
- clz,\
- crc,\
- extend,\
- f_2_r,\
- f_cvt,\
- f_flag,\
- f_loadd,\
- f_loads,\
- f_minmaxd,\
- f_minmaxs,\
- f_rintd,\
- f_rints,\
- f_seld,\
- f_sels,\
- f_stored,\
- f_stores,\
- faddd,\
- fadds,\
- fcmpd,\
- fcmps,\
- fconstd,\
- fconsts,\
- fcpys,\
- fdivd,\
- fdivs,\
- ffarithd,\
- ffariths,\
- ffmad,\
- ffmas,\
- float,\
- fmacd,\
- fmacs,\
- fmuld,\
- fmuls,\
- load_byte,\
- load1,\
- load2,\
- load3,\
- load4,\
- mla,\
- mlas,\
- mov_imm,\
- mov_reg,\
- mov_shift,\
- mov_shift_reg,\
- mul,\
- muls,\
- mvn_imm,\
- mvn_reg,\
- mvn_shift,\
- mvn_shift_reg,\
- r_2_f,\
- sdiv,\
- shift,\
- shift_reg,\
- smlad,\
- smladx,\
- smlal,\
- smlald,\
- smlals,\
- smlalxy,\
- smlawx,\
- smlawy,\
- smlaxy,\
- smlsd,\
- smlsdx,\
- smlsld,\
- smmla,\
- smmul,\
- smmulr,\
- smuad,\
- smuadx,\
- smull,\
- smulls,\
- smulwy,\
- smulxy,\
- smusd,\
- smusdx,\
- store1,\
- store2,\
- store3,\
- store4,\
- udiv,\
- umaal,\
- umlal,\
- umlals,\
- umull,\
- umulls,\
- wmmx_tandc,\
- wmmx_tbcst,\
- wmmx_textrc,\
- wmmx_textrm,\
- wmmx_tinsr,\
- wmmx_tmcr,\
- wmmx_tmcrr,\
- wmmx_tmia,\
- wmmx_tmiaph,\
- wmmx_tmiaxy,\
- wmmx_tmrc,\
- wmmx_tmrrc,\
- wmmx_tmovmsk,\
- wmmx_torc,\
- wmmx_torvsc,\
- wmmx_wabs,\
- wmmx_wabsdiff,\
- wmmx_wacc,\
- wmmx_wadd,\
- wmmx_waddbhus,\
- wmmx_waddsubhx,\
- wmmx_waligni,\
- wmmx_walignr,\
- wmmx_wand,\
- wmmx_wandn,\
- wmmx_wavg2,\
- wmmx_wavg4,\
- wmmx_wcmpeq,\
- wmmx_wcmpgt,\
- wmmx_wmac,\
- wmmx_wmadd,\
- wmmx_wmax,\
- wmmx_wmerge,\
- wmmx_wmiawxy,\
- wmmx_wmiaxy,\
- wmmx_wmin,\
- wmmx_wmov,\
- wmmx_wmul,\
- wmmx_wmulw,\
- wmmx_wldr,\
- wmmx_wor,\
- wmmx_wpack,\
- wmmx_wqmiaxy,\
- wmmx_wqmulm,\
- wmmx_wqmulwm,\
- wmmx_wror,\
- wmmx_wsad,\
- wmmx_wshufh,\
- wmmx_wsll,\
- wmmx_wsra,\
- wmmx_wsrl,\
- wmmx_wstr,\
- wmmx_wsub,\
- wmmx_wsubaddhx,\
- wmmx_wunpckeh,\
- wmmx_wunpckel,\
- wmmx_wunpckih,\
- wmmx_wunpckil,\
- wmmx_wxor"
- (const_string "arlo_reg"))
-
-; Is this an (integer side) multiply with a 32-bit (or smaller) result?
-(define_attr "mul32" "no,yes"
- (if_then_else
- (eq_attr "type"
- "smulxy,smlaxy,smulwy,smlawx,mul,muls,mla,mlas,smlawy,smuad,smuadx,\
- smlad,smladx,smusd,smusdx,smlsd,smlsdx,smmul,smmulr,smmla,smlald,smlsld")
- (const_string "yes")
- (const_string "no")))
-
-; Is this an (integer side) multiply with a 64-bit result?
-(define_attr "mul64" "no,yes"
- (if_then_else
- (eq_attr "type"
- "smlalxy,umull,umulls,umaal,umlal,umlals,smull,smulls,smlal,smlals")
- (const_string "yes")
- (const_string "no")))
+;; Instruction classification types
+(include "types.md")
; Load scheduling, set from the arm_ld_sched variable
; initialized by arm_option_override()
(define_attr "ldsched" "no,yes" (const (symbol_ref "arm_ld_sched")))
-;; Classification of NEON instructions for scheduling purposes.
-(define_attr "neon_type"
- "neon_int_1,\
- neon_int_2,\
- neon_int_3,\
- neon_int_4,\
- neon_int_5,\
- neon_vqneg_vqabs,\
- neon_vmov,\
- neon_vaba,\
- neon_vsma,\
- neon_vaba_qqq,\
- neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- neon_mul_qqq_8_16_32_ddd_32,\
- neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar,\
- neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- neon_mla_qqq_8_16,\
- neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long,\
- neon_mla_qqq_32_qqd_32_scalar,\
- neon_mul_ddd_16_scalar_32_16_long_scalar,\
- neon_mul_qqd_32_scalar,\
- neon_mla_ddd_16_scalar_qdd_32_16_long_scalar,\
- neon_shift_1,\
- neon_shift_2,\
- neon_shift_3,\
- neon_vshl_ddd,\
- neon_vqshl_vrshl_vqrshl_qqq,\
- neon_vsra_vrsra,\
- neon_fp_vadd_ddd_vabs_dd,\
- neon_fp_vadd_qqq_vabs_qq,\
- neon_fp_vsum,\
- neon_fp_vmul_ddd,\
- neon_fp_vmul_qqd,\
- neon_fp_vmla_ddd,\
- neon_fp_vmla_qqq,\
- neon_fp_vmla_ddd_scalar,\
- neon_fp_vmla_qqq_scalar,\
- neon_fp_vrecps_vrsqrts_ddd,\
- neon_fp_vrecps_vrsqrts_qqq,\
- neon_bp_simple,\
- neon_bp_2cycle,\
- neon_bp_3cycle,\
- neon_ldr,\
- neon_str,\
- neon_vld1_1_2_regs,\
- neon_vld1_3_4_regs,\
- neon_vld2_2_regs_vld1_vld2_all_lanes,\
- neon_vld2_4_regs,\
- neon_vld3_vld4,\
- neon_vst1_1_2_regs_vst2_2_regs,\
- neon_vst1_3_4_regs,\
- neon_vst2_4_regs_vst3_vst4,\
- neon_vst3_vst4,\
- neon_vld1_vld2_lane,\
- neon_vld3_vld4_lane,\
- neon_vst1_vst2_lane,\
- neon_vst3_vst4_lane,\
- neon_vld3_vld4_all_lanes,\
- neon_mcr,\
- neon_mcr_2_mcrr,\
- neon_mrc,\
- neon_mrrc,\
- neon_ldm_2,\
- neon_stm_2,\
- neon_crypto_aes,\
- neon_crypto_sha1_xor,\
- neon_crypto_sha1_fast,\
- neon_crypto_sha1_slow,\
- neon_crypto_sha256_fast,\
- neon_crypto_sha256_slow,\
- neon_mul_d_long,\
- none"
- (const_string "none"))
+; YES if the "type" attribute assigned to the insn denotes an
+; Advanced SIMD instruction, NO otherwise.
+(define_attr "is_neon_type" "yes,no"
+ (if_then_else (eq_attr "type"
+ "neon_add, neon_add_q, neon_add_widen, neon_add_long,\
+ neon_qadd, neon_qadd_q, neon_add_halve, neon_add_halve_q,\
+ neon_add_halve_narrow_q,\
+ neon_sub, neon_sub_q, neon_sub_widen, neon_sub_long, neon_qsub,\
+ neon_qsub_q, neon_sub_halve, neon_sub_halve_q,\
+ neon_sub_halve_narrow_q,\
+ neon_abs, neon_abs_q, neon_neg, neon_neg_q, neon_qneg,\
+ neon_qneg_q, neon_qabs, neon_qabs_q, neon_abd, neon_abd_q,\
+ neon_abd_long, neon_minmax, neon_minmax_q, neon_compare,\
+ neon_compare_q, neon_compare_zero, neon_compare_zero_q,\
+ neon_arith_acc, neon_arith_acc_q, neon_reduc_add,\
+ neon_reduc_add_q, neon_reduc_add_long, neon_reduc_add_acc,\
+ neon_reduc_add_acc_q, neon_reduc_minmax, neon_reduc_minmax_q,\
+ neon_logic, neon_logic_q, neon_tst, neon_tst_q,\
+ neon_shift_imm, neon_shift_imm_q, neon_shift_imm_narrow_q,\
+ neon_shift_imm_long, neon_shift_reg, neon_shift_reg_q,\
+ neon_shift_acc, neon_shift_acc_q, neon_sat_shift_imm,\
+ neon_sat_shift_imm_q, neon_sat_shift_imm_narrow_q,\
+ neon_sat_shift_reg, neon_sat_shift_reg_q,\
+ neon_ins, neon_ins_q, neon_move, neon_move_q, neon_move_narrow_q,\
+ neon_permute, neon_permute_q, neon_zip, neon_zip_q, neon_tbl1,\
+ neon_tbl1_q, neon_tbl2, neon_tbl2_q, neon_tbl3, neon_tbl3_q,\
+ neon_tbl4, neon_tbl4_q, neon_bsl, neon_bsl_q, neon_cls,\
+ neon_cls_q, neon_cnt, neon_cnt_q, neon_dup, neon_dup_q,\
+ neon_ext, neon_ext_q, neon_rbit, neon_rbit_q,\
+ neon_rev, neon_rev_q, neon_mul_b, neon_mul_b_q, neon_mul_h,\
+ neon_mul_h_q, neon_mul_s, neon_mul_s_q, neon_mul_b_long,\
+ neon_mul_h_long, neon_mul_s_long, neon_mul_h_scalar,\
+ neon_mul_h_scalar_q, neon_mul_s_scalar, neon_mul_s_scalar_q,\
+ neon_mul_h_scalar_long, neon_mul_s_scalar_long, neon_sat_mul_b,\
+ neon_sat_mul_b_q, neon_sat_mul_h, neon_sat_mul_h_q,\
+ neon_sat_mul_s, neon_sat_mul_s_q, neon_sat_mul_b_long,\
+ neon_sat_mul_h_long, neon_sat_mul_s_long, neon_sat_mul_h_scalar,\
+ neon_sat_mul_h_scalar_q, neon_sat_mul_s_scalar,\
+ neon_sat_mul_s_scalar_q, neon_sat_mul_h_scalar_long,\
+ neon_sat_mul_s_scalar_long, neon_mla_b, neon_mla_b_q, neon_mla_h,\
+ neon_mla_h_q, neon_mla_s, neon_mla_s_q, neon_mla_b_long,\
+ neon_mla_h_long, neon_mla_s_long, neon_mla_h_scalar,\
+ neon_mla_h_scalar_q, neon_mla_s_scalar, neon_mla_s_scalar_q,\
+ neon_mla_h_scalar_long, neon_mla_s_scalar_long,\
+ neon_sat_mla_b_long, neon_sat_mla_h_long,\
+ neon_sat_mla_s_long, neon_sat_mla_h_scalar_long,\
+ neon_sat_mla_s_scalar_long,\
+ neon_to_gp, neon_to_gp_q, neon_from_gp, neon_from_gp_q,\
+ neon_ldr, neon_load1_1reg, neon_load1_1reg_q, neon_load1_2reg,\
+ neon_load1_2reg_q, neon_load1_3reg, neon_load1_3reg_q,\
+ neon_load1_4reg, neon_load1_4reg_q, neon_load1_all_lanes,\
+ neon_load1_all_lanes_q, neon_load1_one_lane, neon_load1_one_lane_q,\
+ neon_load2_2reg, neon_load2_2reg_q, neon_load2_4reg,\
+ neon_load2_4reg_q, neon_load2_all_lanes, neon_load2_all_lanes_q,\
+ neon_load2_one_lane, neon_load2_one_lane_q,\
+ neon_load3_3reg, neon_load3_3reg_q, neon_load3_all_lanes,\
+ neon_load3_all_lanes_q, neon_load3_one_lane, neon_load3_one_lane_q,\
+ neon_load4_4reg, neon_load4_4reg_q, neon_load4_all_lanes,\
+ neon_load4_all_lanes_q, neon_load4_one_lane, neon_load4_one_lane_q,\
+ neon_str, neon_store1_1reg, neon_store1_1reg_q, neon_store1_2reg,\
+ neon_store1_2reg_q, neon_store1_3reg, neon_store1_3reg_q,\
+ neon_store1_4reg, neon_store1_4reg_q, neon_store1_one_lane,\
+ neon_store1_one_lane_q, neon_store2_2reg, neon_store2_2reg_q,\
+ neon_store2_4reg, neon_store2_4reg_q, neon_store2_one_lane,\
+ neon_store2_one_lane_q, neon_store3_3reg, neon_store3_3reg_q,\
+ neon_store3_one_lane, neon_store3_one_lane_q, neon_store4_4reg,\
+ neon_store4_4reg_q, neon_store4_one_lane, neon_store4_one_lane_q,\
+ neon_fp_abd_s, neon_fp_abd_s_q, neon_fp_abd_d, neon_fp_abd_d_q,\
+ neon_fp_addsub_s, neon_fp_addsub_s_q, neon_fp_addsub_d,\
+ neon_fp_addsub_d_q, neon_fp_compare_s, neon_fp_compare_s_q,\
+ neon_fp_compare_d, neon_fp_compare_d_q, neon_fp_minmax_s,\
+ neon_fp_minmax_s_q, neon_fp_minmax_d, neon_fp_minmax_d_q,\
+ neon_fp_reduc_add_s, neon_fp_reduc_add_s_q, neon_fp_reduc_add_d,\
+ neon_fp_reduc_add_d_q, neon_fp_reduc_minmax_s,\
+ neon_fp_reduc_minmax_s_q, neon_fp_reduc_minmax_d,\
+ neon_fp_reduc_minmax_d_q,\
+ neon_fp_cvt_narrow_s_q, neon_fp_cvt_narrow_d_q,\
+ neon_fp_cvt_widen_h, neon_fp_cvt_widen_s, neon_fp_to_int_s,\
+ neon_fp_to_int_s_q, neon_int_to_fp_s, neon_int_to_fp_s_q,\
+ neon_fp_round_s, neon_fp_round_s_q, neon_fp_recpe_s,\
+ neon_fp_recpe_s_q,\
+ neon_fp_recpe_d, neon_fp_recpe_d_q, neon_fp_recps_s,\
+ neon_fp_recps_s_q, neon_fp_recps_d, neon_fp_recps_d_q,\
+ neon_fp_recpx_s, neon_fp_recpx_s_q, neon_fp_recpx_d,\
+ neon_fp_recpx_d_q, neon_fp_rsqrte_s, neon_fp_rsqrte_s_q,\
+ neon_fp_rsqrte_d, neon_fp_rsqrte_d_q, neon_fp_rsqrts_s,\
+ neon_fp_rsqrts_s_q, neon_fp_rsqrts_d, neon_fp_rsqrts_d_q,\
+ neon_fp_mul_s, neon_fp_mul_s_q, neon_fp_mul_s_scalar,\
+ neon_fp_mul_s_scalar_q, neon_fp_mul_d, neon_fp_mul_d_q,\
+ neon_fp_mul_d_scalar_q, neon_fp_mla_s, neon_fp_mla_s_q,\
+ neon_fp_mla_s_scalar, neon_fp_mla_s_scalar_q, neon_fp_mla_d,\
+ neon_fp_mla_d_q, neon_fp_mla_d_scalar_q, neon_fp_sqrt_s,\
+ neon_fp_sqrt_s_q, neon_fp_sqrt_d, neon_fp_sqrt_d_q,\
+ neon_fp_div_s, neon_fp_div_s_q, neon_fp_div_d, neon_fp_div_d_q")
+ (const_string "yes")
+ (const_string "no")))
; condition codes: this one is used by final_prescan_insn to speed up
; conditionalizing instructions. It saves having to scan the rtl to see if
@@ -664,9 +355,9 @@
(ior (eq_attr "is_thumb1" "yes")
(eq_attr "type" "call"))
(const_string "clob")
- (if_then_else (eq_attr "neon_type" "none")
- (const_string "nocond")
- (const_string "unconditional"))))
+ (if_then_else (eq_attr "is_neon_type" "no")
+ (const_string "nocond")
+ (const_string "unconditional"))))
; Predicable means that the insn can be conditionally executed based on
; an automatically added predicate (additional patterns are generated by
@@ -692,8 +383,11 @@
; than one on the main cpu execution unit.
(define_attr "core_cycles" "single,multi"
(if_then_else (eq_attr "type"
- "arlo_imm, arlo_reg,\
- extend, shift, arlo_shift, float, fdivd, fdivs,\
+ "adc_imm, adc_reg, adcs_imm, adcs_reg, adr, alu_ext, alu_imm, alu_reg,\
+ alu_shift_imm, alu_shift_reg, alus_ext, alus_imm, alus_reg,\
+ alus_shift_imm, alus_shift_reg, bfm, csel, rev, logic_imm, logic_reg,\
+ logic_shift_imm, logic_shift_reg, logics_imm, logics_reg,\
+ logics_shift_imm, logics_shift_reg, extend, shift_imm, float, fcsel,\
wmmx_wor, wmmx_wxor, wmmx_wand, wmmx_wandn, wmmx_wmov, wmmx_tmcrr,\
wmmx_tmrrc, wmmx_wldr, wmmx_wstr, wmmx_tmcr, wmmx_tmrc, wmmx_wadd,\
wmmx_wsub, wmmx_wmul, wmmx_wmac, wmmx_wavg2, wmmx_tinsr, wmmx_textrm,\
@@ -819,7 +513,8 @@
]
"TARGET_THUMB1"
"add\\t%Q0, %Q0, %Q2\;adc\\t%R0, %R0, %R2"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*arm_adddi3"
@@ -847,7 +542,8 @@
operands[2] = gen_lowpart (SImode, operands[2]);
}"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*adddi_sesidi_di"
@@ -876,7 +572,8 @@
operands[2] = gen_lowpart (SImode, operands[2]);
}"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*adddi_zesidi_di"
@@ -903,7 +600,8 @@
operands[2] = gen_lowpart (SImode, operands[2]);
}"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_expand "addsi3"
@@ -978,8 +676,8 @@
(set_attr "predicable_short_it" "yes,yes,yes,yes,no,no,no,no,no,no,no,no,no,no,no")
(set_attr "arch" "t2,t2,t2,t2,*,*,*,t2,t2,*,*,a,t2,t2,*")
(set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
- (const_string "arlo_imm")
- (const_string "arlo_reg")))
+ (const_string "alu_imm")
+ (const_string "alu_reg")))
]
)
@@ -1029,7 +727,9 @@
operands[3] = GEN_INT (offset);
operands[2] = GEN_INT (INTVAL (operands[2]) - offset);
}
- [(set_attr "length" "2,2,2,2,2,2,2,4,4,4")]
+ [(set_attr "length" "2,2,2,2,2,2,2,4,4,4")
+ (set_attr "type" "alus_imm,alus_imm,alus_reg,alus_reg,alus_reg,
+ alus_reg,alus_reg,multiple,multiple,multiple")]
)
;; Reloading and elimination of the frame pointer can
@@ -1060,7 +760,7 @@
sub%.\\t%0, %1, #%n2
add%.\\t%0, %1, %2"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,arlo_imm,*")]
+ (set_attr "type" "alus_imm,alus_imm,alus_reg")]
)
(define_insn "*addsi3_compare0_scratch"
@@ -1076,8 +776,7 @@
cmn%?\\t%0, %1"
[(set_attr "conds" "set")
(set_attr "predicable" "yes")
- (set_attr "type" "arlo_imm,arlo_imm,*")
- ]
+ (set_attr "type" "alus_imm,alus_imm,alus_reg")]
)
(define_insn "*compare_negsi_si"
@@ -1091,7 +790,8 @@
(set_attr "predicable" "yes")
(set_attr "arch" "t2,*")
(set_attr "length" "2,4")
- (set_attr "predicable_short_it" "yes,no")]
+ (set_attr "predicable_short_it" "yes,no")
+ (set_attr "type" "alus_reg")]
)
;; This is the canonicalization of addsi3_compare0_for_combiner when the
@@ -1108,7 +808,8 @@
"@
add%.\\t%0, %1, %3
sub%.\\t%0, %1, #%n3"
- [(set_attr "conds" "set")]
+ [(set_attr "conds" "set")
+ (set_attr "type" "alus_reg")]
)
;; Convert the sequence
@@ -1166,7 +867,7 @@
sub%.\\t%0, %1, #%n2
add%.\\t%0, %1, %2"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,arlo_imm,*")]
+ (set_attr "type" "alus_imm,alus_imm,alus_reg")]
)
(define_insn "*addsi3_compare_op2"
@@ -1183,7 +884,7 @@
add%.\\t%0, %1, %2
sub%.\\t%0, %1, #%n2"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,arlo_imm,*")]
+ (set_attr "type" "alus_imm,alus_imm,alus_reg")]
)
(define_insn "*compare_addsi2_op0"
@@ -1204,7 +905,7 @@
(set_attr "arch" "t2,t2,*,*,*")
(set_attr "predicable_short_it" "yes,yes,no,no,no")
(set_attr "length" "2,2,4,4,4")
- (set_attr "type" "arlo_imm,*,arlo_imm,arlo_imm,*")]
+ (set_attr "type" "alus_imm,alus_reg,alus_imm,alus_imm,alus_reg")]
)
(define_insn "*compare_addsi2_op1"
@@ -1225,8 +926,7 @@
(set_attr "arch" "t2,t2,*,*,*")
(set_attr "predicable_short_it" "yes,yes,no,no,no")
(set_attr "length" "2,2,4,4,4")
- (set_attr "type"
- "arlo_imm,*,arlo_imm,arlo_imm,*")]
+ (set_attr "type" "alus_imm,alus_reg,alus_imm,alus_imm,alus_reg")]
)
(define_insn "*addsi3_carryin_<optab>"
@@ -1243,7 +943,8 @@
(set_attr "predicable" "yes")
(set_attr "arch" "t2,*,*")
(set_attr "length" "4")
- (set_attr "predicable_short_it" "yes,no,no")]
+ (set_attr "predicable_short_it" "yes,no,no")
+ (set_attr "type" "adc_reg,adc_reg,adc_imm")]
)
(define_insn "*addsi3_carryin_alt2_<optab>"
@@ -1260,7 +961,8 @@
(set_attr "predicable" "yes")
(set_attr "arch" "t2,*,*")
(set_attr "length" "4")
- (set_attr "predicable_short_it" "yes,no,no")]
+ (set_attr "predicable_short_it" "yes,no,no")
+ (set_attr "type" "adc_reg,adc_reg,adc_imm")]
)
(define_insn "*addsi3_carryin_shift_<optab>"
@@ -1277,8 +979,8 @@
(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
(set (attr "type") (if_then_else (match_operand 4 "const_int_operand" "")
- (const_string "arlo_shift")
- (const_string "arlo_shift_reg")))]
+ (const_string "alu_shift_imm")
+ (const_string "alu_shift_reg")))]
)
(define_insn "*addsi3_carryin_clobercc_<optab>"
@@ -1289,7 +991,8 @@
(clobber (reg:CC CC_REGNUM))]
"TARGET_32BIT"
"adc%.\\t%0, %1, %2"
- [(set_attr "conds" "set")]
+ [(set_attr "conds" "set")
+ (set_attr "type" "adcs_reg")]
)
(define_insn "*subsi3_carryin"
@@ -1304,7 +1007,8 @@
[(set_attr "conds" "use")
(set_attr "arch" "*,a")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "adc_reg,adc_imm")]
)
(define_insn "*subsi3_carryin_const"
@@ -1314,7 +1018,8 @@
(ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))))]
"TARGET_32BIT"
"sbc\\t%0, %1, #%B2"
- [(set_attr "conds" "use")]
+ [(set_attr "conds" "use")
+ (set_attr "type" "adc_imm")]
)
(define_insn "*subsi3_carryin_compare"
@@ -1327,7 +1032,8 @@
(ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))))]
"TARGET_32BIT"
"sbcs\\t%0, %1, %2"
- [(set_attr "conds" "set")]
+ [(set_attr "conds" "set")
+ (set_attr "type" "adcs_reg")]
)
(define_insn "*subsi3_carryin_compare_const"
@@ -1340,7 +1046,8 @@
(ltu:SI (reg:CC_C CC_REGNUM) (const_int 0))))]
"TARGET_32BIT"
"sbcs\\t%0, %1, #%B2"
- [(set_attr "conds" "set")]
+ [(set_attr "conds" "set")
+ (set_attr "type" "adcs_imm")]
)
(define_insn "*subsi3_carryin_shift"
@@ -1356,8 +1063,8 @@
[(set_attr "conds" "use")
(set_attr "predicable" "yes")
(set (attr "type") (if_then_else (match_operand 4 "const_int_operand" "")
- (const_string "arlo_shift")
- (const_string "arlo_shift_reg")))]
+ (const_string "alu_shift_imm")
+ (const_string "alu_shift_reg")))]
)
(define_insn "*rsbsi3_carryin_shift"
@@ -1373,8 +1080,8 @@
[(set_attr "conds" "use")
(set_attr "predicable" "yes")
(set (attr "type") (if_then_else (match_operand 4 "const_int_operand" "")
- (const_string "arlo_shift")
- (const_string "arlo_shift_reg")))]
+ (const_string "alu_shift_imm")
+ (const_string "alu_shift_reg")))]
)
; transform ((x << y) - 1) to ~(~(x-1) << y) Where X is a constant.
@@ -1447,7 +1154,8 @@
operands[2] = gen_lowpart (SImode, operands[2]);
}
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn "*thumb_subdi3"
@@ -1457,7 +1165,8 @@
(clobber (reg:CC CC_REGNUM))]
"TARGET_THUMB1"
"sub\\t%Q0, %Q0, %Q2\;sbc\\t%R0, %R0, %R2"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*subdi_di_zesidi"
@@ -1482,7 +1191,8 @@
operands[5] = GEN_INT (~0);
}
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*subdi_di_sesidi"
@@ -1508,7 +1218,8 @@
operands[1] = gen_lowpart (SImode, operands[1]);
}
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*subdi_zesidi_di"
@@ -1534,7 +1245,8 @@
operands[1] = gen_lowpart (SImode, operands[1]);
}
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*subdi_sesidi_di"
@@ -1563,7 +1275,8 @@
operands[1] = gen_lowpart (SImode, operands[1]);
}
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*subdi_zesidi_zesidi"
@@ -1586,7 +1299,8 @@
operands[0] = gen_lowpart (SImode, operands[0]);
}
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_expand "subsi3"
@@ -1617,7 +1331,9 @@
"TARGET_THUMB1"
"sub\\t%0, %1, %2"
[(set_attr "length" "2")
- (set_attr "conds" "set")])
+ (set_attr "conds" "set")
+ (set_attr "type" "alus_reg")]
+)
; ??? Check Thumb-2 split length
(define_insn_and_split "*arm_subsi3_insn"
@@ -1647,7 +1363,7 @@
(set_attr "arch" "t2,t2,t2,t2,*,*,*,*,*")
(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "yes,yes,yes,yes,no,no,no,no,no")
- (set_attr "type" "*,*,*,*,arlo_imm,arlo_imm,*,*,arlo_imm")]
+ (set_attr "type" "alu_reg,alu_reg,alu_reg,alu_reg,alu_imm,alu_imm,alu_reg,alu_reg,multiple")]
)
(define_peephole2
@@ -1677,7 +1393,7 @@
sub%.\\t%0, %1, %2
rsb%.\\t%0, %2, %1"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,*,*")]
+ (set_attr "type" "alus_imm,alus_reg,alus_reg")]
)
(define_insn "subsi3_compare"
@@ -1692,7 +1408,7 @@
sub%.\\t%0, %1, %2
rsb%.\\t%0, %2, %1"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,*,*")]
+ (set_attr "type" "alus_imm,alus_reg,alus_reg")]
)
(define_expand "subsf3"
@@ -2499,7 +2215,8 @@
gen_highpart_mode (SImode, DImode, operands[2]));
}"
- [(set_attr "neon_type" "neon_int_1,neon_int_1,*,*,*,*,neon_int_1,neon_int_1")
+ [(set_attr "type" "neon_logic,neon_logic,multiple,multiple,\
+ multiple,multiple,neon_logic,neon_logic")
(set_attr "arch" "neon_for_64bits,neon_for_64bits,*,*,*,*,
avoid_neon_for_64bits,avoid_neon_for_64bits")
(set_attr "length" "*,*,8,8,8,8,*,*")
@@ -2524,7 +2241,8 @@
operands[0] = gen_lowpart (SImode, operands[0]);
operands[1] = gen_lowpart (SImode, operands[1]);
}"
- [(set_attr "length" "8")]
+ [(set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn "*anddi_sesdi_di"
@@ -2534,7 +2252,8 @@
(match_operand:DI 1 "s_register_operand" "0,r")))]
"TARGET_32BIT"
"#"
- [(set_attr "length" "8")]
+ [(set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_expand "andsi3"
@@ -2641,8 +2360,7 @@
[(set_attr "length" "4,4,4,4,16")
(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no,yes,no,no,no")
- (set_attr "type"
- "arlo_imm,arlo_imm,*,*,arlo_imm")]
+ (set_attr "type" "logic_imm,logic_imm,logic_reg,logic_reg,logic_imm")]
)
(define_insn "*thumb1_andsi3_insn"
@@ -2652,7 +2370,7 @@
"TARGET_THUMB1"
"and\\t%0, %2"
[(set_attr "length" "2")
- (set_attr "type" "arlo_imm")
+ (set_attr "type" "logic_imm")
(set_attr "conds" "set")])
(define_insn "*andsi3_compare0"
@@ -2669,7 +2387,7 @@
bic%.\\t%0, %1, #%B2
and%.\\t%0, %1, %2"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,arlo_imm,*")]
+ (set_attr "type" "logics_imm,logics_imm,logics_reg")]
)
(define_insn "*andsi3_compare0_scratch"
@@ -2685,7 +2403,7 @@
bic%.\\t%2, %0, #%B1
tst%?\\t%0, %1"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,arlo_imm,*")]
+ (set_attr "type" "logics_imm,logics_imm,logics_reg")]
)
(define_insn "*zeroextractsi_compare0_scratch"
@@ -2709,7 +2427,7 @@
[(set_attr "conds" "set")
(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "arlo_imm")]
+ (set_attr "type" "logics_imm")]
)
(define_insn_and_split "*ne_zeroextractsi"
@@ -2746,7 +2464,8 @@
(set (attr "length")
(if_then_else (eq_attr "is_thumb" "yes")
(const_int 12)
- (const_int 8)))]
+ (const_int 8)))
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*ne_zeroextractsi_shifted"
@@ -2771,7 +2490,8 @@
operands[2] = GEN_INT (32 - INTVAL (operands[2]));
"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*ite_ne_zeroextractsi"
@@ -2809,7 +2529,8 @@
<< INTVAL (operands[3]));
"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*ite_ne_zeroextractsi_shifted"
@@ -2836,7 +2557,8 @@
operands[2] = GEN_INT (32 - INTVAL (operands[2]));
"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_split
@@ -3137,7 +2859,8 @@
"bfc%?\t%0, %2, %1"
[(set_attr "length" "4")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "bfm")]
)
(define_insn "insv_t2"
@@ -3149,7 +2872,8 @@
"bfi%?\t%0, %3, %2, %1"
[(set_attr "length" "4")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "bfm")]
)
; constants for op 2 will never be given to these patterns.
@@ -3174,7 +2898,8 @@
operands[2] = gen_lowpart (SImode, operands[2]);
}"
[(set_attr "length" "8")
- (set_attr "predicable" "yes")]
+ (set_attr "predicable" "yes")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*anddi_notzesidi_di"
@@ -3202,7 +2927,8 @@
}"
[(set_attr "length" "4,8")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*anddi_notsesidi_di"
@@ -3226,7 +2952,8 @@
}"
[(set_attr "length" "8")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "multiple")]
)
(define_insn "andsi_notsi_si"
@@ -3236,7 +2963,8 @@
"TARGET_32BIT"
"bic%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "logic_reg")]
)
(define_insn "thumb1_bicsi3"
@@ -3246,7 +2974,9 @@
"TARGET_THUMB1"
"bic\\t%0, %1"
[(set_attr "length" "2")
- (set_attr "conds" "set")])
+ (set_attr "conds" "set")
+ (set_attr "type" "logics_reg")]
+)
(define_insn "andsi_not_shiftsi_si"
[(set (match_operand:SI 0 "s_register_operand" "=r")
@@ -3259,8 +2989,8 @@
[(set_attr "predicable" "yes")
(set_attr "shift" "2")
(set (attr "type") (if_then_else (match_operand 3 "const_int_operand" "")
- (const_string "arlo_shift")
- (const_string "arlo_shift_reg")))]
+ (const_string "logic_shift_imm")
+ (const_string "logic_shift_reg")))]
)
(define_insn "*andsi_notsi_si_compare0"
@@ -3273,7 +3003,8 @@
(and:SI (not:SI (match_dup 2)) (match_dup 1)))]
"TARGET_32BIT"
"bic%.\\t%0, %1, %2"
- [(set_attr "conds" "set")]
+ [(set_attr "conds" "set")
+ (set_attr "type" "logics_shift_reg")]
)
(define_insn "*andsi_notsi_si_compare0_scratch"
@@ -3285,7 +3016,8 @@
(clobber (match_scratch:SI 0 "=r"))]
"TARGET_32BIT"
"bic%.\\t%0, %1, %2"
- [(set_attr "conds" "set")]
+ [(set_attr "conds" "set")
+ (set_attr "type" "logics_shift_reg")]
)
(define_expand "iordi3"
@@ -3334,7 +3066,8 @@
gen_highpart_mode (SImode, DImode, operands[2]));
}"
- [(set_attr "neon_type" "neon_int_1,neon_int_1,*,*,*,*,neon_int_1,neon_int_1")
+ [(set_attr "type" "neon_logic,neon_logic,multiple,multiple,multiple,\
+ multiple,neon_logic,neon_logic")
(set_attr "length" "*,*,8,8,8,8,*,*")
(set_attr "arch" "neon_for_64bits,neon_for_64bits,*,*,*,*,avoid_neon_for_64bits,avoid_neon_for_64bits")]
)
@@ -3350,7 +3083,8 @@
#"
[(set_attr "length" "4,8")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "logic_reg,multiple")]
)
(define_insn "*iordi_sesidi_di"
@@ -3361,7 +3095,8 @@
"TARGET_32BIT"
"#"
[(set_attr "length" "8")
- (set_attr "predicable" "yes")]
+ (set_attr "predicable" "yes")
+ (set_attr "type" "multiple")]
)
(define_expand "iorsi3"
@@ -3419,7 +3154,7 @@
(set_attr "arch" "32,t2,t2,32,32")
(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no,yes,no,no,no")
- (set_attr "type" "arlo_imm,*,arlo_imm,*,*")]
+ (set_attr "type" "logic_imm,logic_reg,logic_imm,logic_reg,logic_reg")]
)
(define_insn "*thumb1_iorsi3_insn"
@@ -3429,7 +3164,8 @@
"TARGET_THUMB1"
"orr\\t%0, %2"
[(set_attr "length" "2")
- (set_attr "conds" "set")])
+ (set_attr "conds" "set")
+ (set_attr "type" "logics_reg")])
(define_peephole2
[(match_scratch:SI 3 "r")
@@ -3454,7 +3190,7 @@
"TARGET_32BIT"
"orr%.\\t%0, %1, %2"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,*")]
+ (set_attr "type" "logics_imm,logics_reg")]
)
(define_insn "*iorsi3_compare0_scratch"
@@ -3466,7 +3202,7 @@
"TARGET_32BIT"
"orr%.\\t%0, %1, %2"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,*")]
+ (set_attr "type" "logics_imm,logics_reg")]
)
(define_expand "xordi3"
@@ -3513,7 +3249,7 @@
}"
[(set_attr "length" "*,8,8,8,8,*")
- (set_attr "neon_type" "neon_int_1,*,*,*,*,neon_int_1")
+ (set_attr "type" "neon_logic,multiple,multiple,multiple,multiple,neon_logic")
(set_attr "arch" "neon_for_64bits,*,*,*,*,avoid_neon_for_64bits")]
)
@@ -3528,7 +3264,8 @@
#"
[(set_attr "length" "4,8")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "logic_reg")]
)
(define_insn "*xordi_sesidi_di"
@@ -3539,7 +3276,8 @@
"TARGET_32BIT"
"#"
[(set_attr "length" "8")
- (set_attr "predicable" "yes")]
+ (set_attr "predicable" "yes")
+ (set_attr "type" "multiple")]
)
(define_expand "xorsi3"
@@ -3592,7 +3330,7 @@
[(set_attr "length" "4,4,4,16")
(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no,yes,no,no")
- (set_attr "type" "arlo_imm,*,*,*")]
+ (set_attr "type" "logic_imm,logic_reg,logic_reg,multiple")]
)
(define_insn "*thumb1_xorsi3_insn"
@@ -3603,7 +3341,7 @@
"eor\\t%0, %2"
[(set_attr "length" "2")
(set_attr "conds" "set")
- (set_attr "type" "arlo_imm")]
+ (set_attr "type" "logics_reg")]
)
(define_insn "*xorsi3_compare0"
@@ -3616,7 +3354,7 @@
"TARGET_32BIT"
"eor%.\\t%0, %1, %2"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,*")]
+ (set_attr "type" "logics_imm,logics_reg")]
)
(define_insn "*xorsi3_compare0_scratch"
@@ -3627,7 +3365,7 @@
"TARGET_32BIT"
"teq%?\\t%0, %1"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,*")]
+ (set_attr "type" "logics_imm,logics_reg")]
)
; By splitting (IOR (AND (NOT A) (NOT B)) C) as D = AND (IOR A B) (NOT C),
@@ -3661,7 +3399,8 @@
[(set_attr "length" "8")
(set_attr "ce_count" "2")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "multiple")]
)
; ??? Are these four splitters still beneficial when the Thumb-2 bitfield
@@ -3798,7 +3537,8 @@
"TARGET_32BIT"
"bic%?\\t%0, %1, %1, asr #31"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "logic_shift_reg")]
)
(define_insn "*smax_m1"
@@ -3808,7 +3548,8 @@
"TARGET_32BIT"
"orr%?\\t%0, %1, %1, asr #31"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "logic_shift_reg")]
)
(define_insn_and_split "*arm_smax_insn"
@@ -3829,7 +3570,8 @@
(match_dup 2)))]
""
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_expand "sminsi3"
@@ -3857,7 +3599,8 @@
"TARGET_32BIT"
"and%?\\t%0, %1, %1, asr #31"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "logic_shift_reg")]
)
(define_insn_and_split "*arm_smin_insn"
@@ -3878,7 +3621,8 @@
(match_dup 2)))]
""
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple,multiple")]
)
(define_expand "umaxsi3"
@@ -3910,7 +3654,8 @@
(match_dup 2)))]
""
[(set_attr "conds" "clob")
- (set_attr "length" "8,8,12")]
+ (set_attr "length" "8,8,12")
+ (set_attr "type" "store1")]
)
(define_expand "uminsi3"
@@ -3942,7 +3687,8 @@
(match_dup 2)))]
""
[(set_attr "conds" "clob")
- (set_attr "length" "8,8,12")]
+ (set_attr "length" "8,8,12")
+ (set_attr "type" "store1")]
)
(define_insn "*store_minmaxsi"
@@ -4011,7 +3757,8 @@
(set (attr "length")
(if_then_else (eq_attr "is_thumb" "yes")
(const_int 14)
- (const_int 12)))]
+ (const_int 12)))
+ (set_attr "type" "multiple")]
)
; Reject the frame pointer in operand[1], since reloading this after
@@ -4059,7 +3806,8 @@
(set (attr "length")
(if_then_else (eq_attr "is_thumb" "yes")
(const_int 14)
- (const_int 12)))]
+ (const_int 12)))
+ (set_attr "type" "multiple")]
)
(define_code_iterator SAT [smin smax])
@@ -4088,7 +3836,8 @@
return "usat%?\t%0, %1, %3";
}
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "alus_imm")]
)
(define_insn "*satsi_<SAT:code>_shift"
@@ -4116,7 +3865,7 @@
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
(set_attr "shift" "3")
- (set_attr "type" "arlo_shift")])
+ (set_attr "type" "logic_shift_reg")])
;; Shift and rotation insns
@@ -4181,7 +3930,8 @@
"TARGET_32BIT"
"movs\\t%Q0, %Q1, asl #1\;adc\\t%R0, %R1, %R1"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_expand "ashlsi3"
@@ -4206,7 +3956,7 @@
"TARGET_THUMB1"
"lsl\\t%0, %1, %2"
[(set_attr "length" "2")
- (set_attr "type" "shift,shift_reg")
+ (set_attr "type" "shift_imm,shift_reg")
(set_attr "conds" "set")])
(define_expand "ashrdi3"
@@ -4264,7 +4014,8 @@
"TARGET_32BIT"
"movs\\t%R0, %R1, asr #1\;mov\\t%Q0, %Q1, rrx"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_expand "ashrsi3"
@@ -4286,7 +4037,7 @@
"TARGET_THUMB1"
"asr\\t%0, %1, %2"
[(set_attr "length" "2")
- (set_attr "type" "shift,shift_reg")
+ (set_attr "type" "shift_imm,shift_reg")
(set_attr "conds" "set")])
(define_expand "lshrdi3"
@@ -4344,7 +4095,8 @@
"TARGET_32BIT"
"movs\\t%R0, %R1, lsr #1\;mov\\t%Q0, %Q1, rrx"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_expand "lshrsi3"
@@ -4369,7 +4121,7 @@
"TARGET_THUMB1"
"lsr\\t%0, %1, %2"
[(set_attr "length" "2")
- (set_attr "type" "shift,shift_reg")
+ (set_attr "type" "shift_imm,shift_reg")
(set_attr "conds" "set")])
(define_expand "rotlsi3"
@@ -4431,7 +4183,7 @@
(set_attr "predicable_short_it" "yes,no,no")
(set_attr "length" "4")
(set_attr "shift" "1")
- (set_attr "type" "arlo_shift_reg,arlo_shift,arlo_shift_reg")]
+ (set_attr "type" "alu_shift_reg,alu_shift_imm,alu_shift_reg")]
)
(define_insn "*shiftsi3_compare0"
@@ -4446,7 +4198,7 @@
"* return arm_output_shift(operands, 1);"
[(set_attr "conds" "set")
(set_attr "shift" "1")
- (set_attr "type" "arlo_shift,arlo_shift_reg")]
+ (set_attr "type" "alus_shift_imm,alus_shift_reg")]
)
(define_insn "*shiftsi3_compare0_scratch"
@@ -4460,7 +4212,7 @@
"* return arm_output_shift(operands, 1);"
[(set_attr "conds" "set")
(set_attr "shift" "1")
- (set_attr "type" "shift,shift_reg")]
+ (set_attr "type" "shift_imm,shift_reg")]
)
(define_insn "*not_shiftsi"
@@ -4802,7 +4554,8 @@
"sbfx%?\t%0, %1, %3, %2"
[(set_attr "length" "4")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "bfm")]
)
(define_insn "extzv_t2"
@@ -4814,7 +4567,8 @@
"ubfx%?\t%0, %1, %3, %2"
[(set_attr "length" "4")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "bfm")]
)
@@ -4880,7 +4634,8 @@
operands[1] = gen_lowpart (SImode, operands[1]);
}
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn "*thumb1_negdi2"
@@ -4889,7 +4644,8 @@
(clobber (reg:CC CC_REGNUM))]
"TARGET_THUMB1"
"mov\\t%R0, #0\;neg\\t%Q0, %Q1\;sbc\\t%R0, %R1"
- [(set_attr "length" "6")]
+ [(set_attr "length" "6")
+ (set_attr "type" "multiple")]
)
(define_expand "negsi2"
@@ -4907,7 +4663,8 @@
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "yes,no")
(set_attr "arch" "t2,*")
- (set_attr "length" "4")]
+ (set_attr "length" "4")
+ (set_attr "type" "alu_reg")]
)
(define_insn "*thumb1_negsi2"
@@ -4915,7 +4672,8 @@
(neg:SI (match_operand:SI 1 "register_operand" "l")))]
"TARGET_THUMB1"
"neg\\t%0, %1"
- [(set_attr "length" "2")]
+ [(set_attr "length" "2")
+ (set_attr "type" "alu_imm")]
)
(define_expand "negsf2"
@@ -5013,7 +4771,8 @@
[(set_attr "conds" "clob,*")
(set_attr "shift" "1")
(set_attr "predicable" "no, yes")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb1_abssi2"
@@ -5027,7 +4786,8 @@
(set (match_dup 0) (plus:SI (match_dup 1) (match_dup 2)))
(set (match_dup 0) (xor:SI (match_dup 0) (match_dup 2)))]
""
- [(set_attr "length" "6")]
+ [(set_attr "length" "6")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*arm_neg_abssi2"
@@ -5083,7 +4843,8 @@
[(set_attr "conds" "clob,*")
(set_attr "shift" "1")
(set_attr "predicable" "no, yes")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb1_neg_abssi2"
@@ -5097,7 +4858,8 @@
(set (match_dup 0) (minus:SI (match_dup 2) (match_dup 1)))
(set (match_dup 0) (xor:SI (match_dup 0) (match_dup 2)))]
""
- [(set_attr "length" "6")]
+ [(set_attr "length" "6")
+ (set_attr "type" "multiple")]
)
(define_expand "abssf2"
@@ -5146,7 +4908,7 @@
}"
[(set_attr "length" "*,8,8,*")
(set_attr "predicable" "no,yes,yes,no")
- (set_attr "neon_type" "neon_int_1,*,*,neon_int_1")
+ (set_attr "type" "neon_move,multiple,multiple,neon_move")
(set_attr "arch" "neon_for_64bits,*,*,avoid_neon_for_64bits")]
)
@@ -5320,7 +5082,8 @@
[(set_attr "length" "8,4,8,8")
(set_attr "arch" "neon_for_64bits,*,*,avoid_neon_for_64bits")
(set_attr "ce_count" "2")
- (set_attr "predicable" "yes")]
+ (set_attr "predicable" "yes")
+ (set_attr "type" "multiple,mov_reg,multiple,multiple")]
)
(define_insn "extend<mode>di2"
@@ -5333,7 +5096,8 @@
(set_attr "ce_count" "2")
(set_attr "shift" "1")
(set_attr "predicable" "yes")
- (set_attr "arch" "neon_for_64bits,*,a,t,avoid_neon_for_64bits")]
+ (set_attr "arch" "neon_for_64bits,*,a,t,avoid_neon_for_64bits")
+ (set_attr "type" "multiple,mov_reg,multiple,multiple,multiple")]
)
;; Splits for all extensions to DImode
@@ -5469,7 +5233,7 @@
"@
#
ldr%(h%)\\t%0, %1"
- [(set_attr "type" "arlo_shift,load_byte")
+ [(set_attr "type" "alu_shift_reg,load_byte")
(set_attr "predicable" "yes")]
)
@@ -5490,7 +5254,7 @@
(match_operand:SI 2 "s_register_operand" "r")))]
"TARGET_INT_SIMD"
"uxtah%?\\t%0, %2, %1"
- [(set_attr "type" "arlo_shift")
+ [(set_attr "type" "alu_shift_reg")
(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")]
)
@@ -5540,7 +5304,7 @@
#
ldrb\\t%0, %1"
[(set_attr "length" "4,2")
- (set_attr "type" "arlo_shift,load_byte")
+ (set_attr "type" "alu_shift_reg,load_byte")
(set_attr "pool_range" "*,32")]
)
@@ -5563,7 +5327,7 @@
#
ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2"
[(set_attr "length" "8,4")
- (set_attr "type" "arlo_shift,load_byte")
+ (set_attr "type" "alu_shift_reg,load_byte")
(set_attr "predicable" "yes")]
)
@@ -5586,7 +5350,7 @@
"uxtab%?\\t%0, %2, %1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "arlo_shift")]
+ (set_attr "type" "alu_shift_reg")]
)
(define_split
@@ -5638,7 +5402,8 @@
"tst%?\\t%0, #255"
[(set_attr "conds" "set")
(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "logic_imm")]
)
(define_expand "extendhisi2"
@@ -5808,7 +5573,7 @@
#
ldr%(sh%)\\t%0, %1"
[(set_attr "length" "8,4")
- (set_attr "type" "arlo_shift,load_byte")
+ (set_attr "type" "alu_shift_reg,load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "*,256")
(set_attr "neg_pool_range" "*,244")]
@@ -5835,6 +5600,7 @@
(match_operand:SI 2 "s_register_operand" "r")))]
"TARGET_INT_SIMD"
"sxtah%?\\t%0, %2, %1"
+ [(set_attr "type" "alu_shift_reg")]
)
(define_expand "extendqihi2"
@@ -5909,7 +5675,7 @@
#
ldr%(sb%)\\t%0, %1"
[(set_attr "length" "8,4")
- (set_attr "type" "arlo_shift,load_byte")
+ (set_attr "type" "alu_shift_reg,load_byte")
(set_attr "predicable" "yes")
(set_attr "pool_range" "*,256")
(set_attr "neg_pool_range" "*,244")]
@@ -5935,7 +5701,7 @@
(match_operand:SI 2 "s_register_operand" "r")))]
"TARGET_INT_SIMD"
"sxtab%?\\t%0, %2, %1"
- [(set_attr "type" "arlo_shift")
+ [(set_attr "type" "alu_shift_reg")
(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")]
)
@@ -6155,7 +5921,7 @@
}
"
[(set_attr "length" "8,12,16,8,8")
- (set_attr "type" "*,*,*,load2,store2")
+ (set_attr "type" "multiple,multiple,multiple,load2,store2")
(set_attr "arm_pool_range" "*,*,*,1020,*")
(set_attr "arm_neg_pool_range" "*,*,*,1004,*")
(set_attr "thumb2_pool_range" "*,*,*,4094,*")
@@ -6295,7 +6061,7 @@
}
}"
[(set_attr "length" "4,4,6,2,2,6,4,4")
- (set_attr "type" "*,mov_reg,*,load2,store2,load2,store2,mov_reg")
+ (set_attr "type" "multiple,multiple,multiple,load2,store2,load2,store2,multiple")
(set_attr "pool_range" "*,*,*,*,*,1018,*,*")]
)
@@ -6393,7 +6159,8 @@
"movt%?\t%0, #:upper16:%c2"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "length" "4")]
+ (set_attr "length" "4")
+ (set_attr "type" "mov_imm")]
)
(define_insn "*arm_movsi_insn"
@@ -6465,7 +6232,7 @@
str\\t%1, %0
mov\\t%0, %1"
[(set_attr "length" "2,2,4,4,2,2,2,2,2")
- (set_attr "type" "*,*,*,*,load1,store1,load1,store1,*")
+ (set_attr "type" "mov_reg,mov_imm,multiple,multiple,load1,store1,load1,store1,mov_reg")
(set_attr "pool_range" "*,*,*,*,*,*,1018,*,*")
(set_attr "conds" "set,clob,*,*,nocond,nocond,nocond,nocond,nocond")])
@@ -6621,7 +6388,8 @@
INTVAL (operands[2]));
return \"add\\t%0, %|pc\";
"
- [(set_attr "length" "2")]
+ [(set_attr "length" "2")
+ (set_attr "type" "alu_reg")]
)
(define_insn "pic_add_dot_plus_eight"
@@ -6636,7 +6404,8 @@
INTVAL (operands[2]));
return \"add%?\\t%0, %|pc, %1\";
"
- [(set_attr "predicable" "yes")]
+ [(set_attr "predicable" "yes")
+ (set_attr "type" "alu_reg")]
)
(define_insn "tls_load_dot_plus_eight"
@@ -6651,7 +6420,8 @@
INTVAL (operands[2]));
return \"ldr%?\\t%0, [%|pc, %1]\t\t@ tls_load_dot_plus_eight\";
"
- [(set_attr "predicable" "yes")]
+ [(set_attr "predicable" "yes")
+ (set_attr "type" "load1")]
)
;; PIC references to local variables can generate pic_add_dot_plus_eight
@@ -6712,7 +6482,7 @@
cmp%?\\t%0, #0
sub%.\\t%0, %1, #0"
[(set_attr "conds" "set")
- (set_attr "type" "arlo_imm,arlo_imm")]
+ (set_attr "type" "alus_imm,alus_imm")]
)
;; Subroutine to store a half word from a register into memory.
@@ -7058,7 +6828,7 @@
return \"ldrh %0, %1\";
}"
[(set_attr "length" "2,4,2,2,2,2")
- (set_attr "type" "*,load1,store1,*,*,*")
+ (set_attr "type" "alus_imm,load1,store1,mov_reg,mov_reg,mov_imm")
(set_attr "conds" "clob,nocond,nocond,nocond,nocond,clob")])
@@ -7306,7 +7076,7 @@
mov\\t%0, %1
mov\\t%0, %1"
[(set_attr "length" "2")
- (set_attr "type" "arlo_imm,load1,store1,mov_reg,mov_imm,mov_imm")
+ (set_attr "type" "alu_imm,load1,store1,mov_reg,mov_imm,mov_imm")
(set_attr "pool_range" "*,32,*,*,*,*")
(set_attr "conds" "clob,nocond,nocond,nocond,nocond,clob")])
@@ -7371,7 +7141,7 @@
}
"
[(set_attr "conds" "unconditional")
- (set_attr "type" "load1,store1,mov_reg,mov_reg")
+ (set_attr "type" "load1,store1,mov_reg,multiple")
(set_attr "length" "4,4,4,8")
(set_attr "predicable" "yes")]
)
@@ -7484,7 +7254,7 @@
mov\\t%0, %1
mov\\t%0, %1"
[(set_attr "length" "2")
- (set_attr "type" "*,load1,store1,load1,store1,mov_reg,mov_reg")
+ (set_attr "type" "alus_imm,load1,store1,load1,store1,mov_reg,mov_reg")
(set_attr "pool_range" "*,*,*,1018,*,*,*")
(set_attr "conds" "clob,nocond,nocond,nocond,nocond,nocond,nocond")]
)
@@ -7572,7 +7342,7 @@
}
"
[(set_attr "length" "8,12,16,8,8")
- (set_attr "type" "*,*,*,load2,store2")
+ (set_attr "type" "multiple,multiple,multiple,load2,store2")
(set_attr "arm_pool_range" "*,*,*,1020,*")
(set_attr "thumb2_pool_range" "*,*,*,1018,*")
(set_attr "arm_neg_pool_range" "*,*,*,1004,*")
@@ -7616,7 +7386,7 @@
}
"
[(set_attr "length" "4,2,2,6,4,4")
- (set_attr "type" "*,load2,store2,load2,store2,mov_reg")
+ (set_attr "type" "multiple,load2,store2,load2,store2,multiple")
(set_attr "pool_range" "*,*,*,1018,*,*")]
)
@@ -7924,7 +7694,8 @@
(and (ge (minus (match_dup 3) (pc)) (const_int -2040))
(le (minus (match_dup 3) (pc)) (const_int 2048)))
(const_int 6)
- (const_int 8))))]
+ (const_int 8))))
+ (set_attr "type" "multiple")]
)
(define_insn "cbranchsi4_scratch"
@@ -7960,7 +7731,8 @@
(and (ge (minus (match_dup 3) (pc)) (const_int -2040))
(le (minus (match_dup 3) (pc)) (const_int 2048)))
(const_int 6)
- (const_int 8))))]
+ (const_int 8))))
+ (set_attr "type" "multiple")]
)
(define_insn "*negated_cbranchsi4"
@@ -7995,7 +7767,8 @@
(and (ge (minus (match_dup 3) (pc)) (const_int -2040))
(le (minus (match_dup 3) (pc)) (const_int 2048)))
(const_int 6)
- (const_int 8))))]
+ (const_int 8))))
+ (set_attr "type" "multiple")]
)
(define_insn "*tbit_cbranch"
@@ -8039,7 +7812,8 @@
(and (ge (minus (match_dup 3) (pc)) (const_int -2040))
(le (minus (match_dup 3) (pc)) (const_int 2048)))
(const_int 6)
- (const_int 8))))]
+ (const_int 8))))
+ (set_attr "type" "multiple")]
)
(define_insn "*tlobits_cbranch"
@@ -8083,7 +7857,8 @@
(and (ge (minus (match_dup 3) (pc)) (const_int -2040))
(le (minus (match_dup 3) (pc)) (const_int 2048)))
(const_int 6)
- (const_int 8))))]
+ (const_int 8))))
+ (set_attr "type" "multiple")]
)
(define_insn "*tstsi3_cbranch"
@@ -8120,7 +7895,8 @@
(and (ge (minus (match_dup 2) (pc)) (const_int -2040))
(le (minus (match_dup 2) (pc)) (const_int 2048)))
(const_int 6)
- (const_int 8))))]
+ (const_int 8))))
+ (set_attr "type" "multiple")]
)
(define_insn "*cbranchne_decr1"
@@ -8223,7 +7999,8 @@
(and (ge (minus (match_dup 4) (pc)) (const_int -2038))
(le (minus (match_dup 4) (pc)) (const_int 2048)))
(const_int 8)
- (const_int 10)))])]
+ (const_int 10)))])
+ (set_attr "type" "multiple")]
)
(define_insn "*addsi3_cbranch"
@@ -8304,7 +8081,8 @@
(and (ge (minus (match_dup 5) (pc)) (const_int -2038))
(le (minus (match_dup 5) (pc)) (const_int 2048)))
(const_int 8)
- (const_int 10)))))]
+ (const_int 10)))))
+ (set_attr "type" "multiple")]
)
(define_insn "*addsi3_cbranch_scratch"
@@ -8372,7 +8150,8 @@
(and (ge (minus (match_dup 4) (pc)) (const_int -2040))
(le (minus (match_dup 4) (pc)) (const_int 2048)))
(const_int 6)
- (const_int 8))))]
+ (const_int 8))))
+ (set_attr "type" "multiple")]
)
@@ -8380,46 +8159,48 @@
(define_insn "*arm_cmpsi_insn"
[(set (reg:CC CC_REGNUM)
- (compare:CC (match_operand:SI 0 "s_register_operand" "l,r,r,r")
- (match_operand:SI 1 "arm_add_operand" "Py,r,rI,L")))]
+ (compare:CC (match_operand:SI 0 "s_register_operand" "l,r,r,r,r")
+ (match_operand:SI 1 "arm_add_operand" "Py,r,r,I,L")))]
"TARGET_32BIT"
"@
cmp%?\\t%0, %1
cmp%?\\t%0, %1
cmp%?\\t%0, %1
+ cmp%?\\t%0, %1
cmn%?\\t%0, #%n1"
[(set_attr "conds" "set")
- (set_attr "arch" "t2,t2,any,any")
- (set_attr "length" "2,2,4,4")
+ (set_attr "arch" "t2,t2,any,any,any")
+ (set_attr "length" "2,2,4,4,4")
(set_attr "predicable" "yes")
- (set_attr "type" "*,*,*,arlo_imm")]
+ (set_attr "predicable_short_it" "yes,yes,yes,no,no")
+ (set_attr "type" "alus_imm,alus_reg,alus_reg,alus_imm,alus_imm")]
)
(define_insn "*cmpsi_shiftsi"
[(set (reg:CC CC_REGNUM)
- (compare:CC (match_operand:SI 0 "s_register_operand" "r,r")
+ (compare:CC (match_operand:SI 0 "s_register_operand" "r,r,r")
(match_operator:SI 3 "shift_operator"
- [(match_operand:SI 1 "s_register_operand" "r,r")
- (match_operand:SI 2 "shift_amount_operand" "M,rM")])))]
+ [(match_operand:SI 1 "s_register_operand" "r,r,r")
+ (match_operand:SI 2 "shift_amount_operand" "M,r,M")])))]
"TARGET_32BIT"
- "cmp%?\\t%0, %1%S3"
+ "cmp\\t%0, %1%S3"
[(set_attr "conds" "set")
(set_attr "shift" "1")
- (set_attr "arch" "32,a")
- (set_attr "type" "arlo_shift,arlo_shift_reg")])
+ (set_attr "arch" "32,a,a")
+ (set_attr "type" "alus_shift_imm,alu_shift_reg,alus_shift_imm")])
(define_insn "*cmpsi_shiftsi_swp"
[(set (reg:CC_SWP CC_REGNUM)
(compare:CC_SWP (match_operator:SI 3 "shift_operator"
- [(match_operand:SI 1 "s_register_operand" "r,r")
- (match_operand:SI 2 "shift_amount_operand" "M,rM")])
- (match_operand:SI 0 "s_register_operand" "r,r")))]
+ [(match_operand:SI 1 "s_register_operand" "r,r,r")
+ (match_operand:SI 2 "shift_amount_operand" "M,r,M")])
+ (match_operand:SI 0 "s_register_operand" "r,r,r")))]
"TARGET_32BIT"
"cmp%?\\t%0, %1%S3"
[(set_attr "conds" "set")
(set_attr "shift" "1")
- (set_attr "arch" "32,a")
- (set_attr "type" "arlo_shift,arlo_shift_reg")])
+ (set_attr "arch" "32,a,a")
+ (set_attr "type" "alus_shift_imm,alu_shift_reg,alus_shift_imm")])
(define_insn "*arm_cmpsi_negshiftsi_si"
[(set (reg:CC_Z CC_REGNUM)
@@ -8432,8 +8213,8 @@
"cmn%?\\t%0, %2%S1"
[(set_attr "conds" "set")
(set (attr "type") (if_then_else (match_operand 3 "const_int_operand" "")
- (const_string "arlo_shift")
- (const_string "arlo_shift_reg")))
+ (const_string "alus_shift_imm")
+ (const_string "alus_shift_reg")))
(set_attr "predicable" "yes")]
)
@@ -8475,7 +8256,8 @@
operands[2] = gen_lowpart (SImode, operands[2]);
}
[(set_attr "conds" "set")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*arm_cmpdi_unsigned"
@@ -8503,7 +8285,8 @@
[(set_attr "conds" "set")
(set_attr "enabled_for_depr_it" "yes,yes,no")
(set_attr "arch" "t2,t2,*")
- (set_attr "length" "6,6,8")]
+ (set_attr "length" "6,6,8")
+ (set_attr "type" "multiple")]
)
(define_insn "*arm_cmpdi_zero"
@@ -8513,7 +8296,8 @@
(clobber (match_scratch:SI 1 "=r"))]
"TARGET_32BIT"
"orr%.\\t%1, %Q0, %R0"
- [(set_attr "conds" "set")]
+ [(set_attr "conds" "set")
+ (set_attr "type" "logics_reg")]
)
(define_insn "*thumb_cmpdi_zero"
@@ -8524,7 +8308,8 @@
"TARGET_THUMB1"
"orr\\t%1, %Q0, %R0"
[(set_attr "conds" "set")
- (set_attr "length" "2")]
+ (set_attr "length" "2")
+ (set_attr "type" "logics_reg")]
)
; This insn allows redundant compares to be removed by cse, nothing should
@@ -8538,7 +8323,8 @@
"TARGET_32BIT"
"\\t%@ deleted compare"
[(set_attr "conds" "set")
- (set_attr "length" "0")]
+ (set_attr "length" "0")
+ (set_attr "type" "no_insn")]
)
@@ -8639,7 +8425,8 @@
(const_int 0)))]
""
[(set_attr "conds" "use")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*mov_negscc"
@@ -8657,7 +8444,8 @@
operands[3] = GEN_INT (~0);
}
[(set_attr "conds" "use")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*mov_notscc"
@@ -8676,7 +8464,8 @@
operands[4] = GEN_INT (~0);
}
[(set_attr "conds" "use")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_expand "cstoresi4"
@@ -8881,7 +8670,8 @@
"@
neg\\t%0, %1\;adc\\t%0, %0, %1
neg\\t%2, %1\;adc\\t%0, %1, %2"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "multiple")]
)
(define_insn "*cstoresi_ne0_thumb1_insn"
@@ -8901,7 +8691,8 @@
(match_operand:SI 2 "thumb1_cmp_operand" "lI*h,*r"))))]
"TARGET_THUMB1"
"cmp\\t%1, %2\;sbc\\t%0, %0, %0"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "cstoresi_ltu_thumb1"
@@ -8915,7 +8706,8 @@
(neg:SI (ltu:SI (match_dup 1) (match_dup 2))))
(set (match_dup 0) (neg:SI (match_dup 3)))]
"operands[3] = gen_reg_rtx (SImode);"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "multiple")]
)
;; Used as part of the expansion of thumb les sequence.
@@ -8927,7 +8719,8 @@
(match_operand:SI 4 "thumb1_cmp_operand" "lI"))))]
"TARGET_THUMB1"
"cmp\\t%3, %4\;adc\\t%0, %1, %2"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "multiple")]
)
@@ -9145,7 +8938,8 @@
(and (ge (minus (match_dup 0) (pc)) (const_int -2044))
(le (minus (match_dup 0) (pc)) (const_int 2048))))
(const_int 2)
- (const_int 4)))]
+ (const_int 4)))
+ (set_attr "type" "branch")]
)
(define_insn "*thumb_jump"
@@ -9167,7 +8961,8 @@
(and (ge (minus (match_dup 0) (pc)) (const_int -2044))
(le (minus (match_dup 0) (pc)) (const_int 2048)))
(const_int 2)
- (const_int 4)))]
+ (const_int 4)))
+ (set_attr "type" "branch")]
)
(define_expand "call"
@@ -9651,7 +9446,8 @@
"TARGET_ARM"
"teq\\t%|r0, %|r0\;teq\\t%|pc, %|pc"
[(set_attr "length" "8")
- (set_attr "conds" "set")]
+ (set_attr "conds" "set")
+ (set_attr "type" "multiple")]
)
;; Call subroutine returning any type.
@@ -9842,7 +9638,8 @@
return \"cmp\\t%0, %1\;ldrls\\t%|pc, [%|pc, %0, asl #2]\;b\\t%l3\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set_attr "length" "12")
+ (set_attr "type" "multiple")]
)
(define_expand "thumb1_casesi_internal_pic"
@@ -9873,7 +9670,8 @@
(clobber (reg:SI LR_REGNUM))])]
"TARGET_THUMB1"
"* return thumb1_output_casesi(operands);"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "multiple")]
)
(define_expand "indirect_jump"
@@ -9899,7 +9697,8 @@
(match_operand:SI 0 "s_register_operand" "r"))]
"TARGET_ARM"
"mov%?\\t%|pc, %0\\t%@ indirect register jump"
- [(set_attr "predicable" "yes")]
+ [(set_attr "predicable" "yes")
+ (set_attr "type" "branch")]
)
(define_insn "*load_indirect_jump"
@@ -9920,7 +9719,8 @@
"TARGET_THUMB1"
"mov\\tpc, %0"
[(set_attr "conds" "clob")
- (set_attr "length" "2")]
+ (set_attr "length" "2")
+ (set_attr "type" "branch")]
)
@@ -9939,7 +9739,8 @@
[(set (attr "length")
(if_then_else (eq_attr "is_thumb" "yes")
(const_int 2)
- (const_int 4)))]
+ (const_int 4)))
+ (set_attr "type" "mov_reg")]
)
@@ -9975,7 +9776,7 @@
(if_then_else
(match_operand:SI 3 "mult_operator" "")
(const_string "no") (const_string "yes"))])
- (set_attr "type" "arlo_shift,arlo_shift,arlo_shift,arlo_shift_reg")])
+ (set_attr "type" "alu_shift_imm,alu_shift_imm,alu_shift_imm,alu_shift_reg")])
(define_split
[(set (match_operand:SI 0 "s_register_operand" "")
@@ -10012,7 +9813,7 @@
[(set_attr "conds" "set")
(set_attr "shift" "4")
(set_attr "arch" "32,a")
- (set_attr "type" "arlo_shift,arlo_shift_reg")])
+ (set_attr "type" "alus_shift_imm,alus_shift_reg")])
(define_insn "*arith_shiftsi_compare0_scratch"
[(set (reg:CC_NOOV CC_REGNUM)
@@ -10029,7 +9830,7 @@
[(set_attr "conds" "set")
(set_attr "shift" "4")
(set_attr "arch" "32,a")
- (set_attr "type" "arlo_shift,arlo_shift_reg")])
+ (set_attr "type" "alus_shift_imm,alus_shift_reg")])
(define_insn "*sub_shiftsi"
[(set (match_operand:SI 0 "s_register_operand" "=r,r")
@@ -10042,41 +9843,41 @@
[(set_attr "predicable" "yes")
(set_attr "shift" "3")
(set_attr "arch" "32,a")
- (set_attr "type" "arlo_shift,arlo_shift_reg")])
+ (set_attr "type" "alus_shift_imm,alus_shift_reg")])
(define_insn "*sub_shiftsi_compare0"
[(set (reg:CC_NOOV CC_REGNUM)
(compare:CC_NOOV
- (minus:SI (match_operand:SI 1 "s_register_operand" "r,r")
+ (minus:SI (match_operand:SI 1 "s_register_operand" "r,r,r")
(match_operator:SI 2 "shift_operator"
- [(match_operand:SI 3 "s_register_operand" "r,r")
- (match_operand:SI 4 "shift_amount_operand" "M,rM")]))
+ [(match_operand:SI 3 "s_register_operand" "r,r,r")
+ (match_operand:SI 4 "shift_amount_operand" "M,r,M")]))
(const_int 0)))
- (set (match_operand:SI 0 "s_register_operand" "=r,r")
+ (set (match_operand:SI 0 "s_register_operand" "=r,r,r")
(minus:SI (match_dup 1)
(match_op_dup 2 [(match_dup 3) (match_dup 4)])))]
"TARGET_32BIT"
"sub%.\\t%0, %1, %3%S2"
[(set_attr "conds" "set")
(set_attr "shift" "3")
- (set_attr "arch" "32,a")
- (set_attr "type" "arlo_shift,arlo_shift_reg")])
+ (set_attr "arch" "32,a,a")
+ (set_attr "type" "alus_shift_imm,alus_shift_reg,alus_shift_imm")])
(define_insn "*sub_shiftsi_compare0_scratch"
[(set (reg:CC_NOOV CC_REGNUM)
(compare:CC_NOOV
- (minus:SI (match_operand:SI 1 "s_register_operand" "r,r")
+ (minus:SI (match_operand:SI 1 "s_register_operand" "r,r,r")
(match_operator:SI 2 "shift_operator"
- [(match_operand:SI 3 "s_register_operand" "r,r")
- (match_operand:SI 4 "shift_amount_operand" "M,rM")]))
+ [(match_operand:SI 3 "s_register_operand" "r,r,r")
+ (match_operand:SI 4 "shift_amount_operand" "M,r,M")]))
(const_int 0)))
- (clobber (match_scratch:SI 0 "=r,r"))]
+ (clobber (match_scratch:SI 0 "=r,r,r"))]
"TARGET_32BIT"
"sub%.\\t%0, %1, %3%S2"
[(set_attr "conds" "set")
(set_attr "shift" "3")
- (set_attr "arch" "32,a")
- (set_attr "type" "arlo_shift,arlo_shift_reg")])
+ (set_attr "arch" "32,a,a")
+ (set_attr "type" "alus_shift_imm,alus_shift_reg,alus_shift_imm")])
(define_insn_and_split "*and_scc"
@@ -10104,7 +9905,7 @@
operands[5] = gen_rtx_fmt_ee (rc, VOIDmode, operands[2], const0_rtx);
}
[(set_attr "conds" "use")
- (set_attr "type" "mov_reg")
+ (set_attr "type" "multiple")
(set_attr "length" "8")]
)
@@ -10138,7 +9939,8 @@
operands[5] = gen_rtx_fmt_ee (rc, VOIDmode, operands[2], const0_rtx);
}
[(set_attr "conds" "use")
- (set_attr "length" "4,8")]
+ (set_attr "length" "4,8")
+ (set_attr "type" "logic_imm,multiple")]
)
; A series of splitters for the compare_scc pattern below. Note that
@@ -10240,7 +10042,9 @@
else
rc = reverse_condition (rc);
operands[4] = gen_rtx_fmt_ee (rc, VOIDmode, tmp1, const0_rtx);
-})
+}
+ [(set_attr "type" "multiple")]
+)
;; Attempt to improve the sequence generated by the compare_scc splitters
;; not to use conditional execution.
@@ -10357,7 +10161,7 @@
return \"\";
"
[(set_attr "conds" "use")
- (set_attr "type" "mov_reg")
+ (set_attr "type" "mov_reg,mov_reg,multiple")
(set_attr "length" "4,4,8")]
)
@@ -10384,7 +10188,8 @@
return \"%i5%d4\\t%0, %1, #1\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set_attr "length" "12")
+ (set_attr "type" "multiple")]
)
(define_insn "*cond_sub"
@@ -10402,7 +10207,8 @@
return \"sub%d4\\t%0, %1, #1\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*cmp_ite0"
@@ -10466,6 +10272,7 @@
}"
[(set_attr "conds" "set")
(set_attr "arch" "t2,t2,t2,t2,t2,any,any,any,any")
+ (set_attr "type" "multiple")
(set_attr_alternative "length"
[(const_int 6)
(const_int 8)
@@ -10565,7 +10372,8 @@
(const_int 10))
(if_then_else (eq_attr "is_thumb" "no")
(const_int 8)
- (const_int 10))])]
+ (const_int 10))])
+ (set_attr "type" "multiple")]
)
(define_insn "*cmp_and"
@@ -10646,7 +10454,8 @@
(const_int 10))
(if_then_else (eq_attr "is_thumb" "no")
(const_int 8)
- (const_int 10))])]
+ (const_int 10))])
+ (set_attr "type" "multiple")]
)
(define_insn "*cmp_ior"
@@ -10727,7 +10536,8 @@
(const_int 10))
(if_then_else (eq_attr "is_thumb" "no")
(const_int 8)
- (const_int 10))])]
+ (const_int 10))])
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*ior_scc_scc"
@@ -10756,7 +10566,9 @@
DOM_CC_X_OR_Y),
CC_REGNUM);"
[(set_attr "conds" "clob")
- (set_attr "length" "16")])
+ (set_attr "length" "16")
+ (set_attr "type" "multiple")]
+)
; If the above pattern is followed by a CMP insn, then the compare is
; redundant, since we can rework the conditional instruction that follows.
@@ -10784,7 +10596,9 @@
(set (match_dup 7) (ne:SI (match_dup 0) (const_int 0)))]
""
[(set_attr "conds" "set")
- (set_attr "length" "16")])
+ (set_attr "length" "16")
+ (set_attr "type" "multiple")]
+)
(define_insn_and_split "*and_scc_scc"
[(set (match_operand:SI 0 "s_register_operand" "=Ts")
@@ -10814,7 +10628,9 @@
DOM_CC_X_AND_Y),
CC_REGNUM);"
[(set_attr "conds" "clob")
- (set_attr "length" "16")])
+ (set_attr "length" "16")
+ (set_attr "type" "multiple")]
+)
; If the above pattern is followed by a CMP insn, then the compare is
; redundant, since we can rework the conditional instruction that follows.
@@ -10842,7 +10658,9 @@
(set (match_dup 7) (ne:SI (match_dup 0) (const_int 0)))]
""
[(set_attr "conds" "set")
- (set_attr "length" "16")])
+ (set_attr "length" "16")
+ (set_attr "type" "multiple")]
+)
;; If there is no dominance in the comparison, then we can still save an
;; instruction in the AND case, since we can know that the second compare
@@ -10876,7 +10694,9 @@
operands[8] = gen_rtx_COMPARE (GET_MODE (operands[7]), operands[4],
operands[5]);"
[(set_attr "conds" "clob")
- (set_attr "length" "20")])
+ (set_attr "length" "20")
+ (set_attr "type" "multiple")]
+)
(define_split
[(set (reg:CC_NOOV CC_REGNUM)
@@ -10987,7 +10807,8 @@
FAIL;
}
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set_attr "length" "12")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "movcond_addsi"
@@ -11025,7 +10846,8 @@
}
"
[(set_attr "conds" "clob")
- (set_attr "enabled_for_depr_it" "no,yes,yes")]
+ (set_attr "enabled_for_depr_it" "no,yes,yes")
+ (set_attr "type" "multiple")]
)
(define_insn "movcond"
@@ -11088,7 +10910,8 @@
return \"\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "8,8,12")]
+ (set_attr "length" "8,8,12")
+ (set_attr "type" "multiple")]
)
;; ??? The patterns below need checking for Thumb-2 usefulness.
@@ -11106,7 +10929,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_plus_move"
@@ -11128,11 +10952,11 @@
(set_attr "length" "4,4,8,8")
(set_attr_alternative "type"
[(if_then_else (match_operand 3 "const_int_operand" "")
- (const_string "arlo_imm" )
- (const_string "*"))
- (const_string "arlo_imm")
- (const_string "*")
- (const_string "*")])]
+ (const_string "alu_imm" )
+ (const_string "alu_reg"))
+ (const_string "alu_imm")
+ (const_string "alu_reg")
+ (const_string "alu_reg")])]
)
(define_insn "*ifcompare_move_plus"
@@ -11148,7 +10972,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_move_plus"
@@ -11168,13 +10993,7 @@
sub%D4\\t%0, %2, #%n3\;mov%d4\\t%0, %1"
[(set_attr "conds" "use")
(set_attr "length" "4,4,8,8")
- (set_attr_alternative "type"
- [(if_then_else (match_operand 3 "const_int_operand" "")
- (const_string "arlo_imm" )
- (const_string "*"))
- (const_string "arlo_imm")
- (const_string "*")
- (const_string "*")])]
+ (set_attr "type" "alu_reg,alu_imm,multiple,multiple")]
)
(define_insn "*ifcompare_arith_arith"
@@ -11192,7 +11011,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set_attr "length" "12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_arith_arith"
@@ -11208,7 +11028,8 @@
"TARGET_ARM"
"%I6%d5\\t%0, %1, %2\;%I7%D5\\t%0, %3, %4"
[(set_attr "conds" "use")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn "*ifcompare_arith_move"
@@ -11249,7 +11070,8 @@
return \"\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_arith_move"
@@ -11266,7 +11088,7 @@
%I5%d4\\t%0, %2, %3\;mov%D4\\t%0, %1"
[(set_attr "conds" "use")
(set_attr "length" "4,8")
- (set_attr "type" "*,*")]
+ (set_attr "type" "alu_shift_reg,multiple")]
)
(define_insn "*ifcompare_move_arith"
@@ -11308,7 +11130,8 @@
return \"%I7%D6\\t%0, %2, %3\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_move_arith"
@@ -11326,7 +11149,7 @@
%I5%D4\\t%0, %2, %3\;mov%d4\\t%0, %1"
[(set_attr "conds" "use")
(set_attr "length" "4,8")
- (set_attr "type" "*,*")]
+ (set_attr "type" "alu_shift_reg,multiple")]
)
(define_insn "*ifcompare_move_not"
@@ -11342,7 +11165,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_move_not"
@@ -11359,7 +11183,8 @@
mvn%d4\\t%0, #%B1\;mvn%D4\\t%0, %2"
[(set_attr "conds" "use")
(set_attr "type" "mvn_reg")
- (set_attr "length" "4,8,8")]
+ (set_attr "length" "4,8,8")
+ (set_attr "type" "mvn_reg,multiple,multiple")]
)
(define_insn "*ifcompare_not_move"
@@ -11375,7 +11200,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_not_move"
@@ -11391,7 +11217,7 @@
mov%D4\\t%0, %1\;mvn%d4\\t%0, %2
mvn%D4\\t%0, #%B1\;mvn%d4\\t%0, %2"
[(set_attr "conds" "use")
- (set_attr "type" "mvn_reg")
+ (set_attr "type" "mvn_reg,multiple,multiple")
(set_attr "length" "4,8,8")]
)
@@ -11409,7 +11235,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_shift_move"
@@ -11429,9 +11256,7 @@
[(set_attr "conds" "use")
(set_attr "shift" "2")
(set_attr "length" "4,8,8")
- (set (attr "type") (if_then_else (match_operand 3 "const_int_operand" "")
- (const_string "mov_shift")
- (const_string "mov_shift_reg")))]
+ (set_attr "type" "mov_shift_reg,multiple,multiple")]
)
(define_insn "*ifcompare_move_shift"
@@ -11448,7 +11273,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_move_shift"
@@ -11468,9 +11294,7 @@
[(set_attr "conds" "use")
(set_attr "shift" "2")
(set_attr "length" "4,8,8")
- (set (attr "type") (if_then_else (match_operand 3 "const_int_operand" "")
- (const_string "mov_shift")
- (const_string "mov_shift_reg")))]
+ (set_attr "type" "mov_shift_reg,multiple,multiple")]
)
(define_insn "*ifcompare_shift_shift"
@@ -11489,7 +11313,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set_attr "length" "12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_shift_shift"
@@ -11529,7 +11354,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set_attr "length" "12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_not_arith"
@@ -11562,7 +11388,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set_attr "length" "12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_arith_not"
@@ -11577,7 +11404,7 @@
"TARGET_ARM"
"mvn%D5\\t%0, %1\;%I6%d5\\t%0, %2, %3"
[(set_attr "conds" "use")
- (set_attr "type" "mvn_reg")
+ (set_attr "type" "multiple")
(set_attr "length" "8")]
)
@@ -11593,7 +11420,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_neg_move"
@@ -11609,7 +11437,8 @@
mov%D4\\t%0, %1\;rsb%d4\\t%0, %2, #0
mvn%D4\\t%0, #%B1\;rsb%d4\\t%0, %2, #0"
[(set_attr "conds" "use")
- (set_attr "length" "4,8,8")]
+ (set_attr "length" "4,8,8")
+ (set_attr "type" "logic_shift_imm,multiple,multiple")]
)
(define_insn "*ifcompare_move_neg"
@@ -11624,7 +11453,8 @@
"TARGET_ARM"
"#"
[(set_attr "conds" "clob")
- (set_attr "length" "8,12")]
+ (set_attr "length" "8,12")
+ (set_attr "type" "multiple")]
)
(define_insn "*if_move_neg"
@@ -11640,7 +11470,8 @@
mov%d4\\t%0, %1\;rsb%D4\\t%0, %2, #0
mvn%d4\\t%0, #%B1\;rsb%D4\\t%0, %2, #0"
[(set_attr "conds" "use")
- (set_attr "length" "4,8,8")]
+ (set_attr "length" "4,8,8")
+ (set_attr "type" "logic_shift_imm,multiple,multiple")]
)
(define_insn "*arith_adjacentmem"
@@ -11838,7 +11669,8 @@
[(unspec_volatile [(const_int 0)] VUNSPEC_THUMB1_INTERWORK)]
"TARGET_THUMB1"
"* return thumb1_output_interwork ();"
- [(set_attr "length" "8")]
+ [(set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
;; Note - although unspec_volatile's USE all hard registers,
@@ -12025,7 +11857,7 @@
mvn%D4\\t%0, %2
mov%d4\\t%0, %1\;mvn%D4\\t%0, %2"
[(set_attr "conds" "use")
- (set_attr "type" "mvn_reg")
+ (set_attr "type" "mvn_reg,multiple")
(set_attr "length" "4,8")]
)
@@ -12045,7 +11877,8 @@
return \"mvnne\\t%0, #0\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn "*not_signextract_onebit"
@@ -12063,7 +11896,8 @@
return \"movne\\t%0, #0\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set_attr "length" "12")
+ (set_attr "type" "multiple")]
)
;; ??? The above patterns need auditing for Thumb-2
@@ -12125,7 +11959,8 @@
UNSPEC_PRLG_STK))]
""
""
- [(set_attr "length" "0")]
+ [(set_attr "length" "0")
+ (set_attr "type" "block")]
)
;; Pop (as used in epilogue RTL)
@@ -12255,6 +12090,7 @@
assemble_align (32);
return \"\";
"
+ [(set_attr "type" "no_insn")]
)
(define_insn "align_8"
@@ -12264,6 +12100,7 @@
assemble_align (64);
return \"\";
"
+ [(set_attr "type" "no_insn")]
)
(define_insn "consttable_end"
@@ -12273,6 +12110,7 @@
making_const_table = FALSE;
return \"\";
"
+ [(set_attr "type" "no_insn")]
)
(define_insn "consttable_1"
@@ -12284,7 +12122,8 @@
assemble_zeros (3);
return \"\";
"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "no_insn")]
)
(define_insn "consttable_2"
@@ -12297,7 +12136,8 @@
assemble_zeros (2);
return \"\";
"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "no_insn")]
)
(define_insn "consttable_4"
@@ -12333,7 +12173,8 @@
}
return \"\";
}"
- [(set_attr "length" "4")]
+ [(set_attr "length" "4")
+ (set_attr "type" "no_insn")]
)
(define_insn "consttable_8"
@@ -12357,7 +12198,8 @@
}
return \"\";
}"
- [(set_attr "length" "8")]
+ [(set_attr "length" "8")
+ (set_attr "type" "no_insn")]
)
(define_insn "consttable_16"
@@ -12381,7 +12223,8 @@
}
return \"\";
}"
- [(set_attr "length" "16")]
+ [(set_attr "length" "16")
+ (set_attr "type" "no_insn")]
)
;; Miscellaneous Thumb patterns
@@ -12409,7 +12252,8 @@
(use (label_ref (match_operand 1 "" "")))]
"TARGET_THUMB1"
"mov\\t%|pc, %0"
- [(set_attr "length" "2")]
+ [(set_attr "length" "2")
+ (set_attr "type" "no_insn")]
)
;; V5 Instructions,
@@ -12470,7 +12314,8 @@
[(unspec:SI [(match_operand:SI 0 "register_operand" "")] UNSPEC_REGISTER_USE)]
""
"%@ %0 needed"
- [(set_attr "length" "0")]
+ [(set_attr "length" "0")
+ (set_attr "type" "no_insn")]
)
@@ -12518,6 +12363,7 @@
thumb_set_return_address (operands[0], operands[1]);
DONE;
}"
+ [(set_attr "type" "mov_reg")]
)
@@ -12528,7 +12374,8 @@
(unspec:SI [(const_int 0)] UNSPEC_TLS))]
"TARGET_HARD_TP"
"mrc%?\\tp15, 0, %0, c13, c0, 3\\t@ load_tp_hard"
- [(set_attr "predicable" "yes")]
+ [(set_attr "predicable" "yes")
+ (set_attr "type" "mrs")]
)
;; Doesn't clobber R1-R3. Must use r0 for the first operand.
@@ -12539,7 +12386,8 @@
(clobber (reg:CC CC_REGNUM))]
"TARGET_SOFT_TP"
"bl\\t__aeabi_read_tp\\t@ load_tp_soft"
- [(set_attr "conds" "clob")]
+ [(set_attr "conds" "clob")
+ (set_attr "type" "branch")]
)
;; tls descriptor call
@@ -12558,7 +12406,8 @@
return "bl\\t%c0(tlscall)";
}
[(set_attr "conds" "clob")
- (set_attr "length" "4")]
+ (set_attr "length" "4")
+ (set_attr "type" "branch")]
)
;; For thread pointer builtin
@@ -12584,7 +12433,8 @@
"movt%?\t%0, %L1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "length" "4")]
+ (set_attr "length" "4")
+ (set_attr "type" "mov_imm")]
)
(define_insn "*arm_rev"
@@ -12596,7 +12446,8 @@
rev%?\t%0, %1
rev%?\t%0, %1"
[(set_attr "arch" "t1,t2,32")
- (set_attr "length" "2,2,4")]
+ (set_attr "length" "2,2,4")
+ (set_attr "type" "rev")]
)
(define_expand "arm_legacy_rev"
@@ -12696,7 +12547,8 @@
revsh%?\t%0, %1
revsh%?\t%0, %1"
[(set_attr "arch" "t1,t2,32")
- (set_attr "length" "2,2,4")]
+ (set_attr "length" "2,2,4")
+ (set_attr "type" "rev")]
)
(define_insn "*arm_rev16"
@@ -12708,7 +12560,8 @@
rev16%?\t%0, %1
rev16%?\t%0, %1"
[(set_attr "arch" "t1,t2,32")
- (set_attr "length" "2,2,4")]
+ (set_attr "length" "2,2,4")
+ (set_attr "type" "rev")]
)
(define_expand "bswaphi2"
diff --git a/gcc/config/arm/arm1020e.md b/gcc/config/arm/arm1020e.md
index 317e4cd4ad6..7df84d52481 100644
--- a/gcc/config/arm/arm1020e.md
+++ b/gcc/config/arm/arm1020e.md
@@ -66,14 +66,21 @@
;; ALU operations with no shifted operand
(define_insn_reservation "1020alu_op" 1
(and (eq_attr "tune" "arm1020e,arm1022e")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
- mov_imm,mov_reg,mvn_imm,mvn_reg"))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
+ mov_imm,mov_reg,mvn_imm,mvn_reg,\
+ multiple,no_insn"))
"1020a_e,1020a_m,1020a_w")
;; ALU operations with a shift-by-constant operand
(define_insn_reservation "1020alu_shift_op" 1
(and (eq_attr "tune" "arm1020e,arm1022e")
- (eq_attr "type" "extend,arlo_shift,mov_shift,mvn_shift"))
+ (eq_attr "type" "alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ extend,mov_shift,mvn_shift"))
"1020a_e,1020a_m,1020a_w")
;; ALU operations with a shift-by-register operand
@@ -82,7 +89,9 @@
;; the execute stage.
(define_insn_reservation "1020alu_shift_reg_op" 2
(and (eq_attr "tune" "arm1020e,arm1022e")
- (eq_attr "type" "arlo_shift_reg,mov_shift_reg,mvn_shift_reg"))
+ (eq_attr "type" "alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
+ mov_shift_reg,mvn_shift_reg"))
"1020a_e*2,1020a_m,1020a_w")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -270,7 +279,7 @@
;; first execute state. We model this by using 1020a_e in the first cycle.
(define_insn_reservation "v10_ffarith" 5
(and (eq_attr "vfp10" "yes")
- (eq_attr "type" "fcpys,ffariths,ffarithd,fcmps,fcmpd"))
+ (eq_attr "type" "fmov,ffariths,ffarithd,fcmps,fcmpd"))
"1020a_e+v10_fmac")
(define_insn_reservation "v10_farith" 5
@@ -280,7 +289,7 @@
(define_insn_reservation "v10_cvt" 5
(and (eq_attr "vfp10" "yes")
- (eq_attr "type" "f_cvt"))
+ (eq_attr "type" "f_cvt,f_cvti2f,f_cvtf2i"))
"1020a_e+v10_fmac")
(define_insn_reservation "v10_fmul" 6
@@ -290,12 +299,12 @@
(define_insn_reservation "v10_fdivs" 18
(and (eq_attr "vfp10" "yes")
- (eq_attr "type" "fdivs"))
+ (eq_attr "type" "fdivs, fsqrts"))
"1020a_e+v10_ds*14")
(define_insn_reservation "v10_fdivd" 32
(and (eq_attr "vfp10" "yes")
- (eq_attr "type" "fdivd"))
+ (eq_attr "type" "fdivd, fsqrtd"))
"1020a_e+v10_fmac+v10_ds*28")
(define_insn_reservation "v10_floads" 4
@@ -316,7 +325,7 @@
(define_insn_reservation "v10_c2v" 4
(and (eq_attr "vfp10" "yes")
- (eq_attr "type" "r_2_f"))
+ (eq_attr "type" "f_mcr,f_mcrr"))
"1020a_e+1020l_e+v10_ls1,v10_ls2")
(define_insn_reservation "v10_fstores" 1
@@ -331,7 +340,7 @@
(define_insn_reservation "v10_v2c" 1
(and (eq_attr "vfp10" "yes")
- (eq_attr "type" "f_2_r"))
+ (eq_attr "type" "f_mrc,f_mrrc"))
"1020a_e+1020l_e,1020l_m,1020l_w")
(define_insn_reservation "v10_to_cpsr" 2
diff --git a/gcc/config/arm/arm1026ejs.md b/gcc/config/arm/arm1026ejs.md
index 9112122d67b..f5a0447f5da 100644
--- a/gcc/config/arm/arm1026ejs.md
+++ b/gcc/config/arm/arm1026ejs.md
@@ -66,14 +66,21 @@
;; ALU operations with no shifted operand
(define_insn_reservation "alu_op" 1
(and (eq_attr "tune" "arm1026ejs")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
- mov_imm,mov_reg,mvn_imm,mvn_reg"))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
+ mov_imm,mov_reg,mvn_imm,mvn_reg,\
+ multiple,no_insn"))
"a_e,a_m,a_w")
;; ALU operations with a shift-by-constant operand
(define_insn_reservation "alu_shift_op" 1
(and (eq_attr "tune" "arm1026ejs")
- (eq_attr "type" "extend,arlo_shift,mov_shift,mvn_shift"))
+ (eq_attr "type" "alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ extend,mov_shift,mvn_shift"))
"a_e,a_m,a_w")
;; ALU operations with a shift-by-register operand
@@ -82,7 +89,9 @@
;; the execute stage.
(define_insn_reservation "alu_shift_reg_op" 2
(and (eq_attr "tune" "arm1026ejs")
- (eq_attr "type" "arlo_shift_reg,mov_shift_reg,mvn_shift_reg"))
+ (eq_attr "type" "alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
+ mov_shift_reg,mvn_shift_reg"))
"a_e*2,a_m,a_w")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
diff --git a/gcc/config/arm/arm1136jfs.md b/gcc/config/arm/arm1136jfs.md
index f83b9d14f2b..f6e0b8da8b6 100644
--- a/gcc/config/arm/arm1136jfs.md
+++ b/gcc/config/arm/arm1136jfs.md
@@ -75,14 +75,21 @@
;; ALU operations with no shifted operand
(define_insn_reservation "11_alu_op" 2
(and (eq_attr "tune" "arm1136js,arm1136jfs")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
- mov_imm,mov_reg,mvn_imm,mvn_reg"))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
+ mov_imm,mov_reg,mvn_imm,mvn_reg,\
+ multiple,no_insn"))
"e_1,e_2,e_3,e_wb")
;; ALU operations with a shift-by-constant operand
(define_insn_reservation "11_alu_shift_op" 2
(and (eq_attr "tune" "arm1136js,arm1136jfs")
- (eq_attr "type" "extend,arlo_shift,mov_shift,mvn_shift"))
+ (eq_attr "type" "alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ extend,mov_shift,mvn_shift"))
"e_1,e_2,e_3,e_wb")
;; ALU operations with a shift-by-register operand
@@ -91,7 +98,9 @@
;; the shift stage.
(define_insn_reservation "11_alu_shift_reg_op" 3
(and (eq_attr "tune" "arm1136js,arm1136jfs")
- (eq_attr "type" "arlo_shift_reg,mov_shift_reg,mvn_shift_reg"))
+ (eq_attr "type" "alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
+ mov_shift_reg,mvn_shift_reg"))
"e_1*2,e_2,e_3,e_wb")
;; alu_ops can start sooner, if there is no shifter dependency
diff --git a/gcc/config/arm/arm926ejs.md b/gcc/config/arm/arm926ejs.md
index 8c38e86ce66..d2b0e9e3cf8 100644
--- a/gcc/config/arm/arm926ejs.md
+++ b/gcc/config/arm/arm926ejs.md
@@ -58,9 +58,16 @@
;; ALU operations with no shifted operand
(define_insn_reservation "9_alu_op" 1
(and (eq_attr "tune" "arm926ejs")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,extend,arlo_shift,\
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ shift_imm,shift_reg,extend,\
mov_imm,mov_reg,mov_shift,\
- mvn_imm,mvn_reg,mvn_shift"))
+ mvn_imm,mvn_reg,mvn_shift,\
+ multiple,no_insn"))
"e,m,w")
;; ALU operations with a shift-by-register operand
@@ -69,7 +76,9 @@
;; the execute stage.
(define_insn_reservation "9_alu_shift_reg_op" 2
(and (eq_attr "tune" "arm926ejs")
- (eq_attr "type" "arlo_shift_reg,mov_shift_reg,mvn_shift_reg"))
+ (eq_attr "type" "alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
+ mov_shift_reg,mvn_shift_reg"))
"e*2,m,w")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
diff --git a/gcc/config/arm/cortex-a15-neon.md b/gcc/config/arm/cortex-a15-neon.md
index bfa2f5e8818..ebb6b66f782 100644
--- a/gcc/config/arm/cortex-a15-neon.md
+++ b/gcc/config/arm/cortex-a15-neon.md
@@ -17,6 +17,199 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
+(define_attr "cortex_a15_neon_type"
+ "neon_abd, neon_abd_q, neon_arith_acc, neon_arith_acc_q,
+ neon_arith_basic, neon_arith_complex,
+ neon_reduc_add_acc, neon_multiply, neon_multiply_q,
+ neon_multiply_long, neon_mla, neon_mla_q, neon_mla_long,
+ neon_sat_mla_long, neon_shift_acc, neon_shift_imm_basic,\
+ neon_shift_imm_complex,
+ neon_shift_reg_basic, neon_shift_reg_basic_q, neon_shift_reg_complex,
+ neon_shift_reg_complex_q, neon_fp_negabs, neon_fp_arith,
+ neon_fp_arith_q, neon_fp_cvt_int,
+ neon_fp_cvt_int_q, neon_fp_cvt16, neon_fp_minmax, neon_fp_mul,
+ neon_fp_mul_q, neon_fp_mla, neon_fp_mla_q, neon_fp_recpe_rsqrte,
+ neon_fp_recpe_rsqrte_q, neon_bitops, neon_bitops_q, neon_from_gp,
+ neon_from_gp_q, neon_move, neon_tbl3_tbl4, neon_zip_q, neon_to_gp,
+ neon_load_a, neon_load_b, neon_load_c, neon_load_d, neon_load_e,
+ neon_load_f, neon_store_a, neon_store_b, neon_store_c, neon_store_d,
+ neon_store_e, neon_store_f, neon_store_g, neon_store_h,
+ unknown"
+ (cond [
+ (eq_attr "type" "neon_abd, neon_abd_long")
+ (const_string "neon_abd")
+ (eq_attr "type" "neon_abd_q")
+ (const_string "neon_abd_q")
+ (eq_attr "type" "neon_arith_acc, neon_reduc_add_acc,\
+ neon_reduc_add_acc_q")
+ (const_string "neon_arith_acc")
+ (eq_attr "type" "neon_arith_acc_q")
+ (const_string "neon_arith_acc_q")
+ (eq_attr "type" "neon_add, neon_add_q, neon_add_long,\
+ neon_add_widen, neon_neg, neon_neg_q,\
+ neon_reduc_add, neon_reduc_add_q,\
+ neon_reduc_add_long, neon_sub, neon_sub_q,\
+ neon_sub_long, neon_sub_widen, neon_logic,\
+ neon_logic_q, neon_tst, neon_tst_q")
+ (const_string "neon_arith_basic")
+ (eq_attr "type" "neon_abs, neon_abs_q, neon_add_halve_narrow_q,\
+ neon_add_halve, neon_add_halve_q,\
+ neon_sub_halve, neon_sub_halve_q, neon_qabs,\
+ neon_qabs_q, neon_qadd, neon_qadd_q, neon_qneg,\
+ neon_qneg_q, neon_qsub, neon_qsub_q,\
+ neon_sub_halve_narrow_q,\
+ neon_compare, neon_compare_q,\
+ neon_compare_zero, neon_compare_zero_q,\
+ neon_minmax, neon_minmax_q, neon_reduc_minmax,\
+ neon_reduc_minmax_q")
+ (const_string "neon_arith_complex")
+
+ (eq_attr "type" "neon_mul_b, neon_mul_h, neon_mul_s,\
+ neon_mul_h_scalar, neon_mul_s_scalar,\
+ neon_sat_mul_b, neon_sat_mul_h,\
+ neon_sat_mul_s, neon_sat_mul_h_scalar,\
+ neon_sat_mul_s_scalar,\
+ neon_mul_b_long, neon_mul_h_long,\
+ neon_mul_s_long,\
+ neon_mul_h_scalar_long, neon_mul_s_scalar_long,\
+ neon_sat_mul_b_long, neon_sat_mul_h_long,\
+ neon_sat_mul_s_long, neon_sat_mul_h_scalar_long,\
+ neon_sat_mul_s_scalar_long")
+ (const_string "neon_multiply")
+ (eq_attr "type" "neon_mul_b_q, neon_mul_h_q, neon_mul_s_q,\
+ neon_mul_h_scalar_q, neon_mul_s_scalar_q,\
+ neon_sat_mul_b_q, neon_sat_mul_h_q,\
+ neon_sat_mul_s_q, neon_sat_mul_h_scalar_q,\
+ neon_sat_mul_s_scalar_q")
+ (const_string "neon_multiply_q")
+ (eq_attr "type" "neon_mla_b, neon_mla_h, neon_mla_s,\
+ neon_mla_h_scalar, neon_mla_s_scalar,\
+ neon_mla_b_long, neon_mla_h_long,\
+ neon_mla_s_long,\
+ neon_mla_h_scalar_long, neon_mla_s_scalar_long")
+ (const_string "neon_mla")
+ (eq_attr "type" "neon_mla_b_q, neon_mla_h_q, neon_mla_s_q,\
+ neon_mla_h_scalar_q, neon_mla_s_scalar_q")
+ (const_string "neon_mla_q")
+ (eq_attr "type" "neon_sat_mla_b_long, neon_sat_mla_h_long,\
+ neon_sat_mla_s_long, neon_sat_mla_h_scalar_long,\
+ neon_sat_mla_s_scalar_long")
+ (const_string "neon_sat_mla_long")
+
+ (eq_attr "type" "neon_shift_acc, neon_shift_acc_q")
+ (const_string "neon_shift_acc")
+ (eq_attr "type" "neon_shift_imm, neon_shift_imm_q,\
+ neon_shift_imm_narrow_q, neon_shift_imm_long")
+ (const_string "neon_shift_imm_basic")
+ (eq_attr "type" "neon_sat_shift_imm, neon_sat_shift_imm_q,\
+ neon_sat_shift_imm_narrow_q")
+ (const_string "neon_shift_imm_complex")
+ (eq_attr "type" "neon_shift_reg")
+ (const_string "neon_shift_reg_basic")
+ (eq_attr "type" "neon_shift_reg_q")
+ (const_string "neon_shift_reg_basic_q")
+ (eq_attr "type" "neon_sat_shift_reg")
+ (const_string "neon_shift_reg_complex")
+ (eq_attr "type" "neon_sat_shift_reg_q")
+ (const_string "neon_shift_reg_complex_q")
+
+ (eq_attr "type" "neon_fp_neg_s, neon_fp_neg_s_q,\
+ neon_fp_abs_s, neon_fp_abs_s_q")
+ (const_string "neon_fp_negabs")
+ (eq_attr "type" "neon_fp_addsub_s, neon_fp_abd_s,\
+ neon_fp_reduc_add_s, neon_fp_compare_s,\
+ neon_fp_minmax_s, neon_fp_minmax_s_q,\
+ neon_fp_reduc_minmax_s, neon_fp_reduc_minmax_s_q")
+ (const_string "neon_fp_arith")
+ (eq_attr "type" "neon_fp_addsub_s_q, neon_fp_abd_s_q,\
+ neon_fp_reduc_add_s_q, neon_fp_compare_s_q")
+ (const_string "neon_fp_arith_q")
+ (eq_attr "type" "neon_fp_to_int_s, neon_int_to_fp_s")
+ (const_string "neon_fp_cvt_int")
+ (eq_attr "type" "neon_fp_to_int_s_q, neon_int_to_fp_s_q")
+ (const_string "neon_fp_cvt_int_q")
+ (eq_attr "type" "neon_fp_cvt_narrow_s_q, neon_fp_cvt_widen_h")
+ (const_string "neon_fp_cvt16")
+ (eq_attr "type" "neon_fp_mul_s, neon_fp_mul_s_scalar")
+ (const_string "neon_fp_mul")
+ (eq_attr "type" "neon_fp_mul_s_q, neon_fp_mul_s_scalar_q")
+ (const_string "neon_fp_mul_q")
+ (eq_attr "type" "neon_fp_mla_s, neon_fp_mla_s_scalar")
+ (const_string "neon_fp_mla")
+ (eq_attr "type" "neon_fp_mla_s_q, neon_fp_mla_s_scalar_q")
+ (const_string "neon_fp_mla_q")
+ (eq_attr "type" "neon_fp_recpe_s, neon_fp_rsqrte_s")
+ (const_string "neon_fp_recpe_rsqrte")
+ (eq_attr "type" "neon_fp_recpe_s_q, neon_fp_rsqrte_s_q")
+ (const_string "neon_fp_recpe_rsqrte_q")
+
+ (eq_attr "type" "neon_bsl, neon_cls, neon_cnt,\
+ neon_rev, neon_permute,\
+ neon_tbl1, neon_tbl2, neon_zip,\
+ neon_dup, neon_dup_q, neon_ext, neon_ext_q,\
+ neon_move, neon_move_q, neon_move_narrow_q")
+ (const_string "neon_bitops")
+ (eq_attr "type" "neon_bsl_q, neon_cls_q, neon_cnt_q,\
+ neon_rev_q, neon_permute_q")
+ (const_string "neon_bitops_q")
+ (eq_attr "type" "neon_from_gp")
+ (const_string "neon_from_gp")
+ (eq_attr "type" "neon_from_gp_q")
+ (const_string "neon_from_gp_q")
+ (eq_attr "type" "neon_tbl3, neon_tbl4")
+ (const_string "neon_tbl3_tbl4")
+ (eq_attr "type" "neon_zip_q")
+ (const_string "neon_zip_q")
+ (eq_attr "type" "neon_to_gp, neon_to_gp_q")
+ (const_string "neon_to_gp")
+
+ (eq_attr "type" "f_loads, f_loadd,\
+ neon_load1_1reg, neon_load1_1reg_q,\
+ neon_load1_2reg, neon_load1_2reg_q")
+ (const_string "neon_load_a")
+ (eq_attr "type" "neon_load1_3reg, neon_load1_3reg_q,\
+ neon_load1_4reg, neon_load1_4reg_q")
+ (const_string "neon_load_b")
+ (eq_attr "type" "neon_load1_one_lane, neon_load1_one_lane_q,\
+ neon_load1_all_lanes, neon_load1_all_lanes_q,\
+ neon_load2_2reg, neon_load2_2reg_q,\
+ neon_load2_all_lanes, neon_load2_all_lanes_q")
+ (const_string "neon_load_c")
+ (eq_attr "type" "neon_load2_4reg, neon_load2_4reg_q,\
+ neon_load3_3reg, neon_load3_3reg_q,\
+ neon_load3_one_lane, neon_load3_one_lane_q,\
+ neon_load4_4reg, neon_load4_4reg_q")
+ (const_string "neon_load_d")
+ (eq_attr "type" "neon_load2_one_lane, neon_load2_one_lane_q,\
+ neon_load3_all_lanes, neon_load3_all_lanes_q,\
+ neon_load4_all_lanes, neon_load4_all_lanes_q")
+ (const_string "neon_load_e")
+ (eq_attr "type" "neon_load4_one_lane, neon_load4_one_lane_q")
+ (const_string "neon_load_f")
+
+ (eq_attr "type" "f_stores, f_stored,\
+ neon_store1_1reg, neon_store1_1reg_q")
+ (const_string "neon_store_a")
+ (eq_attr "type" "neon_store1_2reg, neon_store1_2reg_q")
+ (const_string "neon_store_b")
+ (eq_attr "type" "neon_store1_3reg, neon_store1_3reg_q")
+ (const_string "neon_store_c")
+ (eq_attr "type" "neon_store1_4reg, neon_store1_4reg_q")
+ (const_string "neon_store_d")
+ (eq_attr "type" "neon_store1_one_lane, neon_store1_one_lane_q,\
+ neon_store2_one_lane, neon_store2_one_lane_q")
+ (const_string "neon_store_e")
+ (eq_attr "type" "neon_store2_2reg, neon_store2_2reg_q,\
+ neon_store3_one_lane, neon_store3_one_lane_q,\
+ neon_store4_one_lane, neon_store4_one_lane_q")
+ (const_string "neon_store_f")
+ (eq_attr "type" "neon_store2_4reg, neon_store2_4reg_q,\
+ neon_store4_4reg, neon_store4_4reg_q")
+ (const_string "neon_store_g")
+ (eq_attr "type" "neon_store3_3reg, neon_store3_3reg_q")
+ (const_string "neon_store_h")]
+ (const_string "unknown")))
+
(define_automaton "cortex_a15_neon")
;; Dispatch unit.
@@ -91,392 +284,316 @@
(define_reservation "ca15_cx_perm" "ca15_cx_ij|ca15_cx_ik")
(define_reservation "ca15_cx_perm_2" "ca15_cx_ij+ca15_cx_ik")
-(define_insn_reservation "cortex_a15_neon_int_1" 5
- (and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_int_1"))
- "ca15_issue1,ca15_cx_ialu")
+;; Integer Arithmetic Instructions.
-(define_insn_reservation "cortex_a15_neon_int_2" 5
+(define_insn_reservation "cortex_a15_neon_abd" 5
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_int_2"))
+ (eq_attr "cortex_a15_neon_type" "neon_abd"))
"ca15_issue1,ca15_cx_ialu")
-(define_insn_reservation "cortex_a15_neon_int_3" 5
+(define_insn_reservation "cortex_a15_neon_abd_q" 5
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_int_3"))
- "ca15_issue1,ca15_cx_ialu")
+ (eq_attr "cortex_a15_neon_type" "neon_abd_q"))
+ "ca15_issue2,ca15_cx_ialu*2")
-(define_insn_reservation "cortex_a15_neon_int_4" 5
+(define_insn_reservation "cortex_a15_neon_aba" 7
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_int_4"))
- "ca15_issue1,ca15_cx_ialu")
+ (eq_attr "cortex_a15_neon_type" "neon_arith_acc"))
+ "ca15_issue1,ca15_cx_ialu_with_acc")
-(define_insn_reservation "cortex_a15_neon_int_5" 5
+(define_insn_reservation "cortex_a15_neon_aba_q" 8
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_int_5"))
- "ca15_issue1,ca15_cx_ialu")
+ (eq_attr "cortex_a15_neon_type" "neon_arith_acc_q"))
+ "ca15_issue2,ca15_cx_ialu_with_acc*2")
-(define_insn_reservation "cortex_a15_neon_vqneg_vqabs" 5
+(define_insn_reservation "cortex_a15_neon_arith_basic" 4
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_vqneg_vqabs"))
+ (eq_attr "cortex_a15_neon_type" "neon_arith_basic"))
"ca15_issue1,ca15_cx_ialu")
-(define_insn_reservation "cortex_a15_neon_vmov" 5
+(define_insn_reservation "cortex_a15_neon_arith_complex" 5
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_vmov"))
+ (eq_attr "cortex_a15_neon_type" "neon_arith_complex"))
"ca15_issue1,ca15_cx_ialu")
-(define_insn_reservation "cortex_a15_neon_vaba" 7
- (and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_vaba"))
- "ca15_issue1,ca15_cx_ialu_with_acc")
-
-(define_insn_reservation "cortex_a15_neon_vaba_qqq" 8
- (and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_vaba_qqq"))
- "ca15_issue2,ca15_cx_ialu_with_acc*2")
+;; Integer Multiply Instructions.
-(define_insn_reservation
- "cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long" 6
+(define_insn_reservation "cortex_a15_neon_multiply" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long"))
+ (eq_attr "cortex_a15_neon_type" "neon_multiply"))
"ca15_issue1,ca15_cx_imac")
-(define_insn_reservation "cortex_a15_neon_mul_qqq_8_16_32_ddd_32" 7
+(define_insn_reservation "cortex_a15_neon_multiply_q" 7
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type" "neon_mul_qqq_8_16_32_ddd_32"))
- "ca15_issue1,ca15_cx_imac*2")
+ (eq_attr "cortex_a15_neon_type" "neon_multiply_q"))
+ "ca15_issue2,ca15_cx_imac*2")
-(define_insn_reservation
- "cortex_a15_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar" 7
+(define_insn_reservation "cortex_a15_neon_mla" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar"))
- "ca15_issue1,ca15_cx_imac*2")
-
-(define_insn_reservation
- "cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long" 6
- (and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long"))
+ (eq_attr "cortex_a15_neon_type" "neon_mla"))
"ca15_issue1,ca15_cx_imac")
-(define_insn_reservation
- "cortex_a15_neon_mla_qqq_8_16" 7
+(define_insn_reservation "cortex_a15_neon_mla_q" 7
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mla_qqq_8_16"))
+ (eq_attr "cortex_a15_neon_type" "neon_mla_q"))
"ca15_issue1,ca15_cx_imac*2")
-(define_insn_reservation
- "cortex_a15_neon_mla_ddd_32_qqd_16_ddd_32_scalar_\
- qdd_64_32_long_scalar_qdd_64_32_long" 7
+(define_insn_reservation "cortex_a15_neon_sat_mla_long" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))
+ (eq_attr "cortex_a15_neon_type" "neon_sat_mla_long"))
"ca15_issue1,ca15_cx_imac")
-(define_insn_reservation
- "cortex_a15_neon_mla_qqq_32_qqd_32_scalar" 7
- (and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mla_qqq_32_qqd_32_scalar"))
- "ca15_issue1,ca15_cx_imac*2")
+;; Integer Shift Instructions.
(define_insn_reservation
- "cortex_a15_neon_mul_ddd_16_scalar_32_16_long_scalar" 6
+ "cortex_a15_neon_shift_acc" 7
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mul_ddd_16_scalar_32_16_long_scalar"))
- "ca15_issue1,ca15_cx_imac")
-
-(define_insn_reservation
- "cortex_a15_neon_mul_qqd_32_scalar" 7
- (and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mul_qqd_32_scalar"))
- "ca15_issue1,ca15_cx_imac*2")
+ (eq_attr "cortex_a15_neon_type" "neon_shift_acc"))
+ "ca15_issue1,ca15_cx_ishf_with_acc")
(define_insn_reservation
- "cortex_a15_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar" 6
+ "cortex_a15_neon_shift_imm_basic" 4
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar"))
- "ca15_issue1,ca15_cx_imac")
+ (eq_attr "cortex_a15_neon_type" "neon_shift_imm_basic"))
+ "ca15_issue1,ca15_cx_ik+ca15_cx_ishf")
(define_insn_reservation
- "cortex_a15_neon_shift_1" 5
+ "cortex_a15_neon_shift_imm_complex" 5
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_shift_1"))
+ (eq_attr "cortex_a15_neon_type" "neon_shift_imm_complex"))
"ca15_issue1,ca15_cx_ik+ca15_cx_ishf")
(define_insn_reservation
- "cortex_a15_neon_shift_2" 5
+ "cortex_a15_neon_shift_reg_basic" 4
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_shift_2"))
+ (eq_attr "cortex_a15_neon_type" "neon_shift_reg_basic"))
"ca15_issue1,ca15_cx_ik+ca15_cx_ishf")
(define_insn_reservation
- "cortex_a15_neon_shift_3" 6
+ "cortex_a15_neon_shift_reg_basic_q" 5
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_shift_3"))
- "ca15_issue2,(ca15_cx_ik+ca15_cx_ishf)*2")
+ (eq_attr "cortex_a15_neon_type" "neon_shift_reg_basic_q"))
+ "ca15_issue2,(ca15_cx_ik+ca15_cx_ishf*2)")
(define_insn_reservation
- "cortex_a15_neon_vshl_ddd" 5
+ "cortex_a15_neon_shift_reg_complex" 5
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vshl_ddd"))
- "ca15_issue1,ca15_cx_ik+ca15_cx_ishf")
+ (eq_attr "cortex_a15_neon_type" "neon_shift_reg_complex"))
+ "ca15_issue2,ca15_cx_ik+ca15_cx_ishf")
(define_insn_reservation
- "cortex_a15_neon_vqshl_vrshl_vqrshl_qqq" 6
+ "cortex_a15_neon_shift_reg_complex_q" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vqshl_vrshl_vqrshl_qqq"))
+ (eq_attr "cortex_a15_neon_type" "neon_shift_reg_complex_q"))
"ca15_issue2,(ca15_cx_ik+ca15_cx_ishf)*2")
+;; Floating Point Instructions.
+
(define_insn_reservation
- "cortex_a15_neon_vsra_vrsra" 7
+ "cortex_a15_neon_fp_negabs" 4
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vsra_vrsra"))
- "ca15_issue1,ca15_cx_ishf_with_acc")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_negabs"))
+ "ca15_issue1,ca15_cx_falu")
(define_insn_reservation
- "cortex_a15_neon_fp_vadd_ddd_vabs_dd" 6
+ "cortex_a15_neon_fp_arith" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vadd_ddd_vabs_dd"))
+ (eq_attr "cortex_a15_neon_type" "neon_fp_arith"))
"ca15_issue1,ca15_cx_falu")
(define_insn_reservation
- "cortex_a15_neon_fp_vadd_qqq_vabs_qq" 7
+ "cortex_a15_neon_fp_arith_q" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vadd_qqq_vabs_qq"))
+ (eq_attr "cortex_a15_neon_type" "neon_fp_arith_q"))
"ca15_issue2,ca15_cx_falu_2")
(define_insn_reservation
- "cortex_a15_neon_fp_vmul_ddd" 5
+ "cortex_a15_neon_fp_cvt_int" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vmul_ddd"))
- "ca15_issue1,ca15_cx_fmul")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_cvt_int"))
+ "ca15_issue1,ca15_cx_falu+ca15_cx_ishf")
(define_insn_reservation
- "cortex_a15_neon_fp_vmul_qqd" 6
+ "cortex_a15_neon_fp_cvt_int_q" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vmul_qqd"))
- "ca15_issue2,ca15_cx_fmul_2")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_cvt_int_q"))
+ "ca15_issue2,(ca15_cx_falu+ca15_cx_ishf)*2")
(define_insn_reservation
- "cortex_a15_neon_fp_vmla_ddd" 9
+ "cortex_a15_neon_fp_cvt16" 10
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vmla_ddd"))
- "ca15_issue1,ca15_cx_fmac")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_cvt16"))
+ "ca15_issue3,(ca15_cx_falu+ca15_cx_ishf)*2+ca15_cx_falu")
(define_insn_reservation
- "cortex_a15_neon_fp_vmla_qqq" 11
+ "cortex_a15_neon_fp_mul" 5
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vmla_qqq"))
- "ca15_issue2,ca15_cx_fmac_2")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_mul"))
+ "ca15_issue1,ca15_cx_fmul")
(define_insn_reservation
- "cortex_a15_neon_fp_vmla_ddd_scalar" 9
+ "cortex_a15_neon_fp_mul_q" 5
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vmla_ddd_scalar"))
- "ca15_issue1,ca15_cx_fmac")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_mul_q"))
+ "ca15_issue2,ca15_cx_fmul_2")
(define_insn_reservation
- "cortex_a15_neon_fp_vmla_qqq_scalar" 11
+ "cortex_a15_neon_fp_mla" 9
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vmla_qqq_scalar"))
- "ca15_issue2,ca15_cx_fmac_2")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_mla"))
+ "ca15_issue1,ca15_cx_fmul")
(define_insn_reservation
- "cortex_a15_neon_fp_vrecps_vrsqrts_ddd" 9
+ "cortex_a15_neon_fp_mla_q" 9
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vrecps_vrsqrts_ddd"))
- "ca15_issue1,ca15_cx_fmac")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_mla_q"))
+ "ca15_issue2,ca15_cx_fmul_2")
(define_insn_reservation
- "cortex_a15_neon_fp_vrecps_vrsqrts_qqq" 11
+ "cortex_a15_neon_fp_recps_rsqrte" 9
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_fp_vrecps_vrsqrts_qqq"))
- "ca15_issue2,ca15_cx_fmac_2")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_recpe_rsqrte"))
+ "ca15_issue1,ca15_cx_fmac")
(define_insn_reservation
- "cortex_a15_neon_bp_simple" 4
+ "cortex_a15_neon_fp_recps_rsqrte_q" 9
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_bp_simple"))
- "ca15_issue3,ca15_ls+ca15_cx_perm_2,ca15_cx_perm")
+ (eq_attr "cortex_a15_neon_type" "neon_fp_recpe_rsqrte_q"))
+ "ca15_issue2,ca15_cx_fmac_2")
+
+;; Miscellaneous Instructions.
(define_insn_reservation
- "cortex_a15_neon_bp_2cycle" 4
+ "cortex_a15_neon_bitops" 4
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_bp_2cycle"))
+ (eq_attr "cortex_a15_neon_type" "neon_bitops"))
"ca15_issue1,ca15_cx_perm")
(define_insn_reservation
- "cortex_a15_neon_bp_3cycle" 7
+ "cortex_a15_neon_bitops_q" 4
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_bp_3cycle"))
- "ca15_issue3,ca15_cx_ialu+ca15_cx_perm_2,ca15_cx_perm")
+ (eq_attr "cortex_a15_neon_type" "neon_bitops_q"))
+ "ca15_issue2,ca15_cx_perm_2")
(define_insn_reservation
- "cortex_a15_neon_vld1_1_2_regs" 7
+ "cortex_a15_neon_from_gp" 9
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vld1_1_2_regs"))
- "ca15_issue2,ca15_ls,ca15_ldr")
+ (eq_attr "cortex_a15_neon_type" "neon_from_gp"))
+ "ca15_issue2,ca15_ls1+ca15_ls2+ca15_cx_perm")
(define_insn_reservation
- "cortex_a15_neon_vld1_3_4_regs" 8
+ "cortex_a15_neon_from_gp_q" 9
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vld1_3_4_regs"))
- "ca15_issue3,ca15_ls1+ca15_ls2,ca15_ldr,ca15_ldr")
+ (eq_attr "cortex_a15_neon_type" "neon_from_gp_q"))
+ "ca15_issue2,ca15_ls1+ca15_ls2+ca15_cx_perm_2")
(define_insn_reservation
- "cortex_a15_neon_vld2_2_regs_vld1_vld2_all_lanes" 9
+ "cortex_a15_neon_tbl3_tbl4" 7
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vld2_2_regs_vld1_vld2_all_lanes"))
- "ca15_issue3,ca15_ls,ca15_ldr")
+ (eq_attr "cortex_a15_neon_type" "neon_tbl3_tbl4"))
+ "ca15_issue2,ca15_cx_perm_2")
(define_insn_reservation
- "cortex_a15_neon_vld2_4_regs" 12
+ "cortex_a15_neon_zip_q" 7
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vld2_4_regs"))
- "ca15_issue3,ca15_issue3+ca15_ls1+ca15_ls2,ca15_ldr*2")
+ (eq_attr "cortex_a15_neon_type" "neon_zip_q"))
+ "ca15_issue3,ca15_cx_perm*3")
(define_insn_reservation
- "cortex_a15_neon_vld3_vld4" 12
+ "cortex_a15_neon_to_gp" 7
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vld3_vld4"))
- "ca15_issue3,ca15_issue3+ca15_ls1+ca15_ls2,ca15_ldr*2")
+ (eq_attr "cortex_a15_neon_type" "neon_to_gp"))
+ "ca15_issue2,ca15_ls1+ca15_ls2")
-(define_insn_reservation
- "cortex_a15_neon_vst1_1_2_regs_vst2_2_regs" 0
- (and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vst1_1_2_regs_vst2_2_regs"))
- "ca15_issue3,ca15_issue3+ca15_cx_perm+ca15_ls1+ca15_ls2,ca15_str*2")
+;; Load Instructions.
(define_insn_reservation
- "cortex_a15_neon_vst1_3_4_regs" 0
+ "cortex_a15_neon_load_a" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vst1_3_4_regs"))
- "ca15_issue3,ca15_issue3+ca15_ls1+ca15_ls2,ca15_str*3")
+ (eq_attr "cortex_a15_neon_type" "neon_load_a"))
+ "ca15_issue1,ca15_ls,ca15_ldr")
(define_insn_reservation
- "cortex_a15_neon_vst2_4_regs_vst3_vst4" 0
+ "cortex_a15_neon_load_b" 7
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vst2_4_regs_vst3_vst4"))
- "ca15_issue3,ca15_issue3+ca15_cx_perm_2+ca15_ls1+ca15_ls2,\
- ca15_issue3+ca15_str,ca15_str*3")
+ (eq_attr "cortex_a15_neon_type" "neon_load_b"))
+ "ca15_issue2,ca15_ls1+ca15_ls2,ca15_ldr,ca15_ldr")
(define_insn_reservation
- "cortex_a15_neon_vst3_vst4" 0
+ "cortex_a15_neon_load_c" 9
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vst3_vst4"))
- "ca15_issue3,ca15_issue3+ca15_cx_perm_2+ca15_ls1+ca15_ls2,ca15_str*4")
+ (eq_attr "cortex_a15_neon_type" "neon_load_c"))
+ "ca15_issue2,ca15_ls1+ca15_ls2,ca15_ldr,ca15_ldr")
(define_insn_reservation
- "cortex_a15_neon_vld1_vld2_lane" 9
+ "cortex_a15_neon_load_d" 11
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vld1_vld2_lane"))
- "ca15_issue3,ca15_ls,ca15_ldr")
+ (eq_attr "cortex_a15_neon_type" "neon_load_d"))
+ "ca15_issue1,ca15_issue3+ca15_ls1+ca15_ls2,ca15_ldr*2")
(define_insn_reservation
- "cortex_a15_neon_vld3_vld4_lane" 10
+ "cortex_a15_neon_load_e" 9
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vld3_vld4_lane"))
- "ca15_issue3,ca15_issue3+ca15_ls,ca15_issue3+ca15_ldr")
+ (eq_attr "cortex_a15_neon_type" "neon_load_e"))
+ "ca15_issue3+ca15_ls1+ca15_ls2,ca15_ldr*2")
(define_insn_reservation
- "cortex_a15_neon_vst1_vst2_lane" 0
+ "cortex_a15_neon_load_f" 11
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vst1_vst2_lane"))
- "ca15_issue3,ca15_cx_perm+ca15_ls,ca15_str")
+ (eq_attr "cortex_a15_neon_type" "neon_load_f"))
+ "ca15_issue3,ca15_issue3+ca15_ls1+ca15_ls2,ca15_ldr*2")
+
+;; Store Instructions.
(define_insn_reservation
- "cortex_a15_neon_vst3_vst4_lane" 0
+ "cortex_a15_neon_store_a" 0
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vst3_vst4_lane"))
- "ca15_issue3,ca15_issue3+ca15_cx_perm+ca15_ls1+ca15_ls2,ca15_str*2")
+ (eq_attr "cortex_a15_neon_type" "neon_store_a"))
+ "ca15_issue1,ca15_ls1+ca15_ls2,ca15_str")
(define_insn_reservation
- "cortex_a15_neon_vld3_vld4_all_lanes" 11
+ "cortex_a15_neon_store_b" 0
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_vld3_vld4_all_lanes"))
- "ca15_issue3,ca15_issue3+ca15_ls,ca15_ldr")
+ (eq_attr "cortex_a15_neon_type" "neon_store_b"))
+ "ca15_issue2,ca15_ls1+ca15_ls2,ca15_str*2")
(define_insn_reservation
- "cortex_a15_neon_ldm_2" 20
+ "cortex_a15_neon_store_c" 0
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_ldm_2"))
- "ca15_issue3*6")
+ (eq_attr "cortex_a15_neon_type" "neon_store_c"))
+ "ca15_issue3,ca15_ls1+ca15_ls2,ca15_str*3")
(define_insn_reservation
- "cortex_a15_neon_stm_2" 0
+ "cortex_a15_neon_store_d" 0
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_stm_2"))
- "ca15_issue3*6")
+ (eq_attr "cortex_a15_neon_type" "neon_store_d"))
+ "ca15_issue3,ca15_issue1,ca15_ls1+ca15_ls2,ca15_str*4")
(define_insn_reservation
- "cortex_a15_neon_mcr" 6
+ "cortex_a15_neon_store_e" 0
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mcr"))
- "ca15_issue2,ca15_ls,ca15_cx_perm")
+ (eq_attr "cortex_a15_neon_type" "neon_store_e"))
+ "ca15_issue2,ca15_ls1+ca15_ls2,ca15_str+ca15_cx_perm")
(define_insn_reservation
- "cortex_a15_neon_mcr_2_mcrr" 6
+ "cortex_a15_neon_store_f" 0
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mcr_2_mcrr"))
- "ca15_issue2,ca15_ls1+ca15_ls2")
+ (eq_attr "cortex_a15_neon_type" "neon_store_f"))
+ "ca15_issue3,ca15_ls1+ca15_ls2,ca15_str*2+ca15_cx_perm")
(define_insn_reservation
- "cortex_a15_neon_mrc" 5
+ "cortex_a15_neon_store_g" 0
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mrc"))
- "ca15_issue1,ca15_ls")
+ (eq_attr "cortex_a15_neon_type" "neon_store_g"))
+ "ca15_issue3,ca15_issue3+ca15_cx_perm+ca15_ls1+ca15_ls2,ca15_str*2")
(define_insn_reservation
- "cortex_a15_neon_mrrc" 6
+ "cortex_a15_neon_store_h" 0
(and (eq_attr "tune" "cortexa15")
- (eq_attr "neon_type"
- "neon_mrrc"))
- "ca15_issue2,ca15_ls1+ca15_ls2")
+ (eq_attr "cortex_a15_neon_type" "neon_store_h"))
+ "ca15_issue3,ca15_issue2+ca15_cx_perm+ca15_ls1+ca15_ls2,ca15_str*2")
+
+;; VFP Operations.
(define_insn_reservation "cortex_a15_vfp_const" 4
(and (eq_attr "tune" "cortexa15")
@@ -515,7 +632,7 @@
(define_insn_reservation "cortex_a15_vfp_cvt" 6
(and (eq_attr "tune" "cortexa15")
- (eq_attr "type" "f_cvt"))
+ (eq_attr "type" "f_cvt,f_cvtf2i,f_cvti2f"))
"ca15_issue1,ca15_cx_vfp")
(define_insn_reservation "cortex_a15_vfp_cmpd" 8
@@ -535,9 +652,14 @@
(define_insn_reservation "cortex_a15_vfp_cpys" 4
(and (eq_attr "tune" "cortexa15")
- (eq_attr "type" "fcpys"))
+ (eq_attr "type" "fmov"))
"ca15_issue1,ca15_cx_perm")
+(define_insn_reservation "cortex_a15_vfp_to_from_gp" 5
+ (and (eq_attr "tune" "cortexa15")
+ (eq_attr "type" "f_mcr, f_mcrr, f_mrc, f_mrrc"))
+ "ca15_issue1,ca15_ls1+ca15_ls2")
+
(define_insn_reservation "cortex_a15_vfp_ariths" 7
(and (eq_attr "tune" "cortexa15")
(eq_attr "type" "ffariths"))
@@ -545,671 +667,11 @@
(define_insn_reservation "cortex_a15_vfp_divs" 10
(and (eq_attr "tune" "cortexa15")
- (eq_attr "type" "fdivs"))
+ (eq_attr "type" "fdivs, fsqrts"))
"ca15_issue1,ca15_cx_ik")
(define_insn_reservation "cortex_a15_vfp_divd" 18
(and (eq_attr "tune" "cortexa15")
- (eq_attr "type" "fdivd"))
+ (eq_attr "type" "fdivd, fsqrtd"))
"ca15_issue1,ca15_cx_ik")
-;; Define bypasses.
-(define_bypass 5 "cortex_a15_neon_mcr_2_mcrr"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a15_neon_mcr"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 10 "cortex_a15_neon_vld3_vld4_all_lanes"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 9 "cortex_a15_neon_vld3_vld4_lane"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 8 "cortex_a15_neon_vld1_vld2_lane"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 11 "cortex_a15_neon_vld3_vld4"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 11 "cortex_a15_neon_vld2_4_regs"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 8 "cortex_a15_neon_vld2_2_regs_vld1_vld2_all_lanes"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 7 "cortex_a15_neon_vld1_3_4_regs"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6 "cortex_a15_neon_vld1_1_2_regs"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6 "cortex_a15_neon_bp_3cycle"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 3 "cortex_a15_neon_bp_2cycle"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 3 "cortex_a15_neon_bp_simple"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 10 "cortex_a15_neon_fp_vrecps_vrsqrts_qqq"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 8 "cortex_a15_neon_fp_vrecps_vrsqrts_ddd"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 10 "cortex_a15_neon_fp_vmla_qqq_scalar"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 8 "cortex_a15_neon_fp_vmla_ddd_scalar"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 10 "cortex_a15_neon_fp_vmla_qqq"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 8 "cortex_a15_neon_fp_vmla_ddd"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a15_neon_fp_vmul_qqd"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_fp_vmul_ddd"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6 "cortex_a15_neon_fp_vadd_qqq_vabs_qq"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a15_neon_fp_vadd_ddd_vabs_dd"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6 "cortex_a15_neon_vsra_vrsra"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a15_neon_vqshl_vrshl_vqrshl_qqq"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_vshl_ddd"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a15_neon_shift_3"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_shift_2"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_shift_1"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a15_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6 "cortex_a15_neon_mul_qqd_32_scalar"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a15_neon_mul_ddd_16_scalar_32_16_long_scalar"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6 "cortex_a15_neon_mla_qqq_32_qqd_32_scalar"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6 "cortex_a15_neon_mla_qqq_8_16"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6
- "cortex_a15_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6 "cortex_a15_neon_mul_qqq_8_16_32_ddd_32"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 7 "cortex_a15_neon_vaba_qqq"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 6 "cortex_a15_neon_vaba"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_vmov"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_vqneg_vqabs"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_int_5"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_int_4"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_int_3"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_int_2"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 4 "cortex_a15_neon_int_1"
- "cortex_a15_neon_int_1,\
- cortex_a15_neon_int_4,\
- cortex_a15_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a15_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a15_neon_mla_qqq_8_16,\
- cortex_a15_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a15_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a15_neon_fp_vmla_ddd,\
- cortex_a15_neon_fp_vmla_qqq,\
- cortex_a15_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a15_neon_fp_vrecps_vrsqrts_qqq")
-
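The bypasses deleted above all repeat the same consumer list; in GCC scheduling descriptions, `define_bypass` overrides the producer's default latency (set by its `define_insn_reservation`) for the named consumers. A minimal sketch of the construct, with hypothetical `example_*` names that are not part of this patch:

```lisp
;; Illustrative sketch only -- "example_*" names are hypothetical, not
;; part of this patch.  A define_insn_reservation gives the default
;; latency of a producer; define_bypass overrides it for the listed
;; consumers (a comma-separated string, as in the deleted hunks above).
(define_insn_reservation "example_neon_mul" 5
  (and (eq_attr "tune" "examplecpu")
       (eq_attr "type" "fmuls"))
  "example_neon_pipe")

;; Consumers in this list read the result after 3 cycles, not 5.
(define_bypass 3 "example_neon_mul"
  "example_neon_mla,\
   example_neon_add")
```

The deletions above drop these per-consumer bypasses along with the rest of the old `neon_type`-based Cortex-A15 Neon model they belonged to.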
diff --git a/gcc/config/arm/cortex-a15.md b/gcc/config/arm/cortex-a15.md
index 4ad87121d6d..5a31a097918 100644
--- a/gcc/config/arm/cortex-a15.md
+++ b/gcc/config/arm/cortex-a15.md
@@ -61,25 +61,32 @@
;; Simple ALU without shift
(define_insn_reservation "cortex_a15_alu" 2
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
- mov_imm,mov_reg,\
- mvn_imm,mvn_reg")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
+ mov_imm,mov_reg,\
+ mvn_imm,mvn_reg,\
+ mrs,multiple,no_insn"))
"ca15_issue1,(ca15_sx1,ca15_sx1_alu)|(ca15_sx2,ca15_sx2_alu)")
;; ALU ops with immediate shift
(define_insn_reservation "cortex_a15_alu_shift" 3
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "extend,arlo_shift,,mov_shift,mvn_shift")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "extend,\
+ alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ mov_shift,mvn_shift"))
"ca15_issue1,(ca15_sx1,ca15_sx1+ca15_sx1_shf,ca15_sx1_alu)\
|(ca15_sx2,ca15_sx2+ca15_sx2_shf,ca15_sx2_alu)")
;; ALU ops with register controlled shift
(define_insn_reservation "cortex_a15_alu_shift_reg" 3
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "arlo_shift_reg,mov_shift_reg,mvn_shift_reg")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
+ mov_shift_reg,mvn_shift_reg"))
"(ca15_issue2,ca15_sx1+ca15_sx2,ca15_sx1_shf,ca15_sx2_alu)\
|(ca15_issue1,(ca15_issue1+ca15_sx2,ca15_sx1+ca15_sx2_shf)\
|(ca15_issue1+ca15_sx1,ca15_sx1+ca15_sx1_shf),ca15_sx1_alu)")
@@ -89,15 +96,13 @@
;; 32-bit multiplies
(define_insn_reservation "cortex_a15_mult32" 3
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "mul32" "yes")
- (eq_attr "neon_type" "none")))
+ (eq_attr "mul32" "yes"))
"ca15_issue1,ca15_mx")
;; 64-bit multiplies
(define_insn_reservation "cortex_a15_mult64" 4
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "mul64" "yes")
- (eq_attr "neon_type" "none")))
+ (eq_attr "mul64" "yes"))
"ca15_issue1,ca15_mx*2")
;; Integer divide
@@ -114,8 +119,7 @@
;; Block all issue pipes for a cycle
(define_insn_reservation "cortex_a15_block" 1
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "block")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "block"))
"ca15_issue3")
;; Branch execution Unit
@@ -124,8 +128,7 @@
;; No latency as there is no result
(define_insn_reservation "cortex_a15_branch" 0
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "branch")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "branch"))
"ca15_issue1,ca15_bx")
;; Load-store execution Unit
@@ -133,40 +136,35 @@
;; Loads of up to two words.
(define_insn_reservation "cortex_a15_load1" 4
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "load_byte,load1,load2")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "load_byte,load1,load2"))
"ca15_issue1,ca15_ls,ca15_ldr,nothing")
;; Loads of three or four words.
(define_insn_reservation "cortex_a15_load3" 5
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "load3,load4")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "load3,load4"))
"ca15_issue2,ca15_ls1+ca15_ls2,ca15_ldr,ca15_ldr,nothing")
;; Stores of up to two words.
(define_insn_reservation "cortex_a15_store1" 0
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "store1,store2")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "store1,store2"))
"ca15_issue1,ca15_ls,ca15_str")
;; Stores of three or four words.
(define_insn_reservation "cortex_a15_store3" 0
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "store3,store4")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "store3,store4"))
"ca15_issue2,ca15_ls1+ca15_ls2,ca15_str,ca15_str")
;; We include Neon.md here to ensure that the branch can block the Neon units.
-(include "cortex-a15-neon.md")
+(include "../arm/cortex-a15-neon.md")
;; We lie with calls. They take up all issue slots, and form a block in the
;; pipeline. The result however is available the next cycle.
(define_insn_reservation "cortex_a15_call" 1
(and (eq_attr "tune" "cortexa15")
- (and (eq_attr "type" "call")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "call"))
"ca15_issue3,\
ca15_sx1+ca15_sx2+ca15_bx+ca15_mx+ca15_cx_ij+ca15_cx_ik+ca15_ls1+ca15_ls2+\
ca15_cx_imac1+ca15_cx_ialu1+ca15_cx_ialu2+ca15_cx_ishf+\
diff --git a/gcc/config/arm/cortex-a5.md b/gcc/config/arm/cortex-a5.md
index 1400c47d95a..22e0a08f38e 100644
--- a/gcc/config/arm/cortex-a5.md
+++ b/gcc/config/arm/cortex-a5.md
@@ -58,13 +58,22 @@
(define_insn_reservation "cortex_a5_alu" 2
(and (eq_attr "tune" "cortexa5")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
- mov_imm,mov_reg,mvn_imm,mvn_reg"))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
+ mov_imm,mov_reg,mvn_imm,mvn_reg,\
+ mrs,multiple,no_insn"))
"cortex_a5_ex1")
(define_insn_reservation "cortex_a5_alu_shift" 2
(and (eq_attr "tune" "cortexa5")
- (eq_attr "type" "extend,arlo_shift,arlo_shift_reg,\
+ (eq_attr "type" "extend,\
+ alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
mov_shift,mov_shift_reg,\
mvn_shift,mvn_shift_reg"))
"cortex_a5_ex1")
@@ -159,7 +168,8 @@
(define_insn_reservation "cortex_a5_fpalu" 4
(and (eq_attr "tune" "cortexa5")
- (eq_attr "type" "ffariths, fadds, ffarithd, faddd, fcpys, fmuls, f_cvt,\
+ (eq_attr "type" "ffariths, fadds, ffarithd, faddd, fmov, fmuls,\
+ f_cvt,f_cvtf2i,f_cvti2f,\
fcmps, fcmpd"))
"cortex_a5_ex1+cortex_a5_fpadd_pipe")
@@ -223,14 +233,14 @@
(define_insn_reservation "cortex_a5_fdivs" 14
(and (eq_attr "tune" "cortexa5")
- (eq_attr "type" "fdivs"))
+ (eq_attr "type" "fdivs, fsqrts"))
"cortex_a5_ex1, cortex_a5_fp_div_sqrt * 13")
;; ??? Similarly for fdivd.
(define_insn_reservation "cortex_a5_fdivd" 29
(and (eq_attr "tune" "cortexa5")
- (eq_attr "type" "fdivd"))
+ (eq_attr "type" "fdivd, fsqrtd"))
"cortex_a5_ex1, cortex_a5_fp_div_sqrt * 28")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -243,12 +253,12 @@
(define_insn_reservation "cortex_a5_r2f" 4
(and (eq_attr "tune" "cortexa5")
- (eq_attr "type" "r_2_f"))
+ (eq_attr "type" "f_mcr,f_mcrr"))
"cortex_a5_ex1")
(define_insn_reservation "cortex_a5_f2r" 2
(and (eq_attr "tune" "cortexa5")
- (eq_attr "type" "f_2_r"))
+ (eq_attr "type" "f_mrc,f_mrrc"))
"cortex_a5_ex1")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
diff --git a/gcc/config/arm/cortex-a53.md b/gcc/config/arm/cortex-a53.md
index 2f9107994c9..48d0d03853f 100644
--- a/gcc/config/arm/cortex-a53.md
+++ b/gcc/config/arm/cortex-a53.md
@@ -67,14 +67,22 @@
(define_insn_reservation "cortex_a53_alu" 2
(and (eq_attr "tune" "cortexa53")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
- mov_imm,mov_reg,mvn_imm,mvn_reg"))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,csel,rev,\
+ shift_imm,shift_reg,\
+ mov_imm,mov_reg,mvn_imm,mvn_reg,\
+ mrs,multiple,no_insn"))
"cortex_a53_slot_any")
(define_insn_reservation "cortex_a53_alu_shift" 2
(and (eq_attr "tune" "cortexa53")
- (eq_attr "type" "arlo_shift,arlo_shift_reg,\
- mov_shift,mov_shift_reg,\
+ (eq_attr "type" "alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
+ extend,mov_shift,mov_shift_reg,\
mvn_shift,mvn_shift_reg"))
"cortex_a53_slot_any")
@@ -130,12 +138,12 @@
(define_insn_reservation "cortex_a53_load1" 3
(and (eq_attr "tune" "cortexa53")
- (eq_attr "type" "load_byte,load1"))
+ (eq_attr "type" "load_byte,load1,load_acq"))
"cortex_a53_slot_any+cortex_a53_ls")
(define_insn_reservation "cortex_a53_store1" 2
(and (eq_attr "tune" "cortexa53")
- (eq_attr "type" "store1"))
+ (eq_attr "type" "store1,store_rel"))
"cortex_a53_slot_any+cortex_a53_ls+cortex_a53_store")
(define_insn_reservation "cortex_a53_load2" 3
@@ -201,8 +209,9 @@
(define_insn_reservation "cortex_a53_fpalu" 4
(and (eq_attr "tune" "cortexa53")
- (eq_attr "type" "ffariths, fadds, ffarithd, faddd, fcpys, fmuls, f_cvt,\
- fcmps, fcmpd"))
+ (eq_attr "type" "ffariths, fadds, ffarithd, faddd, fmov, fmuls,\
+ f_cvt,f_cvtf2i,f_cvti2f,\
+ fcmps, fcmpd, fcsel"))
"cortex_a53_slot0+cortex_a53_fpadd_pipe")
(define_insn_reservation "cortex_a53_fconst" 2
@@ -230,12 +239,12 @@
(define_insn_reservation "cortex_a53_fdivs" 14
(and (eq_attr "tune" "cortexa53")
- (eq_attr "type" "fdivs"))
+ (eq_attr "type" "fdivs, fsqrts"))
"cortex_a53_slot0, cortex_a53_fp_div_sqrt * 13")
(define_insn_reservation "cortex_a53_fdivd" 29
(and (eq_attr "tune" "cortexa53")
- (eq_attr "type" "fdivd"))
+ (eq_attr "type" "fdivd, fsqrtd"))
"cortex_a53_slot0, cortex_a53_fp_div_sqrt * 28")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -244,12 +253,12 @@
(define_insn_reservation "cortex_a53_r2f" 4
(and (eq_attr "tune" "cortexa53")
- (eq_attr "type" "r_2_f"))
+ (eq_attr "type" "f_mcr,f_mcrr"))
"cortex_a53_slot0")
(define_insn_reservation "cortex_a53_f2r" 2
(and (eq_attr "tune" "cortexa53")
- (eq_attr "type" "f_2_r"))
+ (eq_attr "type" "f_mrc,f_mrrc"))
"cortex_a53_slot0")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
diff --git a/gcc/config/arm/cortex-a7.md b/gcc/config/arm/cortex-a7.md
index e14413d5083..7db6c5b24fb 100644
--- a/gcc/config/arm/cortex-a7.md
+++ b/gcc/config/arm/cortex-a7.md
@@ -20,6 +20,45 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
+(define_attr "cortex_a7_neon_type"
+ "neon_mul, neon_mla, neon_other"
+ (cond [
+ (eq_attr "type" "neon_mul_b, neon_mul_b_q,\
+ neon_mul_h, neon_mul_h_q,\
+ neon_mul_s, neon_mul_s_q,\
+ neon_mul_b_long, neon_mul_h_long,\
+ neon_mul_s_long, neon_mul_h_scalar,\
+ neon_mul_h_scalar_q, neon_mul_s_scalar,\
+ neon_mul_s_scalar_q, neon_mul_h_scalar_long,\
+ neon_mul_s_scalar_long,\
+ neon_sat_mul_b, neon_sat_mul_b_q,\
+ neon_sat_mul_h, neon_sat_mul_h_q,\
+ neon_sat_mul_s, neon_sat_mul_s_q,\
+ neon_sat_mul_b_long, neon_sat_mul_h_long,\
+ neon_sat_mul_s_long,\
+ neon_sat_mul_h_scalar, neon_sat_mul_h_scalar_q,\
+ neon_sat_mul_s_scalar, neon_sat_mul_s_scalar_q,\
+ neon_sat_mul_h_scalar_long,\
+ neon_sat_mul_s_scalar_long,\
+ neon_fp_mul_s, neon_fp_mul_s_q,\
+ neon_fp_mul_s_scalar, neon_fp_mul_s_scalar_q")
+ (const_string "neon_mul")
+ (eq_attr "type" "neon_mla_b, neon_mla_b_q, neon_mla_h,\
+ neon_mla_h_q, neon_mla_s, neon_mla_s_q,\
+ neon_mla_b_long, neon_mla_h_long,\
+ neon_mla_s_long,\
+ neon_mla_h_scalar, neon_mla_h_scalar_q,\
+ neon_mla_s_scalar, neon_mla_s_scalar_q,\
+ neon_mla_h_scalar_long, neon_mla_s_scalar_long,\
+ neon_sat_mla_b_long, neon_sat_mla_h_long,\
+ neon_sat_mla_s_long,\
+ neon_sat_mla_h_scalar_long,\
+ neon_sat_mla_s_scalar_long,\
+ neon_fp_mla_s, neon_fp_mla_s_q,\
+ neon_fp_mla_s_scalar, neon_fp_mla_s_scalar_q")
+ (const_string "neon_mla")]
+ (const_string "neon_other")))
+
(define_automaton "cortex_a7")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -67,8 +106,7 @@
(define_insn_reservation "cortex_a7_branch" 0
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "branch")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "branch"))
"(cortex_a7_ex2|cortex_a7_ex1)+cortex_a7_branch")
;; Call cannot dual-issue as an older instruction. It can dual-issue
@@ -77,8 +115,7 @@
;; cycle.
(define_insn_reservation "cortex_a7_call" 1
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "call")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "call"))
"(cortex_a7_ex2|cortex_a7_both)+cortex_a7_branch")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -88,27 +125,31 @@
;; ALU instruction with an immediate operand can dual-issue.
(define_insn_reservation "cortex_a7_alu_imm" 2
(and (eq_attr "tune" "cortexa7")
- (and (ior (eq_attr "type" "arlo_imm,mov_imm,mvn_imm")
- (ior (eq_attr "type" "extend")
- (and (eq_attr "type" "mov_reg,mov_shift,mov_shift_reg")
- (not (eq_attr "length" "8")))))
- (eq_attr "neon_type" "none")))
+ (ior (eq_attr "type" "adr,alu_imm,alus_imm,logic_imm,logics_imm,\
+ mov_imm,mvn_imm,extend")
+ (and (eq_attr "type" "mov_reg,mov_shift,mov_shift_reg")
+ (not (eq_attr "length" "8")))))
"cortex_a7_ex2|cortex_a7_ex1")
;; ALU instruction with register operands can dual-issue
;; with a younger immediate-based instruction.
(define_insn_reservation "cortex_a7_alu_reg" 2
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "arlo_reg,shift,shift_reg,mov_reg,mvn_reg")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ bfm,rev,\
+ shift_imm,shift_reg,mov_reg,mvn_reg"))
"cortex_a7_ex1")
(define_insn_reservation "cortex_a7_alu_shift" 2
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "arlo_shift,arlo_shift_reg,\
- mov_shift,mov_shift_reg,\
- mvn_shift,mvn_shift_reg")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
+ mov_shift,mov_shift_reg,\
+ mvn_shift,mvn_shift_reg,\
+ mrs,multiple,no_insn"))
"cortex_a7_ex1")
;; Forwarding path for unshifted operands.
@@ -129,9 +170,8 @@
(define_insn_reservation "cortex_a7_mul" 2
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "neon_type" "none")
- (ior (eq_attr "mul32" "yes")
- (eq_attr "mul64" "yes"))))
+ (ior (eq_attr "mul32" "yes")
+ (eq_attr "mul64" "yes")))
"cortex_a7_both")
;; Forward the result of a multiply operation to the accumulator
@@ -156,50 +196,42 @@
(define_insn_reservation "cortex_a7_load1" 2
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "load_byte,load1")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "load_byte,load1"))
"cortex_a7_ex1")
(define_insn_reservation "cortex_a7_store1" 0
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "store1")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "store1"))
"cortex_a7_ex1")
(define_insn_reservation "cortex_a7_load2" 2
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "load2")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "load2"))
"cortex_a7_both")
(define_insn_reservation "cortex_a7_store2" 0
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "store2")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "store2"))
"cortex_a7_both")
(define_insn_reservation "cortex_a7_load3" 3
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "load3")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "load3"))
"cortex_a7_both, cortex_a7_ex1")
(define_insn_reservation "cortex_a7_store3" 0
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "store4")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "store4"))
"cortex_a7_both, cortex_a7_ex1")
(define_insn_reservation "cortex_a7_load4" 3
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "load4")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "load4"))
"cortex_a7_both, cortex_a7_both")
(define_insn_reservation "cortex_a7_store4" 0
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "store3")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "store3"))
"cortex_a7_both, cortex_a7_both")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -211,9 +243,8 @@
(define_insn_reservation "cortex_a7_fpalu" 4
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "ffariths, fadds, ffarithd, faddd, fcpys,\
- f_cvt, fcmps, fcmpd")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "ffariths, fadds, ffarithd, faddd, fmov,\
+ f_cvt, f_cvtf2i, f_cvti2f, fcmps, fcmpd"))
"cortex_a7_ex1+cortex_a7_fpadd_pipe")
;; For fconsts and fconstd, 8-bit immediate data is passed directly from
@@ -221,8 +252,7 @@
(define_insn_reservation "cortex_a7_fconst" 3
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "fconsts,fconstd")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "fconsts,fconstd"))
"cortex_a7_ex1+cortex_a7_fpadd_pipe")
;; We should try not to attempt to issue a single-precision multiplication in
@@ -231,40 +261,22 @@
(define_insn_reservation "cortex_a7_fpmuls" 4
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "fmuls")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "fmuls"))
"cortex_a7_ex1+cortex_a7_fpmul_pipe")
(define_insn_reservation "cortex_a7_neon_mul" 4
(and (eq_attr "tune" "cortexa7")
- (eq_attr "neon_type"
- "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- neon_mul_qqq_8_16_32_ddd_32,\
- neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar,\
- neon_mul_ddd_16_scalar_32_16_long_scalar,\
- neon_mul_qqd_32_scalar,\
- neon_fp_vmul_ddd,\
- neon_fp_vmul_qqd"))
+ (eq_attr "cortex_a7_neon_type" "neon_mul"))
"(cortex_a7_both+cortex_a7_fpmul_pipe)*2")
(define_insn_reservation "cortex_a7_fpmacs" 8
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "fmacs,ffmas")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "fmacs,ffmas"))
"cortex_a7_ex1+cortex_a7_fpmul_pipe")
(define_insn_reservation "cortex_a7_neon_mla" 8
(and (eq_attr "tune" "cortexa7")
- (eq_attr "neon_type"
- "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- neon_mla_qqq_8_16,\
- neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long,\
- neon_mla_qqq_32_qqd_32_scalar,\
- neon_mla_ddd_16_scalar_qdd_32_16_long_scalar,\
- neon_fp_vmla_ddd,\
- neon_fp_vmla_qqq,\
- neon_fp_vmla_ddd_scalar,\
- neon_fp_vmla_qqq_scalar"))
+ (eq_attr "cortex_a7_neon_type" "neon_mla"))
"cortex_a7_both+cortex_a7_fpmul_pipe")
(define_bypass 4 "cortex_a7_fpmacs,cortex_a7_neon_mla"
@@ -276,20 +288,17 @@
(define_insn_reservation "cortex_a7_fpmuld" 7
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "fmuld")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "fmuld"))
"cortex_a7_ex1+cortex_a7_fpmul_pipe, cortex_a7_fpmul_pipe*3")
(define_insn_reservation "cortex_a7_fpmacd" 11
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "fmacd")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "fmacd"))
"cortex_a7_ex1+cortex_a7_fpmul_pipe, cortex_a7_fpmul_pipe*3")
(define_insn_reservation "cortex_a7_fpfmad" 8
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "ffmad")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "ffmad"))
"cortex_a7_ex1+cortex_a7_fpmul_pipe, cortex_a7_fpmul_pipe*4")
(define_bypass 7 "cortex_a7_fpmacd"
@@ -302,14 +311,12 @@
(define_insn_reservation "cortex_a7_fdivs" 16
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "fdivs")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "fdivs, fsqrts"))
"cortex_a7_ex1+cortex_a7_fp_div_sqrt, cortex_a7_fp_div_sqrt * 13")
(define_insn_reservation "cortex_a7_fdivd" 31
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "fdivd")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "fdivd, fsqrtd"))
"cortex_a7_ex1+cortex_a7_fp_div_sqrt, cortex_a7_fp_div_sqrt * 28")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -320,14 +327,12 @@
(define_insn_reservation "cortex_a7_r2f" 4
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "r_2_f")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "f_mcr,f_mcrr"))
"cortex_a7_both")
(define_insn_reservation "cortex_a7_f2r" 2
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "f_2_r")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "f_mrc,f_mrrc"))
"cortex_a7_ex1")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -339,8 +344,7 @@
(define_insn_reservation "cortex_a7_f_flags" 4
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "f_flag")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "f_flag"))
"cortex_a7_ex1")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
@@ -349,26 +353,22 @@
(define_insn_reservation "cortex_a7_f_loads" 4
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "f_loads")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "f_loads"))
"cortex_a7_ex1")
(define_insn_reservation "cortex_a7_f_loadd" 4
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "f_loadd")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "f_loadd"))
"cortex_a7_both")
(define_insn_reservation "cortex_a7_f_stores" 0
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "f_stores")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "f_stores"))
"cortex_a7_ex1")
(define_insn_reservation "cortex_a7_f_stored" 0
(and (eq_attr "tune" "cortexa7")
- (and (eq_attr "type" "f_stored")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "f_stored"))
"cortex_a7_both")
;; Load-to-use for floating-point values has a penalty of one cycle,
@@ -389,22 +389,6 @@
(define_insn_reservation "cortex_a7_neon" 4
(and (eq_attr "tune" "cortexa7")
- (eq_attr "neon_type"
- "!none,\
- neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- neon_mul_qqq_8_16_32_ddd_32,\
- neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar,\
- neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- neon_mla_qqq_8_16,\
- neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long,\
- neon_mla_qqq_32_qqd_32_scalar,\
- neon_mul_ddd_16_scalar_32_16_long_scalar,\
- neon_mul_qqd_32_scalar,\
- neon_mla_ddd_16_scalar_qdd_32_16_long_scalar,\
- neon_fp_vmul_ddd,\
- neon_fp_vmul_qqd,\
- neon_fp_vmla_ddd,\
- neon_fp_vmla_qqq,\
- neon_fp_vmla_ddd_scalar,\
- neon_fp_vmla_qqq_scalar"))
+ (and (eq_attr "is_neon_type" "yes")
+ (eq_attr "cortex_a7_neon_type" "neon_other")))
"cortex_a7_both*2")
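The cortex-a7.md hunks above all apply the same transformation: a two-attribute condition such as `(and (eq_attr "type" "store4") (eq_attr "neon_type" "none"))` collapses to a single test on `"type"`, because the separate `neon_type` attribute has been folded into the unified `type` attribute. A toy Python model (not GCC code; the attribute names are taken from the patch, everything else is illustrative) of why the two predicates select the same instructions:

```python
# Pre-patch condition: the core "type" must match AND the insn must not
# be a NEON instruction (neon_type == "none").
def old_match(insn: dict, types: set) -> bool:
    return insn["type"] in types and insn["neon_type"] == "none"

# Post-patch condition: the unified "type" attribute alone is enough,
# since NEON instructions now carry distinct neon_* values of "type"
# and can never collide with core values like "store4".
def new_match(insn: dict, types: set) -> bool:
    return insn["type"] in types

store = {"type": "store4", "neon_type": "none"}
neon = {"type": "neon_store1_1reg", "neon_type": "neon_vst1_1_2_regs"}

# Both predicates agree on core and NEON instructions alike.
assert old_match(store, {"store4"}) == new_match(store, {"store4"}) == True
assert old_match(neon, {"store4"}) == new_match(neon, {"store4"}) == False
```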
diff --git a/gcc/config/arm/cortex-a8-neon.md b/gcc/config/arm/cortex-a8-neon.md
index 2f0cc7b3a5a..6adfd136569 100644
--- a/gcc/config/arm/cortex-a8-neon.md
+++ b/gcc/config/arm/cortex-a8-neon.md
@@ -18,6 +18,221 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
+(define_attr "cortex_a8_neon_type"
+ "neon_int_1,neon_int_2,neon_int_3,neon_int_4,neon_int_5,neon_vqneg_vqabs,
+ neon_bit_ops_q,
+ neon_vaba,neon_vaba_qqq, neon_vmov,
+ neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,neon_mul_qqq_8_16_32_ddd_32,
+ neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar,
+ neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,neon_mla_qqq_8_16,
+ neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long,
+ neon_mla_qqq_32_qqd_32_scalar,neon_mul_ddd_16_scalar_32_16_long_scalar,
+ neon_mul_qqd_32_scalar,neon_mla_ddd_16_scalar_qdd_32_16_long_scalar,
+ neon_shift_1,neon_shift_2,neon_shift_3,
+ neon_vqshl_vrshl_vqrshl_qqq,neon_vsra_vrsra,neon_fp_vadd_ddd_vabs_dd,
+ neon_fp_vadd_qqq_vabs_qq,neon_fp_vsum,neon_fp_vmul_ddd,neon_fp_vmul_qqd,
+ neon_fp_vmla_ddd,neon_fp_vmla_qqq,neon_fp_vmla_ddd_scalar,
+ neon_fp_vmla_qqq_scalar,neon_fp_vrecps_vrsqrts_ddd,
+ neon_fp_vrecps_vrsqrts_qqq,neon_bp_simple,neon_bp_2cycle,neon_bp_3cycle,
+ neon_ldr,neon_str,neon_vld1_1_2_regs,neon_vld1_3_4_regs,
+ neon_vld2_2_regs_vld1_vld2_all_lanes,neon_vld2_4_regs,neon_vld3_vld4,
+ neon_vst1_1_2_regs_vst2_2_regs,neon_vst1_3_4_regs,
+ neon_vst2_4_regs_vst3_vst4,neon_vld1_vld2_lane,
+ neon_vld3_vld4_lane,neon_vst1_vst2_lane,neon_vst3_vst4_lane,
+ neon_vld3_vld4_all_lanes,neon_mcr,neon_mcr_2_mcrr,neon_mrc,neon_mrrc,
+ neon_ldm_2,neon_stm_2,none,unknown"
+ (cond [
+ (eq_attr "type" "neon_logic, neon_logic_q,\
+ neon_bsl, neon_cls, neon_cnt,\
+ neon_add, neon_add_q")
+ (const_string "neon_int_1")
+ (eq_attr "type" "neon_add_widen, neon_sub_widen,\
+ neon_sub, neon_sub_q")
+ (const_string "neon_int_2")
+ (eq_attr "type" "neon_neg, neon_neg_q,\
+ neon_reduc_add, neon_reduc_add_q,\
+ neon_reduc_add_long,\
+ neon_add_long, neon_sub_long")
+ (const_string "neon_int_3")
+ (eq_attr "type" "neon_abs, neon_abs_q,
+ neon_compare_zero, neon_compare_zero_q,\
+ neon_add_halve_narrow_q,\
+ neon_sub_halve_narrow_q,\
+ neon_add_halve, neon_add_halve_q,\
+ neon_qadd, neon_qadd_q,\
+ neon_tst, neon_tst_q")
+ (const_string "neon_int_4")
+ (eq_attr "type" "neon_abd_long, neon_sub_halve, neon_sub_halve_q,\
+ neon_qsub, neon_qsub_q,\
+ neon_abd, neon_abd_q,\
+ neon_compare, neon_compare_q,\
+ neon_minmax, neon_minmax_q, neon_reduc_minmax,\
+ neon_reduc_minmax_q")
+ (const_string "neon_int_5")
+ (eq_attr "type" "neon_qneg, neon_qneg_q, neon_qabs, neon_qabs_q")
+ (const_string "neon_vqneg_vqabs")
+ (eq_attr "type" "neon_move, neon_move_q")
+ (const_string "neon_vmov")
+ (eq_attr "type" "neon_bsl_q, neon_cls_q, neon_cnt_q")
+ (const_string "neon_bit_ops_q")
+ (eq_attr "type" "neon_arith_acc, neon_reduc_add_acc")
+ (const_string "neon_vaba")
+ (eq_attr "type" "neon_arith_acc_q")
+ (const_string "neon_vaba_qqq")
+ (eq_attr "type" "neon_shift_imm, neon_shift_imm_q,\
+ neon_shift_imm_long, neon_shift_imm_narrow_q,\
+ neon_shift_reg")
+ (const_string "neon_shift_1")
+ (eq_attr "type" "neon_sat_shift_imm, neon_sat_shift_imm_q,
+ neon_sat_shift_imm_narrow_q,\
+ neon_sat_shift_reg")
+ (const_string "neon_shift_2")
+ (eq_attr "type" "neon_shift_reg_q")
+ (const_string "neon_shift_3")
+ (eq_attr "type" "neon_sat_shift_reg_q")
+ (const_string "neon_vqshl_vrshl_vqrshl_qqq")
+ (eq_attr "type" "neon_shift_acc, neon_shift_acc_q")
+ (const_string "neon_vsra_vrsra")
+ (eq_attr "type" "neon_mul_b, neon_mul_h,\
+ neon_mul_b_long, neon_mul_h_long,\
+ neon_sat_mul_b, neon_sat_mul_h,\
+ neon_sat_mul_b_long, neon_sat_mul_h_long")
+ (const_string
+ "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long")
+ (eq_attr "type" "neon_mul_b_q, neon_mul_h_q,\
+ neon_sat_mul_b_q, neon_sat_mul_h_q")
+ (const_string "neon_mul_qqq_8_16_32_ddd_32")
+ (eq_attr "type" "neon_mul_s, neon_mul_s_long,\
+ neon_sat_mul_s, neon_sat_mul_s_long,\
+ neon_mul_h_scalar_q, neon_sat_mul_h_scalar_q,\
+ neon_mul_s_scalar, neon_sat_mul_s_scalar,\
+ neon_mul_s_scalar_long,\
+ neon_sat_mul_s_scalar_long")
+ (const_string
+ "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")
+ (eq_attr "type" "neon_mla_b, neon_mla_h,\
+ neon_mla_b_long, neon_mla_h_long,\
+ neon_sat_mla_b_long, neon_sat_mla_h_long,\
+ neon_sat_mla_h_scalar_long")
+ (const_string
+ "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
+ (eq_attr "type" "neon_mla_b_q, neon_mla_h_q")
+ (const_string "neon_mla_qqq_8_16")
+ (eq_attr "type" "neon_mla_s, neon_mla_s_long,\
+ neon_sat_mla_s_long,\
+ neon_mla_h_scalar_q, neon_mla_s_scalar,\
+ neon_mla_s_scalar_long,\
+ neon_sat_mla_s_scalar_long")
+ (const_string
+ "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")
+ (eq_attr "type" "neon_mla_s_q, neon_mla_s_scalar_q")
+ (const_string "neon_mla_qqq_32_qqd_32_scalar")
+ (eq_attr "type" "neon_mul_h_scalar, neon_sat_mul_h_scalar,\
+ neon_mul_h_scalar_long,\
+ neon_sat_mul_h_scalar_long")
+ (const_string
+ "neon_mul_ddd_16_scalar_32_16_long_scalar")
+ (eq_attr "type" "neon_mul_s_q, neon_sat_mul_s_q,\
+ neon_mul_s_scalar_q")
+ (const_string "neon_mul_qqd_32_scalar")
+ (eq_attr "type" "neon_mla_h_scalar, neon_mla_h_scalar_long")
+ (const_string
+ "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar")
+ (eq_attr "type" "neon_fp_abd_s, neon_fp_abs_s, neon_fp_neg_s,\
+ neon_fp_addsub_s, neon_fp_compare_s,\
+ neon_fp_minmax_s, neon_fp_mul_s,\
+ neon_fp_recpe_s, neon_fp_rsqrte_s,\
+ neon_fp_to_int_s, neon_int_to_fp_s")
+ (const_string "neon_fp_vadd_ddd_vabs_dd")
+ (eq_attr "type" "neon_fp_abd_s_q, neon_fp_abs_s_q,\
+ neon_fp_neg_s_q,\
+ neon_fp_addsub_s_q, neon_fp_compare_s_q,\
+ neon_fp_minmax_s_q, neon_fp_mul_s_q,\
+ neon_fp_recpe_s_q, neon_fp_rsqrte_s_q,\
+ neon_fp_to_int_s_q, neon_int_to_fp_s_q")
+ (const_string "neon_fp_vadd_qqq_vabs_qq")
+ (eq_attr "type" "neon_fp_reduc_add_s, neon_fp_reduc_minmax_s,\
+ neon_fp_reduc_add_s_q, neon_fp_reduc_minmax_s_q")
+ (const_string "neon_fp_vsum")
+ (eq_attr "type" "neon_fp_mul_s_scalar")
+ (const_string "neon_fp_vmul_ddd")
+ (eq_attr "type" "neon_fp_mul_s_scalar_q")
+ (const_string "neon_fp_vmul_qqd")
+ (eq_attr "type" "neon_fp_mla_s")
+ (const_string "neon_fp_vmla_ddd")
+ (eq_attr "type" "neon_fp_mla_s_q")
+ (const_string "neon_fp_vmla_qqq")
+ (eq_attr "type" "neon_fp_mla_s_scalar")
+ (const_string "neon_fp_vmla_ddd_scalar")
+ (eq_attr "type" "neon_fp_mla_s_scalar_q")
+ (const_string "neon_fp_vmla_qqq_scalar")
+ (eq_attr "type" "neon_fp_recps_s, neon_fp_rsqrts_s")
+ (const_string "neon_fp_vrecps_vrsqrts_ddd")
+ (eq_attr "type" "neon_fp_recps_s_q, neon_fp_rsqrts_s_q")
+ (const_string "neon_fp_vrecps_vrsqrts_qqq")
+ (eq_attr "type" "neon_move_narrow_q, neon_dup,\
+ neon_dup_q, neon_permute, neon_zip,\
+ neon_ext, neon_rev, neon_rev_q")
+ (const_string "neon_bp_simple")
+ (eq_attr "type" "neon_permute_q, neon_ext_q, neon_tbl1, neon_tbl2")
+ (const_string "neon_bp_2cycle")
+ (eq_attr "type" "neon_zip_q, neon_tbl3, neon_tbl4")
+ (const_string "neon_bp_3cycle")
+ (eq_attr "type" "neon_ldr")
+ (const_string "neon_ldr")
+ (eq_attr "type" "neon_str")
+ (const_string "neon_str")
+ (eq_attr "type" "neon_load1_1reg, neon_load1_1reg_q,\
+ neon_load1_2reg, neon_load1_2reg_q,\
+ neon_load2_2reg, neon_load2_2reg_q")
+ (const_string "neon_vld1_1_2_regs")
+ (eq_attr "type" "neon_load1_3reg, neon_load1_3reg_q,\
+ neon_load1_4reg, neon_load1_4reg_q")
+ (const_string "neon_vld1_3_4_regs")
+ (eq_attr "type" "neon_load1_all_lanes, neon_load1_all_lanes_q,\
+ neon_load2_all_lanes, neon_load2_all_lanes_q")
+ (const_string
+ "neon_vld2_2_regs_vld1_vld2_all_lanes")
+ (eq_attr "type" "neon_load3_all_lanes, neon_load3_all_lanes_q,\
+ neon_load4_all_lanes, neon_load4_all_lanes_q,\
+ neon_load2_4reg, neon_load2_4reg_q")
+ (const_string "neon_vld2_4_regs")
+ (eq_attr "type" "neon_load3_3reg, neon_load3_3reg_q,\
+ neon_load4_4reg, neon_load4_4reg_q")
+ (const_string "neon_vld3_vld4")
+ (eq_attr "type" "f_loads, f_loadd, f_stores, f_stored,\
+ neon_load1_one_lane, neon_load1_one_lane_q,\
+ neon_load2_one_lane, neon_load2_one_lane_q")
+ (const_string "neon_vld1_vld2_lane")
+ (eq_attr "type" "neon_load3_one_lane, neon_load3_one_lane_q,\
+ neon_load4_one_lane, neon_load4_one_lane_q")
+ (const_string "neon_vld3_vld4_lane")
+ (eq_attr "type" "neon_store1_1reg, neon_store1_1reg_q,\
+ neon_store1_2reg, neon_store1_2reg_q,\
+ neon_store2_2reg, neon_store2_2reg_q")
+ (const_string "neon_vst1_1_2_regs_vst2_2_regs")
+ (eq_attr "type" "neon_store1_3reg, neon_store1_3reg_q,\
+ neon_store1_4reg, neon_store1_4reg_q")
+ (const_string "neon_vst1_3_4_regs")
+ (eq_attr "type" "neon_store2_4reg, neon_store2_4reg_q,\
+ neon_store3_3reg, neon_store3_3reg_q,\
+ neon_store4_4reg, neon_store4_4reg_q")
+ (const_string "neon_vst2_4_regs_vst3_vst4")
+ (eq_attr "type" "neon_store1_one_lane, neon_store1_one_lane_q,\
+ neon_store2_one_lane, neon_store2_one_lane_q")
+ (const_string "neon_vst1_vst2_lane")
+ (eq_attr "type" "neon_store3_one_lane, neon_store3_one_lane_q,\
+ neon_store4_one_lane, neon_store4_one_lane_q")
+ (const_string "neon_vst3_vst4_lane")
+ (eq_attr "type" "neon_from_gp, f_mcr")
+ (const_string "neon_mcr")
+ (eq_attr "type" "neon_from_gp_q, f_mcrr")
+ (const_string "neon_mcr_2_mcrr")
+ (eq_attr "type" "neon_to_gp, f_mrc")
+ (const_string "neon_mrc")
+ (eq_attr "type" "neon_to_gp_q, f_mrrc")
+ (const_string "neon_mrrc")]
+ (const_string "unknown")))
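Conceptually, the `cortex_a8_neon_type` attribute added above is a lookup table: each fine-grained `type` value is bucketed into a coarser Cortex-A8 scheduling class, with `unknown` as the fallback of the `(cond [...])`. A rough Python sketch of that mapping (only a handful of the many entries are reproduced; this is an illustration, not the generated GCC attribute code):

```python
# A small excerpt of the type -> cortex_a8_neon_type bucketing defined
# by the (cond [...]) attribute expression above.
A8_NEON_CLASS = {
    "neon_logic": "neon_int_1",
    "neon_add": "neon_int_1",
    "neon_sub": "neon_int_2",
    "neon_qneg": "neon_vqneg_vqabs",
    "neon_move": "neon_vmov",
    "neon_fp_mla_s": "neon_fp_vmla_ddd",
    "neon_to_gp": "neon_mrc",
}

def cortex_a8_neon_type(insn_type: str) -> str:
    # Mirrors the shape of (cond [tests...] (const_string "unknown")):
    # first matching bucket wins, anything unmatched is "unknown".
    return A8_NEON_CLASS.get(insn_type, "unknown")

assert cortex_a8_neon_type("neon_add") == "neon_int_1"
assert cortex_a8_neon_type("neon_fp_mla_s") == "neon_fp_vmla_ddd"
assert cortex_a8_neon_type("fdivs") == "unknown"
```

The reservations later in the file then match on these coarse classes instead of enumerating dozens of raw `type` values each time.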
(define_automaton "cortex_a8_neon")
@@ -159,12 +374,12 @@
(define_insn_reservation "cortex_a8_vfp_divs" 37
(and (eq_attr "tune" "cortexa8")
- (eq_attr "type" "fdivs"))
+ (eq_attr "type" "fdivs, fsqrts"))
"cortex_a8_vfp,cortex_a8_vfplite*36")
(define_insn_reservation "cortex_a8_vfp_divd" 65
(and (eq_attr "tune" "cortexa8")
- (eq_attr "type" "fdivd"))
+ (eq_attr "type" "fdivd, fsqrtd"))
"cortex_a8_vfp,cortex_a8_vfplite*64")
;; Comparisons can actually take 7 cycles sometimes instead of four,
@@ -172,74 +387,74 @@
;; take four cycles, we pick that latency.
(define_insn_reservation "cortex_a8_vfp_farith" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "type" "fcpys,ffariths,ffarithd,fconsts,fconstd,fcmps,fcmpd"))
+ (eq_attr "type" "fmov,ffariths,ffarithd,fconsts,fconstd,fcmps,fcmpd"))
"cortex_a8_vfp,cortex_a8_vfplite*3")
(define_insn_reservation "cortex_a8_vfp_cvt" 7
(and (eq_attr "tune" "cortexa8")
- (eq_attr "type" "f_cvt"))
+ (eq_attr "type" "f_cvt,f_cvtf2i,f_cvti2f"))
"cortex_a8_vfp,cortex_a8_vfplite*6")
;; NEON -> core transfers.
(define_insn_reservation "cortex_a8_neon_mrc" 20
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mrc"))
+ (eq_attr "cortex_a8_neon_type" "neon_mrc"))
"cortex_a8_neon_ls")
(define_insn_reservation "cortex_a8_neon_mrrc" 21
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mrrc"))
+ (eq_attr "cortex_a8_neon_type" "neon_mrrc"))
"cortex_a8_neon_ls_2")
-;; The remainder of this file is auto-generated by neon-schedgen.
+;; Arithmetic Operations
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N3.
(define_insn_reservation "cortex_a8_neon_int_1" 3
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_int_1"))
+ (eq_attr "cortex_a8_neon_type" "neon_int_1"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their (D|Q)m operands at N1,
;; their (D|Q)n operands at N2, and produce a result at N3.
(define_insn_reservation "cortex_a8_neon_int_2" 3
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_int_2"))
+ (eq_attr "cortex_a8_neon_type" "neon_int_2"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N3.
(define_insn_reservation "cortex_a8_neon_int_3" 3
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_int_3"))
+ (eq_attr "cortex_a8_neon_type" "neon_int_3"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N4.
(define_insn_reservation "cortex_a8_neon_int_4" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_int_4"))
+ (eq_attr "cortex_a8_neon_type" "neon_int_4"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their (D|Q)m operands at N1,
;; their (D|Q)n operands at N2, and produce a result at N4.
(define_insn_reservation "cortex_a8_neon_int_5" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_int_5"))
+ (eq_attr "cortex_a8_neon_type" "neon_int_5"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N4.
(define_insn_reservation "cortex_a8_neon_vqneg_vqabs" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vqneg_vqabs"))
+ (eq_attr "cortex_a8_neon_type" "neon_vqneg_vqabs"))
"cortex_a8_neon_dp")
;; Instructions using this reservation produce a result at N3.
(define_insn_reservation "cortex_a8_neon_vmov" 3
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vmov"))
+ (eq_attr "cortex_a8_neon_type" "neon_vmov"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -247,7 +462,7 @@
;; produce a result at N6.
(define_insn_reservation "cortex_a8_neon_vaba" 6
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vaba"))
+ (eq_attr "cortex_a8_neon_type" "neon_vaba"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -255,35 +470,39 @@
;; produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a8_neon_vaba_qqq" 7
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vaba_qqq"))
+ (eq_attr "cortex_a8_neon_type" "neon_vaba_qqq"))
"cortex_a8_neon_dp_2")
-;; Instructions using this reservation read their (D|Q)m operands at N1,
-;; their (D|Q)d operands at N3, and produce a result at N6.
-(define_insn_reservation "cortex_a8_neon_vsma" 6
+;; Instructions using this reservation read their source operands at N2, and
+;; produce a result at N3 on cycle 2.
+(define_insn_reservation "cortex_a8_neon_bit_ops_q" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vsma"))
- "cortex_a8_neon_dp")
+ (eq_attr "cortex_a8_neon_type" "neon_bit_ops_q"))
+ "cortex_a8_neon_dp_2")
+
+;; Integer Multiply/Accumulate Operations
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N6.
(define_insn_reservation "cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long" 6
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long"))
+ (eq_attr "cortex_a8_neon_type"
+ "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a8_neon_mul_qqq_8_16_32_ddd_32" 7
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mul_qqq_8_16_32_ddd_32"))
+ (eq_attr "cortex_a8_neon_type" "neon_mul_qqq_8_16_32_ddd_32"))
"cortex_a8_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a8_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar" 7
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar"))
+ (eq_attr "cortex_a8_neon_type"
+ "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar"))
"cortex_a8_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -291,7 +510,8 @@
;; produce a result at N6.
(define_insn_reservation "cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long" 6
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long"))
+ (eq_attr "cortex_a8_neon_type"
+ "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -299,7 +519,7 @@
;; produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a8_neon_mla_qqq_8_16" 7
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mla_qqq_8_16"))
+ (eq_attr "cortex_a8_neon_type" "neon_mla_qqq_8_16"))
"cortex_a8_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -307,7 +527,8 @@
;; produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a8_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long" 7
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))
+ (eq_attr "cortex_a8_neon_type"
+ "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))
"cortex_a8_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -315,21 +536,22 @@
;; produce a result at N6 on cycle 4.
(define_insn_reservation "cortex_a8_neon_mla_qqq_32_qqd_32_scalar" 9
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mla_qqq_32_qqd_32_scalar"))
+ (eq_attr "cortex_a8_neon_type" "neon_mla_qqq_32_qqd_32_scalar"))
"cortex_a8_neon_dp_4")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N6.
(define_insn_reservation "cortex_a8_neon_mul_ddd_16_scalar_32_16_long_scalar" 6
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mul_ddd_16_scalar_32_16_long_scalar"))
+ (eq_attr "cortex_a8_neon_type"
+ "neon_mul_ddd_16_scalar_32_16_long_scalar"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N6 on cycle 4.
(define_insn_reservation "cortex_a8_neon_mul_qqd_32_scalar" 9
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mul_qqd_32_scalar"))
+ (eq_attr "cortex_a8_neon_type" "neon_mul_qqd_32_scalar"))
"cortex_a8_neon_dp_4")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -337,84 +559,82 @@
;; produce a result at N6.
(define_insn_reservation "cortex_a8_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar" 6
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar"))
+ (eq_attr "cortex_a8_neon_type"
+ "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar"))
"cortex_a8_neon_dp")
+;; Shift Operations
+
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N3.
(define_insn_reservation "cortex_a8_neon_shift_1" 3
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_shift_1"))
+ (eq_attr "cortex_a8_neon_type" "neon_shift_1"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N4.
(define_insn_reservation "cortex_a8_neon_shift_2" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_shift_2"))
+ (eq_attr "cortex_a8_neon_type" "neon_shift_2"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N3 on cycle 2.
(define_insn_reservation "cortex_a8_neon_shift_3" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_shift_3"))
+ (eq_attr "cortex_a8_neon_type" "neon_shift_3"))
"cortex_a8_neon_dp_2")
;; Instructions using this reservation read their source operands at N1, and
-;; produce a result at N1.
-(define_insn_reservation "cortex_a8_neon_vshl_ddd" 1
- (and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vshl_ddd"))
- "cortex_a8_neon_dp")
-
-;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N4 on cycle 2.
(define_insn_reservation "cortex_a8_neon_vqshl_vrshl_vqrshl_qqq" 5
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vqshl_vrshl_vqrshl_qqq"))
+ (eq_attr "cortex_a8_neon_type" "neon_vqshl_vrshl_vqrshl_qqq"))
"cortex_a8_neon_dp_2")
;; Instructions using this reservation read their (D|Q)m operands at N1,
;; their (D|Q)d operands at N3, and produce a result at N6.
(define_insn_reservation "cortex_a8_neon_vsra_vrsra" 6
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vsra_vrsra"))
+ (eq_attr "cortex_a8_neon_type" "neon_vsra_vrsra"))
"cortex_a8_neon_dp")
+;; Floating point Operations
+
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N5.
(define_insn_reservation "cortex_a8_neon_fp_vadd_ddd_vabs_dd" 5
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vadd_ddd_vabs_dd"))
- "cortex_a8_neon_fadd")
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vadd_ddd_vabs_dd"))
+ "cortex_a8_neon_fadd")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N5 on cycle 2.
(define_insn_reservation "cortex_a8_neon_fp_vadd_qqq_vabs_qq" 6
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vadd_qqq_vabs_qq"))
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vadd_qqq_vabs_qq"))
"cortex_a8_neon_fadd_2")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N5.
(define_insn_reservation "cortex_a8_neon_fp_vsum" 5
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vsum"))
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vsum"))
"cortex_a8_neon_fadd")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N5.
(define_insn_reservation "cortex_a8_neon_fp_vmul_ddd" 5
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vmul_ddd"))
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vmul_ddd"))
"cortex_a8_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N5 on cycle 2.
(define_insn_reservation "cortex_a8_neon_fp_vmul_qqd" 6
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vmul_qqd"))
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vmul_qqd"))
"cortex_a8_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -422,7 +642,7 @@
;; produce a result at N9.
(define_insn_reservation "cortex_a8_neon_fp_vmla_ddd" 9
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vmla_ddd"))
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vmla_ddd"))
"cortex_a8_neon_fmul_then_fadd")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -430,7 +650,7 @@
;; produce a result at N9 on cycle 2.
(define_insn_reservation "cortex_a8_neon_fp_vmla_qqq" 10
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vmla_qqq"))
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vmla_qqq"))
"cortex_a8_neon_fmul_then_fadd_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -438,7 +658,7 @@
;; produce a result at N9.
(define_insn_reservation "cortex_a8_neon_fp_vmla_ddd_scalar" 9
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vmla_ddd_scalar"))
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vmla_ddd_scalar"))
"cortex_a8_neon_fmul_then_fadd")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -446,152 +666,148 @@
;; produce a result at N9 on cycle 2.
(define_insn_reservation "cortex_a8_neon_fp_vmla_qqq_scalar" 10
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vmla_qqq_scalar"))
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vmla_qqq_scalar"))
"cortex_a8_neon_fmul_then_fadd_2")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N9.
(define_insn_reservation "cortex_a8_neon_fp_vrecps_vrsqrts_ddd" 9
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vrecps_vrsqrts_ddd"))
+ (eq_attr "cortex_a8_neon_type" "neon_fp_vrecps_vrsqrts_ddd"))
"cortex_a8_neon_fmul_then_fadd")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N9 on cycle 2.
(define_insn_reservation "cortex_a8_neon_fp_vrecps_vrsqrts_qqq" 10
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_fp_vrecps_vrsqrts_qqq"))
+ (eq_attr "type" "neon_fp_recps_s_q, neon_fp_rsqrts_s_q"))
"cortex_a8_neon_fmul_then_fadd_2")
+;; Permute operations.
+
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2.
(define_insn_reservation "cortex_a8_neon_bp_simple" 2
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_bp_simple"))
+ (eq_attr "cortex_a8_neon_type" "neon_bp_simple"))
"cortex_a8_neon_perm")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2 on cycle 2.
(define_insn_reservation "cortex_a8_neon_bp_2cycle" 3
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_bp_2cycle"))
+ (eq_attr "cortex_a8_neon_type" "neon_bp_2cycle"))
"cortex_a8_neon_perm_2")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2 on cycle 3.
(define_insn_reservation "cortex_a8_neon_bp_3cycle" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_bp_3cycle"))
+ (eq_attr "cortex_a8_neon_type" "neon_bp_3cycle"))
"cortex_a8_neon_perm_3")
+;; Load Operations.
+
;; Instructions using this reservation produce a result at N1.
(define_insn_reservation "cortex_a8_neon_ldr" 1
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_ldr"))
+ (eq_attr "cortex_a8_neon_type" "neon_ldr"))
"cortex_a8_neon_ls")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a8_neon_str" 0
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_str"))
+ (eq_attr "cortex_a8_neon_type" "neon_str"))
"cortex_a8_neon_ls")
;; Instructions using this reservation produce a result at N1 on cycle 2.
(define_insn_reservation "cortex_a8_neon_vld1_1_2_regs" 2
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vld1_1_2_regs"))
+ (eq_attr "cortex_a8_neon_type" "neon_vld1_1_2_regs"))
"cortex_a8_neon_ls_2")
;; Instructions using this reservation produce a result at N1 on cycle 3.
(define_insn_reservation "cortex_a8_neon_vld1_3_4_regs" 3
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vld1_3_4_regs"))
+ (eq_attr "cortex_a8_neon_type" "neon_vld1_3_4_regs"))
"cortex_a8_neon_ls_3")
;; Instructions using this reservation produce a result at N2 on cycle 2.
(define_insn_reservation "cortex_a8_neon_vld2_2_regs_vld1_vld2_all_lanes" 3
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vld2_2_regs_vld1_vld2_all_lanes"))
+ (eq_attr "cortex_a8_neon_type" "neon_vld2_2_regs_vld1_vld2_all_lanes"))
"cortex_a8_neon_ls_2")
;; Instructions using this reservation produce a result at N2 on cycle 3.
(define_insn_reservation "cortex_a8_neon_vld2_4_regs" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vld2_4_regs"))
+ (eq_attr "cortex_a8_neon_type" "neon_vld2_4_regs"))
"cortex_a8_neon_ls_3")
;; Instructions using this reservation produce a result at N2 on cycle 4.
(define_insn_reservation "cortex_a8_neon_vld3_vld4" 5
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vld3_vld4"))
+ (eq_attr "cortex_a8_neon_type" "neon_vld3_vld4"))
"cortex_a8_neon_ls_4")
+;; Store operations.
+
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a8_neon_vst1_1_2_regs_vst2_2_regs" 0
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vst1_1_2_regs_vst2_2_regs"))
+ (eq_attr "cortex_a8_neon_type" "neon_vst1_1_2_regs_vst2_2_regs"))
"cortex_a8_neon_ls_2")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a8_neon_vst1_3_4_regs" 0
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vst1_3_4_regs"))
+ (eq_attr "cortex_a8_neon_type" "neon_vst1_3_4_regs"))
"cortex_a8_neon_ls_3")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a8_neon_vst2_4_regs_vst3_vst4" 0
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vst2_4_regs_vst3_vst4"))
- "cortex_a8_neon_ls_4")
-
-;; Instructions using this reservation read their source operands at N1.
-(define_insn_reservation "cortex_a8_neon_vst3_vst4" 0
- (and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vst3_vst4"))
+ (eq_attr "cortex_a8_neon_type" "neon_vst2_4_regs_vst3_vst4"))
"cortex_a8_neon_ls_4")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2 on cycle 3.
(define_insn_reservation "cortex_a8_neon_vld1_vld2_lane" 4
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vld1_vld2_lane"))
+ (eq_attr "cortex_a8_neon_type" "neon_vld1_vld2_lane"))
"cortex_a8_neon_ls_3")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2 on cycle 5.
(define_insn_reservation "cortex_a8_neon_vld3_vld4_lane" 6
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vld3_vld4_lane"))
+ (eq_attr "cortex_a8_neon_type" "neon_vld3_vld4_lane"))
"cortex_a8_neon_ls_5")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a8_neon_vst1_vst2_lane" 0
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vst1_vst2_lane"))
+ (eq_attr "cortex_a8_neon_type" "neon_vst1_vst2_lane"))
"cortex_a8_neon_ls_2")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a8_neon_vst3_vst4_lane" 0
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vst3_vst4_lane"))
+ (eq_attr "cortex_a8_neon_type" "neon_vst3_vst4_lane"))
"cortex_a8_neon_ls_3")
-;; Instructions using this reservation produce a result at N2 on cycle 2.
-(define_insn_reservation "cortex_a8_neon_vld3_vld4_all_lanes" 3
- (and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_vld3_vld4_all_lanes"))
- "cortex_a8_neon_ls_3")
+;; Register transfer operations.
;; Instructions using this reservation produce a result at N2.
(define_insn_reservation "cortex_a8_neon_mcr" 2
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mcr"))
+ (eq_attr "cortex_a8_neon_type" "neon_mcr"))
"cortex_a8_neon_perm")
;; Instructions using this reservation produce a result at N2.
(define_insn_reservation "cortex_a8_neon_mcr_2_mcrr" 2
(and (eq_attr "tune" "cortexa8")
- (eq_attr "neon_type" "neon_mcr_2_mcrr"))
+ (eq_attr "cortex_a8_neon_type" "neon_mcr_2_mcrr"))
"cortex_a8_neon_perm_2")
;; Exceptions to the default latencies.
@@ -599,6 +815,7 @@
(define_bypass 1 "cortex_a8_neon_mcr_2_mcrr"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -613,20 +830,7 @@
(define_bypass 1 "cortex_a8_neon_mcr"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
- cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a8_neon_mla_qqq_8_16,\
- cortex_a8_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a8_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a8_neon_fp_vmla_ddd,\
- cortex_a8_neon_fp_vmla_qqq,\
- cortex_a8_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a8_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 2 "cortex_a8_neon_vld3_vld4_all_lanes"
- "cortex_a8_neon_int_1,\
- cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -641,6 +845,7 @@
(define_bypass 5 "cortex_a8_neon_vld3_vld4_lane"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -655,6 +860,7 @@
(define_bypass 3 "cortex_a8_neon_vld1_vld2_lane"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -669,6 +875,7 @@
(define_bypass 4 "cortex_a8_neon_vld3_vld4"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -683,6 +890,7 @@
(define_bypass 3 "cortex_a8_neon_vld2_4_regs"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -697,6 +905,7 @@
(define_bypass 2 "cortex_a8_neon_vld2_2_regs_vld1_vld2_all_lanes"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -711,6 +920,7 @@
(define_bypass 2 "cortex_a8_neon_vld1_3_4_regs"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -725,6 +935,7 @@
(define_bypass 1 "cortex_a8_neon_vld1_1_2_regs"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -739,6 +950,7 @@
(define_bypass 0 "cortex_a8_neon_ldr"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -753,6 +965,7 @@
(define_bypass 3 "cortex_a8_neon_bp_3cycle"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -767,6 +980,7 @@
(define_bypass 2 "cortex_a8_neon_bp_2cycle"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -781,6 +995,7 @@
(define_bypass 1 "cortex_a8_neon_bp_simple"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -795,6 +1010,7 @@
(define_bypass 9 "cortex_a8_neon_fp_vrecps_vrsqrts_qqq"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -809,6 +1025,7 @@
(define_bypass 8 "cortex_a8_neon_fp_vrecps_vrsqrts_ddd"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -823,6 +1040,7 @@
(define_bypass 9 "cortex_a8_neon_fp_vmla_qqq_scalar"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -837,6 +1055,7 @@
(define_bypass 8 "cortex_a8_neon_fp_vmla_ddd_scalar"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -851,6 +1070,7 @@
(define_bypass 9 "cortex_a8_neon_fp_vmla_qqq"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -865,6 +1085,7 @@
(define_bypass 8 "cortex_a8_neon_fp_vmla_ddd"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -879,6 +1100,7 @@
(define_bypass 5 "cortex_a8_neon_fp_vmul_qqd"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -893,6 +1115,7 @@
(define_bypass 4 "cortex_a8_neon_fp_vmul_ddd"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -907,6 +1130,7 @@
(define_bypass 4 "cortex_a8_neon_fp_vsum"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -921,6 +1145,7 @@
(define_bypass 5 "cortex_a8_neon_fp_vadd_qqq_vabs_qq"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -935,6 +1160,7 @@
(define_bypass 4 "cortex_a8_neon_fp_vadd_ddd_vabs_dd"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -949,6 +1175,7 @@
(define_bypass 5 "cortex_a8_neon_vsra_vrsra"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -963,20 +1190,7 @@
(define_bypass 4 "cortex_a8_neon_vqshl_vrshl_vqrshl_qqq"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
- cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a8_neon_mla_qqq_8_16,\
- cortex_a8_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a8_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a8_neon_fp_vmla_ddd,\
- cortex_a8_neon_fp_vmla_qqq,\
- cortex_a8_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a8_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 0 "cortex_a8_neon_vshl_ddd"
- "cortex_a8_neon_int_1,\
- cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -991,6 +1205,7 @@
(define_bypass 3 "cortex_a8_neon_shift_3"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1005,6 +1220,7 @@
(define_bypass 3 "cortex_a8_neon_shift_2"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1019,6 +1235,7 @@
(define_bypass 2 "cortex_a8_neon_shift_1"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1033,6 +1250,7 @@
(define_bypass 5 "cortex_a8_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1047,6 +1265,7 @@
(define_bypass 8 "cortex_a8_neon_mul_qqd_32_scalar"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1061,6 +1280,7 @@
(define_bypass 5 "cortex_a8_neon_mul_ddd_16_scalar_32_16_long_scalar"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1075,6 +1295,7 @@
(define_bypass 8 "cortex_a8_neon_mla_qqq_32_qqd_32_scalar"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1089,6 +1310,7 @@
(define_bypass 6 "cortex_a8_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1103,6 +1325,7 @@
(define_bypass 6 "cortex_a8_neon_mla_qqq_8_16"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1117,6 +1340,7 @@
(define_bypass 5 "cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1131,6 +1355,7 @@
(define_bypass 6 "cortex_a8_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1145,6 +1370,7 @@
(define_bypass 6 "cortex_a8_neon_mul_qqq_8_16_32_ddd_32"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1159,20 +1385,7 @@
(define_bypass 5 "cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
- cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a8_neon_mla_qqq_8_16,\
- cortex_a8_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a8_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a8_neon_fp_vmla_ddd,\
- cortex_a8_neon_fp_vmla_qqq,\
- cortex_a8_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a8_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 5 "cortex_a8_neon_vsma"
- "cortex_a8_neon_int_1,\
- cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1187,6 +1400,7 @@
(define_bypass 6 "cortex_a8_neon_vaba_qqq"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1201,6 +1415,7 @@
(define_bypass 5 "cortex_a8_neon_vaba"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1212,9 +1427,10 @@
cortex_a8_neon_fp_vrecps_vrsqrts_ddd,\
cortex_a8_neon_fp_vrecps_vrsqrts_qqq")
-(define_bypass 2 "cortex_a8_neon_vmov"
+(define_bypass 3 "cortex_a8_neon_bit_ops_q"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1229,6 +1445,7 @@
(define_bypass 3 "cortex_a8_neon_vqneg_vqabs"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1243,6 +1460,7 @@
(define_bypass 3 "cortex_a8_neon_int_5"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1257,6 +1475,7 @@
(define_bypass 3 "cortex_a8_neon_int_4"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1271,6 +1490,7 @@
(define_bypass 2 "cortex_a8_neon_int_3"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1285,6 +1505,7 @@
(define_bypass 2 "cortex_a8_neon_int_2"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1299,6 +1520,7 @@
(define_bypass 2 "cortex_a8_neon_int_1"
"cortex_a8_neon_int_1,\
cortex_a8_neon_int_4,\
+ cortex_a8_neon_bit_ops_q,\
cortex_a8_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a8_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a8_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
diff --git a/gcc/config/arm/cortex-a8.md b/gcc/config/arm/cortex-a8.md
index 1113a45ff0e..1eade5e1244 100644
--- a/gcc/config/arm/cortex-a8.md
+++ b/gcc/config/arm/cortex-a8.md
@@ -85,19 +85,25 @@
;; (source read in E2 and destination available at the end of that cycle).
(define_insn_reservation "cortex_a8_alu" 2
(and (eq_attr "tune" "cortexa8")
- (ior (and (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg")
- (eq_attr "neon_type" "none"))
- (eq_attr "type" "clz")))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,clz,rbit,rev,\
+ shift_imm,shift_reg,\
+ multiple,no_insn"))
"cortex_a8_default")
(define_insn_reservation "cortex_a8_alu_shift" 2
(and (eq_attr "tune" "cortexa8")
- (eq_attr "type" "extend,arlo_shift"))
+ (eq_attr "type" "alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ extend"))
"cortex_a8_default")
(define_insn_reservation "cortex_a8_alu_shift_reg" 2
(and (eq_attr "tune" "cortexa8")
- (eq_attr "type" "arlo_shift_reg"))
+ (eq_attr "type" "alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg"))
"cortex_a8_default")
;; Move instructions.
@@ -105,7 +111,8 @@
(define_insn_reservation "cortex_a8_mov" 1
(and (eq_attr "tune" "cortexa8")
(eq_attr "type" "mov_imm,mov_reg,mov_shift,mov_shift_reg,\
- mvn_imm,mvn_reg,mvn_shift,mvn_shift_reg"))
+ mvn_imm,mvn_reg,mvn_shift,mvn_shift_reg,\
+ mrs"))
"cortex_a8_default")
;; Exceptions to the default latencies for data processing instructions.
diff --git a/gcc/config/arm/cortex-a9-neon.md b/gcc/config/arm/cortex-a9-neon.md
index 9688edc8f72..cd6b7a4fd36 100644
--- a/gcc/config/arm/cortex-a9-neon.md
+++ b/gcc/config/arm/cortex-a9-neon.md
@@ -19,6 +19,220 @@
;; along with GCC; see the file COPYING3. If not see
;; <http://www.gnu.org/licenses/>.
+(define_attr "cortex_a9_neon_type"
+ "neon_int_1,neon_int_2,neon_int_3,neon_int_4,neon_int_5,neon_vqneg_vqabs,
+ neon_bit_ops_q,
+ neon_vaba,neon_vaba_qqq, neon_vmov,
+ neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,neon_mul_qqq_8_16_32_ddd_32,
+ neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar,
+ neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,neon_mla_qqq_8_16,
+ neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long,
+ neon_mla_qqq_32_qqd_32_scalar,neon_mul_ddd_16_scalar_32_16_long_scalar,
+ neon_mul_qqd_32_scalar,neon_mla_ddd_16_scalar_qdd_32_16_long_scalar,
+ neon_shift_1,neon_shift_2,neon_shift_3,
+ neon_vqshl_vrshl_vqrshl_qqq,neon_vsra_vrsra,neon_fp_vadd_ddd_vabs_dd,
+ neon_fp_vadd_qqq_vabs_qq,neon_fp_vsum,neon_fp_vmul_ddd,neon_fp_vmul_qqd,
+ neon_fp_vmla_ddd,neon_fp_vmla_qqq,neon_fp_vmla_ddd_scalar,
+ neon_fp_vmla_qqq_scalar,neon_fp_vrecps_vrsqrts_ddd,
+ neon_fp_vrecps_vrsqrts_qqq,neon_bp_simple,neon_bp_2cycle,neon_bp_3cycle,
+ neon_ldr,neon_str,neon_vld1_1_2_regs,neon_vld1_3_4_regs,
+ neon_vld2_2_regs_vld1_vld2_all_lanes,neon_vld2_4_regs,neon_vld3_vld4,
+ neon_vst1_1_2_regs_vst2_2_regs,neon_vst1_3_4_regs,
+ neon_vst2_4_regs_vst3_vst4,neon_vld1_vld2_lane,
+ neon_vld3_vld4_lane,neon_vst1_vst2_lane,neon_vst3_vst4_lane,
+ neon_vld3_vld4_all_lanes,neon_mcr,neon_mcr_2_mcrr,neon_mrc,neon_mrrc,
+ neon_ldm_2,neon_stm_2,none,unknown"
+ (cond [
+ (eq_attr "type" "neon_logic, neon_logic_q,\
+ neon_bsl, neon_cls, neon_cnt,\
+ neon_add, neon_add_q")
+ (const_string "neon_int_1")
+ (eq_attr "type" "neon_add_widen, neon_sub_widen,\
+ neon_sub, neon_sub_q")
+ (const_string "neon_int_2")
+ (eq_attr "type" "neon_neg, neon_neg_q,\
+ neon_reduc_add, neon_reduc_add_q,\
+ neon_reduc_add_long,\
+ neon_add_long, neon_sub_long")
+ (const_string "neon_int_3")
+ (eq_attr "type" "neon_abs, neon_abs_q,
+ neon_compare_zero, neon_compare_zero_q,\
+ neon_add_halve_narrow_q,\
+ neon_sub_halve_narrow_q,\
+ neon_add_halve, neon_add_halve_q,\
+ neon_qadd, neon_qadd_q,\
+ neon_tst, neon_tst_q")
+ (const_string "neon_int_4")
+ (eq_attr "type" "neon_abd_long, neon_sub_halve, neon_sub_halve_q,\
+ neon_qsub, neon_qsub_q,\
+ neon_abd, neon_abd_q,\
+ neon_compare, neon_compare_q,\
+ neon_minmax, neon_minmax_q, neon_reduc_minmax,\
+ neon_reduc_minmax_q")
+ (const_string "neon_int_5")
+ (eq_attr "type" "neon_qneg, neon_qneg_q, neon_qabs, neon_qabs_q")
+ (const_string "neon_vqneg_vqabs")
+ (eq_attr "type" "neon_move, neon_move_q")
+ (const_string "neon_vmov")
+ (eq_attr "type" "neon_bsl_q, neon_cls_q, neon_cnt_q")
+ (const_string "neon_bit_ops_q")
+ (eq_attr "type" "neon_arith_acc, neon_reduc_add_acc")
+ (const_string "neon_vaba")
+ (eq_attr "type" "neon_arith_acc_q")
+ (const_string "neon_vaba_qqq")
+ (eq_attr "type" "neon_shift_imm, neon_shift_imm_q,\
+ neon_shift_imm_long, neon_shift_imm_narrow_q,\
+ neon_shift_reg")
+ (const_string "neon_shift_1")
+ (eq_attr "type" "neon_sat_shift_imm, neon_sat_shift_imm_q,
+ neon_sat_shift_imm_narrow_q,\
+ neon_sat_shift_reg")
+ (const_string "neon_shift_2")
+ (eq_attr "type" "neon_shift_reg_q")
+ (const_string "neon_shift_3")
+ (eq_attr "type" "neon_sat_shift_reg_q")
+ (const_string "neon_vqshl_vrshl_vqrshl_qqq")
+ (eq_attr "type" "neon_shift_acc, neon_shift_acc_q")
+ (const_string "neon_vsra_vrsra")
+ (eq_attr "type" "neon_mul_b, neon_mul_h,\
+ neon_mul_b_long, neon_mul_h_long,\
+ neon_sat_mul_b, neon_sat_mul_h,\
+ neon_sat_mul_b_long, neon_sat_mul_h_long")
+ (const_string
+ "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long")
+ (eq_attr "type" "neon_mul_b_q, neon_mul_h_q,\
+ neon_sat_mul_b_q, neon_sat_mul_h_q")
+ (const_string "neon_mul_qqq_8_16_32_ddd_32")
+ (eq_attr "type" "neon_mul_s, neon_mul_s_long,\
+ neon_sat_mul_s, neon_sat_mul_s_long,\
+ neon_mul_h_scalar_q, neon_sat_mul_h_scalar_q,\
+ neon_mul_s_scalar, neon_sat_mul_s_scalar,\
+ neon_mul_s_scalar_long,\
+ neon_sat_mul_s_scalar_long")
+ (const_string
+ "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")
+ (eq_attr "type" "neon_mla_b, neon_mla_h,\
+ neon_mla_b_long, neon_mla_h_long,\
+ neon_sat_mla_b_long, neon_sat_mla_h_long,\
+ neon_sat_mla_h_scalar_long")
+ (const_string
+ "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
+ (eq_attr "type" "neon_mla_b_q, neon_mla_h_q")
+ (const_string "neon_mla_qqq_8_16")
+ (eq_attr "type" "neon_mla_s, neon_mla_s_long,\
+ neon_sat_mla_s_long,\
+ neon_mla_h_scalar_q, neon_mla_s_scalar,\
+ neon_mla_s_scalar_long,\
+ neon_sat_mla_s_scalar_long")
+ (const_string
+ "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")
+ (eq_attr "type" "neon_mla_s_q, neon_mla_s_scalar_q")
+ (const_string "neon_mla_qqq_32_qqd_32_scalar")
+ (eq_attr "type" "neon_mul_h_scalar, neon_sat_mul_h_scalar,\
+ neon_mul_h_scalar_long,\
+ neon_sat_mul_h_scalar_long")
+ (const_string
+ "neon_mul_ddd_16_scalar_32_16_long_scalar")
+ (eq_attr "type" "neon_mul_s_q, neon_sat_mul_s_q,\
+ neon_mul_s_scalar_q")
+ (const_string "neon_mul_qqd_32_scalar")
+ (eq_attr "type" "neon_mla_h_scalar, neon_mla_h_scalar_long")
+ (const_string
+ "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar")
+ (eq_attr "type" "neon_fp_abd_s, neon_fp_abs_s, neon_fp_neg_s,\
+ neon_fp_addsub_s, neon_fp_compare_s,\
+ neon_fp_minmax_s, neon_fp_mul_s,\
+ neon_fp_recpe_s, neon_fp_rsqrte_s,\
+ neon_fp_to_int_s, neon_int_to_fp_s")
+ (const_string "neon_fp_vadd_ddd_vabs_dd")
+ (eq_attr "type" "neon_fp_abd_s_q, neon_fp_abs_s_q,\
+ neon_fp_neg_s_q,\
+ neon_fp_addsub_s_q, neon_fp_compare_s_q,\
+ neon_fp_minmax_s_q, neon_fp_mul_s_q,\
+ neon_fp_recpe_s_q, neon_fp_rsqrte_s_q,\
+ neon_fp_to_int_s_q, neon_int_to_fp_s_q")
+ (const_string "neon_fp_vadd_qqq_vabs_qq")
+ (eq_attr "type" "neon_fp_reduc_add_s, neon_fp_reduc_minmax_s,\
+ neon_fp_reduc_add_s_q, neon_fp_reduc_minmax_s_q")
+ (const_string "neon_fp_vsum")
+ (eq_attr "type" "neon_fp_mul_s_scalar")
+ (const_string "neon_fp_vmul_ddd")
+ (eq_attr "type" "neon_fp_mul_s_scalar_q")
+ (const_string "neon_fp_vmul_qqd")
+ (eq_attr "type" "neon_fp_mla_s")
+ (const_string "neon_fp_vmla_ddd")
+ (eq_attr "type" "neon_fp_mla_s_q")
+ (const_string "neon_fp_vmla_qqq")
+ (eq_attr "type" "neon_fp_mla_s_scalar")
+ (const_string "neon_fp_vmla_ddd_scalar")
+ (eq_attr "type" "neon_fp_mla_s_scalar_q")
+ (const_string "neon_fp_vmla_qqq_scalar")
+ (eq_attr "type" "neon_fp_recps_s, neon_fp_rsqrts_s")
+ (const_string "neon_fp_vrecps_vrsqrts_ddd")
+ (eq_attr "type" "neon_fp_recps_s_q, neon_fp_rsqrts_s_q")
+ (const_string "neon_fp_vrecps_vrsqrts_qqq")
+ (eq_attr "type" "neon_move_narrow_q, neon_dup,\
+ neon_dup_q, neon_permute, neon_zip,\
+ neon_ext, neon_rev, neon_rev_q")
+ (const_string "neon_bp_simple")
+ (eq_attr "type" "neon_permute_q, neon_ext_q, neon_tbl1, neon_tbl2")
+ (const_string "neon_bp_2cycle")
+ (eq_attr "type" "neon_zip_q, neon_tbl3, neon_tbl4")
+ (const_string "neon_bp_3cycle")
+ (eq_attr "type" "neon_ldr")
+ (const_string "neon_ldr")
+ (eq_attr "type" "neon_str")
+ (const_string "neon_str")
+ (eq_attr "type" "neon_load1_1reg, neon_load1_1reg_q,\
+ neon_load1_2reg, neon_load1_2reg_q,\
+ neon_load2_2reg, neon_load2_2reg_q")
+ (const_string "neon_vld1_1_2_regs")
+ (eq_attr "type" "neon_load1_3reg, neon_load1_3reg_q,\
+ neon_load1_4reg, neon_load1_4reg_q")
+ (const_string "neon_vld1_3_4_regs")
+ (eq_attr "type" "neon_load1_all_lanes, neon_load1_all_lanes_q,\
+ neon_load2_all_lanes, neon_load2_all_lanes_q")
+ (const_string
+ "neon_vld2_2_regs_vld1_vld2_all_lanes")
+ (eq_attr "type" "neon_load3_all_lanes, neon_load3_all_lanes_q,\
+ neon_load4_all_lanes, neon_load4_all_lanes_q,\
+ neon_load2_4reg, neon_load2_4reg_q")
+ (const_string "neon_vld2_4_regs")
+ (eq_attr "type" "neon_load3_3reg, neon_load3_3reg_q,\
+ neon_load4_4reg, neon_load4_4reg_q")
+ (const_string "neon_vld3_vld4")
+ (eq_attr "type" "neon_load1_one_lane, neon_load1_one_lane_q,\
+ neon_load2_one_lane, neon_load2_one_lane_q")
+ (const_string "neon_vld1_vld2_lane")
+ (eq_attr "type" "neon_load3_one_lane, neon_load3_one_lane_q,\
+ neon_load4_one_lane, neon_load4_one_lane_q")
+ (const_string "neon_vld3_vld4_lane")
+ (eq_attr "type" "neon_store1_1reg, neon_store1_1reg_q,\
+ neon_store1_2reg, neon_store1_2reg_q,\
+ neon_store2_2reg, neon_store2_2reg_q")
+ (const_string "neon_vst1_1_2_regs_vst2_2_regs")
+ (eq_attr "type" "neon_store1_3reg, neon_store1_3reg_q,\
+ neon_store1_4reg, neon_store1_4reg_q")
+ (const_string "neon_vst1_3_4_regs")
+ (eq_attr "type" "neon_store2_4reg, neon_store2_4reg_q,\
+ neon_store3_3reg, neon_store3_3reg_q,\
+ neon_store4_4reg, neon_store4_4reg_q")
+ (const_string "neon_vst2_4_regs_vst3_vst4")
+ (eq_attr "type" "neon_store1_one_lane, neon_store1_one_lane_q,\
+ neon_store2_one_lane, neon_store2_one_lane_q")
+ (const_string "neon_vst1_vst2_lane")
+ (eq_attr "type" "neon_store3_one_lane, neon_store3_one_lane_q,\
+ neon_store4_one_lane, neon_store4_one_lane_q")
+ (const_string "neon_vst3_vst4_lane")
+ (eq_attr "type" "neon_from_gp")
+ (const_string "neon_mcr")
+ (eq_attr "type" "neon_from_gp_q")
+ (const_string "neon_mcr_2_mcrr")
+ (eq_attr "type" "neon_to_gp")
+ (const_string "neon_mrc")
+ (eq_attr "type" "neon_to_gp_q")
+ (const_string "neon_mrrc")]
+ (const_string "unknown")))
(define_automaton "cortex_a9_neon")
@@ -105,74 +319,71 @@
cortex_a9_neon_issue_fadd,\
cortex_a9_neon_issue_fadd")
-
;; NEON -> core transfers.
(define_insn_reservation "ca9_neon_mrc" 1
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mrc"))
+ (eq_attr "cortex_a9_neon_type" "neon_mrc"))
"ca9_issue_vfp_neon + cortex_a9_neon_mcr")
(define_insn_reservation "ca9_neon_mrrc" 1
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mrrc"))
+ (eq_attr "cortex_a9_neon_type" "neon_mrrc"))
"ca9_issue_vfp_neon + cortex_a9_neon_mcr")
-;; The remainder of this file is auto-generated by neon-schedgen.
-
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N3.
(define_insn_reservation "cortex_a9_neon_int_1" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_int_1"))
+ (eq_attr "cortex_a9_neon_type" "neon_int_1"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their (D|Q)m operands at N1,
;; their (D|Q)n operands at N2, and produce a result at N3.
(define_insn_reservation "cortex_a9_neon_int_2" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_int_2"))
+ (eq_attr "cortex_a9_neon_type" "neon_int_2"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N3.
(define_insn_reservation "cortex_a9_neon_int_3" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_int_3"))
- "cortex_a9_neon_dp")
+ (eq_attr "cortex_a9_neon_type" "neon_int_3"))
+ "cortex_a9_neon_dp")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N4.
(define_insn_reservation "cortex_a9_neon_int_4" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_int_4"))
+ (eq_attr "cortex_a9_neon_type" "neon_int_4"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their (D|Q)m operands at N1,
;; their (D|Q)n operands at N2, and produce a result at N4.
(define_insn_reservation "cortex_a9_neon_int_5" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_int_5"))
+ (eq_attr "cortex_a9_neon_type" "neon_int_5"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N4.
(define_insn_reservation "cortex_a9_neon_vqneg_vqabs" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vqneg_vqabs"))
- "cortex_a9_neon_dp")
+ (eq_attr "cortex_a9_neon_type" "neon_vqneg_vqabs"))
+ "cortex_a9_neon_dp")
;; Instructions using this reservation produce a result at N3.
(define_insn_reservation "cortex_a9_neon_vmov" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vmov"))
- "cortex_a9_neon_dp")
+ (eq_attr "cortex_a9_neon_type" "neon_vmov"))
+ "cortex_a9_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, their (D|Q)d operands at N3, and
;; produce a result at N6.
(define_insn_reservation "cortex_a9_neon_vaba" 6
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vaba"))
+ (eq_attr "cortex_a9_neon_type" "neon_vaba"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -180,35 +391,35 @@
;; produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a9_neon_vaba_qqq" 7
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vaba_qqq"))
+ (eq_attr "cortex_a9_neon_type" "neon_vaba_qqq"))
"cortex_a9_neon_dp_2")
-;; Instructions using this reservation read their (D|Q)m operands at N1,
-;; their (D|Q)d operands at N3, and produce a result at N6.
-(define_insn_reservation "cortex_a9_neon_vsma" 6
+;; Instructions using this reservation read their source operands at N2, and
+;; produce a result at N3 on cycle 2.
+(define_insn_reservation "cortex_a9_neon_bit_ops_q" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vsma"))
- "cortex_a9_neon_dp")
+ (eq_attr "cortex_a9_neon_type" "neon_bit_ops_q"))
+ "cortex_a9_neon_dp_2")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N6.
(define_insn_reservation "cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long" 6
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long"))
+ (eq_attr "cortex_a9_neon_type" "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a9_neon_mul_qqq_8_16_32_ddd_32" 7
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mul_qqq_8_16_32_ddd_32"))
+ (eq_attr "cortex_a9_neon_type" "neon_mul_qqq_8_16_32_ddd_32"))
"cortex_a9_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a9_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar" 7
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar"))
+ (eq_attr "cortex_a9_neon_type" "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar"))
"cortex_a9_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -216,7 +427,7 @@
;; produce a result at N6.
(define_insn_reservation "cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long" 6
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long"))
+ (eq_attr "cortex_a9_neon_type" "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -224,7 +435,7 @@
;; produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a9_neon_mla_qqq_8_16" 7
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mla_qqq_8_16"))
+ (eq_attr "cortex_a9_neon_type" "neon_mla_qqq_8_16"))
"cortex_a9_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -232,7 +443,7 @@
;; produce a result at N6 on cycle 2.
(define_insn_reservation "cortex_a9_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long" 7
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))
+ (eq_attr "cortex_a9_neon_type" "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))
"cortex_a9_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -240,21 +451,21 @@
;; produce a result at N6 on cycle 4.
(define_insn_reservation "cortex_a9_neon_mla_qqq_32_qqd_32_scalar" 9
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mla_qqq_32_qqd_32_scalar"))
+ (eq_attr "cortex_a9_neon_type" "neon_mla_qqq_32_qqd_32_scalar"))
"cortex_a9_neon_dp_4")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N6.
(define_insn_reservation "cortex_a9_neon_mul_ddd_16_scalar_32_16_long_scalar" 6
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mul_ddd_16_scalar_32_16_long_scalar"))
+ (eq_attr "cortex_a9_neon_type" "neon_mul_ddd_16_scalar_32_16_long_scalar"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N6 on cycle 4.
(define_insn_reservation "cortex_a9_neon_mul_qqd_32_scalar" 9
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mul_qqd_32_scalar"))
+ (eq_attr "cortex_a9_neon_type" "neon_mul_qqd_32_scalar"))
"cortex_a9_neon_dp_4")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -262,84 +473,77 @@
;; produce a result at N6.
(define_insn_reservation "cortex_a9_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar" 6
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar"))
+ (eq_attr "cortex_a9_neon_type" "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N3.
(define_insn_reservation "cortex_a9_neon_shift_1" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_shift_1"))
+ (eq_attr "cortex_a9_neon_type" "neon_shift_1"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N4.
(define_insn_reservation "cortex_a9_neon_shift_2" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_shift_2"))
+ (eq_attr "cortex_a9_neon_type" "neon_shift_2"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N3 on cycle 2.
(define_insn_reservation "cortex_a9_neon_shift_3" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_shift_3"))
+ (eq_attr "cortex_a9_neon_type" "neon_shift_3"))
"cortex_a9_neon_dp_2")
;; Instructions using this reservation read their source operands at N1, and
-;; produce a result at N1.
-(define_insn_reservation "cortex_a9_neon_vshl_ddd" 1
- (and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vshl_ddd"))
- "cortex_a9_neon_dp")
-
-;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N4 on cycle 2.
(define_insn_reservation "cortex_a9_neon_vqshl_vrshl_vqrshl_qqq" 5
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vqshl_vrshl_vqrshl_qqq"))
+ (eq_attr "cortex_a9_neon_type" "neon_vqshl_vrshl_vqrshl_qqq"))
"cortex_a9_neon_dp_2")
;; Instructions using this reservation read their (D|Q)m operands at N1,
;; their (D|Q)d operands at N3, and produce a result at N6.
(define_insn_reservation "cortex_a9_neon_vsra_vrsra" 6
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vsra_vrsra"))
+ (eq_attr "cortex_a9_neon_type" "neon_vsra_vrsra"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N5.
(define_insn_reservation "cortex_a9_neon_fp_vadd_ddd_vabs_dd" 5
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vadd_ddd_vabs_dd"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vadd_ddd_vabs_dd"))
"cortex_a9_neon_fadd")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N5 on cycle 2.
(define_insn_reservation "cortex_a9_neon_fp_vadd_qqq_vabs_qq" 6
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vadd_qqq_vabs_qq"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vadd_qqq_vabs_qq"))
"cortex_a9_neon_fadd_2")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N5.
(define_insn_reservation "cortex_a9_neon_fp_vsum" 5
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vsum"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vsum"))
"cortex_a9_neon_fadd")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N5.
(define_insn_reservation "cortex_a9_neon_fp_vmul_ddd" 5
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vmul_ddd"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vmul_ddd"))
"cortex_a9_neon_dp")
;; Instructions using this reservation read their (D|Q)n operands at N2,
;; their (D|Q)m operands at N1, and produce a result at N5 on cycle 2.
(define_insn_reservation "cortex_a9_neon_fp_vmul_qqd" 6
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vmul_qqd"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vmul_qqd"))
"cortex_a9_neon_dp_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -347,7 +551,7 @@
;; produce a result at N9.
(define_insn_reservation "cortex_a9_neon_fp_vmla_ddd" 9
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vmla_ddd"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vmla_ddd"))
"cortex_a9_neon_fmul_then_fadd")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -355,7 +559,7 @@
;; produce a result at N9 on cycle 2.
(define_insn_reservation "cortex_a9_neon_fp_vmla_qqq" 10
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vmla_qqq"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vmla_qqq"))
"cortex_a9_neon_fmul_then_fadd_2")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -363,7 +567,7 @@
;; produce a result at N9.
(define_insn_reservation "cortex_a9_neon_fp_vmla_ddd_scalar" 9
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vmla_ddd_scalar"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vmla_ddd_scalar"))
"cortex_a9_neon_fmul_then_fadd")
;; Instructions using this reservation read their (D|Q)n operands at N2,
@@ -371,152 +575,146 @@
;; produce a result at N9 on cycle 2.
(define_insn_reservation "cortex_a9_neon_fp_vmla_qqq_scalar" 10
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vmla_qqq_scalar"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vmla_qqq_scalar"))
"cortex_a9_neon_fmul_then_fadd_2")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N9.
(define_insn_reservation "cortex_a9_neon_fp_vrecps_vrsqrts_ddd" 9
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vrecps_vrsqrts_ddd"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vrecps_vrsqrts_ddd"))
"cortex_a9_neon_fmul_then_fadd")
;; Instructions using this reservation read their source operands at N2, and
;; produce a result at N9 on cycle 2.
(define_insn_reservation "cortex_a9_neon_fp_vrecps_vrsqrts_qqq" 10
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_fp_vrecps_vrsqrts_qqq"))
+ (eq_attr "cortex_a9_neon_type" "neon_fp_vrecps_vrsqrts_qqq"))
"cortex_a9_neon_fmul_then_fadd_2")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2.
(define_insn_reservation "cortex_a9_neon_bp_simple" 2
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_bp_simple"))
+ (eq_attr "cortex_a9_neon_type" "neon_bp_simple"))
"cortex_a9_neon_perm")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2 on cycle 2.
(define_insn_reservation "cortex_a9_neon_bp_2cycle" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_bp_2cycle"))
+ (eq_attr "cortex_a9_neon_type" "neon_bp_2cycle"))
"cortex_a9_neon_perm_2")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2 on cycle 3.
(define_insn_reservation "cortex_a9_neon_bp_3cycle" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_bp_3cycle"))
+ (eq_attr "cortex_a9_neon_type" "neon_bp_3cycle"))
"cortex_a9_neon_perm_3")
;; Instructions using this reservation produce a result at N1.
(define_insn_reservation "cortex_a9_neon_ldr" 1
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_ldr"))
+ (eq_attr "cortex_a9_neon_type" "neon_ldr"))
"cortex_a9_neon_ls")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a9_neon_str" 0
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_str"))
+ (eq_attr "cortex_a9_neon_type" "neon_str"))
"cortex_a9_neon_ls")
;; Instructions using this reservation produce a result at N1 on cycle 2.
(define_insn_reservation "cortex_a9_neon_vld1_1_2_regs" 2
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vld1_1_2_regs"))
+ (eq_attr "cortex_a9_neon_type" "neon_vld1_1_2_regs"))
"cortex_a9_neon_ls_2")
;; Instructions using this reservation produce a result at N1 on cycle 3.
(define_insn_reservation "cortex_a9_neon_vld1_3_4_regs" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vld1_3_4_regs"))
+ (eq_attr "cortex_a9_neon_type" "neon_vld1_3_4_regs"))
"cortex_a9_neon_ls_3")
;; Instructions using this reservation produce a result at N2 on cycle 2.
(define_insn_reservation "cortex_a9_neon_vld2_2_regs_vld1_vld2_all_lanes" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vld2_2_regs_vld1_vld2_all_lanes"))
+ (eq_attr "cortex_a9_neon_type" "neon_vld2_2_regs_vld1_vld2_all_lanes"))
"cortex_a9_neon_ls_2")
;; Instructions using this reservation produce a result at N2 on cycle 3.
(define_insn_reservation "cortex_a9_neon_vld2_4_regs" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vld2_4_regs"))
+ (eq_attr "cortex_a9_neon_type" "neon_vld2_4_regs"))
"cortex_a9_neon_ls_3")
;; Instructions using this reservation produce a result at N2 on cycle 4.
(define_insn_reservation "cortex_a9_neon_vld3_vld4" 5
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vld3_vld4"))
+ (eq_attr "cortex_a9_neon_type" "neon_vld3_vld4"))
"cortex_a9_neon_ls_4")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a9_neon_vst1_1_2_regs_vst2_2_regs" 0
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vst1_1_2_regs_vst2_2_regs"))
+ (eq_attr "cortex_a9_neon_type" "neon_vst1_1_2_regs_vst2_2_regs"))
"cortex_a9_neon_ls_2")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a9_neon_vst1_3_4_regs" 0
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vst1_3_4_regs"))
+ (eq_attr "cortex_a9_neon_type" "neon_vst1_3_4_regs"))
"cortex_a9_neon_ls_3")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a9_neon_vst2_4_regs_vst3_vst4" 0
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vst2_4_regs_vst3_vst4"))
- "cortex_a9_neon_ls_4")
-
-;; Instructions using this reservation read their source operands at N1.
-(define_insn_reservation "cortex_a9_neon_vst3_vst4" 0
- (and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vst3_vst4"))
+ (eq_attr "cortex_a9_neon_type" "neon_vst2_4_regs_vst3_vst4"))
"cortex_a9_neon_ls_4")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2 on cycle 3.
(define_insn_reservation "cortex_a9_neon_vld1_vld2_lane" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vld1_vld2_lane"))
+ (eq_attr "cortex_a9_neon_type" "neon_vld1_vld2_lane"))
"cortex_a9_neon_ls_3")
;; Instructions using this reservation read their source operands at N1, and
;; produce a result at N2 on cycle 5.
(define_insn_reservation "cortex_a9_neon_vld3_vld4_lane" 6
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vld3_vld4_lane"))
+ (eq_attr "cortex_a9_neon_type" "neon_vld3_vld4_lane"))
"cortex_a9_neon_ls_5")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a9_neon_vst1_vst2_lane" 0
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vst1_vst2_lane"))
+ (eq_attr "cortex_a9_neon_type" "neon_vst1_vst2_lane"))
"cortex_a9_neon_ls_2")
;; Instructions using this reservation read their source operands at N1.
(define_insn_reservation "cortex_a9_neon_vst3_vst4_lane" 0
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vst3_vst4_lane"))
+ (eq_attr "cortex_a9_neon_type" "neon_vst3_vst4_lane"))
"cortex_a9_neon_ls_3")
;; Instructions using this reservation produce a result at N2 on cycle 2.
(define_insn_reservation "cortex_a9_neon_vld3_vld4_all_lanes" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_vld3_vld4_all_lanes"))
+ (eq_attr "cortex_a9_neon_type" "neon_vld3_vld4_all_lanes"))
"cortex_a9_neon_ls_3")
;; Instructions using this reservation produce a result at N2.
(define_insn_reservation "cortex_a9_neon_mcr" 2
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mcr"))
+ (eq_attr "cortex_a9_neon_type" "neon_mcr"))
"cortex_a9_neon_perm")
;; Instructions using this reservation produce a result at N2.
(define_insn_reservation "cortex_a9_neon_mcr_2_mcrr" 2
(and (eq_attr "tune" "cortexa9")
- (eq_attr "neon_type" "neon_mcr_2_mcrr"))
+ (eq_attr "cortex_a9_neon_type" "neon_mcr_2_mcrr"))
"cortex_a9_neon_perm_2")
;; Exceptions to the default latencies.
@@ -524,6 +722,7 @@
(define_bypass 1 "cortex_a9_neon_mcr_2_mcrr"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -538,6 +737,7 @@
(define_bypass 1 "cortex_a9_neon_mcr"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -552,6 +752,7 @@
(define_bypass 2 "cortex_a9_neon_vld3_vld4_all_lanes"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -566,6 +767,7 @@
(define_bypass 5 "cortex_a9_neon_vld3_vld4_lane"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -580,6 +782,7 @@
(define_bypass 3 "cortex_a9_neon_vld1_vld2_lane"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -594,6 +797,7 @@
(define_bypass 4 "cortex_a9_neon_vld3_vld4"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -608,6 +812,7 @@
(define_bypass 3 "cortex_a9_neon_vld2_4_regs"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -622,6 +827,7 @@
(define_bypass 2 "cortex_a9_neon_vld2_2_regs_vld1_vld2_all_lanes"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -636,6 +842,7 @@
(define_bypass 2 "cortex_a9_neon_vld1_3_4_regs"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -650,6 +857,7 @@
(define_bypass 1 "cortex_a9_neon_vld1_1_2_regs"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -664,6 +872,7 @@
(define_bypass 0 "cortex_a9_neon_ldr"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -678,6 +887,7 @@
(define_bypass 3 "cortex_a9_neon_bp_3cycle"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -692,6 +902,7 @@
(define_bypass 2 "cortex_a9_neon_bp_2cycle"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -706,6 +917,7 @@
(define_bypass 1 "cortex_a9_neon_bp_simple"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -720,6 +932,7 @@
(define_bypass 9 "cortex_a9_neon_fp_vrecps_vrsqrts_qqq"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -734,6 +947,7 @@
(define_bypass 8 "cortex_a9_neon_fp_vrecps_vrsqrts_ddd"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -748,6 +962,7 @@
(define_bypass 9 "cortex_a9_neon_fp_vmla_qqq_scalar"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -762,6 +977,7 @@
(define_bypass 8 "cortex_a9_neon_fp_vmla_ddd_scalar"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -776,6 +992,7 @@
(define_bypass 9 "cortex_a9_neon_fp_vmla_qqq"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -790,6 +1007,7 @@
(define_bypass 8 "cortex_a9_neon_fp_vmla_ddd"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -804,6 +1022,7 @@
(define_bypass 5 "cortex_a9_neon_fp_vmul_qqd"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -818,6 +1037,7 @@
(define_bypass 4 "cortex_a9_neon_fp_vmul_ddd"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -832,6 +1052,7 @@
(define_bypass 4 "cortex_a9_neon_fp_vsum"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -846,6 +1067,7 @@
(define_bypass 5 "cortex_a9_neon_fp_vadd_qqq_vabs_qq"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -860,6 +1082,7 @@
(define_bypass 4 "cortex_a9_neon_fp_vadd_ddd_vabs_dd"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -874,6 +1097,7 @@
(define_bypass 5 "cortex_a9_neon_vsra_vrsra"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -888,20 +1112,7 @@
(define_bypass 4 "cortex_a9_neon_vqshl_vrshl_vqrshl_qqq"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
- cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
- cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
- cortex_a9_neon_mla_qqq_8_16,\
- cortex_a9_neon_fp_vadd_ddd_vabs_dd,\
- cortex_a9_neon_fp_vadd_qqq_vabs_qq,\
- cortex_a9_neon_fp_vmla_ddd,\
- cortex_a9_neon_fp_vmla_qqq,\
- cortex_a9_neon_fp_vrecps_vrsqrts_ddd,\
- cortex_a9_neon_fp_vrecps_vrsqrts_qqq")
-
-(define_bypass 0 "cortex_a9_neon_vshl_ddd"
- "cortex_a9_neon_int_1,\
- cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -916,6 +1127,7 @@
(define_bypass 3 "cortex_a9_neon_shift_3"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -930,6 +1142,7 @@
(define_bypass 3 "cortex_a9_neon_shift_2"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -944,6 +1157,7 @@
(define_bypass 2 "cortex_a9_neon_shift_1"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -958,6 +1172,7 @@
(define_bypass 5 "cortex_a9_neon_mla_ddd_16_scalar_qdd_32_16_long_scalar"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -972,6 +1187,7 @@
(define_bypass 8 "cortex_a9_neon_mul_qqd_32_scalar"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -986,6 +1202,7 @@
(define_bypass 5 "cortex_a9_neon_mul_ddd_16_scalar_32_16_long_scalar"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1000,6 +1217,7 @@
(define_bypass 8 "cortex_a9_neon_mla_qqq_32_qqd_32_scalar"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1014,6 +1232,7 @@
(define_bypass 6 "cortex_a9_neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1028,6 +1247,7 @@
(define_bypass 6 "cortex_a9_neon_mla_qqq_8_16"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1042,6 +1262,7 @@
(define_bypass 5 "cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1056,6 +1277,7 @@
(define_bypass 6 "cortex_a9_neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1070,6 +1292,7 @@
(define_bypass 6 "cortex_a9_neon_mul_qqq_8_16_32_ddd_32"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1084,6 +1307,7 @@
(define_bypass 5 "cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1095,9 +1319,10 @@
cortex_a9_neon_fp_vrecps_vrsqrts_ddd,\
cortex_a9_neon_fp_vrecps_vrsqrts_qqq")
-(define_bypass 5 "cortex_a9_neon_vsma"
+(define_bypass 6 "cortex_a9_neon_vaba_qqq"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1109,9 +1334,10 @@
cortex_a9_neon_fp_vrecps_vrsqrts_ddd,\
cortex_a9_neon_fp_vrecps_vrsqrts_qqq")
-(define_bypass 6 "cortex_a9_neon_vaba_qqq"
+(define_bypass 5 "cortex_a9_neon_vaba"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1123,9 +1349,10 @@
cortex_a9_neon_fp_vrecps_vrsqrts_ddd,\
cortex_a9_neon_fp_vrecps_vrsqrts_qqq")
-(define_bypass 5 "cortex_a9_neon_vaba"
+(define_bypass 2 "cortex_a9_neon_vmov"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1137,9 +1364,10 @@
cortex_a9_neon_fp_vrecps_vrsqrts_ddd,\
cortex_a9_neon_fp_vrecps_vrsqrts_qqq")
-(define_bypass 2 "cortex_a9_neon_vmov"
+(define_bypass 3 "cortex_a9_neon_bit_ops_q"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1154,6 +1382,7 @@
(define_bypass 3 "cortex_a9_neon_vqneg_vqabs"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1168,6 +1397,7 @@
(define_bypass 3 "cortex_a9_neon_int_5"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1182,6 +1412,7 @@
(define_bypass 3 "cortex_a9_neon_int_4"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1196,6 +1427,7 @@
(define_bypass 2 "cortex_a9_neon_int_3"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1210,6 +1442,7 @@
(define_bypass 2 "cortex_a9_neon_int_2"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
@@ -1224,6 +1457,7 @@
(define_bypass 2 "cortex_a9_neon_int_1"
"cortex_a9_neon_int_1,\
cortex_a9_neon_int_4,\
+ cortex_a9_neon_bit_ops_q,\
cortex_a9_neon_mul_ddd_8_16_qdd_16_8_long_32_16_long,\
cortex_a9_neon_mul_qqq_8_16_32_ddd_32,\
cortex_a9_neon_mla_ddd_8_16_qdd_16_8_long_32_16_long,\
diff --git a/gcc/config/arm/cortex-a9.md b/gcc/config/arm/cortex-a9.md
index 11dc0b32c38..7c62d8489ae 100644
--- a/gcc/config/arm/cortex-a9.md
+++ b/gcc/config/arm/cortex-a9.md
@@ -80,17 +80,24 @@ cortex_a9_p1_e2 + cortex_a9_p0_e1 + cortex_a9_p1_e1")
;; which can go down E2 without any problem.
(define_insn_reservation "cortex_a9_dp" 2
(and (eq_attr "tune" "cortexa9")
- (and (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
- mov_imm,mov_reg,mvn_imm,mvn_reg,\
- mov_shift_reg,mov_shift")
- (eq_attr "neon_type" "none")))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
+ mov_imm,mov_reg,mvn_imm,mvn_reg,\
+ mov_shift_reg,mov_shift,\
+ mrs,multiple,no_insn"))
"cortex_a9_p0_default|cortex_a9_p1_default")
;; An instruction using the shifter will go down E1.
(define_insn_reservation "cortex_a9_dp_shift" 3
(and (eq_attr "tune" "cortexa9")
- (eq_attr "type" "arlo_shift_reg,extend,arlo_shift,\
- mvn_shift,mvn_shift_reg"))
+ (eq_attr "type" "alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
+ extend,mvn_shift,mvn_shift_reg"))
"cortex_a9_p0_shift | cortex_a9_p1_shift")
;; Loads have a latency of 4 cycles.
@@ -200,7 +207,7 @@ cortex_a9_store3_4, cortex_a9_store1_2, cortex_a9_load3_4")
;; Pipelining for VFP instructions.
;; Issue happens either along load store unit or the VFP / Neon unit.
;; Pipeline Instruction Classification.
-;; FPS - fcpys, ffariths, ffarithd,r_2_f,f_2_r
+;; FPS - fmov, ffariths, ffarithd,f_mcr,f_mcrr,f_mrc,f_mrrc
;; FP_ADD - fadds, faddd, fcmps (1)
;; FPMUL - fmul{s,d}, fmac{s,d}, ffma{s,d}
;; FPDIV - fdiv{s,d}
@@ -213,7 +220,8 @@ cortex_a9_store3_4, cortex_a9_store1_2, cortex_a9_load3_4")
;; fmrs, fmrrd, fmstat and fmrx - The data is available after 1 cycle.
(define_insn_reservation "cortex_a9_fps" 2
(and (eq_attr "tune" "cortexa9")
- (eq_attr "type" "fcpys, fconsts, fconstd, ffariths, ffarithd, r_2_f, f_2_r, f_flag"))
+ (eq_attr "type" "fmov, fconsts, fconstd, ffariths, ffarithd,\
+ f_mcr, f_mcrr, f_mrc, f_mrrc, f_flag"))
"ca9_issue_vfp_neon + ca9fps")
(define_bypass 1
@@ -225,7 +233,7 @@ cortex_a9_store3_4, cortex_a9_store1_2, cortex_a9_load3_4")
(define_insn_reservation "cortex_a9_fadd" 4
(and (eq_attr "tune" "cortexa9")
- (eq_attr "type" "fadds, faddd, f_cvt"))
+ (eq_attr "type" "fadds, faddd, f_cvt, f_cvtf2i, f_cvti2f"))
"ca9fp_add")
(define_insn_reservation "cortex_a9_fcmp" 1
@@ -263,12 +271,12 @@ cortex_a9_store3_4, cortex_a9_store1_2, cortex_a9_load3_4")
;; Division pipeline description.
(define_insn_reservation "cortex_a9_fdivs" 15
(and (eq_attr "tune" "cortexa9")
- (eq_attr "type" "fdivs"))
+ (eq_attr "type" "fdivs, fsqrts"))
"ca9fp_ds1 + ca9_issue_vfp_neon, nothing*14")
(define_insn_reservation "cortex_a9_fdivd" 25
(and (eq_attr "tune" "cortexa9")
- (eq_attr "type" "fdivd"))
+ (eq_attr "type" "fdivd, fsqrtd"))
"ca9fp_ds1 + ca9_issue_vfp_neon, nothing*24")
;; Include Neon pipeline description
diff --git a/gcc/config/arm/cortex-m4-fpu.md b/gcc/config/arm/cortex-m4-fpu.md
index a1945bed3a3..f31118b5f65 100644
--- a/gcc/config/arm/cortex-m4-fpu.md
+++ b/gcc/config/arm/cortex-m4-fpu.md
@@ -26,17 +26,17 @@
;; Integer instructions following VDIV or VSQRT complete out-of-order.
(define_insn_reservation "cortex_m4_fdivs" 15
(and (eq_attr "tune" "cortexm4")
- (eq_attr "type" "fdivs"))
+ (eq_attr "type" "fdivs, fsqrts"))
"cortex_m4_ex_v,cortex_m4_v*13")
(define_insn_reservation "cortex_m4_vmov_1" 1
(and (eq_attr "tune" "cortexm4")
- (eq_attr "type" "fcpys,fconsts"))
+ (eq_attr "type" "fmov,fconsts"))
"cortex_m4_ex_v")
(define_insn_reservation "cortex_m4_vmov_2" 2
(and (eq_attr "tune" "cortexm4")
- (eq_attr "type" "f_2_r,r_2_f"))
+ (eq_attr "type" "f_mrc,f_mrrc,f_mcr,f_mcrr"))
"cortex_m4_ex_v*2")
(define_insn_reservation "cortex_m4_fmuls" 2
@@ -71,7 +71,7 @@
(define_insn_reservation "cortex_m4_f_cvt" 2
(and (eq_attr "tune" "cortexm4")
- (eq_attr "type" "f_cvt"))
+ (eq_attr "type" "f_cvt,f_cvtf2i,f_cvti2f"))
"cortex_m4_ex_v")
(define_insn_reservation "cortex_m4_f_load" 2
diff --git a/gcc/config/arm/cortex-m4.md b/gcc/config/arm/cortex-m4.md
index 5b7ce50df94..44df11fee30 100644
--- a/gcc/config/arm/cortex-m4.md
+++ b/gcc/config/arm/cortex-m4.md
@@ -31,10 +31,18 @@
;; ALU and multiply is one cycle.
(define_insn_reservation "cortex_m4_alu" 1
(and (eq_attr "tune" "cortexm4")
- (ior (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,extend,\
- arlo_shift,arlo_shift_reg,\
+ (ior (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,extend,\
+ alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
mov_imm,mov_reg,mov_shift,mov_shift_reg,\
- mvn_imm,mvn_reg,mvn_shift,mvn_shift_reg")
+ mvn_imm,mvn_reg,mvn_shift,mvn_shift_reg,\
+ mrs,multiple,no_insn")
(ior (eq_attr "mul32" "yes")
(eq_attr "mul64" "yes"))))
"cortex_m4_ex")
diff --git a/gcc/config/arm/cortex-r4.md b/gcc/config/arm/cortex-r4.md
index 597774dbd89..7a3ceeb15d7 100644
--- a/gcc/config/arm/cortex-r4.md
+++ b/gcc/config/arm/cortex-r4.md
@@ -78,7 +78,11 @@
;; for the purposes of the dual-issue constraints above.
(define_insn_reservation "cortex_r4_alu" 2
(and (eq_attr "tune_cortexr4" "yes")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,mvn_imm,mvn_reg"))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,mvn_imm,mvn_reg"))
"cortex_r4_alu")
(define_insn_reservation "cortex_r4_mov" 2
@@ -88,12 +92,17 @@
(define_insn_reservation "cortex_r4_alu_shift" 2
(and (eq_attr "tune_cortexr4" "yes")
- (eq_attr "type" "extend,arlo_shift,mov_shift,mvn_shift"))
+ (eq_attr "type" "alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ extend,mov_shift,mvn_shift"))
"cortex_r4_alu")
(define_insn_reservation "cortex_r4_alu_shift_reg" 2
(and (eq_attr "tune_cortexr4" "yes")
- (eq_attr "type" "arlo_shift_reg,mov_shift_reg,mvn_shift_reg"))
+ (eq_attr "type" "alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
+ mov_shift_reg,mvn_shift_reg,\
+ mrs,multiple,no_insn"))
"cortex_r4_alu_shift_reg")
;; An ALU instruction followed by an ALU instruction with no early dep.
diff --git a/gcc/config/arm/cortex-r4f.md b/gcc/config/arm/cortex-r4f.md
index 0c0bae0cd74..1bc4249d4d1 100644
--- a/gcc/config/arm/cortex-r4f.md
+++ b/gcc/config/arm/cortex-r4f.md
@@ -48,7 +48,7 @@
(define_insn_reservation "cortex_r4_fcpys" 2
(and (eq_attr "tune_cortexr4" "yes")
- (eq_attr "type" "fcpys"))
+ (eq_attr "type" "fmov"))
"cortex_r4_issue_ab")
(define_insn_reservation "cortex_r4_ffariths" 2
@@ -68,7 +68,7 @@
(define_insn_reservation "cortex_r4_fdivs" 17
(and (eq_attr "tune_cortexr4" "yes")
- (eq_attr "type" "fdivs"))
+ (eq_attr "type" "fdivs, fsqrts"))
"cortex_r4_issue_ab+cortex_r4_v1,cortex_r4_issue_a+cortex_r4_v1")
(define_insn_reservation "cortex_r4_floads" 2
@@ -83,12 +83,12 @@
(define_insn_reservation "cortex_r4_mcr" 2
(and (eq_attr "tune_cortexr4" "yes")
- (eq_attr "type" "r_2_f"))
+ (eq_attr "type" "f_mcr,f_mcrr"))
"cortex_r4_issue_ab")
(define_insn_reservation "cortex_r4_mrc" 3
(and (eq_attr "tune_cortexr4" "yes")
- (eq_attr "type" "f_2_r"))
+ (eq_attr "type" "f_mrc,f_mrrc"))
"cortex_r4_issue_ab")
;; Bypasses for normal (not early) regs.
@@ -131,7 +131,7 @@
;; out of order. Chances are this is not a pipelined operation.
(define_insn_reservation "cortex_r4_fdivd" 97
(and (eq_attr "tune_cortexr4" "yes")
- (eq_attr "type" "fdivd"))
+ (eq_attr "type" "fdivd, fsqrtd"))
"cortex_r4_single_issue*3")
(define_insn_reservation "cortex_r4_ffarithd" 2
@@ -146,7 +146,7 @@
(define_insn_reservation "cortex_r4_f_cvt" 8
(and (eq_attr "tune_cortexr4" "yes")
- (eq_attr "type" "f_cvt"))
+ (eq_attr "type" "f_cvt,f_cvtf2i,f_cvti2f"))
"cortex_r4_single_issue*3")
(define_insn_reservation "cortex_r4_f_memd" 8
diff --git a/gcc/config/arm/crypto.md b/gcc/config/arm/crypto.md
index a77db3f122d..9f249803d22 100644
--- a/gcc/config/arm/crypto.md
+++ b/gcc/config/arm/crypto.md
@@ -25,7 +25,7 @@
CRYPTO_UNARY))]
"TARGET_CRYPTO"
"<crypto_pattern>.<crypto_size_sfx>\\t%q0, %q1"
- [(set_attr "neon_type" "<crypto_type>")]
+ [(set_attr "type" "<crypto_type>")]
)
(define_insn "crypto_<crypto_pattern>"
@@ -35,7 +35,7 @@
CRYPTO_BINARY))]
"TARGET_CRYPTO"
"<crypto_pattern>.<crypto_size_sfx>\\t%q0, %q2"
- [(set_attr "neon_type" "<crypto_type>")]
+ [(set_attr "type" "<crypto_type>")]
)
(define_insn "crypto_<crypto_pattern>"
@@ -46,7 +46,7 @@
CRYPTO_TERNARY))]
"TARGET_CRYPTO"
"<crypto_pattern>.<crypto_size_sfx>\\t%q0, %q2, %q3"
- [(set_attr "neon_type" "<crypto_type>")]
+ [(set_attr "type" "<crypto_type>")]
)
(define_insn "crypto_sha1h"
@@ -58,7 +58,7 @@
UNSPEC_SHA1H)))]
"TARGET_CRYPTO"
"sha1h.32\\t%q0, %q1"
- [(set_attr "neon_type" "neon_crypto_sha1_fast")]
+ [(set_attr "type" "crypto_sha1_fast")]
)
(define_insn "crypto_vmullp64"
@@ -68,7 +68,7 @@
UNSPEC_VMULLP64))]
"TARGET_CRYPTO"
"vmull.p64\\t%q0, %P1, %P2"
- [(set_attr "neon_type" "neon_mul_d_long")]
+ [(set_attr "type" "neon_mul_d_long")]
)
(define_insn "crypto_<crypto_pattern>"
@@ -82,5 +82,5 @@
CRYPTO_SELECTING))]
"TARGET_CRYPTO"
"<crypto_pattern>.<crypto_size_sfx>\\t%q0, %q2, %q3"
- [(set_attr "neon_type" "<crypto_type>")]
+ [(set_attr "type" "<crypto_type>")]
)
diff --git a/gcc/config/arm/fa526.md b/gcc/config/arm/fa526.md
index 9ec92d60dc5..401abd3c0a0 100644
--- a/gcc/config/arm/fa526.md
+++ b/gcc/config/arm/fa526.md
@@ -62,13 +62,22 @@
;; ALU operations
(define_insn_reservation "526_alu_op" 1
(and (eq_attr "tune" "fa526")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
- mov_imm,mov_reg,mvn_imm,mvn_reg"))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
+ mov_imm,mov_reg,mvn_imm,mvn_reg,\
+ mrs,multiple,no_insn"))
"fa526_core")
(define_insn_reservation "526_alu_shift_op" 2
(and (eq_attr "tune" "fa526")
- (eq_attr "type" "extend,arlo_shift,arlo_shift_reg,\
+ (eq_attr "type" "extend,\
+ alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
mov_shift,mov_shift_reg,\
mvn_shift,mvn_shift_reg"))
"fa526_core")
diff --git a/gcc/config/arm/fa606te.md b/gcc/config/arm/fa606te.md
index e61242886d7..88347bc2d96 100644
--- a/gcc/config/arm/fa606te.md
+++ b/gcc/config/arm/fa606te.md
@@ -62,10 +62,18 @@
;; ALU operations
(define_insn_reservation "606te_alu_op" 1
(and (eq_attr "tune" "fa606te")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,
- extend,arlo_shift,arlo_shift_reg,\
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,extend,\
+ alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
mov_imm,mov_reg,mov_shift,mov_shift_reg,\
- mvn_imm,mvn_reg,mvn_shift,mvn_shift_reg"))
+ mvn_imm,mvn_reg,mvn_shift,mvn_shift_reg,\
+ mrs,multiple,no_insn"))
"fa606te_core")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
diff --git a/gcc/config/arm/fa626te.md b/gcc/config/arm/fa626te.md
index 04d2a5cf33f..e6790a21215 100644
--- a/gcc/config/arm/fa626te.md
+++ b/gcc/config/arm/fa626te.md
@@ -68,13 +68,22 @@
;; ALU operations
(define_insn_reservation "626te_alu_op" 1
(and (eq_attr "tune" "fa626,fa626te")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
- mov_imm,mov_reg,mvn_imm,mvn_reg"))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
+ mov_imm,mov_reg,mvn_imm,mvn_reg,\
+ mrs,multiple,no_insn"))
"fa626te_core")
(define_insn_reservation "626te_alu_shift_op" 2
(and (eq_attr "tune" "fa626,fa626te")
- (eq_attr "type" "extend,arlo_shift,arlo_shift_reg,\
+ (eq_attr "type" "extend,\
+ alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm,\
+ alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg,\
mov_shift,mov_shift_reg,\
mvn_shift,mvn_shift_reg"))
"fa626te_core")
diff --git a/gcc/config/arm/fa726te.md b/gcc/config/arm/fa726te.md
index 342b9bf5d33..d0a03981eec 100644
--- a/gcc/config/arm/fa726te.md
+++ b/gcc/config/arm/fa726te.md
@@ -86,7 +86,12 @@
;; Other ALU instructions 2 cycles.
(define_insn_reservation "726te_alu_op" 1
(and (eq_attr "tune" "fa726te")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg"))
+ (eq_attr "type" "alu_imm,alus_imm,logic_imm,logics_imm,\
+ alu_reg,alus_reg,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
+ mrs,multiple,no_insn"))
"fa726te_issue+(fa726te_alu0_pipe|fa726te_alu1_pipe)")
;; ALU operations with a shift-by-register operand.
@@ -95,12 +100,14 @@
;; it takes 3 cycles.
(define_insn_reservation "726te_alu_shift_op" 3
(and (eq_attr "tune" "fa726te")
- (eq_attr "type" "extend,arlo_shift"))
+ (eq_attr "type" "extend,alu_shift_imm,alus_shift_imm,\
+ logic_shift_imm,logics_shift_imm"))
"fa726te_issue+(fa726te_alu0_pipe|fa726te_alu1_pipe)")
(define_insn_reservation "726te_alu_shift_reg_op" 3
(and (eq_attr "tune" "fa726te")
- (eq_attr "type" "arlo_shift_reg"))
+ (eq_attr "type" "alu_shift_reg,alus_shift_reg,\
+ logic_shift_reg,logics_shift_reg"))
"fa726te_issue+(fa726te_alu0_pipe|fa726te_alu1_pipe)")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Multiplication Instructions
diff --git a/gcc/config/arm/fmp626.md b/gcc/config/arm/fmp626.md
index 944645b9ead..ffb68570e37 100644
--- a/gcc/config/arm/fmp626.md
+++ b/gcc/config/arm/fmp626.md
@@ -63,13 +63,19 @@
;; ALU operations
(define_insn_reservation "mp626_alu_op" 1
(and (eq_attr "tune" "fmp626")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg,\
+ (eq_attr "type" "alu_imm,alus_imm,alu_reg,alus_reg,\
+ logic_imm,logics_imm,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg,\
mov_imm,mov_reg,mvn_imm,mvn_reg"))
"fmp626_core")
(define_insn_reservation "mp626_alu_shift_op" 2
(and (eq_attr "tune" "fmp626")
- (eq_attr "type" "extend,arlo_shift,arlo_shift_reg,\
+ (eq_attr "type" "alu_shift_imm,logic_shift_imm,alus_shift_imm,logics_shift_imm,\
+ alu_shift_reg,logic_shift_reg,alus_shift_reg,logics_shift_reg,\
+ extend,\
mov_shift,mov_shift_reg,\
mvn_shift,mvn_shift_reg"))
"fmp626_core")
diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md
index ffe4ceb7b76..299c9594532 100644
--- a/gcc/config/arm/iterators.md
+++ b/gcc/config/arm/iterators.md
@@ -369,6 +369,12 @@
(DI "64") (V2DI "64")
(V2SF "32") (V4SF "32")])
+(define_mode_attr V_elem_ch [(V8QI "b") (V16QI "b")
+ (V4HI "h") (V8HI "h")
+ (V2SI "s") (V4SI "s")
+ (DI "d") (V2DI "d")
+ (V2SF "s") (V4SF "s")])
+
;; Element sizes for duplicating ARM registers to all elements of a vector.
(define_mode_attr VD_dup [(V8QI "8") (V4HI "16") (V2SI "32") (V2SF "32")])
@@ -405,7 +411,7 @@
(define_mode_attr scalar_mul_constraint [(V4HI "x") (V2SI "t") (V2SF "t")
(V8HI "x") (V4SI "t") (V4SF "t")])
-;; Predicates used for setting neon_type
+;; Predicates used for setting type for neon instructions
(define_mode_attr Is_float_mode [(V8QI "false") (V16QI "false")
(V4HI "false") (V8HI "false")
@@ -466,6 +472,14 @@
(define_mode_attr vfp_type [(SF "s") (DF "d")])
(define_mode_attr vfp_double_cond [(SF "") (DF "&& TARGET_VFP_DOUBLE")])
+;; Mode attribute used to build the "type" attribute.
+(define_mode_attr q [(V8QI "") (V16QI "_q")
+ (V4HI "") (V8HI "_q")
+ (V2SI "") (V4SI "_q")
+ (V2SF "") (V4SF "_q")
+ (DI "") (V2DI "_q")
+ (DF "") (V2DF "_q")])
+
;;----------------------------------------------------------------------------
;; Code attributes
;;----------------------------------------------------------------------------
@@ -474,6 +488,10 @@
(define_code_attr VQH_mnem [(plus "vadd") (smin "vmin") (smax "vmax")
(umin "vmin") (umax "vmax")])
+;; Type attributes for vqh_ops and vqhs_ops iterators.
+(define_code_attr VQH_type [(plus "add") (smin "minmax") (smax "minmax")
+ (umin "minmax") (umax "minmax")])
+
;; Signs of above, where relevant.
(define_code_attr VQH_sign [(plus "i") (smin "s") (smax "s") (umin "u")
(umax "u")])
@@ -533,13 +551,13 @@
(UNSPEC_SHA256SU1 "sha256su1")])
(define_int_attr crypto_type
- [(UNSPEC_AESE "neon_crypto_aes") (UNSPEC_AESD "neon_crypto_aes")
- (UNSPEC_AESMC "neon_crypto_aes") (UNSPEC_AESIMC "neon_crypto_aes")
- (UNSPEC_SHA1C "neon_crypto_sha1_slow") (UNSPEC_SHA1P "neon_crypto_sha1_slow")
- (UNSPEC_SHA1M "neon_crypto_sha1_slow") (UNSPEC_SHA1SU1 "neon_crypto_sha1_fast")
- (UNSPEC_SHA1SU0 "neon_crypto_sha1_xor") (UNSPEC_SHA256H "neon_crypto_sha256_slow")
- (UNSPEC_SHA256H2 "neon_crypto_sha256_slow") (UNSPEC_SHA256SU0 "neon_crypto_sha256_fast")
- (UNSPEC_SHA256SU1 "neon_crypto_sha256_slow")])
+ [(UNSPEC_AESE "crypto_aes") (UNSPEC_AESD "crypto_aes")
+ (UNSPEC_AESMC "crypto_aes") (UNSPEC_AESIMC "crypto_aes")
+ (UNSPEC_SHA1C "crypto_sha1_slow") (UNSPEC_SHA1P "crypto_sha1_slow")
+ (UNSPEC_SHA1M "crypto_sha1_slow") (UNSPEC_SHA1SU1 "crypto_sha1_fast")
+ (UNSPEC_SHA1SU0 "crypto_sha1_xor") (UNSPEC_SHA256H "crypto_sha256_slow")
+ (UNSPEC_SHA256H2 "crypto_sha256_slow") (UNSPEC_SHA256SU0 "crypto_sha256_fast")
+ (UNSPEC_SHA256SU1 "crypto_sha256_slow")])
(define_int_attr crypto_size_sfx [(UNSPEC_SHA1H "32") (UNSPEC_AESMC "8")
(UNSPEC_AESIMC "8") (UNSPEC_AESD "8")
diff --git a/gcc/config/arm/iwmmxt.md b/gcc/config/arm/iwmmxt.md
index 7066e601d2f..8fa59739e96 100644
--- a/gcc/config/arm/iwmmxt.md
+++ b/gcc/config/arm/iwmmxt.md
@@ -155,7 +155,8 @@
(const_int 8)
(const_int 4))]
(const_int 4)))
- (set_attr "type" "*,*,*,load2,store2,wmmx_wmov,wmmx_tmcrr,wmmx_tmrrc,wmmx_wldr,wmmx_wstr,r_2_f,f_2_r,ffarithd,f_loadd,f_stored")
+ (set_attr "type" "*,*,*,load2,store2,*,*,*,*,*,f_mcrr,f_mrrc,\
+ ffarithd,f_loadd,f_stored")
(set_attr "arm_pool_range" "*,*,*,1020,*,*,*,*,*,*,*,*,*,1020,*")
(set_attr "arm_neg_pool_range" "*,*,*,1008,*,*,*,*,*,*,*,*,*,1008,*")]
)
@@ -187,7 +188,8 @@
default:
gcc_unreachable ();
}"
- [(set_attr "type" "*,*,*,*,load1,store1,wmmx_tmcr,wmmx_tmrc,wmmx_wldr,wmmx_wstr,r_2_f,f_2_r,fcpys,f_loads,f_stores")
+ [(set_attr "type" "*,*,*,*,load1,store1,*,*,*,*,f_mcr,f_mrc,\
+ fmov,f_loads,f_stores")
(set_attr "length" "*,*,*,*,*, *,*,*, 16, *,*,*,*,*,*")
(set_attr "pool_range" "*,*,*,*,4096, *,*,*,1024, *,*,*,*,1020,*")
(set_attr "neg_pool_range" "*,*,*,*,4084, *,*,*, *, 1012,*,*,*,1008,*")
diff --git a/gcc/config/arm/marvell-pj4.md b/gcc/config/arm/marvell-pj4.md
index 0e2c443721e..880789600e0 100644
--- a/gcc/config/arm/marvell-pj4.md
+++ b/gcc/config/arm/marvell-pj4.md
@@ -53,26 +53,42 @@
(define_insn_reservation "pj4_alu" 1
(and (eq_attr "tune" "marvell_pj4")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg")
+ (eq_attr "type" "alu_imm,alus_imm,alu_reg,alus_reg,\
+ logic_imm,logics_imm,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg")
(not (eq_attr "conds" "set")))
"pj4_is,(pj4_alu1,pj4_w1+pj4_cp)|(pj4_alu2,pj4_w2+pj4_cp)")
(define_insn_reservation "pj4_alu_conds" 4
(and (eq_attr "tune" "marvell_pj4")
- (eq_attr "type" "arlo_imm,arlo_reg,shift,shift_reg")
+ (eq_attr "type" "alu_imm,alus_imm,alu_reg,alus_reg,\
+ logic_imm,logics_imm,logic_reg,logics_reg,\
+ adc_imm,adcs_imm,adc_reg,adcs_reg,\
+ adr,bfm,rev,\
+ shift_imm,shift_reg")
(eq_attr "conds" "set"))
"pj4_is,(pj4_alu1,pj4_w1+pj4_cp)|(pj4_alu2,pj4_w2+pj4_cp)")
(define_insn_reservation "pj4_shift" 1
(and (eq_attr "tune" "marvell_pj4")
- (eq_attr "type" "arlo_shift,arlo_shift_reg,extend,\
+ (eq_attr "type" "alu_shift_imm,logic_shift_imm,\
+ alus_shift_imm,logics_shift_imm,\
+ alu_shift_reg,logic_shift_reg,\
+ alus_shift_reg,logics_shift_reg,\
+ extend,\
mov_shift,mvn_shift,mov_shift_reg,mvn_shift_reg")
(not (eq_attr "conds" "set"))
(eq_attr "shift" "1")) "pj4_is,(pj4_alu1,pj4_w1+pj4_cp)|(pj4_alu2,pj4_w2+pj4_cp)")
(define_insn_reservation "pj4_shift_conds" 4
(and (eq_attr "tune" "marvell_pj4")
- (eq_attr "type" "arlo_shift,arlo_shift_reg,extend,\
+ (eq_attr "type" "alu_shift_imm,logic_shift_imm,\
+ alus_shift_imm,logics_shift_imm,\
+ alu_shift_reg,logic_shift_reg,\
+ alus_shift_reg,logics_shift_reg,\
+ extend,\
mov_shift,mvn_shift,mov_shift_reg,mvn_shift_reg")
(eq_attr "conds" "set")
(eq_attr "shift" "1")) "pj4_is,(pj4_alu1,pj4_w1+pj4_cp)|(pj4_alu2,pj4_w2+pj4_cp)")
@@ -80,14 +96,20 @@
(define_insn_reservation "pj4_alu_shift" 1
(and (eq_attr "tune" "marvell_pj4")
(not (eq_attr "conds" "set"))
- (eq_attr "type" "arlo_shift,arlo_shift_reg,extend,\
+ (eq_attr "type" "alu_shift_imm,logic_shift_imm,\
+ alus_shift_imm,logics_shift_imm,\
+ alu_shift_reg,logic_shift_reg,\
+ alus_shift_reg,logics_shift_reg,\
+ extend,\
mov_shift,mvn_shift,mov_shift_reg,mvn_shift_reg"))
"pj4_is,(pj4_alu1,nothing,pj4_w1+pj4_cp)|(pj4_alu2,nothing,pj4_w2+pj4_cp)")
(define_insn_reservation "pj4_alu_shift_conds" 4
(and (eq_attr "tune" "marvell_pj4")
(eq_attr "conds" "set")
- (eq_attr "type" "arlo_shift,arlo_shift_reg,extend,\
+ (eq_attr "type" "alu_shift_imm,logic_shift_imm,alus_shift_imm,logics_shift_imm,\
+ alu_shift_reg,logic_shift_reg,alus_shift_reg,logics_shift_reg,\
+ extend,\
mov_shift,mvn_shift,mov_shift_reg,mvn_shift_reg"))
"pj4_is,(pj4_alu1,nothing,pj4_w1+pj4_cp)|(pj4_alu2,nothing,pj4_w2+pj4_cp)")
@@ -171,11 +193,11 @@
(define_insn_reservation "pj4_vfp_divs" 20
(and (eq_attr "tune" "marvell_pj4")
- (eq_attr "type" "fdivs")) "pj4_is,nothing*2,vissue,vdiv*18,nothing")
+ (eq_attr "type" "fdivs, fsqrts")) "pj4_is,nothing*2,vissue,vdiv*18,nothing")
(define_insn_reservation "pj4_vfp_divd" 34
(and (eq_attr "tune" "marvell_pj4")
- (eq_attr "type" "fdivd")) "pj4_is,nothing*2,vissue,vdiv*32,nothing")
+ (eq_attr "type" "fdivd, fsqrtd")) "pj4_is,nothing*2,vissue,vdiv*32,nothing")
(define_insn_reservation "pj4_vfp_mac" 9
(and (eq_attr "tune" "marvell_pj4")
@@ -186,8 +208,9 @@
(define_insn_reservation "pj4_vfp_cpy" 4
(and (eq_attr "tune" "marvell_pj4")
- (eq_attr "type" "fcpys,ffariths,ffarithd,fconsts,fconstd,\
- fcmps,fcmpd,f_cvt")) "pj4_is,nothing*2,vissue,vfast,nothing*2")
+ (eq_attr "type" "fmov,ffariths,ffarithd,fconsts,fconstd,\
+ fcmps,fcmpd,f_cvt,f_cvtf2i,f_cvti2f"))
+"pj4_is,nothing*2,vissue,vfast,nothing*2")
;; Enlarge latency, and wish that more nondependent insns are
;; scheduled immediately after VFP load.
@@ -201,9 +224,9 @@
(define_insn_reservation "pj4_vfp_to_core" 7
(and (eq_attr "tune" "marvell_pj4")
- (eq_attr "type" "f_2_r,f_flag")) "pj4_isb,nothing,nothing,vissue,vfast,nothing*2")
+ (eq_attr "type" "f_mrc,f_mrrc,f_flag")) "pj4_isb,nothing,nothing,vissue,vfast,nothing*2")
(define_insn_reservation "pj4_core_to_vfp" 2
(and (eq_attr "tune" "marvell_pj4")
- (eq_attr "type" "r_2_f")) "pj4_isb,pj4_alu1,pj4_w1,vissue,pj4_cp")
+ (eq_attr "type" "f_mcr,f_mcrr")) "pj4_isb,pj4_alu1,pj4_w1,vissue,pj4_cp")
diff --git a/gcc/config/arm/neon-schedgen.ml b/gcc/config/arm/neon-schedgen.ml
deleted file mode 100644
index 7dacbab2625..00000000000
--- a/gcc/config/arm/neon-schedgen.ml
+++ /dev/null
@@ -1,543 +0,0 @@
-(* Emission of the core of the Cortex-A8 NEON scheduling description.
- Copyright (C) 2007-2013 Free Software Foundation, Inc.
- Contributed by CodeSourcery.
- This file is part of GCC.
-
- GCC is free software; you can redistribute it and/or modify it under
- the terms of the GNU General Public License as published by the Free
- Software Foundation; either version 3, or (at your option) any later
- version.
-
- GCC is distributed in the hope that it will be useful, but WITHOUT ANY
- WARRANTY; without even the implied warranty of MERCHANTABILITY or
- FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- for more details.
-
- You should have received a copy of the GNU General Public License
- along with GCC; see the file COPYING3. If not see
- <http://www.gnu.org/licenses/>.
-*)
-
-(* This scheduling description generator works as follows.
- - Each group of instructions has source and destination requirements
- specified and a list of cores supported. This is then filtered
- and per core scheduler descriptions are generated out.
- The reservations generated are prefixed by the name of the
- core and the check is performed on the basis of what the tuning
- string is. Running this will generate Neon scheduler descriptions
- for all cores supported.
-
- The source requirements may be specified using
- Source (the stage at which all source operands not otherwise
- described are read), Source_m (the stage at which Rm operands are
- read), Source_n (likewise for Rn) and Source_d (likewise for Rd).
- - For each group of instructions the earliest stage where a source
- operand may be required is calculated.
- - Each group of instructions is selected in turn as a producer.
- The latencies between this group and every other group are then
- calculated, yielding up to four values for each combination:
- 1. Producer -> consumer Rn latency
- 2. Producer -> consumer Rm latency
- 3. Producer -> consumer Rd (as a source) latency
- 4. Producer -> consumer worst-case latency.
- Value 4 is calculated from the destination availability requirements
- of the consumer and the earliest source availability requirements
- of the producer.
- - The largest Value 4 calculated for the current producer is the
- worse-case latency, L, for that instruction group. This value is written
- out in a define_insn_reservation for the producer group.
- - For each producer and consumer pair, the latencies calculated above
- are collated. The average (of up to four values) is calculated and
- if this average is different from the worst-case latency, an
- unguarded define_bypass construction is issued for that pair.
- (For each pair only one define_bypass construction will be emitted,
- and at present we do not emit specific guards.)
-*)
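-
-(* The latency scheme the comment above describes can be sketched outside
-   the generator: a producer's result becomes available at its Dest stage,
-   a consumer needs its operands at its earliest Source stage, and the
-   worst-case latency is the gap between the two. The Python below is an
-   illustration of that rule only — not the OCaml implementation — using
-   two made-up groups in the style of the availability_table entries:

```python
# Sketch of the latency rule described above: a producer's result is
# available at its Dest stage; a consumer reads sources at its earliest
# Source stage. Worst-case latency = dest_stage - earliest_source + 1.
# The group definitions below are illustrative, not the real tables.

def earliest_source(avail):
    """Earliest stage at which any source operand is required."""
    sources = [stage for kind, stage in avail if kind.startswith("source")]
    return min(sources) if sources else None

def worst_case_latency(producer, consumer):
    """Producer -> consumer latency per the scheme in the comment."""
    dest = next(stage for kind, stage in producer if kind == "dest")
    return dest - earliest_source(consumer) + 1

# Hypothetical groups, mimicking availability_table entries:
neon_int_1 = [("source", 2), ("dest", 3)]   # reads at N2, writes at N3
neon_mul   = [("source", 2), ("dest", 6)]   # reads at N2, writes at N6

print(worst_case_latency(neon_mul, neon_int_1))   # 6 - 2 + 1 = 5
print(worst_case_latency(neon_int_1, neon_int_1)) # 3 - 2 + 1 = 2
```

-   With these made-up stages the multiply group gets a latency of 5 into
-   integer consumers and the simple ALU group a latency of 2, which is
-   consistent with the define_bypass 5 and define_bypass 2 constructs
-   visible in the cortex-a9 hunks earlier in this patch.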
-
-let find_with_result fn lst =
- let rec scan = function
- [] -> raise Not_found
- | l::ls ->
- match fn l with
- Some result -> result
- | _ -> scan ls in
- scan lst
-
-let n1 = 1 and n2 = 2 and n3 = 3 and n4 = 4 and n5 = 5 and n6 = 6
- and n7 = 7 and n8 = 8 and n9 = 9
-
-type availability = Source of int
- | Source_n of int
- | Source_m of int
- | Source_d of int
- | Dest of int
- | Dest_n_after of int * int
-
-type guard = Guard_none | Guard_only_m | Guard_only_n | Guard_only_d
-
-(* Reservation behaviors. All but the last row here correspond to one
- pipeline each. Each constructor will correspond to one
- define_reservation. *)
-type reservation =
- Mul | Mul_2cycle | Mul_4cycle
-| Shift | Shift_2cycle
-| ALU | ALU_2cycle
-| Fmul | Fmul_2cycle
-| Fadd | Fadd_2cycle
-(* | VFP *)
-| Permute of int
-| Ls of int
-| Fmul_then_fadd | Fmul_then_fadd_2
-
-type core = CortexA8 | CortexA9
-let allCores = [CortexA8; CortexA9]
-let coreStr = function
- CortexA8 -> "cortex_a8"
- | CortexA9 -> "cortex_a9"
-
-let tuneStr = function
- CortexA8 -> "cortexa8"
- | CortexA9 -> "cortexa9"
-
-
-(* This table must be kept as short as possible by conflating
- entries with the same availability behavior.
-
- First components: instruction group names
- Second components: availability requirements, in the order in which
- they should appear in the comments in the .md file.
- Third components: reservation info
- Fourth components: List of supported cores.
-*)
-let availability_table = [
- (* NEON integer ALU instructions. *)
- (* vbit vbif vbsl vorr vbic vnot vcls vclz vcnt vadd vand vorr
- veor vbic vorn ddd qqq *)
- "neon_int_1", [Source n2; Dest n3], ALU, allCores;
- (* vadd vsub qqd vsub ddd qqq *)
- "neon_int_2", [Source_m n1; Source_n n2; Dest n3], ALU, allCores;
- (* vsum vneg dd qq vadd vsub qdd *)
- "neon_int_3", [Source n1; Dest n3], ALU, allCores;
- (* vabs vceqz vcgez vcbtz vclez vcltz vadh vradh vsbh vrsbh dqq *)
- (* vhadd vrhadd vqadd vtst ddd qqq *)
- "neon_int_4", [Source n2; Dest n4], ALU, allCores;
- (* vabd qdd vhsub vqsub vabd vceq vcge vcgt vmax vmin vfmx vfmn ddd ddd *)
- "neon_int_5", [Source_m n1; Source_n n2; Dest n4], ALU, allCores;
- (* vqneg vqabs dd qq *)
- "neon_vqneg_vqabs", [Source n1; Dest n4], ALU, allCores;
- (* vmov vmvn *)
- "neon_vmov", [Dest n3], ALU, allCores;
- (* vaba *)
- "neon_vaba", [Source_n n2; Source_m n1; Source_d n3; Dest n6], ALU, allCores;
- "neon_vaba_qqq",
- [Source_n n2; Source_m n1; Source_d n3; Dest_n_after (1, n6)],
- ALU_2cycle, allCores;
- (* vsma *)
- "neon_vsma", [Source_m n1; Source_d n3; Dest n6], ALU, allCores;
-
- (* NEON integer multiply instructions. *)
- (* vmul, vqdmlh, vqrdmlh *)
- (* vmul, vqdmul, qdd 16/8 long 32/16 long *)
- "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long", [Source n2; Dest n6],
- Mul, allCores;
- "neon_mul_qqq_8_16_32_ddd_32", [Source n2; Dest_n_after (1, n6)],
- Mul_2cycle, allCores;
- (* vmul, vqdmul again *)
- "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar",
- [Source_n n2; Source_m n1; Dest_n_after (1, n6)], Mul_2cycle, allCores;
- (* vmla, vmls *)
- "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long",
- [Source_n n2; Source_m n2; Source_d n3; Dest n6], Mul, allCores;
- "neon_mla_qqq_8_16",
- [Source_n n2; Source_m n2; Source_d n3; Dest_n_after (1, n6)],
- Mul_2cycle, allCores;
- "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long",
- [Source_n n2; Source_m n1; Source_d n3; Dest_n_after (1, n6)],
- Mul_2cycle, allCores;
- "neon_mla_qqq_32_qqd_32_scalar",
- [Source_n n2; Source_m n1; Source_d n3; Dest_n_after (3, n6)],
- Mul_4cycle, allCores;
- (* vmul, vqdmulh, vqrdmulh *)
- (* vmul, vqdmul *)
- "neon_mul_ddd_16_scalar_32_16_long_scalar",
- [Source_n n2; Source_m n1; Dest n6], Mul, allCores;
- "neon_mul_qqd_32_scalar",
- [Source_n n2; Source_m n1; Dest_n_after (3, n6)], Mul_4cycle, allCores;
- (* vmla, vmls *)
- (* vmla, vmla, vqdmla, vqdmls *)
- "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar",
- [Source_n n2; Source_m n1; Source_d n3; Dest n6], Mul, allCores;
-
- (* NEON integer shift instructions. *)
- (* vshr/vshl immediate, vshr_narrow, vshl_vmvh, vsli_vsri_ddd *)
- "neon_shift_1", [Source n1; Dest n3], Shift, allCores;
- (* vqshl, vrshr immediate; vqshr, vqmov, vrshr, vqrshr narrow, allCores;
- vqshl_vrshl_vqrshl_ddd *)
- "neon_shift_2", [Source n1; Dest n4], Shift, allCores;
- (* vsli, vsri and vshl for qqq *)
- "neon_shift_3", [Source n1; Dest_n_after (1, n3)], Shift_2cycle, allCores;
- "neon_vshl_ddd", [Source n1; Dest n1], Shift, allCores;
- "neon_vqshl_vrshl_vqrshl_qqq", [Source n1; Dest_n_after (1, n4)],
- Shift_2cycle, allCores;
- "neon_vsra_vrsra", [Source_m n1; Source_d n3; Dest n6], Shift, allCores;
-
- (* NEON floating-point instructions. *)
- (* vadd, vsub, vabd, vmul, vceq, vcge, vcgt, vcage, vcagt, vmax, vmin *)
- (* vabs, vneg, vceqz, vcgez, vcgtz, vclez, vcltz, vrecpe, vrsqrte, vcvt *)
- "neon_fp_vadd_ddd_vabs_dd", [Source n2; Dest n5], Fadd, allCores;
- "neon_fp_vadd_qqq_vabs_qq", [Source n2; Dest_n_after (1, n5)],
- Fadd_2cycle, allCores;
- (* vsum, fvmx, vfmn *)
- "neon_fp_vsum", [Source n1; Dest n5], Fadd, allCores;
- "neon_fp_vmul_ddd", [Source_n n2; Source_m n1; Dest n5], Fmul, allCores;
- "neon_fp_vmul_qqd", [Source_n n2; Source_m n1; Dest_n_after (1, n5)],
- Fmul_2cycle, allCores;
- (* vmla, vmls *)
- "neon_fp_vmla_ddd",
- [Source_n n2; Source_m n2; Source_d n3; Dest n9], Fmul_then_fadd, allCores;
- "neon_fp_vmla_qqq",
- [Source_n n2; Source_m n2; Source_d n3; Dest_n_after (1, n9)],
- Fmul_then_fadd_2, allCores;
- "neon_fp_vmla_ddd_scalar",
- [Source_n n2; Source_m n1; Source_d n3; Dest n9], Fmul_then_fadd, allCores;
- "neon_fp_vmla_qqq_scalar",
- [Source_n n2; Source_m n1; Source_d n3; Dest_n_after (1, n9)],
- Fmul_then_fadd_2, allCores;
- "neon_fp_vrecps_vrsqrts_ddd", [Source n2; Dest n9], Fmul_then_fadd, allCores;
- "neon_fp_vrecps_vrsqrts_qqq", [Source n2; Dest_n_after (1, n9)],
- Fmul_then_fadd_2, allCores;
-
- (* NEON byte permute instructions. *)
- (* vmov; vtrn and vswp for dd; vzip for dd; vuzp for dd; vrev; vext for dd *)
- "neon_bp_simple", [Source n1; Dest n2], Permute 1, allCores;
- (* vswp for qq; vext for qqq; vtbl with {Dn} or {Dn, Dn1}, allCores;
- similarly for vtbx *)
- "neon_bp_2cycle", [Source n1; Dest_n_after (1, n2)], Permute 2, allCores;
- (* all the rest *)
- "neon_bp_3cycle", [Source n1; Dest_n_after (2, n2)], Permute 3, allCores;
-
- (* NEON load/store instructions. *)
- "neon_ldr", [Dest n1], Ls 1, allCores;
- "neon_str", [Source n1], Ls 1, allCores;
- "neon_vld1_1_2_regs", [Dest_n_after (1, n1)], Ls 2, allCores;
- "neon_vld1_3_4_regs", [Dest_n_after (2, n1)], Ls 3, allCores;
- "neon_vld2_2_regs_vld1_vld2_all_lanes", [Dest_n_after (1, n2)], Ls 2, allCores;
- "neon_vld2_4_regs", [Dest_n_after (2, n2)], Ls 3, allCores;
- "neon_vld3_vld4", [Dest_n_after (3, n2)], Ls 4, allCores;
- "neon_vst1_1_2_regs_vst2_2_regs", [Source n1], Ls 2, allCores;
- "neon_vst1_3_4_regs", [Source n1], Ls 3, allCores;
- "neon_vst2_4_regs_vst3_vst4", [Source n1], Ls 4, allCores;
- "neon_vst3_vst4", [Source n1], Ls 4, allCores;
- "neon_vld1_vld2_lane", [Source n1; Dest_n_after (2, n2)], Ls 3, allCores;
- "neon_vld3_vld4_lane", [Source n1; Dest_n_after (4, n2)], Ls 5, allCores;
- "neon_vst1_vst2_lane", [Source n1], Ls 2, allCores;
- "neon_vst3_vst4_lane", [Source n1], Ls 3, allCores;
- "neon_vld3_vld4_all_lanes", [Dest_n_after (1, n2)], Ls 3, allCores;
-
- (* NEON register transfer instructions. *)
- "neon_mcr", [Dest n2], Permute 1, allCores;
- "neon_mcr_2_mcrr", [Dest n2], Permute 2, allCores;
- (* MRC instructions are in the .tpl file. *)
-]
-
-(* Augment the tuples in the availability table with an extra component
- that describes the earliest stage where a source operand may be
- required. (It is also possible that an entry in the table has no
- source requirements.) *)
-let calculate_sources =
- List.map (fun (name, avail, res, cores) ->
- let earliest_stage =
- List.fold_left
- (fun cur -> fun info ->
- match info with
- Source stage
- | Source_n stage
- | Source_m stage
- | Source_d stage ->
- (match cur with
- None -> Some stage
- | Some stage' when stage < stage' -> Some stage
- | _ -> cur)
- | _ -> cur) None avail
- in
- (name, avail, res, earliest_stage))
-
-(* Find the stage, if any, at the end of which a group produces a result. *)
-let find_dest (attr, avail, _, _) =
- try
- find_with_result
- (fun av -> match av with
- Dest st -> Some (Some st)
- | Dest_n_after (after, st) -> Some (Some (after + st))
- | _ -> None) avail
- with Not_found -> None
-
-(* Find the worst-case latency between a producer and a consumer. *)
-let worst_case_latency producer (_, _, _, earliest_required) =
- let dest = find_dest producer in
- match earliest_required, dest with
- None, _ ->
- (* The consumer doesn't have any source requirements. *)
- None
- | _, None ->
- (* The producer doesn't produce any results (e.g. a store insn). *)
- None
- | Some consumed, Some produced -> Some (produced - consumed + 1)
-
-(* Helper function for below. *)
-let latency_calc f producer (_, avail, _, _) =
- try
- let source_avail = find_with_result f avail in
- match find_dest producer with
- None ->
- (* The producer does not produce a result. *)
- Some 0
- | Some produced ->
- let latency = produced - source_avail + 1 in
- (* Latencies below zero are raised to zero since we don't have
- delay slots. *)
- if latency < 0 then Some 0 else Some latency
- with Not_found -> None
-
-(* Find any Rm latency between a producer and a consumer. If no
- Rm source requirement is explicitly specified for the consumer,
- return "positive infinity". Also return "positive infinity" if
- the latency matches the supplied worst-case latency for this
- producer. *)
-let get_m_latency producer consumer =
- match latency_calc (fun av -> match av with Source_m stage -> Some stage
- | _ -> None) producer consumer
- with None -> [] | Some latency -> [(Guard_only_m, latency)]
-
-(* Likewise for Rn. *)
-let get_n_latency producer consumer =
- match latency_calc (fun av -> match av with Source_n stage -> Some stage
- | _ -> None) producer consumer
- with None -> [] | Some latency -> [(Guard_only_n, latency)]
-
-(* Likewise for Rd. *)
-let get_d_latency producer consumer =
- match
- latency_calc (fun av -> match av with Source_d stage -> Some stage
- | _ -> None) producer consumer
- with None -> [] | Some latency -> [(Guard_only_d, latency)]
-
-(* Given a producer and a consumer, work out the latency of the producer
- to the consumer in each of the four cases (availability information
- permitting) identified at the top of this file. Return the
- consumer, the worst-case unguarded latency and any guarded latencies. *)
-let calculate_latencies producer consumer =
- let worst = worst_case_latency producer consumer in
- let m_latency = get_m_latency producer consumer in
- let n_latency = get_n_latency producer consumer in
- let d_latency = get_d_latency producer consumer in
- (consumer, worst, m_latency @ n_latency @ d_latency)
-
-(* Helper function for below. *)
-let pick_latency largest worst guards =
- let guards =
- match worst with
- None -> guards
- | Some worst -> (Guard_none, worst) :: guards
- in
- if List.length guards = 0 then None else
- let total_latency =
- List.fold_left (fun acc -> fun (_, latency) -> acc + latency) 0 guards
- in
- let average_latency = (float_of_int total_latency) /.
- (float_of_int (List.length guards)) in
- let rounded_latency = int_of_float (ceil average_latency) in
- if rounded_latency = largest then None
- else Some (Guard_none, rounded_latency)
-
-(* Collate all bypasses for a particular producer as required in
- worst_case_latencies_and_bypasses. (By this stage there is a maximum
- of one bypass from this producer to any particular consumer listed
- in LATENCIES.) Use a hash table to collate bypasses with the
- same latency and guard. *)
-let collate_bypasses (producer_name, _, _, _) largest latencies core =
- let ht = Hashtbl.create 42 in
- let keys = ref [] in
- List.iter (
- fun ((consumer, _, _, _), worst, guards) ->
- (* Find out which latency to use. Ignoring latencies that match
- the *overall* worst-case latency for this producer (which will
- be in define_insn_reservation), we have to examine:
- 1. the latency with no guard between this producer and this
- consumer; and
- 2. any guarded latency. *)
- let guard_latency_opt = pick_latency largest worst guards in
- match guard_latency_opt with
- None -> ()
- | Some (guard, latency) ->
- begin
- (if (try ignore (Hashtbl.find ht (guard, latency)); false
- with Not_found -> true) then
- keys := (guard, latency) :: !keys);
- Hashtbl.add ht (guard, latency) ((coreStr core) ^ "_" ^ consumer)
- end
- ) latencies;
- (* The hash table now has bypasses collated so that ones with the
- same latency and guard have the same keys. Walk through all the
- keys, extract the associated bypasses, and concatenate the names
- of the consumers for each bypass. *)
- List.map (
- fun ((guard, latency) as key) ->
- let consumers = Hashtbl.find_all ht key in
- (producer_name,
- String.concat ",\\\n " consumers,
- latency,
- guard)
- ) !keys
-
-(* For every producer, find the worst-case latency between it and
- *any* consumer. Also determine (if such a thing exists) the
- lowest-latency bypass from each producer to each consumer. Group
- the output in such a way that all bypasses with the same producer
- and latency are together, and so that bypasses with the worst-case
- latency are ignored. *)
-let worst_case_latencies_and_bypasses core =
- let rec f (worst_acc, bypasses_acc) prev xs =
- match xs with
- [] -> (worst_acc, bypasses_acc)
- | ((producer_name, producer_avail, res_string, _) as producer)::next ->
- (* For this particular producer, work out the latencies between
- it and every consumer. *)
- let latencies =
- List.fold_left (fun acc -> fun consumer ->
- (calculate_latencies producer consumer) :: acc)
- [] (prev @ xs)
- in
- (* Now work out what the overall worst case latency was for this
- particular producer. *)
- match latencies with
- [] -> assert false
- | _ ->
- let comp_fn (_, l1, _) (_, l2, _) =
- if l1 > l2 then -1 else if l1 = l2 then 0 else 1
- in
- let largest =
- match List.hd (List.sort comp_fn latencies) with
- (_, None, _) -> 0 (* Producer has no consumers. *)
- | (_, Some worst, _) -> worst
- in
- (* Having got the largest latency, collect all bypasses for
- this producer and filter out those with that larger
- latency. Record the others for later emission. *)
- let bypasses = collate_bypasses producer largest latencies core in
- (* Go on to process remaining producers, having noted
- the result for this one. *)
- f ((producer_name, producer_avail, largest,
- res_string) :: worst_acc,
- bypasses @ bypasses_acc)
- (prev @ [producer]) next
- in
- f ([], []) []
-
-(* Emit a helpful comment for a define_insn_reservation. *)
-let write_comment producer avail =
- let seen_source = ref false in
- let describe info =
- let read = if !seen_source then "" else "read " in
- match info with
- Source stage ->
- seen_source := true;
- Printf.printf "%stheir source operands at N%d" read stage
- | Source_n stage ->
- seen_source := true;
- Printf.printf "%stheir (D|Q)n operands at N%d" read stage
- | Source_m stage ->
- seen_source := true;
- Printf.printf "%stheir (D|Q)m operands at N%d" read stage
- | Source_d stage ->
- Printf.printf "%stheir (D|Q)d operands at N%d" read stage
- | Dest stage ->
- Printf.printf "produce a result at N%d" stage
- | Dest_n_after (after, stage) ->
- Printf.printf "produce a result at N%d on cycle %d" stage (after + 1)
- in
- Printf.printf ";; Instructions using this reservation ";
- let rec f infos x =
- let sep = if x mod 2 = 1 then "" else "\n;;" in
- match infos with
- [] -> assert false
- | [info] -> describe info; Printf.printf ".\n"
- | info::(_::[] as infos) ->
- describe info; Printf.printf ", and%s " sep; f infos (x+1)
- | info::infos -> describe info; Printf.printf ",%s " sep; f infos (x+1)
- in
- f avail 0
-
-
-(* Emit a define_insn_reservation for each producer. The latency
- written in will be its worst-case latency. *)
-let emit_insn_reservations core =
- let corestring = coreStr core in
- let tunestring = tuneStr core
- in List.iter (
- fun (producer, avail, latency, reservation) ->
- write_comment producer avail;
- Printf.printf "(define_insn_reservation \"%s_%s\" %d\n"
- corestring producer latency;
- Printf.printf " (and (eq_attr \"tune\" \"%s\")\n" tunestring;
- Printf.printf " (eq_attr \"neon_type\" \"%s\"))\n" producer;
- let str =
- match reservation with
- Mul -> "dp" | Mul_2cycle -> "dp_2" | Mul_4cycle -> "dp_4"
- | Shift -> "dp" | Shift_2cycle -> "dp_2"
- | ALU -> "dp" | ALU_2cycle -> "dp_2"
- | Fmul -> "dp" | Fmul_2cycle -> "dp_2"
- | Fadd -> "fadd" | Fadd_2cycle -> "fadd_2"
- | Ls 1 -> "ls"
- | Ls n -> "ls_" ^ (string_of_int n)
- | Permute 1 -> "perm"
- | Permute n -> "perm_" ^ (string_of_int n)
- | Fmul_then_fadd -> "fmul_then_fadd"
- | Fmul_then_fadd_2 -> "fmul_then_fadd_2"
- in
- Printf.printf " \"%s_neon_%s\")\n\n" corestring str
- )
-
-(* Given a guard description, return the name of the C function to
- be used as the guard for define_bypass. *)
-let guard_fn g =
- match g with
- Guard_only_m -> "arm_neon_only_m_dependency"
- | Guard_only_n -> "arm_neon_only_n_dependency"
- | Guard_only_d -> "arm_neon_only_d_dependency"
- | Guard_none -> assert false
-
-(* Emit a define_bypass for each bypass. *)
-let emit_bypasses core =
- List.iter (
- fun (producer, consumers, latency, guard) ->
- Printf.printf "(define_bypass %d \"%s_%s\"\n"
- latency (coreStr core) producer;
-
- if guard = Guard_none then
- Printf.printf " \"%s\")\n\n" consumers
- else
- begin
- Printf.printf " \"%s\"\n" consumers;
- Printf.printf " \"%s\")\n\n" (guard_fn guard)
- end
- )
-
-
-let calculate_per_core_availability_table core availability_table =
- let table = calculate_sources availability_table in
- let worst_cases, bypasses = worst_case_latencies_and_bypasses core table in
- emit_insn_reservations core (List.rev worst_cases);
- Printf.printf ";; Exceptions to the default latencies.\n\n";
- emit_bypasses core bypasses
-
-let calculate_core_availability_table core availability_table =
-let filter_core = List.filter (fun (_, _, _, cores)
- -> List.exists ((=) core) cores)
-in calculate_per_core_availability_table core (filter_core availability_table)
-
-
-(* Program entry point. *)
-let main =
- List.map (fun core -> calculate_core_availability_table
- core availability_table) allCores
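As an aside on the generator being deleted above: its `worst_case_latency` and `latency_calc` functions both reduce to the same arithmetic — a producer whose result is ready at the end of stage `produced` feeding a consumer that reads at stage `consumed` costs `produced - consumed + 1` cycles, clamped at zero since there are no delay slots. A minimal standalone sketch of that model (not part of the patch; the stage numbers below are taken from the `neon_shift_1` entry, `[Source n1; Dest n3]`):

```ocaml
(* Sketch of the latency arithmetic from the deleted neon-schedgen.ml:
   result available at end of stage [produced], operand read at stage
   [consumed]; negative latencies are raised to zero (no delay slots). *)
let latency ~produced ~consumed =
  max 0 (produced - consumed + 1)

let () =
  (* neon_shift_1 produces at N3; a consumer reading at N1 waits 3 cycles. *)
  assert (latency ~produced:3 ~consumed:1 = 3);
  (* A result forwarded early enough to a late-reading consumer is free. *)
  assert (latency ~produced:1 ~consumed:3 = 0);
  print_endline "ok"
```

The per-operand variants (`get_m_latency`, `get_n_latency`, `get_d_latency`) apply the same formula, differing only in which `Source_*` entry they match.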
diff --git a/gcc/config/arm/neon.md b/gcc/config/arm/neon.md
index 7442cabe16d..9d561bdb14e 100644
--- a/gcc/config/arm/neon.md
+++ b/gcc/config/arm/neon.md
@@ -20,7 +20,7 @@
;; Attribute used to permit string comparisons against <VQH_mnem> in
-;; neon_type attribute definitions.
+;; type attribute definitions.
(define_attr "vqh_mnem" "vadd,vmin,vmax" (const_string "vadd"))
(define_insn "*neon_mov<mode>"
@@ -60,8 +60,9 @@
default: return output_move_double (operands, true, NULL);
}
}
- [(set_attr "neon_type" "neon_int_1,*,neon_vmov,*,neon_mrrc,neon_mcr_2_mcrr,*,*,*")
- (set_attr "type" "*,f_stored,*,f_loadd,*,*,mov_reg,load2,store2")
+ [(set_attr "type" "neon_move<q>,neon_store1_1reg,neon_move<q>,\
+ neon_load1_1reg, neon_to_gp<q>,neon_from_gp<q>,mov_reg,\
+ neon_load1_2reg, neon_store1_2reg")
(set_attr "length" "4,4,4,4,4,4,8,8,8")
(set_attr "arm_pool_range" "*,*,*,1020,*,*,*,1020,*")
(set_attr "thumb2_pool_range" "*,*,*,1018,*,*,*,1018,*")
@@ -104,9 +105,9 @@
default: return output_move_quad (operands);
}
}
- [(set_attr "neon_type" "neon_int_1,neon_stm_2,neon_vmov,neon_ldm_2,\
- neon_mrrc,neon_mcr_2_mcrr,*,*,*")
- (set_attr "type" "*,*,*,*,*,*,mov_reg,load4,store4")
+ [(set_attr "type" "neon_move_q,neon_store2_2reg_q,neon_move_q,\
+ neon_load2_2reg_q,neon_to_gp_q,neon_from_gp_q,\
+ mov_reg,neon_load1_4reg,neon_store1_4reg")
(set_attr "length" "4,8,4,8,8,8,16,8,16")
(set_attr "arm_pool_range" "*,*,*,1020,*,*,*,1020,*")
(set_attr "thumb2_pool_range" "*,*,*,1018,*,*,*,1018,*")
@@ -150,7 +151,7 @@
default: gcc_unreachable ();
}
}
- [(set_attr "neon_type" "neon_int_1,neon_stm_2,neon_ldm_2")
+ [(set_attr "type" "neon_move_q,neon_store2_2reg_q,neon_load2_2reg_q")
(set (attr "length") (symbol_ref "arm_attr_length_move_neon (insn)"))])
(define_split
@@ -258,7 +259,7 @@
UNSPEC_MISALIGNED_ACCESS))]
"TARGET_NEON && !BYTES_BIG_ENDIAN && unaligned_access"
"vst1.<V_sz_elem>\t{%P1}, %A0"
- [(set_attr "neon_type" "neon_vst1_1_2_regs_vst2_2_regs")])
+ [(set_attr "type" "neon_store1_1reg<q>")])
(define_insn "*movmisalign<mode>_neon_load"
[(set (match_operand:VDX 0 "s_register_operand" "=w")
@@ -266,7 +267,7 @@
UNSPEC_MISALIGNED_ACCESS))]
"TARGET_NEON && !BYTES_BIG_ENDIAN && unaligned_access"
"vld1.<V_sz_elem>\t{%P0}, %A1"
- [(set_attr "neon_type" "neon_vld1_1_2_regs")])
+ [(set_attr "type" "neon_load1_1reg<q>")])
(define_insn "*movmisalign<mode>_neon_store"
[(set (match_operand:VQX 0 "neon_struct_operand" "=Um")
@@ -274,7 +275,7 @@
UNSPEC_MISALIGNED_ACCESS))]
"TARGET_NEON && !BYTES_BIG_ENDIAN && unaligned_access"
"vst1.<V_sz_elem>\t{%q1}, %A0"
- [(set_attr "neon_type" "neon_vst1_1_2_regs_vst2_2_regs")])
+ [(set_attr "type" "neon_store1_1reg<q>")])
(define_insn "*movmisalign<mode>_neon_load"
[(set (match_operand:VQX 0 "s_register_operand" "=w")
@@ -282,7 +283,7 @@
UNSPEC_MISALIGNED_ACCESS))]
"TARGET_NEON && !BYTES_BIG_ENDIAN && unaligned_access"
"vld1.<V_sz_elem>\t{%q0}, %A1"
- [(set_attr "neon_type" "neon_vld1_1_2_regs")])
+ [(set_attr "type" "neon_load1_1reg<q>")])
(define_insn "vec_set<mode>_internal"
[(set (match_operand:VD 0 "s_register_operand" "=w,w")
@@ -303,7 +304,7 @@
else
return "vmov.<V_sz_elem>\t%P0[%c2], %1";
}
- [(set_attr "neon_type" "neon_vld1_vld2_lane,neon_mcr")])
+ [(set_attr "type" "neon_load1_all_lanes<q>,neon_from_gp<q>")])
(define_insn "vec_set<mode>_internal"
[(set (match_operand:VQ 0 "s_register_operand" "=w,w")
@@ -331,7 +332,7 @@
else
return "vmov.<V_sz_elem>\t%P0[%c2], %1";
}
- [(set_attr "neon_type" "neon_vld1_vld2_lane,neon_mcr")]
+ [(set_attr "type" "neon_load1_all_lanes<q>,neon_from_gp<q>")]
)
(define_insn "vec_setv2di_internal"
@@ -353,7 +354,7 @@
else
return "vmov\t%P0, %Q1, %R1";
}
- [(set_attr "neon_type" "neon_vld1_1_2_regs,neon_mcr_2_mcrr")]
+ [(set_attr "type" "neon_load1_all_lanes_q,neon_from_gp_q")]
)
(define_expand "vec_set<mode>"
@@ -387,7 +388,7 @@
else
return "vmov.<V_uf_sclr>\t%0, %P1[%c2]";
}
- [(set_attr "neon_type" "neon_vst1_vst2_lane,neon_bp_simple")]
+ [(set_attr "type" "neon_store1_one_lane<q>,neon_to_gp<q>")]
)
(define_insn "vec_extract<mode>"
@@ -413,7 +414,7 @@
else
return "vmov.<V_uf_sclr>\t%0, %P1[%c2]";
}
- [(set_attr "neon_type" "neon_vst1_vst2_lane,neon_bp_simple")]
+ [(set_attr "type" "neon_store1_one_lane<q>,neon_to_gp<q>")]
)
(define_insn "vec_extractv2di"
@@ -432,7 +433,7 @@
else
return "vmov\t%Q0, %R0, %P1 @ v2di";
}
- [(set_attr "neon_type" "neon_vst1_vst2_lane,neon_int_1")]
+ [(set_attr "type" "neon_store1_one_lane_q,neon_to_gp_q")]
)
(define_expand "vec_init<mode>"
@@ -455,12 +456,10 @@
(match_operand:VDQ 2 "s_register_operand" "w")))]
"TARGET_NEON && (!<Is_float_mode> || flag_unsafe_math_optimizations)"
"vadd.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_1")))]
+ (const_string "neon_fp_addsub_s<q>")
+ (const_string "neon_add<q>")))]
)
(define_insn "adddi3_neon"
@@ -482,7 +481,8 @@
default: gcc_unreachable ();
}
}
- [(set_attr "neon_type" "neon_int_1,*,*,neon_int_1,*,*,*")
+ [(set_attr "type" "neon_add,multiple,multiple,neon_add,\
+ multiple,multiple,multiple")
(set_attr "conds" "*,clob,clob,*,clob,clob,clob")
(set_attr "length" "*,8,8,*,8,8,8")
(set_attr "arch" "neon_for_64bits,*,*,avoid_neon_for_64bits,*,*,*")]
@@ -494,12 +494,10 @@
(match_operand:VDQ 2 "s_register_operand" "w")))]
"TARGET_NEON && (!<Is_float_mode> || flag_unsafe_math_optimizations)"
"vsub.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_2")))]
+ (const_string "neon_fp_addsub_s<q>")
+ (const_string "neon_sub<q>")))]
)
(define_insn "subdi3_neon"
@@ -519,75 +517,48 @@
default: gcc_unreachable ();
}
}
- [(set_attr "neon_type" "neon_int_2,*,*,*,neon_int_2")
+ [(set_attr "type" "neon_sub,multiple,multiple,multiple,neon_sub")
(set_attr "conds" "*,clob,clob,clob,*")
(set_attr "length" "*,8,8,8,*")
(set_attr "arch" "neon_for_64bits,*,*,*,avoid_neon_for_64bits")]
)
(define_insn "*mul<mode>3_neon"
- [(set (match_operand:VDQ 0 "s_register_operand" "=w")
- (mult:VDQ (match_operand:VDQ 1 "s_register_operand" "w")
- (match_operand:VDQ 2 "s_register_operand" "w")))]
+ [(set (match_operand:VDQW 0 "s_register_operand" "=w")
+ (mult:VDQW (match_operand:VDQW 1 "s_register_operand" "w")
+ (match_operand:VDQW 2 "s_register_operand" "w")))]
"TARGET_NEON && (!<Is_float_mode> || flag_unsafe_math_optimizations)"
"vmul.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (if_then_else (match_test "<Is_d_reg>")
- (if_then_else
- (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mul_qqq_8_16_32_ddd_32"))
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_qqq_8_16_32_ddd_32")
- (const_string "neon_mul_qqq_8_16_32_ddd_32")))))]
+ (const_string "neon_fp_mul_s<q>")
+ (const_string "neon_mul_<V_elem_ch><q>")))]
)
(define_insn "mul<mode>3add<mode>_neon"
- [(set (match_operand:VDQ 0 "s_register_operand" "=w")
- (plus:VDQ (mult:VDQ (match_operand:VDQ 2 "s_register_operand" "w")
- (match_operand:VDQ 3 "s_register_operand" "w"))
- (match_operand:VDQ 1 "s_register_operand" "0")))]
+ [(set (match_operand:VDQW 0 "s_register_operand" "=w")
+ (plus:VDQW (mult:VDQW (match_operand:VDQW 2 "s_register_operand" "w")
+ (match_operand:VDQW 3 "s_register_operand" "w"))
+ (match_operand:VDQW 1 "s_register_operand" "0")))]
"TARGET_NEON && (!<Is_float_mode> || flag_unsafe_math_optimizations)"
"vmla.<V_if_elem>\t%<V_reg>0, %<V_reg>2, %<V_reg>3"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vmla_ddd")
- (const_string "neon_fp_vmla_qqq"))
- (if_then_else (match_test "<Is_d_reg>")
- (if_then_else
- (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_qqq_8_16")
- (const_string "neon_mla_qqq_32_qqd_32_scalar")))))]
+ (const_string "neon_fp_mla_s<q>")
+ (const_string "neon_mla_<V_elem_ch><q>")))]
)
(define_insn "mul<mode>3neg<mode>add<mode>_neon"
- [(set (match_operand:VDQ 0 "s_register_operand" "=w")
- (minus:VDQ (match_operand:VDQ 1 "s_register_operand" "0")
- (mult:VDQ (match_operand:VDQ 2 "s_register_operand" "w")
- (match_operand:VDQ 3 "s_register_operand" "w"))))]
+ [(set (match_operand:VDQW 0 "s_register_operand" "=w")
+ (minus:VDQW (match_operand:VDQW 1 "s_register_operand" "0")
+ (mult:VDQW (match_operand:VDQW 2 "s_register_operand" "w")
+ (match_operand:VDQW 3 "s_register_operand" "w"))))]
"TARGET_NEON && (!<Is_float_mode> || flag_unsafe_math_optimizations)"
"vmls.<V_if_elem>\t%<V_reg>0, %<V_reg>2, %<V_reg>3"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vmla_ddd")
- (const_string "neon_fp_vmla_qqq"))
- (if_then_else (match_test "<Is_d_reg>")
- (if_then_else
- (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_qqq_8_16")
- (const_string "neon_mla_qqq_32_qqd_32_scalar")))))]
+ (const_string "neon_fp_mla_s<q>")
+ (const_string "neon_mla_<V_elem_ch><q>")))]
)
;; Fused multiply-accumulate
@@ -602,10 +573,7 @@
(match_operand:VCVTF 3 "register_operand" "0")))]
"TARGET_NEON && TARGET_FMA && flag_unsafe_math_optimizations"
"vfma%?.<V_if_elem>\\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vmla_ddd")
- (const_string "neon_fp_vmla_qqq")))]
+ [(set_attr "type" "neon_fp_mla_s<q>")]
)
(define_insn "fma<VCVTF:mode>4_intrinsic"
@@ -615,10 +583,7 @@
(match_operand:VCVTF 3 "register_operand" "0")))]
"TARGET_NEON && TARGET_FMA"
"vfma%?.<V_if_elem>\\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vmla_ddd")
- (const_string "neon_fp_vmla_qqq")))]
+ [(set_attr "type" "neon_fp_mla_s<q>")]
)
(define_insn "*fmsub<VCVTF:mode>4"
@@ -628,10 +593,7 @@
(match_operand:VCVTF 3 "register_operand" "0")))]
"TARGET_NEON && TARGET_FMA && flag_unsafe_math_optimizations"
"vfms%?.<V_if_elem>\\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vmla_ddd")
- (const_string "neon_fp_vmla_qqq")))]
+ [(set_attr "type" "neon_fp_mla_s<q>")]
)
(define_insn "fmsub<VCVTF:mode>4_intrinsic"
@@ -641,10 +603,7 @@
(match_operand:VCVTF 3 "register_operand" "0")))]
"TARGET_NEON && TARGET_FMA"
"vfms%?.<V_if_elem>\\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vmla_ddd")
- (const_string "neon_fp_vmla_qqq")))]
+ [(set_attr "type" "neon_fp_mla_s<q>")]
)
(define_insn "neon_vrint<NEON_VRINT:nvrint_variant><VCVTF:mode>"
@@ -654,10 +613,7 @@
NEON_VRINT))]
"TARGET_NEON && TARGET_FPU_ARMV8"
"vrint<nvrint_variant>%?.f32\\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_fp_round_<V_elem_ch><q>")]
)
(define_insn "ior<mode>3"
@@ -674,7 +630,7 @@
default: gcc_unreachable ();
}
}
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_logic<q>")]
)
;; The concrete forms of the Neon immediate-logic instructions are vbic and
@@ -696,7 +652,7 @@
default: gcc_unreachable ();
}
}
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_logic<q>")]
)
(define_insn "orn<mode>3_neon"
@@ -705,7 +661,7 @@
(match_operand:VDQ 1 "s_register_operand" "w")))]
"TARGET_NEON"
"vorn\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_logic<q>")]
)
;; TODO: investigate whether we should disable
@@ -743,7 +699,7 @@
DONE;
}
}"
- [(set_attr "neon_type" "neon_int_1,*,*,*")
+ [(set_attr "type" "neon_logic,multiple,multiple,multiple")
(set_attr "length" "*,16,8,8")
(set_attr "arch" "any,a,t2,t2")]
)
@@ -754,7 +710,7 @@
(match_operand:VDQ 1 "s_register_operand" "w")))]
"TARGET_NEON"
"vbic\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_logic<q>")]
)
;; Compare to *anddi_notdi_di.
@@ -767,7 +723,7 @@
vbic\t%P0, %P1, %P2
#
#"
- [(set_attr "neon_type" "neon_int_1,*,*")
+ [(set_attr "type" "neon_logic,multiple,multiple")
(set_attr "length" "*,8,8")]
)
@@ -777,7 +733,7 @@
(match_operand:VDQ 2 "s_register_operand" "w")))]
"TARGET_NEON"
"veor\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_logic<q>")]
)
(define_insn "one_cmpl<mode>2"
@@ -785,7 +741,7 @@
(not:VDQ (match_operand:VDQ 1 "s_register_operand" "w")))]
"TARGET_NEON"
"vmvn\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_move<q>")]
)
(define_insn "abs<mode>2"
@@ -793,12 +749,10 @@
(abs:VDQW (match_operand:VDQW 1 "s_register_operand" "w")))]
"TARGET_NEON"
"vabs.<V_s_elem>\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_3")))]
+ (const_string "neon_fp_abs_s<q>")
+ (const_string "neon_abs<q>")))]
)
(define_insn "neg<mode>2"
@@ -806,12 +760,10 @@
(neg:VDQW (match_operand:VDQW 1 "s_register_operand" "w")))]
"TARGET_NEON"
"vneg.<V_s_elem>\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_3")))]
+ (const_string "neon_fp_neg_s<q>")
+ (const_string "neon_neg<q>")))]
)
(define_insn "negdi2_neon"
@@ -821,7 +773,8 @@
(clobber (reg:CC CC_REGNUM))]
"TARGET_NEON"
"#"
- [(set_attr "length" "8")]
+ [(set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
; Split negdi2_neon for vfp registers
@@ -859,7 +812,7 @@
(match_operand:VDQIW 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vmin.<V_u_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_minmax<q>")]
)
(define_insn "*umax<mode>3_neon"
@@ -868,7 +821,7 @@
(match_operand:VDQIW 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vmax.<V_u_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_minmax<q>")]
)
(define_insn "*smin<mode>3_neon"
@@ -877,10 +830,10 @@
(match_operand:VDQW 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vmin.<V_s_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_minmax_s<q>")
+ (const_string "neon_minmax<q>")))]
)
(define_insn "*smax<mode>3_neon"
@@ -889,10 +842,10 @@
(match_operand:VDQW 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vmax.<V_s_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_minmax_s<q>")
+ (const_string "neon_minmax<q>")))]
)
; TODO: V2DI shifts are current disabled because there are bugs in the
@@ -915,10 +868,7 @@
default: gcc_unreachable ();
}
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_vshl_ddd")
- (const_string "neon_shift_3")))]
+ [(set_attr "type" "neon_shift_reg<q>, neon_shift_imm<q>")]
)
(define_insn "vashr<mode>3_imm"
@@ -931,10 +881,7 @@
<MODE>mode, VALID_NEON_QREG_MODE (<MODE>mode),
false);
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_vshl_ddd")
- (const_string "neon_shift_3")))]
+ [(set_attr "type" "neon_shift_imm<q>")]
)
(define_insn "vlshr<mode>3_imm"
@@ -947,10 +894,7 @@
<MODE>mode, VALID_NEON_QREG_MODE (<MODE>mode),
false);
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_vshl_ddd")
- (const_string "neon_shift_3")))]
+ [(set_attr "type" "neon_shift_imm<q>")]
)
; Used for implementing logical shift-right, which is a left-shift by a negative
@@ -965,10 +909,7 @@
UNSPEC_ASHIFT_SIGNED))]
"TARGET_NEON"
"vshl.<V_s_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_vshl_ddd")
- (const_string "neon_shift_3")))]
+ [(set_attr "type" "neon_shift_reg<q>")]
)
; Used for implementing logical shift-right, which is a left-shift by a negative
@@ -981,10 +922,7 @@
UNSPEC_ASHIFT_UNSIGNED))]
"TARGET_NEON"
"vshl.<V_u_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_vshl_ddd")
- (const_string "neon_shift_3")))]
+ [(set_attr "type" "neon_shift_reg<q>")]
)
(define_expand "vashr<mode>3"
@@ -1036,7 +974,7 @@
"@
vld1.32\t{%P0[0]}, %A1
vmov.32\t%P0[0], %1"
- [(set_attr "neon_type" "neon_vld1_vld2_lane,neon_mcr")]
+ [(set_attr "type" "neon_load1_1reg,neon_from_gp")]
)
(define_insn "ashldi3_neon_noclobber"
@@ -1049,7 +987,7 @@
"@
vshl.u64\t%P0, %P1, %2
vshl.u64\t%P0, %P1, %P2"
- [(set_attr "neon_type" "neon_vshl_ddd,neon_vshl_ddd")]
+ [(set_attr "type" "neon_shift_imm,neon_shift_reg")]
)
(define_insn_and_split "ashldi3_neon"
@@ -1100,7 +1038,8 @@
DONE;
}"
[(set_attr "arch" "neon_for_64bits,neon_for_64bits,*,*,avoid_neon_for_64bits,avoid_neon_for_64bits")
- (set_attr "opt" "*,*,speed,speed,*,*")]
+ (set_attr "opt" "*,*,speed,speed,*,*")
+ (set_attr "type" "multiple")]
)
; The shift amount needs to be negated for right-shifts
@@ -1111,7 +1050,7 @@
UNSPEC_ASHIFT_SIGNED))]
"TARGET_NEON && reload_completed"
"vshl.s64\t%P0, %P1, %P2"
- [(set_attr "neon_type" "neon_vshl_ddd")]
+ [(set_attr "type" "neon_shift_reg")]
)
; The shift amount needs to be negated for right-shifts
@@ -1122,7 +1061,7 @@
UNSPEC_ASHIFT_UNSIGNED))]
"TARGET_NEON && reload_completed"
"vshl.u64\t%P0, %P1, %P2"
- [(set_attr "neon_type" "neon_vshl_ddd")]
+ [(set_attr "type" "neon_shift_reg")]
)
(define_insn "ashrdi3_neon_imm_noclobber"
@@ -1132,7 +1071,7 @@
"TARGET_NEON && reload_completed
&& INTVAL (operands[2]) > 0 && INTVAL (operands[2]) <= 64"
"vshr.s64\t%P0, %P1, %2"
- [(set_attr "neon_type" "neon_vshl_ddd")]
+ [(set_attr "type" "neon_shift_imm")]
)
(define_insn "lshrdi3_neon_imm_noclobber"
@@ -1142,7 +1081,7 @@
"TARGET_NEON && reload_completed
&& INTVAL (operands[2]) > 0 && INTVAL (operands[2]) <= 64"
"vshr.u64\t%P0, %P1, %2"
- [(set_attr "neon_type" "neon_vshl_ddd")]
+ [(set_attr "type" "neon_shift_imm")]
)
;; ashrdi3_neon
@@ -1201,7 +1140,8 @@
DONE;
}"
[(set_attr "arch" "neon_for_64bits,neon_for_64bits,*,*,avoid_neon_for_64bits,avoid_neon_for_64bits")
- (set_attr "opt" "*,*,speed,speed,*,*")]
+ (set_attr "opt" "*,*,speed,speed,*,*")
+ (set_attr "type" "multiple")]
)
;; Widening operations
@@ -1213,7 +1153,7 @@
(match_operand:<V_widen> 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vaddw.<V_s_elem>\t%q0, %q2, %P1"
- [(set_attr "neon_type" "neon_int_3")]
+ [(set_attr "type" "neon_add_widen")]
)
(define_insn "widen_usum<mode>3"
@@ -1223,7 +1163,7 @@
(match_operand:<V_widen> 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vaddw.<V_u_elem>\t%q0, %q2, %P1"
- [(set_attr "neon_type" "neon_int_3")]
+ [(set_attr "type" "neon_add_widen")]
)
;; VEXT can be used to synthesize coarse whole-vector shifts with 8-bit
@@ -1307,9 +1247,7 @@
"TARGET_NEON"
"<VQH_mnem>.<VQH_sign>32\t%P0, %e1, %f1"
[(set_attr "vqh_mnem" "<VQH_mnem>")
- (set (attr "neon_type")
- (if_then_else (eq_attr "vqh_mnem" "vadd")
- (const_string "neon_int_1") (const_string "neon_int_5")))]
+ (set_attr "type" "neon_reduc_<VQH_type>_q")]
)
(define_insn "quad_halves_<code>v4sf"
@@ -1322,9 +1260,7 @@
"TARGET_NEON && flag_unsafe_math_optimizations"
"<VQH_mnem>.f32\t%P0, %e1, %f1"
[(set_attr "vqh_mnem" "<VQH_mnem>")
- (set (attr "neon_type")
- (if_then_else (eq_attr "vqh_mnem" "vadd")
- (const_string "neon_int_1") (const_string "neon_int_5")))]
+ (set_attr "type" "neon_fp_reduc_<VQH_type>_s_q")]
)
(define_insn "quad_halves_<code>v8hi"
@@ -1339,9 +1275,7 @@
"TARGET_NEON"
"<VQH_mnem>.<VQH_sign>16\t%P0, %e1, %f1"
[(set_attr "vqh_mnem" "<VQH_mnem>")
- (set (attr "neon_type")
- (if_then_else (eq_attr "vqh_mnem" "vadd")
- (const_string "neon_int_1") (const_string "neon_int_5")))]
+ (set_attr "type" "neon_reduc_<VQH_type>_q")]
)
(define_insn "quad_halves_<code>v16qi"
@@ -1360,9 +1294,7 @@
"TARGET_NEON"
"<VQH_mnem>.<VQH_sign>8\t%P0, %e1, %f1"
[(set_attr "vqh_mnem" "<VQH_mnem>")
- (set (attr "neon_type")
- (if_then_else (eq_attr "vqh_mnem" "vadd")
- (const_string "neon_int_1") (const_string "neon_int_5")))]
+ (set_attr "type" "neon_reduc_<VQH_type>_q")]
)
(define_expand "move_hi_quad_<mode>"
@@ -1421,7 +1353,7 @@
UNSPEC_VPADD))]
"TARGET_NEON && !BYTES_BIG_ENDIAN"
"vadd.i64\t%e0, %e1, %f1"
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_add_q")]
)
;; NEON does not distinguish between signed and unsigned addition except on
@@ -1545,12 +1477,10 @@
"TARGET_NEON"
"vpadd.<V_if_elem>\t%P0, %P1, %P2"
;; Assume this schedules like vadd.
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_1")))]
+ (const_string "neon_fp_reduc_add_s<q>")
+ (const_string "neon_reduc_add<q>")))]
)
(define_insn "neon_vpsmin<mode>"
@@ -1560,11 +1490,10 @@
UNSPEC_VPSMIN))]
"TARGET_NEON"
"vpmin.<V_s_elem>\t%P0, %P1, %P2"
- ;; Assume this schedules like vmin.
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_reduc_minmax_s<q>")
+ (const_string "neon_reduc_minmax<q>")))]
)
(define_insn "neon_vpsmax<mode>"
@@ -1574,11 +1503,10 @@
UNSPEC_VPSMAX))]
"TARGET_NEON"
"vpmax.<V_s_elem>\t%P0, %P1, %P2"
- ;; Assume this schedules like vmax.
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_reduc_minmax_s<q>")
+ (const_string "neon_reduc_minmax<q>")))]
)
(define_insn "neon_vpumin<mode>"
@@ -1588,8 +1516,7 @@
UNSPEC_VPUMIN))]
"TARGET_NEON"
"vpmin.<V_u_elem>\t%P0, %P1, %P2"
- ;; Assume this schedules like umin.
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_reduc_minmax<q>")]
)
(define_insn "neon_vpumax<mode>"
@@ -1599,8 +1526,7 @@
UNSPEC_VPUMAX))]
"TARGET_NEON"
"vpmax.<V_u_elem>\t%P0, %P1, %P2"
- ;; Assume this schedules like umax.
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_reduc_minmax<q>")]
)
;; Saturating arithmetic
@@ -1617,7 +1543,7 @@
(match_operand:VD 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vqadd.<V_s_elem>\t%P0, %P1, %P2"
- [(set_attr "neon_type" "neon_int_4")]
+ [(set_attr "type" "neon_qadd<q>")]
)
(define_insn "*us_add<mode>_neon"
@@ -1626,7 +1552,7 @@
(match_operand:VD 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vqadd.<V_u_elem>\t%P0, %P1, %P2"
- [(set_attr "neon_type" "neon_int_4")]
+ [(set_attr "type" "neon_qadd<q>")]
)
(define_insn "*ss_sub<mode>_neon"
@@ -1635,7 +1561,7 @@
(match_operand:VD 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vqsub.<V_s_elem>\t%P0, %P1, %P2"
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_qsub<q>")]
)
(define_insn "*us_sub<mode>_neon"
@@ -1644,7 +1570,7 @@
(match_operand:VD 2 "s_register_operand" "w")))]
"TARGET_NEON"
"vqsub.<V_u_elem>\t%P0, %P1, %P2"
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_qsub<q>")]
)
;; Conditional instructions. These are comparisons with conditional moves for
@@ -1936,12 +1862,10 @@
UNSPEC_VADD))]
"TARGET_NEON"
"vadd.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_1")))]
+ (const_string "neon_fp_addsub_s<q>")
+ (const_string "neon_add<q>")))]
)
; operand 3 represents in bits:
@@ -1956,7 +1880,7 @@
UNSPEC_VADDL))]
"TARGET_NEON"
"vaddl.%T3%#<V_sz_elem>\t%q0, %P1, %P2"
- [(set_attr "neon_type" "neon_int_3")]
+ [(set_attr "type" "neon_add_long")]
)
(define_insn "neon_vaddw<mode>"
@@ -1967,7 +1891,7 @@
UNSPEC_VADDW))]
"TARGET_NEON"
"vaddw.%T3%#<V_sz_elem>\t%q0, %q1, %P2"
- [(set_attr "neon_type" "neon_int_2")]
+ [(set_attr "type" "neon_add_widen")]
)
; vhadd and vrhadd.
@@ -1980,7 +1904,7 @@
UNSPEC_VHADD))]
"TARGET_NEON"
"v%O3hadd.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_4")]
+ [(set_attr "type" "neon_add_halve_q")]
)
(define_insn "neon_vqadd<mode>"
@@ -1991,7 +1915,7 @@
UNSPEC_VQADD))]
"TARGET_NEON"
"vqadd.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_4")]
+ [(set_attr "type" "neon_qadd<q>")]
)
(define_insn "neon_vaddhn<mode>"
@@ -2002,7 +1926,7 @@
UNSPEC_VADDHN))]
"TARGET_NEON"
"v%O3addhn.<V_if_elem>\t%P0, %q1, %q2"
- [(set_attr "neon_type" "neon_int_4")]
+ [(set_attr "type" "neon_add_halve_narrow_q")]
)
;; We cannot replace this unspec with mul<mode>3 because of the odd
@@ -2015,19 +1939,10 @@
UNSPEC_VMUL))]
"TARGET_NEON"
"vmul.%F3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (if_then_else (match_test "<Is_d_reg>")
- (if_then_else
- (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mul_qqq_8_16_32_ddd_32"))
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_qqq_8_16_32_ddd_32")
- (const_string "neon_mul_qqq_8_16_32_ddd_32")))))]
+ (const_string "neon_fp_mul_s<q>")
+ (const_string "neon_mul_<V_elem_ch><q>")))]
)
(define_expand "neon_vmla<mode>"
@@ -2076,26 +1991,17 @@
; Used for intrinsics when flag_unsafe_math_optimizations is false.
(define_insn "neon_vmla<mode>_unspec"
- [(set (match_operand:VDQ 0 "s_register_operand" "=w")
- (unspec:VDQ [(match_operand:VDQ 1 "s_register_operand" "0")
- (match_operand:VDQ 2 "s_register_operand" "w")
- (match_operand:VDQ 3 "s_register_operand" "w")]
+ [(set (match_operand:VDQW 0 "s_register_operand" "=w")
+ (unspec:VDQW [(match_operand:VDQW 1 "s_register_operand" "0")
+ (match_operand:VDQW 2 "s_register_operand" "w")
+ (match_operand:VDQW 3 "s_register_operand" "w")]
UNSPEC_VMLA))]
"TARGET_NEON"
"vmla.<V_if_elem>\t%<V_reg>0, %<V_reg>2, %<V_reg>3"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vmla_ddd")
- (const_string "neon_fp_vmla_qqq"))
- (if_then_else (match_test "<Is_d_reg>")
- (if_then_else
- (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_qqq_8_16")
- (const_string "neon_mla_qqq_32_qqd_32_scalar")))))]
+ (const_string "neon_fp_mla_s<q>")
+ (const_string "neon_mla_<V_elem_ch><q>")))]
)
(define_insn "neon_vmlal<mode>"
@@ -2107,10 +2013,7 @@
UNSPEC_VMLAL))]
"TARGET_NEON"
"vmlal.%T4%#<V_sz_elem>\t%q0, %P2, %P3"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")))]
+ [(set_attr "type" "neon_mla_<V_elem_ch>_long")]
)
(define_expand "neon_vmls<mode>"
@@ -2133,27 +2036,17 @@
; Used for intrinsics when flag_unsafe_math_optimizations is false.
(define_insn "neon_vmls<mode>_unspec"
- [(set (match_operand:VDQ 0 "s_register_operand" "=w")
- (unspec:VDQ [(match_operand:VDQ 1 "s_register_operand" "0")
- (match_operand:VDQ 2 "s_register_operand" "w")
- (match_operand:VDQ 3 "s_register_operand" "w")]
+ [(set (match_operand:VDQW 0 "s_register_operand" "=w")
+ (unspec:VDQW [(match_operand:VDQW 1 "s_register_operand" "0")
+ (match_operand:VDQW 2 "s_register_operand" "w")
+ (match_operand:VDQW 3 "s_register_operand" "w")]
UNSPEC_VMLS))]
"TARGET_NEON"
"vmls.<V_if_elem>\t%<V_reg>0, %<V_reg>2, %<V_reg>3"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vmla_ddd")
- (const_string "neon_fp_vmla_qqq"))
- (if_then_else (match_test "<Is_d_reg>")
- (if_then_else
- (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))
- (if_then_else
- (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_qqq_8_16")
- (const_string "neon_mla_qqq_32_qqd_32_scalar")))))]
+ (const_string "neon_fp_mla_s<q>")
+ (const_string "neon_mla_<V_elem_ch><q>")))]
)
(define_insn "neon_vmlsl<mode>"
@@ -2165,10 +2058,7 @@
UNSPEC_VMLSL))]
"TARGET_NEON"
"vmlsl.%T4%#<V_sz_elem>\t%q0, %P2, %P3"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")))]
+ [(set_attr "type" "neon_mla_<V_elem_ch>_long")]
)
(define_insn "neon_vqdmulh<mode>"
@@ -2179,14 +2069,7 @@
UNSPEC_VQDMULH))]
"TARGET_NEON"
"vq%O3dmulh.<V_s_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mul_qqq_8_16_32_ddd_32"))
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_qqq_8_16_32_ddd_32")
- (const_string "neon_mul_qqq_8_16_32_ddd_32"))))]
+ [(set_attr "type" "neon_sat_mul_<V_elem_ch><q>")]
)
(define_insn "neon_vqdmlal<mode>"
@@ -2198,10 +2081,7 @@
UNSPEC_VQDMLAL))]
"TARGET_NEON"
"vqdmlal.<V_s_elem>\t%q0, %P2, %P3"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")))]
+ [(set_attr "type" "neon_sat_mla_<V_elem_ch>_long")]
)
(define_insn "neon_vqdmlsl<mode>"
@@ -2213,10 +2093,7 @@
UNSPEC_VQDMLSL))]
"TARGET_NEON"
"vqdmlsl.<V_s_elem>\t%q0, %P2, %P3"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")))]
+ [(set_attr "type" "neon_sat_mla_<V_elem_ch>_long")]
)
(define_insn "neon_vmull<mode>"
@@ -2227,10 +2104,7 @@
UNSPEC_VMULL))]
"TARGET_NEON"
"vmull.%T3%#<V_sz_elem>\t%q0, %P1, %P2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")))]
+ [(set_attr "type" "neon_mul_<V_elem_ch>_long")]
)
(define_insn "neon_vqdmull<mode>"
@@ -2241,10 +2115,7 @@
UNSPEC_VQDMULL))]
"TARGET_NEON"
"vqdmull.<V_s_elem>\t%q0, %P1, %P2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_ddd_8_16_qdd_16_8_long_32_16_long")
- (const_string "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")))]
+ [(set_attr "type" "neon_sat_mul_<V_elem_ch>_long")]
)
(define_expand "neon_vsub<mode>"
@@ -2271,12 +2142,10 @@
UNSPEC_VSUB))]
"TARGET_NEON"
"vsub.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_2")))]
+ (const_string "neon_fp_addsub_s<q>")
+ (const_string "neon_sub<q>")))]
)
(define_insn "neon_vsubl<mode>"
@@ -2287,7 +2156,7 @@
UNSPEC_VSUBL))]
"TARGET_NEON"
"vsubl.%T3%#<V_sz_elem>\t%q0, %P1, %P2"
- [(set_attr "neon_type" "neon_int_2")]
+ [(set_attr "type" "neon_sub_long")]
)
(define_insn "neon_vsubw<mode>"
@@ -2298,7 +2167,7 @@
UNSPEC_VSUBW))]
"TARGET_NEON"
"vsubw.%T3%#<V_sz_elem>\t%q0, %q1, %P2"
- [(set_attr "neon_type" "neon_int_2")]
+ [(set_attr "type" "neon_sub_widen")]
)
(define_insn "neon_vqsub<mode>"
@@ -2309,7 +2178,7 @@
UNSPEC_VQSUB))]
"TARGET_NEON"
"vqsub.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_qsub<q>")]
)
(define_insn "neon_vhsub<mode>"
@@ -2320,7 +2189,7 @@
UNSPEC_VHSUB))]
"TARGET_NEON"
"vhsub.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_sub_halve<q>")]
)
(define_insn "neon_vsubhn<mode>"
@@ -2331,7 +2200,7 @@
UNSPEC_VSUBHN))]
"TARGET_NEON"
"v%O3subhn.<V_if_elem>\t%P0, %q1, %q2"
- [(set_attr "neon_type" "neon_int_4")]
+ [(set_attr "type" "neon_sub_halve_narrow_q")]
)
(define_insn "neon_vceq<mode>"
@@ -2345,12 +2214,12 @@
"@
vceq.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2
vceq.<V_if_elem>\t%<V_reg>0, %<V_reg>1, #0"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_compare_s<q>")
+ (if_then_else (match_operand 2 "zero_operand")
+ (const_string "neon_compare_zero<q>")
+ (const_string "neon_compare<q>"))))]
)
(define_insn "neon_vcge<mode>"
@@ -2364,12 +2233,12 @@
"@
vcge.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2
vcge.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, #0"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_compare_s<q>")
+ (if_then_else (match_operand 2 "zero_operand")
+ (const_string "neon_compare_zero<q>")
+ (const_string "neon_compare<q>"))))]
)
(define_insn "neon_vcgeu<mode>"
@@ -2381,7 +2250,7 @@
UNSPEC_VCGEU))]
"TARGET_NEON"
"vcge.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_compare<q>")]
)
(define_insn "neon_vcgt<mode>"
@@ -2395,12 +2264,12 @@
"@
vcgt.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2
vcgt.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, #0"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_compare_s<q>")
+ (if_then_else (match_operand 2 "zero_operand")
+ (const_string "neon_compare_zero<q>")
+ (const_string "neon_compare<q>"))))]
)
(define_insn "neon_vcgtu<mode>"
@@ -2412,7 +2281,7 @@
UNSPEC_VCGTU))]
"TARGET_NEON"
"vcgt.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_compare<q>")]
)
;; VCLE and VCLT only support comparisons with immediate zero (register
@@ -2427,12 +2296,12 @@
UNSPEC_VCLE))]
"TARGET_NEON"
"vcle.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, #0"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_compare_s<q>")
+ (if_then_else (match_operand 2 "zero_operand")
+ (const_string "neon_compare_zero<q>")
+ (const_string "neon_compare<q>"))))]
)
(define_insn "neon_vclt<mode>"
@@ -2444,12 +2313,12 @@
UNSPEC_VCLT))]
"TARGET_NEON"
"vclt.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, #0"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_compare_s<q>")
+ (if_then_else (match_operand 2 "zero_operand")
+ (const_string "neon_compare_zero<q>")
+ (const_string "neon_compare<q>"))))]
)
(define_insn "neon_vcage<mode>"
@@ -2460,10 +2329,7 @@
UNSPEC_VCAGE))]
"TARGET_NEON"
"vacge.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_fp_compare_s<q>")]
)
(define_insn "neon_vcagt<mode>"
@@ -2474,10 +2340,7 @@
UNSPEC_VCAGT))]
"TARGET_NEON"
"vacgt.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_fp_compare_s<q>")]
)
(define_insn "neon_vtst<mode>"
@@ -2488,7 +2351,7 @@
UNSPEC_VTST))]
"TARGET_NEON"
"vtst.<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set_attr "neon_type" "neon_int_4")]
+ [(set_attr "type" "neon_tst<q>")]
)
(define_insn "neon_vabd<mode>"
@@ -2499,12 +2362,10 @@
UNSPEC_VABD))]
"TARGET_NEON"
"vabd.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_abd_s<q>")
+ (const_string "neon_abd<q>")))]
)
(define_insn "neon_vabdl<mode>"
@@ -2515,7 +2376,7 @@
UNSPEC_VABDL))]
"TARGET_NEON"
"vabdl.%T3%#<V_sz_elem>\t%q0, %P1, %P2"
- [(set_attr "neon_type" "neon_int_5")]
+ [(set_attr "type" "neon_abd_long")]
)
(define_insn "neon_vaba<mode>"
@@ -2527,9 +2388,7 @@
(match_operand:VDQIW 1 "s_register_operand" "0")))]
"TARGET_NEON"
"vaba.%T4%#<V_sz_elem>\t%<V_reg>0, %<V_reg>2, %<V_reg>3"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_vaba") (const_string "neon_vaba_qqq")))]
+ [(set_attr "type" "neon_arith_acc<q>")]
)
(define_insn "neon_vabal<mode>"
@@ -2541,7 +2400,7 @@
(match_operand:<V_widen> 1 "s_register_operand" "0")))]
"TARGET_NEON"
"vabal.%T4%#<V_sz_elem>\t%q0, %P2, %P3"
- [(set_attr "neon_type" "neon_vaba")]
+ [(set_attr "type" "neon_arith_acc<q>")]
)
(define_insn "neon_vmax<mode>"
@@ -2552,12 +2411,10 @@
UNSPEC_VMAX))]
"TARGET_NEON"
"vmax.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_minmax_s<q>")
+ (const_string "neon_minmax<q>")))]
)
(define_insn "neon_vmin<mode>"
@@ -2568,12 +2425,10 @@
UNSPEC_VMIN))]
"TARGET_NEON"
"vmin.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_minmax_s<q>")
+ (const_string "neon_minmax<q>")))]
)
(define_expand "neon_vpadd<mode>"
@@ -2595,8 +2450,7 @@
UNSPEC_VPADDL))]
"TARGET_NEON"
"vpaddl.%T2%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1"
- ;; Assume this schedules like vaddl.
- [(set_attr "neon_type" "neon_int_3")]
+ [(set_attr "type" "neon_reduc_add_long")]
)
(define_insn "neon_vpadal<mode>"
@@ -2607,8 +2461,7 @@
UNSPEC_VPADAL))]
"TARGET_NEON"
"vpadal.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>2"
- ;; Assume this schedules like vpadd.
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_reduc_add_acc")]
)
(define_insn "neon_vpmax<mode>"
@@ -2619,11 +2472,10 @@
UNSPEC_VPMAX))]
"TARGET_NEON"
"vpmax.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- ;; Assume this schedules like vmax.
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_reduc_minmax_s<q>")
+ (const_string "neon_reduc_minmax<q>")))]
)
(define_insn "neon_vpmin<mode>"
@@ -2634,11 +2486,10 @@
UNSPEC_VPMIN))]
"TARGET_NEON"
"vpmin.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- ;; Assume this schedules like vmin.
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_reduc_minmax_s<q>")
+ (const_string "neon_reduc_minmax<q>")))]
)
(define_insn "neon_vrecps<mode>"
@@ -2649,10 +2500,7 @@
UNSPEC_VRECPS))]
"TARGET_NEON"
"vrecps.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vrecps_vrsqrts_ddd")
- (const_string "neon_fp_vrecps_vrsqrts_qqq")))]
+ [(set_attr "type" "neon_fp_recps_s<q>")]
)
(define_insn "neon_vrsqrts<mode>"
@@ -2663,10 +2511,7 @@
UNSPEC_VRSQRTS))]
"TARGET_NEON"
"vrsqrts.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vrecps_vrsqrts_ddd")
- (const_string "neon_fp_vrecps_vrsqrts_qqq")))]
+ [(set_attr "type" "neon_fp_rsqrts_s<q>")]
)
(define_expand "neon_vabs<mode>"
@@ -2686,7 +2531,7 @@
UNSPEC_VQABS))]
"TARGET_NEON"
"vqabs.<V_s_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_vqneg_vqabs")]
+ [(set_attr "type" "neon_qabs<q>")]
)
(define_expand "neon_vneg<mode>"
@@ -2706,7 +2551,7 @@
UNSPEC_VQNEG))]
"TARGET_NEON"
"vqneg.<V_s_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_vqneg_vqabs")]
+ [(set_attr "type" "neon_qneg<q>")]
)
(define_insn "neon_vcls<mode>"
@@ -2716,7 +2561,7 @@
UNSPEC_VCLS))]
"TARGET_NEON"
"vcls.<V_s_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_cls<q>")]
)
(define_insn "clz<mode>2"
@@ -2724,7 +2569,7 @@
(clz:VDQIW (match_operand:VDQIW 1 "s_register_operand" "w")))]
"TARGET_NEON"
"vclz.<V_if_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_cnt<q>")]
)
(define_expand "neon_vclz<mode>"
@@ -2742,7 +2587,7 @@
(popcount:VE (match_operand:VE 1 "s_register_operand" "w")))]
"TARGET_NEON"
"vcnt.<V_sz_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_cnt<q>")]
)
(define_expand "neon_vcnt<mode>"
@@ -2762,10 +2607,7 @@
UNSPEC_VRECPE))]
"TARGET_NEON"
"vrecpe.<V_u_elem>\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_fp_recpe_s<q>")]
)
(define_insn "neon_vrsqrte<mode>"
@@ -2775,10 +2617,7 @@
UNSPEC_VRSQRTE))]
"TARGET_NEON"
"vrsqrte.<V_u_elem>\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_fp_rsqrte_s<q>")]
)
(define_expand "neon_vmvn<mode>"
@@ -2807,7 +2646,7 @@
}
return "vmov.s<V_sz_elem>\t%0, %P1[%c2]";
}
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_to_gp")]
)
(define_insn "neon_vget_lane<mode>_zext_internal"
@@ -2826,7 +2665,7 @@
}
return "vmov.u<V_sz_elem>\t%0, %P1[%c2]";
}
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_to_gp")]
)
(define_insn "neon_vget_lane<mode>_sext_internal"
@@ -2853,7 +2692,7 @@
return "";
}
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_to_gp_q")]
)
(define_insn "neon_vget_lane<mode>_zext_internal"
@@ -2880,7 +2719,7 @@
return "";
}
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_to_gp_q")]
)
(define_expand "neon_vget_lane<mode>"
@@ -3012,8 +2851,7 @@
(vec_duplicate:VX (match_operand:<V_elem> 1 "s_register_operand" "r")))]
"TARGET_NEON"
"vdup.<V_sz_elem>\t%<V_reg>0, %1"
- ;; Assume this schedules like vmov.
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_from_gp<q>")]
)
(define_insn "neon_vdup_n<mode>"
@@ -3023,8 +2861,7 @@
"@
vdup.<V_sz_elem>\t%<V_reg>0, %1
vdup.<V_sz_elem>\t%<V_reg>0, %y1"
- ;; Assume this schedules like vmov.
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_from_gp<q>,neon_dup<q>")]
)
(define_expand "neon_vdup_ndi"
@@ -3045,7 +2882,7 @@
vmov\t%e0, %Q1, %R1\;vmov\t%f0, %Q1, %R1
vmov\t%e0, %P1\;vmov\t%f0, %P1"
[(set_attr "length" "8")
- (set_attr "neon_type" "neon_bp_simple")]
+ (set_attr "type" "multiple")]
)
(define_insn "neon_vdup_lane<mode>_internal"
@@ -3067,8 +2904,7 @@
else
return "vdup.<V_sz_elem>\t%q0, %P1[%c2]";
}
- ;; Assume this schedules like vmov.
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_dup<q>")]
)
(define_expand "neon_vdup_lane<mode>"
@@ -3123,10 +2959,7 @@
(set (match_dup 1) (match_dup 0))]
"TARGET_NEON && reload_completed"
"vswp\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_bp_simple")
- (const_string "neon_bp_2cycle")))]
+ [(set_attr "type" "neon_permute<q>")]
)
;; In this insn, operand 1 should be low, and operand 2 the high part of the
@@ -3148,7 +2981,9 @@
{
neon_split_vcombine (operands);
DONE;
-})
+}
+[(set_attr "type" "multiple")]
+)
(define_expand "neon_vget_high<mode>"
[(match_operand:<V_HALF> 0 "s_register_operand")
@@ -3177,10 +3012,7 @@
(float:<V_CVTTO> (match_operand:VCVTI 1 "s_register_operand" "w")))]
"TARGET_NEON && !flag_rounding_math"
"vcvt.f32.s32\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_int_to_fp_<V_elem_ch><q>")]
)
(define_insn "floatuns<mode><V_cvtto>2"
@@ -3188,10 +3020,7 @@
(unsigned_float:<V_CVTTO> (match_operand:VCVTI 1 "s_register_operand" "w")))]
"TARGET_NEON && !flag_rounding_math"
"vcvt.f32.u32\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_int_to_fp_<V_elem_ch><q>")]
)
(define_insn "fix_trunc<mode><V_cvtto>2"
@@ -3199,10 +3028,7 @@
(fix:<V_CVTTO> (match_operand:VCVTF 1 "s_register_operand" "w")))]
"TARGET_NEON"
"vcvt.s32.f32\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_fp_to_int_<V_elem_ch><q>")]
)
(define_insn "fixuns_trunc<mode><V_cvtto>2"
@@ -3210,10 +3036,7 @@
(unsigned_fix:<V_CVTTO> (match_operand:VCVTF 1 "s_register_operand" "w")))]
"TARGET_NEON"
"vcvt.u32.f32\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_fp_to_int_<V_elem_ch><q>")]
)
(define_insn "neon_vcvt<mode>"
@@ -3223,10 +3046,7 @@
UNSPEC_VCVT))]
"TARGET_NEON"
"vcvt.%T2%#32.f32\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_fp_to_int_<V_elem_ch><q>")]
)
(define_insn "neon_vcvt<mode>"
@@ -3236,10 +3056,7 @@
UNSPEC_VCVT))]
"TARGET_NEON"
"vcvt.f32.%T2%#32\t%<V_reg>0, %<V_reg>1"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_int_to_fp_<V_elem_ch><q>")]
)
(define_insn "neon_vcvtv4sfv4hf"
@@ -3248,7 +3065,7 @@
UNSPEC_VCVT))]
"TARGET_NEON && TARGET_FP16"
"vcvt.f32.f16\t%q0, %P1"
- [(set_attr "neon_type" "neon_fp_vadd_ddd_vabs_dd")]
+ [(set_attr "type" "neon_fp_cvt_widen_h")]
)
(define_insn "neon_vcvtv4hfv4sf"
@@ -3257,7 +3074,7 @@
UNSPEC_VCVT))]
"TARGET_NEON && TARGET_FP16"
"vcvt.f16.f32\t%P0, %q1"
- [(set_attr "neon_type" "neon_fp_vadd_ddd_vabs_dd")]
+ [(set_attr "type" "neon_fp_cvt_narrow_s_q")]
)
(define_insn "neon_vcvt_n<mode>"
@@ -3271,10 +3088,7 @@
neon_const_bounds (operands[2], 1, 33);
return "vcvt.%T3%#32.f32\t%<V_reg>0, %<V_reg>1, %2";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_fp_to_int_<V_elem_ch><q>")]
)
(define_insn "neon_vcvt_n<mode>"
@@ -3288,10 +3102,7 @@
neon_const_bounds (operands[2], 1, 33);
return "vcvt.f32.%T3%#32\t%<V_reg>0, %<V_reg>1, %2";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq")))]
+ [(set_attr "type" "neon_int_to_fp_<V_elem_ch><q>")]
)
(define_insn "neon_vmovn<mode>"
@@ -3301,7 +3112,7 @@
UNSPEC_VMOVN))]
"TARGET_NEON"
"vmovn.<V_if_elem>\t%P0, %q1"
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_shift_imm_narrow_q")]
)
(define_insn "neon_vqmovn<mode>"
@@ -3311,7 +3122,7 @@
UNSPEC_VQMOVN))]
"TARGET_NEON"
"vqmovn.%T2%#<V_sz_elem>\t%P0, %q1"
- [(set_attr "neon_type" "neon_shift_2")]
+ [(set_attr "type" "neon_sat_shift_imm_narrow_q")]
)
(define_insn "neon_vqmovun<mode>"
@@ -3321,7 +3132,7 @@
UNSPEC_VQMOVUN))]
"TARGET_NEON"
"vqmovun.<V_s_elem>\t%P0, %q1"
- [(set_attr "neon_type" "neon_shift_2")]
+ [(set_attr "type" "neon_sat_shift_imm_narrow_q")]
)
(define_insn "neon_vmovl<mode>"
@@ -3331,7 +3142,7 @@
UNSPEC_VMOVL))]
"TARGET_NEON"
"vmovl.%T2%#<V_sz_elem>\t%q0, %P1"
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_shift_imm_long")]
)
(define_insn "neon_vmul_lane<mode>"
@@ -3347,12 +3158,10 @@
neon_lane_bounds (operands[3], 0, GET_MODE_NUNITS (<MODE>mode));
return "vmul.<V_if_elem>\t%P0, %P1, %P2[%c3]";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vmul_ddd")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_ddd_16_scalar_32_16_long_scalar")
- (const_string "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar"))))]
+ (const_string "neon_fp_mul_s_scalar<q>")
+ (const_string "neon_mul_<V_elem_ch>_scalar<q>")))]
)
(define_insn "neon_vmul_lane<mode>"
@@ -3368,12 +3177,10 @@
neon_lane_bounds (operands[3], 0, GET_MODE_NUNITS (<V_HALF>mode));
return "vmul.<V_if_elem>\t%q0, %q1, %P2[%c3]";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vmul_qqd")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")
- (const_string "neon_mul_qqd_32_scalar"))))]
+ (const_string "neon_fp_mul_s_scalar<q>")
+ (const_string "neon_mul_<V_elem_ch>_scalar<q>")))]
)
(define_insn "neon_vmull_lane<mode>"
@@ -3389,10 +3196,7 @@
neon_lane_bounds (operands[3], 0, GET_MODE_NUNITS (<MODE>mode));
return "vmull.%T4%#<V_sz_elem>\t%q0, %P1, %P2[%c3]";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_ddd_16_scalar_32_16_long_scalar")
- (const_string "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")))]
+ [(set_attr "type" "neon_mul_<V_elem_ch>_scalar_long")]
)
(define_insn "neon_vqdmull_lane<mode>"
@@ -3408,10 +3212,7 @@
neon_lane_bounds (operands[3], 0, GET_MODE_NUNITS (<MODE>mode));
return "vqdmull.<V_s_elem>\t%q0, %P1, %P2[%c3]";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_ddd_16_scalar_32_16_long_scalar")
- (const_string "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")))]
+ [(set_attr "type" "neon_sat_mul_<V_elem_ch>_scalar_long")]
)
(define_insn "neon_vqdmulh_lane<mode>"
@@ -3427,10 +3228,7 @@
neon_lane_bounds (operands[3], 0, GET_MODE_NUNITS (<MODE>mode));
return "vq%O4dmulh.%T4%#<V_sz_elem>\t%q0, %q1, %P2[%c3]";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")
- (const_string "neon_mul_qqd_32_scalar")))]
+ [(set_attr "type" "neon_sat_mul_<V_elem_ch>_scalar_q")]
)
(define_insn "neon_vqdmulh_lane<mode>"
@@ -3446,10 +3244,7 @@
neon_lane_bounds (operands[3], 0, GET_MODE_NUNITS (<MODE>mode));
return "vq%O4dmulh.%T4%#<V_sz_elem>\t%P0, %P1, %P2[%c3]";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mul_ddd_16_scalar_32_16_long_scalar")
- (const_string "neon_mul_qdd_64_32_long_qqd_16_ddd_32_scalar_64_32_long_scalar")))]
+ [(set_attr "type" "neon_sat_mul_<V_elem_ch>_scalar_q")]
)
(define_insn "neon_vmla_lane<mode>"
@@ -3466,12 +3261,10 @@
neon_lane_bounds (operands[4], 0, GET_MODE_NUNITS (<MODE>mode));
return "vmla.<V_if_elem>\t%P0, %P2, %P3[%c4]";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vmla_ddd_scalar")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))))]
+ (const_string "neon_fp_mla_s_scalar<q>")
+ (const_string "neon_mla_<V_elem_ch>_scalar<q>")))]
)
(define_insn "neon_vmla_lane<mode>"
@@ -3488,12 +3281,10 @@
neon_lane_bounds (operands[4], 0, GET_MODE_NUNITS (<MODE>mode));
return "vmla.<V_if_elem>\t%q0, %q2, %P3[%c4]";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vmla_qqq_scalar")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")
- (const_string "neon_mla_qqq_32_qqd_32_scalar"))))]
+ (const_string "neon_fp_mla_s_scalar<q>")
+ (const_string "neon_mla_<V_elem_ch>_scalar<q>")))]
)
(define_insn "neon_vmlal_lane<mode>"
@@ -3510,10 +3301,7 @@
neon_lane_bounds (operands[4], 0, GET_MODE_NUNITS (<MODE>mode));
return "vmlal.%T5%#<V_sz_elem>\t%q0, %P2, %P3[%c4]";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")))]
+ [(set_attr "type" "neon_mla_<V_elem_ch>_scalar_long")]
)
(define_insn "neon_vqdmlal_lane<mode>"
@@ -3530,10 +3318,7 @@
neon_lane_bounds (operands[4], 0, GET_MODE_NUNITS (<MODE>mode));
return "vqdmlal.<V_s_elem>\t%q0, %P2, %P3[%c4]";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")))]
+ [(set_attr "type" "neon_sat_mla_<V_elem_ch>_scalar_long")]
)
(define_insn "neon_vmls_lane<mode>"
@@ -3550,12 +3335,10 @@
neon_lane_bounds (operands[4], 0, GET_MODE_NUNITS (<MODE>mode));
return "vmls.<V_if_elem>\t%P0, %P2, %P3[%c4]";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vmla_ddd_scalar")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long"))))]
+ (const_string "neon_fp_mla_s_scalar<q>")
+ (const_string "neon_mla_<V_elem_ch>_scalar<q>")))]
)
(define_insn "neon_vmls_lane<mode>"
@@ -3572,12 +3355,10 @@
neon_lane_bounds (operands[4], 0, GET_MODE_NUNITS (<MODE>mode));
return "vmls.<V_if_elem>\t%q0, %q2, %P3[%c4]";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (match_test "<Is_float_mode>")
- (const_string "neon_fp_vmla_qqq_scalar")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")
- (const_string "neon_mla_qqq_32_qqd_32_scalar"))))]
+ (const_string "neon_fp_mla_s_scalar<q>")
+ (const_string "neon_mla_<V_elem_ch>_scalar<q>")))]
)
(define_insn "neon_vmlsl_lane<mode>"
@@ -3594,10 +3375,7 @@
neon_lane_bounds (operands[4], 0, GET_MODE_NUNITS (<MODE>mode));
return "vmlsl.%T5%#<V_sz_elem>\t%q0, %P2, %P3[%c4]";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")))]
+ [(set_attr "type" "neon_mla_<V_elem_ch>_scalar_long")]
)
(define_insn "neon_vqdmlsl_lane<mode>"
@@ -3614,10 +3392,7 @@
neon_lane_bounds (operands[4], 0, GET_MODE_NUNITS (<MODE>mode));
return "vqdmlsl.<V_s_elem>\t%q0, %P2, %P3[%c4]";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Scalar_mul_8_16>")
- (const_string "neon_mla_ddd_16_scalar_qdd_32_16_long_scalar")
- (const_string "neon_mla_ddd_32_qqd_16_ddd_32_scalar_qdd_64_32_long_scalar_qdd_64_32_long")))]
+ [(set_attr "type" "neon_sat_mla_<V_elem_ch>_scalar_long")]
)
; FIXME: For the "_n" multiply/multiply-accumulate insns, we copy a value in a
@@ -3842,10 +3617,7 @@
neon_const_bounds (operands[3], 0, GET_MODE_NUNITS (<MODE>mode));
return "vext.<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2, %3";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_bp_simple")
- (const_string "neon_bp_2cycle")))]
+ [(set_attr "type" "neon_ext<q>")]
)
(define_insn "neon_vrev64<mode>"
@@ -3855,7 +3627,7 @@
UNSPEC_VREV64))]
"TARGET_NEON"
"vrev64.<V_sz_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_rev<q>")]
)
(define_insn "neon_vrev32<mode>"
@@ -3865,7 +3637,7 @@
UNSPEC_VREV32))]
"TARGET_NEON"
"vrev32.<V_sz_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_rev<q>")]
)
(define_insn "neon_vrev16<mode>"
@@ -3875,7 +3647,7 @@
UNSPEC_VREV16))]
"TARGET_NEON"
"vrev16.<V_sz_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_bp_simple")]
+ [(set_attr "type" "neon_rev<q>")]
)
; vbsl_* intrinsics may compile to any of vbsl/vbif/vbit depending on register
@@ -3897,7 +3669,7 @@
vbsl\t%<V_reg>0, %<V_reg>2, %<V_reg>3
vbit\t%<V_reg>0, %<V_reg>2, %<V_reg>1
vbif\t%<V_reg>0, %<V_reg>3, %<V_reg>1"
- [(set_attr "neon_type" "neon_int_1")]
+ [(set_attr "type" "neon_bsl<q>")]
)
(define_expand "neon_vbsl<mode>"
@@ -3920,10 +3692,7 @@
UNSPEC_VSHL))]
"TARGET_NEON"
"v%O3shl.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_vshl_ddd")
- (const_string "neon_shift_3")))]
+ [(set_attr "type" "neon_shift_imm<q>")]
)
(define_insn "neon_vqshl<mode>"
@@ -3934,10 +3703,7 @@
UNSPEC_VQSHL))]
"TARGET_NEON"
"vq%O3shl.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_shift_2")
- (const_string "neon_vqshl_vrshl_vqrshl_qqq")))]
+ [(set_attr "type" "neon_sat_shift_imm<q>")]
)
(define_insn "neon_vshr_n<mode>"
@@ -3951,7 +3717,7 @@
neon_const_bounds (operands[2], 1, neon_element_bits (<MODE>mode) + 1);
return "v%O3shr.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %2";
}
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_shift_imm<q>")]
)
(define_insn "neon_vshrn_n<mode>"
@@ -3965,7 +3731,7 @@
neon_const_bounds (operands[2], 1, neon_element_bits (<MODE>mode) / 2 + 1);
return "v%O3shrn.<V_if_elem>\t%P0, %q1, %2";
}
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_shift_imm_narrow_q")]
)
(define_insn "neon_vqshrn_n<mode>"
@@ -3979,7 +3745,7 @@
neon_const_bounds (operands[2], 1, neon_element_bits (<MODE>mode) / 2 + 1);
return "vq%O3shrn.%T3%#<V_sz_elem>\t%P0, %q1, %2";
}
- [(set_attr "neon_type" "neon_shift_2")]
+ [(set_attr "type" "neon_sat_shift_imm_narrow_q")]
)
(define_insn "neon_vqshrun_n<mode>"
@@ -3993,7 +3759,7 @@
neon_const_bounds (operands[2], 1, neon_element_bits (<MODE>mode) / 2 + 1);
return "vq%O3shrun.%T3%#<V_sz_elem>\t%P0, %q1, %2";
}
- [(set_attr "neon_type" "neon_shift_2")]
+ [(set_attr "type" "neon_sat_shift_imm_narrow_q")]
)
(define_insn "neon_vshl_n<mode>"
@@ -4007,7 +3773,7 @@
neon_const_bounds (operands[2], 0, neon_element_bits (<MODE>mode));
return "vshl.<V_if_elem>\t%<V_reg>0, %<V_reg>1, %2";
}
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_shift_imm<q>")]
)
(define_insn "neon_vqshl_n<mode>"
@@ -4021,7 +3787,7 @@
neon_const_bounds (operands[2], 0, neon_element_bits (<MODE>mode));
return "vqshl.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %2";
}
- [(set_attr "neon_type" "neon_shift_2")]
+ [(set_attr "type" "neon_sat_shift_imm<q>")]
)
(define_insn "neon_vqshlu_n<mode>"
@@ -4035,7 +3801,7 @@
neon_const_bounds (operands[2], 0, neon_element_bits (<MODE>mode));
return "vqshlu.%T3%#<V_sz_elem>\t%<V_reg>0, %<V_reg>1, %2";
}
- [(set_attr "neon_type" "neon_shift_2")]
+ [(set_attr "type" "neon_sat_shift_imm<q>")]
)
(define_insn "neon_vshll_n<mode>"
@@ -4050,7 +3816,7 @@
neon_const_bounds (operands[2], 0, neon_element_bits (<MODE>mode) + 1);
return "vshll.%T3%#<V_sz_elem>\t%q0, %P1, %2";
}
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_shift_imm_long")]
)
(define_insn "neon_vsra_n<mode>"
@@ -4065,7 +3831,7 @@
neon_const_bounds (operands[3], 1, neon_element_bits (<MODE>mode) + 1);
return "v%O4sra.%T4%#<V_sz_elem>\t%<V_reg>0, %<V_reg>2, %3";
}
- [(set_attr "neon_type" "neon_vsra_vrsra")]
+ [(set_attr "type" "neon_shift_acc<q>")]
)
(define_insn "neon_vsri_n<mode>"
@@ -4079,10 +3845,7 @@
neon_const_bounds (operands[3], 1, neon_element_bits (<MODE>mode) + 1);
return "vsri.<V_sz_elem>\t%<V_reg>0, %<V_reg>2, %3";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_shift_1")
- (const_string "neon_shift_3")))]
+ [(set_attr "type" "neon_shift_reg<q>")]
)
(define_insn "neon_vsli_n<mode>"
@@ -4096,10 +3859,7 @@
neon_const_bounds (operands[3], 0, neon_element_bits (<MODE>mode));
return "vsli.<V_sz_elem>\t%<V_reg>0, %<V_reg>2, %3";
}
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_shift_1")
- (const_string "neon_shift_3")))]
+ [(set_attr "type" "neon_shift_reg<q>")]
)
(define_insn "neon_vtbl1v8qi"
@@ -4109,7 +3869,7 @@
UNSPEC_VTBL))]
"TARGET_NEON"
"vtbl.8\t%P0, {%P1}, %P2"
- [(set_attr "neon_type" "neon_bp_2cycle")]
+ [(set_attr "type" "neon_tbl1")]
)
(define_insn "neon_vtbl2v8qi"
@@ -4130,7 +3890,7 @@
return "";
}
- [(set_attr "neon_type" "neon_bp_2cycle")]
+ [(set_attr "type" "neon_tbl2")]
)
(define_insn "neon_vtbl3v8qi"
@@ -4152,7 +3912,7 @@
return "";
}
- [(set_attr "neon_type" "neon_bp_3cycle")]
+ [(set_attr "type" "neon_tbl3")]
)
(define_insn "neon_vtbl4v8qi"
@@ -4175,7 +3935,7 @@
return "";
}
- [(set_attr "neon_type" "neon_bp_3cycle")]
+ [(set_attr "type" "neon_tbl4")]
)
;; These three are used by the vec_perm infrastructure for V16QImode.
@@ -4206,7 +3966,9 @@
part2 = simplify_subreg (V8QImode, op2, V16QImode, ofs);
emit_insn (gen_neon_vtbl2v8qi (part0, op1, part2));
DONE;
-})
+}
+ [(set_attr "type" "multiple")]
+)
(define_insn_and_split "neon_vtbl2v16qi"
[(set (match_operand:V16QI 0 "s_register_operand" "=&w")
@@ -4235,7 +3997,9 @@
part2 = simplify_subreg (V8QImode, op2, V16QImode, ofs);
emit_insn (gen_neon_vtbl2v8qi (part0, op1, part2));
DONE;
-})
+}
+ [(set_attr "type" "multiple")]
+)
;; ??? Logically we should extend the regular neon_vcombine pattern to
;; handle quad-word input modes, producing octa-word output modes. But
@@ -4253,7 +4017,9 @@
{
neon_split_vcombine (operands);
DONE;
-})
+}
+[(set_attr "type" "multiple")]
+)
(define_insn "neon_vtbx1v8qi"
[(set (match_operand:V8QI 0 "s_register_operand" "=w")
@@ -4263,7 +4029,7 @@
UNSPEC_VTBX))]
"TARGET_NEON"
"vtbx.8\t%P0, {%P2}, %P3"
- [(set_attr "neon_type" "neon_bp_2cycle")]
+ [(set_attr "type" "neon_tbl1")]
)
(define_insn "neon_vtbx2v8qi"
@@ -4285,7 +4051,7 @@
return "";
}
- [(set_attr "neon_type" "neon_bp_2cycle")]
+ [(set_attr "type" "neon_tbl2")]
)
(define_insn "neon_vtbx3v8qi"
@@ -4308,7 +4074,7 @@
return "";
}
- [(set_attr "neon_type" "neon_bp_3cycle")]
+ [(set_attr "type" "neon_tbl3")]
)
(define_insn "neon_vtbx4v8qi"
@@ -4332,7 +4098,7 @@
return "";
}
- [(set_attr "neon_type" "neon_bp_3cycle")]
+ [(set_attr "type" "neon_tbl4")]
)
(define_expand "neon_vtrn<mode>_internal"
@@ -4358,10 +4124,7 @@
UNSPEC_VTRN2))]
"TARGET_NEON"
"vtrn.<V_sz_elem>\t%<V_reg>0, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_bp_simple")
- (const_string "neon_bp_3cycle")))]
+ [(set_attr "type" "neon_permute<q>")]
)
(define_expand "neon_vtrn<mode>"
@@ -4398,10 +4161,7 @@
UNSPEC_VZIP2))]
"TARGET_NEON"
"vzip.<V_sz_elem>\t%<V_reg>0, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_bp_simple")
- (const_string "neon_bp_3cycle")))]
+ [(set_attr "type" "neon_zip<q>")]
)
(define_expand "neon_vzip<mode>"
@@ -4438,10 +4198,7 @@
UNSPEC_VUZP2))]
"TARGET_NEON"
"vuzp.<V_sz_elem>\t%<V_reg>0, %<V_reg>2"
- [(set (attr "neon_type")
- (if_then_else (match_test "<Is_d_reg>")
- (const_string "neon_bp_simple")
- (const_string "neon_bp_3cycle")))]
+ [(set_attr "type" "neon_zip<q>")]
)
(define_expand "neon_vuzp<mode>"
@@ -4567,7 +4324,7 @@
UNSPEC_VLD1))]
"TARGET_NEON"
"vld1.<V_sz_elem>\t%h0, %A1"
- [(set_attr "neon_type" "neon_vld1_1_2_regs")]
+ [(set_attr "type" "neon_load1_1reg<q>")]
)
(define_insn "neon_vld1_lane<mode>"
@@ -4587,10 +4344,7 @@
else
return "vld1.<V_sz_elem>\t{%P0[%c3]}, %A1";
}
- [(set (attr "neon_type")
- (if_then_else (eq (const_string "<V_mode_nunits>") (const_int 2))
- (const_string "neon_vld1_1_2_regs")
- (const_string "neon_vld1_vld2_lane")))]
+ [(set_attr "type" "neon_load1_one_lane<q>")]
)
(define_insn "neon_vld1_lane<mode>"
@@ -4618,10 +4372,7 @@
else
return "vld1.<V_sz_elem>\t{%P0[%c3]}, %A1";
}
- [(set (attr "neon_type")
- (if_then_else (eq (const_string "<V_mode_nunits>") (const_int 2))
- (const_string "neon_vld1_1_2_regs")
- (const_string "neon_vld1_vld2_lane")))]
+ [(set_attr "type" "neon_load1_one_lane<q>")]
)
(define_insn "neon_vld1_dup<mode>"
@@ -4629,7 +4380,7 @@
(vec_duplicate:VD (match_operand:<V_elem> 1 "neon_struct_operand" "Um")))]
"TARGET_NEON"
"vld1.<V_sz_elem>\t{%P0[]}, %A1"
- [(set_attr "neon_type" "neon_vld2_2_regs_vld1_vld2_all_lanes")]
+ [(set_attr "type" "neon_load1_all_lanes<q>")]
)
;; Special case for DImode. Treat it exactly like a simple load.
@@ -4648,7 +4399,7 @@
{
return "vld1.<V_sz_elem>\t{%e0[], %f0[]}, %A1";
}
- [(set_attr "neon_type" "neon_vld2_2_regs_vld1_vld2_all_lanes")]
+ [(set_attr "type" "neon_load1_all_lanes<q>")]
)
(define_insn_and_split "neon_vld1_dupv2di"
@@ -4665,7 +4416,7 @@
DONE;
}
[(set_attr "length" "8")
- (set_attr "neon_type" "neon_vld2_2_regs_vld1_vld2_all_lanes")]
+ (set_attr "type" "neon_load1_all_lanes_q")]
)
(define_expand "vec_store_lanes<mode><mode>"
@@ -4680,7 +4431,7 @@
UNSPEC_VST1))]
"TARGET_NEON"
"vst1.<V_sz_elem>\t%h1, %A0"
- [(set_attr "neon_type" "neon_vst1_1_2_regs_vst2_2_regs")])
+ [(set_attr "type" "neon_store1_1reg<q>")])
(define_insn "neon_vst1_lane<mode>"
[(set (match_operand:<V_elem> 0 "neon_struct_operand" "=Um")
@@ -4699,10 +4450,8 @@
else
return "vst1.<V_sz_elem>\t{%P1[%c2]}, %A0";
}
- [(set (attr "neon_type")
- (if_then_else (eq (const_string "<V_mode_nunits>") (const_int 1))
- (const_string "neon_vst1_1_2_regs_vst2_2_regs")
- (const_string "neon_vst1_vst2_lane")))])
+ [(set_attr "type" "neon_store1_one_lane<q>")]
+)
(define_insn "neon_vst1_lane<mode>"
[(set (match_operand:<V_elem> 0 "neon_struct_operand" "=Um")
@@ -4729,7 +4478,7 @@
else
return "vst1.<V_sz_elem>\t{%P1[%c2]}, %A0";
}
- [(set_attr "neon_type" "neon_vst1_vst2_lane")]
+ [(set_attr "type" "neon_store1_one_lane<q>")]
)
(define_expand "vec_load_lanesti<mode>"
@@ -4751,10 +4500,10 @@
else
return "vld2.<V_sz_elem>\t%h0, %A1";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (eq (const_string "<V_sz_elem>") (const_string "64"))
- (const_string "neon_vld1_1_2_regs")
- (const_string "neon_vld2_2_regs_vld1_vld2_all_lanes")))]
+ (const_string "neon_load1_2reg<q>")
+ (const_string "neon_load2_2reg<q>")))]
)
(define_expand "vec_load_lanesoi<mode>"
@@ -4771,7 +4520,7 @@
UNSPEC_VLD2))]
"TARGET_NEON"
"vld2.<V_sz_elem>\t%h0, %A1"
- [(set_attr "neon_type" "neon_vld2_2_regs_vld1_vld2_all_lanes")])
+ [(set_attr "type" "neon_load2_2reg_q")])
(define_insn "neon_vld2_lane<mode>"
[(set (match_operand:TI 0 "s_register_operand" "=w")
@@ -4795,7 +4544,7 @@
output_asm_insn ("vld2.<V_sz_elem>\t{%P0[%c3], %P1[%c3]}, %A2", ops);
return "";
}
- [(set_attr "neon_type" "neon_vld1_vld2_lane")]
+ [(set_attr "type" "neon_load2_one_lane<q>")]
)
(define_insn "neon_vld2_lane<mode>"
@@ -4825,7 +4574,7 @@
output_asm_insn ("vld2.<V_sz_elem>\t{%P0[%c3], %P1[%c3]}, %A2", ops);
return "";
}
- [(set_attr "neon_type" "neon_vld1_vld2_lane")]
+ [(set_attr "type" "neon_load2_one_lane<q>")]
)
(define_insn "neon_vld2_dup<mode>"
@@ -4840,10 +4589,10 @@
else
return "vld1.<V_sz_elem>\t%h0, %A1";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (gt (const_string "<V_mode_nunits>") (const_string "1"))
- (const_string "neon_vld2_2_regs_vld1_vld2_all_lanes")
- (const_string "neon_vld1_1_2_regs")))]
+ (const_string "neon_load2_all_lanes<q>")
+ (const_string "neon_load1_1reg<q>")))]
)
(define_expand "vec_store_lanesti<mode>"
@@ -4865,10 +4614,10 @@
else
return "vst2.<V_sz_elem>\t%h1, %A0";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (eq (const_string "<V_sz_elem>") (const_string "64"))
- (const_string "neon_vst1_1_2_regs_vst2_2_regs")
- (const_string "neon_vst1_1_2_regs_vst2_2_regs")))]
+ (const_string "neon_store1_2reg<q>")
+ (const_string "neon_store2_one_lane<q>")))]
)
(define_expand "vec_store_lanesoi<mode>"
@@ -4885,7 +4634,7 @@
UNSPEC_VST2))]
"TARGET_NEON"
"vst2.<V_sz_elem>\t%h1, %A0"
- [(set_attr "neon_type" "neon_vst1_1_2_regs_vst2_2_regs")]
+ [(set_attr "type" "neon_store2_4reg<q>")]
)
(define_insn "neon_vst2_lane<mode>"
@@ -4910,7 +4659,7 @@
output_asm_insn ("vst2.<V_sz_elem>\t{%P1[%c3], %P2[%c3]}, %A0", ops);
return "";
}
- [(set_attr "neon_type" "neon_vst1_vst2_lane")]
+ [(set_attr "type" "neon_store2_one_lane<q>")]
)
(define_insn "neon_vst2_lane<mode>"
@@ -4940,7 +4689,7 @@
output_asm_insn ("vst2.<V_sz_elem>\t{%P1[%c3], %P2[%c3]}, %A0", ops);
return "";
}
- [(set_attr "neon_type" "neon_vst1_vst2_lane")]
+ [(set_attr "type" "neon_store2_one_lane<q>")]
)
(define_expand "vec_load_lanesei<mode>"
@@ -4962,10 +4711,10 @@
else
return "vld3.<V_sz_elem>\t%h0, %A1";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (eq (const_string "<V_sz_elem>") (const_string "64"))
- (const_string "neon_vld1_1_2_regs")
- (const_string "neon_vld3_vld4")))]
+ (const_string "neon_load1_3reg<q>")
+ (const_string "neon_load3_3reg<q>")))]
)
(define_expand "vec_load_lanesci<mode>"
@@ -5009,7 +4758,7 @@
output_asm_insn ("vld3.<V_sz_elem>\t{%P0, %P1, %P2}, %A3", ops);
return "";
}
- [(set_attr "neon_type" "neon_vld3_vld4")]
+ [(set_attr "type" "neon_load3_3reg<q>")]
)
(define_insn "neon_vld3qb<mode>"
@@ -5029,7 +4778,7 @@
output_asm_insn ("vld3.<V_sz_elem>\t{%P0, %P1, %P2}, %A3", ops);
return "";
}
- [(set_attr "neon_type" "neon_vld3_vld4")]
+ [(set_attr "type" "neon_load3_3reg<q>")]
)
(define_insn "neon_vld3_lane<mode>"
@@ -5056,7 +4805,7 @@
ops);
return "";
}
- [(set_attr "neon_type" "neon_vld3_vld4_lane")]
+ [(set_attr "type" "neon_load3_one_lane<q>")]
)
(define_insn "neon_vld3_lane<mode>"
@@ -5088,7 +4837,7 @@
ops);
return "";
}
- [(set_attr "neon_type" "neon_vld3_vld4_lane")]
+ [(set_attr "type" "neon_load3_one_lane<q>")]
)
(define_insn "neon_vld3_dup<mode>"
@@ -5112,10 +4861,10 @@
else
return "vld1.<V_sz_elem>\t%h0, %A1";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (gt (const_string "<V_mode_nunits>") (const_string "1"))
- (const_string "neon_vld3_vld4_all_lanes")
- (const_string "neon_vld1_1_2_regs")))])
+ (const_string "neon_load3_all_lanes<q>")
+ (const_string "neon_load1_1reg<q>")))])
(define_expand "vec_store_lanesei<mode>"
[(set (match_operand:EI 0 "neon_struct_operand")
@@ -5136,10 +4885,10 @@
else
return "vst3.<V_sz_elem>\t%h1, %A0";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (eq (const_string "<V_sz_elem>") (const_string "64"))
- (const_string "neon_vst1_1_2_regs_vst2_2_regs")
- (const_string "neon_vst2_4_regs_vst3_vst4")))])
+ (const_string "neon_store1_3reg<q>")
+ (const_string "neon_store3_one_lane<q>")))])
(define_expand "vec_store_lanesci<mode>"
[(match_operand:CI 0 "neon_struct_operand")
@@ -5182,7 +4931,7 @@
output_asm_insn ("vst3.<V_sz_elem>\t{%P1, %P2, %P3}, %A0", ops);
return "";
}
- [(set_attr "neon_type" "neon_vst2_4_regs_vst3_vst4")]
+ [(set_attr "type" "neon_store3_3reg<q>")]
)
(define_insn "neon_vst3qb<mode>"
@@ -5201,7 +4950,7 @@
output_asm_insn ("vst3.<V_sz_elem>\t{%P1, %P2, %P3}, %A0", ops);
return "";
}
- [(set_attr "neon_type" "neon_vst2_4_regs_vst3_vst4")]
+ [(set_attr "type" "neon_store3_3reg<q>")]
)
(define_insn "neon_vst3_lane<mode>"
@@ -5228,7 +4977,7 @@
ops);
return "";
}
- [(set_attr "neon_type" "neon_vst3_vst4_lane")]
+ [(set_attr "type" "neon_store3_one_lane<q>")]
)
(define_insn "neon_vst3_lane<mode>"
@@ -5260,7 +5009,8 @@
ops);
return "";
}
-[(set_attr "neon_type" "neon_vst3_vst4_lane")])
+ [(set_attr "type" "neon_store3_one_lane<q>")]
+)
(define_expand "vec_load_lanesoi<mode>"
[(set (match_operand:OI 0 "s_register_operand")
@@ -5281,10 +5031,10 @@
else
return "vld4.<V_sz_elem>\t%h0, %A1";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (eq (const_string "<V_sz_elem>") (const_string "64"))
- (const_string "neon_vld1_1_2_regs")
- (const_string "neon_vld3_vld4")))]
+ (const_string "neon_load1_4reg<q>")
+ (const_string "neon_load4_4reg<q>")))]
)
(define_expand "vec_load_lanesxi<mode>"
@@ -5329,7 +5079,7 @@
output_asm_insn ("vld4.<V_sz_elem>\t{%P0, %P1, %P2, %P3}, %A4", ops);
return "";
}
- [(set_attr "neon_type" "neon_vld3_vld4")]
+ [(set_attr "type" "neon_load4_4reg<q>")]
)
(define_insn "neon_vld4qb<mode>"
@@ -5350,7 +5100,7 @@
output_asm_insn ("vld4.<V_sz_elem>\t{%P0, %P1, %P2, %P3}, %A4", ops);
return "";
}
- [(set_attr "neon_type" "neon_vld3_vld4")]
+ [(set_attr "type" "neon_load4_4reg<q>")]
)
(define_insn "neon_vld4_lane<mode>"
@@ -5378,7 +5128,7 @@
ops);
return "";
}
- [(set_attr "neon_type" "neon_vld3_vld4_lane")]
+ [(set_attr "type" "neon_load4_one_lane<q>")]
)
(define_insn "neon_vld4_lane<mode>"
@@ -5411,7 +5161,7 @@
ops);
return "";
}
- [(set_attr "neon_type" "neon_vld3_vld4_lane")]
+ [(set_attr "type" "neon_load4_one_lane<q>")]
)
(define_insn "neon_vld4_dup<mode>"
@@ -5437,10 +5187,10 @@
else
return "vld1.<V_sz_elem>\t%h0, %A1";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (gt (const_string "<V_mode_nunits>") (const_string "1"))
- (const_string "neon_vld3_vld4_all_lanes")
- (const_string "neon_vld1_1_2_regs")))]
+ (const_string "neon_load4_all_lanes<q>")
+ (const_string "neon_load1_1reg<q>")))]
)
(define_expand "vec_store_lanesoi<mode>"
@@ -5462,10 +5212,10 @@
else
return "vst4.<V_sz_elem>\t%h1, %A0";
}
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (eq (const_string "<V_sz_elem>") (const_string "64"))
- (const_string "neon_vst1_1_2_regs_vst2_2_regs")
- (const_string "neon_vst2_4_regs_vst3_vst4")))]
+ (const_string "neon_store1_4reg<q>")
+ (const_string "neon_store4_4reg<q>")))]
)
(define_expand "vec_store_lanesxi<mode>"
@@ -5510,7 +5260,7 @@
output_asm_insn ("vst4.<V_sz_elem>\t{%P1, %P2, %P3, %P4}, %A0", ops);
return "";
}
- [(set_attr "neon_type" "neon_vst2_4_regs_vst3_vst4")]
+ [(set_attr "type" "neon_store4_4reg<q>")]
)
(define_insn "neon_vst4qb<mode>"
@@ -5530,7 +5280,7 @@
output_asm_insn ("vst4.<V_sz_elem>\t{%P1, %P2, %P3, %P4}, %A0", ops);
return "";
}
- [(set_attr "neon_type" "neon_vst2_4_regs_vst3_vst4")]
+ [(set_attr "type" "neon_store4_4reg<q>")]
)
(define_insn "neon_vst4_lane<mode>"
@@ -5558,7 +5308,7 @@
ops);
return "";
}
- [(set_attr "neon_type" "neon_vst3_vst4_lane")]
+ [(set_attr "type" "neon_store4_one_lane<q>")]
)
(define_insn "neon_vst4_lane<mode>"
@@ -5591,7 +5341,7 @@
ops);
return "";
}
- [(set_attr "neon_type" "neon_vst3_vst4_lane")]
+ [(set_attr "type" "neon_store4_4reg<q>")]
)
(define_expand "neon_vand<mode>"
@@ -5656,7 +5406,7 @@
(match_operand:VU 2 "vect_par_constant_low" ""))))]
"TARGET_NEON && !BYTES_BIG_ENDIAN"
"vmovl.<US><V_sz_elem> %q0, %e1"
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_shift_imm_long")]
)
(define_insn "neon_vec_unpack<US>_hi_<mode>"
@@ -5666,7 +5416,7 @@
(match_operand:VU 2 "vect_par_constant_high" ""))))]
"TARGET_NEON && !BYTES_BIG_ENDIAN"
"vmovl.<US><V_sz_elem> %q0, %f1"
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_shift_imm_long")]
)
(define_expand "vec_unpack<US>_hi_<mode>"
@@ -5716,7 +5466,7 @@
(match_dup 2)))))]
"TARGET_NEON && !BYTES_BIG_ENDIAN"
"vmull.<US><V_sz_elem> %q0, %e1, %e3"
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_mul_<V_elem_ch>_long")]
)
(define_expand "vec_widen_<US>mult_lo_<mode>"
@@ -5750,7 +5500,7 @@
(match_dup 2)))))]
"TARGET_NEON && !BYTES_BIG_ENDIAN"
"vmull.<US><V_sz_elem> %q0, %f1, %f3"
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_mul_<V_elem_ch>_long")]
)
(define_expand "vec_widen_<US>mult_hi_<mode>"
@@ -5783,7 +5533,7 @@
{
return "vshll.<US><V_sz_elem> %q0, %P1, %2";
}
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_shift_imm_long")]
)
(define_expand "vec_widen_<US>shiftl_lo_<mode>"
@@ -5819,7 +5569,7 @@
(SE:<V_widen> (match_operand:VDI 1 "register_operand" "w")))]
"TARGET_NEON"
"vmovl.<US><V_sz_elem> %q0, %P1"
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_move")]
)
(define_expand "vec_unpack<US>_lo_<mode>"
@@ -5856,7 +5606,7 @@
(match_operand:VDI 2 "register_operand" "w"))))]
"TARGET_NEON"
"vmull.<US><V_sz_elem> %q0, %P1, %P2"
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_mul_<V_elem_ch>_long")]
)
(define_expand "vec_widen_<US>mult_hi_<mode>"
@@ -5930,7 +5680,7 @@
(match_operand:VN 2 "register_operand" "w"))))]
"TARGET_NEON && !BYTES_BIG_ENDIAN"
"vmovn.i<V_sz_elem>\t%e0, %q1\;vmovn.i<V_sz_elem>\t%f0, %q2"
- [(set_attr "neon_type" "neon_shift_1")
+ [(set_attr "type" "multiple")
(set_attr "length" "8")]
)
@@ -5940,7 +5690,7 @@
(truncate:<V_narrow> (match_operand:VN 1 "register_operand" "w")))]
"TARGET_NEON && !BYTES_BIG_ENDIAN"
"vmovn.i<V_sz_elem>\t%P0, %q1"
- [(set_attr "neon_type" "neon_shift_1")]
+ [(set_attr "type" "neon_move_narrow_q")]
)
(define_expand "vec_pack_trunc_<mode>"
@@ -5963,12 +5713,10 @@
(match_operand:VDQ 2 "s_register_operand" "w"))))]
"TARGET_NEON && (!<Is_float_mode> || flag_unsafe_math_optimizations)"
"vabd.<V_s_elem> %<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (ne (symbol_ref "<Is_float_mode>") (const_int 0))
- (if_then_else (ne (symbol_ref "<Is_d_reg>") (const_int 0))
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_abd_s<q>")
+ (const_string "neon_abd<q>")))]
)
(define_insn "neon_vabd<mode>_3"
@@ -5978,12 +5726,10 @@
UNSPEC_VSUB)))]
"TARGET_NEON && (!<Is_float_mode> || flag_unsafe_math_optimizations)"
"vabd.<V_if_elem> %<V_reg>0, %<V_reg>1, %<V_reg>2"
- [(set (attr "neon_type")
+ [(set (attr "type")
(if_then_else (ne (symbol_ref "<Is_float_mode>") (const_int 0))
- (if_then_else (ne (symbol_ref "<Is_d_reg>") (const_int 0))
- (const_string "neon_fp_vadd_ddd_vabs_dd")
- (const_string "neon_fp_vadd_qqq_vabs_qq"))
- (const_string "neon_int_5")))]
+ (const_string "neon_fp_abd_s<q>")
+ (const_string "neon_abd<q>")))]
)
;; Copy from core-to-neon regs, then extend, not vice-versa
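The conditional `type` settings in the two `neon_vabd` patterns above resolve per-mode when the iterators are expanded: `<Is_float_mode>` selects the floating-point classification and the `<q>` mode attribute appends `_q` for quad-register modes. A hedged sketch of the expansion (illustrative only, not part of the patch):

```lisp
;; For V2SF (a float D-register mode, <Is_float_mode> true, <q> empty)
;; the if_then_else attribute expression collapses to:
(set_attr "type" "neon_fp_abd_s")

;; For V4SI (an integer Q-register mode, <q> = "_q") it collapses to:
(set_attr "type" "neon_abd_q")
```

This is why the replacement needs no `<Is_d_reg>` test: the D-vs-Q distinction that the old `neon_fp_vadd_ddd_vabs_dd`/`neon_fp_vadd_qqq_vabs_qq` pair encoded by hand is now carried by the `<q>` suffix.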
diff --git a/gcc/config/arm/t-arm b/gcc/config/arm/t-arm
index fe075e5862a..212ba911c5c 100644
--- a/gcc/config/arm/t-arm
+++ b/gcc/config/arm/t-arm
@@ -78,6 +78,11 @@ $(srcdir)/config/arm/arm-tables.opt: $(srcdir)/config/arm/genopt.sh \
$(SHELL) $(srcdir)/config/arm/genopt.sh $(srcdir)/config/arm > \
$(srcdir)/config/arm/arm-tables.opt
+aarch-common.o: $(srcdir)/config/arm/aarch-common.c $(CONFIG_H) $(SYSTEM_H) \
+ coretypes.h $(TM_H) $(TM_P_H) $(RTL_H) $(TREE_H) output.h $(C_COMMON_H)
+ $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+ $(srcdir)/config/arm/aarch-common.c
+
arm.o: $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
$(RTL_H) $(TREE_H) $(OBSTACK_H) $(REGS_H) hard-reg-set.h \
insn-config.h conditions.h output.h dumpfile.h \
diff --git a/gcc/config/arm/thumb2.md b/gcc/config/arm/thumb2.md
index a2c3420ce13..d724e81da36 100644
--- a/gcc/config/arm/thumb2.md
+++ b/gcc/config/arm/thumb2.md
@@ -62,7 +62,7 @@
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
(set_attr "shift" "2")
- (set_attr "type" "arlo_shift")]
+ (set_attr "type" "alu_shift_imm")]
)
;; We use the '0' constraint for operand 1 because reload should
@@ -84,7 +84,8 @@
""
[(set_attr "conds" "clob")
(set_attr "enabled_for_depr_it" "yes,yes,no")
- (set_attr "length" "6,6,10")]
+ (set_attr "length" "6,6,10")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_sminsi3"
@@ -104,7 +105,8 @@
""
[(set_attr "conds" "clob")
(set_attr "enabled_for_depr_it" "yes,yes,no")
- (set_attr "length" "6,6,10")]
+ (set_attr "length" "6,6,10")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb32_umaxsi3"
@@ -124,7 +126,8 @@
""
[(set_attr "conds" "clob")
(set_attr "length" "6,6,10")
- (set_attr "enabled_for_depr_it" "yes,yes,no")]
+ (set_attr "enabled_for_depr_it" "yes,yes,no")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_uminsi3"
@@ -144,7 +147,8 @@
""
[(set_attr "conds" "clob")
(set_attr "length" "6,6,10")
- (set_attr "enabled_for_depr_it" "yes,yes,no")]
+ (set_attr "enabled_for_depr_it" "yes,yes,no")
+ (set_attr "type" "multiple")]
)
;; Thumb-2 does not have rsc, so use a clever trick with shifter operands.
@@ -169,7 +173,8 @@
operands[1] = gen_lowpart (SImode, operands[1]);
}
[(set_attr "conds" "clob")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_abssi2"
@@ -226,7 +231,8 @@
(set_attr "predicable_short_it" "no")
(set_attr "enabled_for_depr_it" "yes,yes,no")
(set_attr "ce_count" "2")
- (set_attr "length" "8,6,10")]
+ (set_attr "length" "8,6,10")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_neg_abssi2"
@@ -283,7 +289,8 @@
(set_attr "enabled_for_depr_it" "yes,yes,no")
(set_attr "predicable_short_it" "no")
(set_attr "ce_count" "2")
- (set_attr "length" "8,6,10")]
+ (set_attr "length" "8,6,10")
+ (set_attr "type" "multiple")]
)
;; We have two alternatives here for memory loads (and similarly for stores)
@@ -308,7 +315,7 @@
ldr%?\\t%0, %1
str%?\\t%1, %0
str%?\\t%1, %0"
- [(set_attr "type" "*,arlo_imm,arlo_imm,arlo_imm,*,load1,load1,store1,store1")
+ [(set_attr "type" "mov_reg,alu_imm,alu_imm,alu_imm,mov_imm,load1,load1,store1,store1")
(set_attr "length" "2,4,2,4,4,4,4,4,4")
(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "yes,no,yes,no,no,no,no,no,no")
@@ -329,7 +336,8 @@
INTVAL (operands[3]));
return \"add\\t%2, %|pc\;ldr%?\\t%0, [%2]\";
"
- [(set_attr "length" "4,4,6,6")]
+ [(set_attr "length" "4,4,6,6")
+ (set_attr "type" "multiple")]
)
;; Thumb-2 always has load/store halfword instructions, so we can avoid a lot
@@ -345,7 +353,7 @@
movw%?\\t%0, %L1\\t%@ movhi
str%(h%)\\t%1, %0\\t%@ movhi
ldr%(h%)\\t%0, %1\\t%@ movhi"
- [(set_attr "type" "*,*,store1,load1")
+ [(set_attr "type" "mov_imm,mov_reg,store1,load1")
(set_attr "predicable" "yes")
(set_attr "pool_range" "*,*,*,4094")
(set_attr "neg_pool_range" "*,*,*,250")]
@@ -376,7 +384,7 @@
"cmn%?\\t%0, %1%S3"
[(set_attr "conds" "set")
(set_attr "shift" "1")
- (set_attr "type" "arlo_shift")]
+ (set_attr "type" "alus_shift_imm")]
)
(define_insn_and_split "*thumb2_mov_scc"
@@ -393,7 +401,8 @@
""
[(set_attr "conds" "use")
(set_attr "enabled_for_depr_it" "yes,no")
- (set_attr "length" "8,10")]
+ (set_attr "length" "8,10")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_mov_negscc"
@@ -411,7 +420,8 @@
operands[3] = GEN_INT (~0);
}
[(set_attr "conds" "use")
- (set_attr "length" "10")]
+ (set_attr "length" "10")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_mov_negscc_strict_it"
@@ -439,7 +449,8 @@
}
[(set_attr "conds" "use")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_mov_notscc"
@@ -458,7 +469,8 @@
operands[4] = GEN_INT (~0);
}
[(set_attr "conds" "use")
- (set_attr "length" "10")]
+ (set_attr "length" "10")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_mov_notscc_strict_it"
@@ -480,7 +492,8 @@
VOIDmode, operands[2], const0_rtx);
}
[(set_attr "conds" "use")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_movsicc_insn"
@@ -540,7 +553,8 @@
}
[(set_attr "length" "4,4,6,6,6,6,10,10,10,10,6")
(set_attr "enabled_for_depr_it" "yes,yes,no,no,no,no,no,no,no,no,yes")
- (set_attr "conds" "use")]
+ (set_attr "conds" "use")
+ (set_attr "type" "multiple")]
)
(define_insn "*thumb2_movsfcc_soft_insn"
@@ -554,7 +568,8 @@
it\\t%D3\;mov%D3\\t%0, %2
it\\t%d3\;mov%d3\\t%0, %1"
[(set_attr "length" "6,6")
- (set_attr "conds" "use")]
+ (set_attr "conds" "use")
+ (set_attr "type" "multiple")]
)
(define_insn "*call_reg_thumb2"
@@ -583,7 +598,8 @@
(match_operand:SI 0 "register_operand" "l*r"))]
"TARGET_THUMB2"
"bx\\t%0"
- [(set_attr "conds" "clob")]
+ [(set_attr "conds" "clob")
+ (set_attr "type" "branch")]
)
;; Don't define thumb2_load_indirect_jump because we can't guarantee label
;; addresses will have the thumb bit set correctly.
@@ -611,6 +627,7 @@
operands[4] = gen_rtx_fmt_ee (rc, VOIDmode, operands[2], const0_rtx);
}
[(set_attr "conds" "use")
+ (set_attr "type" "multiple")
(set (attr "length") (if_then_else (match_test "arm_restrict_it")
(const_int 8)
(const_int 10)))]
@@ -643,7 +660,8 @@
operands[5] = gen_rtx_fmt_ee (rc, VOIDmode, operands[2], const0_rtx);
}
[(set_attr "conds" "use")
- (set_attr "length" "6,10")]
+ (set_attr "length" "6,10")
+ (set_attr "type" "multiple")]
)
(define_insn "*thumb2_ior_scc_strict_it"
@@ -656,7 +674,8 @@
it\\t%d2\;mov%d2\\t%0, #1\;it\\t%d2\;orr%d2\\t%0, %1
mov\\t%0, #1\;orr\\t%0, %1\;it\\t%D2\;mov%D2\\t%0, %1"
[(set_attr "conds" "use")
- (set_attr "length" "8")]
+ (set_attr "length" "8")
+ (set_attr "type" "multiple")]
)
(define_insn "*thumb2_cond_move"
@@ -705,7 +724,8 @@
return \"\";
"
[(set_attr "conds" "use")
- (set_attr "length" "6,6,10")]
+ (set_attr "length" "6,6,10")
+ (set_attr "type" "multiple")]
)
(define_insn "*thumb2_cond_arith"
@@ -742,7 +762,8 @@
return \"%i5%d4\\t%0, %1, #1\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "14")]
+ (set_attr "length" "14")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_cond_arith_strict_it"
@@ -811,7 +832,8 @@
FAIL;
}
[(set_attr "conds" "clob")
- (set_attr "length" "12")]
+ (set_attr "length" "12")
+ (set_attr "type" "multiple")]
)
(define_insn "*thumb2_cond_sub"
@@ -842,7 +864,8 @@
return \"sub%d4\\t%0, %1, #1\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "10,14")]
+ (set_attr "length" "10,14")
+ (set_attr "type" "multiple")]
)
(define_insn_and_split "*thumb2_negscc"
@@ -910,7 +933,8 @@
FAIL;
}
[(set_attr "conds" "clob")
- (set_attr "length" "14")]
+ (set_attr "length" "14")
+ (set_attr "type" "multiple")]
)
(define_insn "*thumb2_movcond"
@@ -993,7 +1017,8 @@
return \"\";
"
[(set_attr "conds" "clob")
- (set_attr "length" "10,10,14")]
+ (set_attr "length" "10,10,14")
+ (set_attr "type" "multiple")]
)
;; Zero and sign extension instructions.
@@ -1056,7 +1081,8 @@
"TARGET_THUMB2 && !flag_pic"
"* return thumb2_output_casesi(operands);"
[(set_attr "conds" "clob")
- (set_attr "length" "16")]
+ (set_attr "length" "16")
+ (set_attr "type" "multiple")]
)
(define_insn "thumb2_casesi_internal_pic"
@@ -1074,7 +1100,8 @@
"TARGET_THUMB2 && flag_pic"
"* return thumb2_output_casesi(operands);"
[(set_attr "conds" "clob")
- (set_attr "length" "20")]
+ (set_attr "length" "20")
+ (set_attr "type" "multiple")]
)
(define_insn "*thumb2_return"
@@ -1111,7 +1138,8 @@
&& GET_CODE(operands[3]) != MINUS"
"%I3%!\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "length" "2")]
+ (set_attr "length" "2")
+ (set_attr "type" "alu_reg")]
)
(define_insn "*thumb2_shiftsi3_short"
@@ -1128,8 +1156,8 @@
(set_attr "shift" "1")
(set_attr "length" "2")
(set (attr "type") (if_then_else (match_operand 2 "const_int_operand" "")
- (const_string "arlo_shift")
- (const_string "arlo_shift_reg")))]
+ (const_string "alu_shift_imm")
+ (const_string "alu_shift_reg")))]
)
(define_insn "*thumb2_mov<mode>_shortim"
@@ -1139,7 +1167,8 @@
"TARGET_THUMB2 && reload_completed"
"mov%!\t%0, %1"
[(set_attr "predicable" "yes")
- (set_attr "length" "2")]
+ (set_attr "length" "2")
+ (set_attr "type" "mov_imm")]
)
(define_insn "*thumb2_addsi_short"
@@ -1163,7 +1192,8 @@
return \"add%!\\t%0, %1, %2\";
"
[(set_attr "predicable" "yes")
- (set_attr "length" "2")]
+ (set_attr "length" "2")
+ (set_attr "type" "alu_reg")]
)
(define_insn "*thumb2_subsi_short"
@@ -1174,7 +1204,8 @@
"TARGET_THUMB2 && reload_completed"
"sub%!\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "length" "2")]
+ (set_attr "length" "2")
+ (set_attr "type" "alu_reg")]
)
(define_peephole2
@@ -1226,7 +1257,8 @@
return \"adds\\t%0, %1, %2\";
"
[(set_attr "conds" "set")
- (set_attr "length" "2,2,4")]
+ (set_attr "length" "2,2,4")
+ (set_attr "type" "alu_reg")]
)
(define_insn "*thumb2_addsi3_compare0_scratch"
@@ -1251,7 +1283,7 @@
"
[(set_attr "conds" "set")
(set_attr "length" "2,2,4,4")
- (set_attr "type" "arlo_imm,*,arlo_imm,*")]
+ (set_attr "type" "alus_imm,alus_reg,alus_imm,alus_reg")]
)
(define_insn "*thumb2_mulsi_short"
@@ -1310,7 +1342,8 @@
(le (minus (match_dup 1) (pc)) (const_int 128))
(not (match_test "which_alternative")))
(const_int 2)
- (const_int 8)))]
+ (const_int 8)))
+ (set_attr "type" "branch,multiple")]
)
(define_insn "*thumb2_cbnz"
@@ -1333,7 +1366,8 @@
(le (minus (match_dup 1) (pc)) (const_int 128))
(not (match_test "which_alternative")))
(const_int 2)
- (const_int 8)))]
+ (const_int 8)))
+ (set_attr "type" "branch,multiple")]
)
(define_insn "*thumb2_one_cmplsi2_short"
@@ -1343,7 +1377,8 @@
"TARGET_THUMB2 && reload_completed"
"mvn%!\t%0, %1"
[(set_attr "predicable" "yes")
- (set_attr "length" "2")]
+ (set_attr "length" "2")
+ (set_attr "type" "mvn_reg")]
)
(define_insn "*thumb2_negsi2_short"
@@ -1353,7 +1388,8 @@
"TARGET_THUMB2 && reload_completed"
"neg%!\t%0, %1"
[(set_attr "predicable" "yes")
- (set_attr "length" "2")]
+ (set_attr "length" "2")
+ (set_attr "type" "alu_reg")]
)
(define_insn "*orsi_notsi_si"
@@ -1363,7 +1399,8 @@
"TARGET_THUMB2"
"orn%?\\t%0, %1, %2"
[(set_attr "predicable" "yes")
- (set_attr "predicable_short_it" "no")]
+ (set_attr "predicable_short_it" "no")
+ (set_attr "type" "logic_reg")]
)
(define_insn "*orsi_not_shiftsi_si"
@@ -1377,7 +1414,7 @@
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
(set_attr "shift" "2")
- (set_attr "type" "arlo_shift")]
+ (set_attr "type" "alu_shift_imm")]
)
(define_peephole2
diff --git a/gcc/config/arm/types.md b/gcc/config/arm/types.md
new file mode 100644
index 00000000000..011950db3ab
--- /dev/null
+++ b/gcc/config/arm/types.md
@@ -0,0 +1,1076 @@
+;; Instruction Classification for ARM for GNU compiler.
+
+;; Copyright (C) 1991-2013 Free Software Foundation, Inc.
+;; Contributed by ARM Ltd.
+
+;; This file is part of GCC.
+
+;; GCC is free software; you can redistribute it and/or modify it
+;; under the terms of the GNU General Public License as published
+;; by the Free Software Foundation; either version 3, or (at your
+;; option) any later version.
+
+;; GCC is distributed in the hope that it will be useful, but WITHOUT
+;; ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+;; or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
+;; License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3. If not see
+;; <http://www.gnu.org/licenses/>.
+
+; TYPE attribute is used to classify instructions for use in scheduling.
+;
+; Instruction classification:
+;
+; adc_imm add/subtract with carry and with an immediate operand.
+; adc_reg add/subtract with carry and no immediate operand.
+; adcs_imm as adc_imm, setting condition flags.
+; adcs_reg as adc_reg, setting condition flags.
+; adr calculate address.
+; alu_ext From ARMv8-A: any arithmetic instruction that has a
+; sign/zero-extended source operand.
+; AArch64 Only.
+; alu_imm any arithmetic instruction that doesn't have a shifted
+; operand and has an immediate operand. This
+; excludes MOV, MVN and RSB(S) immediate.
+; alu_reg any arithmetic instruction that doesn't have a shifted
+; or an immediate operand. This excludes
+; MOV and MVN but includes MOVT. This is also the default.
+; alu_shift_imm any arithmetic instruction that has a source operand
+; shifted by a constant. This excludes simple shifts.
+; alu_shift_reg as alu_shift_imm, with the shift amount specified in a
+; register.
+; alus_ext From ARMv8-A: as alu_ext, setting condition flags.
+; AArch64 Only.
+; alus_imm as alu_imm, setting condition flags.
+; alus_reg as alu_reg, setting condition flags.
+; alus_shift_imm as alu_shift_imm, setting condition flags.
+; alus_shift_reg as alu_shift_reg, setting condition flags.
+; bfm bitfield move operation.
+; block blockage insn, this blocks all functional units.
+; branch branch.
+; call subroutine call.
+; clz count leading zeros (CLZ).
+; csel From ARMv8-A: conditional select.
+; extend extend instruction (SXTB, SXTH, UXTB, UXTH).
+; f_cvt conversion between float representations.
+; f_cvtf2i conversion between float and integral types.
+; f_cvti2f conversion between integral and float types.
+; f_flag transfer of co-processor flags to the CPSR.
+; f_load[d,s] double/single load from memory. Used for VFP unit.
+; f_mcr transfer arm to vfp reg.
+; f_mcrr transfer two arm regs to vfp reg.
+; f_minmax[d,s] double/single floating point minimum/maximum.
+; f_mrc transfer vfp to arm reg.
+; f_mrrc transfer vfp to two arm regs.
+; f_rint[d,s] double/single floating point round to integral.
+; f_sel[d,s] double/single floating point select.
+; f_store[d,s] double/single store to memory. Used for VFP unit.
+; fadd[d,s] double/single floating-point scalar addition.
+; fcmp[d,s] double/single floating-point compare.
+; fconst[d,s] double/single load immediate.
+; fcsel From ARMv8-A: Floating-point conditional select.
+; fdiv[d,s] double/single precision floating point division.
+; ffarith[d,s] double/single floating point abs/neg/cpy.
+; ffma[d,s] double/single floating point fused multiply-accumulate.
+; float floating point arithmetic operation.
+; fmac[d,s] double/single floating point multiply-accumulate.
+; fmov floating point to floating point register move.
+; fmul[d,s] double/single floating point multiply.
+; fsqrt[d,s] double/single precision floating point square root.
+; load_acq load-acquire.
+; load_byte load byte(s) from memory to arm registers.
+; load1 load 1 word from memory to arm registers.
+; load2 load 2 words from memory to arm registers.
+; load3 load 3 words from memory to arm registers.
+; load4 load 4 words from memory to arm registers.
+; logic_imm any logical instruction that doesn't have a shifted
+; operand and has an immediate operand.
+; logic_reg any logical instruction that doesn't have a shifted
+; operand or an immediate operand.
+; logic_shift_imm any logical instruction that has a source operand
+; shifted by a constant. This excludes simple shifts.
+; logic_shift_reg as logic_shift_imm, with the shift amount specified in a
+; register.
+; logics_imm as logic_imm, setting condition flags.
+; logics_reg as logic_reg, setting condition flags.
+; logics_shift_imm as logic_shift_imm, setting condition flags.
+; logics_shift_reg as logic_shift_reg, setting condition flags.
+; mla integer multiply accumulate.
+; mlas integer multiply accumulate, flag setting.
+; mov_imm simple MOV instruction that moves an immediate to
+; register. This includes MOVW, but not MOVT.
+; mov_reg simple MOV instruction that moves a register to another
+; register.
+; mov_shift simple MOV instruction, shifted operand by a constant.
+; mov_shift_reg simple MOV instruction, shifted operand by a register.
+; mrs system/special/co-processor register move.
+; mul integer multiply.
+; muls integer multiply, flag setting.
+; multiple more than one instruction, candidate for future
+; splitting, or better modeling.
+; mvn_imm inverting move instruction, immediate.
+; mvn_reg inverting move instruction, register.
+; mvn_shift inverting move instruction, shifted operand by a constant.
+; mvn_shift_reg inverting move instruction, shifted operand by a register.
+; no_insn an insn which does not represent an instruction in the
+; final output, thus having no impact on scheduling.
+; rbit reverse bits.
+; rev reverse bytes.
+; sdiv signed division.
+; shift_imm simple shift operation (LSL, LSR, ASR, ROR) with an
+; immediate.
+; shift_reg simple shift by a register.
+; smlad signed multiply accumulate dual.
+; smladx signed multiply accumulate dual reverse.
+; smlal signed multiply accumulate long.
+; smlald signed multiply accumulate long dual.
+; smlals signed multiply accumulate long, flag setting.
+; smlalxy signed multiply accumulate, 16x16-bit, 64-bit accumulate.
+; smlawx signed multiply accumulate, 32x16-bit, 32-bit accumulate.
+; smlawy signed multiply accumulate wide, 32x16-bit,
+; 32-bit accumulate.
+; smlaxy signed multiply accumulate, 16x16-bit, 32-bit accumulate.
+; smlsd signed multiply subtract dual.
+; smlsdx signed multiply subtract dual reverse.
+; smlsld signed multiply subtract long dual.
+; smmla signed most significant word multiply accumulate.
+; smmul signed most significant word multiply.
+; smmulr signed most significant word multiply, rounded.
+; smuad signed dual multiply add.
+; smuadx signed dual multiply add reverse.
+; smull signed multiply long.
+; smulls signed multiply long, flag setting.
+; smulwy signed multiply wide, 32x16-bit, 32-bit accumulate.
+; smulxy signed multiply, 16x16-bit, 32-bit accumulate.
+; smusd signed dual multiply subtract.
+; smusdx signed dual multiply subtract reverse.
+; store_rel store-release.
+; store1 store 1 word to memory from arm registers.
+; store2 store 2 words to memory from arm registers.
+; store3 store 3 words to memory from arm registers.
+; store4 store 4 (or more) words to memory from arm registers.
+; udiv unsigned division.
+; umaal unsigned multiply accumulate accumulate long.
+; umlal unsigned multiply accumulate long.
+; umlals unsigned multiply accumulate long, flag setting.
+; umull unsigned multiply long.
+; umulls unsigned multiply long, flag setting.
+; untyped insn without type information - default, and error,
+; case.
+;
+; The classification below is for instructions used by the Wireless MMX
+; Technology. Each attribute value is used to classify an instruction of the
+; same name or family.
+;
+; wmmx_tandc
+; wmmx_tbcst
+; wmmx_textrc
+; wmmx_textrm
+; wmmx_tinsr
+; wmmx_tmcr
+; wmmx_tmcrr
+; wmmx_tmia
+; wmmx_tmiaph
+; wmmx_tmiaxy
+; wmmx_tmrc
+; wmmx_tmrrc
+; wmmx_tmovmsk
+; wmmx_torc
+; wmmx_torvsc
+; wmmx_wabs
+; wmmx_wdiff
+; wmmx_wacc
+; wmmx_wadd
+; wmmx_waddbhus
+; wmmx_waddsubhx
+; wmmx_waligni
+; wmmx_walignr
+; wmmx_wand
+; wmmx_wandn
+; wmmx_wavg2
+; wmmx_wavg4
+; wmmx_wcmpeq
+; wmmx_wcmpgt
+; wmmx_wmac
+; wmmx_wmadd
+; wmmx_wmax
+; wmmx_wmerge
+; wmmx_wmiawxy
+; wmmx_wmiaxy
+; wmmx_wmin
+; wmmx_wmov
+; wmmx_wmul
+; wmmx_wmulw
+; wmmx_wldr
+; wmmx_wor
+; wmmx_wpack
+; wmmx_wqmiaxy
+; wmmx_wqmulm
+; wmmx_wqmulwm
+; wmmx_wror
+; wmmx_wsad
+; wmmx_wshufh
+; wmmx_wsll
+; wmmx_wsra
+; wmmx_wsrl
+; wmmx_wstr
+; wmmx_wsub
+; wmmx_wsubaddhx
+; wmmx_wunpckeh
+; wmmx_wunpckel
+; wmmx_wunpckih
+; wmmx_wunpckil
+; wmmx_wxor
+;
+; The classification below is for NEON instructions.
+;
+; neon_add
+; neon_add_q
+; neon_add_widen
+; neon_add_long
+; neon_qadd
+; neon_qadd_q
+; neon_add_halve
+; neon_add_halve_q
+; neon_add_halve_narrow_q
+; neon_sub
+; neon_sub_q
+; neon_sub_widen
+; neon_sub_long
+; neon_qsub
+; neon_qsub_q
+; neon_sub_halve
+; neon_sub_halve_q
+; neon_sub_halve_narrow_q
+; neon_abs
+; neon_abs_q
+; neon_neg
+; neon_neg_q
+; neon_qneg
+; neon_qneg_q
+; neon_qabs
+; neon_qabs_q
+; neon_abd
+; neon_abd_q
+; neon_abd_long
+; neon_minmax
+; neon_minmax_q
+; neon_compare
+; neon_compare_q
+; neon_compare_zero
+; neon_compare_zero_q
+; neon_arith_acc
+; neon_arith_acc_q
+; neon_reduc_add
+; neon_reduc_add_q
+; neon_reduc_add_long
+; neon_reduc_add_acc
+; neon_reduc_add_acc_q
+; neon_reduc_minmax
+; neon_reduc_minmax_q
+; neon_logic
+; neon_logic_q
+; neon_tst
+; neon_tst_q
+; neon_shift_imm
+; neon_shift_imm_q
+; neon_shift_imm_narrow_q
+; neon_shift_imm_long
+; neon_shift_reg
+; neon_shift_reg_q
+; neon_shift_acc
+; neon_shift_acc_q
+; neon_sat_shift_imm
+; neon_sat_shift_imm_q
+; neon_sat_shift_imm_narrow_q
+; neon_sat_shift_reg
+; neon_sat_shift_reg_q
+; neon_ins
+; neon_ins_q
+; neon_move
+; neon_move_q
+; neon_move_narrow_q
+; neon_permute
+; neon_permute_q
+; neon_zip
+; neon_zip_q
+; neon_tbl1
+; neon_tbl1_q
+; neon_tbl2
+; neon_tbl2_q
+; neon_tbl3
+; neon_tbl3_q
+; neon_tbl4
+; neon_tbl4_q
+; neon_bsl
+; neon_bsl_q
+; neon_cls
+; neon_cls_q
+; neon_cnt
+; neon_cnt_q
+; neon_ext
+; neon_ext_q
+; neon_rbit
+; neon_rbit_q
+; neon_rev
+; neon_rev_q
+; neon_mul_b
+; neon_mul_b_q
+; neon_mul_h
+; neon_mul_h_q
+; neon_mul_s
+; neon_mul_s_q
+; neon_mul_b_long
+; neon_mul_h_long
+; neon_mul_s_long
+; neon_mul_d_long
+; neon_mul_h_scalar
+; neon_mul_h_scalar_q
+; neon_mul_s_scalar
+; neon_mul_s_scalar_q
+; neon_mul_h_scalar_long
+; neon_mul_s_scalar_long
+; neon_sat_mul_b
+; neon_sat_mul_b_q
+; neon_sat_mul_h
+; neon_sat_mul_h_q
+; neon_sat_mul_s
+; neon_sat_mul_s_q
+; neon_sat_mul_b_long
+; neon_sat_mul_h_long
+; neon_sat_mul_s_long
+; neon_sat_mul_h_scalar
+; neon_sat_mul_h_scalar_q
+; neon_sat_mul_s_scalar
+; neon_sat_mul_s_scalar_q
+; neon_sat_mul_h_scalar_long
+; neon_sat_mul_s_scalar_long
+; neon_mla_b
+; neon_mla_b_q
+; neon_mla_h
+; neon_mla_h_q
+; neon_mla_s
+; neon_mla_s_q
+; neon_mla_b_long
+; neon_mla_h_long
+; neon_mla_s_long
+; neon_mla_h_scalar
+; neon_mla_h_scalar_q
+; neon_mla_s_scalar
+; neon_mla_s_scalar_q
+; neon_mla_h_scalar_long
+; neon_mla_s_scalar_long
+; neon_sat_mla_b_long
+; neon_sat_mla_h_long
+; neon_sat_mla_s_long
+; neon_sat_mla_h_scalar_long
+; neon_sat_mla_s_scalar_long
+; neon_to_gp
+; neon_to_gp_q
+; neon_from_gp
+; neon_from_gp_q
+; neon_ldr
+; neon_load1_1reg
+; neon_load1_1reg_q
+; neon_load1_2reg
+; neon_load1_2reg_q
+; neon_load1_3reg
+; neon_load1_3reg_q
+; neon_load1_4reg
+; neon_load1_4reg_q
+; neon_load1_all_lanes
+; neon_load1_all_lanes_q
+; neon_load1_one_lane
+; neon_load1_one_lane_q
+; neon_load2_2reg
+; neon_load2_2reg_q
+; neon_load2_4reg
+; neon_load2_4reg_q
+; neon_load2_all_lanes
+; neon_load2_all_lanes_q
+; neon_load2_one_lane
+; neon_load2_one_lane_q
+; neon_load3_3reg
+; neon_load3_3reg_q
+; neon_load3_all_lanes
+; neon_load3_all_lanes_q
+; neon_load3_one_lane
+; neon_load3_one_lane_q
+; neon_load4_4reg
+; neon_load4_4reg_q
+; neon_load4_all_lanes
+; neon_load4_all_lanes_q
+; neon_load4_one_lane
+; neon_load4_one_lane_q
+; neon_str
+; neon_store1_1reg
+; neon_store1_1reg_q
+; neon_store1_2reg
+; neon_store1_2reg_q
+; neon_store1_3reg
+; neon_store1_3reg_q
+; neon_store1_4reg
+; neon_store1_4reg_q
+; neon_store1_one_lane
+; neon_store1_one_lane_q
+; neon_store2_2reg
+; neon_store2_2reg_q
+; neon_store2_4reg
+; neon_store2_4reg_q
+; neon_store2_one_lane
+; neon_store2_one_lane_q
+; neon_store3_3reg
+; neon_store3_3reg_q
+; neon_store3_one_lane
+; neon_store3_one_lane_q
+; neon_store4_4reg
+; neon_store4_4reg_q
+; neon_store4_one_lane
+; neon_store4_one_lane_q
+; neon_fp_abs_s
+; neon_fp_abs_s_q
+; neon_fp_abs_d
+; neon_fp_abs_d_q
+; neon_fp_neg_s
+; neon_fp_neg_s_q
+; neon_fp_neg_d
+; neon_fp_neg_d_q
+; neon_fp_abd_s
+; neon_fp_abd_s_q
+; neon_fp_abd_d
+; neon_fp_abd_d_q
+; neon_fp_addsub_s
+; neon_fp_addsub_s_q
+; neon_fp_addsub_d
+; neon_fp_addsub_d_q
+; neon_fp_compare_s
+; neon_fp_compare_s_q
+; neon_fp_compare_d
+; neon_fp_compare_d_q
+; neon_fp_minmax_s
+; neon_fp_minmax_s_q
+; neon_fp_minmax_d
+; neon_fp_minmax_d_q
+; neon_fp_reduc_add_s
+; neon_fp_reduc_add_s_q
+; neon_fp_reduc_add_d
+; neon_fp_reduc_add_d_q
+; neon_fp_reduc_minmax_s
+; neon_fp_reduc_minmax_s_q
+; neon_fp_reduc_minmax_d
+; neon_fp_reduc_minmax_d_q
+; neon_fp_cvt_narrow_s_q
+; neon_fp_cvt_narrow_d_q
+; neon_fp_cvt_widen_h
+; neon_fp_cvt_widen_s
+; neon_fp_to_int_s
+; neon_fp_to_int_s_q
+; neon_fp_to_int_d
+; neon_fp_to_int_d_q
+; neon_int_to_fp_s
+; neon_int_to_fp_s_q
+; neon_int_to_fp_d
+; neon_int_to_fp_d_q
+; neon_fp_round_s
+; neon_fp_round_s_q
+; neon_fp_round_d
+; neon_fp_round_d_q
+; neon_fp_recpe_s
+; neon_fp_recpe_s_q
+; neon_fp_recpe_d
+; neon_fp_recpe_d_q
+; neon_fp_recps_s
+; neon_fp_recps_s_q
+; neon_fp_recps_d
+; neon_fp_recps_d_q
+; neon_fp_recpx_s
+; neon_fp_recpx_s_q
+; neon_fp_recpx_d
+; neon_fp_recpx_d_q
+; neon_fp_rsqrte_s
+; neon_fp_rsqrte_s_q
+; neon_fp_rsqrte_d
+; neon_fp_rsqrte_d_q
+; neon_fp_rsqrts_s
+; neon_fp_rsqrts_s_q
+; neon_fp_rsqrts_d
+; neon_fp_rsqrts_d_q
+; neon_fp_mul_s
+; neon_fp_mul_s_q
+; neon_fp_mul_s_scalar
+; neon_fp_mul_s_scalar_q
+; neon_fp_mul_d
+; neon_fp_mul_d_q
+; neon_fp_mul_d_scalar_q
+; neon_fp_mla_s
+; neon_fp_mla_s_q
+; neon_fp_mla_s_scalar
+; neon_fp_mla_s_scalar_q
+; neon_fp_mla_d
+; neon_fp_mla_d_q
+; neon_fp_mla_d_scalar_q
+; neon_fp_sqrt_s
+; neon_fp_sqrt_s_q
+; neon_fp_sqrt_d
+; neon_fp_sqrt_d_q
+; neon_fp_div_s
+; neon_fp_div_s_q
+; neon_fp_div_d
+; neon_fp_div_d_q
+
+;
+; The classification below is for Crypto instructions.
+;
+; crypto_aes
+; crypto_sha1_xor
+; crypto_sha1_fast
+; crypto_sha1_slow
+; crypto_sha256_fast
+; crypto_sha256_slow
+
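The values enumerated below exist to feed instruction scheduling: each `define_insn` tags itself with one classification, and per-core pipeline descriptions reserve functional units by matching on it with `eq_attr`. A minimal sketch of both halves (hypothetical insn and unit names, assumed for illustration, not taken from this patch):

```lisp
;; An insn declares its classification via set_attr...
(define_insn "*example_addsi3"
  [(set (match_operand:SI 0 "s_register_operand" "=r")
        (plus:SI (match_operand:SI 1 "s_register_operand" "r")
                 (match_operand:SI 2 "s_register_operand" "r")))]
  ""
  "add%?\\t%0, %1, %2"
  [(set_attr "type" "alu_reg")]
)

;; ...and a core's pipeline model reserves a unit for that class
;; ("example_core_alu" is a hypothetical reservation unit).
(define_insn_reservation "example_alu" 1
  (eq_attr "type" "alu_reg,alus_reg,logic_reg,logics_reg")
  "example_core_alu")
```

This is the mechanism the rest of the patch serves: every insn that previously carried only `neon_type`, `v8type`, or no type at all is given a `type` value so the cortex-a15 and other pipeline models can classify it.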
+(define_attr "type"
+ "adc_imm,\
+ adc_reg,\
+ adcs_imm,\
+ adcs_reg,\
+ adr,\
+ alu_ext,\
+ alu_imm,\
+ alu_reg,\
+ alu_shift_imm,\
+ alu_shift_reg,\
+ alus_ext,\
+ alus_imm,\
+ alus_reg,\
+ alus_shift_imm,\
+ alus_shift_reg,\
+ bfm,\
+ block,\
+ branch,\
+ call,\
+ clz,\
+ no_insn,\
+ csel,\
+ crc,\
+ extend,\
+ f_cvt,\
+ f_cvtf2i,\
+ f_cvti2f,\
+ f_flag,\
+ f_loadd,\
+ f_loads,\
+ f_mcr,\
+ f_mcrr,\
+ f_minmaxd,\
+ f_minmaxs,\
+ f_mrc,\
+ f_mrrc,\
+ f_rintd,\
+ f_rints,\
+ f_seld,\
+ f_sels,\
+ f_stored,\
+ f_stores,\
+ faddd,\
+ fadds,\
+ fcmpd,\
+ fcmps,\
+ fconstd,\
+ fconsts,\
+ fcsel,\
+ fdivd,\
+ fdivs,\
+ ffarithd,\
+ ffariths,\
+ ffmad,\
+ ffmas,\
+ float,\
+ fmacd,\
+ fmacs,\
+ fmov,\
+ fmuld,\
+ fmuls,\
+ fsqrts,\
+ fsqrtd,\
+ load_acq,\
+ load_byte,\
+ load1,\
+ load2,\
+ load3,\
+ load4,\
+ logic_imm,\
+ logic_reg,\
+ logic_shift_imm,\
+ logic_shift_reg,\
+ logics_imm,\
+ logics_reg,\
+ logics_shift_imm,\
+ logics_shift_reg,\
+ mla,\
+ mlas,\
+ mov_imm,\
+ mov_reg,\
+ mov_shift,\
+ mov_shift_reg,\
+ mrs,\
+ mul,\
+ muls,\
+ multiple,\
+ mvn_imm,\
+ mvn_reg,\
+ mvn_shift,\
+ mvn_shift_reg,\
+ nop,\
+ rbit,\
+ rev,\
+ sdiv,\
+ shift_imm,\
+ shift_reg,\
+ smlad,\
+ smladx,\
+ smlal,\
+ smlald,\
+ smlals,\
+ smlalxy,\
+ smlawx,\
+ smlawy,\
+ smlaxy,\
+ smlsd,\
+ smlsdx,\
+ smlsld,\
+ smmla,\
+ smmul,\
+ smmulr,\
+ smuad,\
+ smuadx,\
+ smull,\
+ smulls,\
+ smulwy,\
+ smulxy,\
+ smusd,\
+ smusdx,\
+ store_rel,\
+ store1,\
+ store2,\
+ store3,\
+ store4,\
+ udiv,\
+ umaal,\
+ umlal,\
+ umlals,\
+ umull,\
+ umulls,\
+ untyped,\
+ wmmx_tandc,\
+ wmmx_tbcst,\
+ wmmx_textrc,\
+ wmmx_textrm,\
+ wmmx_tinsr,\
+ wmmx_tmcr,\
+ wmmx_tmcrr,\
+ wmmx_tmia,\
+ wmmx_tmiaph,\
+ wmmx_tmiaxy,\
+ wmmx_tmrc,\
+ wmmx_tmrrc,\
+ wmmx_tmovmsk,\
+ wmmx_torc,\
+ wmmx_torvsc,\
+ wmmx_wabs,\
+ wmmx_wabsdiff,\
+ wmmx_wacc,\
+ wmmx_wadd,\
+ wmmx_waddbhus,\
+ wmmx_waddsubhx,\
+ wmmx_waligni,\
+ wmmx_walignr,\
+ wmmx_wand,\
+ wmmx_wandn,\
+ wmmx_wavg2,\
+ wmmx_wavg4,\
+ wmmx_wcmpeq,\
+ wmmx_wcmpgt,\
+ wmmx_wmac,\
+ wmmx_wmadd,\
+ wmmx_wmax,\
+ wmmx_wmerge,\
+ wmmx_wmiawxy,\
+ wmmx_wmiaxy,\
+ wmmx_wmin,\
+ wmmx_wmov,\
+ wmmx_wmul,\
+ wmmx_wmulw,\
+ wmmx_wldr,\
+ wmmx_wor,\
+ wmmx_wpack,\
+ wmmx_wqmiaxy,\
+ wmmx_wqmulm,\
+ wmmx_wqmulwm,\
+ wmmx_wror,\
+ wmmx_wsad,\
+ wmmx_wshufh,\
+ wmmx_wsll,\
+ wmmx_wsra,\
+ wmmx_wsrl,\
+ wmmx_wstr,\
+ wmmx_wsub,\
+ wmmx_wsubaddhx,\
+ wmmx_wunpckeh,\
+ wmmx_wunpckel,\
+ wmmx_wunpckih,\
+ wmmx_wunpckil,\
+ wmmx_wxor,\
+\
+ neon_add,\
+ neon_add_q,\
+ neon_add_widen,\
+ neon_add_long,\
+ neon_qadd,\
+ neon_qadd_q,\
+ neon_add_halve,\
+ neon_add_halve_q,\
+ neon_add_halve_narrow_q,\
+\
+ neon_sub,\
+ neon_sub_q,\
+ neon_sub_widen,\
+ neon_sub_long,\
+ neon_qsub,\
+ neon_qsub_q,\
+ neon_sub_halve,\
+ neon_sub_halve_q,\
+ neon_sub_halve_narrow_q,\
+\
+ neon_abs,\
+ neon_abs_q,\
+ neon_neg,\
+ neon_neg_q,\
+ neon_qneg,\
+ neon_qneg_q,\
+ neon_qabs,\
+ neon_qabs_q,\
+ neon_abd,\
+ neon_abd_q,\
+ neon_abd_long,\
+\
+ neon_minmax,\
+ neon_minmax_q,\
+ neon_compare,\
+ neon_compare_q,\
+ neon_compare_zero,\
+ neon_compare_zero_q,\
+\
+ neon_arith_acc,\
+ neon_arith_acc_q,\
+ neon_reduc_add,\
+ neon_reduc_add_q,\
+ neon_reduc_add_long,\
+ neon_reduc_add_acc,\
+ neon_reduc_add_acc_q,\
+ neon_reduc_minmax,\
+ neon_reduc_minmax_q,\
+ neon_logic,\
+ neon_logic_q,\
+ neon_tst,\
+ neon_tst_q,\
+\
+ neon_shift_imm,\
+ neon_shift_imm_q,\
+ neon_shift_imm_narrow_q,\
+ neon_shift_imm_long,\
+ neon_shift_reg,\
+ neon_shift_reg_q,\
+ neon_shift_acc,\
+ neon_shift_acc_q,\
+ neon_sat_shift_imm,\
+ neon_sat_shift_imm_q,\
+ neon_sat_shift_imm_narrow_q,\
+ neon_sat_shift_reg,\
+ neon_sat_shift_reg_q,\
+\
+ neon_ins,\
+ neon_ins_q,\
+ neon_move,\
+ neon_move_q,\
+ neon_move_narrow_q,\
+ neon_permute,\
+ neon_permute_q,\
+ neon_zip,\
+ neon_zip_q,\
+ neon_tbl1,\
+ neon_tbl1_q,\
+ neon_tbl2,\
+ neon_tbl2_q,\
+ neon_tbl3,\
+ neon_tbl3_q,\
+ neon_tbl4,\
+ neon_tbl4_q,\
+\
+ neon_bsl,\
+ neon_bsl_q,\
+ neon_cls,\
+ neon_cls_q,\
+ neon_cnt,\
+ neon_cnt_q,\
+ neon_dup,\
+ neon_dup_q,\
+ neon_ext,\
+ neon_ext_q,\
+ neon_rbit,\
+ neon_rbit_q,\
+ neon_rev,\
+ neon_rev_q,\
+\
+ neon_mul_b,\
+ neon_mul_b_q,\
+ neon_mul_h,\
+ neon_mul_h_q,\
+ neon_mul_s,\
+ neon_mul_s_q,\
+ neon_mul_b_long,\
+ neon_mul_h_long,\
+ neon_mul_s_long,\
+ neon_mul_d_long,\
+ neon_mul_h_scalar,\
+ neon_mul_h_scalar_q,\
+ neon_mul_s_scalar,\
+ neon_mul_s_scalar_q,\
+ neon_mul_h_scalar_long,\
+ neon_mul_s_scalar_long,\
+\
+ neon_sat_mul_b,\
+ neon_sat_mul_b_q,\
+ neon_sat_mul_h,\
+ neon_sat_mul_h_q,\
+ neon_sat_mul_s,\
+ neon_sat_mul_s_q,\
+ neon_sat_mul_b_long,\
+ neon_sat_mul_h_long,\
+ neon_sat_mul_s_long,\
+ neon_sat_mul_h_scalar,\
+ neon_sat_mul_h_scalar_q,\
+ neon_sat_mul_s_scalar,\
+ neon_sat_mul_s_scalar_q,\
+ neon_sat_mul_h_scalar_long,\
+ neon_sat_mul_s_scalar_long,\
+\
+ neon_mla_b,\
+ neon_mla_b_q,\
+ neon_mla_h,\
+ neon_mla_h_q,\
+ neon_mla_s,\
+ neon_mla_s_q,\
+ neon_mla_b_long,\
+ neon_mla_h_long,\
+ neon_mla_s_long,\
+ neon_mla_h_scalar,\
+ neon_mla_h_scalar_q,\
+ neon_mla_s_scalar,\
+ neon_mla_s_scalar_q,\
+ neon_mla_h_scalar_long,\
+ neon_mla_s_scalar_long,\
+\
+ neon_sat_mla_b_long,\
+ neon_sat_mla_h_long,\
+ neon_sat_mla_s_long,\
+ neon_sat_mla_h_scalar_long,\
+ neon_sat_mla_s_scalar_long,\
+\
+ neon_to_gp,\
+ neon_to_gp_q,\
+ neon_from_gp,\
+ neon_from_gp_q,\
+\
+ neon_ldr,\
+ neon_load1_1reg,\
+ neon_load1_1reg_q,\
+ neon_load1_2reg,\
+ neon_load1_2reg_q,\
+ neon_load1_3reg,\
+ neon_load1_3reg_q,\
+ neon_load1_4reg,\
+ neon_load1_4reg_q,\
+ neon_load1_all_lanes,\
+ neon_load1_all_lanes_q,\
+ neon_load1_one_lane,\
+ neon_load1_one_lane_q,\
+\
+ neon_load2_2reg,\
+ neon_load2_2reg_q,\
+ neon_load2_4reg,\
+ neon_load2_4reg_q,\
+ neon_load2_all_lanes,\
+ neon_load2_all_lanes_q,\
+ neon_load2_one_lane,\
+ neon_load2_one_lane_q,\
+\
+ neon_load3_3reg,\
+ neon_load3_3reg_q,\
+ neon_load3_all_lanes,\
+ neon_load3_all_lanes_q,\
+ neon_load3_one_lane,\
+ neon_load3_one_lane_q,\
+\
+ neon_load4_4reg,\
+ neon_load4_4reg_q,\
+ neon_load4_all_lanes,\
+ neon_load4_all_lanes_q,\
+ neon_load4_one_lane,\
+ neon_load4_one_lane_q,\
+\
+ neon_str,\
+ neon_store1_1reg,\
+ neon_store1_1reg_q,\
+ neon_store1_2reg,\
+ neon_store1_2reg_q,\
+ neon_store1_3reg,\
+ neon_store1_3reg_q,\
+ neon_store1_4reg,\
+ neon_store1_4reg_q,\
+ neon_store1_one_lane,\
+ neon_store1_one_lane_q,\
+\
+ neon_store2_2reg,\
+ neon_store2_2reg_q,\
+ neon_store2_4reg,\
+ neon_store2_4reg_q,\
+ neon_store2_one_lane,\
+ neon_store2_one_lane_q,\
+\
+ neon_store3_3reg,\
+ neon_store3_3reg_q,\
+ neon_store3_one_lane,\
+ neon_store3_one_lane_q,\
+\
+ neon_store4_4reg,\
+ neon_store4_4reg_q,\
+ neon_store4_one_lane,\
+ neon_store4_one_lane_q,\
+\
+ neon_fp_abs_s,\
+ neon_fp_abs_s_q,\
+ neon_fp_abs_d,\
+ neon_fp_abs_d_q,\
+ neon_fp_neg_s,\
+ neon_fp_neg_s_q,\
+ neon_fp_neg_d,\
+ neon_fp_neg_d_q,\
+\
+ neon_fp_abd_s,\
+ neon_fp_abd_s_q,\
+ neon_fp_abd_d,\
+ neon_fp_abd_d_q,\
+ neon_fp_addsub_s,\
+ neon_fp_addsub_s_q,\
+ neon_fp_addsub_d,\
+ neon_fp_addsub_d_q,\
+ neon_fp_compare_s,\
+ neon_fp_compare_s_q,\
+ neon_fp_compare_d,\
+ neon_fp_compare_d_q,\
+ neon_fp_minmax_s,\
+ neon_fp_minmax_s_q,\
+ neon_fp_minmax_d,\
+ neon_fp_minmax_d_q,\
+\
+ neon_fp_reduc_add_s,\
+ neon_fp_reduc_add_s_q,\
+ neon_fp_reduc_add_d,\
+ neon_fp_reduc_add_d_q,\
+ neon_fp_reduc_minmax_s,\
+ neon_fp_reduc_minmax_s_q,\
+ neon_fp_reduc_minmax_d,\
+ neon_fp_reduc_minmax_d_q,\
+\
+ neon_fp_cvt_narrow_s_q,\
+ neon_fp_cvt_narrow_d_q,\
+ neon_fp_cvt_widen_h,\
+ neon_fp_cvt_widen_s,\
+\
+ neon_fp_to_int_s,\
+ neon_fp_to_int_s_q,\
+ neon_fp_to_int_d,\
+ neon_fp_to_int_d_q,\
+ neon_int_to_fp_s,\
+ neon_int_to_fp_s_q,\
+ neon_int_to_fp_d,\
+ neon_int_to_fp_d_q,\
+ neon_fp_round_s,\
+ neon_fp_round_s_q,\
+ neon_fp_round_d,\
+ neon_fp_round_d_q,\
+\
+ neon_fp_recpe_s,\
+ neon_fp_recpe_s_q,\
+ neon_fp_recpe_d,\
+ neon_fp_recpe_d_q,\
+ neon_fp_recps_s,\
+ neon_fp_recps_s_q,\
+ neon_fp_recps_d,\
+ neon_fp_recps_d_q,\
+ neon_fp_recpx_s,\
+ neon_fp_recpx_s_q,\
+ neon_fp_recpx_d,\
+ neon_fp_recpx_d_q,\
+\
+ neon_fp_rsqrte_s,\
+ neon_fp_rsqrte_s_q,\
+ neon_fp_rsqrte_d,\
+ neon_fp_rsqrte_d_q,\
+ neon_fp_rsqrts_s,\
+ neon_fp_rsqrts_s_q,\
+ neon_fp_rsqrts_d,\
+ neon_fp_rsqrts_d_q,\
+\
+ neon_fp_mul_s,\
+ neon_fp_mul_s_q,\
+ neon_fp_mul_s_scalar,\
+ neon_fp_mul_s_scalar_q,\
+ neon_fp_mul_d,\
+ neon_fp_mul_d_q,\
+ neon_fp_mul_d_scalar_q,\
+\
+ neon_fp_mla_s,\
+ neon_fp_mla_s_q,\
+ neon_fp_mla_s_scalar,\
+ neon_fp_mla_s_scalar_q,\
+ neon_fp_mla_d,\
+ neon_fp_mla_d_q,\
+ neon_fp_mla_d_scalar_q,\
+\
+ neon_fp_sqrt_s,\
+ neon_fp_sqrt_s_q,\
+ neon_fp_sqrt_d,\
+ neon_fp_sqrt_d_q,\
+ neon_fp_div_s,\
+ neon_fp_div_s_q,\
+ neon_fp_div_d,\
+ neon_fp_div_d_q,\
+\
+ crypto_aes,\
+ crypto_sha1_xor,\
+ crypto_sha1_fast,\
+ crypto_sha1_slow,\
+ crypto_sha256_fast,\
+ crypto_sha256_slow"
+ (const_string "untyped"))
+
+; Is this an (integer side) multiply with a 32-bit (or smaller) result?
+(define_attr "mul32" "no,yes"
+ (if_then_else
+ (eq_attr "type"
+ "smulxy,smlaxy,smulwy,smlawx,mul,muls,mla,mlas,smlawy,smuad,smuadx,\
+ smlad,smladx,smusd,smusdx,smlsd,smlsdx,smmul,smmulr,smmla,smlald,smlsld")
+ (const_string "yes")
+ (const_string "no")))
+
+; Is this an (integer side) multiply with a 64-bit result?
+(define_attr "mul64" "no,yes"
+ (if_then_else
+ (eq_attr "type"
+ "smlalxy,umull,umulls,umaal,umlal,umlals,smull,smulls,smlal,smlals")
+ (const_string "yes")
+ (const_string "no")))
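[Reviewer note, not part of the patch: the point of the derived `mul32`/`mul64` attributes added above is that pipeline models can test one attribute instead of enumerating every multiply type. A minimal sketch of such a consumer is below — the reservation name, tune test, latency, and unit string are illustrative only; see `config/arm/cortex-a15.md` in the tree for the real models.]

```
;; Hypothetical sketch: a scheduler description keying on the
;; derived "mul32" attribute rather than listing smulxy, mla,
;; smmul, etc. individually.  Names and latency are invented
;; for illustration.
(define_insn_reservation "example_mult32" 3
  (and (eq_attr "tune" "cortexa15")
       (eq_attr "mul32" "yes"))
  "example_mx_unit")
```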
diff --git a/gcc/config/arm/vfp.md b/gcc/config/arm/vfp.md
index 6fd2323a1cc..0e171af1ef7 100644
--- a/gcc/config/arm/vfp.md
+++ b/gcc/config/arm/vfp.md
@@ -53,8 +53,7 @@
}
"
[(set_attr "predicable" "yes")
- (set_attr "type" "mov_reg,mov_reg,mvn_imm,mov_imm,load1,store1,r_2_f,f_2_r,fcpys,f_loads,f_stores")
- (set_attr "neon_type" "*,*,*,*,*,*,neon_mcr,neon_mrc,neon_vmov,*,*")
+ (set_attr "type" "mov_reg,mov_reg,mvn_imm,mov_imm,load1,store1,f_mcr,f_mrc,fmov,f_loads,f_stores")
(set_attr "pool_range" "*,*,*,*,4096,*,*,*,*,1020,*")
(set_attr "neg_pool_range" "*,*,*,*,4084,*,*,*,*,1008,*")]
)
@@ -101,9 +100,8 @@
"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "yes,no,yes,no,no,no,no,no,no,no,no,no,no,no")
- (set_attr "type" "mov_reg,mov_reg,mov_reg,mvn_reg,mov_reg,load1,load1,store1,store1,r_2_f,f_2_r,fcpys,f_loads,f_stores")
+ (set_attr "type" "mov_reg,mov_reg,mov_reg,mvn_reg,mov_reg,load1,load1,store1,store1,f_mcr,f_mrc,fmov,f_loads,f_stores")
(set_attr "length" "2,4,2,4,4,4,4,4,4,4,4,4,4,4")
- (set_attr "neon_type" "*,*,*,*,*,*,*,*,*,neon_mcr,neon_mrc,neon_vmov,*,*")
(set_attr "pool_range" "*,*,*,*,*,1018,4094,*,*,*,*,*,1018,*")
(set_attr "neg_pool_range" "*,*,*,*,*, 0, 0,*,*,*,*,*,1008,*")]
)
@@ -146,8 +144,7 @@
gcc_unreachable ();
}
"
- [(set_attr "type" "*,*,*,*,load2,load2,store2,r_2_f,f_2_r,ffarithd,f_loadd,f_stored")
- (set_attr "neon_type" "*,*,*,*,*,*,*,neon_mcr_2_mcrr,neon_mrrc,neon_vmov,*,*")
+ [(set_attr "type" "multiple,multiple,multiple,multiple,load2,load2,store2,f_mcrr,f_mrrc,ffarithd,f_loadd,f_stored")
(set (attr "length") (cond [(eq_attr "alternative" "1,4,5,6") (const_int 8)
(eq_attr "alternative" "2") (const_int 12)
(eq_attr "alternative" "3") (const_int 16)
@@ -195,8 +192,7 @@
gcc_unreachable ();
}
"
- [(set_attr "type" "*,*,*,*,load2,load2,store2,r_2_f,f_2_r,ffarithd,f_loadd,f_stored")
- (set_attr "neon_type" "*,*,*,*,*,*,*,neon_mcr_2_mcrr,neon_mrrc,neon_vmov,*,*")
+ [(set_attr "type" "multiple,multiple,multiple,multiple,load2,load2,store2,f_mcrr,f_mrrc,ffarithd,f_loadd,f_stored")
(set (attr "length") (cond [(eq_attr "alternative" "1") (const_int 8)
(eq_attr "alternative" "2") (const_int 12)
(eq_attr "alternative" "3") (const_int 16)
@@ -264,8 +260,8 @@
}
"
[(set_attr "conds" "unconditional")
- (set_attr "type" "*,*,load1,store1,fcpys,*,r_2_f,f_2_r,*")
- (set_attr "neon_type" "neon_vld1_1_2_regs,neon_vst1_1_2_regs_vst2_2_regs,*,*,*,*,*,*,*")
+ (set_attr "type" "neon_load1_1reg,neon_store1_1reg,\
+ load1,store1,fmov,mov_reg,f_mcr,f_mrc,multiple")
(set_attr "length" "4,4,4,4,4,4,4,4,8")]
)
@@ -315,7 +311,7 @@
}
"
[(set_attr "conds" "unconditional")
- (set_attr "type" "load1,store1,fcpys,*,r_2_f,f_2_r,*")
+ (set_attr "type" "load1,store1,fmov,mov_reg,f_mcr,f_mrc,multiple")
(set_attr "length" "4,4,4,4,4,4,8")]
)
@@ -355,8 +351,7 @@
"
[(set_attr "predicable" "yes")
(set_attr "type"
- "r_2_f,f_2_r,fconsts,f_loads,f_stores,load1,store1,fcpys,mov_reg")
- (set_attr "neon_type" "neon_mcr,neon_mrc,*,*,*,*,*,neon_vmov,*")
+ "f_mcr,f_mrc,fconsts,f_loads,f_stores,load1,store1,fmov,mov_reg")
(set_attr "pool_range" "*,*,*,1020,*,4096,*,*,*")
(set_attr "neg_pool_range" "*,*,*,1008,*,4080,*,*,*")]
)
@@ -393,8 +388,7 @@
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
(set_attr "type"
- "r_2_f,f_2_r,fconsts,f_loads,f_stores,load1,store1,fcpys,mov_reg")
- (set_attr "neon_type" "neon_mcr,neon_mrc,*,*,*,*,*,neon_vmov,*")
+ "f_mcr,f_mrc,fconsts,f_loads,f_stores,load1,store1,fmov,mov_reg")
(set_attr "pool_range" "*,*,*,1018,*,4090,*,*,*")
(set_attr "neg_pool_range" "*,*,*,1008,*,0,*,*,*")]
)
@@ -434,9 +428,8 @@
}
}
"
- [(set_attr "type"
- "r_2_f,f_2_r,fconstd,f_loadd,f_stored,load2,store2,ffarithd,*")
- (set_attr "neon_type" "neon_mcr_2_mcrr,neon_mrrc,*,*,*,*,*,neon_vmov,*")
+ [(set_attr "type" "f_mcrr,f_mrrc,fconstd,f_loadd,f_stored,\
+ load2,store2,ffarithd,multiple")
(set (attr "length") (cond [(eq_attr "alternative" "5,6,8") (const_int 8)
(eq_attr "alternative" "7")
(if_then_else
@@ -480,9 +473,8 @@
}
}
"
- [(set_attr "type"
- "r_2_f,f_2_r,fconstd,f_loadd,f_stored,load2,store2,ffarithd,*")
- (set_attr "neon_type" "neon_mcr_2_mcrr,neon_mrrc,*,*,*,*,*,neon_vmov,*")
+ [(set_attr "type" "f_mcrr,f_mrrc,fconstd,f_loadd,\
+ f_stored,load2,store2,ffarithd,multiple")
(set (attr "length") (cond [(eq_attr "alternative" "5,6,8") (const_int 8)
(eq_attr "alternative" "7")
(if_then_else
@@ -517,8 +509,7 @@
fmrs%D3\\t%0, %2\;fmrs%d3\\t%0, %1"
[(set_attr "conds" "use")
(set_attr "length" "4,4,8,4,4,8,4,4,8")
- (set_attr "type" "fcpys,fcpys,fcpys,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")
- (set_attr "neon_type" "neon_vmov,neon_vmov,neon_vmov,neon_mcr,neon_mcr,neon_mcr,neon_mrc,neon_mrc,neon_mrc")]
+ (set_attr "type" "fmov,fmov,fmov,f_mcr,f_mcr,f_mcr,f_mrc,f_mrc,f_mrc")]
)
(define_insn "*thumb2_movsfcc_vfp"
@@ -541,8 +532,7 @@
ite\\t%D3\;fmrs%D3\\t%0, %2\;fmrs%d3\\t%0, %1"
[(set_attr "conds" "use")
(set_attr "length" "6,6,10,6,6,10,6,6,10")
- (set_attr "type" "fcpys,fcpys,fcpys,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")
- (set_attr "neon_type" "neon_vmov,neon_vmov,neon_vmov,neon_mcr,neon_mcr,neon_mcr,neon_mrc,neon_mrc,neon_mrc")]
+ (set_attr "type" "fmov,fmov,fmov,f_mcr,f_mcr,f_mcr,f_mrc,f_mrc,f_mrc")]
)
(define_insn "*movdfcc_vfp"
@@ -565,8 +555,7 @@
fmrrd%D3\\t%Q0, %R0, %P2\;fmrrd%d3\\t%Q0, %R0, %P1"
[(set_attr "conds" "use")
(set_attr "length" "4,4,8,4,4,8,4,4,8")
- (set_attr "type" "ffarithd,ffarithd,ffarithd,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")
- (set_attr "neon_type" "neon_vmov,neon_vmov,neon_vmov,neon_mcr_2_mcrr,neon_mcr_2_mcrr,neon_mcr_2_mcrr,neon_mrrc,neon_mrrc,neon_mrrc")]
+ (set_attr "type" "ffarithd,ffarithd,ffarithd,f_mcr,f_mcr,f_mcr,f_mrrc,f_mrrc,f_mrrc")]
)
(define_insn "*thumb2_movdfcc_vfp"
@@ -589,8 +578,7 @@
ite\\t%D3\;fmrrd%D3\\t%Q0, %R0, %P2\;fmrrd%d3\\t%Q0, %R0, %P1"
[(set_attr "conds" "use")
(set_attr "length" "6,6,10,6,6,10,6,6,10")
- (set_attr "type" "ffarithd,ffarithd,ffarithd,r_2_f,r_2_f,r_2_f,f_2_r,f_2_r,f_2_r")
- (set_attr "neon_type" "neon_vmov,neon_vmov,neon_vmov,neon_mcr_2_mcrr,neon_mcr_2_mcrr,neon_mcr_2_mcrr,neon_mrrc,neon_mrrc,neon_mrrc")]
+ (set_attr "type" "ffarithd,ffarithd,ffarithd,f_mcr,f_mcr,f_mcrr,f_mrrc,f_mrrc,f_mrrc")]
)
@@ -1003,7 +991,7 @@
"ftosizs%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "f_cvt")]
+ (set_attr "type" "f_cvtf2i")]
)
(define_insn "*truncsidf2_vfp"
@@ -1013,7 +1001,7 @@
"ftosizd%?\\t%0, %P1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "f_cvt")]
+ (set_attr "type" "f_cvtf2i")]
)
@@ -1024,7 +1012,7 @@
"ftouizs%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "f_cvt")]
+ (set_attr "type" "f_cvtf2i")]
)
(define_insn "fixuns_truncdfsi2"
@@ -1034,7 +1022,7 @@
"ftouizd%?\\t%0, %P1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "f_cvt")]
+ (set_attr "type" "f_cvtf2i")]
)
@@ -1045,7 +1033,7 @@
"fsitos%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "f_cvt")]
+ (set_attr "type" "f_cvti2f")]
)
(define_insn "*floatsidf2_vfp"
@@ -1055,7 +1043,7 @@
"fsitod%?\\t%P0, %1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "f_cvt")]
+ (set_attr "type" "f_cvti2f")]
)
@@ -1066,7 +1054,7 @@
"fuitos%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "f_cvt")]
+ (set_attr "type" "f_cvti2f")]
)
(define_insn "floatunssidf2"
@@ -1076,7 +1064,7 @@
"fuitod%?\\t%P0, %1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "f_cvt")]
+ (set_attr "type" "f_cvti2f")]
)
@@ -1089,7 +1077,7 @@
"fsqrts%?\\t%0, %1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "fdivs")]
+ (set_attr "type" "fsqrts")]
)
(define_insn "*sqrtdf2_vfp"
@@ -1099,7 +1087,7 @@
"fsqrtd%?\\t%P0, %P1"
[(set_attr "predicable" "yes")
(set_attr "predicable_short_it" "no")
- (set_attr "type" "fdivd")]
+ (set_attr "type" "fsqrtd")]
)
@@ -1241,7 +1229,7 @@
"TARGET_32BIT && TARGET_HARD_FLOAT && TARGET_VFP3 && !flag_rounding_math"
"vcvt%?.f32.<FCVTI32typename>\\t%0, %1, %v2"
[(set_attr "predicable" "yes")
- (set_attr "type" "f_cvt")]
+ (set_attr "type" "f_cvti2f")]
)
;; Not the ideal way of implementing this. Ideally we would be able to split
@@ -1259,7 +1247,7 @@
vmov%?.f64\\t%P0, %1, %1\;vcvt%?.f64.<FCVTI32typename>\\t%P0, %P0, %v2"
[(set_attr "predicable" "yes")
(set_attr "ce_count" "2")
- (set_attr "type" "f_cvt")
+ (set_attr "type" "f_cvti2f")
(set_attr "length" "8")]
)
diff --git a/gcc/config/arm/vfp11.md b/gcc/config/arm/vfp11.md
index b027fe6c3cd..4cfa69efc24 100644
--- a/gcc/config/arm/vfp11.md
+++ b/gcc/config/arm/vfp11.md
@@ -51,12 +51,13 @@
(define_insn_reservation "vfp_ffarith" 4
(and (eq_attr "generic_vfp" "yes")
- (eq_attr "type" "fcpys,ffariths,ffarithd,fcmps,fcmpd"))
+ (eq_attr "type" "fmov,ffariths,ffarithd,fcmps,fcmpd"))
"fmac")
(define_insn_reservation "vfp_farith" 8
(and (eq_attr "generic_vfp" "yes")
- (eq_attr "type" "fadds,faddd,fconsts,fconstd,f_cvt,fmuls,fmacs,ffmas"))
+ (eq_attr "type" "fadds,faddd,fconsts,fconstd,f_cvt,f_cvtf2i,f_cvti2f,\
+ fmuls,fmacs,ffmas"))
"fmac")
(define_insn_reservation "vfp_fmul" 9
@@ -66,23 +67,23 @@
(define_insn_reservation "vfp_fdivs" 19
(and (eq_attr "generic_vfp" "yes")
- (eq_attr "type" "fdivs"))
+ (eq_attr "type" "fdivs, fsqrts"))
"ds*15")
(define_insn_reservation "vfp_fdivd" 33
(and (eq_attr "generic_vfp" "yes")
- (eq_attr "type" "fdivd"))
+ (eq_attr "type" "fdivd, fsqrtd"))
"fmac+ds*29")
;; Moves to/from arm regs also use the load/store pipeline.
(define_insn_reservation "vfp_fload" 4
(and (eq_attr "generic_vfp" "yes")
- (eq_attr "type" "f_loads,f_loadd,r_2_f"))
+ (eq_attr "type" "f_loads,f_loadd,f_mcr,f_mcrr"))
"vfp_ls")
(define_insn_reservation "vfp_fstore" 4
(and (eq_attr "generic_vfp" "yes")
- (eq_attr "type" "f_stores,f_stored,f_2_r"))
+ (eq_attr "type" "f_stores,f_stored,f_mrc,f_mrrc"))
"vfp_ls")
(define_insn_reservation "vfp_to_cpsr" 4
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 2d36ca5c8ee..e2a848f31a0 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -9610,7 +9610,7 @@ Here's an example of int iterators in action, taken from the ARM port:
QABSNEG))]
"TARGET_NEON"
"vq<absneg>.<V_s_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_vqneg_vqabs")]
+ [(set_attr "type" "neon_vqneg_vqabs")]
)
@end smallexample
@@ -9625,7 +9625,7 @@ This is equivalent to:
UNSPEC_VQABS))]
"TARGET_NEON"
"vqabs.<V_s_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_vqneg_vqabs")]
+ [(set_attr "type" "neon_vqneg_vqabs")]
)
(define_insn "neon_vqneg<mode>"
@@ -9635,7 +9635,7 @@ This is equivalent to:
UNSPEC_VQNEG))]
"TARGET_NEON"
"vqneg.<V_s_elem>\t%<V_reg>0, %<V_reg>1"
- [(set_attr "neon_type" "neon_vqneg_vqabs")]
+ [(set_attr "type" "neon_vqneg_vqabs")]
)
@end smallexample
diff --git a/gcc/testsuite/ChangeLog.linaro b/gcc/testsuite/ChangeLog.linaro
index e82380c5954..de96675ab6e 100644
--- a/gcc/testsuite/ChangeLog.linaro
+++ b/gcc/testsuite/ChangeLog.linaro
@@ -1,5 +1,15 @@
2014-04-07 Michael Collison <michael.collison@linaro.org>
+ Backport from trunk r204784
+ 2013-11-14 James Greenhalgh <james.greenhalgh@arm.com>
+
+ * gcc.target/aarch64/cpu-diagnostics-2.c: Change "-mcpu="
+ to "cortex-a53".
+ * gcc.target/aarch64/cpu-diagnostics-3.c: Change "-mcpu="
+ to "cortex-a53".
+
+2014-04-07 Michael Collison <michael.collison@linaro.org>
+
Backport from trunk r202663
2013-09-17 Cong Hou <congh@google.com>
diff --git a/gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-2.c b/gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-2.c
index 284971d832c..2ca006598ff 100644
--- a/gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-2.c
+++ b/gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-2.c
@@ -1,5 +1,5 @@
/* { dg-error "missing" "" {target "aarch64*-*-*" } } */
-/* { dg-options "-O2 -mcpu=example-1+no" } */
+/* { dg-options "-O2 -mcpu=cortex-a53+no" } */
void f ()
{
diff --git a/gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-3.c b/gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-3.c
index 4e5d17c3b82..155def05155 100644
--- a/gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-3.c
+++ b/gcc/testsuite/gcc.target/aarch64/cpu-diagnostics-3.c
@@ -1,5 +1,5 @@
/* { dg-error "unknown" "" {target "aarch64*-*-*" } } */
-/* { dg-options "-O2 -mcpu=example-1+dummy" } */
+/* { dg-options "-O2 -mcpu=cortex-a53+dummy" } */
void f ()
{