author     Tobias Burnus <burnus@net-b.de>  2006-12-19 17:14:22 +0000
committer  Tobias Burnus <burnus@net-b.de>  2006-12-19 17:14:22 +0000
commit     bb96b0453d05f1d51e2c4979bd7e22fed0d937b2 (patch)
tree       d178a75abfca195f620e69803f678321edecad5a
parent     a868b42a235bf7c21f86b57faa2112b985dfbbab (diff)
Merged revisions 119412-119459 via svnmerge from
svn+ssh://gcc.gnu.org/svn/gcc/trunk

........
r119412 | tkoenig | 2006-12-01 22:04:38 +0100 (Fr, 01 Dez 2006) | 70 lines

2006-12-01  Thomas Koenig  <Thomas.Koenig@online.de>

	PR libfortran/29568
	* gfortran.dg/convert_implied_open.f90: Change to new default
	record length.
	* gfortran.dg/unf_short_record_1.f90: Adapt to new error message.
	* gfortran.dg/unformatted_subrecords_1.f90: New test.

2006-12-01  Thomas Koenig  <Thomas.Koenig@online.de>

	PR libfortran/29568
	* gfortran.h (gfc_option_t): Add max_subrecord_length.
	(top level): Define MAX_SUBRECORD_LENGTH.
	* lang.opt: Add option -fmax-subrecord-length=.
	* trans-decl.c: Add new function set_max_subrecord_length.
	(gfc_generate_function_code): If we are within the main program
	and max_subrecord_length has been set, call
	set_max_subrecord_length.
	* options.c (gfc_init_options): Add defaults for
	max_subrecord_length, convert and record_marker.
	(gfc_handle_option): Add handling for -fmax_subrecord_length.
	* invoke.texi: Document the new default for -frecord-marker=<n>.

2006-12-01  Thomas Koenig  <Thomas.Koenig@online.de>

	PR libfortran/29568
	* libgfortran/libgfortran.h (compile_options_t): Add
	record_marker.
	(top level): Define GFC_MAX_SUBRECORD_LENGTH.
	* runtime/compile_options.c (set_record_marker): Change default
	to four-byte record marker.
	(set_max_subrecord_length): New function.
	* runtime/error.c (translate_error): Change error message for
	short record on unformatted read.
	* io/io.h (gfc_unit): Add recl_subrecord, bytes_left_subrecord
	and continued.
	* io/file_pos.c (unformatted_backspace): Change default of
	record marker size to four bytes.  Loop over subrecords.
	* io/open.c: Default recl is max_offset.  If
	compile_options.max_subrecord_length has been set, set
	u->recl_subrecord to its value, to the maximum value otherwise.
	* io/transfer.c (top level): Add prototypes for us_read,
	us_write, next_record_r_unf and next_record_w_unf.
	(read_block_direct): Separate codepaths for unformatted direct
	and unformatted sequential.  If a recl has been set by the user,
	use the number of bytes left for the record if it is smaller
	than the read request.  Loop over subrecords.  Set an error if
	the user has set a recl and the read was short.
	(write_buf): Separate codepaths for unformatted direct and
	unformatted sequential.  If a recl has been set by the user, use
	the number of bytes left for the record if it is smaller than
	the read request.  Loop over subrecords.  Set an error if the
	user has set a recl and the read was short.
	(us_read): Add parameter continued (to indicate that bytes_left
	should not be initialized).  Change default of record marker
	size to four bytes.  Use subrecord.  If the subrecord length is
	smaller than zero, this indicates a continuation.
	(us_write): Add parameter continued (to indicate that the
	continued flag should be set).  Use subrecord.
	(pre_position): Use 0 for continued on us_write and us_read
	calls.
	(skip_record): New function.
	(next_record_r_unf): New function.
	(next_record_r): Use next_record_r_unf.
	(write_us_marker): Default size for record markers is four
	bytes.
	(next_record_w_unf): New function.
	(next_record_w): Use next_record_w_unf.
........
r119415 | reichelt | 2006-12-01 22:28:35 +0100 (Fr, 01 Dez 2006) | 5 lines

	PR c++/30021
	* c-common.c (check_main_parameter_types): Check for
	error_mark_node.
	* g++.dg/other/main1.C: New test.
........
r119416 | reichelt | 2006-12-01 22:35:25 +0100 (Fr, 01 Dez 2006) | 7 lines

	PR c++/30022
	* typeck.c (type_after_usual_arithmetic_conversions): Fix
	assertion for vector types.
	(build_binary_op): Use temporary for inner type of vector types.
	* g++.dg/ext/vector5.C: New test.
........
r119421 | tsmigiel | 2006-12-01 23:43:18 +0100 (Fr, 01 Dez 2006) | 8 lines

	* config/spu/predicates.md (spu_mov_operand): Add.
	* config/spu/spu.c (spu_expand_extv): Remove unused code.
	(print_operand_address, print_operand): Handle addresses
	containing AND.
	(spu_split_load, spu_split_store): Use updated movti pattern.
	* config/spu/spu.md: (_mov<mode>, _movdi, _movti): Handle loads
	and stores in mov patterns for correct operation of reload.
	(lq, lq_<mode>, stq, stq_<mode>): Remove.
........
r119422 | ebotcazou | 2006-12-01 23:46:45 +0100 (Fr, 01 Dez 2006) | 6 lines

	* fold-const.c (fold_binary) <LT_EXPR>: Use the precision of the
	type instead of the size of its mode to compute the highest and
	lowest possible values.  Still check the size of the mode before
	flipping the signedness of the comparison.
........
r119424 | tsmigiel | 2006-12-01 23:51:06 +0100 (Fr, 01 Dez 2006) | 19 lines

	* config/spu/spu.c (spu_immediate): Remove trailing comma.
	(reloc_diagnostic): Call warning when -mwarn-reloc is specified.
	* config/spu/spu.md: (zero_extendhisi2): Expand instead of split
	for better optimization.
	(floatv4siv4sf2): New.
	(fix_truncv4sfv4si2): New.
	(floatunsv4siv4sf2): New.
	(fixuns_truncv4sfv4si2): New.
	(addv16qi3): New.
	(subv16qi3): New.
	(negv16qi2): New.
	(mulv8hi3): New.
	(mulsi3): Remove.
	(mul<mode>3): New.
	(_mulv4si3): New.
	(cmp<mode>): Don't accept constant arguments for DI, TI and SF.
	* config/spu/spu_internals.h: Handle overloaded intrinsics in
	C++ with spu_resolve_overloaded_builtin instead of static inline
	functions.
........
r119427 | geoffk | 2006-12-02 00:01:05 +0100 (Sa, 02 Dez 2006) | 10 lines

	* decl.c (poplevel): Check DECL_INITIAL invariant.
	(duplicate_decls): Preserve DECL_INITIAL when eliminating a new
	definition in favour of an old declaration.
	(start_preparsed_function): Define and document value of
	DECL_INITIAL before and after routine.
	(finish_function): Check DECL_INITIAL invariant.
	* parser.c
	(cp_parser_function_definition_from_specifiers_and_declarator):
	Skip duplicate function definitions.
........
r119433 | gccadmin | 2006-12-02 01:17:43 +0100 (Sa, 02 Dez 2006) | 1 line

	Daily bump.
........
r119435 | paolo | 2006-12-02 01:31:34 +0100 (Sa, 02 Dez 2006) | 5 lines

2006-12-01  Paolo Carlini  <pcarlini@suse.de>

	* include/ext/mt_allocator.h (__pool_base::_M_get_align): Remove
	redundant const qualifier on the return type.
........
r119437 | kazu | 2006-12-02 02:03:11 +0100 (Sa, 02 Dez 2006) | 4 lines

	* Makefile.in, mingw32.h, trans.c: Fix comment typos.
	* gnat_rm.texi, gnat_ugn.texi: Follow spelling conventions.
	Fix typos.
........
r119440 | kazu | 2006-12-02 02:44:17 +0100 (Sa, 02 Dez 2006) | 2 lines

	* name-lookup.c: Follow spelling conventions.
........
r119441 | kazu | 2006-12-02 03:06:52 +0100 (Sa, 02 Dez 2006) | 2 lines

	* doc/extend.texi, doc/invoke.texi, doc/md.texi: Fix typos.
........
r119442 | kazu | 2006-12-02 03:26:04 +0100 (Sa, 02 Dez 2006) | 13 lines

	* builtins.c, cfgloop.h, cgraph.h, config/arm/arm.c,
	config/i386/i386.c, config/i386/i386.h, config/mips/mips.h,
	config/rs6000/cell.md, config/rs6000/rs6000.c, config/sh/sh.c,
	config/sh/sh4-300.md, config/spu/spu-builtins.def,
	config/spu/spu-c.c, config/spu/spu-modes.def,
	config/spu/spu.c, config/spu/spu.md,
	config/spu/spu_internals.h, config/spu/vmx2spu.h,
	fold-const.c, fwprop.c, predict.c, tree-data-ref.h,
	tree-flow.h, tree-ssa-loop-manip.c, tree-ssa-loop-niter.c,
	tree-ssa-pre.c, tree-vect-analyze.c, tree-vect-transform.c,
	tree-vectorizer.c, tree-vrp.c: Fix comment typos.  Follow
	spelling conventions.
........
r119443 | kazu | 2006-12-02 03:47:07 +0100 (Sa, 02 Dez 2006) | 2 lines

	* config/i386/i386.c: Fix a comment typo.
........
r119445 | hubicka | 2006-12-02 14:16:27 +0100 (Sa, 02 Dez 2006) | 6 lines

	* config/i386/i386.c (pentium4_cost, nocona_cost): Update 32bit
	memcpy/memset descriptors.
	(decide_alg): With -minline-all-stringops and sizes that are best
	to be copied via libcall still work hard enough to pick
	non-libcall strategy.
........
r119446 | lmillward | 2006-12-02 17:34:26 +0100 (Sa, 02 Dez 2006) | 5 lines

	PR c/27953
	* c-decl.c (store_parm_decls_oldstyle): Robustify.
	* gcc.dg/pr27953.c: New test.
........
r119447 | ghazi | 2006-12-02 17:52:15 +0100 (Sa, 02 Dez 2006) | 12 lines

	* configure.in: Update MPFR version in error message.
	* configure: Regenerate.

	gcc:
	* doc/install.texi: Update recommended MPFR version.  Remove
	obsolete reference to cumulative patch.

	gcc/testsuite:
	* gcc.dg/torture/builtin-sin-mpfr-1.c: Update MPFR comment.
........
r119448 | lmillward | 2006-12-02 17:54:35 +0100 (Sa, 02 Dez 2006) | 3 lines

	Fix testcase from previous commit.
........
r119449 | pinskia | 2006-12-02 18:01:04 +0100 (Sa, 02 Dez 2006) | 12 lines

2006-12-02  Andrew Pinski  <andrew_pinski@playstation.sony.com>

	PR C++/30033
	* decl.c (cp_tree_node_structure): Handle STATIC_ASSERT.

2006-12-02  Andrew Pinski  <andrew_pinski@playstation.sony.com>

	PR C++/30033
	* g++.dg/cpp0x/static_assert4.C: New testcase.
........
r119450 | paolo | 2006-12-02 18:06:57 +0100 (Sa, 02 Dez 2006) | 7 lines

2006-12-02  Howard Hinnant  <hhinnant@apple.com>

	* acinclude.m4: Allow OPTIMIZE_CXXFLAGS to be set by
	configure.host.
	* configure.host: Set OPTIMIZE_CXXFLAGS to
	-fvisibility-inlines-hidden for x86/darwin.
	* configure: Regenerate.
........
r119452 | ebotcazou | 2006-12-02 21:01:34 +0100 (Sa, 02 Dez 2006) | 3 lines

	* configure.tgt: Force initial-exec TLS model on Linux only.
........
r119454 | hjl | 2006-12-02 23:18:25 +0100 (Sa, 02 Dez 2006) | 14 lines

2006-12-02  H.J. Lu  <hongjiu.lu@intel.com>

	PR target/30040
	* config/i386/driver-i386.c: Include "coretypes.h" and "tm.h".
	(bit_SSSE3): New.
	(host_detect_local_cpu): Check -mtune= vs. -march=.  Rewrite
	processor detection.
	* config/i386/i386.h (CC1_CPU_SPEC): Add -mtune=native for
	-march=native if there is no -mtune=*.
	* config/i386/x-i386 (driver-i386.o): Also depend on $(TM_H)
	coretypes.h.
........
r119459 | gccadmin | 2006-12-03 01:17:51 +0100 (So, 03 Dez 2006) | 1 line

	Daily bump.
........
git-svn-id: https://gcc.gnu.org/svn/gcc/branches/fortran-experiments@120053 138bc75d-0d04-0410-961f-82ee72b054a4
-rw-r--r--  ChangeLog | 6
-rwxr-xr-x  configure | 2
-rw-r--r--  configure.in | 2
-rw-r--r--  gcc/ChangeLog | 95
-rw-r--r--  gcc/DATESTAMP | 2
-rw-r--r--  gcc/ada/ChangeLog | 6
-rw-r--r--  gcc/ada/Makefile.in | 2
-rw-r--r--  gcc/ada/gnat_rm.texi | 6
-rw-r--r--  gcc/ada/gnat_ugn.texi | 2
-rw-r--r--  gcc/ada/mingw32.h | 2
-rw-r--r--  gcc/ada/trans.c | 2
-rw-r--r--  gcc/builtins.c | 4
-rw-r--r--  gcc/c-common.c | 2
-rw-r--r--  gcc/c-decl.c | 4
-rw-r--r--  gcc/cfgloop.h | 2
-rw-r--r--  gcc/cgraph.h | 2
-rw-r--r--  gcc/config/arm/arm.c | 2
-rw-r--r--  gcc/config/i386/driver-i386.c | 203
-rw-r--r--  gcc/config/i386/i386.c | 34
-rw-r--r--  gcc/config/i386/i386.h | 5
-rw-r--r--  gcc/config/i386/x-i386 | 2
-rw-r--r--  gcc/config/mips/mips.h | 2
-rw-r--r--  gcc/config/rs6000/cell.md | 12
-rw-r--r--  gcc/config/rs6000/rs6000.c | 2
-rw-r--r--  gcc/config/sh/sh.c | 8
-rw-r--r--  gcc/config/sh/sh4-300.md | 2
-rw-r--r--  gcc/config/spu/predicates.md | 4
-rw-r--r--  gcc/config/spu/spu-builtins.def | 12
-rw-r--r--  gcc/config/spu/spu-c.c | 2
-rw-r--r--  gcc/config/spu/spu-modes.def | 4
-rw-r--r--  gcc/config/spu/spu.c | 95
-rw-r--r--  gcc/config/spu/spu.md | 267
-rw-r--r--  gcc/config/spu/spu_internals.h | 2442
-rw-r--r--  gcc/config/spu/vmx2spu.h | 10
-rw-r--r--  gcc/cp/ChangeLog | 30
-rw-r--r--  gcc/cp/decl.c | 70
-rw-r--r--  gcc/cp/method.c | 12
-rw-r--r--  gcc/cp/name-lookup.c | 2
-rw-r--r--  gcc/cp/parser.c | 10
-rw-r--r--  gcc/cp/typeck.c | 14
-rw-r--r--  gcc/doc/extend.texi | 2
-rw-r--r--  gcc/doc/install.texi | 14
-rw-r--r--  gcc/doc/invoke.texi | 4
-rw-r--r--  gcc/doc/md.texi | 2
-rw-r--r--  gcc/fold-const.c | 37
-rw-r--r--  gcc/fortran/ChangeLog | 17
-rw-r--r--  gcc/fortran/gfortran.h | 5
-rw-r--r--  gcc/fortran/invoke.texi | 18
-rw-r--r--  gcc/fortran/lang.opt | 4
-rw-r--r--  gcc/fortran/options.c | 9
-rw-r--r--  gcc/fortran/trans-decl.c | 17
-rw-r--r--  gcc/fwprop.c | 2
-rw-r--r--  gcc/predict.c | 2
-rw-r--r--  gcc/testsuite/ChangeLog | 31
-rw-r--r--  gcc/testsuite/g++.dg/cpp0x/static_assert4.C | 15
-rw-r--r--  gcc/testsuite/g++.dg/ext/vector5.C | 8
-rw-r--r--  gcc/testsuite/g++.dg/other/main1.C | 4
-rw-r--r--  gcc/testsuite/gcc.dg/pr27953.c | 4
-rw-r--r--  gcc/testsuite/gcc.dg/torture/builtin-sin-mpfr-1.c | 3
-rw-r--r--  gcc/testsuite/gfortran.dg/convert_implied_open.f90 | 8
-rw-r--r--  gcc/testsuite/gfortran.dg/unf_short_record_1.f90 | 2
-rw-r--r--  gcc/testsuite/gfortran.dg/unformatted_subrecord_1.f90 | 45
-rw-r--r--  gcc/tree-data-ref.h | 2
-rw-r--r--  gcc/tree-flow.h | 4
-rw-r--r--  gcc/tree-ssa-loop-manip.c | 2
-rw-r--r--  gcc/tree-ssa-loop-niter.c | 4
-rw-r--r--  gcc/tree-ssa-pre.c | 2
-rw-r--r--  gcc/tree-vect-analyze.c | 5
-rw-r--r--  gcc/tree-vect-transform.c | 20
-rw-r--r--  gcc/tree-vectorizer.c | 6
-rw-r--r--  gcc/tree-vrp.c | 2
-rw-r--r--  libgfortran/ChangeLog | 43
-rw-r--r--  libgfortran/io/file_pos.c | 108
-rw-r--r--  libgfortran/io/io.h | 19
-rw-r--r--  libgfortran/io/open.c | 38
-rw-r--r--  libgfortran/io/transfer.c | 552
-rw-r--r--  libgfortran/libgfortran.h | 2
-rw-r--r--  libgfortran/runtime/compile_options.c | 20
-rw-r--r--  libgfortran/runtime/error.c | 2
-rw-r--r--  libgomp/ChangeLog | 4
-rw-r--r--  libgomp/configure.tgt | 7
-rw-r--r--  libstdc++-v3/ChangeLog | 12
-rw-r--r--  libstdc++-v3/acinclude.m4 | 2
-rwxr-xr-x  libstdc++-v3/configure | 2
-rw-r--r--  libstdc++-v3/configure.host | 5
-rw-r--r--  libstdc++-v3/include/ext/mt_allocator.h | 2
86 files changed, 1483 insertions, 3022 deletions
diff --git a/ChangeLog b/ChangeLog
index 4034b820f57..ff02b607990 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+2006-12-02 Kaveh R. Ghazi <ghazi@caip.rutgers.edu>
+
+ * configure.in: Update MPFR version in error message.
+
+ * configure: Regenerate.
+
2006-11-26 Kaveh R. Ghazi <ghazi@caip.rutgers.edu>
* configure.in (--with-mpfr-dir, --with-gmp-dir): Remove flags.
diff --git a/configure b/configure
index 6519af09c69..d76717ac833 100755
--- a/configure
+++ b/configure
@@ -2392,7 +2392,7 @@ fi
CFLAGS="$saved_CFLAGS"
if test -d ${srcdir}/gcc && test x$have_gmp != xyes; then
- { echo "configure: error: Building GCC requires GMP 4.1+ and MPFR 2.2+.
+ { echo "configure: error: Building GCC requires GMP 4.1+ and MPFR 2.2.1+.
Try the --with-gmp and/or --with-mpfr options to specify their locations.
Copies of these libraries' source code can be found at their respective
hosting sites as well as at ftp://gcc.gnu.org/pub/gcc/infrastructure/.
diff --git a/configure.in b/configure.in
index f9d6bca3bc3..094e910131e 100644
--- a/configure.in
+++ b/configure.in
@@ -1130,7 +1130,7 @@ fi
CFLAGS="$saved_CFLAGS"
if test -d ${srcdir}/gcc && test x$have_gmp != xyes; then
- AC_MSG_ERROR([Building GCC requires GMP 4.1+ and MPFR 2.2+.
+ AC_MSG_ERROR([Building GCC requires GMP 4.1+ and MPFR 2.2.1+.
Try the --with-gmp and/or --with-mpfr options to specify their locations.
Copies of these libraries' source code can be found at their respective
hosting sites as well as at ftp://gcc.gnu.org/pub/gcc/infrastructure/.
diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 34ac82f96a6..eebf3f81cf0 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,98 @@
+2006-12-02 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR target/30040
+ * config/i386/driver-i386.c: Include "coretypes.h" and "tm.h".
+ (bit_SSSE3): New.
+ (host_detect_local_cpu): Check -mtune= vs. -march=. Rewrite
+ processor detection.
+
+ * config/i386/i386.h (CC1_CPU_SPEC): Add -mtune=native for
+ -march=native if there is no -mtune=*.
+
+ * config/i386/x-i386 (driver-i386.o): Also depend on $(TM_H)
+ coretypes.h.
+
+2006-12-02 Kaveh R. Ghazi <ghazi@caip.rutgers.edu>
+
+ * doc/install.texi: Update recommended MPFR version. Remove
+ obsolete reference to cumulative patch.
+
+2006-12-02 Lee Millward <lee.millward@codesourcery.com>
+
+ PR c/27953
+ * c-decl.c (store_parm_decls_oldstyle): Robustify.
+
+2006-11-30 Jan Hubicka <jh@suse.cz>
+ Uros Bizjak <ubizjak@gmail.com>
+
+ * config/i386/i386.c (pentium4_cost, nocona_cost): Update
+ 32bit memcpy/memset decriptors.
+ (decide_alg): With -minline-all-stringops and sizes that are best
+ to be copied via libcall still work hard enough to pick non-libcall
+ strategy.
+
+2006-12-02 Kazu Hirata <kazu@codesourcery.com>
+
+ * doc/extend.texi, doc/invoke.texi, doc/md.texi: Fix typos.
+
+ * builtins.c, cfgloop.h, cgraph.h, config/arm/arm.c,
+ config/i386/i386.c, config/i386/i386.h, config/mips/mips.h,
+ config/rs6000/cell.md, config/rs6000/rs6000.c, config/sh/sh.c,
+ config/sh/sh4-300.md, config/spu/spu-builtins.def,
+ config/spu/spu-c.c, config/spu/spu-modes.def,
+ config/spu/spu.c, config/spu/spu.md,
+ config/spu/spu_internals.h, config/spu/vmx2spu.h,
+ fold-const.c, fwprop.c, predict.c, tree-data-ref.h,
+ tree-flow.h, tree-ssa-loop-manip.c, tree-ssa-loop-niter.c,
+ tree-ssa-pre.c, tree-vect-analyze.c, tree-vect-transform.c,
+ tree-vectorizer.c, tree-vrp.c: Fix comment typos. Follow
+ spelling conventions.
+
+ * config/i386/i386.c: Fix a comment typo.
+
+2006-12-01 Trevor Smigiel <trevor_smigiel@playstation.sony.com>
+
+ * config/spu/spu.c (spu_immediate): Remove trailing comma.
+ (reloc_diagnostic): Call warning when -mwarn-reloc is specified.
+ * config/spu/spu.md: (zero_extendhisi2): Expand instead of split for
+ better optimization.
+ (floatv4siv4sf2): New.
+ (fix_truncv4sfv4si2): New.
+ (floatunsv4siv4sf2): New.
+ (fixuns_truncv4sfv4si2): New.
+ (addv16qi3): New.
+ (subv16qi3): New.
+ (negv16qi2): New.
+ (mulv8hi3): New.
+ (mulsi3): Remove.
+ (mul<mode>3): New.
+ (_mulv4si3): New.
+ (cmp<mode>): Don't accept constant arguments for DI, TI and SF.
+ * config/spu/spu_internals.h: Handle overloaded intrinsics in C++ with
+ spu_resolve_overloaded_builtin instead of static inline functions.
+
+2006-12-01 Eric Botcazou <ebotcazou@adacore.com>
+
+ * fold-const.c (fold_binary) <LT_EXPR>: Use the precision of the
+ type instead of the size of its mode to compute the highest and
+ lowest possible values. Still check the size of the mode before
+ flipping the signedness of the comparison.
+
+2006-12-01 Trevor Smigiel <trevor_smigiel@playstation.sony.com>
+
+ * config/spu/predicates.md (spu_mov_operand): Add.
+ * config/spu/spu.c (spu_expand_extv): Remove unused code.
+ (print_operand_address, print_operand): Handle addresses containing AND.
+ (spu_split_load, spu_split_store): Use updated movti pattern.
+ * config/spu/spu.md: (_mov<mode>, _movdi, _movti): Handle loads and
+ stores in mov patterns for correct operation of reload.
+ (lq, lq_<mode>, stq, stq_<mode>): Remove.
+
+2006-12-01 Volker Reichelt <reichelt@igpm.rwth-aachen.de>
+
+ PR c++/30021
+ * c-common.c (check_main_parameter_types): Check for error_mark_node.
+
2006-12-01 Andrew MacLeod <amacleod@redhat.com>
* common.opt (ftree-combine-temps): Remove.
diff --git a/gcc/DATESTAMP b/gcc/DATESTAMP
index 4eae5187573..29725b28d81 100644
--- a/gcc/DATESTAMP
+++ b/gcc/DATESTAMP
@@ -1 +1 @@
-20061201
+20061203
diff --git a/gcc/ada/ChangeLog b/gcc/ada/ChangeLog
index acd9d6ecf09..08cb9de96ac 100644
--- a/gcc/ada/ChangeLog
+++ b/gcc/ada/ChangeLog
@@ -1,3 +1,9 @@
+2006-12-02 Kazu Hirata <kazu@codesourcery.com>
+
+ * Makefile.in, mingw32.h, trans.c: Fix comment typos.
+ * gnat_rm.texi, gnat_ugn.texi: Follow spelling conventions.
+ Fix typos.
+
2006-11-17 Eric Botcazou <ebotcazou@adacore.com>
PR ada/27936
diff --git a/gcc/ada/Makefile.in b/gcc/ada/Makefile.in
index a8c5fe043af..760d42582ec 100644
--- a/gcc/ada/Makefile.in
+++ b/gcc/ada/Makefile.in
@@ -1428,7 +1428,7 @@ ifneq ($(EH_MECHANISM),)
endif
# Use the Ada 2005 version of Ada.Exceptions by default, unless specified
-# explicitely already. The base files (a-except.ad?) are used only for building
+# explicitly already. The base files (a-except.ad?) are used only for building
# the compiler and other basic tools.
# These base versions lack Ada 2005 additions which would cause bootstrap
# problems if included in the compiler and other basic tools.
diff --git a/gcc/ada/gnat_rm.texi b/gcc/ada/gnat_rm.texi
index bf2d61ddc08..fad7e14652e 100644
--- a/gcc/ada/gnat_rm.texi
+++ b/gcc/ada/gnat_rm.texi
@@ -5136,7 +5136,7 @@ prefix) provides the same value as @code{System.Storage_Unit}.
@findex Stub_Type
@noindent
The GNAT implementation of remote access-to-classwide types is
-organised as described in AARM section E.4 (20.t): a value of an RACW type
+organized as described in AARM section E.4 (20.t): a value of an RACW type
(designating a remote object) is represented as a normal access
value, pointing to a "stub" object which in turn contains the
necessary information to contact the designated remote object. A
@@ -10522,7 +10522,7 @@ when one of these values is read, any nonzero value is treated as True.
For 64-bit OpenVMS systems, access types (other than those for unconstrained
arrays) are 64-bits long. An exception to this rule is for the case of
C-convention access types where there is no explicit size clause present (or
-inheritied for derived types). In this case, GNAT chooses to make these
+inherited for derived types). In this case, GNAT chooses to make these
pointers 32-bits, which provides an easier path for migration of 32-bit legacy
code. size clause specifying 64-bits must be used to obtain a 64-bit pointer.
@@ -11754,7 +11754,7 @@ is only used for wide characters with a code greater than @code{16#FF#}.
Note that brackets coding is not normally used in the context of
Wide_Text_IO or Wide_Wide_Text_IO, since it is really just designed as
-a portable way of encoding source files. In the contect of Wide_Text_IO
+a portable way of encoding source files. In the context of Wide_Text_IO
or Wide_Wide_Text_IO, it can only be used if the file does not contain
any instance of the left bracket character other than to encode wide
character values using the brackets encoding method. In practice it is
diff --git a/gcc/ada/gnat_ugn.texi b/gcc/ada/gnat_ugn.texi
index 480ad9c540e..214369a37bd 100644
--- a/gcc/ada/gnat_ugn.texi
+++ b/gcc/ada/gnat_ugn.texi
@@ -4674,7 +4674,7 @@ A range in a @code{for} loop that is known to be null or might be null
@noindent
The following section lists compiler switches that are available
to control the handling of warning messages. It is also possible
-to excercise much finer control over what warnings are issued and
+to exercise much finer control over what warnings are issued and
suppressed using the GNAT pragma Warnings, which is documented
in the GNAT Reference manual.
diff --git a/gcc/ada/mingw32.h b/gcc/ada/mingw32.h
index 1f5a7115a44..7b6353178e3 100644
--- a/gcc/ada/mingw32.h
+++ b/gcc/ada/mingw32.h
@@ -48,7 +48,7 @@
#else
-/* Older MingW versions have no defintion for _tfreopen, add it here to have a
+/* Older MingW versions have no definition for _tfreopen, add it here to have a
proper build without unicode support. */
#ifndef _tfreopen
#define _tfreopen freopen
diff --git a/gcc/ada/trans.c b/gcc/ada/trans.c
index 8adff5e0a41..7b9c260ff6b 100644
--- a/gcc/ada/trans.c
+++ b/gcc/ada/trans.c
@@ -1199,7 +1199,7 @@ Case_Statement_to_gnu (Node_Id gnat_node)
/* If the case value is a subtype that raises Constraint_Error at
run-time because of a wrong bound, then gnu_low or gnu_high
- is not transtaleted into an INTEGER_CST. In such a case, we need
+ is not translated into an INTEGER_CST. In such a case, we need
to ensure that the when statement is not added in the tree,
otherwise it will crash the gimplifier. */
if ((!gnu_low || TREE_CODE (gnu_low) == INTEGER_CST)
diff --git a/gcc/builtins.c b/gcc/builtins.c
index 806d55600a0..80c5e1f6d08 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -554,7 +554,7 @@ expand_builtin_return_addr (enum built_in_function fndecl_code, int count)
override us. Therefore frame pointer elimination is OK, and using
the soft frame pointer is OK.
- For a non-zero count, or a zero count with __builtin_frame_address,
+ For a nonzero count, or a zero count with __builtin_frame_address,
we require a stable offset from the current frame pointer to the
previous one, so we must use the hard frame pointer, and
we must disable frame pointer elimination. */
@@ -11495,7 +11495,7 @@ init_target_chars (void)
/* Helper function for do_mpfr_arg*(). Ensure M is a normal number
and no overflow/underflow occurred. INEXACT is true if M was not
- exacly calculated. TYPE is the tree type for the result. This
+ exactly calculated. TYPE is the tree type for the result. This
function assumes that you cleared the MPFR flags and then
calculated M to see if anything subsequently set a flag prior to
entering this function. Return NULL_TREE if any checks fail. */
diff --git a/gcc/c-common.c b/gcc/c-common.c
index d7e98deeb2f..d2c39bd1077 100644
--- a/gcc/c-common.c
+++ b/gcc/c-common.c
@@ -1034,7 +1034,7 @@ check_main_parameter_types (tree decl)
{
tree type = args ? TREE_VALUE (args) : 0;
- if (type == void_type_node)
+ if (type == void_type_node || type == error_mark_node )
break;
++argct;
diff --git a/gcc/c-decl.c b/gcc/c-decl.c
index fa1c3406d95..bd737d16e7c 100644
--- a/gcc/c-decl.c
+++ b/gcc/c-decl.c
@@ -6482,8 +6482,8 @@ store_parm_decls_oldstyle (tree fndecl, const struct c_arg_info *arg_info)
tree type;
for (parm = DECL_ARGUMENTS (fndecl),
type = current_function_prototype_arg_types;
- parm || (type && (TYPE_MAIN_VARIANT (TREE_VALUE (type))
- != void_type_node));
+ parm || (type && TREE_VALUE (type) != error_mark_node
+ && (TYPE_MAIN_VARIANT (TREE_VALUE (type)) != void_type_node));
parm = TREE_CHAIN (parm), type = TREE_CHAIN (type))
{
if (parm == 0 || type == 0
diff --git a/gcc/cfgloop.h b/gcc/cfgloop.h
index 9c755c8e262..fa0a456fc56 100644
--- a/gcc/cfgloop.h
+++ b/gcc/cfgloop.h
@@ -143,7 +143,7 @@ struct loop
struct nb_iter_bound *bounds;
/* If not NULL, loop has just single exit edge stored here (edges to the
- EXIT_BLOCK_PTR do not count. Do not use direcly, this field should
+ EXIT_BLOCK_PTR do not count. Do not use directly; this field should
only be accessed via single_exit/set_single_exit functions. */
edge single_exit_;
diff --git a/gcc/cgraph.h b/gcc/cgraph.h
index b7aa81bcd16..be800620e82 100644
--- a/gcc/cgraph.h
+++ b/gcc/cgraph.h
@@ -51,7 +51,7 @@ enum availability
struct cgraph_local_info GTY(())
{
- /* Estiimated stack frame consumption by the function. */
+ /* Estimated stack frame consumption by the function. */
HOST_WIDE_INT estimated_self_stack_size;
/* Size of the function before inlining. */
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index 4342c4682f2..41c44e274dd 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -394,7 +394,7 @@ rtx arm_compare_op0, arm_compare_op1;
/* The processor for which instructions should be scheduled. */
enum processor_type arm_tune = arm_none;
-/* The default processor used if not overriden by commandline. */
+/* The default processor used if not overridden by commandline. */
static enum processor_type arm_default_cpu = arm_none;
/* Which floating point model to use. */
diff --git a/gcc/config/i386/driver-i386.c b/gcc/config/i386/driver-i386.c
index 6767997a091..d623c2c7b10 100644
--- a/gcc/config/i386/driver-i386.c
+++ b/gcc/config/i386/driver-i386.c
@@ -20,6 +20,8 @@ Boston, MA 02110-1301, USA. */
#include "config.h"
#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
#include <stdlib.h>
const char *host_detect_local_cpu (int argc, const char **argv);
@@ -37,6 +39,7 @@ const char *host_detect_local_cpu (int argc, const char **argv);
#define bit_SSE2 (1 << 26)
#define bit_SSE3 (1 << 0)
+#define bit_SSSE3 (1 << 9)
#define bit_CMPXCHG16B (1 << 13)
#define bit_3DNOW (1 << 31)
@@ -57,19 +60,24 @@ const char *host_detect_local_cpu (int argc, const char **argv);
in the spec. */
const char *host_detect_local_cpu (int argc, const char **argv)
{
- const char *cpu = "i386";
+ const char *cpu = NULL;
+ enum processor_type processor = PROCESSOR_I386;
unsigned int eax, ebx, ecx, edx;
unsigned int max_level;
unsigned int vendor;
unsigned int ext_level;
unsigned char has_mmx = 0, has_3dnow = 0, has_3dnowp = 0, has_sse = 0;
- unsigned char has_sse2 = 0, has_sse3 = 0, has_cmov = 0;
- unsigned char has_longmode = 0;
+ unsigned char has_sse2 = 0, has_sse3 = 0, has_ssse3 = 0, has_cmov = 0;
+ unsigned char has_longmode = 0, has_cmpxchg8b = 0;
unsigned char is_amd = 0;
unsigned int family = 0;
- if (argc < 1
- || (strcmp (argv[0], "arch")
- && strcmp (argv[0], "tune")))
+ bool arch;
+
+ if (argc < 1)
+ return NULL;
+
+ arch = strcmp (argv[0], "arch") == 0;
+ if (!arch && strcmp (argv[0], "tune"))
return NULL;
#ifndef __x86_64__
@@ -83,7 +91,7 @@ const char *host_detect_local_cpu (int argc, const char **argv)
goto done;
#endif
- cpu = "i586";
+ processor = PROCESSOR_PENTIUM;
/* Check the highest input value for eax. */
cpuid (0, eax, ebx, ecx, edx);
@@ -94,11 +102,13 @@ const char *host_detect_local_cpu (int argc, const char **argv)
goto done;
cpuid (1, eax, ebx, ecx, edx);
+ has_cmpxchg8b = !!(edx & bit_CMPXCHG8B);
has_cmov = !!(edx & bit_CMOV);
has_mmx = !!(edx & bit_MMX);
has_sse = !!(edx & bit_SSE);
has_sse2 = !!(edx & bit_SSE2);
has_sse3 = !!(ecx & bit_SSE3);
+ has_ssse3 = !!(ecx & bit_SSSE3);
/* We don't care for extended family. */
family = (eax >> 8) & ~(1 << 4);
@@ -117,44 +127,152 @@ const char *host_detect_local_cpu (int argc, const char **argv)
if (is_amd)
{
if (has_mmx)
- cpu = "k6";
- if (has_3dnow)
- cpu = "k6-3";
+ processor = PROCESSOR_K6;
if (has_3dnowp)
- cpu = "athlon";
- if (has_sse)
- cpu = "athlon-4";
+ processor = PROCESSOR_ATHLON;
if (has_sse2 || has_longmode)
- cpu = "k8";
+ processor = PROCESSOR_K8;
}
else
{
- if (family == 5)
- {
- if (has_mmx)
- cpu = "pentium-mmx";
+ switch (family)
+ {
+ case 5:
+ /* Default is PROCESSOR_PENTIUM. */
+ break;
+ case 6:
+ processor = PROCESSOR_PENTIUMPRO;
+ break;
+ case 15:
+ processor = PROCESSOR_PENTIUM4;
+ break;
+ default:
+ /* We have no idea. Use something reasonable. */
+ if (arch)
+ {
+ if (has_ssse3)
+ cpu = "core2";
+ else if (has_sse3)
+ {
+ if (has_longmode)
+ cpu = "nocona";
+ else
+ cpu = "prescott";
+ }
+ else if (has_sse2)
+ cpu = "pentium4";
+ else if (has_cmov)
+ cpu = "pentiumpro";
+ else if (has_mmx)
+ cpu = "pentium-mmx";
+ else if (has_cmpxchg8b)
+ cpu = "pentium";
+ else
+ cpu = "i386";
+ }
+ else
+ cpu = "generic";
+ goto done;
+ break;
+ }
+ }
+
+ switch (processor)
+ {
+ case PROCESSOR_I386:
+ cpu = "i386";
+ break;
+ case PROCESSOR_I486:
+ cpu = "i486";
+ break;
+ case PROCESSOR_PENTIUM:
+ if (has_mmx)
+ cpu = "pentium-mmx";
+ else
+ cpu = "pentium";
+ break;
+ case PROCESSOR_PENTIUMPRO:
+ if (has_longmode)
+ {
+ /* It is Core 2 Duo. */
+ cpu = "core2";
}
- else if (has_mmx)
- cpu = "pentium2";
- if (has_sse)
- cpu = "pentium3";
- if (has_sse2)
+ else
{
- if (family == 6)
- /* It's a pentiumpro with sse2 --> pentium-m */
- cpu = "pentium-m";
+ if (arch)
+ {
+ if (has_sse3)
+ {
+ /* It is Core Duo. */
+ cpu = "prescott";
+ }
+ else if (has_sse2)
+ {
+ /* It is Pentium M. */
+ cpu = "pentium4";
+ }
+ else if (has_sse)
+ {
+ /* It is Pentium III. */
+ cpu = "pentium3";
+ }
+ else if (has_mmx)
+ {
+ /* It is Pentium II. */
+ cpu = "pentium2";
+ }
+ else
+ {
+ /* Default to Pentium Pro. */
+ cpu = "pentiumpro";
+ }
+ }
else
- /* Would have to look at extended family, but it's at least
- an pentium4 core. */
- cpu = "pentium4";
+ {
+ /* For -mtune, we default to -mtune=generic. */
+ cpu = "generic";
+ }
}
+ break;
+ case PROCESSOR_GEODE:
+ cpu = "geode";
+ break;
+ case PROCESSOR_K6:
+ if (has_3dnow)
+ cpu = "k6-3";
+ else
+ cpu = "k6";
+ break;
+ case PROCESSOR_ATHLON:
+ if (has_sse)
+ cpu = "athlon-4";
+ else
+ cpu = "athlon";
+ break;
+ case PROCESSOR_PENTIUM4:
if (has_sse3)
- {
+ {
if (has_longmode)
cpu = "nocona";
- else
- cpu = "prescott";
+ else
+ cpu = "prescott";
}
+ else
+ cpu = "pentium4";
+ break;
+ case PROCESSOR_K8:
+ cpu = "k8";
+ break;
+ case PROCESSOR_NOCONA:
+ cpu = "nocona";
+ break;
+ case PROCESSOR_GENERIC32:
+ case PROCESSOR_GENERIC64:
+ cpu = "generic";
+ break;
+ default:
+ abort ();
+ break;
}
done:
@@ -165,6 +283,25 @@ done:
default value. */
const char *host_detect_local_cpu (int argc, const char **argv)
{
- return concat ("-m", argv[0], "=i386", NULL);
+ const char *cpu;
+ bool arch;
+
+ if (argc < 1)
+ return NULL;
+
+ arch = strcmp (argv[0], "arch") == 0;
+ if (!arch && strcmp (argv[0], "tune"))
+ return NULL;
+
+ if (arch)
+ {
+ /* FIXME: i386 is wrong for 64bit compiler. How can we tell if
+ we are generating 64bit or 32bit code? */
+ cpu = "i386";
+ }
+ else
+ cpu = "generic";
+
+ return concat ("-m", argv[0], "=", cpu, NULL);
}
#endif /* GCC_VERSION */
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index fa1a1e64e00..22ed4a9c32c 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -530,7 +530,7 @@ struct processor_costs athlon_cost = {
COSTS_N_INSNS (2), /* cost of FCHS instruction. */
COSTS_N_INSNS (35), /* cost of FSQRT instruction. */
/* For some reason, Athlon deals better with REP prefix (relative to loops)
- comopared to K8. Alignment becomes important after 8 bytes for mempcy and
+ compared to K8. Alignment becomes important after 8 bytes for memcpy and
128 bytes for memset. */
{{libcall, {{2048, rep_prefix_4_byte}, {-1, libcall}}},
DUMMY_STRINGOP_ALGS},
@@ -655,10 +655,11 @@ struct processor_costs pentium4_cost = {
COSTS_N_INSNS (2), /* cost of FABS instruction. */
COSTS_N_INSNS (2), /* cost of FCHS instruction. */
COSTS_N_INSNS (43), /* cost of FSQRT instruction. */
- {{libcall, {{256, rep_prefix_4_byte}, {-1, libcall}}},
- {libcall, {{256, rep_prefix_4_byte}, {-1, libcall}}}},
- {{libcall, {{256, rep_prefix_4_byte}, {-1, libcall}}},
- {libcall, {{256, rep_prefix_4_byte}, {-1, libcall}}}}
+ {{libcall, {{12, loop_1_byte}, {64, loop}, {-1, rep_prefix_4_byte}}},
+ DUMMY_STRINGOP_ALGS},
+ {{libcall, {{6, loop_1_byte}, {64, loop}, {20480, rep_prefix_4_byte},
+ {-1, libcall}}},
+ DUMMY_STRINGOP_ALGS},
};
static const
@@ -712,10 +713,11 @@ struct processor_costs nocona_cost = {
COSTS_N_INSNS (3), /* cost of FABS instruction. */
COSTS_N_INSNS (3), /* cost of FCHS instruction. */
COSTS_N_INSNS (44), /* cost of FSQRT instruction. */
- {{libcall, {{256, rep_prefix_4_byte}, {-1, libcall}}},
+ {{libcall, {{12, loop_1_byte}, {64, loop}, {-1, rep_prefix_4_byte}}},
{libcall, {{32, loop}, {20000, rep_prefix_8_byte},
{100000, unrolled_loop}, {-1, libcall}}}},
- {{libcall, {{256, rep_prefix_4_byte}, {-1, libcall}}},
+ {{libcall, {{6, loop_1_byte}, {64, loop}, {20480, rep_prefix_4_byte},
+ {-1, libcall}}},
{libcall, {{24, loop}, {64, unrolled_loop},
{8192, rep_prefix_8_byte}, {-1, libcall}}}}
};
@@ -13171,7 +13173,7 @@ expand_movmem_epilogue (rtx destmem, rtx srcmem,
/* When there are stringops, we can cheaply increase dest and src pointers.
Otherwise we save code size by maintaining offset (zero is readily
- available from preceeding rep operation) and using x86 addressing modes.
+ available from preceding rep operation) and using x86 addressing modes.
*/
if (TARGET_SINGLE_STRINGOP)
{
@@ -13507,14 +13509,18 @@ decide_alg (HOST_WIDE_INT count, HOST_WIDE_INT expected_size, bool memset,
last non-libcall inline algorithm. */
if (TARGET_INLINE_ALL_STRINGOPS)
{
- gcc_assert (alg != libcall);
- return alg;
+ /* When the current size is best to be copied by a libcall,
+ but we are still forced to inline, run the heuristic below
+ that will pick code for medium-sized blocks. */
+ if (alg != libcall)
+ return alg;
+ break;
}
else
return algs->size[i].alg;
}
}
- gcc_unreachable ();
+ gcc_assert (TARGET_INLINE_ALL_STRINGOPS);
}
/* When asked to inline the call anyway, try to pick meaningful choice.
We look for maximal size of block that is faster to copy by hand and
@@ -13621,7 +13627,7 @@ ix86_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp,
if (GET_CODE (align_exp) == CONST_INT)
align = INTVAL (align_exp);
- /* i386 can do missaligned access on resonably increased cost. */
+ /* i386 can do misaligned access on reasonably increased cost. */
if (GET_CODE (expected_align_exp) == CONST_INT
&& INTVAL (expected_align_exp) > align)
align = INTVAL (expected_align_exp);
@@ -13783,7 +13789,7 @@ ix86_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp,
dst = change_address (dst, BLKmode, destreg);
}
- /* Epologue to copy the remaining bytes. */
+ /* Epilogue to copy the remaining bytes. */
if (label)
{
if (size_needed < desired_align - align)
@@ -13909,7 +13915,7 @@ ix86_expand_setmem (rtx dst, rtx count_exp, rtx val_exp, rtx align_exp,
if (GET_CODE (align_exp) == CONST_INT)
align = INTVAL (align_exp);
- /* i386 can do missaligned access on resonably increased cost. */
+ /* i386 can do misaligned access on reasonably increased cost. */
if (GET_CODE (expected_align_exp) == CONST_INT
&& INTVAL (expected_align_exp) > align)
align = INTVAL (expected_align_exp);
diff --git a/gcc/config/i386/i386.h b/gcc/config/i386/i386.h
index e30d6b7bdef..78b963b7ab2 100644
--- a/gcc/config/i386/i386.h
+++ b/gcc/config/i386/i386.h
@@ -356,7 +356,8 @@ extern const char *host_detect_local_cpu (int argc, const char **argv);
#define CC1_CPU_SPEC CC1_CPU_SPEC_1
#else
#define CC1_CPU_SPEC CC1_CPU_SPEC_1 \
-"%{march=native:%<march=native %:local_cpu_detect(arch)} \
+"%{march=native:%<march=native %:local_cpu_detect(arch) \
+ %{!mtune=*:%<mtune=native %:local_cpu_detect(tune)}} \
%{mtune=native:%<mtune=native %:local_cpu_detect(tune)}"
#endif
#endif
@@ -1494,7 +1495,7 @@ typedef struct ix86_args {
int warn_mmx; /* True when we want to warn about MMX ABI. */
int maybe_vaarg; /* true for calls to possibly vardic fncts. */
int float_in_x87; /* 1 if floating point arguments should
- be passed in 80387 registere. */
+ be passed in 80387 registers. */
int float_in_sse; /* 1 if in 32-bit mode SFmode (2 for DFmode) should
be passed in SSE registers. Otherwise 0. */
} CUMULATIVE_ARGS;
diff --git a/gcc/config/i386/x-i386 b/gcc/config/i386/x-i386
index 2c35e5b5aed..e156bcde3c9 100644
--- a/gcc/config/i386/x-i386
+++ b/gcc/config/i386/x-i386
@@ -1,3 +1,3 @@
driver-i386.o : $(srcdir)/config/i386/driver-i386.c \
- $(CONFIG_H) $(SYSTEM_H)
+ $(CONFIG_H) $(SYSTEM_H) $(TM_H) coretypes.h
$(CC) -c $(ALL_CFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) $<
diff --git a/gcc/config/mips/mips.h b/gcc/config/mips/mips.h
index 00f7ca8f55f..4905966cbfe 100644
--- a/gcc/config/mips/mips.h
+++ b/gcc/config/mips/mips.h
@@ -576,7 +576,7 @@ extern const struct mips_rtx_cost_data *mips_cost;
been generated up to this point. */
#define ISA_HAS_BRANCHLIKELY (!ISA_MIPS1)
-/* ISA has a three-operand multiplcation instruction (usually spelt "mul"). */
+/* ISA has a three-operand multiplication instruction (usually spelt "mul"). */
#define ISA_HAS_MUL3 ((TARGET_MIPS3900 \
|| TARGET_MIPS5400 \
|| TARGET_MIPS5500 \
diff --git a/gcc/config/rs6000/cell.md b/gcc/config/rs6000/cell.md
index f12d2a66cc8..17a07b585ed 100644
--- a/gcc/config/rs6000/cell.md
+++ b/gcc/config/rs6000/cell.md
@@ -21,10 +21,10 @@
;; Sources: BE BOOK4 (/sfs/enc/doc/PPU_BookIV_DD3.0_latest.pdf)
-;; BE Architechture *DD3.0 and DD3.1*
+;; BE Architecture *DD3.0 and DD3.1*
;; This file simulate PPU processor unit backend of pipeline, maualP24.
;; manual P27, stall and flush points
-;; IU, XU, VSU, dipatcher decodes and dispatch 2 insns per cycle in program
+;; IU, XU, VSU, dispatcher decodes and dispatch 2 insns per cycle in program
;; order, the grouped adress are aligned by 8
;; This file only simulate one thread situation
;; XU executes all fixed point insns(3 units, a simple alu, a complex unit,
@@ -43,7 +43,7 @@
;;VMX(perm,vsu_ls, fp_ls) X
;; X are illegal combination.
-;; Dual issue exceptons:
+;; Dual issue exceptions:
;;(1) nop-pipelined FXU instr in slot 0
;;(2) non-pipelined FPU inst in slot 0
;; CSI instr(contex-synchronizing insn)
@@ -51,7 +51,7 @@
;; BRU unit: bru(none register stall), bru_cr(cr register stall)
;; VSU unit: vus(vmx simple), vup(vmx permute), vuc(vmx complex),
-;; vuf(vmx float), fpu(floats). fpu_div is hypthetical, it is for
+;; vuf(vmx float), fpu(floats). fpu_div is hypothetical, it is for
;; nonpipelined simulation
;; micr insns will stall at least 7 cycles to get the first instr from ROM,
;; micro instructions are not dual issued.
@@ -378,7 +378,7 @@
; this is not correct,
;; this is a stall in general and not dependent on result
(define_bypass 13 "cell-vecstore" "cell-fpstore")
-; this is not correct, this can never be true, not depent on result
+; this is not correct, this can never be true, not dependent on result
(define_bypass 7 "cell-fp" "cell-fpload")
;; vsu1 should avoid writing to the same target register as vsu2 insn
;; within 12 cycles.
@@ -396,6 +396,6 @@
;;Things are not simulated:
;; update instruction, update address gpr are not simulated
-;; vrefp, vrsqrtefp have latency(14), currently simluated as 12 cycle float
+;; vrefp, vrsqrtefp have latency(14), currently simulated as 12 cycle float
;; insns
diff --git a/gcc/config/rs6000/rs6000.c b/gcc/config/rs6000/rs6000.c
index 977a66d265d..37e30022b55 100644
--- a/gcc/config/rs6000/rs6000.c
+++ b/gcc/config/rs6000/rs6000.c
@@ -17557,7 +17557,7 @@ rs6000_sched_reorder2 (FILE *dump, int sched_verbose, rtx *ready,
cycle and we attempt to locate another load in the ready list to
issue with it.
- - If the pedulum is -2, then two stores have already been
+ - If the pendulum is -2, then two stores have already been
issued in this cycle, so we increase the priority of the first load
in the ready list to increase it's likelihood of being chosen first
in the next cycle.
diff --git a/gcc/config/sh/sh.c b/gcc/config/sh/sh.c
index aead612187d..11766cbd7ad 100644
--- a/gcc/config/sh/sh.c
+++ b/gcc/config/sh/sh.c
@@ -1416,7 +1416,7 @@ prepare_cbranch_operands (rtx *operands, enum machine_mode mode,
compare r0. Hence, if operands[1] has to be loaded from somewhere else
into a register, that register might as well be r0, and we allow the
constant. If it is already in a register, this is likely to be
- allocatated to a different hard register, thus we load the constant into
+ allocated to a different hard register, thus we load the constant into
a register unless it is zero. */
if (!REG_P (operands[2])
&& (GET_CODE (operands[2]) != CONST_INT
@@ -1468,7 +1468,7 @@ expand_cbranchsi4 (rtx *operands, enum rtx_code comparison, int probability)
operation should be EQ or NE.
- If items are searched in an ordered tree from the root, we can expect
the highpart to be unequal about half of the time; operation should be
- an unequality comparison, operands non-constant, and overall probability
+ an inequality comparison, operands non-constant, and overall probability
about 50%. Likewise for quicksort.
- Range checks will be often made against constants. Even if we assume for
simplicity an even distribution of the non-constant operand over a
@@ -2413,7 +2413,7 @@ sh_rtx_costs (rtx x, int code, int outer_code, int *total)
&& CONST_OK_FOR_K08 (INTVAL (x)))
*total = 1;
/* prepare_cmp_insn will force costly constants int registers before
- the cbrach[sd]i4 pattterns can see them, so preserve potentially
+ the cbrach[sd]i4 patterns can see them, so preserve potentially
interesting ones not covered by I08 above. */
else if (outer_code == COMPARE
&& ((unsigned HOST_WIDE_INT) INTVAL (x)
@@ -2440,7 +2440,7 @@ sh_rtx_costs (rtx x, int code, int outer_code, int *total)
if (TARGET_SHMEDIA)
*total = COSTS_N_INSNS (4);
/* prepare_cmp_insn will force costly constants int registers before
- the cbrachdi4 patttern can see them, so preserve potentially
+ the cbrachdi4 pattern can see them, so preserve potentially
interesting ones. */
else if (outer_code == COMPARE && GET_MODE (x) == DImode)
*total = 1;
diff --git a/gcc/config/sh/sh4-300.md b/gcc/config/sh/sh4-300.md
index 228782a67fc..ac107722df4 100644
--- a/gcc/config/sh/sh4-300.md
+++ b/gcc/config/sh/sh4-300.md
@@ -189,7 +189,7 @@
;; In most cases, the insn that loads the address of the call should have
;; a non-zero latency (mov rn,rm doesn't make sense since we could use rn
;; for the address then). Thus, a preceding insn that can be paired with
-;; a call should be elegible for the delay slot.
+;; a call should be eligible for the delay slot.
;;
;; calls introduce a longisch delay that is likely to flush the pipelines
;; of the caller's instructions. Ordinary functions tend to end with a
diff --git a/gcc/config/spu/predicates.md b/gcc/config/spu/predicates.md
index 0474d840c1c..9d343e6b48d 100644
--- a/gcc/config/spu/predicates.md
+++ b/gcc/config/spu/predicates.md
@@ -35,6 +35,10 @@
(and (match_operand 0 "memory_operand")
(match_test "reload_in_progress || reload_completed || aligned_mem_p (op)")))
+(define_predicate "spu_mov_operand"
+ (ior (match_operand 0 "spu_mem_operand")
+ (match_operand 0 "spu_nonmem_operand")))
+
(define_predicate "call_operand"
(and (match_code "mem")
(match_test "(!TARGET_LARGE_MEM && satisfies_constraint_S (op))
diff --git a/gcc/config/spu/spu-builtins.def b/gcc/config/spu/spu-builtins.def
index af8a8885a83..5fdd0cb0b1a 100644
--- a/gcc/config/spu/spu-builtins.def
+++ b/gcc/config/spu/spu-builtins.def
@@ -1,4 +1,4 @@
-/* Definitions of builtin fuctions for the Synergistic Processing Unit (SPU). */
+/* Definitions of builtin functions for the Synergistic Processing Unit (SPU). */
/* Copyright (C) 2006 Free Software Foundation, Inc.
This file is free software; you can redistribute it and/or modify it under
@@ -24,8 +24,8 @@
#define _A3(a,b,c) {a, b, c, SPU_BTI_END_OF_PARAMS}
#define _A4(a,b,c,d) {a, b, c, d, SPU_BTI_END_OF_PARAMS}
-/* definitions to support si intrinisic functions: (These and other builtin
- * definitions must preceed definitions of the overloaded generic intrinsics */
+/* definitions to support si intrinsic functions: (These and other builtin
+ * definitions must precede definitions of the overloaded generic intrinsics */
DEF_BUILTIN (SI_LQD, CODE_FOR_spu_lqd, "si_lqd", B_INSN, _A3(SPU_BTI_QUADWORD, SPU_BTI_QUADWORD, SPU_BTI_S10_4))
DEF_BUILTIN (SI_LQX, CODE_FOR_spu_lqx, "si_lqx", B_INSN, _A3(SPU_BTI_QUADWORD, SPU_BTI_QUADWORD, SPU_BTI_QUADWORD))
@@ -701,10 +701,10 @@ DEF_BUILTIN (SPU_PROMOTE_7, CODE_FOR_spu_promote, "spu_promote_7",
DEF_BUILTIN (SPU_PROMOTE_8, CODE_FOR_spu_promote, "spu_promote_8", B_INTERNAL, _A3(SPU_BTI_V4SF, SPU_BTI_FLOAT, SPU_BTI_INTSI))
DEF_BUILTIN (SPU_PROMOTE_9, CODE_FOR_spu_promote, "spu_promote_9", B_INTERNAL, _A3(SPU_BTI_V2DF, SPU_BTI_DOUBLE, SPU_BTI_INTSI))
-/* We need something that is not B_INTERNAL as a sentinal. */
+/* We need something that is not B_INTERNAL as a sentinel. */
-/* These are for the convenience of imlpemnting fma() in the standard
- libraries. */
+/* These are for the convenience of implementing fma() in the standard
+ libraries. */
DEF_BUILTIN (SCALAR_FMA, CODE_FOR_fma_sf, "fmas", B_INSN, _A4(SPU_BTI_FLOAT, SPU_BTI_FLOAT, SPU_BTI_FLOAT, SPU_BTI_FLOAT))
DEF_BUILTIN (SCALAR_DFMA, CODE_FOR_fma_df, "dfmas", B_INSN, _A4(SPU_BTI_DOUBLE, SPU_BTI_DOUBLE, SPU_BTI_DOUBLE, SPU_BTI_DOUBLE))
diff --git a/gcc/config/spu/spu-c.c b/gcc/config/spu/spu-c.c
index b975e83b3cb..c88e627a49f 100644
--- a/gcc/config/spu/spu-c.c
+++ b/gcc/config/spu/spu-c.c
@@ -72,7 +72,7 @@ spu_resolve_overloaded_builtin (tree fndecl, tree fnargs)
struct spu_builtin_description *desc;
tree match = NULL_TREE;
- /* The vector types are not available if the backend is not initalized */
+ /* The vector types are not available if the backend is not initialized. */
gcc_assert (!flag_preprocess_only);
desc = &spu_builtins[fcode];
diff --git a/gcc/config/spu/spu-modes.def b/gcc/config/spu/spu-modes.def
index a3397c5a18f..49d577b63ea 100644
--- a/gcc/config/spu/spu-modes.def
+++ b/gcc/config/spu/spu-modes.def
@@ -25,8 +25,8 @@ VECTOR_MODES (INT, 16); /* V16QI V8HI V4SI V2DI */
VECTOR_MODES (FLOAT, 8); /* V4HF V2SF */
VECTOR_MODES (FLOAT, 16); /* V8HF V4SF V2DF */
-/* A special mode for the intr regsister so we can treat it differently
- for conditional moves. */
+/* A special mode for the intr register so we can treat it differently
+ for conditional moves. */
RANDOM_MODE (INTR);
/* cse_insn needs an INT_MODE larger than WORD_MODE, otherwise some
diff --git a/gcc/config/spu/spu.c b/gcc/config/spu/spu.c
index a1158e763ba..055c414c705 100644
--- a/gcc/config/spu/spu.c
+++ b/gcc/config/spu/spu.c
@@ -141,7 +141,7 @@ enum spu_immediate {
SPU_ORI,
SPU_ORHI,
SPU_ORBI,
- SPU_IOHL,
+ SPU_IOHL
};
static enum spu_immediate which_immediate_load (HOST_WIDE_INT val);
@@ -322,7 +322,7 @@ valid_subreg (rtx op)
}
/* When insv and ext[sz]v ar passed a TI SUBREG, we want to strip it off
- and ajust the start offset. */
+ and adjust the start offset. */
static rtx
adjust_operand (rtx op, HOST_WIDE_INT * start)
{
@@ -366,52 +366,6 @@ spu_expand_extv (rtx ops[], int unsignedp)
dst_mode = GET_MODE (dst);
dst_size = GET_MODE_BITSIZE (GET_MODE (dst));
- if (GET_CODE (ops[1]) == MEM)
- {
- if (start + width > MEM_ALIGN (ops[1]))
- {
- rtx addr = gen_reg_rtx (SImode);
- rtx shl = gen_reg_rtx (SImode);
- rtx shr = gen_reg_rtx (SImode);
- rtx w0 = gen_reg_rtx (TImode);
- rtx w1 = gen_reg_rtx (TImode);
- rtx a0, a1;
- src = gen_reg_rtx (TImode);
- emit_move_insn (addr, copy_rtx (XEXP (ops[1], 0)));
- a0 = memory_address (TImode, addr);
- a1 = memory_address (TImode, plus_constant (addr, 16));
- emit_insn (gen_lq (w0, a0));
- emit_insn (gen_lq (w1, a1));
- emit_insn (gen_andsi3 (shl, addr, GEN_INT (15)));
- emit_insn (gen_iorsi3 (shr, addr, GEN_INT (16)));
- emit_insn (gen_shlqby_ti (w0, w0, shl));
- emit_insn (gen_rotqmby_ti (w1, w1, shr));
- emit_insn (gen_iorti3 (src, w0, w1));
- }
- else
- {
- rtx addr = gen_reg_rtx (SImode);
- rtx a0;
- emit_move_insn (addr, copy_rtx (XEXP (ops[1], 0)));
- a0 = memory_address (TImode, addr);
- src = gen_reg_rtx (TImode);
- emit_insn (gen_lq (src, a0));
- if (MEM_ALIGN (ops[1]) < 128)
- {
- rtx t = src;
- src = gen_reg_rtx (TImode);
- emit_insn (gen_rotqby_ti (src, t, addr));
- }
- }
- /* Shifts in SImode are faster, use them if we can. */
- if (start + width < 32)
- {
- rtx t = src;
- src = gen_reg_rtx (SImode);
- emit_insn (gen_spu_convert (src, t));
- }
- }
-
src = adjust_operand (src, &start);
src_mode = GET_MODE (src);
src_size = GET_MODE_BITSIZE (GET_MODE (src));
@@ -970,6 +924,11 @@ print_operand_address (FILE * file, register rtx addr)
rtx reg;
rtx offset;
+ if (GET_CODE (addr) == AND
+ && GET_CODE (XEXP (addr, 1)) == CONST_INT
+ && INTVAL (XEXP (addr, 1)) == -16)
+ addr = XEXP (addr, 0);
+
switch (GET_CODE (addr))
{
case REG:
@@ -1254,6 +1213,11 @@ print_operand (FILE * file, rtx x, int code)
x = XEXP (x, 0);
xcode = GET_CODE (x);
}
+ if (xcode == AND)
+ {
+ x = XEXP (x, 0);
+ xcode = GET_CODE (x);
+ }
if (xcode == REG)
fprintf (file, "d");
else if (xcode == CONST_INT)
@@ -1687,8 +1651,8 @@ int spu_hint_dist = (8 * 4);
/* An array of these is used to propagate hints to predecessor blocks. */
struct spu_bb_info
{
- rtx prop_jump; /* propogated from another block */
- basic_block bb; /* the orignal block. */
+ rtx prop_jump; /* propagated from another block */
+ basic_block bb; /* the original block. */
};
/* The special $hbr register is used to prevent the insn scheduler from
@@ -2491,7 +2455,7 @@ spu_legitimate_address (enum machine_mode mode ATTRIBUTE_UNUSED,
}
/* When the address is reg + const_int, force the const_int into a
- regiser. */
+ register. */
rtx
spu_legitimize_address (rtx x, rtx oldx ATTRIBUTE_UNUSED,
enum machine_mode mode)
@@ -2733,7 +2697,7 @@ spu_pass_by_reference (CUMULATIVE_ARGS * cum ATTRIBUTE_UNUSED,
} va_list[1];
- wheare __args points to the arg that will be returned by the next
+ where __args points to the arg that will be returned by the next
va_arg(), and __skip points to the previous stack frame such that
when __args == __skip we should advance __args by 32 bytes. */
static tree
@@ -2949,8 +2913,8 @@ spu_conditional_register_usage (void)
aligned. Taking into account that CSE might replace this reg with
another one that has not been marked aligned.
So this is really only true for frame, stack and virtual registers,
- which we know are always aligned and should not be adversly effected
- by CSE. */
+ which we know are always aligned and should not be adversely affected
+ by CSE. */
static int
regno_aligned_for_load (int regno)
{
@@ -3017,7 +2981,7 @@ store_with_one_insn_p (rtx mem)
if (GET_CODE (addr) == SYMBOL_REF)
{
/* We use the associated declaration to make sure the access is
- refering to the whole object.
+ referring to the whole object.
We check both MEM_EXPR and and SYMBOL_REF_DECL. I'm not sure
if it is necessary. Will there be cases where one exists, and
the other does not? Will there be cases where both exist, but
@@ -3300,7 +3264,7 @@ spu_split_load (rtx * ops)
addr = gen_rtx_AND (SImode, copy_rtx (addr), GEN_INT (-16));
mem = change_address (ops[1], TImode, addr);
- emit_insn (gen_lq_ti (load, mem));
+ emit_insn (gen_movti (load, mem));
if (rot)
emit_insn (gen_rotqby_ti (load, load, rot));
@@ -3385,6 +3349,8 @@ spu_split_store (rtx * ops)
}
}
+ addr = gen_rtx_AND (SImode, copy_rtx (addr), GEN_INT (-16));
+
scalar = store_with_one_insn_p (ops[0]);
if (!scalar)
{
@@ -3393,7 +3359,9 @@ spu_split_store (rtx * ops)
possible, and copying the flags will prevent that in certain
cases, e.g. consider the volatile flag. */
- emit_insn (gen_lq (reg, copy_rtx (addr)));
+ rtx lmem = change_address (ops[0], TImode, copy_rtx (addr));
+ set_mem_alias_set (lmem, 0);
+ emit_insn (gen_movti (reg, lmem));
if (!p0 || reg_align (p0) >= 128)
p0 = stack_pointer_rtx;
@@ -3428,13 +3396,12 @@ spu_split_store (rtx * ops)
emit_insn (gen_shlqby_ti
(reg, reg, GEN_INT (4 - GET_MODE_SIZE (mode))));
- addr = gen_rtx_AND (SImode, copy_rtx (addr), GEN_INT (-16));
smem = change_address (ops[0], TImode, addr);
/* We can't use the previous alias set because the memory has changed
size and can potentially overlap objects of other types. */
set_mem_alias_set (smem, 0);
- emit_insn (gen_stq_ti (smem, reg));
+ emit_insn (gen_movti (smem, reg));
}
/* Return TRUE if X is MEM which is a struct member reference
@@ -3459,8 +3426,8 @@ mem_is_padded_component_ref (rtx x)
if (GET_MODE (x) != TYPE_MODE (TREE_TYPE (t)))
return 0;
/* If there are no following fields then the field alignment assures
- the structure is padded to the alignement which means this field is
- padded too. */
+ the structure is padded to the alignment which means this field is
+ padded too. */
if (TREE_CHAIN (t) == 0)
return 1;
/* If the following field is also aligned then this field will be
@@ -3703,10 +3670,10 @@ reloc_diagnostic (rtx x)
else
msg = "creating run-time relocation";
- if (TARGET_ERROR_RELOC) /** default : error reloc **/
- error (msg, loc_decl, decl);
- else
+ if (TARGET_WARN_RELOC)
warning (0, msg, loc_decl, decl);
+ else
+ error (msg, loc_decl, decl);
}
/* Hook into assemble_integer so we can generate an error for run-time
diff --git a/gcc/config/spu/spu.md b/gcc/config/spu/spu.md
index b7415a17c31..417081b6711 100644
--- a/gcc/config/spu/spu.md
+++ b/gcc/config/spu/spu.md
@@ -262,14 +262,16 @@
;; move internal
(define_insn "_mov<mode>"
- [(set (match_operand:MOV 0 "spu_reg_operand" "=r,r,r")
- (match_operand:MOV 1 "spu_nonmem_operand" "r,A,f"))]
- ""
+ [(set (match_operand:MOV 0 "spu_nonimm_operand" "=r,r,r,r,m")
+ (match_operand:MOV 1 "spu_mov_operand" "r,A,f,m,r"))]
+ "spu_valid_move (operands)"
"@
ori\t%0,%1,0
il%s1\t%0,%S1
- fsmbi\t%0,%F1"
- [(set_attr "type" "fx2,fx2,shuf")])
+ fsmbi\t%0,%F1
+ lq%p1\t%0,%1
+ stq%p0\t%1,%0"
+ [(set_attr "type" "fx2,fx2,shuf,load,store")])
(define_insn "high"
[(set (match_operand:SI 0 "spu_reg_operand" "=r")
@@ -285,24 +287,28 @@
"iohl\t%0,%2@l")
(define_insn "_movdi"
- [(set (match_operand:DI 0 "spu_reg_operand" "=r,r,r")
- (match_operand:DI 1 "spu_nonmem_operand" "r,a,f"))]
- ""
+ [(set (match_operand:DI 0 "spu_nonimm_operand" "=r,r,r,r,m")
+ (match_operand:DI 1 "spu_mov_operand" "r,a,f,m,r"))]
+ "spu_valid_move (operands)"
"@
ori\t%0,%1,0
il%d1\t%0,%D1
- fsmbi\t%0,%G1"
- [(set_attr "type" "fx2,fx2,shuf")])
+ fsmbi\t%0,%G1
+ lq%p1\t%0,%1
+ stq%p0\t%1,%0"
+ [(set_attr "type" "fx2,fx2,shuf,load,store")])
(define_insn "_movti"
- [(set (match_operand:TI 0 "spu_reg_operand" "=r,r,r")
- (match_operand:TI 1 "spu_nonmem_operand" "r,U,f"))]
- ""
+ [(set (match_operand:TI 0 "spu_nonimm_operand" "=r,r,r,r,m")
+ (match_operand:TI 1 "spu_mov_operand" "r,U,f,m,r"))]
+ "spu_valid_move (operands)"
"@
ori\t%0,%1,0
il%t1\t%0,%T1
- fsmbi\t%0,%H1"
- [(set_attr "type" "fx2,fx2,shuf")])
+ fsmbi\t%0,%H1
+ lq%p1\t%0,%1
+ stq%p0\t%1,%0"
+ [(set_attr "type" "fx2,fx2,shuf,load,store")])
(define_insn_and_split "load"
[(set (match_operand 0 "spu_reg_operand" "=r")
@@ -316,22 +322,6 @@
(match_dup 1))]
{ spu_split_load(operands); DONE; })
-(define_insn "lq"
- [(set (match_operand:TI 0 "spu_reg_operand" "=r")
- (mem:TI (and:SI (match_operand:SI 1 "address_operand" "p")
- (const_int -16))))]
- ""
- "lq%p1\t%0,%a1"
- [(set_attr "type" "load")])
-
-(define_insn "lq_<mode>"
- [(set (match_operand:ALL 0 "spu_reg_operand" "=r")
- (match_operand:ALL 1 "spu_mem_operand" "m"))]
- "spu_valid_move (operands)"
- "lq%p1\t%0,%1"
- [(set_attr "type" "load")])
-
-
(define_insn_and_split "store"
[(set (match_operand 0 "memory_operand" "=m")
(match_operand 1 "spu_reg_operand" "r"))
@@ -344,21 +334,6 @@
(match_dup 1))]
{ spu_split_store(operands); DONE; })
-(define_insn "stq"
- [(set (mem:TI (and:SI (match_operand:SI 0 "address_operand" "p")
- (const_int -16)))
- (match_operand:TI 1 "spu_reg_operand" "r"))]
- ""
- "stq%p0\t%1,%a0"
- [(set_attr "type" "load")])
-
-(define_insn "stq_<mode>"
- [(set (match_operand:ALL 0 "spu_mem_operand" "=m")
- (match_operand:ALL 1 "spu_reg_operand" "r"))]
- "spu_valid_move (operands)"
- "stq%p0\t%1,%0"
- [(set_attr "type" "load")])
-
;; Operand 3 is the number of bytes. 1:b 2:h 4:w 8:d
(define_insn "cpat"
[(set (match_operand:TI 0 "spu_reg_operand" "=r,r")
@@ -450,20 +425,19 @@
""
"andi\t%0,%1,0x00ff")
-(define_insn_and_split "zero_extendhisi2"
+(define_expand "zero_extendhisi2"
[(set (match_operand:SI 0 "spu_reg_operand" "=r")
(zero_extend:SI (match_operand:HI 1 "spu_reg_operand" "r")))
(clobber (match_scratch:SI 2 "=&r"))]
""
- "#"
- "reload_completed"
- [(set (match_dup:SI 2)
- (const_int 65535))
- (set (match_dup:SI 0)
- (and:SI (match_dup:SI 3)
- (match_dup:SI 2)))]
- "operands[3] = gen_rtx_REG (SImode, REGNO (operands[1]));")
-
+ {
+ rtx mask = gen_reg_rtx (SImode);
+ rtx op1 = simplify_gen_subreg (SImode, operands[1], HImode, 0);
+ emit_move_insn (mask, GEN_INT (0xffff));
+ emit_insn (gen_andsi3(operands[0], op1, mask));
+ DONE;
+ })
+
(define_insn "zero_extendsidi2"
[(set (match_operand:DI 0 "spu_reg_operand" "=r")
(zero_extend:DI (match_operand:SI 1 "spu_reg_operand" "r")))]
@@ -547,6 +521,13 @@
"csflt\t%0,%1,0"
[(set_attr "type" "fp7")])
+(define_insn "floatv4siv4sf2"
+ [(set (match_operand:V4SF 0 "spu_reg_operand" "=r")
+ (float:V4SF (match_operand:V4SI 1 "spu_reg_operand" "r")))]
+ ""
+ "csflt\t%0,%1,0"
+ [(set_attr "type" "fp7")])
+
(define_insn "fix_truncsfsi2"
[(set (match_operand:SI 0 "spu_reg_operand" "=r")
(fix:SI (match_operand:SF 1 "spu_reg_operand" "r")))]
@@ -554,6 +535,13 @@
"cflts\t%0,%1,0"
[(set_attr "type" "fp7")])
+(define_insn "fix_truncv4sfv4si2"
+ [(set (match_operand:V4SI 0 "spu_reg_operand" "=r")
+ (fix:V4SI (match_operand:V4SF 1 "spu_reg_operand" "r")))]
+ ""
+ "cflts\t%0,%1,0"
+ [(set_attr "type" "fp7")])
+
(define_insn "floatunssisf2"
[(set (match_operand:SF 0 "spu_reg_operand" "=r")
(unsigned_float:SF (match_operand:SI 1 "spu_reg_operand" "r")))]
@@ -561,6 +549,13 @@
"cuflt\t%0,%1,0"
[(set_attr "type" "fp7")])
+(define_insn "floatunsv4siv4sf2"
+ [(set (match_operand:V4SF 0 "spu_reg_operand" "=r")
+ (unsigned_float:V4SF (match_operand:V4SI 1 "spu_reg_operand" "r")))]
+ ""
+ "cuflt\t%0,%1,0"
+ [(set_attr "type" "fp7")])
+
(define_insn "fixuns_truncsfsi2"
[(set (match_operand:SI 0 "spu_reg_operand" "=r")
(unsigned_fix:SI (match_operand:SF 1 "spu_reg_operand" "r")))]
@@ -568,6 +563,13 @@
"cfltu\t%0,%1,0"
[(set_attr "type" "fp7")])
+(define_insn "fixuns_truncv4sfv4si2"
+ [(set (match_operand:V4SI 0 "spu_reg_operand" "=r")
+ (unsigned_fix:V4SI (match_operand:V4SF 1 "spu_reg_operand" "r")))]
+ ""
+ "cfltu\t%0,%1,0"
+ [(set_attr "type" "fp7")])
+
(define_insn "extendsfdf2"
[(set (match_operand:DF 0 "spu_reg_operand" "=r")
(float_extend:DF (match_operand:SF 1 "spu_reg_operand" "r")))]
@@ -652,6 +654,28 @@
;; add
+(define_expand "addv16qi3"
+ [(set (match_operand:V16QI 0 "spu_reg_operand" "=r")
+ (plus:V16QI (match_operand:V16QI 1 "spu_reg_operand" "r")
+ (match_operand:V16QI 2 "spu_reg_operand" "r")))]
+ ""
+ "{
+ rtx res_short = simplify_gen_subreg (V8HImode, operands[0], V16QImode, 0);
+ rtx lhs_short = simplify_gen_subreg (V8HImode, operands[1], V16QImode, 0);
+ rtx rhs_short = simplify_gen_subreg (V8HImode, operands[2], V16QImode, 0);
+ rtx rhs_and = gen_reg_rtx (V8HImode);
+ rtx hi_char = gen_reg_rtx (V8HImode);
+ rtx lo_char = gen_reg_rtx (V8HImode);
+ rtx mask = gen_reg_rtx (V8HImode);
+
+ emit_move_insn (mask, spu_const (V8HImode, 0x00ff));
+ emit_insn (gen_andv8hi3 (rhs_and, rhs_short, spu_const (V8HImode, 0xff00)));
+ emit_insn (gen_addv8hi3 (hi_char, lhs_short, rhs_and));
+ emit_insn (gen_addv8hi3 (lo_char, lhs_short, rhs_short));
+ emit_insn (gen_selb (res_short, hi_char, lo_char, mask));
+ DONE;
+ }")
+
(define_insn "add<mode>3"
[(set (match_operand:VHSI 0 "spu_reg_operand" "=r,r")
(plus:VHSI (match_operand:VHSI 1 "spu_reg_operand" "r,r")
@@ -753,6 +777,28 @@
;; sub
+(define_expand "subv16qi3"
+ [(set (match_operand:V16QI 0 "spu_reg_operand" "=r")
+ (minus:V16QI (match_operand:V16QI 1 "spu_reg_operand" "r")
+ (match_operand:V16QI 2 "spu_reg_operand" "r")))]
+ ""
+ "{
+ rtx res_short = simplify_gen_subreg (V8HImode, operands[0], V16QImode, 0);
+ rtx lhs_short = simplify_gen_subreg (V8HImode, operands[1], V16QImode, 0);
+ rtx rhs_short = simplify_gen_subreg (V8HImode, operands[2], V16QImode, 0);
+ rtx rhs_and = gen_reg_rtx (V8HImode);
+ rtx hi_char = gen_reg_rtx (V8HImode);
+ rtx lo_char = gen_reg_rtx (V8HImode);
+ rtx mask = gen_reg_rtx (V8HImode);
+
+ emit_move_insn (mask, spu_const (V8HImode, 0x00ff));
+ emit_insn (gen_andv8hi3 (rhs_and, rhs_short, spu_const (V8HImode, 0xff00)));
+ emit_insn (gen_subv8hi3 (hi_char, lhs_short, rhs_and));
+ emit_insn (gen_subv8hi3 (lo_char, lhs_short, rhs_short));
+ emit_insn (gen_selb (res_short, hi_char, lo_char, mask));
+ DONE;
+ }")
+
(define_insn "sub<mode>3"
[(set (match_operand:VHSI 0 "spu_reg_operand" "=r,r")
(minus:VHSI (match_operand:VHSI 1 "spu_arith_operand" "r,B")
@@ -850,6 +896,17 @@
;; neg
+(define_expand "negv16qi2"
+ [(set (match_operand:V16QI 0 "spu_reg_operand" "=r")
+ (neg:V16QI (match_operand:V16QI 1 "spu_reg_operand" "r")))]
+ ""
+ "{
+ rtx zero = gen_reg_rtx (V16QImode);
+ emit_move_insn (zero, CONST0_RTX (V16QImode));
+ emit_insn (gen_subv16qi3 (operands[0], zero, operands[1]));
+ DONE;
+ }")
+
(define_insn "neg<mode>2"
[(set (match_operand:VHSI 0 "spu_reg_operand" "=r")
(neg:VHSI (match_operand:VHSI 1 "spu_reg_operand" "r")))]
@@ -960,27 +1017,47 @@
mpyi\t%0,%1,%2"
[(set_attr "type" "fp7")])
-(define_expand "mulsi3"
+(define_expand "mulv8hi3"
+ [(set (match_operand:V8HI 0 "spu_reg_operand" "")
+ (mult:V8HI (match_operand:V8HI 1 "spu_reg_operand" "")
+ (match_operand:V8HI 2 "spu_reg_operand" "")))]
+ ""
+ "{
+ rtx result = simplify_gen_subreg (V4SImode, operands[0], V8HImode, 0);
+ rtx low = gen_reg_rtx (V4SImode);
+ rtx high = gen_reg_rtx (V4SImode);
+ rtx shift = gen_reg_rtx (V4SImode);
+ rtx mask = gen_reg_rtx (V4SImode);
+
+ emit_move_insn (mask, spu_const (V4SImode, 0x0000ffff));
+ emit_insn (gen_spu_mpyhh (high, operands[1], operands[2]));
+ emit_insn (gen_spu_mpy (low, operands[1], operands[2]));
+ emit_insn (gen_ashlv4si3 (shift, high, spu_const(V4SImode, 16)));
+ emit_insn (gen_selb (result, shift, low, mask));
+ DONE;
+ }")
+
+(define_expand "mul<mode>3"
[(parallel
- [(set (match_operand:SI 0 "spu_reg_operand" "")
- (mult:SI (match_operand:SI 1 "spu_reg_operand" "")
- (match_operand:SI 2 "spu_reg_operand" "")))
- (clobber (match_dup:SI 3))
- (clobber (match_dup:SI 4))
- (clobber (match_dup:SI 5))
- (clobber (match_dup:SI 6))])]
+ [(set (match_operand:VSI 0 "spu_reg_operand" "")
+ (mult:VSI (match_operand:VSI 1 "spu_reg_operand" "")
+ (match_operand:VSI 2 "spu_reg_operand" "")))
+ (clobber (match_dup:VSI 3))
+ (clobber (match_dup:VSI 4))
+ (clobber (match_dup:VSI 5))
+ (clobber (match_dup:VSI 6))])]
""
{
- operands[3] = gen_reg_rtx(SImode);
- operands[4] = gen_reg_rtx(SImode);
- operands[5] = gen_reg_rtx(SImode);
- operands[6] = gen_reg_rtx(SImode);
+ operands[3] = gen_reg_rtx(<MODE>mode);
+ operands[4] = gen_reg_rtx(<MODE>mode);
+ operands[5] = gen_reg_rtx(<MODE>mode);
+ operands[6] = gen_reg_rtx(<MODE>mode);
})
(define_insn_and_split "_mulsi3"
[(set (match_operand:SI 0 "spu_reg_operand" "=r")
(mult:SI (match_operand:SI 1 "spu_reg_operand" "r")
- (match_operand:SI 2 "spu_nonmem_operand" "ri")))
+ (match_operand:SI 2 "spu_arith_operand" "rK")))
(clobber (match_operand:SI 3 "spu_reg_operand" "=&r"))
(clobber (match_operand:SI 4 "spu_reg_operand" "=&r"))
(clobber (match_operand:SI 5 "spu_reg_operand" "=&r"))
@@ -1025,6 +1102,37 @@
DONE;
})
+(define_insn_and_split "_mulv4si3"
+ [(set (match_operand:V4SI 0 "spu_reg_operand" "=r")
+ (mult:V4SI (match_operand:V4SI 1 "spu_reg_operand" "r")
+ (match_operand:V4SI 2 "spu_reg_operand" "r")))
+ (clobber (match_operand:V4SI 3 "spu_reg_operand" "=&r"))
+ (clobber (match_operand:V4SI 4 "spu_reg_operand" "=&r"))
+ (clobber (match_operand:V4SI 5 "spu_reg_operand" "=&r"))
+ (clobber (match_operand:V4SI 6 "spu_reg_operand" "=&r"))]
+ ""
+ "#"
+ ""
+ [(set (match_dup:V4SI 0)
+ (mult:V4SI (match_dup:V4SI 1)
+ (match_dup:V4SI 2)))]
+ {
+ HOST_WIDE_INT val = 0;
+ rtx a = operands[3];
+ rtx b = operands[4];
+ rtx c = operands[5];
+ rtx d = operands[6];
+ rtx op1 = simplify_gen_subreg (V8HImode, operands[1], V4SImode, 0);
+ rtx op2 = simplify_gen_subreg (V8HImode, operands[2], V4SImode, 0);
+ rtx op3 = simplify_gen_subreg (V8HImode, operands[3], V4SImode, 0);
+ emit_insn(gen_spu_mpyh(a, op1, op2));
+ emit_insn(gen_spu_mpyh(b, op2, op1));
+ emit_insn(gen_spu_mpyu(c, op1, op2));
+ emit_insn(gen_addv4si3(d, a, b));
+ emit_insn(gen_addv4si3(operands[0], d, c));
+ DONE;
+ })
+
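The `_mulv4si3` split above assembles a full 32-bit product from the SPU's 16-bit multipliers: two `mpyh` results (each a cross product of one operand's high halfword with the other's low halfword, pre-shifted left 16) plus the unsigned low-halfword product from `mpyu`. The `x_hi * y_hi` term would land entirely above bit 31, so it drops out modulo 2^32. A scalar sketch of one lane, assuming those instruction semantics (helper name is illustrative):

```c
#include <stdint.h>

/* One lane of the _mulv4si3 split: 32-bit multiply from halfword products. */
static uint32_t mul_si_from_halves(uint32_t x, uint32_t y)
{
    uint32_t a = ((x >> 16) * (y & 0xffffu)) << 16; /* spu_mpyh (a, op1, op2) */
    uint32_t b = ((y >> 16) * (x & 0xffffu)) << 16; /* spu_mpyh (b, op2, op1) */
    uint32_t c = (x & 0xffffu) * (y & 0xffffu);     /* spu_mpyu (c, op1, op2) */
    return a + b + c;  /* == x * y modulo 2^32 */
}
```

Signedness of the `mpyh` products is immaterial here: after the left shift by 16, only the low 16 bits of each cross product survive, and those agree for signed and unsigned multiplication.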
(define_insn "mulhisi3"
[(set (match_operand:SI 0 "spu_reg_operand" "=r")
(mult:SI (sign_extend:SI (match_operand:HI 1 "spu_reg_operand" "r"))
@@ -1070,8 +1178,8 @@
[(set_attr "type" "fp7")])
;; This isn't always profitable to use. Consider r = a * b + c * d.
-;; It's faster to do the multplies in parallel then add them. If we
-;; merge a multply and add it prevents the multplies from happening in
+;; It's faster to do the multiplies in parallel and then add them. If we
+;; merge a multiply and add, it prevents the multiplies from happening in
;; parallel.
(define_insn "mpya_si"
[(set (match_operand:SI 0 "spu_reg_operand" "=r")
@@ -2557,8 +2665,19 @@ selb\t%0,%4,%0,%3"
(define_expand "cmp<mode>"
[(set (cc0)
- (compare (match_operand:VINT 0 "spu_reg_operand" "")
- (match_operand:VINT 1 "spu_nonmem_operand" "")))]
+ (compare (match_operand:VQHSI 0 "spu_reg_operand" "")
+ (match_operand:VQHSI 1 "spu_nonmem_operand" "")))]
+ ""
+ {
+ spu_compare_op0 = operands[0];
+ spu_compare_op1 = operands[1];
+ DONE;
+ })
+
+(define_expand "cmp<mode>"
+ [(set (cc0)
+ (compare (match_operand:DTI 0 "spu_reg_operand" "")
+ (match_operand:DTI 1 "spu_reg_operand" "")))]
""
{
spu_compare_op0 = operands[0];
@@ -2569,7 +2688,7 @@ selb\t%0,%4,%0,%3"
(define_expand "cmp<mode>"
[(set (cc0)
(compare (match_operand:VSF 0 "spu_reg_operand" "")
- (match_operand:VSF 1 "spu_nonmem_operand" "")))]
+ (match_operand:VSF 1 "spu_reg_operand" "")))]
""
{
spu_compare_op0 = operands[0];
diff --git a/gcc/config/spu/spu_internals.h b/gcc/config/spu/spu_internals.h
index 752ddb6f04f..6a71f279819 100644
--- a/gcc/config/spu/spu_internals.h
+++ b/gcc/config/spu/spu_internals.h
@@ -256,9 +256,7 @@
#define __align_hint(ptr,base,offset) __builtin_spu_align_hint(ptr,base,offset)
-#ifndef __cplusplus
-
-/* generic spu_* intrinisics */
+/* generic spu_* intrinsics */
#define spu_splats(scalar) __builtin_spu_splats(scalar)
#define spu_convtf(ra,imm) __builtin_spu_convtf(ra,imm)
@@ -330,2444 +328,6 @@
#define spu_insert(scalar,ra,pos) __builtin_spu_insert(scalar,ra,pos)
#define spu_promote(scalar,pos) __builtin_spu_promote(scalar,pos)
-#else /* __cplusplus */
-
-/* A bit of a hack... Float conversion needs an immediate operand.
- * always_inline doesn't help because the compiler generates an error
- * before inlining happens. */
-static inline vec_float4 __hack_spu_convtf (vec_int4, vec_float4, vec_float4) __attribute__((__always_inline__));
-static inline vec_float4 __hack_spu_convtf (vec_uint4, vec_float4, vec_float4) __attribute__((__always_inline__));
-static inline vec_float4
-__hack_spu_convtf (vec_int4 ra, vec_float4 from_signed, vec_float4 from_unsigned)
-{
- (void)ra;
- (void)from_unsigned;
- return from_signed;
-}
-static inline vec_float4
-__hack_spu_convtf (vec_uint4 ra, vec_float4 from_signed, vec_float4 from_unsigned)
-{
- (void)ra;
- (void)from_signed;
- return from_unsigned;
-}
-#define spu_convtf(ra,imm) \
- __hack_spu_convtf((ra), \
- __builtin_spu_convtf_1((vec_int4)(ra), (imm)), \
- __builtin_spu_convtf_0((vec_uint4)(ra), (imm)))
-
-/* The following defines and functions were created automatically from
- * spu_builtins.def. */
-#define spu_convts(a, b) __builtin_spu_convts (a, b)
-#define spu_convtu(a, b) __builtin_spu_convtu (a, b)
-#define spu_roundtf(a) __builtin_spu_roundtf (a)
-#define spu_mulh(a, b) __builtin_spu_mulh (a, b)
-#define spu_mulsr(a, b) __builtin_spu_mulsr (a, b)
-#define spu_frest(a) __builtin_spu_frest (a)
-#define spu_frsqest(a) __builtin_spu_frsqest (a)
-#define spu_nmadd(a, b, c) __builtin_spu_nmadd (a, b, c)
-#define spu_absd(a, b) __builtin_spu_absd (a, b)
-#define spu_avg(a, b) __builtin_spu_avg (a, b)
-#define spu_sumb(a, b) __builtin_spu_sumb (a, b)
-#define spu_bisled(a) __builtin_spu_bisled (a, 0)
-#define spu_bisled_d(a) __builtin_spu_bisled_d (a, 0)
-#define spu_bisled_e(a) __builtin_spu_bisled_e (a, 0)
-#define spu_cmpabseq(a, b) __builtin_spu_cmpabseq (a, b)
-#define spu_cmpabsgt(a, b) __builtin_spu_cmpabsgt (a, b)
-
-static inline vec_short8 spu_extend (vec_char16 a) __attribute__((__always_inline__));
-static inline vec_int4 spu_extend (vec_short8 a) __attribute__((__always_inline__));
-static inline vec_llong2 spu_extend (vec_int4 a) __attribute__((__always_inline__));
-static inline vec_double2 spu_extend (vec_float4 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_add (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_add (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_add (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_add (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_add (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_add (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_add (vec_ushort8 a, unsigned short b) __attribute__((__always_inline__));
-static inline vec_short8 spu_add (vec_short8 a, short b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_add (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_add (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_addx (vec_int4 a, vec_int4 b, vec_int4 c) __attribute__((__always_inline__));
-static inline vec_uint4 spu_addx (vec_uint4 a, vec_uint4 b, vec_uint4 c) __attribute__((__always_inline__));
-static inline vec_int4 spu_genc (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_genc (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_gencx (vec_int4 a, vec_int4 b, vec_int4 c) __attribute__((__always_inline__));
-static inline vec_uint4 spu_gencx (vec_uint4 a, vec_uint4 b, vec_uint4 c) __attribute__((__always_inline__));
-static inline vec_int4 spu_madd (vec_short8 a, vec_short8 b, vec_int4 c) __attribute__((__always_inline__));
-static inline vec_float4 spu_madd (vec_float4 a, vec_float4 b, vec_float4 c) __attribute__((__always_inline__));
-static inline vec_double2 spu_madd (vec_double2 a, vec_double2 b, vec_double2 c) __attribute__((__always_inline__));
-static inline vec_float4 spu_msub (vec_float4 a, vec_float4 b, vec_float4 c) __attribute__((__always_inline__));
-static inline vec_double2 spu_msub (vec_double2 a, vec_double2 b, vec_double2 c) __attribute__((__always_inline__));
-static inline vec_uint4 spu_mhhadd (vec_ushort8 a, vec_ushort8 b, vec_uint4 c) __attribute__((__always_inline__));
-static inline vec_int4 spu_mhhadd (vec_short8 a, vec_short8 b, vec_int4 c) __attribute__((__always_inline__));
-static inline vec_uint4 spu_mule (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_mule (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_mul (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_mul (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_mulo (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_mulo (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_mulo (vec_short8 a, short b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_mulo (vec_ushort8 a, unsigned short b) __attribute__((__always_inline__));
-static inline vec_float4 spu_nmsub (vec_float4 a, vec_float4 b, vec_float4 c) __attribute__((__always_inline__));
-static inline vec_double2 spu_nmsub (vec_double2 a, vec_double2 b, vec_double2 c) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_sub (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_sub (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_sub (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_sub (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_sub (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_sub (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_sub (unsigned short a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_sub (short a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_sub (unsigned int a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_sub (int a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_subx (vec_uint4 a, vec_uint4 b, vec_uint4 c) __attribute__((__always_inline__));
-static inline vec_int4 spu_subx (vec_int4 a, vec_int4 b, vec_int4 c) __attribute__((__always_inline__));
-static inline vec_uint4 spu_genb (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_genb (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_genbx (vec_uint4 a, vec_uint4 b, vec_uint4 c) __attribute__((__always_inline__));
-static inline vec_int4 spu_genbx (vec_int4 a, vec_int4 b, vec_int4 c) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cmpeq (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cmpeq (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_cmpeq (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_cmpeq (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpeq (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpeq (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpeq (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cmpeq (vec_uchar16 a, unsigned char b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cmpeq (vec_char16 a, signed char b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_cmpeq (vec_ushort8 a, unsigned short b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_cmpeq (vec_short8 a, short b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpeq (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpeq (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cmpgt (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cmpgt (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_cmpgt (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_cmpgt (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpgt (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpgt (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpgt (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cmpgt (vec_uchar16 a, unsigned char b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cmpgt (vec_char16 a, signed char b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_cmpgt (vec_ushort8 a, unsigned short b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_cmpgt (vec_short8 a, short b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpgt (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cmpgt (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline void spu_hcmpeq (int a, int b) __attribute__((__always_inline__));
-static inline void spu_hcmpeq (unsigned int a, unsigned int b) __attribute__((__always_inline__));
-static inline void spu_hcmpgt (int a, int b) __attribute__((__always_inline__));
-static inline void spu_hcmpgt (unsigned int a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cntb (vec_char16 a) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_cntb (vec_uchar16 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cntlz (vec_int4 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cntlz (vec_uint4 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_cntlz (vec_float4 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_gather (vec_int4 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_gather (vec_uint4 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_gather (vec_short8 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_gather (vec_ushort8 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_gather (vec_char16 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_gather (vec_uchar16 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_gather (vec_float4 a) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_maskb (unsigned short a) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_maskb (short a) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_maskb (unsigned int a) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_maskb (int a) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_maskh (unsigned char a) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_maskh (signed char a) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_maskh (char a) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_maskh (unsigned short a) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_maskh (short a) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_maskh (unsigned int a) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_maskh (int a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_maskw (unsigned char a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_maskw (signed char a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_maskw (char a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_maskw (unsigned short a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_maskw (short a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_maskw (unsigned int a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_maskw (int a) __attribute__((__always_inline__));
-static inline vec_llong2 spu_sel (vec_llong2 a, vec_llong2 b, vec_ullong2 c) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_sel (vec_ullong2 a, vec_ullong2 b, vec_ullong2 c) __attribute__((__always_inline__));
-static inline vec_int4 spu_sel (vec_int4 a, vec_int4 b, vec_uint4 c) __attribute__((__always_inline__));
-static inline vec_uint4 spu_sel (vec_uint4 a, vec_uint4 b, vec_uint4 c) __attribute__((__always_inline__));
-static inline vec_short8 spu_sel (vec_short8 a, vec_short8 b, vec_ushort8 c) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_sel (vec_ushort8 a, vec_ushort8 b, vec_ushort8 c) __attribute__((__always_inline__));
-static inline vec_char16 spu_sel (vec_char16 a, vec_char16 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_sel (vec_uchar16 a, vec_uchar16 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_float4 spu_sel (vec_float4 a, vec_float4 b, vec_uint4 c) __attribute__((__always_inline__));
-static inline vec_double2 spu_sel (vec_double2 a, vec_double2 b, vec_ullong2 c) __attribute__((__always_inline__));
-static inline vec_llong2 spu_sel (vec_llong2 a, vec_llong2 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_sel (vec_ullong2 a, vec_ullong2 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_int4 spu_sel (vec_int4 a, vec_int4 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_uint4 spu_sel (vec_uint4 a, vec_uint4 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_short8 spu_sel (vec_short8 a, vec_short8 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_sel (vec_ushort8 a, vec_ushort8 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_float4 spu_sel (vec_float4 a, vec_float4 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_double2 spu_sel (vec_double2 a, vec_double2 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_shuffle (vec_uchar16 a, vec_uchar16 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_char16 spu_shuffle (vec_char16 a, vec_char16 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_shuffle (vec_ushort8 a, vec_ushort8 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_short8 spu_shuffle (vec_short8 a, vec_short8 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_uint4 spu_shuffle (vec_uint4 a, vec_uint4 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_int4 spu_shuffle (vec_int4 a, vec_int4 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_shuffle (vec_ullong2 a, vec_ullong2 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_llong2 spu_shuffle (vec_llong2 a, vec_llong2 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_float4 spu_shuffle (vec_float4 a, vec_float4 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_double2 spu_shuffle (vec_double2 a, vec_double2 b, vec_uchar16 c) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_and (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_char16 spu_and (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_and (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_and (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_and (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_and (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_and (vec_ullong2 a, vec_ullong2 b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_and (vec_llong2 a, vec_llong2 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_and (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_and (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_and (vec_uchar16 a, unsigned char b) __attribute__((__always_inline__));
-static inline vec_char16 spu_and (vec_char16 a, signed char b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_and (vec_ushort8 a, unsigned short b) __attribute__((__always_inline__));
-static inline vec_short8 spu_and (vec_short8 a, short b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_and (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_and (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_andc (vec_llong2 a, vec_llong2 b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_andc (vec_ullong2 a, vec_ullong2 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_andc (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_andc (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_andc (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_andc (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_char16 spu_andc (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_andc (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_andc (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_andc (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_eqv (vec_llong2 a, vec_llong2 b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_eqv (vec_ullong2 a, vec_ullong2 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_eqv (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_eqv (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_eqv (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_eqv (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_char16 spu_eqv (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_eqv (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_eqv (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_eqv (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_nand (vec_llong2 a, vec_llong2 b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_nand (vec_ullong2 a, vec_ullong2 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_nand (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_nand (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_nand (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_nand (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_char16 spu_nand (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_nand (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_nand (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_nand (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_nor (vec_llong2 a, vec_llong2 b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_nor (vec_ullong2 a, vec_ullong2 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_nor (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_nor (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_nor (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_nor (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_char16 spu_nor (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_nor (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_nor (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_nor (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_or (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_char16 spu_or (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_or (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_or (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_or (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_or (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_or (vec_ullong2 a, vec_ullong2 b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_or (vec_llong2 a, vec_llong2 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_or (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_or (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_or (vec_uchar16 a, unsigned char b) __attribute__((__always_inline__));
-static inline vec_char16 spu_or (vec_char16 a, signed char b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_or (vec_ushort8 a, unsigned short b) __attribute__((__always_inline__));
-static inline vec_short8 spu_or (vec_short8 a, short b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_or (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_or (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_orc (vec_llong2 a, vec_llong2 b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_orc (vec_ullong2 a, vec_ullong2 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_orc (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_orc (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_orc (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_orc (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_char16 spu_orc (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_orc (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_orc (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_orc (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_orx (vec_int4 a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_orx (vec_uint4 a) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_xor (vec_uchar16 a, vec_uchar16 b) __attribute__((__always_inline__));
-static inline vec_char16 spu_xor (vec_char16 a, vec_char16 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_xor (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_xor (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_xor (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_xor (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_xor (vec_ullong2 a, vec_ullong2 b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_xor (vec_llong2 a, vec_llong2 b) __attribute__((__always_inline__));
-static inline vec_float4 spu_xor (vec_float4 a, vec_float4 b) __attribute__((__always_inline__));
-static inline vec_double2 spu_xor (vec_double2 a, vec_double2 b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_xor (vec_uchar16 a, unsigned char b) __attribute__((__always_inline__));
-static inline vec_char16 spu_xor (vec_char16 a, signed char b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_xor (vec_ushort8 a, unsigned short b) __attribute__((__always_inline__));
-static inline vec_short8 spu_xor (vec_short8 a, short b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_xor (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_xor (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rl (vec_ushort8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rl (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rl (vec_uint4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rl (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rl (vec_ushort8 a, short b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rl (vec_short8 a, short b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rl (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rl (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_rlqw (vec_uchar16 a, int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_rlqw (vec_char16 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlqw (vec_ushort8 a, int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlqw (vec_short8 a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlqw (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlqw (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_rlqw (vec_ullong2 a, int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_rlqw (vec_llong2 a, int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_rlqw (vec_float4 a, int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_rlqw (vec_double2 a, int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_rlqwbyte (vec_uchar16 a, int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_rlqwbyte (vec_char16 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlqwbyte (vec_ushort8 a, int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlqwbyte (vec_short8 a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlqwbyte (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlqwbyte (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_rlqwbyte (vec_ullong2 a, int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_rlqwbyte (vec_llong2 a, int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_rlqwbyte (vec_float4 a, int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_rlqwbyte (vec_double2 a, int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_rlqwbytebc (vec_uchar16 a, int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_rlqwbytebc (vec_char16 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlqwbytebc (vec_ushort8 a, int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlqwbytebc (vec_short8 a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlqwbytebc (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlqwbytebc (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_rlqwbytebc (vec_ullong2 a, int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_rlqwbytebc (vec_llong2 a, int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_rlqwbytebc (vec_float4 a, int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_rlqwbytebc (vec_double2 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlmask (vec_ushort8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlmask (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlmask (vec_uint4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlmask (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlmask (vec_ushort8 a, int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlmask (vec_short8 a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlmask (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlmask (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlmaska (vec_ushort8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlmaska (vec_short8 a, vec_short8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlmaska (vec_uint4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlmaska (vec_int4 a, vec_int4 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlmaska (vec_ushort8 a, int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlmaska (vec_short8 a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlmaska (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlmaska (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_rlmaskqw (vec_uchar16 a, int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_rlmaskqw (vec_char16 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlmaskqw (vec_ushort8 a, int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlmaskqw (vec_short8 a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlmaskqw (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlmaskqw (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_rlmaskqw (vec_ullong2 a, int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_rlmaskqw (vec_llong2 a, int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_rlmaskqw (vec_float4 a, int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_rlmaskqw (vec_double2 a, int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_rlmaskqwbyte (vec_uchar16 a, int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_rlmaskqwbyte (vec_char16 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlmaskqwbyte (vec_ushort8 a, int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlmaskqwbyte (vec_short8 a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlmaskqwbyte (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlmaskqwbyte (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_rlmaskqwbyte (vec_ullong2 a, int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_rlmaskqwbyte (vec_llong2 a, int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_rlmaskqwbyte (vec_float4 a, int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_rlmaskqwbyte (vec_double2 a, int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_rlmaskqwbytebc (vec_uchar16 a, int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_rlmaskqwbytebc (vec_char16 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_rlmaskqwbytebc (vec_ushort8 a, int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_rlmaskqwbytebc (vec_short8 a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_rlmaskqwbytebc (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_rlmaskqwbytebc (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_rlmaskqwbytebc (vec_ullong2 a, int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_rlmaskqwbytebc (vec_llong2 a, int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_rlmaskqwbytebc (vec_float4 a, int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_rlmaskqwbytebc (vec_double2 a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_sl (vec_ushort8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_short8 spu_sl (vec_short8 a, vec_ushort8 b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_sl (vec_uint4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_int4 spu_sl (vec_int4 a, vec_uint4 b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_sl (vec_ushort8 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_sl (vec_short8 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_sl (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_sl (vec_int4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_slqw (vec_llong2 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_slqw (vec_ullong2 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_slqw (vec_int4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_slqw (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_slqw (vec_short8 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_slqw (vec_ushort8 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_slqw (vec_char16 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_slqw (vec_uchar16 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_slqw (vec_float4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_slqw (vec_double2 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_slqwbyte (vec_llong2 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_slqwbyte (vec_ullong2 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_slqwbyte (vec_int4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_slqwbyte (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_slqwbyte (vec_short8 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_slqwbyte (vec_ushort8 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_slqwbyte (vec_char16 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_slqwbyte (vec_uchar16 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_slqwbyte (vec_float4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_slqwbyte (vec_double2 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_slqwbytebc (vec_llong2 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_slqwbytebc (vec_ullong2 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_slqwbytebc (vec_int4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_slqwbytebc (vec_uint4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_slqwbytebc (vec_short8 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_slqwbytebc (vec_ushort8 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_slqwbytebc (vec_char16 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_slqwbytebc (vec_uchar16 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_slqwbytebc (vec_float4 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_slqwbytebc (vec_double2 a, unsigned int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_splats (unsigned char a) __attribute__((__always_inline__));
-static inline vec_char16 spu_splats (signed char a) __attribute__((__always_inline__));
-static inline vec_char16 spu_splats (char a) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_splats (unsigned short a) __attribute__((__always_inline__));
-static inline vec_short8 spu_splats (short a) __attribute__((__always_inline__));
-static inline vec_uint4 spu_splats (unsigned int a) __attribute__((__always_inline__));
-static inline vec_int4 spu_splats (int a) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_splats (unsigned long long a) __attribute__((__always_inline__));
-static inline vec_llong2 spu_splats (long long a) __attribute__((__always_inline__));
-static inline vec_float4 spu_splats (float a) __attribute__((__always_inline__));
-static inline vec_double2 spu_splats (double a) __attribute__((__always_inline__));
-static inline unsigned char spu_extract (vec_uchar16 a, int b) __attribute__((__always_inline__));
-static inline signed char spu_extract (vec_char16 a, int b) __attribute__((__always_inline__));
-static inline unsigned short spu_extract (vec_ushort8 a, int b) __attribute__((__always_inline__));
-static inline short spu_extract (vec_short8 a, int b) __attribute__((__always_inline__));
-static inline unsigned int spu_extract (vec_uint4 a, int b) __attribute__((__always_inline__));
-static inline int spu_extract (vec_int4 a, int b) __attribute__((__always_inline__));
-static inline unsigned long long spu_extract (vec_ullong2 a, int b) __attribute__((__always_inline__));
-static inline long long spu_extract (vec_llong2 a, int b) __attribute__((__always_inline__));
-static inline float spu_extract (vec_float4 a, int b) __attribute__((__always_inline__));
-static inline double spu_extract (vec_double2 a, int b) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_insert (unsigned char a, vec_uchar16 b, int c) __attribute__((__always_inline__));
-static inline vec_char16 spu_insert (signed char a, vec_char16 b, int c) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_insert (unsigned short a, vec_ushort8 b, int c) __attribute__((__always_inline__));
-static inline vec_short8 spu_insert (short a, vec_short8 b, int c) __attribute__((__always_inline__));
-static inline vec_uint4 spu_insert (unsigned int a, vec_uint4 b, int c) __attribute__((__always_inline__));
-static inline vec_int4 spu_insert (int a, vec_int4 b, int c) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_insert (unsigned long long a, vec_ullong2 b, int c) __attribute__((__always_inline__));
-static inline vec_llong2 spu_insert (long long a, vec_llong2 b, int c) __attribute__((__always_inline__));
-static inline vec_float4 spu_insert (float a, vec_float4 b, int c) __attribute__((__always_inline__));
-static inline vec_double2 spu_insert (double a, vec_double2 b, int c) __attribute__((__always_inline__));
-static inline vec_uchar16 spu_promote (unsigned char a, int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_promote (signed char a, int b) __attribute__((__always_inline__));
-static inline vec_char16 spu_promote (char a, int b) __attribute__((__always_inline__));
-static inline vec_ushort8 spu_promote (unsigned short a, int b) __attribute__((__always_inline__));
-static inline vec_short8 spu_promote (short a, int b) __attribute__((__always_inline__));
-static inline vec_uint4 spu_promote (unsigned int a, int b) __attribute__((__always_inline__));
-static inline vec_int4 spu_promote (int a, int b) __attribute__((__always_inline__));
-static inline vec_ullong2 spu_promote (unsigned long long a, int b) __attribute__((__always_inline__));
-static inline vec_llong2 spu_promote (long long a, int b) __attribute__((__always_inline__));
-static inline vec_float4 spu_promote (float a, int b) __attribute__((__always_inline__));
-static inline vec_double2 spu_promote (double a, int b) __attribute__((__always_inline__));
-
-static inline vec_short8
-spu_extend (vec_char16 a)
-{
- return __builtin_spu_extend_0 (a);
-}
-static inline vec_int4
-spu_extend (vec_short8 a)
-{
- return __builtin_spu_extend_1 (a);
-}
-static inline vec_llong2
-spu_extend (vec_int4 a)
-{
- return __builtin_spu_extend_2 (a);
-}
-static inline vec_double2
-spu_extend (vec_float4 a)
-{
- return __builtin_spu_extend_3 (a);
-}
-static inline vec_uint4
-spu_add (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_add_0 (a, b);
-}
-static inline vec_int4
-spu_add (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_add_1 (a, b);
-}
-static inline vec_ushort8
-spu_add (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_add_2 (a, b);
-}
-static inline vec_short8
-spu_add (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_add_3 (a, b);
-}
-static inline vec_float4
-spu_add (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_add_4 (a, b);
-}
-static inline vec_double2
-spu_add (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_add_5 (a, b);
-}
-static inline vec_ushort8
-spu_add (vec_ushort8 a, unsigned short b)
-{
- return __builtin_spu_add_6 (a, b);
-}
-static inline vec_short8
-spu_add (vec_short8 a, short b)
-{
- return __builtin_spu_add_7 (a, b);
-}
-static inline vec_uint4
-spu_add (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_add_8 (a, b);
-}
-static inline vec_int4
-spu_add (vec_int4 a, int b)
-{
- return __builtin_spu_add_9 (a, b);
-}
-static inline vec_int4
-spu_addx (vec_int4 a, vec_int4 b, vec_int4 c)
-{
- return __builtin_spu_addx_0 (a, b, c);
-}
-static inline vec_uint4
-spu_addx (vec_uint4 a, vec_uint4 b, vec_uint4 c)
-{
- return __builtin_spu_addx_1 (a, b, c);
-}
-static inline vec_int4
-spu_genc (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_genc_0 (a, b);
-}
-static inline vec_uint4
-spu_genc (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_genc_1 (a, b);
-}
-static inline vec_int4
-spu_gencx (vec_int4 a, vec_int4 b, vec_int4 c)
-{
- return __builtin_spu_gencx_0 (a, b, c);
-}
-static inline vec_uint4
-spu_gencx (vec_uint4 a, vec_uint4 b, vec_uint4 c)
-{
- return __builtin_spu_gencx_1 (a, b, c);
-}
-static inline vec_int4
-spu_madd (vec_short8 a, vec_short8 b, vec_int4 c)
-{
- return __builtin_spu_madd_0 (a, b, c);
-}
-static inline vec_float4
-spu_madd (vec_float4 a, vec_float4 b, vec_float4 c)
-{
- return __builtin_spu_madd_1 (a, b, c);
-}
-static inline vec_double2
-spu_madd (vec_double2 a, vec_double2 b, vec_double2 c)
-{
- return __builtin_spu_madd_2 (a, b, c);
-}
-static inline vec_float4
-spu_msub (vec_float4 a, vec_float4 b, vec_float4 c)
-{
- return __builtin_spu_msub_0 (a, b, c);
-}
-static inline vec_double2
-spu_msub (vec_double2 a, vec_double2 b, vec_double2 c)
-{
- return __builtin_spu_msub_1 (a, b, c);
-}
-static inline vec_uint4
-spu_mhhadd (vec_ushort8 a, vec_ushort8 b, vec_uint4 c)
-{
- return __builtin_spu_mhhadd_0 (a, b, c);
-}
-static inline vec_int4
-spu_mhhadd (vec_short8 a, vec_short8 b, vec_int4 c)
-{
- return __builtin_spu_mhhadd_1 (a, b, c);
-}
-static inline vec_uint4
-spu_mule (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_mule_0 (a, b);
-}
-static inline vec_int4
-spu_mule (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_mule_1 (a, b);
-}
-static inline vec_float4
-spu_mul (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_mul_0 (a, b);
-}
-static inline vec_double2
-spu_mul (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_mul_1 (a, b);
-}
-static inline vec_int4
-spu_mulo (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_mulo_0 (a, b);
-}
-static inline vec_uint4
-spu_mulo (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_mulo_1 (a, b);
-}
-static inline vec_int4
-spu_mulo (vec_short8 a, short b)
-{
- return __builtin_spu_mulo_2 (a, b);
-}
-static inline vec_uint4
-spu_mulo (vec_ushort8 a, unsigned short b)
-{
- return __builtin_spu_mulo_3 (a, b);
-}
-static inline vec_float4
-spu_nmsub (vec_float4 a, vec_float4 b, vec_float4 c)
-{
- return __builtin_spu_nmsub_0 (a, b, c);
-}
-static inline vec_double2
-spu_nmsub (vec_double2 a, vec_double2 b, vec_double2 c)
-{
- return __builtin_spu_nmsub_1 (a, b, c);
-}
-static inline vec_ushort8
-spu_sub (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_sub_0 (a, b);
-}
-static inline vec_short8
-spu_sub (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_sub_1 (a, b);
-}
-static inline vec_uint4
-spu_sub (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_sub_2 (a, b);
-}
-static inline vec_int4
-spu_sub (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_sub_3 (a, b);
-}
-static inline vec_float4
-spu_sub (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_sub_4 (a, b);
-}
-static inline vec_double2
-spu_sub (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_sub_5 (a, b);
-}
-static inline vec_ushort8
-spu_sub (unsigned short a, vec_ushort8 b)
-{
- return __builtin_spu_sub_6 (a, b);
-}
-static inline vec_short8
-spu_sub (short a, vec_short8 b)
-{
- return __builtin_spu_sub_7 (a, b);
-}
-static inline vec_uint4
-spu_sub (unsigned int a, vec_uint4 b)
-{
- return __builtin_spu_sub_8 (a, b);
-}
-static inline vec_int4
-spu_sub (int a, vec_int4 b)
-{
- return __builtin_spu_sub_9 (a, b);
-}
-static inline vec_uint4
-spu_subx (vec_uint4 a, vec_uint4 b, vec_uint4 c)
-{
- return __builtin_spu_subx_0 (a, b, c);
-}
-static inline vec_int4
-spu_subx (vec_int4 a, vec_int4 b, vec_int4 c)
-{
- return __builtin_spu_subx_1 (a, b, c);
-}
-static inline vec_uint4
-spu_genb (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_genb_0 (a, b);
-}
-static inline vec_int4
-spu_genb (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_genb_1 (a, b);
-}
-static inline vec_uint4
-spu_genbx (vec_uint4 a, vec_uint4 b, vec_uint4 c)
-{
- return __builtin_spu_genbx_0 (a, b, c);
-}
-static inline vec_int4
-spu_genbx (vec_int4 a, vec_int4 b, vec_int4 c)
-{
- return __builtin_spu_genbx_1 (a, b, c);
-}
-static inline vec_uchar16
-spu_cmpeq (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_cmpeq_0 (a, b);
-}
-static inline vec_uchar16
-spu_cmpeq (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_cmpeq_1 (a, b);
-}
-static inline vec_ushort8
-spu_cmpeq (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_cmpeq_2 (a, b);
-}
-static inline vec_ushort8
-spu_cmpeq (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_cmpeq_3 (a, b);
-}
-static inline vec_uint4
-spu_cmpeq (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_cmpeq_4 (a, b);
-}
-static inline vec_uint4
-spu_cmpeq (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_cmpeq_5 (a, b);
-}
-static inline vec_uint4
-spu_cmpeq (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_cmpeq_6 (a, b);
-}
-static inline vec_uchar16
-spu_cmpeq (vec_uchar16 a, unsigned char b)
-{
- return __builtin_spu_cmpeq_7 (a, b);
-}
-static inline vec_uchar16
-spu_cmpeq (vec_char16 a, signed char b)
-{
- return __builtin_spu_cmpeq_8 (a, b);
-}
-static inline vec_ushort8
-spu_cmpeq (vec_ushort8 a, unsigned short b)
-{
- return __builtin_spu_cmpeq_9 (a, b);
-}
-static inline vec_ushort8
-spu_cmpeq (vec_short8 a, short b)
-{
- return __builtin_spu_cmpeq_10 (a, b);
-}
-static inline vec_uint4
-spu_cmpeq (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_cmpeq_11 (a, b);
-}
-static inline vec_uint4
-spu_cmpeq (vec_int4 a, int b)
-{
- return __builtin_spu_cmpeq_12 (a, b);
-}
-static inline vec_uchar16
-spu_cmpgt (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_cmpgt_0 (a, b);
-}
-static inline vec_uchar16
-spu_cmpgt (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_cmpgt_1 (a, b);
-}
-static inline vec_ushort8
-spu_cmpgt (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_cmpgt_2 (a, b);
-}
-static inline vec_ushort8
-spu_cmpgt (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_cmpgt_3 (a, b);
-}
-static inline vec_uint4
-spu_cmpgt (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_cmpgt_4 (a, b);
-}
-static inline vec_uint4
-spu_cmpgt (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_cmpgt_5 (a, b);
-}
-static inline vec_uint4
-spu_cmpgt (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_cmpgt_6 (a, b);
-}
-static inline vec_uchar16
-spu_cmpgt (vec_uchar16 a, unsigned char b)
-{
- return __builtin_spu_cmpgt_7 (a, b);
-}
-static inline vec_uchar16
-spu_cmpgt (vec_char16 a, signed char b)
-{
- return __builtin_spu_cmpgt_8 (a, b);
-}
-static inline vec_ushort8
-spu_cmpgt (vec_ushort8 a, unsigned short b)
-{
- return __builtin_spu_cmpgt_9 (a, b);
-}
-static inline vec_ushort8
-spu_cmpgt (vec_short8 a, short b)
-{
- return __builtin_spu_cmpgt_10 (a, b);
-}
-static inline vec_uint4
-spu_cmpgt (vec_int4 a, int b)
-{
- return __builtin_spu_cmpgt_11 (a, b);
-}
-static inline vec_uint4
-spu_cmpgt (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_cmpgt_12 (a, b);
-}
-static inline void
-spu_hcmpeq (int a, int b)
-{
- return __builtin_spu_hcmpeq_0 (a, b);
-}
-static inline void
-spu_hcmpeq (unsigned int a, unsigned int b)
-{
- return __builtin_spu_hcmpeq_1 (a, b);
-}
-static inline void
-spu_hcmpgt (int a, int b)
-{
- return __builtin_spu_hcmpgt_0 (a, b);
-}
-static inline void
-spu_hcmpgt (unsigned int a, unsigned int b)
-{
- return __builtin_spu_hcmpgt_1 (a, b);
-}
-static inline vec_uchar16
-spu_cntb (vec_char16 a)
-{
- return __builtin_spu_cntb_0 (a);
-}
-static inline vec_uchar16
-spu_cntb (vec_uchar16 a)
-{
- return __builtin_spu_cntb_1 (a);
-}
-static inline vec_uint4
-spu_cntlz (vec_int4 a)
-{
- return __builtin_spu_cntlz_0 (a);
-}
-static inline vec_uint4
-spu_cntlz (vec_uint4 a)
-{
- return __builtin_spu_cntlz_1 (a);
-}
-static inline vec_uint4
-spu_cntlz (vec_float4 a)
-{
- return __builtin_spu_cntlz_2 (a);
-}
-static inline vec_uint4
-spu_gather (vec_int4 a)
-{
- return __builtin_spu_gather_0 (a);
-}
-static inline vec_uint4
-spu_gather (vec_uint4 a)
-{
- return __builtin_spu_gather_1 (a);
-}
-static inline vec_uint4
-spu_gather (vec_short8 a)
-{
- return __builtin_spu_gather_2 (a);
-}
-static inline vec_uint4
-spu_gather (vec_ushort8 a)
-{
- return __builtin_spu_gather_3 (a);
-}
-static inline vec_uint4
-spu_gather (vec_char16 a)
-{
- return __builtin_spu_gather_4 (a);
-}
-static inline vec_uint4
-spu_gather (vec_uchar16 a)
-{
- return __builtin_spu_gather_5 (a);
-}
-static inline vec_uint4
-spu_gather (vec_float4 a)
-{
- return __builtin_spu_gather_6 (a);
-}
-static inline vec_uchar16
-spu_maskb (unsigned short a)
-{
- return __builtin_spu_maskb_0 (a);
-}
-static inline vec_uchar16
-spu_maskb (short a)
-{
- return __builtin_spu_maskb_1 (a);
-}
-static inline vec_uchar16
-spu_maskb (unsigned int a)
-{
- return __builtin_spu_maskb_2 (a);
-}
-static inline vec_uchar16
-spu_maskb (int a)
-{
- return __builtin_spu_maskb_3 (a);
-}
-static inline vec_ushort8
-spu_maskh (unsigned char a)
-{
- return __builtin_spu_maskh_0 (a);
-}
-static inline vec_ushort8
-spu_maskh (signed char a)
-{
- return __builtin_spu_maskh_1 (a);
-}
-static inline vec_ushort8
-spu_maskh (char a)
-{
- return __builtin_spu_maskh_1 (a);
-}
-static inline vec_ushort8
-spu_maskh (unsigned short a)
-{
- return __builtin_spu_maskh_2 (a);
-}
-static inline vec_ushort8
-spu_maskh (short a)
-{
- return __builtin_spu_maskh_3 (a);
-}
-static inline vec_ushort8
-spu_maskh (unsigned int a)
-{
- return __builtin_spu_maskh_4 (a);
-}
-static inline vec_ushort8
-spu_maskh (int a)
-{
- return __builtin_spu_maskh_5 (a);
-}
-static inline vec_uint4
-spu_maskw (unsigned char a)
-{
- return __builtin_spu_maskw_0 (a);
-}
-static inline vec_uint4
-spu_maskw (signed char a)
-{
- return __builtin_spu_maskw_1 (a);
-}
-static inline vec_uint4
-spu_maskw (char a)
-{
- return __builtin_spu_maskw_1 (a);
-}
-static inline vec_uint4
-spu_maskw (unsigned short a)
-{
- return __builtin_spu_maskw_2 (a);
-}
-static inline vec_uint4
-spu_maskw (short a)
-{
- return __builtin_spu_maskw_3 (a);
-}
-static inline vec_uint4
-spu_maskw (unsigned int a)
-{
- return __builtin_spu_maskw_4 (a);
-}
-static inline vec_uint4
-spu_maskw (int a)
-{
- return __builtin_spu_maskw_5 (a);
-}
-static inline vec_llong2
-spu_sel (vec_llong2 a, vec_llong2 b, vec_ullong2 c)
-{
- return __builtin_spu_sel_0 (a, b, c);
-}
-static inline vec_ullong2
-spu_sel (vec_ullong2 a, vec_ullong2 b, vec_ullong2 c)
-{
- return __builtin_spu_sel_1 (a, b, c);
-}
-static inline vec_int4
-spu_sel (vec_int4 a, vec_int4 b, vec_uint4 c)
-{
- return __builtin_spu_sel_2 (a, b, c);
-}
-static inline vec_uint4
-spu_sel (vec_uint4 a, vec_uint4 b, vec_uint4 c)
-{
- return __builtin_spu_sel_3 (a, b, c);
-}
-static inline vec_short8
-spu_sel (vec_short8 a, vec_short8 b, vec_ushort8 c)
-{
- return __builtin_spu_sel_4 (a, b, c);
-}
-static inline vec_ushort8
-spu_sel (vec_ushort8 a, vec_ushort8 b, vec_ushort8 c)
-{
- return __builtin_spu_sel_5 (a, b, c);
-}
-static inline vec_char16
-spu_sel (vec_char16 a, vec_char16 b, vec_uchar16 c)
-{
- return __builtin_spu_sel_6 (a, b, c);
-}
-static inline vec_uchar16
-spu_sel (vec_uchar16 a, vec_uchar16 b, vec_uchar16 c)
-{
- return __builtin_spu_sel_7 (a, b, c);
-}
-static inline vec_float4
-spu_sel (vec_float4 a, vec_float4 b, vec_uint4 c)
-{
- return __builtin_spu_sel_8 (a, b, c);
-}
-static inline vec_double2
-spu_sel (vec_double2 a, vec_double2 b, vec_ullong2 c)
-{
- return __builtin_spu_sel_9 (a, b, c);
-}
-static inline vec_uchar16
-spu_shuffle (vec_uchar16 a, vec_uchar16 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_0 (a, b, c);
-}
-static inline vec_char16
-spu_shuffle (vec_char16 a, vec_char16 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_1 (a, b, c);
-}
-static inline vec_ushort8
-spu_shuffle (vec_ushort8 a, vec_ushort8 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_2 (a, b, c);
-}
-static inline vec_short8
-spu_shuffle (vec_short8 a, vec_short8 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_3 (a, b, c);
-}
-static inline vec_uint4
-spu_shuffle (vec_uint4 a, vec_uint4 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_4 (a, b, c);
-}
-static inline vec_int4
-spu_shuffle (vec_int4 a, vec_int4 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_5 (a, b, c);
-}
-static inline vec_ullong2
-spu_shuffle (vec_ullong2 a, vec_ullong2 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_6 (a, b, c);
-}
-static inline vec_llong2
-spu_shuffle (vec_llong2 a, vec_llong2 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_7 (a, b, c);
-}
-static inline vec_float4
-spu_shuffle (vec_float4 a, vec_float4 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_8 (a, b, c);
-}
-static inline vec_double2
-spu_shuffle (vec_double2 a, vec_double2 b, vec_uchar16 c)
-{
- return __builtin_spu_shuffle_9 (a, b, c);
-}
-static inline vec_uchar16
-spu_and (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_and_0 (a, b);
-}
-static inline vec_char16
-spu_and (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_and_1 (a, b);
-}
-static inline vec_ushort8
-spu_and (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_and_2 (a, b);
-}
-static inline vec_short8
-spu_and (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_and_3 (a, b);
-}
-static inline vec_uint4
-spu_and (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_and_4 (a, b);
-}
-static inline vec_int4
-spu_and (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_and_5 (a, b);
-}
-static inline vec_ullong2
-spu_and (vec_ullong2 a, vec_ullong2 b)
-{
- return __builtin_spu_and_6 (a, b);
-}
-static inline vec_llong2
-spu_and (vec_llong2 a, vec_llong2 b)
-{
- return __builtin_spu_and_7 (a, b);
-}
-static inline vec_float4
-spu_and (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_and_8 (a, b);
-}
-static inline vec_double2
-spu_and (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_and_9 (a, b);
-}
-static inline vec_uchar16
-spu_and (vec_uchar16 a, unsigned char b)
-{
- return __builtin_spu_and_10 (a, b);
-}
-static inline vec_char16
-spu_and (vec_char16 a, signed char b)
-{
- return __builtin_spu_and_11 (a, b);
-}
-static inline vec_ushort8
-spu_and (vec_ushort8 a, unsigned short b)
-{
- return __builtin_spu_and_12 (a, b);
-}
-static inline vec_short8
-spu_and (vec_short8 a, short b)
-{
- return __builtin_spu_and_13 (a, b);
-}
-static inline vec_uint4
-spu_and (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_and_14 (a, b);
-}
-static inline vec_int4
-spu_and (vec_int4 a, int b)
-{
- return __builtin_spu_and_15 (a, b);
-}
-static inline vec_llong2
-spu_andc (vec_llong2 a, vec_llong2 b)
-{
- return __builtin_spu_andc_0 (a, b);
-}
-static inline vec_ullong2
-spu_andc (vec_ullong2 a, vec_ullong2 b)
-{
- return __builtin_spu_andc_1 (a, b);
-}
-static inline vec_int4
-spu_andc (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_andc_2 (a, b);
-}
-static inline vec_uint4
-spu_andc (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_andc_3 (a, b);
-}
-static inline vec_short8
-spu_andc (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_andc_4 (a, b);
-}
-static inline vec_ushort8
-spu_andc (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_andc_5 (a, b);
-}
-static inline vec_char16
-spu_andc (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_andc_6 (a, b);
-}
-static inline vec_uchar16
-spu_andc (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_andc_7 (a, b);
-}
-static inline vec_float4
-spu_andc (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_andc_8 (a, b);
-}
-static inline vec_double2
-spu_andc (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_andc_9 (a, b);
-}
-static inline vec_llong2
-spu_eqv (vec_llong2 a, vec_llong2 b)
-{
- return __builtin_spu_eqv_0 (a, b);
-}
-static inline vec_ullong2
-spu_eqv (vec_ullong2 a, vec_ullong2 b)
-{
- return __builtin_spu_eqv_1 (a, b);
-}
-static inline vec_int4
-spu_eqv (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_eqv_2 (a, b);
-}
-static inline vec_uint4
-spu_eqv (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_eqv_3 (a, b);
-}
-static inline vec_short8
-spu_eqv (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_eqv_4 (a, b);
-}
-static inline vec_ushort8
-spu_eqv (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_eqv_5 (a, b);
-}
-static inline vec_char16
-spu_eqv (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_eqv_6 (a, b);
-}
-static inline vec_uchar16
-spu_eqv (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_eqv_7 (a, b);
-}
-static inline vec_float4
-spu_eqv (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_eqv_8 (a, b);
-}
-static inline vec_double2
-spu_eqv (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_eqv_9 (a, b);
-}
-static inline vec_llong2
-spu_nand (vec_llong2 a, vec_llong2 b)
-{
- return __builtin_spu_nand_0 (a, b);
-}
-static inline vec_ullong2
-spu_nand (vec_ullong2 a, vec_ullong2 b)
-{
- return __builtin_spu_nand_1 (a, b);
-}
-static inline vec_int4
-spu_nand (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_nand_2 (a, b);
-}
-static inline vec_uint4
-spu_nand (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_nand_3 (a, b);
-}
-static inline vec_short8
-spu_nand (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_nand_4 (a, b);
-}
-static inline vec_ushort8
-spu_nand (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_nand_5 (a, b);
-}
-static inline vec_char16
-spu_nand (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_nand_6 (a, b);
-}
-static inline vec_uchar16
-spu_nand (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_nand_7 (a, b);
-}
-static inline vec_float4
-spu_nand (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_nand_8 (a, b);
-}
-static inline vec_double2
-spu_nand (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_nand_9 (a, b);
-}
-static inline vec_llong2
-spu_nor (vec_llong2 a, vec_llong2 b)
-{
- return __builtin_spu_nor_0 (a, b);
-}
-static inline vec_ullong2
-spu_nor (vec_ullong2 a, vec_ullong2 b)
-{
- return __builtin_spu_nor_1 (a, b);
-}
-static inline vec_int4
-spu_nor (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_nor_2 (a, b);
-}
-static inline vec_uint4
-spu_nor (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_nor_3 (a, b);
-}
-static inline vec_short8
-spu_nor (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_nor_4 (a, b);
-}
-static inline vec_ushort8
-spu_nor (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_nor_5 (a, b);
-}
-static inline vec_char16
-spu_nor (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_nor_6 (a, b);
-}
-static inline vec_uchar16
-spu_nor (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_nor_7 (a, b);
-}
-static inline vec_float4
-spu_nor (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_nor_8 (a, b);
-}
-static inline vec_double2
-spu_nor (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_nor_9 (a, b);
-}
-static inline vec_uchar16
-spu_or (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_or_0 (a, b);
-}
-static inline vec_char16
-spu_or (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_or_1 (a, b);
-}
-static inline vec_ushort8
-spu_or (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_or_2 (a, b);
-}
-static inline vec_short8
-spu_or (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_or_3 (a, b);
-}
-static inline vec_uint4
-spu_or (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_or_4 (a, b);
-}
-static inline vec_int4
-spu_or (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_or_5 (a, b);
-}
-static inline vec_ullong2
-spu_or (vec_ullong2 a, vec_ullong2 b)
-{
- return __builtin_spu_or_6 (a, b);
-}
-static inline vec_llong2
-spu_or (vec_llong2 a, vec_llong2 b)
-{
- return __builtin_spu_or_7 (a, b);
-}
-static inline vec_float4
-spu_or (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_or_8 (a, b);
-}
-static inline vec_double2
-spu_or (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_or_9 (a, b);
-}
-static inline vec_uchar16
-spu_or (vec_uchar16 a, unsigned char b)
-{
- return __builtin_spu_or_10 (a, b);
-}
-static inline vec_char16
-spu_or (vec_char16 a, signed char b)
-{
- return __builtin_spu_or_11 (a, b);
-}
-static inline vec_ushort8
-spu_or (vec_ushort8 a, unsigned short b)
-{
- return __builtin_spu_or_12 (a, b);
-}
-static inline vec_short8
-spu_or (vec_short8 a, short b)
-{
- return __builtin_spu_or_13 (a, b);
-}
-static inline vec_uint4
-spu_or (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_or_14 (a, b);
-}
-static inline vec_int4
-spu_or (vec_int4 a, int b)
-{
- return __builtin_spu_or_15 (a, b);
-}
-static inline vec_llong2
-spu_orc (vec_llong2 a, vec_llong2 b)
-{
- return __builtin_spu_orc_0 (a, b);
-}
-static inline vec_ullong2
-spu_orc (vec_ullong2 a, vec_ullong2 b)
-{
- return __builtin_spu_orc_1 (a, b);
-}
-static inline vec_int4
-spu_orc (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_orc_2 (a, b);
-}
-static inline vec_uint4
-spu_orc (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_orc_3 (a, b);
-}
-static inline vec_short8
-spu_orc (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_orc_4 (a, b);
-}
-static inline vec_ushort8
-spu_orc (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_orc_5 (a, b);
-}
-static inline vec_char16
-spu_orc (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_orc_6 (a, b);
-}
-static inline vec_uchar16
-spu_orc (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_orc_7 (a, b);
-}
-static inline vec_float4
-spu_orc (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_orc_8 (a, b);
-}
-static inline vec_double2
-spu_orc (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_orc_9 (a, b);
-}
-static inline vec_int4
-spu_orx (vec_int4 a)
-{
- return __builtin_spu_orx_0 (a);
-}
-static inline vec_uint4
-spu_orx (vec_uint4 a)
-{
- return __builtin_spu_orx_1 (a);
-}
-static inline vec_uchar16
-spu_xor (vec_uchar16 a, vec_uchar16 b)
-{
- return __builtin_spu_xor_0 (a, b);
-}
-static inline vec_char16
-spu_xor (vec_char16 a, vec_char16 b)
-{
- return __builtin_spu_xor_1 (a, b);
-}
-static inline vec_ushort8
-spu_xor (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_xor_2 (a, b);
-}
-static inline vec_short8
-spu_xor (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_xor_3 (a, b);
-}
-static inline vec_uint4
-spu_xor (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_xor_4 (a, b);
-}
-static inline vec_int4
-spu_xor (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_xor_5 (a, b);
-}
-static inline vec_ullong2
-spu_xor (vec_ullong2 a, vec_ullong2 b)
-{
- return __builtin_spu_xor_6 (a, b);
-}
-static inline vec_llong2
-spu_xor (vec_llong2 a, vec_llong2 b)
-{
- return __builtin_spu_xor_7 (a, b);
-}
-static inline vec_float4
-spu_xor (vec_float4 a, vec_float4 b)
-{
- return __builtin_spu_xor_8 (a, b);
-}
-static inline vec_double2
-spu_xor (vec_double2 a, vec_double2 b)
-{
- return __builtin_spu_xor_9 (a, b);
-}
-static inline vec_uchar16
-spu_xor (vec_uchar16 a, unsigned char b)
-{
- return __builtin_spu_xor_10 (a, b);
-}
-static inline vec_char16
-spu_xor (vec_char16 a, signed char b)
-{
- return __builtin_spu_xor_11 (a, b);
-}
-static inline vec_ushort8
-spu_xor (vec_ushort8 a, unsigned short b)
-{
- return __builtin_spu_xor_12 (a, b);
-}
-static inline vec_short8
-spu_xor (vec_short8 a, short b)
-{
- return __builtin_spu_xor_13 (a, b);
-}
-static inline vec_uint4
-spu_xor (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_xor_14 (a, b);
-}
-static inline vec_int4
-spu_xor (vec_int4 a, int b)
-{
- return __builtin_spu_xor_15 (a, b);
-}
-static inline vec_ushort8
-spu_rl (vec_ushort8 a, vec_short8 b)
-{
- return __builtin_spu_rl_0 (a, b);
-}
-static inline vec_short8
-spu_rl (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_rl_1 (a, b);
-}
-static inline vec_uint4
-spu_rl (vec_uint4 a, vec_int4 b)
-{
- return __builtin_spu_rl_2 (a, b);
-}
-static inline vec_int4
-spu_rl (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_rl_3 (a, b);
-}
-static inline vec_ushort8
-spu_rl (vec_ushort8 a, short b)
-{
- return __builtin_spu_rl_4 (a, b);
-}
-static inline vec_short8
-spu_rl (vec_short8 a, short b)
-{
- return __builtin_spu_rl_5 (a, b);
-}
-static inline vec_uint4
-spu_rl (vec_uint4 a, int b)
-{
- return __builtin_spu_rl_6 (a, b);
-}
-static inline vec_int4
-spu_rl (vec_int4 a, int b)
-{
- return __builtin_spu_rl_7 (a, b);
-}
-static inline vec_uchar16
-spu_rlqw (vec_uchar16 a, int b)
-{
- return __builtin_spu_rlqw_0 (a, b);
-}
-static inline vec_char16
-spu_rlqw (vec_char16 a, int b)
-{
- return __builtin_spu_rlqw_1 (a, b);
-}
-static inline vec_ushort8
-spu_rlqw (vec_ushort8 a, int b)
-{
- return __builtin_spu_rlqw_2 (a, b);
-}
-static inline vec_short8
-spu_rlqw (vec_short8 a, int b)
-{
- return __builtin_spu_rlqw_3 (a, b);
-}
-static inline vec_uint4
-spu_rlqw (vec_uint4 a, int b)
-{
- return __builtin_spu_rlqw_4 (a, b);
-}
-static inline vec_int4
-spu_rlqw (vec_int4 a, int b)
-{
- return __builtin_spu_rlqw_5 (a, b);
-}
-static inline vec_ullong2
-spu_rlqw (vec_ullong2 a, int b)
-{
- return __builtin_spu_rlqw_6 (a, b);
-}
-static inline vec_llong2
-spu_rlqw (vec_llong2 a, int b)
-{
- return __builtin_spu_rlqw_7 (a, b);
-}
-static inline vec_float4
-spu_rlqw (vec_float4 a, int b)
-{
- return __builtin_spu_rlqw_8 (a, b);
-}
-static inline vec_double2
-spu_rlqw (vec_double2 a, int b)
-{
- return __builtin_spu_rlqw_9 (a, b);
-}
-static inline vec_uchar16
-spu_rlqwbyte (vec_uchar16 a, int b)
-{
- return __builtin_spu_rlqwbyte_0 (a, b);
-}
-static inline vec_char16
-spu_rlqwbyte (vec_char16 a, int b)
-{
- return __builtin_spu_rlqwbyte_1 (a, b);
-}
-static inline vec_ushort8
-spu_rlqwbyte (vec_ushort8 a, int b)
-{
- return __builtin_spu_rlqwbyte_2 (a, b);
-}
-static inline vec_short8
-spu_rlqwbyte (vec_short8 a, int b)
-{
- return __builtin_spu_rlqwbyte_3 (a, b);
-}
-static inline vec_uint4
-spu_rlqwbyte (vec_uint4 a, int b)
-{
- return __builtin_spu_rlqwbyte_4 (a, b);
-}
-static inline vec_int4
-spu_rlqwbyte (vec_int4 a, int b)
-{
- return __builtin_spu_rlqwbyte_5 (a, b);
-}
-static inline vec_ullong2
-spu_rlqwbyte (vec_ullong2 a, int b)
-{
- return __builtin_spu_rlqwbyte_6 (a, b);
-}
-static inline vec_llong2
-spu_rlqwbyte (vec_llong2 a, int b)
-{
- return __builtin_spu_rlqwbyte_7 (a, b);
-}
-static inline vec_float4
-spu_rlqwbyte (vec_float4 a, int b)
-{
- return __builtin_spu_rlqwbyte_8 (a, b);
-}
-static inline vec_double2
-spu_rlqwbyte (vec_double2 a, int b)
-{
- return __builtin_spu_rlqwbyte_9 (a, b);
-}
-static inline vec_uchar16
-spu_rlqwbytebc (vec_uchar16 a, int b)
-{
- return __builtin_spu_rlqwbytebc_0 (a, b);
-}
-static inline vec_char16
-spu_rlqwbytebc (vec_char16 a, int b)
-{
- return __builtin_spu_rlqwbytebc_1 (a, b);
-}
-static inline vec_ushort8
-spu_rlqwbytebc (vec_ushort8 a, int b)
-{
- return __builtin_spu_rlqwbytebc_2 (a, b);
-}
-static inline vec_short8
-spu_rlqwbytebc (vec_short8 a, int b)
-{
- return __builtin_spu_rlqwbytebc_3 (a, b);
-}
-static inline vec_uint4
-spu_rlqwbytebc (vec_uint4 a, int b)
-{
- return __builtin_spu_rlqwbytebc_4 (a, b);
-}
-static inline vec_int4
-spu_rlqwbytebc (vec_int4 a, int b)
-{
- return __builtin_spu_rlqwbytebc_5 (a, b);
-}
-static inline vec_ullong2
-spu_rlqwbytebc (vec_ullong2 a, int b)
-{
- return __builtin_spu_rlqwbytebc_6 (a, b);
-}
-static inline vec_llong2
-spu_rlqwbytebc (vec_llong2 a, int b)
-{
- return __builtin_spu_rlqwbytebc_7 (a, b);
-}
-static inline vec_float4
-spu_rlqwbytebc (vec_float4 a, int b)
-{
- return __builtin_spu_rlqwbytebc_8 (a, b);
-}
-static inline vec_double2
-spu_rlqwbytebc (vec_double2 a, int b)
-{
- return __builtin_spu_rlqwbytebc_9 (a, b);
-}
-static inline vec_ushort8
-spu_rlmask (vec_ushort8 a, vec_short8 b)
-{
- return __builtin_spu_rlmask_0 (a, b);
-}
-static inline vec_short8
-spu_rlmask (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_rlmask_1 (a, b);
-}
-static inline vec_uint4
-spu_rlmask (vec_uint4 a, vec_int4 b)
-{
- return __builtin_spu_rlmask_2 (a, b);
-}
-static inline vec_int4
-spu_rlmask (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_rlmask_3 (a, b);
-}
-static inline vec_ushort8
-spu_rlmask (vec_ushort8 a, int b)
-{
- return __builtin_spu_rlmask_4 (a, b);
-}
-static inline vec_short8
-spu_rlmask (vec_short8 a, int b)
-{
- return __builtin_spu_rlmask_5 (a, b);
-}
-static inline vec_uint4
-spu_rlmask (vec_uint4 a, int b)
-{
- return __builtin_spu_rlmask_6 (a, b);
-}
-static inline vec_int4
-spu_rlmask (vec_int4 a, int b)
-{
- return __builtin_spu_rlmask_7 (a, b);
-}
-static inline vec_ushort8
-spu_rlmaska (vec_ushort8 a, vec_short8 b)
-{
- return __builtin_spu_rlmaska_0 (a, b);
-}
-static inline vec_short8
-spu_rlmaska (vec_short8 a, vec_short8 b)
-{
- return __builtin_spu_rlmaska_1 (a, b);
-}
-static inline vec_uint4
-spu_rlmaska (vec_uint4 a, vec_int4 b)
-{
- return __builtin_spu_rlmaska_2 (a, b);
-}
-static inline vec_int4
-spu_rlmaska (vec_int4 a, vec_int4 b)
-{
- return __builtin_spu_rlmaska_3 (a, b);
-}
-static inline vec_ushort8
-spu_rlmaska (vec_ushort8 a, int b)
-{
- return __builtin_spu_rlmaska_4 (a, b);
-}
-static inline vec_short8
-spu_rlmaska (vec_short8 a, int b)
-{
- return __builtin_spu_rlmaska_5 (a, b);
-}
-static inline vec_uint4
-spu_rlmaska (vec_uint4 a, int b)
-{
- return __builtin_spu_rlmaska_6 (a, b);
-}
-static inline vec_int4
-spu_rlmaska (vec_int4 a, int b)
-{
- return __builtin_spu_rlmaska_7 (a, b);
-}
-static inline vec_uchar16
-spu_rlmaskqw (vec_uchar16 a, int b)
-{
- return __builtin_spu_rlmaskqw_0 (a, b);
-}
-static inline vec_char16
-spu_rlmaskqw (vec_char16 a, int b)
-{
- return __builtin_spu_rlmaskqw_1 (a, b);
-}
-static inline vec_ushort8
-spu_rlmaskqw (vec_ushort8 a, int b)
-{
- return __builtin_spu_rlmaskqw_2 (a, b);
-}
-static inline vec_short8
-spu_rlmaskqw (vec_short8 a, int b)
-{
- return __builtin_spu_rlmaskqw_3 (a, b);
-}
-static inline vec_uint4
-spu_rlmaskqw (vec_uint4 a, int b)
-{
- return __builtin_spu_rlmaskqw_4 (a, b);
-}
-static inline vec_int4
-spu_rlmaskqw (vec_int4 a, int b)
-{
- return __builtin_spu_rlmaskqw_5 (a, b);
-}
-static inline vec_ullong2
-spu_rlmaskqw (vec_ullong2 a, int b)
-{
- return __builtin_spu_rlmaskqw_6 (a, b);
-}
-static inline vec_llong2
-spu_rlmaskqw (vec_llong2 a, int b)
-{
- return __builtin_spu_rlmaskqw_7 (a, b);
-}
-static inline vec_float4
-spu_rlmaskqw (vec_float4 a, int b)
-{
- return __builtin_spu_rlmaskqw_8 (a, b);
-}
-static inline vec_double2
-spu_rlmaskqw (vec_double2 a, int b)
-{
- return __builtin_spu_rlmaskqw_9 (a, b);
-}
-static inline vec_uchar16
-spu_rlmaskqwbyte (vec_uchar16 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_0 (a, b);
-}
-static inline vec_char16
-spu_rlmaskqwbyte (vec_char16 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_1 (a, b);
-}
-static inline vec_ushort8
-spu_rlmaskqwbyte (vec_ushort8 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_2 (a, b);
-}
-static inline vec_short8
-spu_rlmaskqwbyte (vec_short8 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_3 (a, b);
-}
-static inline vec_uint4
-spu_rlmaskqwbyte (vec_uint4 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_4 (a, b);
-}
-static inline vec_int4
-spu_rlmaskqwbyte (vec_int4 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_5 (a, b);
-}
-static inline vec_ullong2
-spu_rlmaskqwbyte (vec_ullong2 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_6 (a, b);
-}
-static inline vec_llong2
-spu_rlmaskqwbyte (vec_llong2 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_7 (a, b);
-}
-static inline vec_float4
-spu_rlmaskqwbyte (vec_float4 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_8 (a, b);
-}
-static inline vec_double2
-spu_rlmaskqwbyte (vec_double2 a, int b)
-{
- return __builtin_spu_rlmaskqwbyte_9 (a, b);
-}
-static inline vec_uchar16
-spu_rlmaskqwbytebc (vec_uchar16 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_0 (a, b);
-}
-static inline vec_char16
-spu_rlmaskqwbytebc (vec_char16 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_1 (a, b);
-}
-static inline vec_ushort8
-spu_rlmaskqwbytebc (vec_ushort8 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_2 (a, b);
-}
-static inline vec_short8
-spu_rlmaskqwbytebc (vec_short8 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_3 (a, b);
-}
-static inline vec_uint4
-spu_rlmaskqwbytebc (vec_uint4 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_4 (a, b);
-}
-static inline vec_int4
-spu_rlmaskqwbytebc (vec_int4 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_5 (a, b);
-}
-static inline vec_ullong2
-spu_rlmaskqwbytebc (vec_ullong2 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_6 (a, b);
-}
-static inline vec_llong2
-spu_rlmaskqwbytebc (vec_llong2 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_7 (a, b);
-}
-static inline vec_float4
-spu_rlmaskqwbytebc (vec_float4 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_8 (a, b);
-}
-static inline vec_double2
-spu_rlmaskqwbytebc (vec_double2 a, int b)
-{
- return __builtin_spu_rlmaskqwbytebc_9 (a, b);
-}
-static inline vec_ushort8
-spu_sl (vec_ushort8 a, vec_ushort8 b)
-{
- return __builtin_spu_sl_0 (a, b);
-}
-static inline vec_short8
-spu_sl (vec_short8 a, vec_ushort8 b)
-{
- return __builtin_spu_sl_1 (a, b);
-}
-static inline vec_uint4
-spu_sl (vec_uint4 a, vec_uint4 b)
-{
- return __builtin_spu_sl_2 (a, b);
-}
-static inline vec_int4
-spu_sl (vec_int4 a, vec_uint4 b)
-{
- return __builtin_spu_sl_3 (a, b);
-}
-static inline vec_ushort8
-spu_sl (vec_ushort8 a, unsigned int b)
-{
- return __builtin_spu_sl_4 (a, b);
-}
-static inline vec_short8
-spu_sl (vec_short8 a, unsigned int b)
-{
- return __builtin_spu_sl_5 (a, b);
-}
-static inline vec_uint4
-spu_sl (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_sl_6 (a, b);
-}
-static inline vec_int4
-spu_sl (vec_int4 a, unsigned int b)
-{
- return __builtin_spu_sl_7 (a, b);
-}
-static inline vec_llong2
-spu_slqw (vec_llong2 a, unsigned int b)
-{
- return __builtin_spu_slqw_0 (a, b);
-}
-static inline vec_ullong2
-spu_slqw (vec_ullong2 a, unsigned int b)
-{
- return __builtin_spu_slqw_1 (a, b);
-}
-static inline vec_int4
-spu_slqw (vec_int4 a, unsigned int b)
-{
- return __builtin_spu_slqw_2 (a, b);
-}
-static inline vec_uint4
-spu_slqw (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_slqw_3 (a, b);
-}
-static inline vec_short8
-spu_slqw (vec_short8 a, unsigned int b)
-{
- return __builtin_spu_slqw_4 (a, b);
-}
-static inline vec_ushort8
-spu_slqw (vec_ushort8 a, unsigned int b)
-{
- return __builtin_spu_slqw_5 (a, b);
-}
-static inline vec_char16
-spu_slqw (vec_char16 a, unsigned int b)
-{
- return __builtin_spu_slqw_6 (a, b);
-}
-static inline vec_uchar16
-spu_slqw (vec_uchar16 a, unsigned int b)
-{
- return __builtin_spu_slqw_7 (a, b);
-}
-static inline vec_float4
-spu_slqw (vec_float4 a, unsigned int b)
-{
- return __builtin_spu_slqw_8 (a, b);
-}
-static inline vec_double2
-spu_slqw (vec_double2 a, unsigned int b)
-{
- return __builtin_spu_slqw_9 (a, b);
-}
-static inline vec_llong2
-spu_slqwbyte (vec_llong2 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_0 (a, b);
-}
-static inline vec_ullong2
-spu_slqwbyte (vec_ullong2 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_1 (a, b);
-}
-static inline vec_int4
-spu_slqwbyte (vec_int4 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_2 (a, b);
-}
-static inline vec_uint4
-spu_slqwbyte (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_3 (a, b);
-}
-static inline vec_short8
-spu_slqwbyte (vec_short8 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_4 (a, b);
-}
-static inline vec_ushort8
-spu_slqwbyte (vec_ushort8 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_5 (a, b);
-}
-static inline vec_char16
-spu_slqwbyte (vec_char16 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_6 (a, b);
-}
-static inline vec_uchar16
-spu_slqwbyte (vec_uchar16 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_7 (a, b);
-}
-static inline vec_float4
-spu_slqwbyte (vec_float4 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_8 (a, b);
-}
-static inline vec_double2
-spu_slqwbyte (vec_double2 a, unsigned int b)
-{
- return __builtin_spu_slqwbyte_9 (a, b);
-}
-static inline vec_llong2
-spu_slqwbytebc (vec_llong2 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_0 (a, b);
-}
-static inline vec_ullong2
-spu_slqwbytebc (vec_ullong2 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_1 (a, b);
-}
-static inline vec_int4
-spu_slqwbytebc (vec_int4 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_2 (a, b);
-}
-static inline vec_uint4
-spu_slqwbytebc (vec_uint4 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_3 (a, b);
-}
-static inline vec_short8
-spu_slqwbytebc (vec_short8 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_4 (a, b);
-}
-static inline vec_ushort8
-spu_slqwbytebc (vec_ushort8 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_5 (a, b);
-}
-static inline vec_char16
-spu_slqwbytebc (vec_char16 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_6 (a, b);
-}
-static inline vec_uchar16
-spu_slqwbytebc (vec_uchar16 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_7 (a, b);
-}
-static inline vec_float4
-spu_slqwbytebc (vec_float4 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_8 (a, b);
-}
-static inline vec_double2
-spu_slqwbytebc (vec_double2 a, unsigned int b)
-{
- return __builtin_spu_slqwbytebc_9 (a, b);
-}
-static inline vec_uchar16
-spu_splats (unsigned char a)
-{
- return __builtin_spu_splats_0 (a);
-}
-static inline vec_char16
-spu_splats (signed char a)
-{
- return __builtin_spu_splats_1 (a);
-}
-static inline vec_char16
-spu_splats (char a)
-{
- return __builtin_spu_splats_1 (a);
-}
-static inline vec_ushort8
-spu_splats (unsigned short a)
-{
- return __builtin_spu_splats_2 (a);
-}
-static inline vec_short8
-spu_splats (short a)
-{
- return __builtin_spu_splats_3 (a);
-}
-static inline vec_uint4
-spu_splats (unsigned int a)
-{
- return __builtin_spu_splats_4 (a);
-}
-static inline vec_int4
-spu_splats (int a)
-{
- return __builtin_spu_splats_5 (a);
-}
-static inline vec_ullong2
-spu_splats (unsigned long long a)
-{
- return __builtin_spu_splats_6 (a);
-}
-static inline vec_llong2
-spu_splats (long long a)
-{
- return __builtin_spu_splats_7 (a);
-}
-static inline vec_float4
-spu_splats (float a)
-{
- return __builtin_spu_splats_8 (a);
-}
-static inline vec_double2
-spu_splats (double a)
-{
- return __builtin_spu_splats_9 (a);
-}
-static inline unsigned char
-spu_extract (vec_uchar16 a, int b)
-{
- return __builtin_spu_extract_0 (a, b);
-}
-static inline signed char
-spu_extract (vec_char16 a, int b)
-{
- return __builtin_spu_extract_1 (a, b);
-}
-static inline unsigned short
-spu_extract (vec_ushort8 a, int b)
-{
- return __builtin_spu_extract_2 (a, b);
-}
-static inline short
-spu_extract (vec_short8 a, int b)
-{
- return __builtin_spu_extract_3 (a, b);
-}
-static inline unsigned int
-spu_extract (vec_uint4 a, int b)
-{
- return __builtin_spu_extract_4 (a, b);
-}
-static inline int
-spu_extract (vec_int4 a, int b)
-{
- return __builtin_spu_extract_5 (a, b);
-}
-static inline unsigned long long
-spu_extract (vec_ullong2 a, int b)
-{
- return __builtin_spu_extract_6 (a, b);
-}
-static inline long long
-spu_extract (vec_llong2 a, int b)
-{
- return __builtin_spu_extract_7 (a, b);
-}
-static inline float
-spu_extract (vec_float4 a, int b)
-{
- return __builtin_spu_extract_8 (a, b);
-}
-static inline double
-spu_extract (vec_double2 a, int b)
-{
- return __builtin_spu_extract_9 (a, b);
-}
-static inline vec_uchar16
-spu_insert (unsigned char a, vec_uchar16 b, int c)
-{
- return __builtin_spu_insert_0 (a, b, c);
-}
-static inline vec_char16
-spu_insert (signed char a, vec_char16 b, int c)
-{
- return __builtin_spu_insert_1 (a, b, c);
-}
-static inline vec_ushort8
-spu_insert (unsigned short a, vec_ushort8 b, int c)
-{
- return __builtin_spu_insert_2 (a, b, c);
-}
-static inline vec_short8
-spu_insert (short a, vec_short8 b, int c)
-{
- return __builtin_spu_insert_3 (a, b, c);
-}
-static inline vec_uint4
-spu_insert (unsigned int a, vec_uint4 b, int c)
-{
- return __builtin_spu_insert_4 (a, b, c);
-}
-static inline vec_int4
-spu_insert (int a, vec_int4 b, int c)
-{
- return __builtin_spu_insert_5 (a, b, c);
-}
-static inline vec_ullong2
-spu_insert (unsigned long long a, vec_ullong2 b, int c)
-{
- return __builtin_spu_insert_6 (a, b, c);
-}
-static inline vec_llong2
-spu_insert (long long a, vec_llong2 b, int c)
-{
- return __builtin_spu_insert_7 (a, b, c);
-}
-static inline vec_float4
-spu_insert (float a, vec_float4 b, int c)
-{
- return __builtin_spu_insert_8 (a, b, c);
-}
-static inline vec_double2
-spu_insert (double a, vec_double2 b, int c)
-{
- return __builtin_spu_insert_9 (a, b, c);
-}
-static inline vec_uchar16
-spu_promote (unsigned char a, int b)
-{
- return __builtin_spu_promote_0 (a, b);
-}
-static inline vec_char16
-spu_promote (signed char a, int b)
-{
- return __builtin_spu_promote_1 (a, b);
-}
-static inline vec_char16
-spu_promote (char a, int b)
-{
- return __builtin_spu_promote_1 (a, b);
-}
-static inline vec_ushort8
-spu_promote (unsigned short a, int b)
-{
- return __builtin_spu_promote_2 (a, b);
-}
-static inline vec_short8
-spu_promote (short a, int b)
-{
- return __builtin_spu_promote_3 (a, b);
-}
-static inline vec_uint4
-spu_promote (unsigned int a, int b)
-{
- return __builtin_spu_promote_4 (a, b);
-}
-static inline vec_int4
-spu_promote (int a, int b)
-{
- return __builtin_spu_promote_5 (a, b);
-}
-static inline vec_ullong2
-spu_promote (unsigned long long a, int b)
-{
- return __builtin_spu_promote_6 (a, b);
-}
-static inline vec_llong2
-spu_promote (long long a, int b)
-{
- return __builtin_spu_promote_7 (a, b);
-}
-static inline vec_float4
-spu_promote (float a, int b)
-{
- return __builtin_spu_promote_8 (a, b);
-}
-static inline vec_double2
-spu_promote (double a, int b)
-{
- return __builtin_spu_promote_9 (a, b);
-}
-#endif /* __cplusplus */
-
#ifdef __cplusplus
extern "C" {
#endif
diff --git a/gcc/config/spu/vmx2spu.h b/gcc/config/spu/vmx2spu.h
index 7328da4a9d4..0236eba1714 100644
--- a/gcc/config/spu/vmx2spu.h
+++ b/gcc/config/spu/vmx2spu.h
@@ -2155,7 +2155,7 @@ static inline vec_int4 vec_subs(vec_int4 a, vec_bint4 b)
}
-/* vec_sum4s (vector sum across partial (1/4) staturated)
+/* vec_sum4s (vector sum across partial (1/4) saturated)
* =========
*/
static inline vec_uint4 vec_sum4s(vec_uchar16 a, vec_uint4 b)
@@ -2187,7 +2187,7 @@ static inline vec_int4 vec_sum4s(vec_short8 a, vec_int4 b)
}
-/* vec_sum2s (vector sum across partial (1/2) staturated)
+/* vec_sum2s (vector sum across partial (1/2) saturated)
* =========
*/
static inline vec_int4 vec_sum2s(vec_int4 a, vec_int4 b)
@@ -2223,7 +2223,7 @@ static inline vec_int4 vec_sum2s(vec_int4 a, vec_int4 b)
}
-/* vec_sums (vector sum staturated)
+/* vec_sums (vector sum saturated)
* ========
*/
static inline vec_int4 vec_sums(vec_int4 a, vec_int4 b)
@@ -2909,7 +2909,7 @@ static inline int vec_all_ne(vec_float4 a, vec_float4 b)
}
-/* vec_all_nge (all elements not greater than or eqaul)
+/* vec_all_nge (all elements not greater than or equal)
* ===========
*/
static inline int vec_all_nge(vec_float4 a, vec_float4 b)
@@ -3385,7 +3385,7 @@ static inline int vec_any_ne(vec_float4 a, vec_float4 b)
}
-/* vec_any_nge (any elements not greater than or eqaul)
+/* vec_any_nge (any elements not greater than or equal)
* ===========
*/
static inline int vec_any_nge(vec_float4 a, vec_float4 b)
diff --git a/gcc/cp/ChangeLog b/gcc/cp/ChangeLog
index edcb3880e68..cfd933c314d 100644
--- a/gcc/cp/ChangeLog
+++ b/gcc/cp/ChangeLog
@@ -1,6 +1,34 @@
+2006-12-02 Andrew Pinski <andrew_pinski@playstation.sony.com>
+
+ PR C++/30033
+ * decl.c (cp_tree_node_structure): Handle STATIC_ASSERT.
+
+2006-12-02 Kazu Hirata <kazu@codesourcery.com>
+
+ * name-lookup.c: Follow spelling conventions.
+
+2006-12-01 Geoffrey Keating <geoffk@apple.com>
+
+ * decl.c (poplevel): Check DECL_INITIAL invariant.
+ (duplicate_decls): Preserve DECL_INITIAL when eliminating
+ a new definition in favour of an old declaration.
+ (start_preparsed_function): Define and document value of
+ DECL_INITIAL before and after routine.
+ (finish_function): Check DECL_INITIAL invariant.
+ * parser.c
+ (cp_parser_function_definition_from_specifiers_and_declarator):
+ Skip duplicate function definitions.
+
+2006-12-01 Volker Reichelt <reichelt@igpm.rwth-aachen.de>
+
+ PR c++/30022
+ * typeck.c (type_after_usual_arithmetic_conversions):
+ Fix assertion for vector types.
+ (build_binary_op): Use temporary for inner type of vector types.
+
2006-12-01 Ryan Mansfield <rmansfield@qnx.com>
- PR c++/29066
+ PR c++/29066
* typeck.c (build_binary_op): Fix pointer to member function
comparison for ptrmemfunc_vbit_in_delta targets.
diff --git a/gcc/cp/decl.c b/gcc/cp/decl.c
index 6043596b2fd..8a55e417d40 100644
--- a/gcc/cp/decl.c
+++ b/gcc/cp/decl.c
@@ -756,7 +756,12 @@ poplevel (int keep, int reverse, int functionbody)
leave_scope ();
if (functionbody)
- DECL_INITIAL (current_function_decl) = block;
+ {
+ /* The current function is being defined, so its DECL_INITIAL
+ should be error_mark_node. */
+ gcc_assert (DECL_INITIAL (current_function_decl) == error_mark_node);
+ DECL_INITIAL (current_function_decl) = block;
+ }
else if (block)
current_binding_level->blocks
= chainon (current_binding_level->blocks, block);
@@ -1632,13 +1637,15 @@ duplicate_decls (tree newdecl, tree olddecl, bool newdecl_is_friend)
}
/* If the new declaration is a definition, update the file and
- line information on the declaration. */
+ line information on the declaration, and also make
+ the old declaration the same definition. */
if (DECL_INITIAL (old_result) == NULL_TREE
&& DECL_INITIAL (new_result) != NULL_TREE)
{
DECL_SOURCE_LOCATION (olddecl)
= DECL_SOURCE_LOCATION (old_result)
= DECL_SOURCE_LOCATION (newdecl);
+ DECL_INITIAL (old_result) = DECL_INITIAL (new_result);
if (DECL_FUNCTION_TEMPLATE_P (newdecl))
DECL_ARGUMENTS (old_result)
= DECL_ARGUMENTS (new_result);
@@ -10374,7 +10381,13 @@ check_function_type (tree decl, tree current_function_parms)
For C++, we must first check whether that datum makes any sense.
For example, "class A local_a(1,2);" means that variable local_a
is an aggregate of type A, which should have a constructor
- applied to it with the argument list [1, 2]. */
+ applied to it with the argument list [1, 2].
+
+ On entry, DECL_INITIAL (decl1) should be NULL_TREE or error_mark_node,
+ or may be a BLOCK if the function has been defined previously
+ in this translation unit. On exit, DECL_INITIAL (decl1) will be
+ error_mark_node if the function has never been defined, or
+ a BLOCK if the function has been defined somewhere. */
void
start_preparsed_function (tree decl1, tree attrs, int flags)
@@ -10503,24 +10516,6 @@ start_preparsed_function (tree decl1, tree attrs, int flags)
cp_apply_type_quals_to_decl (cp_type_quals (restype), resdecl);
}
- /* Initialize RTL machinery. We cannot do this until
- CURRENT_FUNCTION_DECL and DECL_RESULT are set up. We do this
- even when processing a template; this is how we get
- CFUN set up, and our per-function variables initialized.
- FIXME factor out the non-RTL stuff. */
- bl = current_binding_level;
- allocate_struct_function (decl1);
- current_binding_level = bl;
-
- /* Even though we're inside a function body, we still don't want to
- call expand_expr to calculate the size of a variable-sized array.
- We haven't necessarily assigned RTL to all variables yet, so it's
- not safe to try to expand expressions involving them. */
- cfun->x_dont_save_pending_sizes_p = 1;
-
- /* Start the statement-tree, start the tree now. */
- DECL_SAVED_TREE (decl1) = push_stmt_list ();
-
/* Let the user know we're compiling this function. */
announce_function (decl1);
@@ -10566,9 +10561,33 @@ start_preparsed_function (tree decl1, tree attrs, int flags)
maybe_apply_pragma_weak (decl1);
}
- /* Reset these in case the call to pushdecl changed them. */
+ /* Reset this in case the call to pushdecl changed it. */
current_function_decl = decl1;
- cfun->decl = decl1;
+
+ gcc_assert (DECL_INITIAL (decl1));
+
+ /* This function may already have been parsed, in which case just
+ return; our caller will skip over the body without parsing. */
+ if (DECL_INITIAL (decl1) != error_mark_node)
+ return;
+
+ /* Initialize RTL machinery. We cannot do this until
+ CURRENT_FUNCTION_DECL and DECL_RESULT are set up. We do this
+ even when processing a template; this is how we get
+ CFUN set up, and our per-function variables initialized.
+ FIXME factor out the non-RTL stuff. */
+ bl = current_binding_level;
+ allocate_struct_function (decl1);
+ current_binding_level = bl;
+
+ /* Even though we're inside a function body, we still don't want to
+ call expand_expr to calculate the size of a variable-sized array.
+ We haven't necessarily assigned RTL to all variables yet, so it's
+ not safe to try to expand expressions involving them. */
+ cfun->x_dont_save_pending_sizes_p = 1;
+
+ /* Start the statement-tree, start the tree now. */
+ DECL_SAVED_TREE (decl1) = push_stmt_list ();
/* If we are (erroneously) defining a function that we have already
defined before, wipe out what we knew before. */
@@ -11077,6 +11096,10 @@ finish_function (int flags)
which then got a warning when stored in a ptr-to-function variable. */
gcc_assert (building_stmt_tree ());
+ /* The current function is being defined, so its DECL_INITIAL should
+ be set, and unless there's a multiple definition, it should be
+ error_mark_node. */
+ gcc_assert (DECL_INITIAL (fndecl) == error_mark_node);
/* For a cloned function, we've already got all the code we need;
there's no need to add any extra bits. */
@@ -11583,6 +11606,7 @@ cp_tree_node_structure (union lang_tree_node * t)
case TINST_LEVEL: return TS_CP_TINST_LEVEL;
case PTRMEM_CST: return TS_CP_PTRMEM;
case BASELINK: return TS_CP_BASELINK;
+ case STATIC_ASSERT: return TS_CP_STATIC_ASSERT;
default: return TS_CP_GENERIC;
}
}
diff --git a/gcc/cp/method.c b/gcc/cp/method.c
index ded0af04716..71e34f064c1 100644
--- a/gcc/cp/method.c
+++ b/gcc/cp/method.c
@@ -407,10 +407,6 @@ use_thunk (tree thunk_fndecl, bool emit_p)
}
}
- /* The back-end expects DECL_INITIAL to contain a BLOCK, so we
- create one. */
- DECL_INITIAL (thunk_fndecl) = make_node (BLOCK);
-
/* Set up cloned argument trees for the thunk. */
t = NULL_TREE;
for (a = DECL_ARGUMENTS (function); a; a = TREE_CHAIN (a))
@@ -424,17 +420,23 @@ use_thunk (tree thunk_fndecl, bool emit_p)
}
a = nreverse (t);
DECL_ARGUMENTS (thunk_fndecl) = a;
- BLOCK_VARS (DECL_INITIAL (thunk_fndecl)) = a;
if (this_adjusting
&& targetm.asm_out.can_output_mi_thunk (thunk_fndecl, fixed_offset,
virtual_value, alias))
{
const char *fnname;
+ tree fn_block;
+
current_function_decl = thunk_fndecl;
DECL_RESULT (thunk_fndecl)
= build_decl (RESULT_DECL, 0, integer_type_node);
fnname = XSTR (XEXP (DECL_RTL (thunk_fndecl), 0), 0);
+ /* The back-end expects DECL_INITIAL to contain a BLOCK, so we
+ create one. */
+ fn_block = make_node (BLOCK);
+ BLOCK_VARS (fn_block) = a;
+ DECL_INITIAL (thunk_fndecl) = fn_block;
init_function_start (thunk_fndecl);
current_function_is_thunk = 1;
assemble_start_function (thunk_fndecl, fnname);
diff --git a/gcc/cp/name-lookup.c b/gcc/cp/name-lookup.c
index 1eb8f5d90b2..d88b8dfb3c6 100644
--- a/gcc/cp/name-lookup.c
+++ b/gcc/cp/name-lookup.c
@@ -61,7 +61,7 @@ tree global_namespace;
unit. */
static GTY(()) tree anonymous_namespace_name;
-/* Initialise anonymous_namespace_name if necessary, and return it. */
+/* Initialize anonymous_namespace_name if necessary, and return it. */
static tree
get_anonymous_namespace_name(void)
diff --git a/gcc/cp/parser.c b/gcc/cp/parser.c
index ad4d454baaf..3ed74976242 100644
--- a/gcc/cp/parser.c
+++ b/gcc/cp/parser.c
@@ -15583,6 +15583,16 @@ cp_parser_function_definition_from_specifiers_and_declarator
cp_parser_skip_to_end_of_block_or_statement (parser);
fn = error_mark_node;
}
+ else if (DECL_INITIAL (current_function_decl) != error_mark_node)
+ {
+ /* Seen already, skip it. An error message has already been output. */
+ cp_parser_skip_to_end_of_block_or_statement (parser);
+ fn = current_function_decl;
+ current_function_decl = NULL_TREE;
+ /* If this is a function from a class, pop the nested class. */
+ if (current_class_name)
+ pop_nested_class ();
+ }
else
fn = cp_parser_function_definition_after_declarator (parser,
/*inline_p=*/false);
diff --git a/gcc/cp/typeck.c b/gcc/cp/typeck.c
index c5c9f38bbb9..1acd1ffdecf 100644
--- a/gcc/cp/typeck.c
+++ b/gcc/cp/typeck.c
@@ -262,7 +262,7 @@ type_after_usual_arithmetic_conversions (tree t1, tree t2)
|| TREE_CODE (t1) == ENUMERAL_TYPE);
gcc_assert (ARITHMETIC_TYPE_P (t2)
|| TREE_CODE (t2) == COMPLEX_TYPE
- || TREE_CODE (t1) == VECTOR_TYPE
+ || TREE_CODE (t2) == VECTOR_TYPE
|| TREE_CODE (t2) == ENUMERAL_TYPE);
/* In what follows, we slightly generalize the rules given in [expr] so
@@ -3093,17 +3093,19 @@ build_binary_op (enum tree_code code, tree orig_op0, tree orig_op1,
&& (code1 == INTEGER_TYPE || code1 == REAL_TYPE
|| code1 == COMPLEX_TYPE || code1 == VECTOR_TYPE))
{
+ enum tree_code tcode0 = code0, tcode1 = code1;
+
if (TREE_CODE (op1) == INTEGER_CST && integer_zerop (op1))
warning (OPT_Wdiv_by_zero, "division by zero in %<%E / 0%>", op0);
else if (TREE_CODE (op1) == REAL_CST && real_zerop (op1))
warning (OPT_Wdiv_by_zero, "division by zero in %<%E / 0.%>", op0);
- if (code0 == COMPLEX_TYPE || code0 == VECTOR_TYPE)
- code0 = TREE_CODE (TREE_TYPE (TREE_TYPE (op0)));
- if (code1 == COMPLEX_TYPE || code1 == VECTOR_TYPE)
- code1 = TREE_CODE (TREE_TYPE (TREE_TYPE (op1)));
+ if (tcode0 == COMPLEX_TYPE || tcode0 == VECTOR_TYPE)
+ tcode0 = TREE_CODE (TREE_TYPE (TREE_TYPE (op0)));
+ if (tcode1 == COMPLEX_TYPE || tcode1 == VECTOR_TYPE)
+ tcode1 = TREE_CODE (TREE_TYPE (TREE_TYPE (op1)));
- if (!(code0 == INTEGER_TYPE && code1 == INTEGER_TYPE))
+ if (!(tcode0 == INTEGER_TYPE && tcode1 == INTEGER_TYPE))
resultcode = RDIV_EXPR;
else
/* When dividing two signed integers, we have to promote to int.
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index fa7f31e9ba0..68a1391ba42 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -9803,7 +9803,7 @@ The extended version of @code{__builtin_expect} is not supported.
@end itemize
-@emph{Note:} Only the interface descibed in the aforementioned
+@emph{Note:} Only the interface described in the aforementioned
specification is supported. Internally, GCC uses built-in functions to
implement the required functionality, but these are not supported and
are subject to change without notice.
diff --git a/gcc/doc/install.texi b/gcc/doc/install.texi
index 0fee98af025..ab2a9f29996 100644
--- a/gcc/doc/install.texi
+++ b/gcc/doc/install.texi
@@ -297,16 +297,14 @@ library search path, you will have to configure with the
@option{--with-gmp} configure option. See also
@option{--with-gmp-lib} and @option{--with-gmp-include}.
-@item MPFR Library version 2.2 (or later)
+@item MPFR Library version 2.2.1 (or later)
Necessary to build GCC. It can be downloaded from
-@uref{http://www.mpfr.org/}. If you're using version 2.2.0, You
-should also apply revision 16 (or later) of the cumulative patch from
-@uref{http://www.mpfr.org/mpfr-current/}. The version of MPFR that is
-bundled with GMP 4.1.x contains numerous bugs. Although GCC will
-appear to function with the buggy versions of MPFR, there are a few
-bugs that will not be fixed when using this version. It is strongly
-recommended to upgrade to the recommended version of MPFR.
+@uref{http://www.mpfr.org/}. The version of MPFR that is bundled with
+GMP 4.1.x contains numerous bugs. Although GCC may appear to function
+with the buggy versions of MPFR, there are a few bugs that will not be
+fixed when using this version. It is strongly recommended to upgrade
+to the recommended version of MPFR.
The @option{--with-mpfr} configure option should be used if your MPFR
Library is not installed in your default library search path. See
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 36da6d571f0..cd03c92b7aa 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -9733,12 +9733,12 @@ and memset for short lengths.
@item -minline-stringops-dynamically
@opindex minline-stringops-dynamically
For string operation of unknown size, inline runtime checks so for small
-blocks inline code is used, while for large blocks librarly call is used.
+blocks inline code is used, while for large blocks a library call is used.
@item -mstringop-strategy=@var{alg}
@opindex mstringop-strategy=@var{alg}
Overwrite internal decision heuristic about particular algorithm to inline
-string opteration with. The allowed values are @code{rep_byte},
+string operation with. The allowed values are @code{rep_byte},
@code{rep_4byte}, @code{rep_8byte} for expanding using i386 @code{rep} prefix
of specified size, @code{loop}, @code{unrolled_loop} for expanding inline loop,
@code{libcall} for always expanding library call.
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 9f840b00d1d..355c1062379 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -3621,7 +3621,7 @@ and place the resulting N/2 values of size 2*S in the output vector (operand 0).
Signed/Unsigned widening multiplication.
The two inputs (operands 1 and 2) are vectors with N
signed/unsigned elements of size S. Multiply the high/low elements of the two
-vectors, and put the N/2 products of size 2*S in the output vector (opernad 0).
+vectors, and put the N/2 products of size 2*S in the output vector (operand 0).
@cindex @code{mulhisi3} instruction pattern
@item @samp{mulhisi3}
diff --git a/gcc/fold-const.c b/gcc/fold-const.c
index b19b7688a2d..93dee15ce28 100644
--- a/gcc/fold-const.c
+++ b/gcc/fold-const.c
@@ -7758,24 +7758,24 @@ fold_minmax (enum tree_code code, tree type, tree op0, tree op1)
else
gcc_unreachable ();
- /* MIN (MAX (a, b), b) == b.  */
+ /* MIN (MAX (a, b), b) == b. */
if (TREE_CODE (op0) == compl_code
&& operand_equal_p (TREE_OPERAND (op0, 1), op1, 0))
return omit_one_operand (type, op1, TREE_OPERAND (op0, 0));
- /* MIN (MAX (b, a), b) == b.  */
+ /* MIN (MAX (b, a), b) == b. */
if (TREE_CODE (op0) == compl_code
&& operand_equal_p (TREE_OPERAND (op0, 0), op1, 0)
&& reorder_operands_p (TREE_OPERAND (op0, 1), op1))
return omit_one_operand (type, op1, TREE_OPERAND (op0, 1));
- /* MIN (a, MAX (a, b)) == a.  */
+ /* MIN (a, MAX (a, b)) == a. */
if (TREE_CODE (op1) == compl_code
&& operand_equal_p (op0, TREE_OPERAND (op1, 0), 0)
&& reorder_operands_p (op0, TREE_OPERAND (op1, 1)))
return omit_one_operand (type, op0, TREE_OPERAND (op1, 1));
- /* MIN (a, MAX (b, a)) == a.  */
+ /* MIN (a, MAX (b, a)) == a. */
if (TREE_CODE (op1) == compl_code
&& operand_equal_p (op0, TREE_OPERAND (op1, 1), 0)
&& reorder_operands_p (op0, TREE_OPERAND (op1, 0)))
@@ -7818,7 +7818,7 @@ maybe_canonicalize_comparison_1 (enum tree_code code, tree type,
|| TREE_OVERFLOW (cst0))
return NULL_TREE;
- /* See if we can reduce the mangitude of the constant in
+ /* See if we can reduce the magnitude of the constant in
arg0 by changing the comparison code. */
if (code0 == INTEGER_CST)
{
@@ -7899,7 +7899,7 @@ maybe_canonicalize_comparison (enum tree_code code, tree type,
return t;
/* Try canonicalization by simplifying arg1 using the swapped
- comparsion. */
+ comparison. */
code = swap_tree_comparison (code);
return maybe_canonicalize_comparison_1 (code, type, arg1, arg0);
}
@@ -10994,15 +10994,15 @@ fold_binary (enum tree_code code, tree type, tree op0, tree op1)
}
/* Comparisons with the highest or lowest possible integer of
- the specified size will have known values. */
+ the specified precision will have known values. */
{
- int width = GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (arg1)));
+ tree arg1_type = TREE_TYPE (arg1);
+ unsigned int width = TYPE_PRECISION (arg1_type);
if (TREE_CODE (arg1) == INTEGER_CST
&& ! TREE_CONSTANT_OVERFLOW (arg1)
&& width <= 2 * HOST_BITS_PER_WIDE_INT
- && (INTEGRAL_TYPE_P (TREE_TYPE (arg1))
- || POINTER_TYPE_P (TREE_TYPE (arg1))))
+ && (INTEGRAL_TYPE_P (arg1_type) || POINTER_TYPE_P (arg1_type)))
{
HOST_WIDE_INT signed_max_hi;
unsigned HOST_WIDE_INT signed_max_lo;
@@ -11015,7 +11015,7 @@ fold_binary (enum tree_code code, tree type, tree op0, tree op1)
signed_max_hi = 0;
max_hi = 0;
- if (TYPE_UNSIGNED (TREE_TYPE (arg1)))
+ if (TYPE_UNSIGNED (arg1_type))
{
max_lo = ((unsigned HOST_WIDE_INT) 2 << (width - 1)) - 1;
min_lo = 0;
@@ -11037,7 +11037,7 @@ fold_binary (enum tree_code code, tree type, tree op0, tree op1)
max_lo = -1;
min_lo = 0;
- if (TYPE_UNSIGNED (TREE_TYPE (arg1)))
+ if (TYPE_UNSIGNED (arg1_type))
{
max_hi = ((unsigned HOST_WIDE_INT) 2 << (width - 1)) - 1;
min_hi = 0;
@@ -11124,9 +11124,14 @@ fold_binary (enum tree_code code, tree type, tree op0, tree op1)
else if (TREE_INT_CST_HIGH (arg1) == signed_max_hi
&& TREE_INT_CST_LOW (arg1) == signed_max_lo
- && TYPE_UNSIGNED (TREE_TYPE (arg1))
+ && TYPE_UNSIGNED (arg1_type)
+ /* We will flip the signedness of the comparison operator
+ associated with the mode of arg1, so the sign bit is
+ specified by this mode. Check that arg1 is the signed
+ max associated with this sign bit. */
+ && width == GET_MODE_BITSIZE (TYPE_MODE (arg1_type))
/* signed_type does not work on pointer types. */
- && INTEGRAL_TYPE_P (TREE_TYPE (arg1)))
+ && INTEGRAL_TYPE_P (arg1_type))
{
/* The following case also applies to X < signed_max+1
and X >= signed_max+1 because previous transformations. */
@@ -11136,8 +11141,8 @@ fold_binary (enum tree_code code, tree type, tree op0, tree op1)
st0 = lang_hooks.types.signed_type (TREE_TYPE (arg0));
st1 = lang_hooks.types.signed_type (TREE_TYPE (arg1));
return fold_build2 (code == LE_EXPR ? GE_EXPR: LT_EXPR,
- type, fold_convert (st0, arg0),
- build_int_cst (st1, 0));
+ type, fold_convert (st0, arg0),
+ build_int_cst (st1, 0));
}
}
}
diff --git a/gcc/fortran/ChangeLog b/gcc/fortran/ChangeLog
index 9442f687e8d..be3e91e5dac 100644
--- a/gcc/fortran/ChangeLog
+++ b/gcc/fortran/ChangeLog
@@ -1,3 +1,20 @@
+2006-12-01 Thomas Koenig <Thomas.Koenig@online.de>
+
+ PR libfortran/29568
+ * gfortran.h (gfc_option_t): Add max_subrecord_length.
+ (top level): Define MAX_SUBRECORD_LENGTH.
+ * lang.opt: Add option -fmax-subrecord-length=.
+ * trans-decl.c: Add new function set_max_subrecord_length.
+ (gfc_generate_function_code): If we are within the main
+ program and max_subrecord_length has been set, call
+ set_max_subrecord_length.
+ * options.c (gfc_init_options): Add defaults for
+ max_subrecord_length, convert and record_marker.
+ (gfc_handle_option): Add handling for
+ -fmax_subrecord_length.
+ * invoke.texi: Document the new default for
+ -frecord-marker=<n>.
+
2006-11-28 Paul Thomas <pault@gcc.gnu.org>
PR fortran/29976
diff --git a/gcc/fortran/gfortran.h b/gcc/fortran/gfortran.h
index f770bb81fd1..dbd3271f086 100644
--- a/gcc/fortran/gfortran.h
+++ b/gcc/fortran/gfortran.h
@@ -61,6 +61,9 @@ char *alloca ();
#define GFC_MAX_DIMENSIONS 7 /* Maximum dimensions in an array. */
#define GFC_LETTERS 26 /* Number of letters in the alphabet. */
+#define MAX_SUBRECORD_LENGTH 2147483639 /* 2**31-9 */
+
+
#define free(x) Use_gfc_free_instead_of_free()
#define gfc_is_whitespace(c) ((c==' ') || (c=='\t'))
@@ -1722,12 +1725,12 @@ typedef struct
int fshort_enums;
int convert;
int record_marker;
+ int max_subrecord_length;
}
gfc_option_t;
extern gfc_option_t gfc_option;
-
/* Constructor nodes for array and structure constructors. */
typedef struct gfc_constructor
{
diff --git a/gcc/fortran/invoke.texi b/gcc/fortran/invoke.texi
index c27218c4444..c4ee5d351ba 100644
--- a/gcc/fortran/invoke.texi
+++ b/gcc/fortran/invoke.texi
@@ -650,13 +650,17 @@ variable override the default specified by -fconvert.}
@cindex -frecord-marker=@var{length}
@item -frecord-marker=@var{length}
Specify the length of record markers for unformatted files.
-Valid values for @var{length} are 4 and 8. Default is whatever
-@code{off_t} is specified to be on that particular system.
-Note that specifying @var{length} as 4 limits the record
-length of unformatted files to 2 GB. This option does not
-extend the maximum possible record length on systems where
-@code{off_t} is a four_byte quantity.
-
+Valid values for @var{length} are 4 and 8. Default is 4.
+@emph{This is different from previous versions of gfortran},
+which specified a default record marker length of 8 on most
+systems. If you want to read or write files compatible
+with earlier versions of gfortran, use @samp{-frecord-marker=8}.
+
+@cindex -fmax-subrecord-length=@var{length}
+@item -fmax-subrecord-length=@var{length}
+Specify the maximum length for a subrecord. The maximum permitted
+value for @var{length} is 2147483639, which is also the default.
+This option is mainly useful for the gfortran testsuite.
@end table
@node Code Gen Options
diff --git a/gcc/fortran/lang.opt b/gcc/fortran/lang.opt
index 053f63b0019..ebd6b8dd8ec 100644
--- a/gcc/fortran/lang.opt
+++ b/gcc/fortran/lang.opt
@@ -189,6 +189,10 @@ fmax-identifier-length=
Fortran RejectNegative Joined UInteger
-fmax-identifier-length=<n> Maximum identifier length
+fmax-subrecord-length=
+Fortran RejectNegative Joined UInteger
+-fmax-subrecord-length=<n> Maximum length for subrecords
+
fmax-stack-var-size=
Fortran RejectNegative Joined UInteger
-fmax-stack-var-size=<n> Size in bytes of the largest array that will be put on the stack
diff --git a/gcc/fortran/options.c b/gcc/fortran/options.c
index f03319bbcea..6ec84676185 100644
--- a/gcc/fortran/options.c
+++ b/gcc/fortran/options.c
@@ -51,6 +51,9 @@ gfc_init_options (unsigned int argc ATTRIBUTE_UNUSED,
gfc_option.max_continue_fixed = 19;
gfc_option.max_continue_free = 39;
gfc_option.max_identifier_length = GFC_MAX_SYMBOL_LEN;
+ gfc_option.max_subrecord_length = 0;
+ gfc_option.convert = CONVERT_NATIVE;
+ gfc_option.record_marker = 0;
gfc_option.verbose = 0;
gfc_option.warn_aliasing = 0;
@@ -636,6 +639,12 @@ gfc_handle_option (size_t scode, const char *arg, int value)
case OPT_frecord_marker_8:
gfc_option.record_marker = 8;
break;
+
+ case OPT_fmax_subrecord_length_:
+ if (value > MAX_SUBRECORD_LENGTH)
+ gfc_fatal_error ("Maximum subrecord length cannot exceed %d", MAX_SUBRECORD_LENGTH);
+
+ gfc_option.max_subrecord_length = value;
}
return result;
diff --git a/gcc/fortran/trans-decl.c b/gcc/fortran/trans-decl.c
index c5fdaee2b4a..047635da64f 100644
--- a/gcc/fortran/trans-decl.c
+++ b/gcc/fortran/trans-decl.c
@@ -94,6 +94,7 @@ tree gfor_fndecl_set_fpe;
tree gfor_fndecl_set_std;
tree gfor_fndecl_set_convert;
tree gfor_fndecl_set_record_marker;
+tree gfor_fndecl_set_max_subrecord_length;
tree gfor_fndecl_ctime;
tree gfor_fndecl_fdate;
tree gfor_fndecl_ttynam;
@@ -2411,6 +2412,10 @@ gfc_build_builtin_function_decls (void)
gfc_build_library_function_decl (get_identifier (PREFIX("set_record_marker")),
void_type_node, 1, gfc_c_int_type_node);
+ gfor_fndecl_set_max_subrecord_length =
+ gfc_build_library_function_decl (get_identifier (PREFIX("set_max_subrecord_length")),
+ void_type_node, 1, gfc_c_int_type_node);
+
gfor_fndecl_in_pack = gfc_build_library_function_decl (
get_identifier (PREFIX("internal_pack")),
pvoid_type_node, 1, pvoid_type_node);
@@ -3275,6 +3280,18 @@ gfc_generate_function_code (gfc_namespace * ns)
}
+ if (sym->attr.is_main_program && gfc_option.max_subrecord_length != 0)
+ {
+ tree arglist, gfc_c_int_type_node;
+
+ gfc_c_int_type_node = gfc_get_int_type (gfc_c_int_kind);
+ arglist = gfc_chainon_list (NULL_TREE,
+ build_int_cst (gfc_c_int_type_node,
+ gfc_option.max_subrecord_length));
+ tmp = build_function_call_expr (gfor_fndecl_set_max_subrecord_length, arglist);
+ gfc_add_expr_to_block (&body, tmp);
+ }
+
if (TREE_TYPE (DECL_RESULT (fndecl)) != void_type_node
&& sym->attr.subroutine)
{
diff --git a/gcc/fwprop.c b/gcc/fwprop.c
index 9dbf4749e2e..887da7009e4 100644
--- a/gcc/fwprop.c
+++ b/gcc/fwprop.c
@@ -389,7 +389,7 @@ propagate_rtx_1 (rtx *px, rtx old, rtx new, bool can_appear)
}
/* Replace all occurrences of OLD in X with NEW and try to simplify the
- resulting expression (in mode MODE). Return a new expresion if it is
+ resulting expression (in mode MODE). Return a new expression if it is
a constant, otherwise X.
Simplifications where occurrences of NEW collapse to a constant are always
diff --git a/gcc/predict.c b/gcc/predict.c
index e865a6223cc..d28e515386a 100644
--- a/gcc/predict.c
+++ b/gcc/predict.c
@@ -1601,7 +1601,7 @@ estimate_loops_at_level (struct loop *first_loop)
}
}
-/* Propates frequencies through structure of loops. */
+/* Propagates frequencies through structure of loops. */
static void
estimate_loops (void)
diff --git a/gcc/testsuite/ChangeLog b/gcc/testsuite/ChangeLog
index 08b4e0446e5..a3f3d640418 100644
--- a/gcc/testsuite/ChangeLog
+++ b/gcc/testsuite/ChangeLog
@@ -1,3 +1,34 @@
+2006-12-02 Andrew Pinski <andrew_pinski@playstation.sony.com>
+
+ PR C++/30033
+ * g++.dg/cpp0x/static_assert4.C: New testcase.
+
+2006-12-02 Kaveh R. Ghazi <ghazi@caip.rutgers.edu>
+
+ * gcc.dg/torture/builtin-sin-mpfr-1.c: Update MPFR comment.
+
+2006-12-02 Lee Millward <lee.millward@codesourcery.com>
+
+ PR c/27953
+ * gcc.dg/pr27953.c: New test.
+
+2006-12-01 Volker Reichelt <reichelt@igpm.rwth-aachen.de>
+
+ PR c++/30022
+ * g++.dg/ext/vector5.C: New test.
+
+ PR c++/30021
+ * g++.dg/other/main1.C: New test.
+
+2006-12-01 Thomas Koenig <Thomas.Koenig@online.de>
+
+ PR libfortran/29568
+ * gfortran.dg/convert_implied_open.f90: Change to
+ new default record length.
+ * gfortran.dg/unf_short_record_1.f90: Adapt to
+ new error message.
+ * gfortran.dg/unformatted_subrecord_1.f90: New test.
+
2006-12-01 Andrew MacLeod <amacleod@redhat.com>
* gcc.dg/max-1.c: Remove reference to -fno-tree-lrs option.
diff --git a/gcc/testsuite/g++.dg/cpp0x/static_assert4.C b/gcc/testsuite/g++.dg/cpp0x/static_assert4.C
new file mode 100644
index 00000000000..b0818873f10
--- /dev/null
+++ b/gcc/testsuite/g++.dg/cpp0x/static_assert4.C
@@ -0,0 +1,15 @@
+// { dg-options "-std=c++0x --param ggc-min-heapsize=0 --param ggc-min-expand=0 " }
+// PR C++/30033
+// Make sure that the static assert does not crash the GC.
+
+template <class T>
+struct default_delete
+{
+ void
+ operator() (T * ptr) const
+ {
+ static_assert (sizeof (T) > 0, "Can't delete pointer to incomplete type");
+ }
+};
+
+
diff --git a/gcc/testsuite/g++.dg/ext/vector5.C b/gcc/testsuite/g++.dg/ext/vector5.C
new file mode 100644
index 00000000000..e5304bcb12d
--- /dev/null
+++ b/gcc/testsuite/g++.dg/ext/vector5.C
@@ -0,0 +1,8 @@
+// PR c++/30022
+// { dg-do compile }
+
+void foo()
+{
+ int __attribute__((vector_size(8))) v;
+ v = 1/v; // { dg-error "invalid operands of types" }
+}
diff --git a/gcc/testsuite/g++.dg/other/main1.C b/gcc/testsuite/g++.dg/other/main1.C
new file mode 100644
index 00000000000..ba945741efb
--- /dev/null
+++ b/gcc/testsuite/g++.dg/other/main1.C
@@ -0,0 +1,4 @@
+// PR c++/30021
+// { dg-do compile }
+
+int main(void,char**); // { dg-error "incomplete type|invalid use" }
diff --git a/gcc/testsuite/gcc.dg/pr27953.c b/gcc/testsuite/gcc.dg/pr27953.c
new file mode 100644
index 00000000000..92a63d83686
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr27953.c
@@ -0,0 +1,4 @@
+/* PR c/27953 */
+
+void foo(struct A a) {} /* { dg-error "parameter list|definition|incomplete type" } */
+void foo() {} /* { dg-error "redefinition" } */
diff --git a/gcc/testsuite/gcc.dg/torture/builtin-sin-mpfr-1.c b/gcc/testsuite/gcc.dg/torture/builtin-sin-mpfr-1.c
index 86396f5c46a..35972870894 100644
--- a/gcc/testsuite/gcc.dg/torture/builtin-sin-mpfr-1.c
+++ b/gcc/testsuite/gcc.dg/torture/builtin-sin-mpfr-1.c
@@ -1,7 +1,6 @@
/* Version 2.2.0 of MPFR had bugs in sin rounding. This test checks
to see if that buggy version was installed. The problem is fixed
- in the MPFR cumulative patch http://www.mpfr.org/mpfr-current and
- presumably later MPFR versions.
+ in version 2.2.1 and presumably later MPFR versions.
Origin: Kaveh R. Ghazi 10/23/2006. */
diff --git a/gcc/testsuite/gfortran.dg/convert_implied_open.f90 b/gcc/testsuite/gfortran.dg/convert_implied_open.f90
index 4066f618cc2..9c25b5d961c 100644
--- a/gcc/testsuite/gfortran.dg/convert_implied_open.f90
+++ b/gcc/testsuite/gfortran.dg/convert_implied_open.f90
@@ -3,13 +3,13 @@
! PR 26735 - implied open didn't use to honor -fconvert
program main
implicit none
- integer (kind=8) :: i1, i2, i3
- write (10) 1_8
+ integer (kind=4) :: i1, i2, i3
+ write (10) 1_4
close (10)
- open (10, form="unformatted", access="direct", recl=8)
+ open (10, form="unformatted", access="direct", recl=4)
read (10,rec=1) i1
read (10,rec=2) i2
read (10,rec=3) i3
- if (i1 /= 8 .or. i2 /= 1 .or. i3 /= 8) call abort
+ if (i1 /= 4 .or. i2 /= 1 .or. i3 /= 4) call abort
close (10,status="delete")
end program main
diff --git a/gcc/testsuite/gfortran.dg/unf_short_record_1.f90 b/gcc/testsuite/gfortran.dg/unf_short_record_1.f90
index 1bb62736a36..45c94c29405 100644
--- a/gcc/testsuite/gfortran.dg/unf_short_record_1.f90
+++ b/gcc/testsuite/gfortran.dg/unf_short_record_1.f90
@@ -11,7 +11,7 @@ program main
read (10, err=20, iomsg=msg) a
call abort
20 continue
- if (msg .ne. "Short record on unformatted read") call abort
+ if (msg .ne. "I/O past end of record on unformatted file") call abort
if (a(1) .ne. 'a' .or. a(2) .ne. 'b' .or. a(3) .ne. 'b') call abort
close (10, status="delete")
end program main
diff --git a/gcc/testsuite/gfortran.dg/unformatted_subrecord_1.f90 b/gcc/testsuite/gfortran.dg/unformatted_subrecord_1.f90
new file mode 100644
index 00000000000..5812a8eaaf5
--- /dev/null
+++ b/gcc/testsuite/gfortran.dg/unformatted_subrecord_1.f90
@@ -0,0 +1,45 @@
+! { dg-do run }
+! { dg-options "-fmax-subrecord-length=16" }
+! Test Intel record markers with 16-byte subrecord sizes.
+program main
+ implicit none
+ integer, dimension(20) :: n
+ integer, dimension(30) :: m
+ integer :: i
+ real :: r
+ integer :: k
+ ! Maximum subrecord length is 16 here, or the test will fail.
+ open (10, file="f10.dat", &
+ form="unformatted", access="sequential")
+ n = (/ (i**2, i=1, 20) /)
+ write (10) n
+ close (10)
+ ! Read back the file, including record markers.
+ open (10, file="f10.dat", form="unformatted", access="stream")
+ read (10) m
+ if (any(m .ne. (/ -16, 1, 4, 9, 16, 16, -16, 25, 36, 49, 64, &
+ -16, -16, 81, 100, 121, 144, -16, -16, 169, 196, 225, &
+ 256, -16, 16, 289, 324, 361, 400, -16 /))) call abort
+ close (10)
+ open (10, file="f10.dat", form="unformatted", &
+ access="sequential")
+ m = 42
+ read (10) m(1:5)
+ if (any(m(1:5) .ne. (/ 1, 4, 9, 16, 25 /))) call abort
+ if (any(m(6:30) .ne. 42)) call abort
+ backspace 10
+ n = 0
+ read (10) n(1:5)
+ if (any(n(1:5) .ne. (/ 1, 4, 9, 16, 25 /))) call abort
+ if (any(n(6:20) .ne. 0)) call abort
+ ! Append to the end of the file
+ write (10) 3.14
+ ! Test multiple backspace statements
+ backspace 10
+ backspace 10
+ read (10) k
+ if (k .ne. 1) call abort
+ read (10) r
+ if (abs(r-3.14) .gt. 1e-7) call abort
+ close (10, status="delete")
+end program main
diff --git a/gcc/tree-data-ref.h b/gcc/tree-data-ref.h
index d80be31e3d0..2d23dce8207 100644
--- a/gcc/tree-data-ref.h
+++ b/gcc/tree-data-ref.h
@@ -119,7 +119,7 @@ struct data_reference
a[j].b[5][j] = 0;
Here the offset expression (j * C_j + C) will not contain variables after
- subsitution of j=3 (3*C_j + C).
+ substitution of j=3 (3*C_j + C).
Misalignment can be calculated only if all the variables can be
substituted with constants, otherwise, we record maximum possible alignment
diff --git a/gcc/tree-flow.h b/gcc/tree-flow.h
index fb7b02fa82e..3be370ce403 100644
--- a/gcc/tree-flow.h
+++ b/gcc/tree-flow.h
@@ -39,8 +39,8 @@ struct basic_block_def;
typedef struct basic_block_def *basic_block;
#endif
-/* Gimple dataflow datastructure. All publically available fields shall have
- gimple_ accessor defined in tree-flow-inline.h, all publically modifiable
+/* Gimple dataflow datastructure. All publicly available fields shall have
+ gimple_ accessor defined in tree-flow-inline.h, all publicly modifiable
fields should have gimple_set accessor. */
struct gimple_df GTY(()) {
/* Array of all variables referenced in the function. */
diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c
index 7e50b97b660..d92f6e91ba5 100644
--- a/gcc/tree-ssa-loop-manip.c
+++ b/gcc/tree-ssa-loop-manip.c
@@ -627,7 +627,7 @@ can_unroll_loop_p (struct loop *loop, unsigned factor,
|| niter->cmp == ERROR_MARK
/* Scalar evolutions analysis might have copy propagated
the abnormal ssa names into these expressions, hence
- emiting the computations based on them during loop
+ emitting the computations based on them during loop
unrolling might create overlapping life ranges for
them, and failures in out-of-ssa. */
|| contains_abnormal_ssa_name_p (niter->may_be_zero)
diff --git a/gcc/tree-ssa-loop-niter.c b/gcc/tree-ssa-loop-niter.c
index 34ce6506f34..862f993f3b6 100644
--- a/gcc/tree-ssa-loop-niter.c
+++ b/gcc/tree-ssa-loop-niter.c
@@ -1831,7 +1831,7 @@ idx_infer_loop_bounds (tree base, tree *idx, void *dta)
unsigned char).
To make things simpler, we require both bounds to fit into type, although
- there are cases where this would not be strightly necessary. */
+ there are cases where this would not be strictly necessary. */
if (!int_fits_type_p (high, type)
|| !int_fits_type_p (low, type))
return true;
@@ -2086,7 +2086,7 @@ n_of_executions_at_most (tree stmt,
-- if NITER_BOUND->is_exit is true, then everything before
NITER_BOUND->stmt is executed at most NITER_BOUND->bound + 1
- times, and everyting after it at most NITER_BOUND->bound times.
+ times, and everything after it at most NITER_BOUND->bound times.
-- If NITER_BOUND->is_exit is false, then if we can prove that when STMT
is executed, then NITER_BOUND->stmt is executed as well in the same
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index 80986eb0860..1bbd77e96c8 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -1668,7 +1668,7 @@ compute_antic_aux (basic_block block, bool block_has_abnormal_pred_edge)
(since the maximal set often has 300+ members, even when you
have a small number of blocks).
Basically, we defer the computation of ANTIC for this block
- until we have processed it's successor, which will inveitably
+ until we have processed its successor, which will inevitably
have a *much* smaller set of values to phi translate once
clean has been run on it.
The cost of doing this is that we technically perform more
diff --git a/gcc/tree-vect-analyze.c b/gcc/tree-vect-analyze.c
index a0d6e087082..89555151387 100644
--- a/gcc/tree-vect-analyze.c
+++ b/gcc/tree-vect-analyze.c
@@ -1428,7 +1428,7 @@ vect_enhance_data_refs_alignment (loop_vec_info loop_vinfo)
{
/* For interleaved access we peel only if number of iterations in
the prolog loop ({VF - misalignment}), is a multiple of the
- number of the interelaved accesses. */
+ number of the interleaved accesses. */
int elem_size, mis_in_elements;
int vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
@@ -2228,7 +2228,8 @@ vect_mark_stmts_to_be_vectorized (loop_vec_info loop_vinfo)
is not used inside the loop), it will be vectorized, and therefore
the corresponding DEF_STMTs need to marked as relevant.
We distinguish between two kinds of relevant stmts - those that are
- used by a reduction conputation, and those that are (also) used by a regular computation. This allows us later on to identify stmts
+ used by a reduction computation, and those that are (also) used by
+ a regular computation. This allows us later on to identify stmts
that are used solely by a reduction, and therefore the order of
the results that they produce does not have to be kept.
*/
diff --git a/gcc/tree-vect-transform.c b/gcc/tree-vect-transform.c
index 769b4af10b0..a33cbaa6d30 100644
--- a/gcc/tree-vect-transform.c
+++ b/gcc/tree-vect-transform.c
@@ -368,7 +368,7 @@ vect_create_data_ref_ptr (tree stmt,
/* Function bump_vector_ptr
Increment a pointer (to a vector type) by vector-size. Connect the new
- increment stmt to the exising def-use update-chain of the pointer.
+ increment stmt to the existing def-use update-chain of the pointer.
The pointer def-use update-chain before this function:
DATAREF_PTR = phi (p_0, p_2)
@@ -658,7 +658,7 @@ vect_get_vec_def_for_operand (tree op, tree stmt, tree *scalar_def)
stmts operating on wider types we need to create 'VF/nunits' "copies" of the
vector stmt (each computing a vector of 'nunits' results, and together
computing 'VF' results in each iteration). This function is called when
- vectorizing such a stmt (e.g. vectorizing S2 in the illusration below, in
+ vectorizing such a stmt (e.g. vectorizing S2 in the illustration below, in
which VF=16 and nuniti=4, so the number of copies required is 4):
scalar stmt: vectorized into: STMT_VINFO_RELATED_STMT
@@ -2495,13 +2495,13 @@ vect_strided_store_supported (tree vectype)
/* Function vect_permute_store_chain.
- Given a chain of interleaved strores in DR_CHAIN of LENGTH that must be
+ Given a chain of interleaved stores in DR_CHAIN of LENGTH that must be
a power of 2, generate interleave_high/low stmts to reorder the data
correctly for the stores. Return the final references for stores in
RESULT_CHAIN.
E.g., LENGTH is 4 and the scalar type is short, i.e., VF is 8.
- The input is 4 vectors each containg 8 elements. We assign a number to each
+ The input is 4 vectors each containing 8 elements. We assign a number to each
element, the input sequence is:
1st vec: 0 1 2 3 4 5 6 7
@@ -2529,7 +2529,7 @@ vect_strided_store_supported (tree vectype)
and of interleave_low: 2 6 3 7
- The permutaion is done in log LENGTH stages. In each stage interleave_high
+ The permutation is done in log LENGTH stages. In each stage interleave_high
and interleave_low stmts are created for each pair of vectors in DR_CHAIN,
where the first argument is taken from the first half of DR_CHAIN and the
second argument from it's second half.
@@ -2758,7 +2758,7 @@ vectorizable_store (tree stmt, block_stmt_iterator *bsi, tree *vec_stmt)
And they are put in STMT_VINFO_VEC_STMT of the corresponding scalar stmts
(the order of the data-refs in the output of vect_permute_store_chain
corresponds to the order of scalar stmts in the interleaving chain - see
- the documentaion of vect_permute_store_chain()).
+ the documentation of vect_permute_store_chain()).
In case of both multiple types and interleaving, above vector stores and
permutation stmts are created for every copy. The result vector stmts are
@@ -3050,7 +3050,7 @@ vect_strided_load_supported (tree vectype)
correctly. Return the final references for loads in RESULT_CHAIN.
E.g., LENGTH is 4 and the scalar type is short, i.e., VF is 8.
- The input is 4 vectors each containg 8 elements. We assign a number to each
+ The input is 4 vectors each containing 8 elements. We assign a number to each
element, the input sequence is:
1st vec: 0 1 2 3 4 5 6 7
@@ -3078,7 +3078,7 @@ vect_strided_load_supported (tree vectype)
and of extract_odd: 1 3 5 7
- The permutaion is done in log LENGTH stages. In each stage extract_even and
+ The permutation is done in log LENGTH stages. In each stage extract_even and
extract_odd stmts are created for each pair of vectors in DR_CHAIN in their
order. In our example,
@@ -3443,7 +3443,7 @@ vectorizable_load (tree stmt, block_stmt_iterator *bsi, tree *vec_stmt)
And they are put in STMT_VINFO_VEC_STMT of the corresponding scalar stmts
(the order of the data-refs in the output of vect_permute_load_chain
corresponds to the order of scalar stmts in the interleaving chain - see
- the documentaion of vect_permute_load_chain()).
+ the documentation of vect_permute_load_chain()).
The generation of permutation stmts and recording them in
STMT_VINFO_VEC_STMT is done in vect_transform_strided_load().
@@ -4332,7 +4332,7 @@ vect_gen_niters_for_prolog_loop (loop_vec_info loop_vinfo, tree loop_niters)
if (DR_GROUP_FIRST_DR (stmt_info))
{
- /* For interleaved access element size must be multipled by the size of
+ /* For interleaved access element size must be multiplied by the size of
the interleaved group. */
group_size = DR_GROUP_SIZE (vinfo_for_stmt (
DR_GROUP_FIRST_DR (stmt_info)));
diff --git a/gcc/tree-vectorizer.c b/gcc/tree-vectorizer.c
index ed37d5f040b..3e186a3c370 100644
--- a/gcc/tree-vectorizer.c
+++ b/gcc/tree-vectorizer.c
@@ -1762,7 +1762,7 @@ vect_is_simple_use (tree operand, loop_vec_info loop_vinfo, tree *def_stmt,
vector form (i.e., when operating on arguments of type VECTYPE).
The two kinds of widening operations we currently support are
- NOP and WIDEN_MULT. This function checks if these oprations
+ NOP and WIDEN_MULT. This function checks if these operations
are supported by the target platform either directly (via vector
tree-codes), or via target builtins.
@@ -1796,9 +1796,9 @@ supportable_widening_operation (enum tree_code code, tree stmt, tree vectype,
vect1: [res1,res2,res3,res4], vect2: [res5,res6,res7,res8].
However, in the special case that the result of the widening operation is
- used in a reduction copmutation only, the order doesn't matter (because
+ used in a reduction computation only, the order doesn't matter (because
when vectorizing a reduction we change the order of the computation).
- Some targets can take advatage of this and generate more efficient code.
+ Some targets can take advantage of this and generate more efficient code.
For example, targets like Altivec, that support widen_mult using a sequence
of {mult_even,mult_odd} generate the following vectors:
vect1: [res1,res3,res5,res7], vect2: [res2,res4,res6,res8]. */
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index b55480ee8cd..adcbbdc616e 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -2902,7 +2902,7 @@ register_edge_assert_for (tree name, edge e, block_stmt_iterator si, tree cond)
/* In the case of NAME == 1 or NAME != 0, for TRUTH_AND_EXPR defining
statement of NAME we can assert both operands of the TRUTH_AND_EXPR
- have non-zero value. */
+ have nonzero value. */
if (((comp_code == EQ_EXPR && integer_onep (val))
|| (comp_code == NE_EXPR && integer_zerop (val))))
{
diff --git a/libgfortran/ChangeLog b/libgfortran/ChangeLog
index 97e7f3a2c2b..ca823924d0b 100644
--- a/libgfortran/ChangeLog
+++ b/libgfortran/ChangeLog
@@ -1,3 +1,46 @@
+2006-12-01 Thomas Koenig <Thomas.Koenig@online.de>
+
+ PR libfortran/29568
+ * libgfortran/libgfortran.h (compile_options_t): Add
+ record_marker. (top level): Define GFC_MAX_SUBRECORD_LENGTH.
+ * runtime/compile_options.c (set_record_marker): Change
+ default to four-byte record marker.
+ (set_max_subrecord_length): New function.
+ * runtime/error.c (translate_error): Change error message
+ for short record on unformatted read.
+ * io/io.h (gfc_unit): Add recl_subrecord, bytes_left_subrecord
+ and continued.
+ * io/file_pos.c (unformatted_backspace): Change default of record
+ marker size to four bytes. Loop over subrecords.
+ * io/open.c: Default recl is max_offset. If
+ compile_options.max_subrecord_length has been set, set
+ u->recl_subrecord to its value, otherwise to the maximum value.
+ * io/transfer.c (top level): Add prototypes for us_read, us_write,
+ next_record_r_unf and next_record_w_unf.
+ (read_block_direct): Separate codepaths for unformatted direct
+ and unformatted sequential. If a recl has been set by the
+ user, use the number of bytes left for the record if it is smaller
+ than the read request. Loop over subrecords. Set an error if the
+ user has set a recl and the read was short.
+ (write_buf): Separate codepaths for unformatted direct and
+ unformatted sequential. If a recl has been set by the
+ user, use the number of bytes left for the record if it is smaller
+ than the write request. Loop over subrecords. Set an error if the
+ user has set a recl and the write was short.
+ (us_read): Add parameter continued (to indicate that bytes_left
+ should not be initialized). Change default of record marker size
+ to four bytes. Use subrecord. If the subrecord length is smaller than
+ zero, this indicates a continuation.
+ (us_write): Add parameter continued (to indicate that the continued
+ flag should be set). Use subrecord.
+ (pre_position): Use 0 for continued on us_write and us_read calls.
+ (skip_record): New function.
+ (next_record_r_unf): New function.
+ (next_record_r): Use next_record_r_unf.
+ (write_us_marker): Default size for record markers is four bytes.
+ (next_record_w_unf): New function.
+ (next_record_w): Use next_record_w_unf.
+
2006-11-25 Francois-Xavier Coudert <coudert@clipper.ens.fr>
* Makefile.am: Remove intrinsics/erf.c and intrinsics/bessel.c.
diff --git a/libgfortran/io/file_pos.c b/libgfortran/io/file_pos.c
index 979dec55513..df722e4cbc7 100644
--- a/libgfortran/io/file_pos.c
+++ b/libgfortran/io/file_pos.c
@@ -98,7 +98,7 @@ formatted_backspace (st_parameter_filepos *fpp, gfc_unit *u)
/* unformatted_backspace(fpp) -- Move the file backwards for an unformatted
sequential file. We are guaranteed to be between records on entry and
- we have to shift to the previous record. */
+ we have to shift to the previous record. Loop over subrecords. */
static void
unformatted_backspace (st_parameter_filepos *fpp, gfc_unit *u)
@@ -107,74 +107,74 @@ unformatted_backspace (st_parameter_filepos *fpp, gfc_unit *u)
GFC_INTEGER_4 m4;
GFC_INTEGER_8 m8;
int length, length_read;
+ int continued;
char *p;
if (compile_options.record_marker == 0)
- length = sizeof (gfc_offset);
+ length = sizeof (GFC_INTEGER_4);
else
length = compile_options.record_marker;
- length_read = length;
+ do
+ {
+ length_read = length;
- p = salloc_r_at (u->s, &length_read,
- file_position (u->s) - length);
- if (p == NULL || length_read != length)
- goto io_error;
+ p = salloc_r_at (u->s, &length_read,
+ file_position (u->s) - length);
+ if (p == NULL || length_read != length)
+ goto io_error;
- /* Only CONVERT_NATIVE and CONVERT_SWAP are valid here. */
- if (u->flags.convert == CONVERT_NATIVE)
- {
- switch (compile_options.record_marker)
+ /* Only CONVERT_NATIVE and CONVERT_SWAP are valid here. */
+ if (u->flags.convert == CONVERT_NATIVE)
{
- case 0:
- memcpy (&m, p, sizeof(gfc_offset));
- break;
-
- case sizeof(GFC_INTEGER_4):
- memcpy (&m4, p, sizeof (m4));
- m = m4;
- break;
-
- case sizeof(GFC_INTEGER_8):
- memcpy (&m8, p, sizeof (m8));
- m = m8;
- break;
-
- default:
- runtime_error ("Illegal value for record marker");
- break;
+ switch (length)
+ {
+ case sizeof(GFC_INTEGER_4):
+ memcpy (&m4, p, sizeof (m4));
+ m = m4;
+ break;
+
+ case sizeof(GFC_INTEGER_8):
+ memcpy (&m8, p, sizeof (m8));
+ m = m8;
+ break;
+
+ default:
+ runtime_error ("Illegal value for record marker");
+ break;
+ }
}
- }
- else
- {
- switch (compile_options.record_marker)
+ else
{
- case 0:
- reverse_memcpy (&m, p, sizeof(gfc_offset));
- break;
-
- case sizeof(GFC_INTEGER_4):
- reverse_memcpy (&m4, p, sizeof (m4));
- m = m4;
- break;
-
- case sizeof(GFC_INTEGER_8):
- reverse_memcpy (&m8, p, sizeof (m8));
- m = m8;
- break;
-
- default:
- runtime_error ("Illegal value for record marker");
- break;
+ switch (length)
+ {
+ case sizeof(GFC_INTEGER_4):
+ reverse_memcpy (&m4, p, sizeof (m4));
+ m = m4;
+ break;
+
+ case sizeof(GFC_INTEGER_8):
+ reverse_memcpy (&m8, p, sizeof (m8));
+ m = m8;
+ break;
+
+ default:
+ runtime_error ("Illegal value for record marker");
+ break;
+ }
+
}
- }
+ continued = m < 0;
+ if (continued)
+ m = -m;
- if ((new = file_position (u->s) - m - 2*length) < 0)
- new = 0;
+ if ((new = file_position (u->s) - m - 2*length) < 0)
+ new = 0;
- if (sseek (u->s, new) == FAILURE)
- goto io_error;
+ if (sseek (u->s, new) == FAILURE)
+ goto io_error;
+ } while (continued);
u->last_record--;
return;
diff --git a/libgfortran/io/io.h b/libgfortran/io/io.h
index e8e8390d1c5..4d227dd3b8c 100644
--- a/libgfortran/io/io.h
+++ b/libgfortran/io/io.h
@@ -499,12 +499,19 @@ typedef struct gfc_unit
unit_mode mode;
unit_flags flags;
- /* recl -- Record length of the file.
- last_record -- Last record number read or written
- maxrec -- Maximum record number in a direct access file
- bytes_left -- Bytes left in current record.
- strm_pos -- Current position in file for STREAM I/O. */
- gfc_offset recl, last_record, maxrec, bytes_left, strm_pos;
+ /* recl -- Record length of the file.
+ last_record -- Last record number read or written
+ maxrec -- Maximum record number in a direct access file
+ bytes_left -- Bytes left in current record.
+ strm_pos -- Current position in file for STREAM I/O.
+ recl_subrecord -- Maximum length for subrecord.
+ bytes_left_subrecord -- Bytes left in current subrecord. */
+ gfc_offset recl, last_record, maxrec, bytes_left, strm_pos,
+ recl_subrecord, bytes_left_subrecord;
+
+ /* Set to 1 if the current subrecord is part of a continued record. */
+
+ int continued;
__gthread_mutex_t lock;
/* Number of threads waiting to acquire this unit's lock.
diff --git a/libgfortran/io/open.c b/libgfortran/io/open.c
index 9b4f0cd7122..06fba75e1df 100644
--- a/libgfortran/io/open.c
+++ b/libgfortran/io/open.c
@@ -413,23 +413,29 @@ new_unit (st_parameter_open *opp, gfc_unit *u, unit_flags * flags)
else
{
u->flags.has_recl = 0;
- switch (compile_options.record_marker)
+ u->recl = max_offset;
+ if (compile_options.max_subrecord_length)
{
- case 0:
- u->recl = max_offset;
- break;
-
- case sizeof (GFC_INTEGER_4):
- u->recl = GFC_INTEGER_4_HUGE;
- break;
-
- case sizeof (GFC_INTEGER_8):
- u->recl = max_offset;
- break;
-
- default:
- runtime_error ("Illegal value for record marker");
- break;
+ u->recl_subrecord = compile_options.max_subrecord_length;
+ }
+ else
+ {
+ switch (compile_options.record_marker)
+ {
+ case 0:
+ /* Fall through */
+ case sizeof (GFC_INTEGER_4):
+ u->recl_subrecord = GFC_MAX_SUBRECORD_LENGTH;
+ break;
+
+ case sizeof (GFC_INTEGER_8):
+ u->recl_subrecord = max_offset - 16;
+ break;
+
+ default:
+ runtime_error ("Illegal value for record marker");
+ break;
+ }
}
}
diff --git a/libgfortran/io/transfer.c b/libgfortran/io/transfer.c
index 329d49828d4..4270d61e693 100644
--- a/libgfortran/io/transfer.c
+++ b/libgfortran/io/transfer.c
@@ -82,6 +82,11 @@ extern void transfer_array (st_parameter_dt *, gfc_array_char *, int,
gfc_charlen_type);
export_proto(transfer_array);
+static void us_read (st_parameter_dt *, int);
+static void us_write (st_parameter_dt *, int);
+static void next_record_r_unf (st_parameter_dt *, int);
+static void next_record_w_unf (st_parameter_dt *, int);
+
static const st_option advance_opt[] = {
{"yes", ADVANCE_YES},
{"no", ADVANCE_NO},
@@ -336,12 +341,16 @@ read_block (st_parameter_dt *dtp, int *length)
}
-/* Reads a block directly into application data space. */
+/* Reads a block directly into application data space. This is for
+ unformatted files. */
static void
read_block_direct (st_parameter_dt *dtp, void *buf, size_t *nbytes)
{
- size_t nread;
+ size_t to_read_record;
+ size_t have_read_record;
+ size_t to_read_subrecord;
+ size_t have_read_subrecord;
int short_record;
if (is_stream_io (dtp))
@@ -353,62 +362,169 @@ read_block_direct (st_parameter_dt *dtp, void *buf, size_t *nbytes)
return;
}
- nread = *nbytes;
- if (sread (dtp->u.p.current_unit->s, buf, &nread) != 0)
+ to_read_record = *nbytes;
+ have_read_record = to_read_record;
+ if (sread (dtp->u.p.current_unit->s, buf, &have_read_record) != 0)
{
generate_error (&dtp->common, ERROR_OS, NULL);
return;
}
- dtp->u.p.current_unit->strm_pos += (gfc_offset) nread;
-
- if (nread != *nbytes) /* Short read, e.g. if we hit EOF. */
- generate_error (&dtp->common, ERROR_END, NULL);
+ dtp->u.p.current_unit->strm_pos += (gfc_offset) have_read_record;
+ if (to_read_record != have_read_record)
+ {
+ /* Short read, e.g. if we hit EOF. */
+ generate_error (&dtp->common, ERROR_END, NULL);
+ return;
+ }
return;
}
- /* Unformatted file with records */
- if (dtp->u.p.current_unit->bytes_left < (gfc_offset) *nbytes)
+ if (dtp->u.p.current_unit->flags.access == ACCESS_DIRECT)
{
- short_record = 1;
- nread = (size_t) dtp->u.p.current_unit->bytes_left;
- *nbytes = nread;
+ if (dtp->u.p.current_unit->bytes_left < (gfc_offset) *nbytes)
+ {
+ short_record = 1;
+ to_read_record = (size_t) dtp->u.p.current_unit->bytes_left;
+ *nbytes = to_read_record;
- if (dtp->u.p.current_unit->bytes_left == 0)
+ if (dtp->u.p.current_unit->bytes_left == 0)
+ {
+ dtp->u.p.current_unit->endfile = AT_ENDFILE;
+ generate_error (&dtp->common, ERROR_END, NULL);
+ return;
+ }
+ }
+
+ else
+ {
+ short_record = 0;
+ to_read_record = *nbytes;
+ }
+
+ dtp->u.p.current_unit->bytes_left -= to_read_record;
+
+ if (sread (dtp->u.p.current_unit->s, buf, &to_read_record) != 0)
+ {
+ generate_error (&dtp->common, ERROR_OS, NULL);
+ return;
+ }
+
+ if (to_read_record != *nbytes) /* Short read, e.g. if we hit EOF. */
{
- dtp->u.p.current_unit->endfile = AT_ENDFILE;
+ *nbytes = to_read_record;
generate_error (&dtp->common, ERROR_END, NULL);
return;
}
+
+ if (short_record)
+ {
+ generate_error (&dtp->common, ERROR_SHORT_RECORD, NULL);
+ return;
+ }
+ return;
}
+ /* Unformatted sequential. We loop over the subrecords, reading
+ until the request has been fulfilled or the record has run out
+ of continuation subrecords. */
+
+ /* Check whether we exceed the total record length. */
+
+ if (dtp->u.p.current_unit->flags.has_recl
+ && *nbytes > (size_t) dtp->u.p.current_unit->bytes_left)
+ {
+ to_read_record = (size_t) dtp->u.p.current_unit->bytes_left;
+ short_record = 1;
+ }
else
{
+ to_read_record = *nbytes;
short_record = 0;
- nread = *nbytes;
}
+ have_read_record = 0;
- dtp->u.p.current_unit->bytes_left -= nread;
-
- if (sread (dtp->u.p.current_unit->s, buf, &nread) != 0)
+ while (1)
{
- generate_error (&dtp->common, ERROR_OS, NULL);
- return;
- }
+ if (dtp->u.p.current_unit->bytes_left_subrecord
+ < (gfc_offset) to_read_record)
+ {
+ to_read_subrecord = (size_t) dtp->u.p.current_unit->bytes_left_subrecord;
+ to_read_record -= to_read_subrecord;
- if (nread != *nbytes) /* Short read, e.g. if we hit EOF. */
- {
- *nbytes = nread;
- generate_error (&dtp->common, ERROR_END, NULL);
- return;
+ if (dtp->u.p.current_unit->bytes_left_subrecord == 0)
+ {
+ if (dtp->u.p.current_unit->continued)
+ {
+ /* Skip to the next subrecord */
+ next_record_r_unf (dtp, 0);
+ us_read (dtp, 1);
+ continue;
+ }
+ else
+ {
+ dtp->u.p.current_unit->endfile = AT_ENDFILE;
+ generate_error (&dtp->common, ERROR_END, NULL);
+ return;
+ }
+ }
+ }
+
+ else
+ {
+ to_read_subrecord = to_read_record;
+ to_read_record = 0;
+ }
+
+ dtp->u.p.current_unit->bytes_left_subrecord -= to_read_subrecord;
+
+ have_read_subrecord = to_read_subrecord;
+ if (sread (dtp->u.p.current_unit->s, buf + have_read_record,
+ &have_read_subrecord) != 0)
+ {
+ generate_error (&dtp->common, ERROR_OS, NULL);
+ return;
+ }
+
+ have_read_record += have_read_subrecord;
+
+ if (to_read_subrecord != have_read_subrecord) /* Short read,
+ e.g. if we hit EOF. */
+ {
+ *nbytes = have_read_record;
+ generate_error (&dtp->common, ERROR_END, NULL);
+ return;
+ }
+
+ if (to_read_record > 0)
+ {
+ if (dtp->u.p.current_unit->continued)
+ {
+ next_record_r_unf (dtp, 0);
+ us_read (dtp, 1);
+ }
+ else
+ {
+ generate_error (&dtp->common, ERROR_SHORT_RECORD, NULL);
+ return;
+ }
+ }
+ else
+ {
+ /* Normal exit, the read request has been fulfilled. */
+ break;
+ }
}
+ dtp->u.p.current_unit->bytes_left -= have_read_record;
if (short_record)
{
generate_error (&dtp->common, ERROR_SHORT_RECORD, NULL);
return;
}
+ return;
}
@@ -471,11 +587,20 @@ write_block (st_parameter_dt *dtp, int length)
}
-/* High level interface to swrite(), taking care of errors. */
+/* High level interface to swrite(), taking care of errors. This is only
+ called for unformatted files. There are three cases to consider:
+ Stream I/O, unformatted direct, unformatted sequential. */
static try
write_buf (st_parameter_dt *dtp, void *buf, size_t nbytes)
{
+
+ size_t have_written, to_write_subrecord;
+ int short_record;
+
+
+ /* Stream I/O. */
+
if (is_stream_io (dtp))
{
if (sseek (dtp->u.p.current_unit->s,
@@ -484,42 +609,88 @@ write_buf (st_parameter_dt *dtp, void *buf, size_t nbytes)
generate_error (&dtp->common, ERROR_OS, NULL);
return FAILURE;
}
+
+ if (swrite (dtp->u.p.current_unit->s, buf, &nbytes) != 0)
+ {
+ generate_error (&dtp->common, ERROR_OS, NULL);
+ return FAILURE;
+ }
+
+ dtp->u.p.current_unit->strm_pos += (gfc_offset) nbytes;
+
+ return SUCCESS;
}
- else
+
+ /* Unformatted direct access. */
+
+ if (dtp->u.p.current_unit->flags.access == ACCESS_DIRECT)
{
if (dtp->u.p.current_unit->bytes_left < (gfc_offset) nbytes)
{
- /* For preconnected units with default record length, set
- bytes left to unit record length and proceed, otherwise
- error. */
- if ((dtp->u.p.current_unit->unit_number == options.stdout_unit
- || dtp->u.p.current_unit->unit_number == options.stderr_unit)
- && dtp->u.p.current_unit->recl == DEFAULT_RECL)
- dtp->u.p.current_unit->bytes_left = dtp->u.p.current_unit->recl;
- else
- {
- if (dtp->u.p.current_unit->flags.access == ACCESS_DIRECT)
- generate_error (&dtp->common, ERROR_DIRECT_EOR, NULL);
- else
- generate_error (&dtp->common, ERROR_EOR, NULL);
- return FAILURE;
- }
+ generate_error (&dtp->common, ERROR_DIRECT_EOR, NULL);
+ return FAILURE;
+ }
+
+ if (swrite (dtp->u.p.current_unit->s, buf, &nbytes) != 0)
+ {
+ generate_error (&dtp->common, ERROR_OS, NULL);
+ return FAILURE;
}
- dtp->u.p.current_unit->bytes_left -= (gfc_offset) nbytes;
+ dtp->u.p.current_unit->strm_pos += (gfc_offset) nbytes;
+
+ return SUCCESS;
+
}
- if (swrite (dtp->u.p.current_unit->s, buf, &nbytes) != 0)
+ /* Unformatted sequential. */
+
+ have_written = 0;
+
+ if (dtp->u.p.current_unit->flags.has_recl
+ && (gfc_offset) nbytes > dtp->u.p.current_unit->bytes_left)
{
- generate_error (&dtp->common, ERROR_OS, NULL);
- return FAILURE;
+ nbytes = dtp->u.p.current_unit->bytes_left;
+ short_record = 1;
+ }
+ else
+ {
+ short_record = 0;
}
- if ((dtp->common.flags & IOPARM_DT_HAS_SIZE) != 0)
- dtp->u.p.size_used += (gfc_offset) nbytes;
+ while (1)
+ {
+
+ to_write_subrecord =
+ (size_t) dtp->u.p.current_unit->bytes_left_subrecord < nbytes ?
+ (size_t) dtp->u.p.current_unit->bytes_left_subrecord : nbytes;
+
+ dtp->u.p.current_unit->bytes_left_subrecord -=
+ (gfc_offset) to_write_subrecord;
- dtp->u.p.current_unit->strm_pos += (gfc_offset) nbytes;
+ if (swrite (dtp->u.p.current_unit->s, buf + have_written,
+ &to_write_subrecord) != 0)
+ {
+ generate_error (&dtp->common, ERROR_OS, NULL);
+ return FAILURE;
+ }
+
+ dtp->u.p.current_unit->strm_pos += (gfc_offset) to_write_subrecord;
+ nbytes -= to_write_subrecord;
+ have_written += to_write_subrecord;
+ if (nbytes == 0)
+ break;
+
+ next_record_w_unf (dtp, 1);
+ us_write (dtp, 1);
+ }
+ dtp->u.p.current_unit->bytes_left -= have_written;
+ if (short_record)
+ {
+ generate_error (&dtp->common, ERROR_SHORT_RECORD, NULL);
+ return FAILURE;
+ }
return SUCCESS;
}
@@ -1357,7 +1528,7 @@ transfer_array (st_parameter_dt *dtp, gfc_array_char *desc, int kind,
/* Preposition a sequential unformatted file while reading. */
static void
-us_read (st_parameter_dt *dtp)
+us_read (st_parameter_dt *dtp, int continued)
{
char *p;
int n;
@@ -1370,7 +1541,7 @@ us_read (st_parameter_dt *dtp)
return;
if (compile_options.record_marker == 0)
- n = sizeof (gfc_offset);
+ n = sizeof (GFC_INTEGER_4);
else
n = compile_options.record_marker;
@@ -1393,12 +1564,8 @@ us_read (st_parameter_dt *dtp)
/* Only CONVERT_NATIVE and CONVERT_SWAP are valid here. */
if (dtp->u.p.current_unit->flags.convert == CONVERT_NATIVE)
{
- switch (compile_options.record_marker)
+ switch (nr)
{
- case 0:
- memcpy (&i, p, sizeof(gfc_offset));
- break;
-
case sizeof(GFC_INTEGER_4):
memcpy (&i4, p, sizeof (i4));
i = i4;
@@ -1415,12 +1582,8 @@ us_read (st_parameter_dt *dtp)
}
}
else
- switch (compile_options.record_marker)
+ switch (nr)
{
- case 0:
- reverse_memcpy (&i, p, sizeof(gfc_offset));
- break;
-
case sizeof(GFC_INTEGER_4):
reverse_memcpy (&i4, p, sizeof (i4));
i = i4;
@@ -1436,7 +1599,19 @@ us_read (st_parameter_dt *dtp)
break;
}
- dtp->u.p.current_unit->bytes_left = i;
+ if (i >= 0)
+ {
+ dtp->u.p.current_unit->bytes_left_subrecord = i;
+ dtp->u.p.current_unit->continued = 0;
+ }
+ else
+ {
+ dtp->u.p.current_unit->bytes_left_subrecord = -i;
+ dtp->u.p.current_unit->continued = 1;
+ }
+
+ if (! continued)
+ dtp->u.p.current_unit->bytes_left = dtp->u.p.current_unit->recl;
}
@@ -1444,7 +1619,7 @@ us_read (st_parameter_dt *dtp)
amount to writing a bogus length that will be filled in later. */
static void
-us_write (st_parameter_dt *dtp)
+us_write (st_parameter_dt *dtp, int continued)
{
size_t nbytes;
gfc_offset dummy;
@@ -1452,7 +1627,7 @@ us_write (st_parameter_dt *dtp)
dummy = 0;
if (compile_options.record_marker == 0)
- nbytes = sizeof (gfc_offset);
+ nbytes = sizeof (GFC_INTEGER_4);
else
nbytes = compile_options.record_marker ;
@@ -1460,12 +1635,12 @@ us_write (st_parameter_dt *dtp)
generate_error (&dtp->common, ERROR_OS, NULL);
/* For sequential unformatted, if RECL= was not specified in the OPEN
- we write until we have more bytes than can fit in the record markers.
- If disk space runs out first, it will error on the write. */
- if (dtp->u.p.current_unit->flags.has_recl == 0)
- dtp->u.p.current_unit->recl = max_offset;
+ we write until we have more bytes than can fit in the subrecord
+ markers, then we write a new subrecord. */
- dtp->u.p.current_unit->bytes_left = dtp->u.p.current_unit->recl;
+ dtp->u.p.current_unit->bytes_left_subrecord =
+ dtp->u.p.current_unit->recl_subrecord;
+ dtp->u.p.current_unit->continued = continued;
}
@@ -1491,9 +1666,9 @@ pre_position (st_parameter_dt *dtp)
case UNFORMATTED_SEQUENTIAL:
if (dtp->u.p.mode == READING)
- us_read (dtp);
+ us_read (dtp, 0);
else
- us_write (dtp);
+ us_write (dtp, 0);
break;
@@ -1886,17 +2061,92 @@ next_array_record (st_parameter_dt *dtp, array_loop_spec *ls)
return index;
}
-/* Space to the next record for read mode. If the file is not
- seekable, we read MAX_READ chunks until we get to the right
+
+
+/* Skip to the end of the current record, taking care of an optional
+ record marker of length BYTES. If the file is not seekable, we
+ read chunks of size MAX_READ until we get to the right
position. */
#define MAX_READ 4096
static void
+skip_record (st_parameter_dt *dtp, size_t bytes)
+{
+ gfc_offset new;
+ int rlength, length;
+ char *p;
+
+ dtp->u.p.current_unit->bytes_left_subrecord += bytes;
+ if (dtp->u.p.current_unit->bytes_left_subrecord == 0)
+ return;
+
+ if (is_seekable (dtp->u.p.current_unit->s))
+ {
+ new = file_position (dtp->u.p.current_unit->s)
+ + dtp->u.p.current_unit->bytes_left_subrecord;
+
+ /* Direct access files do not generate END conditions,
+ only I/O errors. */
+ if (sseek (dtp->u.p.current_unit->s, new) == FAILURE)
+ generate_error (&dtp->common, ERROR_OS, NULL);
+ }
+ else
+ { /* Seek by reading data. */
+ while (dtp->u.p.current_unit->bytes_left_subrecord > 0)
+ {
+ rlength = length =
+ (MAX_READ < dtp->u.p.current_unit->bytes_left_subrecord) ?
+ MAX_READ : dtp->u.p.current_unit->bytes_left_subrecord;
+
+ p = salloc_r (dtp->u.p.current_unit->s, &rlength);
+ if (p == NULL)
+ {
+ generate_error (&dtp->common, ERROR_OS, NULL);
+ return;
+ }
+
+ dtp->u.p.current_unit->bytes_left_subrecord -= length;
+ }
+ }
+
+}
+
+#undef MAX_READ
+
+/* Advance to the next record when reading unformatted files, taking
+ care of subrecords. If complete_record is nonzero, we loop
+ until all subrecords are cleared. */
+
+static void
+next_record_r_unf (st_parameter_dt *dtp, int complete_record)
+{
+ size_t bytes;
+
+ bytes = compile_options.record_marker == 0 ?
+ sizeof (GFC_INTEGER_4) : compile_options.record_marker;
+
+ while (1)
+ {
+
+ /* Skip over tail */
+
+ skip_record (dtp, bytes);
+
+ if ( ! (complete_record && dtp->u.p.current_unit->continued))
+ return;
+
+ us_read (dtp, 1);
+ }
+}
+
+/* Space to the next record for read mode. */
+
+static void
next_record_r (st_parameter_dt *dtp)
{
- gfc_offset new, record;
- int bytes_left, rlength, length;
+ gfc_offset record;
+ int length, bytes_left;
char *p;
switch (current_mode (dtp))
@@ -1906,47 +2156,12 @@ next_record_r (st_parameter_dt *dtp)
return;
case UNFORMATTED_SEQUENTIAL:
-
- /* Skip over tail */
- dtp->u.p.current_unit->bytes_left +=
- compile_options.record_marker == 0 ?
- sizeof (gfc_offset) : compile_options.record_marker;
-
- /* Fall through... */
+ next_record_r_unf (dtp, 1);
+ break;
case FORMATTED_DIRECT:
case UNFORMATTED_DIRECT:
- if (dtp->u.p.current_unit->bytes_left == 0)
- break;
-
- if (is_seekable (dtp->u.p.current_unit->s))
- {
- new = file_position (dtp->u.p.current_unit->s)
- + dtp->u.p.current_unit->bytes_left;
-
- /* Direct access files do not generate END conditions,
- only I/O errors. */
- if (sseek (dtp->u.p.current_unit->s, new) == FAILURE)
- generate_error (&dtp->common, ERROR_OS, NULL);
-
- }
- else
- { /* Seek by reading data. */
- while (dtp->u.p.current_unit->bytes_left > 0)
- {
- rlength = length = (MAX_READ > dtp->u.p.current_unit->bytes_left) ?
- MAX_READ : dtp->u.p.current_unit->bytes_left;
-
- p = salloc_r (dtp->u.p.current_unit->s, &rlength);
- if (p == NULL)
- {
- generate_error (&dtp->common, ERROR_OS, NULL);
- break;
- }
-
- dtp->u.p.current_unit->bytes_left -= length;
- }
- }
+ skip_record (dtp, 0);
break;
case FORMATTED_STREAM:
@@ -2025,19 +2240,15 @@ write_us_marker (st_parameter_dt *dtp, const gfc_offset buf)
char p[sizeof (GFC_INTEGER_8)];
if (compile_options.record_marker == 0)
- len = sizeof (gfc_offset);
+ len = sizeof (GFC_INTEGER_4);
else
len = compile_options.record_marker;
/* Only CONVERT_NATIVE and CONVERT_SWAP are valid here. */
if (dtp->u.p.current_unit->flags.convert == CONVERT_NATIVE)
{
- switch (compile_options.record_marker)
+ switch (len)
{
- case 0:
- return swrite (dtp->u.p.current_unit->s, &buf, &len);
- break;
-
case sizeof (GFC_INTEGER_4):
buf4 = buf;
return swrite (dtp->u.p.current_unit->s, &buf4, &len);
@@ -2055,13 +2266,8 @@ write_us_marker (st_parameter_dt *dtp, const gfc_offset buf)
}
else
{
- switch (compile_options.record_marker)
+ switch (len)
{
- case 0:
- reverse_memcpy (p, &buf, sizeof (gfc_offset));
- return swrite (dtp->u.p.current_unit->s, p, &len);
- break;
-
case sizeof (GFC_INTEGER_4):
buf4 = buf;
reverse_memcpy (p, &buf4, sizeof (GFC_INTEGER_4));
@@ -2070,7 +2276,7 @@ write_us_marker (st_parameter_dt *dtp, const gfc_offset buf)
case sizeof (GFC_INTEGER_8):
buf8 = buf;
- reverse_memcpy (p, &buf8, sizeof (GFC_INTEGER_4));
+ reverse_memcpy (p, &buf8, sizeof (GFC_INTEGER_8));
return swrite (dtp->u.p.current_unit->s, p, &len);
break;
@@ -2082,16 +2288,72 @@ write_us_marker (st_parameter_dt *dtp, const gfc_offset buf)
}
+/* Position to the next (sub)record in write mode for
+ unformatted sequential files. */
+
+static void
+next_record_w_unf (st_parameter_dt *dtp, int next_subrecord)
+{
+ gfc_offset c, m, m_write;
+ size_t record_marker;
+
+ /* Bytes written. */
+ m = dtp->u.p.current_unit->recl_subrecord
+ - dtp->u.p.current_unit->bytes_left_subrecord;
+ c = file_position (dtp->u.p.current_unit->s);
+
+ /* Write the length tail. If we finish a record containing
+ subrecords, we write out the negative length. */
+
+ if (dtp->u.p.current_unit->continued)
+ m_write = -m;
+ else
+ m_write = m;
+
+ if (write_us_marker (dtp, m_write) != 0)
+ goto io_error;
+
+ if (compile_options.record_marker == 0)
+ record_marker = sizeof (GFC_INTEGER_4);
+ else
+ record_marker = compile_options.record_marker;
+
+ /* Seek to the head and overwrite the bogus length with the real
+ length. */
+
+ if (sseek (dtp->u.p.current_unit->s, c - m - record_marker)
+ == FAILURE)
+ goto io_error;
+
+ if (next_subrecord)
+ m_write = -m;
+ else
+ m_write = m;
+
+ if (write_us_marker (dtp, m_write) != 0)
+ goto io_error;
+
+ /* Seek past the end of the current record. */
+
+ if (sseek (dtp->u.p.current_unit->s, c + record_marker) == FAILURE)
+ goto io_error;
+
+ return;
+
+ io_error:
+ generate_error (&dtp->common, ERROR_OS, NULL);
+ return;
+
+}
/* Position to the next record in write mode. */
static void
next_record_w (st_parameter_dt *dtp, int done)
{
- gfc_offset c, m, record, max_pos;
+ gfc_offset m, record, max_pos;
int length;
char *p;
- size_t record_marker;
/* Zero counters for X- and T-editing. */
max_pos = dtp->u.p.max_pos;
@@ -2119,35 +2381,7 @@ next_record_w (st_parameter_dt *dtp, int done)
break;
case UNFORMATTED_SEQUENTIAL:
- /* Bytes written. */
- m = dtp->u.p.current_unit->recl - dtp->u.p.current_unit->bytes_left;
- c = file_position (dtp->u.p.current_unit->s);
-
- /* Write the length tail. */
-
- if (write_us_marker (dtp, m) != 0)
- goto io_error;
-
- if (compile_options.record_marker == 4)
- record_marker = sizeof(GFC_INTEGER_4);
- else
- record_marker = sizeof (gfc_offset);
-
- /* Seek to the head and overwrite the bogus length with the real
- length. */
-
- if (sseek (dtp->u.p.current_unit->s, c - m - record_marker)
- == FAILURE)
- goto io_error;
-
- if (write_us_marker (dtp, m) != 0)
- goto io_error;
-
- /* Seek past the end of the current record. */
-
- if (sseek (dtp->u.p.current_unit->s, c + record_marker) == FAILURE)
- goto io_error;
-
+ next_record_w_unf (dtp, 0);
break;
case FORMATTED_STREAM:
diff --git a/libgfortran/libgfortran.h b/libgfortran/libgfortran.h
index c160d19dd64..89f0e48c5bd 100644
--- a/libgfortran/libgfortran.h
+++ b/libgfortran/libgfortran.h
@@ -373,6 +373,7 @@ typedef struct
int pedantic;
int convert;
size_t record_marker;
+ int max_subrecord_length;
}
compile_options_t;
@@ -382,6 +383,7 @@ internal_proto(compile_options);
extern void init_compile_options (void);
internal_proto(init_compile_options);
+#define GFC_MAX_SUBRECORD_LENGTH 2147483639 /* 2**31 - 9 */
/* Structure for statement options. */
diff --git a/libgfortran/runtime/compile_options.c b/libgfortran/runtime/compile_options.c
index fb6ac509f13..b2aef05a832 100644
--- a/libgfortran/runtime/compile_options.c
+++ b/libgfortran/runtime/compile_options.c
@@ -86,13 +86,11 @@ set_record_marker (int val)
switch(val)
{
case 4:
- if (sizeof (GFC_INTEGER_4) != sizeof (gfc_offset))
- compile_options.record_marker = sizeof (GFC_INTEGER_4);
+ compile_options.record_marker = sizeof (GFC_INTEGER_4);
break;
case 8:
- if (sizeof (GFC_INTEGER_8) != sizeof (gfc_offset))
- compile_options.record_marker = sizeof (GFC_INTEGER_8);
+ compile_options.record_marker = sizeof (GFC_INTEGER_8);
break;
default:
@@ -100,3 +98,17 @@ set_record_marker (int val)
break;
}
}
+
+extern void set_max_subrecord_length (int);
+export_proto (set_max_subrecord_length);
+
+void set_max_subrecord_length (int val)
+{
+ if (val > GFC_MAX_SUBRECORD_LENGTH || val < 1)
+ {
+ runtime_error ("Invalid value for maximum subrecord length");
+ return;
+ }
+
+ compile_options.max_subrecord_length = val;
+}
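The limit of 2**31 - 9 appears chosen so that a maximal subrecord plus its two four-byte length markers totals exactly 2**31 - 1 bytes, the largest value a signed 32-bit marker span can describe. A sketch of the range check — `valid_subrecord_length` is a hypothetical name for the predicate `set_max_subrecord_length` enforces:

```c
#include <stdint.h>

#define GFC_MAX_SUBRECORD_LENGTH 2147483639  /* 2**31 - 9 */

/* Sketch of the validation in set_max_subrecord_length: the length must
   be positive, and marker + data + marker (4 + val + 4 bytes) must not
   exceed 2**31 - 1.  */
static int
valid_subrecord_length (int val)
{
  return val >= 1 && val <= GFC_MAX_SUBRECORD_LENGTH;
}
```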
diff --git a/libgfortran/runtime/error.c b/libgfortran/runtime/error.c
index 3f03f03f500..122f6d14bab 100644
--- a/libgfortran/runtime/error.c
+++ b/libgfortran/runtime/error.c
@@ -437,7 +437,7 @@ translate_error (int code)
break;
case ERROR_SHORT_RECORD:
- p = "Short record on unformatted read";
+ p = "I/O past end of record on unformatted file";
break;
default:
diff --git a/libgomp/ChangeLog b/libgomp/ChangeLog
index 56d43681be9..7b19325f2f1 100644
--- a/libgomp/ChangeLog
+++ b/libgomp/ChangeLog
@@ -1,3 +1,7 @@
+2006-12-02 Eric Botcazou <ebotcazou@libertysurf.fr>
+
+ * configure.tgt: Force initial-exec TLS model on Linux only.
+
2006-11-13 Daniel Jacobowitz <dan@codesourcery.com>
* configure: Regenerated.
diff --git a/libgomp/configure.tgt b/libgomp/configure.tgt
index 7464d6a1cdf..89bae02e80a 100644
--- a/libgomp/configure.tgt
+++ b/libgomp/configure.tgt
@@ -13,9 +13,14 @@
# Optimize TLS usage by avoiding the overhead of dynamic allocation.
# This does require that the library be present during process
# startup, so mark the library as not to be dlopened.
-if test $have_tls = yes && test "$with_gnu_ld" = "yes"; then
+if test $have_tls = yes ; then
+ case "${target}" in
+
+ *-*-linux*)
XCFLAGS="${XCFLAGS} -ftls-model=initial-exec"
XLDFLAGS="${XLDFLAGS} -Wl,-z,nodlopen"
+ ;;
+ esac
fi
# Since we require POSIX threads, assume a POSIX system by default.
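The configure.tgt change restricts the initial-exec TLS optimization to Linux targets. The model trades flexibility for speed: each thread-local variable gets a fixed thread-pointer-relative offset resolved at load time instead of a `__tls_get_addr` call, at the cost that the library can no longer be loaded via dlopen later — which is why the same branch also passes `-Wl,-z,nodlopen`. A minimal sketch of a variable pinned to that model via the GCC `tls_model` attribute (per-variable equivalent of the `-ftls-model=initial-exec` XCFLAGS setting):

```c
/* With the initial-exec model, accesses to this thread-local variable
   compile to a fixed thread-pointer-relative load (GCC extension).  */
static __thread int counter __attribute__ ((tls_model ("initial-exec")));

static int
bump (void)
{
  return ++counter;
}
```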
diff --git a/libstdc++-v3/ChangeLog b/libstdc++-v3/ChangeLog
index 431845ba7fe..1031465d422 100644
--- a/libstdc++-v3/ChangeLog
+++ b/libstdc++-v3/ChangeLog
@@ -1,3 +1,15 @@
+2006-12-02 Howard Hinnant <hhinnant@apple.com>
+
+ * acinclude.m4: Allow OPTIMIZE_CXXFLAGS to be set by configure.host.
+ * configure.host: Set OPTIMIZE_CXXFLAGS to -fvisibility-inlines-hidden
+ for x86/darwin.
+ * configure: Regenerate.
+
+2006-12-01 Paolo Carlini <pcarlini@suse.de>
+
+ * include/ext/mt_allocator.h (__pool_base::_M_get_align): Remove
+ redundant const qualifier on the return type.
+
2006-11-29 Benjamin Kosnik <bkoz@redhat.com>
* include/ext/throw_allocator.h: Consistent @file markup.
diff --git a/libstdc++-v3/acinclude.m4 b/libstdc++-v3/acinclude.m4
index a15e076ce78..38345fa2e32 100644
--- a/libstdc++-v3/acinclude.m4
+++ b/libstdc++-v3/acinclude.m4
@@ -653,8 +653,8 @@ dnl
AC_DEFUN([GLIBCXX_EXPORT_FLAGS], [
# Optimization flags that are probably a good idea for thrill-seekers. Just
# uncomment the lines below and make, everything else is ready to go...
+ # Alternatively OPTIMIZE_CXXFLAGS can be set in configure.host.
# OPTIMIZE_CXXFLAGS = -O3 -fstrict-aliasing -fvtable-gc
- OPTIMIZE_CXXFLAGS=
AC_SUBST(OPTIMIZE_CXXFLAGS)
WARN_FLAGS='-Wall -Wextra -Wwrite-strings -Wcast-qual'
diff --git a/libstdc++-v3/configure b/libstdc++-v3/configure
index 8454849501d..ae4e09e15dc 100755
--- a/libstdc++-v3/configure
+++ b/libstdc++-v3/configure
@@ -110313,8 +110313,8 @@ echo "${ECHO_T}$gxx_include_dir" >&6
# Optimization flags that are probably a good idea for thrill-seekers. Just
# uncomment the lines below and make, everything else is ready to go...
+ # Alternatively OPTIMIZE_CXXFLAGS can be set in configure.host.
# OPTIMIZE_CXXFLAGS = -O3 -fstrict-aliasing -fvtable-gc
- OPTIMIZE_CXXFLAGS=
WARN_FLAGS='-Wall -Wextra -Wwrite-strings -Wcast-qual'
diff --git a/libstdc++-v3/configure.host b/libstdc++-v3/configure.host
index 441eb4cab37..ef4d1de4c02 100644
--- a/libstdc++-v3/configure.host
+++ b/libstdc++-v3/configure.host
@@ -202,6 +202,11 @@ case "${host_os}" in
# On Darwin, performance is improved if libstdc++ is single-module,
# and on 8+ compatibility is better if not -flat_namespace.
OPT_LDFLAGS="${OPT_LDFLAGS} -Wl,-single_module"
+ case "${host_cpu}" in
+ i[34567]86 | x86_64)
+ OPTIMIZE_CXXFLAGS="${OPTIMIZE_CXXFLAGS} -fvisibility-inlines-hidden"
+ ;;
+ esac
os_include_dir="os/bsd/darwin"
;;
*djgpp*) # leading * picks up "msdosdjgpp"
diff --git a/libstdc++-v3/include/ext/mt_allocator.h b/libstdc++-v3/include/ext/mt_allocator.h
index bc2d61f6eec..6083cdb9c25 100644
--- a/libstdc++-v3/include/ext/mt_allocator.h
+++ b/libstdc++-v3/include/ext/mt_allocator.h
@@ -151,7 +151,7 @@ _GLIBCXX_BEGIN_NAMESPACE(__gnu_cxx)
_M_get_binmap(size_t __bytes)
{ return _M_binmap[__bytes]; }
- const size_t
+ size_t
_M_get_align()
{ return _M_options._M_align; }
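The final hunk drops a top-level `const` qualifier from `_M_get_align`'s by-value return type. Such a qualifier is meaningless: the caller receives an rvalue copy either way, so nothing is constrained, and compilers flag it under `-Wignored-qualifiers`. A small C illustration with hypothetical function names:

```c
#include <stddef.h>

/* Hypothetical pair mirroring the mt_allocator change: both functions
   behave identically, because const on a by-value return is ignored.  */
static const size_t
get_align_before (void)   /* may warn: -Wignored-qualifiers */
{
  return sizeof (void *);
}

static size_t
get_align_after (void)
{
  return sizeof (void *);
}
```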