path: root/activerecord/test/cases/adapters
* Remove needless `change_table`  (Ryuta Kamizono, 2017-12-15, 1 file, -6/+2)

  These tests use `remove_column` directly and never use `t` inside the `change_table` block.

* Suppress expected exceptions by `report_on_exception` = `false`  (Yasuo Honda, 2017-12-14, 1 file, -0/+2)

  Follow-up to #31428 to address similar exceptions with the mysql2 adapter.

* Suppress `warning: BigDecimal.new is deprecated` in activerecord  (Yasuo Honda, 2017-12-13, 5 files, -13/+13)

  `BigDecimal.new` has been deprecated in BigDecimal 1.3.3, which will be the default
  for Ruby 2.5. Refer to
  https://github.com/ruby/bigdecimal/commit/533737338db915b00dc7168c3602e4b462b23503

  ```
  $ cd rails/activerecord/
  $ git grep -l BigDecimal.new | grep \.rb | xargs sed -i -e "s/BigDecimal.new/BigDecimal/g"
  ```

  - Changes made only to Active Record. The same change will be applied to the other
    modules once this commit is merged.
  - The following deprecation has not been addressed because it is reported at
    `ActiveRecord::Result.new`, and `ActiveRecord::Result.ancestors` does not include
    `BigDecimal`.

  * Not addressed

    ```ruby
    /path/to/rails/activerecord/lib/active_record/connection_adapters/mysql/database_statements.rb:34: warning: BigDecimal.new is deprecated
    ```

  * database_statements.rb:34

    ```ruby
    ActiveRecord::Result.new(result.fields, result.to_a) if result
    ```

  * ActiveRecord::Result.ancestors

    ```ruby
    [ActiveRecord::Result, Enumerable, ActiveSupport::ToJsonWithActiveSupportEncoder, Object,
     Metaclass::ObjectMethods, Mocha::ObjectMethods, PP::ObjectMixin,
     ActiveSupport::Dependencies::Loadable, ActiveSupport::Tryable,
     JSON::Ext::Generator::GeneratorMethods::Object, Kernel, BasicObject]
    ```

  This commit has been tested with these Ruby and BigDecimal versions:

  - ruby 2.5.0dev (2017-12-14 trunk 61217) with bigdecimal 1.3.3
  - ruby 2.4.2p198 with bigdecimal 1.3.0
  - ruby 2.3.5p376 with bigdecimal 1.2.8
  - ruby 2.2.8p477 with bigdecimal 1.2.6

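  For reference, the substitution performed by the sed one-liner above is mechanical; a
  minimal illustration (not taken from the commit itself):

  ```ruby
  require "bigdecimal"

  # Deprecated form, warns on BigDecimal 1.3.3 (the Ruby 2.5 default):
  #   total = BigDecimal.new("12.5")
  # The form the one-liner switches to:
  total = BigDecimal("12.5")
  puts total.to_s("F")  # => 12.5
  ```
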
* Suppress expected exceptions by `report_on_exception` = `false` in Ruby 2.5  (Yasuo Honda, 2017-12-13, 1 file, -0/+2)

  * Ruby 2.4 introduced `report_on_exception` to control whether exceptions raised
    inside a thread are reported; its default value is `false` in Ruby 2.4.
    Refer https://www.ruby-lang.org/en/news/2016/11/09/ruby-2-4-0-preview3-released/
  * Ruby 2.5 changes the `report_on_exception` default to `true`, as of
    https://svn.ruby-lang.org/cgi-bin/viewvc.cgi?revision=61183&view=revision

  This pull request suppresses expected exceptions by setting `report_on_exception`
  to `false`. It also supports Ruby 2.3, which does not have `report_on_exception`.

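  A plain-Ruby sketch of the suppression pattern described above (not the actual test
  code; the raised error is illustrative):

  ```ruby
  # Silence the report for a thread that is expected to raise, while keeping
  # Ruby 2.3 compatibility via the respond_to? guard.
  thread = Thread.new do
    Thread.current.report_on_exception = false if Thread.current.respond_to?(:report_on_exception)
    raise "expected failure under test"
  end

  begin
    thread.join
  rescue RuntimeError
    # the test asserts on this exception instead of Ruby 2.5 printing it to stderr
  end
  ```
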
* SQLite: Fix `copy_table` with composite primary keys  (Ryuta Kamizono, 2017-12-08, 1 file, -2/+29)

  `connection.primary_key` also returns composite primary keys, so
  `from_primary_key_column` may not be found even when `from_primary_key` is present.

  ```
  % ARCONN=sqlite3 be ruby -w -Itest test/cases/adapters/sqlite3/sqlite3_adapter_test.rb -n test_copy_table_with_composite_primary_keys
  Using sqlite3
  Run options: -n test_copy_table_with_composite_primary_keys --seed 19041

  # Running:

  E

  Error:
  ActiveRecord::ConnectionAdapters::SQLite3AdapterTest#test_copy_table_with_composite_primary_keys:
  NoMethodError: undefined method `type' for nil:NilClass
      /path/to/rails/activerecord/lib/active_record/connection_adapters/sqlite3_adapter.rb:411:in `block in copy_table'
  ```

  This change fixes `copy_table` so that it does not lose composite primary keys.

* Merge pull request #31327 from aellispierce/custom-id-change-table-sqlite  (Eileen M. Uchitelle, 2017-12-07, 1 file, -0/+18)

  Fix sqlite migrations with custom primary keys

* Fix sqlite migrations with custom primary keys  (Ashley Ellis Pierce, 2017-12-06, 1 file, -0/+18)

  Previously, if a record was created with a custom primary key, that table could
  not be migrated using sqlite. While attempting to copy the table, the type of
  the primary key was ignored.

  Once that was corrected, copying the indexes would fail because custom primary
  keys are autoindexed by sqlite by default. To correct that, this skips copying
  an index if its name begins with "sqlite_". This is a reserved word that
  indicates that the index is an internal schema object; SQLite prohibits
  applications from creating objects whose names begin with "sqlite_", so this
  string should be safe to use as a check (see the sketch below).

  ref https://www.sqlite.org/fileformat2.html#intschema

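  An illustrative version of that check (the `Struct` and index names are made up for
  the sketch, not taken from the adapter):

  ```ruby
  # SQLite reserves the "sqlite_" prefix for internal schema objects such as
  # primary-key autoindexes, so indexes with that prefix are skipped when copying.
  Index = Struct.new(:name)

  indexes  = [Index.new("sqlite_autoindex_owners_1"), Index.new("index_owners_on_name")]
  copyable = indexes.reject { |index| index.name.start_with?("sqlite_") }

  p copyable.map(&:name)  # => ["index_owners_on_name"]
  ```
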
* Emulate JSON types for SQLite3 adapter (#29664)  (Ryuta Kamizono, 2017-12-03, 1 file, -2/+2)

  SQLite3 actually has no JSON storage class (the value is stored as TEXT, like Date
  and Time), but emulating JSON types is convenient for writing database-agnostic
  migrations.

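  For example, a database-agnostic migration like the hypothetical one below can declare
  a `json` column even on SQLite3 (table, column, and migration names are illustrative):

  ```ruby
  # On SQLite3 the column is stored as TEXT; on adapters with a native JSON
  # storage class the same definition uses the native type.
  class AddSettingsToUsers < ActiveRecord::Migration[5.2]
    def change
      add_column :users, :settings, :json
    end
  end
  ```
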
* Refactor `length`, `order`, and `opclass` index options dumping  (Ryuta Kamizono, 2017-12-03, 1 file, -3/+3)

* Merge pull request #31230 from dinahshi/postgresql_extract_sql  (Matthew Draper, 2017-12-03, 1 file, -2/+2)

  Extract sql fragment generators from PostgreSQL adapter

* Extract sql fragment generators for alter table from PostgreSQL adapter  (Dinah Shi, 2017-12-02, 1 file, -2/+2)

* Add support for PostgreSQL operator classes to add_index  (Greg Navis, 2017-11-30, 3 files, -5/+40)

  Add support for specifying non-default operator classes in PostgreSQL indexes.
  An example CREATE INDEX query that becomes possible is:

      CREATE INDEX users_name ON users USING gist (name gist_trgm_ops);

  Previously it was possible to specify the `gist` index but not the custom
  operator class. The `add_index` call for the above query is:

      add_index :users, :name, using: :gist, opclasses: {name: :gist_trgm_ops}

* Add new error class `QueryCanceled` which will be raised when canceling statement due to user request (#31235)  (Ryuta Kamizono, 2017-11-27, 2 files, -2/+56)

  This changes `StatementTimeout` to `QueryCanceled` for PostgreSQL.

  In MySQL, errno 1317 (`ER_QUERY_INTERRUPTED`) is only used when the query is
  manually cancelled. But in PostgreSQL, the `QUERY_CANCELED` error code (57014),
  which is used for `StatementTimeout`, is also used in that case, and we cannot
  tell which of the two reasons happened. So I decided to introduce the new error
  class `QueryCanceled`, closer to the error code name.

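  An illustrative handler on the PostgreSQL adapter (model and SQL are made up for the
  sketch):

  ```ruby
  # A query cancelled by user request, e.g. via pg_cancel_backend, surfaces as
  # ActiveRecord::QueryCanceled on PostgreSQL.
  begin
    Report.connection.execute("SELECT pg_sleep(60)")
  rescue ActiveRecord::QueryCanceled
    Rails.logger.warn("report query was cancelled")
  end
  ```
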
* Rename `TransactionTimeout` to more descriptive `LockWaitTimeout` (#31223)  (Ryuta Kamizono, 2017-11-27, 2 files, -4/+4)

  Since #31129, the new error class `StatementTimeout` has been added.
  `TransactionTimeout` is caused by a timeout shorter than `StatementTimeout`,
  but its name is too generic. I think it should be a name that makes the
  difference from `StatementTimeout` clear.

* Merge pull request #30980 from sobrinho/sobrinho/arel-star-ignored-columns  (Rafael França, 2017-11-13, 3 files, -30/+30)

  Do not use `Arel.star` when `ignored_columns`

* Change tests to use models which don't ignore any columns  (Jon Moss, 2017-11-13, 3 files, -30/+30)

* Add new error class `StatementTimeout` which will be raised when statement timeout exceeded (#31129)  (Ryuta Kamizono, 2017-11-13, 2 files, -0/+57)

  We sometimes use the MAX_EXECUTION_TIME hint for MySQL, depending on the situation.
  It prevents catastrophic performance degradation caused by badly performing queries.
  The new error class `StatementTimeout` makes that case easier to handle.

  https://dev.mysql.com/doc/refman/5.7/en/optimizer-hints.html#optimizer-hints-execution-time

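  An illustrative use of the hint together with the new error class (model and SQL are
  made up for the sketch):

  ```ruby
  # Cap a single query at 1000 ms with MySQL's MAX_EXECUTION_TIME optimizer hint
  # and handle the timeout explicitly when the cap is hit.
  begin
    Order.connection.execute(
      "SELECT /*+ MAX_EXECUTION_TIME(1000) */ COUNT(*) FROM orders"
    )
  rescue ActiveRecord::StatementTimeout
    Rails.logger.warn("query exceeded its 1000ms execution budget")
  end
  ```
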
* Raise `TransactionTimeout` when lock wait timeout exceeded for PG adapter  (Ryuta Kamizono, 2017-11-11, 2 files, -1/+30)

  Follow up of #30360.

* Should test actual error which is raised from the database  (Ryuta Kamizono, 2017-11-11, 1 file, -1/+23)

* Enable `Style/RedundantReturn` rubocop rule, and fixed a couple more  (Ryuta Kamizono, 2017-11-01, 1 file, -1/+1)

  Follow up of #31004.

* removed unnecessary semicolons  (Shuhei Kitagawa, 2017-10-28, 2 files, -2/+2)

* `supports_extensions?` always returns true since PostgreSQL 9.1  (Yasuo Honda, 2017-10-24, 4 files, -454/+440)

  Since the minimum version of PostgreSQL currently supported by Rails is 9.1,
  there is no need to check `supports_extensions?`.

  Refer https://www.postgresql.org/docs/9.1/static/sql-createextension.html
  "CREATE EXTENSION"

* Remove deprecated arguments from `#verify!`  (Rafael Mendonça França, 2017-10-23, 1 file, -12/+0)

* Remove deprecated argument `name` from `#indexes`  (Rafael Mendonça França, 2017-10-23, 2 files, -9/+1)

* Fix longer sequence name detection for serial columns (#28339)  (Ryuta Kamizono, 2017-10-15, 1 file, -0/+32)

  We already found the longer sequence name, but we could not determine whether it
  was the sequence name created by the serial type, because the max identifier
  length limitation was not taken into account. I've made the sequence name check
  respect the max identifier length.

  Fixes #28332.

* MySQL: Don't lose `auto_increment: true` in the `db/schema.rb`  (Ryuta Kamizono, 2017-10-15, 1 file, -0/+34)

  Currently `AUTO_INCREMENT` is implicitly used in the default primary key
  definition. But `AUTO_INCREMENT` is not only used for a single-column primary
  key; it is also used within a composite primary key. In that case,
  `auto_increment: true` should be dumped explicitly in `db/schema.rb`.

  Fixes #30894.

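  A sketch of the composite-primary-key case described above (table and column names are
  made up, not taken from the commit):

  ```ruby
  # AUTO_INCREMENT applies to only one member of the composite key, so the schema
  # dump must record auto_increment: true explicitly to round-trip this table.
  create_table :shipments, primary_key: [:id, :region_id], force: true do |t|
    t.bigint :id, null: false, auto_increment: true
    t.bigint :region_id, null: false
  end
  ```
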
* Add JSON attribute test cases for SQLite3 adapter  (Ryuta Kamizono, 2017-10-05, 1 file, -0/+29)

* `Postgres::OID::Range` serializes to a `Range`, quote in `Quoting`  (Thomas Cannon, 2017-09-26, 2 files, -1/+52)

  PostgreSQL 9.1+ introduced range types, and Rails added support for using this
  datatype in ActiveRecord. However, the serialization of `PostgreSQL::OID::Range`
  was incomplete, because it did not properly quote the bounds that make up the
  range.

  A clear example of this is a `tsrange`. Normally, ActiveRecord quotes Date/Time
  objects to include the milliseconds. However, the way `PostgreSQL::OID::Range`
  serialized its bounds, the milliseconds were dropped. This meant that the value
  was incomplete and not equal to the submitted value.

  An example of normal timestamps vs. a `tsrange`. Note how the bounds for the
  range do not include their milliseconds (they were present in the ruby Range):

      UPDATE "iterations" SET "updated_at" = $1, "range" = $2 WHERE "iterations"."id" = $3
      [["updated_at", "2017-09-23 17:07:01.304864"],
       ["range", "[2017-09-23 00:00:00 UTC,2017-09-23 23:59:59 UTC]"],
       ["id", 1234]]

  `PostgreSQL::OID::Range` serialized the range by interpolating a string for the
  range, which works for most cases, but does not work for timestamps:

      def serialize(value)
        if value.is_a?(::Range)
          from = type_cast_single_for_database(value.begin)
          to = type_cast_single_for_database(value.end)
          "[#{from},#{to}#{value.exclude_end? ? ')' : ']'}"
        else
          super
        end
      end

      (byebug) from = type_cast_single_for_database(value.begin)
      2010-01-01 13:30:00 UTC
      (byebug) to = type_cast_single_for_database(value.end)
      2011-02-02 19:30:00 UTC
      (byebug) "[#{from},#{to}#{value.exclude_end? ? ')' : ']'}"
      "[2010-01-01 13:30:00 UTC,2011-02-02 19:30:00 UTC)"

  @sgrif (the original implementer for Postgres Range support) provided some
  feedback about where the quoting should occur:

      Yeah, quoting at all is definitely wrong here. I'm not sure what I was
      thinking in 02579b5, but what this is doing is definitely in the wrong
      place. It should probably just be returning a range of
      subtype.serialize(value.begin) and subtype.serialize(value.end), and
      letting the adapter handle the rest.

  `Postgres::OID::Range` now returns a `Range` object, and
  `ActiveRecord::ConnectionAdapters::PostgreSQL::Quoting` can now encode and
  quote a `Range`:

      def encode_range(range)
        "[#{type_cast(range.first)},#{type_cast(range.last)}#{range.exclude_end? ? ')' : ']'}"
      end
      ...
      encode_range(range)
      #=> "['2010-01-01 13:30:00.670277','2011-02-02 19:30:00.745125')"

  This commit includes tests to make sure the milliseconds are preserved in
  `tsrange` and `tstzrange` columns.

* Fix collided sequence name detection  (Ryuta Kamizono, 2017-09-18, 1 file, -0/+36)

  If a sequence with the colliding name already exists, a newly created serial
  column will generate an alternative sequence name. Fix sequence name detection
  to allow the alternative names.

* Add an extra assertion to ensure dumping schema default as expected  (Ryuta Kamizono, 2017-09-08, 1 file, -1/+4)

* Fix `quote_default_expression` for UUID with array default  (Ryuta Kamizono, 2017-09-08, 1 file, -0/+10)

  Fixes #30539.

* Omit the default limit for float columns (#28041)  (Ryuta Kamizono, 2017-08-27, 1 file, -1/+1)

* Prefer to place table options before `force: :cascade` (#28005)  (Ryuta Kamizono, 2017-08-27, 1 file, -4/+4)

  I added table options after `force: :cascade` in #17569 to avoid touching
  existing tests (reducing the diff). But `force: :cascade` is not an important
  piece of information, so I prefer to place table options before `force: :cascade`.

* Merge pull request #30360 from gcourtemanche/transaction_timedout  (Rafael França, 2017-08-22, 1 file, -0/+6)

  Add TransactionTimeout for MySQL error code 1205

* Add TransactionTimeout for MySQL error code 1205  (Gabriel Courtemanche, 2017-08-22, 1 file, -0/+6)

* Update links to use https instead of http [ci skip]  (Yoshiyuki Hirano, 2017-08-22, 1 file, -1/+1)

* Prevent extra `SET time zone` in `configure_connection` (#28413)  (Ryuta Kamizono, 2017-08-21, 1 file, -0/+7)

  `SET time zone 'value'` is an alias for `SET timezone TO 'value'`.
  https://www.postgresql.org/docs/current/static/sql-set.html

  So if `variables["timezone"]` is specified, it is enough to `SET timezone` once.

* Allow `serialize` with a custom coder on `json` and `array` columns  (Ryuta Kamizono, 2017-08-13, 1 file, -1/+25)

  We already have a test case for `serialize` with a custom coder in
  `PostgresqlHstoreTest`.

  https://github.com/rails/rails/blob/v5.1.3/activerecord/test/cases/adapters/postgresql/hstore_test.rb#L316-L335

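  A hedged sketch of the pattern this allows on native json/array columns: any object
  responding to `load`/`dump` can act as the coder (the coder class, model, and column
  below are made up):

  ```ruby
  # Symbolize keys when reading the json column back; write it through unchanged.
  class SymbolizingCoder
    def self.dump(value)
      value
    end

    def self.load(value)
      value && value.symbolize_keys
    end
  end

  class Setting < ActiveRecord::Base
    serialize :preferences, SymbolizingCoder  # `preferences` is assumed to be a json column
  end
  ```
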
* Move `test_not_compatible_with_serialize_macro` to `JSONSharedTestCases`  (Ryuta Kamizono, 2017-08-11, 1 file, -11/+2)

  Because `JSONSharedTestCases` is also used for `Mysql2JSONTest`.

* Merge pull request #29520 from kirs/serialize-vs-postgres-native-column  (Sean Griffin, 2017-08-04, 2 files, -0/+18)

  Do not let use `serialize` on native JSON/array column

* Do not let use `serialize` on native JSON/array column  (Kir Shatrov, 2017-08-04, 2 files, -0/+18)

* Merge remote-tracking branch 'origin/master' into unlock-minitest  (Rafael Mendonça França, 2017-08-01, 68 files, -26/+183)

* Merge pull request #29869 from kamipo/make_type_map_to_private  (Rafael França, 2017-07-21, 2 files, -7/+7)

  Make `type_map` private because it is only used in the connection adapter

* Make `type_map` private because it is only used in the connection adapter  (Ryuta Kamizono, 2017-07-20, 2 files, -7/+7)

  `type_map` is an internal API and it is only used in the connection adapter.
  Also, some type map initializer methods require a `type_map` to be passed in,
  even though those instances already hold a `type_map` themselves. So we don't
  need to pass `type_map` explicitly to the initializers.

* Merge pull request #29732 from kirs/frozen-activerecord  (Rafael França, 2017-07-21, 68 files, -0/+136)

  Use frozen-string-literal in ActiveRecord

* Use frozen-string-literal in ActiveRecord  (Kir Shatrov, 2017-07-19, 68 files, -0/+136)

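  For reference, this change prepends the magic comment to each file; a minimal
  illustration of its effect:

  ```ruby
  # frozen_string_literal: true

  # With the magic comment above, string literals in this file are frozen, so
  # accidental in-place mutation raises instead of silently allocating.
  greeting = "hello"
  greeting.frozen?  # => true
  ```
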
* Revert "Extract `bind_param` and `bind_attribute` into `ActiveRecord::TestCase`"  (Sean Griffin, 2017-07-21, 5 files, -15/+23)

  This reverts commit b6ad4052d18e4b29b8a092526c2beef013e2bf4f.

  This is not something that the majority of Active Record should be testing or
  care about. We should look at having fewer places rely on these details, not
  make it easier to rely on them.

* Merge pull request #29033 from kamipo/make_preload_query_to_prepared_statements  (Sean Griffin, 2017-07-18, 2 files, -2/+2)

  Make preload query preparable

* Make preload query preparable  (Ryuta Kamizono, 2017-07-07, 2 files, -2/+2)

  Currently the preload query cannot use prepared statements even with
  `prepared_statements: true`, because the array handler in the predicate builder
  does not support making bind params. This makes the preload query preparable by
  not passing an array value when possible.

* Change sqlite3 boolean serialization to use 1 and 0  (Lisa Ugray, 2017-07-11, 2 files, -2/+15)

  Abstract boolean serialization has been using 't' and 'f', with MySQL overriding
  that to use 1 and 0. This has the advantage that SQLite natively recognizes 1
  and 0 as true and false, but does not natively recognize 't' and 'f'.

  This change in serialization requires a migration of stored boolean data for
  SQLite databases, so it's implemented behind a configuration flag whose default
  false value is deprecated. The flag itself can be deprecated in a future version
  of Rails. While loaded models will give the correct result for boolean columns
  without migrating old data, where() clauses will interact incorrectly with old
  data.

  While working in this area, also change the abstract adapter to use `"TRUE"`
  and `"FALSE"` as quoted values and `true` and `false` for unquoted. These are
  supported by PostgreSQL, and MySQL remains overridden.

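  A sketch of opting in to the new serialization via that flag (the initializer location
  is illustrative, and the exact constant should be treated as an assumption tied to the
  Rails version that shipped this change):

  ```ruby
  # config/initializers/new_framework_defaults.rb (illustrative placement):
  # store SQLite3 booleans as 1/0 instead of 't'/'f'.
  ActiveRecord::ConnectionAdapters::SQLite3Adapter.represent_boolean_as_integer = true
  ```
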