path: root/activerecord/test/cases/adapters
* Replace `assert !` with `assert_not` (Daniel Colson, 2018-04-19; 1 file changed, -2/+2)
  This autocorrects the violations after adding a custom cop in 3305c78dcd.
* Normalize date component when writing to time columns (Andrew White, 2018-03-11; 1 file changed, -0/+7)
  For legacy reasons Rails stores time columns on SQLite as full timestamp strings. However, because the date component wasn't being normalized, when the values were read back they were being prefixed with 2001-01-01 by ActiveModel::Type::Time. This had a twofold result: first, the fast code path wasn't being used because the string was invalid, and second, the fractional seconds component being read by the Date._parse code path was being corrupted. Fix this by a combination of normalizing the timestamps on writing and changing Active Model to be more lenient when detecting whether a string starts with a date component before creating the dummy time value for parsing.
* Remove unnecessary `respond_to?(:report_on_exception)` checking (yuuji.yaginuma, 2018-03-02; 2 files changed, -4/+4)
  Since Rails 6 requires Ruby 2.4.1+.
* Fix `#columns_for_distinct` of MySQL and PostgreSQL (kg8m, 2018-02-27; 2 files changed, -15/+15)
  Ensure `ActiveRecord::FinderMethods#limited_ids_for` uses correct primary key values even if `ORDER BY` columns include another table's primary key. Fixes #28364.
* PostgreSQL: Allow BC dates like datetime consistently (Ryuta Kamizono, 2018-02-23; 1 file changed, -0/+18)
  BC dates are supported by both date and datetime types. https://www.postgresql.org/docs/current/static/datatype-datetime.html Since #1097, the new datetime type allows year zero as 1 BC, but the new date type does not. It should be allowed for date as well, for consistency.
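  A minimal sketch, assuming a hypothetical model `Event` with a PostgreSQL date column `held_on`; after this change, year zero is accepted by the date type just as it already was by datetime:

  ```ruby
  # Year zero in Ruby's proleptic calendar corresponds to 1 BC,
  # which PostgreSQL stores as 0001-01-01 BC.
  event = Event.create!(held_on: Date.new(0, 1, 1))
  event.reload.held_on # => the same year-zero date, round-tripped through PostgreSQL
  ```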
* PostgreSQL: Treat infinite values in date like datetime consistently (Ryuta Kamizono, 2018-02-23; 2 files changed, -0/+63)
  The values infinity and -infinity are supported by both date and timestamp types. https://www.postgresql.org/docs/current/static/datatype-datetime.html#DATATYPE-DATETIME-SPECIAL-TABLE Also, it cannot be determined whether a value is infinity unless the value is cast, so I've added `QueryAttribute#infinity?` to handle that case. Closes #27585.
* Deprecate update_attributes and update_attributes! (Eddie Lebow, 2018-02-17; 2 files changed, -4/+4)
  Closes #31998
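  For illustration, the non-deprecated equivalents are `update` and `update!` (a minimal sketch assuming a hypothetical `User` model):

  ```ruby
  user = User.find(1)

  # Deprecated after this change:
  user.update_attributes(name: "Eileen")

  # Preferred equivalents (update! raises on validation failure):
  user.update(name: "Eileen")
  user.update!(name: "Eileen")
  ```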
* Correctly dump index nulls order for PostgreSQL (fatkodima, 2018-01-28; 2 files changed, -0/+34)
* Use assert_predicate and assert_not_predicate (Daniel Colson, 2018-01-25; 21 files changed, -72/+72)
* Change refute to assert_not (Daniel Colson, 2018-01-25; 2 files changed, -3/+3)
* Use respond_to test helpers (Daniel Colson, 2018-01-25; 2 files changed, -3/+3)
* Merge pull request #31422 from Edouard-chin/multistatement-fixtures (Matthew Draper, 2018-01-24; 1 file changed, -1/+3)
  Build a multi-statement query when inserting fixtures
* Build a multi-statement query when inserting fixtures (Edouard CHIN, 2018-01-22; 1 file changed, -1/+3)
  - The `insert_fixtures` method can be optimized by making a single multi-statement query for all fixtures sharing the same connection instead of doing one query per table.
  - The previous code bulk-inserted fixtures for a single table, making X queries for X fixture files. This patch builds a single **multi-statement query** for all tables. Given a set of 3 fixtures (authors, dogs, computers):

  ```ruby
  # before
  %w(authors dogs computers).each do |table|
    sql = build_sql(table)
    connection.query(sql)
  end

  # after
  sql = build_sql(authors, dogs, computers)
  connection.query(sql)
  ```
  - `insert_fixtures` is now deprecated; `insert_fixtures_set` is the new way to go, with a performance improvement.
  - My tests were done with an app having more than 700 fixtures; inserting all of them took around 15s. Using a single multi-statement query, it took 8 seconds on average.
  - For a multi-statement query to be executed, MySQL needs to be connected with the `MULTI_STATEMENTS` [flag](https://dev.mysql.com/doc/refman/5.7/en/c-api-multiple-queries.html), which is done before inserting the fixtures by reconnecting to the database with the flag declared. Reconnecting to the database creates some caveats:
    1. We lose all open transactions. In the original code, a transaction is open while inserting fixtures: multiple delete statements are [executed](https://github.com/rails/rails/blob/a681eaf22955734c142609961a6d71746cfa0583/activerecord/lib/active_record/fixtures.rb#L566) and finally the fixtures are inserted. With this patch the transaction has to be opened only after we reconnect to the DB; otherwise reconnecting drops the open transaction, the delete statements are not committed, and inserting the fixtures fails with primary key duplication errors. To fix this, the transaction is now opened directly inside the `insert_fixtures` method, right after we reconnect to the DB. As a consequence, the DELETE statements need to be executed there as well, since the transaction is opened later.
    2. The same problem happens for `disable_referential_integrity`: since we reconnect, `FOREIGN_KEY_CHECKS` is reset to its original value. Same solution as 1.: `disable_referential_integrity` can be called after we reconnect.
    3. While the multi-statement query is executing, no other queries can be performed until we iterate over the set of results; otherwise MySQL throws a "Commands out of sync" error ([Ref](https://dev.mysql.com/doc/refman/5.7/en/commands-out-of-sync.html)). We iterate over the result sets until `mysql_client.next_result` is false ([Ref](https://github.com/brianmario/mysql2#multiple-result-sets)).
  - Removed the `active_record.sql "Fixture delete"` notification; the delete statements are now part of the INSERT one.
  - On MySQL the `max_allowed_packet` variable is looked up:
    1. Before executing the multi-statement query, we check the packet length of each statement; if a packet is bigger than the `max_allowed_packet` config, an `ActiveRecordError` is raised.
    2. Otherwise we concatenate the current SQL statement into the previous one, and so on, until the packet is `< max_allowed_packet`.
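  As a usage sketch of the new method (the shape of the fixture-set hash here is an assumption for illustration; in Rails it is normally built from the fixture YAML files):

  ```ruby
  fixture_set = {
    "authors"   => [{ "id" => 1, "name" => "Edouard" }],
    "computers" => [{ "id" => 1, "system" => "Linux" }]
  }

  # One multi-statement query per connection instead of one query per table.
  connection.insert_fixtures_set(fixture_set, %w(authors computers))
  ```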
* Support for PostgreSQL foreign tables (fatkodima, 2018-01-22; 1 file changed, -0/+109)
* Refactor migration to move migrations paths to connection (eileencodes, 2018-01-18; 1 file changed, -1/+1)
  Rails has some support for multiple databases, but it can be hard to handle migrations with those. The easiest way to implement multiple databases is to contain migrations in their own folder ("db/migrate" for the primary db and "db/seconddb_migrate" for the second db). Without this you would need to write code that allowed you to switch connections in migrations. I can tell you from experience that is not a fun way to implement multiple databases.

  This refactoring is a pre-requisite for implementing other features related to parallel testing and improved handling for multiple databases.

  The refactoring here moves the class methods from the `Migrator` class into its own new class, `MigrationContext`. The goal was to move the `migrations_paths` method off of the `Migrator` class and onto the connection. This allows users to do the following in their `database.yml`:

  ```
  development:
    adapter: mysql2
    username: root
    password:

  development_seconddb:
    adapter: mysql2
    username: root
    password:
    migrations_paths: "db/second_db_migrate"
  ```

  Migrations for the `seconddb` can now be stored in the `db/second_db_migrate` directory. Migrations for the primary database are stored in `db/migrate`.

  The refactoring here drastically reduces the internal API for migrations, since we don't need to pass `migrations_paths` around to every single method. Additionally, this change does not require any Rails applications to make changes unless they want to use the new public API. All of the class methods from the `Migrator` class were `nodoc`'d except for the `migrations_paths` and `migrations_path` getter/setters respectively.
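  A minimal sketch of running the second database's migrations with the new class (the exact constructor arguments are an assumption based on the description above, and the second database's connection is assumed to be established):

  ```ruby
  # Run pending migrations found under the second database's own path.
  ActiveRecord::MigrationContext.new("db/second_db_migrate").migrate
  ```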
* `create_database` should not add default charset when `collation` is given (Ryuta Kamizono, 2018-01-15; 1 file changed, -1/+1)
  If `collation` is given without `charset`, it may generate invalid SQL. For example, `create_database(:matt_aimonetti, collation: "utf8mb4_bin")`:

  ```
  > CREATE DATABASE `matt_aimonetti` DEFAULT CHARACTER SET `utf8` COLLATE `utf8mb4_bin`;
  ERROR 1253 (42000): COLLATION 'utf8mb4_bin' is not valid for CHARACTER SET 'utf8'
  ```

  In MySQL, the charset is used to find the default collation. If `collation` is given explicitly, an extra charset is not necessary.
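  A sketch of the behavior after the fix (the generated SQL shown in comments is approximate):

  ```ruby
  # Only a collation given: the default charset is no longer forced, so MySQL
  # derives the charset from the collation.
  create_database(:matt_aimonetti, collation: "utf8mb4_bin")
  # => CREATE DATABASE `matt_aimonetti` DEFAULT COLLATE `utf8mb4_bin`

  # Passing both explicitly still works.
  create_database(:matt_aimonetti, charset: "utf8mb4", collation: "utf8mb4_bin")
  ```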
* Deprecate `valid_alter_table_type?` in sqlite3 adapter (Ryuta Kamizono, 2018-01-04; 1 file changed, -0/+4)
  This method, which is only used internally, was introduced in ac384820 and renamed in #17579. It does not need to be exposed.
* Correctly handle infinity value in PostgreSQL range type (yuuji.yaginuma, 2018-01-04; 1 file changed, -0/+12)
  An empty string is an invalid value for Ruby's Range class, so we need to keep `Float::INFINITY` as it is and cast it in `encode_range`. Fixes #31612
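  A minimal sketch, assuming a hypothetical model backed by a PostgreSQL numrange column `quota`; an unbounded upper end is expressed with `Float::INFINITY`:

  ```ruby
  # The infinite endpoint is now encoded as "infinity" rather than an empty string.
  record = Measurement.create!(quota: 1..Float::INFINITY)
  record.reload.quota.end # => Float::INFINITY
  ```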
* Fix recreating partial indexes after alter table for sqlite (fatkodima, 2017-12-31; 1 file changed, -0/+17)
* SQLite: Add more test cases for adding primary key (Ryuta Kamizono, 2017-12-26; 1 file changed, -0/+52)
* Remove passing needless empty string `options` in `create_table` (Ryuta Kamizono, 2017-12-20; 1 file changed, -3/+3)
  Follow up of #31177.
* Merge pull request #31177 from albertoalmagro/remove-default-mysql-engine-from-ar-5-2 (Matthew Draper, 2017-12-20; 2 files changed, -3/+78)
  Remove default ENGINE=InnoDB for Mysql2 adapter
* Remove default ENGINE=InnoDB for Mysql2 adapter (Alberto Almagro, 2017-12-11; 2 files changed, -3/+78)
  Before this commit, ENGINE=InnoDB was added by default to the Mysql2 adapter's +create_table+ if no +options+ option was provided. This default ENGINE was lost as soon as something was passed in the +options+ option, making its goal and propagation inconsistent, as the programmer needed to remember to include ENGINE=InnoDB whenever something was passed in.

  This commit removes the default ENGINE, as it isn't needed anymore for current MySQL and MariaDB versions. It adds compatibility support and tests to ensure that the default ENGINE is still present for migrations with version 5.1 and before. It also ensures we still dump the ENGINE option to +schema.rb+ in order to avoid inconsistencies.
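  A short sketch of what the change means for applications (a hypothetical migration; the SQL in the comment is approximate):

  ```ruby
  # Applications that still want InnoDB (or any other engine) spelled out can
  # pass it explicitly; nothing is appended implicitly anymore.
  create_table :posts, options: "ENGINE=InnoDB" do |t|
    t.string :title
  end
  # => CREATE TABLE `posts` (...) ENGINE=InnoDB
  ```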
* Remove needless `change_table` (Ryuta Kamizono, 2017-12-15; 1 file changed, -6/+2)
  These use `remove_column` directly and don't use `t` inside `change_table`.
* Suppress expected exceptions by `report_on_exception` = `false` (Yasuo Honda, 2017-12-14; 1 file changed, -0/+2)
  Follow up to #31428 to address similar exceptions with the mysql2 adapter.
* Suppress `warning: BigDecimal.new is deprecated` in activerecord (Yasuo Honda, 2017-12-13; 5 files changed, -13/+13)
  `BigDecimal.new` has been deprecated in BigDecimal 1.3.3, which will be the default for Ruby 2.5. Refer https://github.com/ruby/bigdecimal/commit/533737338db915b00dc7168c3602e4b462b23503

  ```
  $ cd rails/activerecord/
  $ git grep -l BigDecimal.new | grep \.rb | xargs sed -i -e "s/BigDecimal.new/BigDecimal/g"
  ```

  - Changes made only to Active Record. Will apply the same change to other modules once this commit is merged.
  - The following deprecation has not been addressed because it has been reported at `ActiveRecord::Result.new`. `ActiveRecord::Result.ancestors` did not show `BigDecimal`.

  * Not addressed
  ```ruby
  /path/to/rails/activerecord/lib/active_record/connection_adapters/mysql/database_statements.rb:34: warning: BigDecimal.new is deprecated
  ```

  * database_statements.rb:34
  ```ruby
  ActiveRecord::Result.new(result.fields, result.to_a) if result
  ```

  * ActiveRecord::Result.ancestors
  ```ruby
  [ActiveRecord::Result, Enumerable, ActiveSupport::ToJsonWithActiveSupportEncoder, Object, Metaclass::ObjectMethods, Mocha::ObjectMethods, PP::ObjectMixin, ActiveSupport::Dependencies::Loadable, ActiveSupport::Tryable, JSON::Ext::Generator::GeneratorMethods::Object, Kernel, BasicObject]
  ```

  This commit has been tested with these Ruby and BigDecimal versions:

  - ruby 2.5 and bigdecimal 1.3.3
  ```
  $ ruby -v
  ruby 2.5.0dev (2017-12-14 trunk 61217) [x86_64-linux]
  $ gem list |grep bigdecimal
  bigdecimal (default: 1.3.3, default: 1.3.2)
  ```
  - ruby 2.4 and bigdecimal 1.3.0
  ```
  $ ruby -v
  ruby 2.4.2p198 (2017-09-14 revision 59899) [x86_64-linux-gnu]
  $ gem list |grep bigdecimal
  bigdecimal (default: 1.3.0)
  ```
  - ruby 2.3 and bigdecimal 1.2.8
  ```
  $ ruby -v
  ruby 2.3.5p376 (2017-09-14 revision 59905) [x86_64-linux]
  $ gem list |grep -i bigdecimal
  bigdecimal (1.2.8)
  ```
  - ruby 2.2 and bigdecimal 1.2.6
  ```
  $ ruby -v
  ruby 2.2.8p477 (2017-09-14 revision 59906) [x86_64-linux]
  $ gem list |grep bigdecimal
  bigdecimal (1.2.6)
  ```
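  For illustration, the non-deprecated spelling is the `BigDecimal()` conversion method:

  ```ruby
  require "bigdecimal"

  # Deprecated since bigdecimal 1.3.3:
  BigDecimal.new("1.23")

  # Preferred; behaves the same for this argument:
  BigDecimal("1.23") # => 0.123e1
  ```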
* Suppress expected exceptions by `report_on_exception` = `false` in Ruby 2.5 (Yasuo Honda, 2017-12-13; 1 file changed, -0/+2)
  * Ruby 2.4 introduces `report_on_exception` to control whether exceptions in threads are reported; its default value is `false` in Ruby 2.4. Refer https://www.ruby-lang.org/en/news/2016/11/09/ruby-2-4-0-preview3-released/
  * Ruby 2.5 changes the `report_on_exception` default value to `true` since this commit: https://svn.ruby-lang.org/cgi-bin/viewvc.cgi?revision=61183&view=revision

  This pull request suppresses expected exceptions by setting `report_on_exception` = `false`. It also supports Ruby 2.3, which does not have `report_on_exception`.
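  A sketch of the pattern (the raised error is just an example): silence the reporting only for a thread that is expected to die, guarding for Rubies without the setting:

  ```ruby
  thread = Thread.new do
    Thread.current.report_on_exception = false if Thread.current.respond_to?(:report_on_exception)
    raise "expected failure inside the thread"
  end
  thread.join rescue nil # the exception is re-raised on join and ignored here
  ```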
* SQLite: Fix `copy_table` with composite primary keys (Ryuta Kamizono, 2017-12-08; 1 file changed, -2/+29)
  `connection.primary_key` also returns composite primary keys, so `from_primary_key_column` may not be found even if `from_primary_key` is present.

  ```
  % ARCONN=sqlite3 be ruby -w -Itest test/cases/adapters/sqlite3/sqlite3_adapter_test.rb -n test_copy_table_with_composite_primary_keys
  Using sqlite3
  Run options: -n test_copy_table_with_composite_primary_keys --seed 19041

  # Running:

  E

  Error:
  ActiveRecord::ConnectionAdapters::SQLite3AdapterTest#test_copy_table_with_composite_primary_keys:
  NoMethodError: undefined method `type' for nil:NilClass
      /path/to/rails/activerecord/lib/active_record/connection_adapters/sqlite3_adapter.rb:411:in `block in copy_table'
  ```

  This change fixes `copy_table` so that it does not lose composite primary keys.
* Merge pull request #31327 from aellispierce/custom-id-change-table-sqlite (Eileen M. Uchitelle, 2017-12-07; 1 file changed, -0/+18)
  Fix sqlite migrations with custom primary keys
* Fix sqlite migrations with custom primary keys (Ashley Ellis Pierce, 2017-12-06; 1 file changed, -0/+18)
  Previously, if a record was created with a custom primary key, that table could not be migrated using sqlite. While attempting to copy the table, the type of the primary key was ignored. Once that was corrected, copying the indexes would fail because custom primary keys are autoindexed by sqlite by default. To correct that, this skips copying the index if the index name begins with "sqlite_". This is a reserved word that indicates that the index is an internal schema object. SQLite prohibits applications from creating objects whose names begin with "sqlite_", so this string should be safe to use as a check.

  ref https://www.sqlite.org/fileformat2.html#intschema
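  A sketch of the check described above (`indexes_to_copy` and the table name are hypothetical, not part of the patch):

  ```ruby
  # SQLite's internal autoindexes (e.g. for custom primary keys) use the
  # reserved "sqlite_" prefix and must not be recreated when copying a table.
  indexes_to_copy = connection.indexes("comments").reject do |index|
    index.name.start_with?("sqlite_")
  end
  ```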
* Emulate JSON types for SQLite3 adapter (#29664) (Ryuta Kamizono, 2017-12-03; 1 file changed, -2/+2)
  SQLite3 doesn't actually have a JSON storage class (so it is stored as TEXT, like Date and Time). But emulating JSON types is convenient for writing database-agnostic migrations.
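  A minimal sketch with a hypothetical table: the same migration now works on SQLite3 (backed by TEXT) and on adapters with a native JSON type:

  ```ruby
  create_table :settings, force: true do |t|
    t.json :payload # emulated as TEXT on SQLite3
  end
  ```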
* Refactor `length`, `order`, and `opclass` index options dumping (Ryuta Kamizono, 2017-12-03; 1 file changed, -3/+3)
* Merge pull request #31230 from dinahshi/postgresql_extract_sql (Matthew Draper, 2017-12-03; 1 file changed, -2/+2)
  Extract sql fragment generators from PostgreSQL adapter
* Extract sql fragment generators for alter table from PostgreSQL adapter (Dinah Shi, 2017-12-02; 1 file changed, -2/+2)
* Add support for PostgreSQL operator classes to add_index (Greg Navis, 2017-11-30; 3 files changed, -5/+40)
  Add support for specifying non-default operator classes in PostgreSQL indexes. An example CREATE INDEX query that becomes possible is:

      CREATE INDEX users_name ON users USING gist (name gist_trgm_ops);

  Previously it was possible to specify the `gist` index but not the custom operator class. The `add_index` call for the above query is:

      add_index :users, :name, using: :gist, opclasses: {name: :gist_trgm_ops}
* Add new error class `QueryCanceled` which will be raised when canceling statement due to user request (#31235) (Ryuta Kamizono, 2017-11-27; 2 files changed, -2/+56)
  This changes `StatementTimeout` to `QueryCanceled` for PostgreSQL. In MySQL, errno 1317 (`ER_QUERY_INTERRUPTED`) is only used when the query is manually cancelled, but in PostgreSQL the `QUERY_CANCELED` error code (57014), which was used for `StatementTimeout`, is raised in both cases, and we cannot tell which reason caused it. So I decided to introduce a new error class `QueryCanceled`, closer to the error code name.
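  A sketch of handling the new class (the query is arbitrary; a cancellation would come from, e.g., `pg_cancel_backend` on the PostgreSQL side):

  ```ruby
  begin
    Post.where(author: "David").to_a
  rescue ActiveRecord::QueryCanceled
    Rails.logger.warn("query was canceled by user request")
  end
  ```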
* Rename `TransactionTimeout` to more descriptive `LockWaitTimeout` (#31223) (Ryuta Kamizono, 2017-11-27; 2 files changed, -4/+4)
  Since #31129, a new error class `StatementTimeout` has been added. `TransactionTimeout` is caused by a timeout shorter than `StatementTimeout`, but its name is too generic. It should have a name that makes the difference from `StatementTimeout` clear.
* Merge pull request #30980 from sobrinho/sobrinho/arel-star-ignored-columns (Rafael França, 2017-11-13; 3 files changed, -30/+30)
  Do not use `Arel.star` when `ignored_columns`
* Change tests to use models which don't ignore any columns (Jon Moss, 2017-11-13; 3 files changed, -30/+30)
* Add new error class `StatementTimeout` which will be raised when statement timeout exceeded (#31129) (Ryuta Kamizono, 2017-11-13; 2 files changed, -0/+57)
  We sometimes use the MAX_EXECUTION_TIME hint for MySQL depending on the situation. It prevents catastrophic performance degradation caused by badly performing queries. The new error class `StatementTimeout` makes that case easier to handle. https://dev.mysql.com/doc/refman/5.7/en/optimizer-hints.html#optimizer-hints-execution-time
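  A sketch of the pattern (the hint value and query are examples only; MAX_EXECUTION_TIME is in milliseconds):

  ```ruby
  begin
    Post.connection.execute("SELECT /*+ MAX_EXECUTION_TIME(10) */ COUNT(*) FROM posts")
  rescue ActiveRecord::StatementTimeout
    Rails.logger.warn("statement exceeded its execution time limit")
  end
  ```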
* Raise `TransactionTimeout` when lock wait timeout exceeded for PG adapter (Ryuta Kamizono, 2017-11-11; 2 files changed, -1/+30)
  Follow up of #30360.
* Should test actual error which is raised from the database (Ryuta Kamizono, 2017-11-11; 1 file changed, -1/+23)
* Enable `Style/RedundantReturn` rubocop rule, and fixed a couple more (Ryuta Kamizono, 2017-11-01; 1 file changed, -1/+1)
  Follow up of #31004.
* removed unnecessary semicolons (Shuhei Kitagawa, 2017-10-28; 2 files changed, -2/+2)
* `supports_extensions?` always returns true since PostgreSQL 9.1 (Yasuo Honda, 2017-10-24; 4 files changed, -454/+440)
  Since the minimum version of PostgreSQL that Rails currently supports is 9.1, there is no need to check `supports_extensions?`. Refer to https://www.postgresql.org/docs/9.1/static/sql-createextension.html "CREATE EXTENSION"
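  A minimal sketch of what the simplification allows in a migration (the extension name is just an example):

  ```ruby
  # No supports_extensions? guard needed on PostgreSQL 9.1+.
  enable_extension "hstore" unless extension_enabled?("hstore")
  ```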
* Remove deprecated arguments from `#verify!` (Rafael Mendonça França, 2017-10-23; 1 file changed, -12/+0)
* Remove deprecated argument `name` from `#indexes` (Rafael Mendonça França, 2017-10-23; 2 files changed, -9/+1)
* Fix longer sequence name detection for serial columns (#28339) (Ryuta Kamizono, 2017-10-15; 1 file changed, -0/+32)
  We already found the longer sequence name, but we could not consider whether it was the sequence name created by a serial type because the max identifier length limitation was missed. I've addressed the sequence name detection to respect the max identifier length. Fixes #28332.
* MySQL: Don't lose `auto_increment: true` in the `db/schema.rb` (Ryuta Kamizono, 2017-10-15; 1 file changed, -0/+34)
  Currently `AUTO_INCREMENT` is implicitly used in the default primary key definition. But `AUTO_INCREMENT` is not only used for a single-column primary key, but also for a composite primary key. In that case, `auto_increment: true` should be dumped explicitly in `db/schema.rb`. Fixes #30894.
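  A sketch of the kind of dump this produces (table and column names are hypothetical):

  ```ruby
  # With a composite primary key, the auto-incremented column now keeps its flag.
  create_table "orders", primary_key: ["id", "shop_id"], force: :cascade do |t|
    t.bigint "id", null: false, auto_increment: true
    t.bigint "shop_id", null: false
  end
  ```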
* Add JSON attribute test cases for SQLite3 adapter (Ryuta Kamizono, 2017-10-05; 1 file changed, -0/+29)