path: root/activerecord/test/cases/adapters/postgresql

* Remove ignored_sql from SQLCounter by adding "TRANSACTION" to log name (Yasuo Honda, 2019-05-08; 1 file changed, -2/+2)

This commit adds "TRANSACTION" to savepoint, commit, and rollback statements, because none of the savepoint statements were removed by #36153 since they are not "SCHEMA" statements. Although only the savepoint statements needed the label, I think all transaction-related methods should add it.

Follow-up to #36153.

* Remove database specific sql statements from SQLCounter (Yasuo Honda, 2019-05-01; 1 file changed, -3/+3)

Every database executes a different kind of SQL statement to get metadata, and `ActiveRecord::TestCase` ignores these database-specific SQL statements so that `assert_queries` and `assert_no_queries` work consistently. The connection adapters already label these statements with the "SCHEMA" argument; this pull request makes use of that argument to ignore metadata queries.

Here are the details of these changes:

* PostgresqlConnectionTest

Each modified PostgresqlConnectionTest test just executes the corresponding methods:
https://github.com/rails/rails/blob/fef174f5c524edacbcad846d68400e7fe114a15a/activerecord/lib/active_record/connection_adapters/postgresql/schema_statements.rb#L182-L195

```ruby
# Returns the current database encoding format.
def encoding
  query_value("SELECT pg_encoding_to_char(encoding) FROM pg_database WHERE datname = current_database()", "SCHEMA")
end

# Returns the current database collation.
def collation
  query_value("SELECT datcollate FROM pg_database WHERE datname = current_database()", "SCHEMA")
end

# Returns the current database ctype.
def ctype
  query_value("SELECT datctype FROM pg_database WHERE datname = current_database()", "SCHEMA")
end
```

* BulkAlterTableMigrationsTest

The mysql2 adapter executes `SHOW KEYS FROM ...` to see if an index has already been created, as below. The main concern of these tests is how each database adapter creates or drops indexes, so ignoring the `SHOW KEYS FROM` statement makes sense.
https://github.com/rails/rails/blob/fef174f5c524edacbcad846d68400e7fe114a15a/activerecord/lib/active_record/connection_adapters/mysql/schema_statements.rb#L11

```ruby
execute_and_free("SHOW KEYS FROM #{quote_table_name(table_name)}", "SCHEMA") do |result|
```

* Temporary change, not included in this commit, to show which statements were executed:

```diff
$ git diff
diff --git a/activerecord/test/cases/migration_test.rb b/activerecord/test/cases/migration_test.rb
index 8e8ed494d9..df05f9bd16 100644
--- a/activerecord/test/cases/migration_test.rb
+++ b/activerecord/test/cases/migration_test.rb
@@ -854,7 +854,7 @@ def test_adding_indexes
     classname = ActiveRecord::Base.connection.class.name[/[^:]*$/]
     expected_query_count = {
-      "Mysql2Adapter" => 3, # Adding an index fires a query every time to check if an index already exists or not
+      "Mysql2Adapter" => 1, # Adding an index fires a query every time to check if an index already exists or not
       "PostgreSQLAdapter" => 2,
     }.fetch(classname) {
       raise "need an expected query count for #{classname}"
@@ -886,7 +886,7 @@ def test_removing_index
     classname = ActiveRecord::Base.connection.class.name[/[^:]*$/]
     expected_query_count = {
-      "Mysql2Adapter" => 3, # Adding an index fires a query every time to check if an index already exists or not
+      "Mysql2Adapter" => 1, # Adding an index fires a query every time to check if an index already exists or not
       "PostgreSQLAdapter" => 2,
     }.fetch(classname) {
       raise "need an expected query count for #{classname}"
$
```

* Executed these modified tests:

```
$ ARCONN=mysql2 bin/test test/cases/migration_test.rb -n /index/
Using mysql2
Run options: -n /index/ --seed 8462

F
Failure:
BulkAlterTableMigrationsTest#test_adding_indexes [/home/yahonda/git/rails/activerecord/test/cases/migration_test.rb:863]:
3 instead of 1 queries were executed.
Queries:
SHOW KEYS FROM `delete_me`
SHOW KEYS FROM `delete_me`
ALTER TABLE `delete_me` ADD UNIQUE INDEX `awesome_username_index` (`username`), ADD INDEX `index_delete_me_on_name_and_age` (`name`, `age`).
Expected: 1
  Actual: 3

bin/test test/cases/migration_test.rb:848

F
Failure:
BulkAlterTableMigrationsTest#test_removing_index [/home/yahonda/git/rails/activerecord/test/cases/migration_test.rb:895]:
3 instead of 1 queries were executed.
Queries:
SHOW KEYS FROM `delete_me`
SHOW KEYS FROM `delete_me`
ALTER TABLE `delete_me` DROP INDEX `index_delete_me_on_name`, ADD UNIQUE INDEX `new_name_index` (`name`).
Expected: 1
  Actual: 3

bin/test test/cases/migration_test.rb:879

..

Finished in 0.379245s, 10.5473 runs/s, 7.9105 assertions/s.
4 runs, 3 assertions, 2 failures, 0 errors, 0 skips
$
```

* ActiveRecord::ConnectionAdapters::Savepoints

Left `self.ignored_sql` in place to ignore the savepoint-related statements, because these SQL statements are not related to "SCHEMA":

```
self.ignored_sql = [/^SAVEPOINT/, /^ROLLBACK TO SAVEPOINT/, /^RELEASE SAVEPOINT/]
```

https://github.com/rails/rails/blob/fef174f5c524edacbcad846d68400e7fe114a15a/activerecord/lib/active_record/connection_adapters/abstract/savepoints.rb#L10-L20

```ruby
def create_savepoint(name = current_savepoint_name)
  execute("SAVEPOINT #{name}")
end

def exec_rollback_to_savepoint(name = current_savepoint_name)
  execute("ROLLBACK TO SAVEPOINT #{name}")
end

def release_savepoint(name = current_savepoint_name)
  execute("RELEASE SAVEPOINT #{name}")
end
```

* Raise `ArgumentError` for invalid `:limit` and `:precision` like as other options (Ryuta Kamizono, 2019-04-07; 2 files changed, -2/+2)

When I added the new `:size` option in #35071, I found that invalid `:limit` and `:precision` raise `ActiveRecordError`, unlike other invalid options. That makes it hard to distinguish argument errors from statement invalid errors, since `StatementInvalid` is a subclass of `ActiveRecordError`.
https://github.com/rails/rails/blob/c9e4c848eeeb8999b778fa1ae52185ca5537fffe/activerecord/lib/active_record/errors.rb#L103

```ruby
begin
  # execute any migration
rescue ActiveRecord::StatementInvalid
  # statement invalid
rescue ActiveRecord::ActiveRecordError, ArgumentError
  # `ActiveRecordError` except `StatementInvalid` is maybe an argument error
end
```

I'd say this is an inconsistency worth fixing.

Before:

```ruby
add_column :items, :attr1, :binary,   size: 10      # => ArgumentError
add_column :items, :attr2, :decimal,  scale: 10     # => ArgumentError
add_column :items, :attr3, :integer,  limit: 10     # => ActiveRecordError
add_column :items, :attr4, :datetime, precision: 10 # => ActiveRecordError
```

After:

```ruby
add_column :items, :attr1, :binary,   size: 10      # => ArgumentError
add_column :items, :attr2, :decimal,  scale: 10     # => ArgumentError
add_column :items, :attr3, :integer,  limit: 10     # => ArgumentError
add_column :items, :attr4, :datetime, precision: 10 # => ArgumentError
```

* Optimizer hints should be applied on Top level query as much as possible (Ryuta Kamizono, 2019-04-04; 1 file changed, -0/+8)

I've experienced this issue in our app: some hints only work on the top-level query (e.g. `MAX_EXECUTION_TIME`).

* Ensure `reset_table_name` when table name prefix/suffix is changed (Ryuta Kamizono, 2019-04-04; 1 file changed, -5/+8)

Also, `reset_column_information` is unnecessary since `reset_table_name` does that too.

* Cache database version in schema cache (Ali Ibrahim, 2019-04-03; 2 files changed, -2/+2)

* The database version will get cached in the schema cache file during the schema cache dump. When the database version check happens, the version will be pulled from the schema cache and thus avoid querying the database for the version.

* If the schema cache file doesn't exist, we'll query the database for the version and cache it on the schema cache object.

* To facilitate this change, all connection adapters now implement #get_database_version and #database_version. #database_version returns the value from the schema cache.

* To take advantage of the cached database version, the database version check will now happen after the schema cache is set on the connection in the connection pool.
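
As a rough usage sketch of the readers described above (the method names come from this description; the rest is illustrative):

```ruby
# Dump the schema cache; the database version is now stored in the cache file:
#   bin/rails db:schema:cache:dump

conn = ActiveRecord::Base.connection
conn.database_version      # served from the schema cache, no extra version query
conn.get_database_version  # always asks the database directly
```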

* Merge pull request #19333 from palkan/dirty-store (Kasper Timm Hansen, 2019-03-31; 1 file changed, -0/+16)

Add dirty methods for store accessors

* Add dirty methods for store accessors (palkan, 2019-03-25; 1 file changed, -0/+16)
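
A usage sketch of what this adds, assuming a `settings` text column and that the generated methods mirror Active Model's regular dirty API (the model, column, and accessor names are illustrative):

```ruby
class User < ApplicationRecord
  # `settings` is assumed to be a serialized text (or json) column.
  store :settings, accessors: [:color], coder: JSON
end

user = User.create!(color: "red")
user.color = "blue"
user.color_changed?         # => true  (accessor-level dirty method, assumed naming)
user.color_was              # => "red"
user.save!
user.saved_change_to_color? # => true
```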

* Add Relation#annotate for SQL commenting (Matt Yoho, 2019-03-21; 1 file changed, -0/+37)

This patch has two main portions:

1. Add SQL comment support to Arel via Arel::Nodes::Comment.
2. Implement a Relation#annotate method on top of that.

== Adding SQL comment support

Adds a new Arel::Nodes::Comment node that represents an optional SQL comment and teaches the relevant visitors how to handle it. Comment nodes may be added to the basic CRUD statement nodes and set through any of the four (Select|Insert|Update|Delete)Manager objects. For example:

    manager = Arel::UpdateManager.new
    manager.table table
    manager.comment("annotation")
    manager.to_sql # UPDATE "users" /* annotation */

This new node type will be used by ActiveRecord::Relation to enable query annotation via SQL comments.

== Implementing the Relation#annotate method

Implements `ActiveRecord::Relation#annotate`, which accepts a comment string that will be appended to any queries generated by the relation. Some examples:

    relation = Post.where(id: 123).annotate("metadata string")
    relation.first
    # SELECT "posts".* FROM "posts" WHERE "posts"."id" = 123
    # LIMIT 1 /* metadata string */

    class Tag < ActiveRecord::Base
      scope :foo_annotated, -> { annotate("foo") }
    end
    Tag.foo_annotated.annotate("bar").first
    # SELECT "tags".* FROM "tags" LIMIT 1 /* foo */ /* bar */

Also wires up the plumbing so this works with `#update_all` and `#delete_all` as well.

This feature is useful for instrumentation and general analysis of queries generated at runtime.

* Add test case to prevent possible SQL injection (Ryuta Kamizono, 2019-03-18; 1 file changed, -0/+10)

* Add test case for unscoping `:optimizer_hints` (Ryuta Kamizono, 2019-03-18; 1 file changed, -0/+6)
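
For context, a small sketch of unscoping an optimizer hint (the model and hint text are illustrative, borrowed from the "Support Optimizer Hints" entry further down):

```ruby
scope = Job.optimizer_hints("MAX_EXECUTION_TIME(50000)")
scope.to_sql                           # => SELECT /*+ MAX_EXECUTION_TIME(50000) */ `jobs`.* FROM `jobs`
scope.unscope(:optimizer_hints).to_sql # => SELECT `jobs`.* FROM `jobs`
```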

* Extract `truncate` and `truncate_tables` into database statements (Ryuta Kamizono, 2019-03-17; 1 file changed, -8/+0)

This is to make it easier to turn `truncate_tables` into a bulk statement.

* Support Optimizer Hints (Ryuta Kamizono, 2019-03-16; 1 file changed, -0/+28)

We at Arm Treasure Data are using Optimizer Hints with a monkey patch (https://gist.github.com/kamipo/4c8539f0ce4acf85075cf5a6b0d9712e), especially in order to use `MAX_EXECUTION_TIME` (refer to #31129).

Example:

```ruby
class Job < ApplicationRecord
  default_scope { optimizer_hints("MAX_EXECUTION_TIME(50000) NO_INDEX_MERGE(jobs)") }
end
```

Optimizer Hints are supported not only by MySQL but also by most databases (PostgreSQL on RDS, Oracle, SQL Server, etc.), so this is really helpful for tuning heavy queries in large scale applications.

* Update READ_QUERY regex (Ali Ibrahim, 2019-02-25; 1 file changed, -0/+10)

* The READ_QUERY regex would consider reads to be writes if they started with spaces or parens. For example, a UNION query might have parens around each SELECT: (SELECT ...) UNION (SELECT ...).

* It will now correctly treat these queries as reads.
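
A hedged illustration of the fixed classification, using the write-prevention API from the "Add ability to prevent writes to a database" entry below (the table name is illustrative):

```ruby
ActiveRecord::Base.connection.while_preventing_writes do
  # A parenthesized UNION of SELECTs is a read; with the updated regex it is
  # no longer misclassified as a write, so this does not raise.
  ActiveRecord::Base.connection.select_all("(SELECT * FROM posts) UNION (SELECT * FROM posts)")
end
```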

* Remove duplicated protected params definitions (Ryuta Kamizono, 2019-02-24; 1 file changed, -7/+2)

Use "support/stubs/strong_parameters" instead.

* Merge pull request #35263 from hatch-carl/reduce-postgres-uuid-allocations (Ryuta Kamizono, 2019-02-21; 1 file changed, -1/+13)

Reduce unused allocations when casting UUIDs for Postgres

* Reduce unused allocations when casting UUIDs for Postgres (Carl Thuringer, 2019-02-20; 1 file changed, -1/+13)

Using the subscript method `#[]` on a string has several overloads and a rather complex implementation. One of the overloads is the capability to accept a regular expression, run a match, and then return the receiver (if it matched) or one of the groups from the MatchData.

The function of the `UUID#cast` method is to cast a UUID to a type and format acceptable by Postgres. Naturally UUIDs are supposed to be strings of a certain format, but it had been determined that it was not ideal for the framework to send just any old string to Postgres and allow the engine to complain when "foobar" or "" was sent, being obviously of the wrong format for a valid UUID. Therefore this code was written to facilitate the checking, and if the value were not of the correct format, a `nil` would be returned, as is conventional in Rails.

Now, the subscript method will allocate one or more strings on a match and return one of them, based on the index parameter. However, there is no need for a new string, as a UUID of the correct format already is one, and so long as the format was verified the supplied string is adequate for consumption by the database. The subscript method also creates a MatchData object which will never be used, and so must eventually be garbage collected.

Garbage collection indeed. This innocuous method tends to be called quite a lot: for example, if the primary key of a table is a uuid, this method will be called. If the foreign key of a relation is a UUID, once again this method is called. If that foreign key belongs to a has_many relationship with dozens of objects, then again dozens of UUIDs shall be cast to a dup of themselves, and spawn dozens of MatchData objects, and so on.

So, for users that:

* Use UUIDs as primary keys
* Use Postgres
* Operate on collections of objects

this accomplishes a significant savings in total allocations, and may save many garbage collections.
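
A minimal sketch of the allocation-avoiding idea described above; the regexp and method are illustrative, not the adapter's exact implementation:

```ruby
# `value[regexp]` allocates a new string plus a MatchData on every match.
# Checking the format with `match?` (no MatchData) and returning the original
# string avoids both allocations for already well-formed UUIDs.
UUID_FORMAT = /\A\h{8}-\h{4}-\h{4}-\h{4}-\h{12}\z/ # illustrative pattern

def cast_uuid(value)
  value = value.to_s
  value if UUID_FORMAT.match?(value)
end

cast_uuid("4e37a368-23ad-4f45-a9e9-3a3a41c4e1b8") # => the same string, no extra allocations
cast_uuid("foobar")                               # => nil
```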

* PostgreSQL: Support endless range values for range types (Ryuta Kamizono, 2019-02-20; 1 file changed, -0/+16)
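
A hedged sketch of what this enables, assuming a model with a PostgreSQL `tsrange` column (`Event#duration` is made up) and Ruby 2.6's endless range literal:

```ruby
# t.tsrange :duration  -- assumed column definition

event = Event.create!(duration: (Time.current..)) # endless range, no upper bound
event.duration.end # => nil; persisted with an unbounded upper bound in PostgreSQL
```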

* Don't allow `where` with invalid value matches to nil values (Ryuta Kamizono, 2019-02-18; 1 file changed, -0/+6)

That would be silently leaking information. If type casting doesn't return any actual value, it should not be matched to any record.

Fixes #33624.
Closes #33946.
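
A rough illustration of the intent, assuming a `uuid` column named `token` whose type cast of an invalid string yields nil (the names are made up):

```ruby
# Before: the nil cast produced `WHERE "users"."token" IS NULL`, silently
# returning rows that have no token. After: an invalid value matches nothing.
User.where(token: "not-a-uuid").to_a # => []
```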

* Fix type casting column default in `change_column` (Ryuta Kamizono, 2019-01-20; 1 file changed, -1/+13)

Since #31230, `change_column` is executed as a bulk statement. That caused the column default to be type cast incorrectly, because the lookup used the type before the change, not the type after it. In a bulk statement we can't use `change_column_default_for_alter` if the statement changes the column type. This fixes the type casting to use the constructed target sql_type.

Fixes #34938.

* Remove deprecated `#insert_fixtures` from the database adapters (Rafael Mendonça França, 2019-01-17; 1 file changed, -8/+0)

* Fix `test_case_insensitiveness` to follow up eb5fef5 (Ryuta Kamizono, 2019-01-11; 1 file changed, -9/+8)

* Enable `Style/RedundantBegin` cop to avoid newly adding redundant begin blocks (Ryuta Kamizono, 2018-12-21; 2 files changed, -22/+16)

Currently we sometimes find a redundant begin block in code review (e.g. https://github.com/rails/rails/pull/33604#discussion_r209784205). I'd like to enable the `Style/RedundantBegin` cop to avoid that. Since rescue/else/ensure are allowed inside do/end blocks in Ruby 2.5 (https://bugs.ruby-lang.org/issues/12906), we'd probably meet that situation more often than before.
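
For illustration, the shape of code the cop flags (plain Ruby, not tied to any Rails file):

```ruby
# Flagged by Style/RedundantBegin: the begin/end wrapper adds nothing.
def read_config
  begin
    File.read("config.yml")
  rescue Errno::ENOENT
    "{}"
  end
end

# Preferred: a method body (or, since Ruby 2.5, a do/end block) can carry
# rescue/else/ensure directly.
def read_config
  File.read("config.yml")
rescue Errno::ENOENT
  "{}"
end
```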

* Module#{define_method,alias_method,undef_method,remove_method} become public since Ruby 2.5 (Ryuta Kamizono, 2018-12-21; 1 file changed, -4/+4)

https://bugs.ruby-lang.org/issues/14133

* Rename error that occurs when writing on a read (Eileen Uchitelle, 2018-12-07; 1 file changed, -3/+3)

I originally named this `StatementInvalid` because that's what we do in GitHub, but `@tenderlove` pointed out that this means apps can't test for or explicitly rescue this error. `StatementInvalid` is pretty broad, so I've renamed this to `ReadOnlyError`.

* Add ability to prevent writes to a database (Eileen Uchitelle, 2018-11-30; 1 file changed, -0/+56)

This PR adds the ability to prevent writes to a database even if the database user is able to write (i.e. the database is a primary and not a replica).

This is useful for a few reasons: 1) when converting your database from a single db to a primary/replica setup, you can fix all the writes on reads early on; 2) when we implement automatic database switching, or when an app is manually switching connections, this feature can be used to ensure reads are reading and writes are writing — we want to make sure we raise if we ever try to write in read mode, regardless of database type; and 3) for local development, if you don't want to set up multiple databases but do want to support rw/ro queries.

This should be used in conjunction with `connected_to` in write mode. For example:

```
ActiveRecord::Base.connected_to(role: :writing) do
  Dog.connection.while_preventing_writes do
    Dog.create! # will raise because we're preventing writes
  end
end

ActiveRecord::Base.connected_to(role: :reading) do
  Dog.connection.while_preventing_writes do
    Dog.first # will not raise because we're not writing
  end
end
```

* Allow spaces in postgres table names (Gannon McGibbon, 2018-11-28; 1 file changed, -0/+5)

Fixes an issue where "user post" is misinterpreted as "\"user\".\"post\"" when quoting table names with the postgres adapter.
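
A small sketch of the corrected behaviour against a PostgreSQL connection (the table names are illustrative):

```ruby
conn = ActiveRecord::Base.connection

conn.quote_table_name("user post") # => "\"user post\""      (one identifier, after the fix)
conn.quote_table_name("user.post") # => "\"user\".\"post\""  (schema-qualified names still split on the dot)
```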

* Bump the minimum version of PostgreSQL to 9.3 (Yasuo Honda, 2018-11-25; 3 files changed, -377/+359)

https://www.postgresql.org/support/versioning/

- 9.1 EOLed on September 2016.
- 9.2 EOLed on September 2017.

9.3 is also not supported since Nov 8, 2018 (https://www.postgresql.org/about/news/1905/), but I think it may be a little bit early to drop PostgreSQL 9.3 yet.

* Deprecated `supports_ranges?` since no other databases support the range data type
* Added `supports_materialized_views?` to the abstract adapter; materialized views themselves are supported by other databases, and other connection adapters may support them
* Removed `with_manual_interventions`, which was only necessary for PostgreSQL 9.1 or earlier
* Dropped CI against PostgreSQL 9.2

* Use squiggly heredoc to strip odd indentation in the executed SQL (Ryuta Kamizono, 2018-11-22; 8 files changed, -20/+20)

Before:

```
LOG: execute <unnamed>: SELECT t.oid, t.typname
                     FROM pg_type as t
                     WHERE t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'bool')
LOG: execute <unnamed>: SELECT t.oid, t.typname, t.typelem, t.typdelim, t.typinput, r.rngsubtype, t.typtype, t.typbasetype
                     FROM pg_type as t
                     LEFT JOIN pg_range as r ON oid = rngtypid
                     WHERE t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'text', 'varchar', 'char', 'name', 'bpchar', 'bool', 'bit', 'varbit', 'timestamptz', 'date', 'money', 'bytea', 'point', 'hstore', 'json', 'jsonb', 'cidr', 'inet', 'uuid', 'xml', 'tsvector', 'macaddr', 'citext', 'ltree', 'interval', 'path', 'line', 'polygon', 'circle', 'lseg', 'box', 'time', 'timestamp', 'numeric')
                     OR t.typtype IN ('r', 'e', 'd')
                     OR t.typinput::varchar = 'array_in'
                     OR t.typelem != 0
LOG: statement: SHOW TIME ZONE
LOG: statement: SELECT 1
LOG: execute <unnamed>: SELECT COUNT(*)
                     FROM pg_class c
                     LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
                     WHERE c.relkind IN ('r','v','m') -- (r)elation/table, (v)iew, (m)aterialized view
                     AND c.relname = 'accounts'
                     AND n.nspname = ANY (current_schemas(false))
```

After:

```
LOG: execute <unnamed>: SELECT t.oid, t.typname
  FROM pg_type as t
  WHERE t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'bool')
LOG: execute <unnamed>: SELECT t.oid, t.typname, t.typelem, t.typdelim, t.typinput, r.rngsubtype, t.typtype, t.typbasetype
  FROM pg_type as t
  LEFT JOIN pg_range as r ON oid = rngtypid
  WHERE t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'text', 'varchar', 'char', 'name', 'bpchar', 'bool', 'bit', 'varbit', 'timestamptz', 'date', 'money', 'bytea', 'point', 'hstore', 'json', 'jsonb', 'cidr', 'inet', 'uuid', 'xml', 'tsvector', 'macaddr', 'citext', 'ltree', 'interval', 'path', 'line', 'polygon', 'circle', 'lseg', 'box', 'time', 'timestamp', 'numeric')
  OR t.typtype IN ('r', 'e', 'd')
  OR t.typinput::varchar = 'array_in'
  OR t.typelem != 0
LOG: statement: SHOW TIME ZONE
LOG: statement: SELECT 1
LOG: execute <unnamed>: SELECT COUNT(*)
  FROM pg_class c
  LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind IN ('r','v','m') -- (r)elation/table, (v)iew, (m)aterialized view
  AND c.relname = 'accounts'
  AND n.nspname = ANY (current_schemas(false))
```
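
As a generic Ruby illustration of the change (not the adapter's actual code): `<<~` strips the common leading indentation that `<<-` leaves in the string.

```ruby
table_name = "accounts"

# <<- only allows the closing delimiter to be indented; the body keeps its indentation.
indented_sql = <<-SQL
  SELECT COUNT(*)
  FROM pg_class c
  WHERE c.relname = '#{table_name}'
SQL

# <<~ (squiggly heredoc) removes the common leading whitespace from every line.
flush_sql = <<~SQL
  SELECT COUNT(*)
  FROM pg_class c
  WHERE c.relname = '#{table_name}'
SQL
```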

* Fixing an issue when parsing an opclass by allowing indexed column in indexdef to be wrapped up by double quotes (Thomas Bianchini, 2018-11-21; 1 file changed, -0/+12)

Fixes #34493.

*Thomas Bianchini*

* Add support for UNLOGGED Postgresql tables (Jacob Evelyn, 2018-11-13; 1 file changed, -0/+74)

This commit adds support for the `ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.create_unlogged_tables` setting, which turns `CREATE TABLE` SQL statements into `CREATE UNLOGGED TABLE` statements. This can improve PostgreSQL performance, but at the cost of data durability, and thus it is highly recommended that you *DO NOT* enable this in a production environment.
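
A sketch of how one might opt in for the test database only, using the setting named above (the initializer placement is illustrative):

```ruby
# config/initializers/unlogged_tables.rb
require "active_record/connection_adapters/postgresql_adapter"

# Unlogged tables skip the write-ahead log: faster, but not crash-safe,
# so only enable this outside production.
ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.create_unlogged_tables = Rails.env.test?
```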

* Fix test case for money schema default (Ryuta Kamizono, 2018-11-12; 1 file changed, -1/+1)

Follow-up to a741208f80dd33420a56486bd9ed2b0b9862234a.

Since a741208, `Decimal#serialize`, which the `Money` type inherits, is no longer a no-op, so a value is consistently serialized/deserialized as a decimal even when it is a schema default.

* Change the empty block style to have space inside of the block (Rafael Mendonça França, 2018-09-25; 1 file changed, -1/+1)

* Enable `Performance/UnfreezeString` cop (yuuji.yaginuma, 2018-09-23; 2 files changed, -6/+6)

In Ruby 2.3 or later, `String#+@` is available and `+@` is faster than `dup`.

```ruby
# frozen_string_literal: true

require "bundler/inline"

gemfile(true) do
  source "https://rubygems.org"
  gem "benchmark-ips"
end

Benchmark.ips do |x|
  x.report('+@') { +"" }
  x.report('dup') { "".dup }
  x.compare!
end
```

```
$ ruby -v benchmark.rb
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux]
Warming up --------------------------------------
                  +@   282.289k i/100ms
                 dup   187.638k i/100ms
Calculating -------------------------------------
                  +@      6.775M (± 3.6%) i/s -     33.875M in   5.006253s
                 dup      3.320M (± 2.2%) i/s -     16.700M in   5.032125s

Comparison:
                  +@:  6775299.3 i/s
                 dup:  3320400.7 i/s - 2.04x  slower
```

* `supports_xxx?` returns whether a feature is supported by the backend (Ryuta Kamizono, 2018-09-08; 1 file changed, -8/+0)

Rather than a configuration on the connection.

* Merge pull request #32647 from eugeneius/lazy_transactions (Matthew Draper, 2018-08-23; 2 files changed, -1/+4)

Omit BEGIN/COMMIT statements for empty transactions

* Omit BEGIN/COMMIT statements for empty transactions (Eugene Kenny, 2018-08-13; 2 files changed, -1/+4)

If a transaction is opened and closed without any queries being run, we can safely omit the `BEGIN` and `COMMIT` statements, as they only exist to modify the connection's behaviour inside the transaction. This removes the overhead of those statements when saving a record with no changes, which makes workarounds like `save if changed?` unnecessary.

This implementation buffers transactions inside the transaction manager and materializes them the next time the connection is used. For this to work, the adapter needs to guard all connection use with a call to `materialize_transactions`. Because of this, adapters must opt in to get this new behaviour by implementing `supports_lazy_transactions?`.

If `raw_connection` is used to get a reference to the underlying database connection, the behaviour is disabled and transactions are opened eagerly, as we can't know how the connection will be used. However, when the connection is checked back into the pool, we can assume that the application won't use the reference again and re-enable lazy transactions. This prevents a single `raw_connection` call from disabling lazy transactions for the lifetime of the connection.
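
A small sketch of the observable effect described above (the model is illustrative):

```ruby
# A transaction block that issues no queries no longer sends BEGIN/COMMIT.
Post.transaction do
  # nothing executed here
end

# Saving an unchanged record opens an empty transaction, so it also skips
# the BEGIN/COMMIT round trips.
post = Post.first
post.save
```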

* Add database configuration to disable advisory locks (Guo Xiang Tan, 2018-08-22; 1 file changed, -0/+8)

https://github.com/rails/rails/issues/31190
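
A hedged sketch of the configuration; the `advisory_locks` key is my reading of this change (normally it would live in `config/database.yml`), so verify against the adapter before relying on it:

```ruby
# Equivalent to setting `advisory_locks: false` in config/database.yml.
ActiveRecord::Base.establish_connection(
  adapter:        "postgresql",
  database:       "myapp_development",
  advisory_locks: false # skip the migration advisory lock, e.g. behind PgBouncer
)
```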

* PostgreSQL 10 new relkind for partitioned tables (#31336) (Yannick Schutz, 2018-07-27; 1 file changed, -0/+22)

* PostgreSQL 10 new relkind for partitioned tables

Starting with PostgreSQL 10, we can now have natively partitioned tables.

* Add comment
* Remove extra space
* Add test for partition table in postgreSQL10
* Select 'p' for "BASE TABLE" and add a test case to support PostgreSQL 10 partition tables
* Address RuboCop offense
* Addressed incorrect `postgresql_version`

Fixes #33008.

[Yannick Schutz & Yasuo Honda & Ryuta Kamizono]

* OS X -> macOS [Closes #30313] (Xavier Noria, 2018-06-23; 1 file changed, -1/+1)

[Jon Moss & Xavier Noria]

* Parse raw value only when a value came from user in numericality validator (Ryuta Kamizono, 2018-05-28; 1 file changed, -1/+4)

`parse_raw_value_as_a_number` may not always be able to parse a raw value from the database as a number without type casting (e.g. "$150.55" in money format).

Fixes #32531.

* Make force equality checking more strictly not to allow serialized attribute (Ryuta Kamizono, 2018-05-25; 1 file changed, -0/+6)

Since #26074, force equality checking was introduced to build a predicate consistently for both `find` and `create` (fixes #27313). But the assumption that only array/range attributes have a subtype was wrong. We need to make force equality checking stricter so that it does not allow serialized attributes.

Fixes #32761.

* Fix `CustomCops/AssertNot` to allow it to have failure message (Ryuta Kamizono, 2018-05-13; 2 files changed, -3/+3)

Follow up of #32605.

* Remove unnecessary `respond_to?(:report_on_exception)` checking (yuuji.yaginuma, 2018-03-02; 1 file changed, -2/+2)

Since Rails 6 requires Ruby 2.4.1+.

* Fix `#columns_for_distinct` of MySQL and PostgreSQL (kg8m, 2018-02-27; 1 file changed, -10/+10)

Makes `ActiveRecord::FinderMethods#limited_ids_for` use correct primary key values even if the `ORDER BY` columns include another table's primary key.

Fixes #28364.

* PostgreSQL: Allow BC dates like datetime consistently (Ryuta Kamizono, 2018-02-23; 1 file changed, -0/+18)

BC dates are supported by both the date and datetime types.

https://www.postgresql.org/docs/current/static/datatype-datetime.html

Since #1097, the new datetime allows year zero as 1 BC, but the new date does not. For consistency, it should be allowed in the new date as well.

* PostgreSQL: Treat infinite values in date like datetime consistently (Ryuta Kamizono, 2018-02-23; 2 files changed, -0/+63)

The values infinity and -infinity are supported by both the date and timestamp types.

https://www.postgresql.org/docs/current/static/datatype-datetime.html#DATATYPE-DATETIME-SPECIAL-TABLE

Also, it cannot be known whether a value is infinity unless the value is cast. I've added `QueryAttribute#infinity?` to handle that case.

Closes #27585.
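
A hedged sketch of the behaviour on a PostgreSQL `date` column (the model and column are made up):

```ruby
event = Event.create!(ends_on: Float::INFINITY) # persisted as the special 'infinity' date
event.reload.ends_on                            # => Float::INFINITY
```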

* Deprecate update_attributes and update_attributes! (Eddie Lebow, 2018-02-17; 1 file changed, -2/+2)

Closes #31998.
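
The replacement is the existing `update`/`update!` pair; a trivial sketch:

```ruby
user = User.first

# Deprecated:
#   user.update_attributes(name: "Eileen")
#   user.update_attributes!(name: "Eileen")

# Use instead:
user.update(name: "Eileen")
user.update!(name: "Eileen")
```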

* Dump correctly index nulls order for postgresql (fatkodima, 2018-01-28; 2 files changed, -0/+34)
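
For context, a hedged example of an index whose NULLS ordering should now survive a schema dump (the table and column are illustrative):

```ruby
# In a migration:
add_index :users, :last_seen_at, order: { last_seen_at: "DESC NULLS LAST" }

# Expected in db/schema.rb after this fix (shape approximate):
#   t.index ["last_seen_at"], name: "index_users_on_last_seen_at",
#           order: { last_seen_at: "DESC NULLS LAST" }
```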

* Use assert_predicate and assert_not_predicate (Daniel Colson, 2018-01-25; 17 files changed, -61/+61)
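
A short illustration of the assertion style this commit switches to (the asserted object is made up):

```ruby
# Instead of:
assert user.valid?
assert_not user.errors.any?

# Prefer the predicate assertions, which give clearer failure messages:
assert_predicate user, :valid?
assert_not_predicate user.errors, :any?
```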