Reduce unused allocations when casting UUIDs for Postgres
Using the subscript method `#[]` on a string has several overloads and a
rather complex implementation. One of the overloads accepts a regular
expression, runs a match, and returns the receiver (if it matched) or one
of the groups from the MatchData.
The function of the `UUID#cast` method is to cast a UUID to a type and
format acceptable by Postgres. Naturally, UUIDs are supposed to be strings
of a certain format, but it had been determined that it was not ideal for
the framework to send just any old string to Postgres and let the engine
complain when "foobar" or "" was sent, being obviously of the wrong format
for a valid UUID. Therefore this code was written to facilitate the
checking, and if the value was not of the correct format, a `nil` would be
returned, as is conventional in Rails.
Now, the subscript method will allocate one or more strings on a match and
return one of them, based on the index parameter. However, there is no
need for a new string, as a UUID of the correct format is already such,
and as long as the format has been verified the supplied string is
adequate for consumption by the database.
The subscript method also creates a MatchData object which will never be
used, and so must eventually be garbage collected.
Garbage collection indeed. This innocuous method tends to be called quite
a lot: if the primary key of a table is a UUID, this method is called; if
the foreign key of a relation is a UUID, once again this method is called.
If that foreign key belongs to a has_many relationship with dozens of
objects, then dozens of UUIDs will be cast to a dup of themselves and
spawn dozens of MatchData objects, and so on.
So, for users that:
* Use UUIDs as primary keys
* Use Postgres
* Operate on collections of objects
this change yields significant savings in total allocations, and may save
many garbage collections.
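A minimal sketch of the allocation-free approach, assuming a simplified
format check (the constant, method name, and pattern below are
illustrative, not the exact Rails implementation): validate with a boolean
match and return the original string instead of slicing it with `#[]`.
```ruby
# Hypothetical format check; the real pattern in Rails is stricter.
ACCEPTABLE_UUID = /\A\h{8}-\h{4}-\h{4}-\h{4}-\h{12}\z/

def cast_uuid(value)
  value = value.to_s
  # Regexp#match? returns true/false and allocates no MatchData,
  # so the verified string can be passed through unchanged.
  value if ACCEPTABLE_UUID.match?(value)
end

cast_uuid("e566d318-1b27-43a0-9e17-2fa8a4a0e2a3") # => the same string, no dup
cast_uuid("foobar")                               # => nil
```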
That is considered silently leaking information.
If type casting doesn't return any actual value, it should not be
matched to any record.
Fixes #33624.
Closes #33946.
Fix the regex that extracts mismatched foreign key information
The CI failure for `test_errors_for_bigint_fks_on_integer_pk_table` is
due to the loose regex that extracts all `` `(\w+)` ``-like parts from
the message (`:foreign_key` should be `"old_car_id"`, but was `"engines"`):
https://travis-ci.org/rails/rails/jobs/494123455#L1703
I've made the regex stricter and exercised the mismatched foreign key
tests more.
Fixes #35294
Currently `conn.column_exists?("testings", "created_at", "datetime")`
returns false even if the table has the `created_at` column.
The reason is that `column.type` is a symbol but the passed `type` is not
normalized to a symbol, unlike `column_name`; that is surprising behavior
to me.
I've changed it to normalize the value before comparison.
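A minimal sketch of the normalization, assuming a simplified column
object with only `#name` and `#type` (the real `column_exists?` also
checks options such as `:default` and `:null`):
```ruby
Column = Struct.new(:name, :type)

def column_exists?(columns, column_name, type = nil)
  column_name = column_name.to_s
  columns.any? do |column|
    column.name == column_name &&
      (type.nil? || column.type == type.to_sym) # normalize "datetime" => :datetime
  end
end

columns = [Column.new("created_at", :datetime)]
column_exists?(columns, "created_at", "datetime") # => true once normalized
```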
Since #31230, `change_column` is executed as a bulk statement.
That caused the column default to be type cast incorrectly, by looking up
the type before the change rather than the type after the change.
In a bulk statement, we can't use `change_column_default_for_alter` if
the statement changes the column type.
This fixes the type casting to use the constructed target sql_type.
Fixes #34938.
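An illustrative migration for the scenario (table and column names are
made up): changing a column's type and default in a single bulk
`ALTER TABLE`, where the default must be cast with the new type.
```ruby
change_table :orders, bulk: true do |t|
  # Before the fix, the default 0 could be type cast using the old
  # column type instead of the constructed target integer sql_type.
  t.change :status, :integer, default: 0, null: false
end
```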
`ActiveRecord::ConnectionAdapters::SQLite3Adapter#valid_alter_table_type?`
MySQL: `ROW_FORMAT=DYNAMIC` create table option by default
Since MySQL 5.7.9, the `innodb_default_row_format` option defines the
default row format for InnoDB tables. The default setting is `DYNAMIC`.
That row format is required for indexing `varchar(255)` columns with
`utf8mb4`.
On MySQL 5.6, CI won't pass even if the MySQL server is configured the
same as MySQL 5.7 (`innodb_file_per_table = 1`,
`innodb_file_format = 'Barracuda'`, and `innodb_large_prefix = 1`), since
InnoDB tables are created with the `COMPACT` row format by default on
MySQL 5.6, so indexes on string columns with `utf8mb4` cannot be created.
Making `ROW_FORMAT=DYNAMIC` the default create table option for legacy
MySQL versions mitigates the indexing issue on the user side, and it lets
CI pass on a properly configured MySQL 5.6.
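Illustratively, with this default a plain `create_table` on a legacy
MySQL version would produce something along these lines (table name and
the exact emitted SQL are for illustration only):
```ruby
create_table :articles do |t|
  t.string :title # varchar(255)
  t.index :title  # indexable with utf8mb4 thanks to ROW_FORMAT=DYNAMIC
end
# roughly: CREATE TABLE `articles` (...) ROW_FORMAT=DYNAMIC
```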
Currently we sometimes find a redundant begin block in code review
(e.g. https://github.com/rails/rails/pull/33604#discussion_r209784205).
I'd like to enable the `Style/RedundantBegin` cop to avoid that, since
rescue/else/ensure are allowed inside do/end blocks in Ruby 2.5
(https://bugs.ruby-lang.org/issues/12906), so we'll probably run into
that situation more often than before.
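For illustration, the kind of rewrite the cop encourages (method and
exception names are made up), relying on rescue being attached directly
to do/end blocks in Ruby 2.5+:
```ruby
# Before: redundant begin/end inside the block
items.each do |item|
  begin
    process(item)
  rescue ProcessingError
    log_failure(item)
  end
end

# After: rescue attached directly to the do/end block
items.each do |item|
  process(item)
rescue ProcessingError
  log_failure(item)
end
```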
since Ruby 2.5
https://bugs.ruby-lang.org/issues/14133
I originally named this `StatementInvalid` because that's what we do in
GitHub, but `@tenderlove` pointed out that this means apps can't test
for or explicitly rescue this error. `StatementInvalid` is pretty broad,
so I've renamed this to `ReadOnlyError`.
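With a dedicated error class, applications can rescue or assert on it
explicitly; a small hedged example (model name made up):
```ruby
begin
  Dog.create!(name: "Sparky")
rescue ActiveRecord::ReadOnlyError
  # the connection is currently preventing writes; retry on a writer, log, etc.
end
```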
This PR adds the ability to prevent writes to a database even if the
database user is able to write (i.e. the database is a primary and not a
replica).
This is useful for a few reasons:
1) When converting your database from a single db to a primary/replica
setup, you can find and fix all the writes happening on reads early on.
2) When we implement automatic database switching, or when an app is
manually switching connections, this feature can be used to ensure reads
are reading and writes are writing. We want to make sure we raise if we
ever try to write in read mode, regardless of database type.
3) For local development, if you don't want to set up multiple databases
but do want to support rw/ro queries.
This should be used in conjunction with `connected_to` in write mode.
For example:
```
ActiveRecord::Base.connected_to(role: :writing) do
  Dog.connection.while_preventing_writes do
    Dog.create! # will raise because we're preventing writes
  end
end

ActiveRecord::Base.connected_to(role: :reading) do
  Dog.connection.while_preventing_writes do
    Dog.first # will not raise because we're not writing
  end
end
```
Fixes an issue where "user post" is misinterpreted as "\"user\".\"post\""
when quoting table names with the PostgreSQL adapter.
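A hedged illustration of the expected behaviour (connection setup
omitted): a table name containing a space is quoted as a single
identifier rather than being split as if it were schema-qualified.
```ruby
conn = ActiveRecord::Base.connection
conn.quote_table_name("user post")   # => "\"user post\"", not "\"user\".\"post\""
conn.quote_table_name("schema.post") # schema-qualified names are still split and quoted
```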
https://www.postgresql.org/support/versioning/
- 9.1 EOLed in September 2016.
- 9.2 EOLed in September 2017.
- 9.3 is also out of support since Nov 8, 2018. https://www.postgresql.org/about/news/1905/
I think it may be a little bit early to drop PostgreSQL 9.3 yet.
* Deprecated `supports_ranges?` since no other database supports the range data type
* Add `supports_materialized_views?` to the abstract adapter.
  Materialized views themselves are supported by other databases; other connection adapters may support them.
* Remove `with_manual_interventions`.
  It was only necessary for PostgreSQL 9.1 or earlier.
* Drop CI against PostgreSQL 9.2.
Before:
```
LOG: execute <unnamed>: SELECT t.oid, t.typname
FROM pg_type as t
WHERE t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'bool')
LOG: execute <unnamed>: SELECT t.oid, t.typname, t.typelem, t.typdelim, t.typinput, r.rngsubtype, t.typtype, t.typbasetype
FROM pg_type as t
LEFT JOIN pg_range as r ON oid = rngtypid
WHERE
t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'text', 'varchar', 'char', 'name', 'bpchar', 'bool', 'bit', 'varbit', 'timestamptz', 'date', 'money', 'bytea', 'point', 'hstore', 'json', 'jsonb', 'cidr', 'inet', 'uuid', 'xml', 'tsvector', 'macaddr', 'citext', 'ltree', 'interval', 'path', 'line', 'polygon', 'circle', 'lseg', 'box', 'time', 'timestamp', 'numeric')
OR t.typtype IN ('r', 'e', 'd')
OR t.typinput::varchar = 'array_in'
OR t.typelem != 0
LOG: statement: SHOW TIME ZONE
LOG: statement: SELECT 1
LOG: execute <unnamed>: SELECT COUNT(*)
FROM pg_class c
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','m') -- (r)elation/table, (v)iew, (m)aterialized view
AND c.relname = 'accounts'
AND n.nspname = ANY (current_schemas(false))
```
After:
```
LOG: execute <unnamed>: SELECT t.oid, t.typname
FROM pg_type as t
WHERE t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'bool')
LOG: execute <unnamed>: SELECT t.oid, t.typname, t.typelem, t.typdelim, t.typinput, r.rngsubtype, t.typtype, t.typbasetype
FROM pg_type as t
LEFT JOIN pg_range as r ON oid = rngtypid
WHERE
t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'text', 'varchar', 'char', 'name', 'bpchar', 'bool', 'bit', 'varbit', 'timestamptz', 'date', 'money', 'bytea', 'point', 'hstore', 'json', 'jsonb', 'cidr', 'inet', 'uuid', 'xml', 'tsvector', 'macaddr', 'citext', 'ltree', 'interval', 'path', 'line', 'polygon', 'circle', 'lseg', 'box', 'time', 'timestamp', 'numeric')
OR t.typtype IN ('r', 'e', 'd')
OR t.typinput::varchar = 'array_in'
OR t.typelem != 0
LOG: statement: SHOW TIME ZONE
LOG: statement: SELECT 1
LOG: execute <unnamed>: SELECT COUNT(*)
FROM pg_class c
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','m') -- (r)elation/table, (v)iew, (m)aterialized view
AND c.relname = 'accounts'
AND n.nspname = ANY (current_schemas(false))
```
in indexdef to be wrapped up by double quotes
Fixes #34493.
*Thomas Bianchini*
Adjust bind length of SQLite to default (999)
Change `#bind_params_length` in SQLite adapter to return the default
maximum amount (999). See https://www.sqlite.org/limits.html
This commit adds support for the
`ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.create_unlogged_tables`
setting, which turns `CREATE TABLE` SQL statements into
`CREATE UNLOGGED TABLE` statements.
This can improve PostgreSQL performance but at the
cost of data durability, and thus it is highly recommended
that you *DO NOT* enable this in a production environment.
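A hedged sketch of opting in for a test environment (where to put this
depends on the app's boot order; only the setting name comes from the
commit above):
```ruby
# e.g. config/environments/test.rb or an initializer
require "active_record/connection_adapters/postgresql_adapter"

ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.create_unlogged_tables = true
# Subsequent CREATE TABLE statements are emitted as CREATE UNLOGGED TABLE.
```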
Follow up a741208f80dd33420a56486bd9ed2b0b9862234a.
Since a741208, `Decimal#serialize` (the `Decimal` type is the superclass
of the `Money` type) is no longer a no-op, so it consistently
serializes/deserializes a value as a decimal, even for a schema default.
In Ruby 2.3 or later, `String#+@` is available and `+@` is faster than `dup`.
```ruby
# frozen_string_literal: true

require "bundler/inline"

gemfile(true) do
  source "https://rubygems.org"
  gem "benchmark-ips"
end

Benchmark.ips do |x|
  x.report('+@') { +"" }
  x.report('dup') { "".dup }
  x.compare!
end
```
```
$ ruby -v benchmark.rb
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux]
Warming up --------------------------------------
                  +@   282.289k i/100ms
                 dup   187.638k i/100ms
Calculating -------------------------------------
                  +@      6.775M (± 3.6%) i/s -     33.875M in   5.006253s
                 dup      3.320M (± 2.2%) i/s -     16.700M in   5.032125s

Comparison:
                  +@:  6775299.3 i/s
                 dup:  3320400.7 i/s - 2.04x slower
```
Since #33875, Rails has dropped support for MySQL 5.1, which does not
support utf8mb4. We no longer need to conservatively use the legacy utf8
(utf8mb3) character set.
Follow up #33874.
Related #23393.
* Use utf8mb4 character set by default
`utf8mb4` character set supports supplementary characters including emoji.
`utf8` character set with 3-Byte encoding is not enough to support them.
There was a downside to a 4-byte character set with MySQL 5.5 and 5.6:
"ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes"
for the Rails string data type, which is mapped to the varchar(255) type.
MySQL 5.7 supports 3072 byte key prefix length by default.
* Remove `DEFAULT COLLATE` from Active Record unit test databases
There should be no "one size fits all" collation in MySQL 5.7.
Let MySQL server choose the default collation for Active Record
unit test databases.
Users can choose the best collation for their databases by setting
`options[:collation]` based on their requirements (see the sketch after
the references below).
* InnoDB supports FULLTEXT indexes since MySQL 5.6, so it does not have
to use the MyISAM storage engine, whose maximum key length is 1000 bytes.
Using the MyISAM storage engine with the utf8mb4 character set would cause
"Specified key was too long; max key length is 1000 bytes"
https://dev.mysql.com/doc/refman/5.6/en/innodb-fulltext-index.html
* References
"10.9.1 The utf8mb4 Character Set (4-Byte UTF-8 Unicode Encoding)"
https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-utf8mb4.html
"10.9.2 The utf8mb3 Character Set (3-Byte UTF-8 Unicode Encoding)"
https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-utf8.html
"14.8.1.7 Limits on InnoDB Tables"
https://dev.mysql.com/doc/refman/5.7/en/innodb-restrictions.html
> If innodb_large_prefix is enabled (the default), the index key prefix limit is 3072 bytes
> for InnoDB tables that use DYNAMIC or COMPRESSED row format.
* CI against MySQL 5.7
Followed this instruction and changed root password to empty string.
https://docs.travis-ci.com/user/database-setup/#MySQL-57
* The recommended minimum version of MySQL is 5.7.9
to support utf8mb4 character set and `innodb_default_row_format`
MySQL 5.7.9 introduces `innodb_default_row_format` to support 3072 byte length index by default.
Users do not have to change MySQL database configuration to support Rails string type.
https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_default_row_format
https://dev.mysql.com/doc/refman/5.7/en/innodb-restrictions.html
> If innodb_large_prefix is enabled (the default),
> the index key prefix limit is 3072 bytes for InnoDB tables that use DYNAMIC or COMPRESSED row format.
* The recommended minimum version of MariaDB is 10.2.2
MariaDB 10.2.2 is the first version of MariaDB supporting `innodb_default_row_format`
Also MariaDB says "MySQL 5.7 is compatible with MariaDB 10.2".
- innodb_default_row_format
https://mariadb.com/kb/en/library/xtradbinnodb-server-system-variables/#innodb_default_row_format
- "MariaDB versus MySQL - Compatibility"
https://mariadb.com/kb/en/library/mariadb-vs-mysql-compatibility/
> MySQL 5.7 is compatible with MariaDB 10.2
- "Supported Character Sets and Collations"
https://mariadb.com/kb/en/library/supported-character-sets-and-collations/
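As referenced in the collation bullet above, a per-table override remains
available; an illustrative sketch (table name and collation values are
examples only):
```ruby
create_table :comments, options: "ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci" do |t|
  t.string :author
  t.text :body
end
```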
Rather than a configuration on the connection.
Omit BEGIN/COMMIT statements for empty transactions
If a transaction is opened and closed without any queries being run, we
can safely omit the `BEGIN` and `COMMIT` statements, as they only exist
to modify the connection's behaviour inside the transaction. This
removes the overhead of those statements when saving a record with no
changes, which makes workarounds like `save if changed?` unnecessary.
This implementation buffers transactions inside the transaction manager
and materializes them the next time the connection is used. For this to
work, the adapter needs to guard all connection use with a call to
`materialize_transactions`. Because of this, adapters must opt in to get
this new behaviour by implementing `supports_lazy_transactions?`.
If `raw_connection` is used to get a reference to the underlying
database connection, the behaviour is disabled and transactions are
opened eagerly, as we can't know how the connection will be used.
However when the connection is checked back into the pool, we can assume
that the application won't use the reference again and reenable lazy
transactions. This prevents a single `raw_connection` call from
disabling lazy transactions for the lifetime of the connection.
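Illustratively (model name made up), saving an already-persisted record
with no changes opens and closes a transaction without running any
queries, so with lazy transactions no BEGIN/COMMIT is sent at all:
```ruby
post = Post.find(1)
post.save # no attribute changes => no UPDATE, and the wrapping
          # transaction never materializes, so no BEGIN/COMMIT is sent
```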
https://github.com/rails/rails/issues/31190
* PostgreSQL 10 new relkind for partitioned tables
Starting with PostgreSQL 10, we can now have partitioned tables natively
* Add comment
* Remove extra space
* Add a test for partitioned tables in PostgreSQL 10
* Select 'p' for "BASE TABLE" and add a test case
  to support PostgreSQL 10 partitioned tables
* Address RuboCop offense
* Addressed incorrect `postgresql_version`
Fixes #33008.
[Yannick Schutz & Yasuo Honda & Ryuta Kamizono]
Follow up of #33358 for SQLite3.
A correct, but not obvious use of `ActiveSupport::Testing::MethodCallAssertions`, which might also have been part of #33337 or #33391.
Missed these in preparing #33337
Step 1 in #33162
[utilum + bogdanvlviv]
Support readonly option in SQLite3Adapter
Readonly sqlite database files are very useful as a data format for
storing configuration/lookup data that is too complicated for YAML
files. But since such files would typically be committed to a source
control repository, it's important to ensure that they are truly safe
from being inadvertently modified. Unfortunately using unix permissions
isn't enough, as sqlite will "helpfully" add the write bit to a database
file whenever it's written to.
sqlite3-ruby has supported a `:readonly` option since version 1.3.2 (see
https://github.com/sparklemotion/sqlite3-ruby/commit/c20c9f5dd2990042)
This simply passes that option through to the adapter if present in the
config hash. I think this is best considered an adapter-specific option
since no other supported database has an identical concept.
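A hedged sketch of wiring it up (the database path is made up); the
`readonly` key is simply forwarded to sqlite3-ruby:
```ruby
ActiveRecord::Base.establish_connection(
  adapter: "sqlite3",
  database: "db/reference_data.sqlite3", # hypothetical lookup database
  readonly: true                         # passed through to SQLite3::Database.new
)
```
The equivalent `readonly: true` entry in the config hash loaded from
`config/database.yml` should behave the same way.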
[Jon Moss & Xavier Noria]
- MariaDB 10.3.7 is the first GA release
https://mariadb.com/kb/en/library/mariadb-1037-release-notes/
- MariaDB 10.3 translates `LENGTH()` to `OCTET_LENGTH()` function
https://mariadb.com/kb/en/library/sql_modeoracle-from-mariadb-103/
> MariaDB translates LENGTH() to OCTET_LENGTH()
- MySQL does NOT translate `LENGTH()` to `OCTET_LENGTH()`
However, it translates `OCTET_LENGTH()` to `LENGTH()`
Here are generated schema dumps of this test to show the differences
between MySQL and MariaDB:
* MySQL 8.0 (Server version: 8.0.11 MySQL Community Server - GPL)
```ruby
create_table "virtual_columns", options: "ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci", force: :cascade do |t|
  t.string "name"
  t.virtual "upper_name", type: :string, as: "upper(`name`)"
  t.virtual "name_length", type: :integer, as: "length(`name`)", stored: true
  t.virtual "name_octet_length", type: :integer, as: "length(`name`)", stored: true
end
```
* MariaDB 10.3 (Server version: 10.3.7-MariaDB-1:10.3.7+maria~bionic-log mariadb.org binary distribution)
```ruby
create_table "virtual_columns", options: "ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci", force: :cascade do |t|
  t.string "name"
  t.virtual "upper_name", type: :string, as: "ucase(`name`)"
  t.virtual "name_length", type: :integer, as: "octet_length(`name`)", stored: true
  t.virtual "name_octet_length", type: :integer, as: "octet_length(`name`)", stored: true
end
```
Since `parse_raw_value_as_a_number` may not always parse the raw value
from the database as a number without type casting (e.g. "$150.55" in
money format).
Fixes #32531.
Since #26074, force equality checking was introduced to build a predicate
consistently for both `find` and `create` (fixes #27313).
But the assumption that only array/range attributes have a subtype was
wrong. We need to make force equality checking stricter so that it does
not allow serialized attributes.
Fixes #32761.
Follow up of #32605.
This autocorrects the violations after adding a custom cop in
3305c78dcd.
For legacy reasons Rails stores time columns on sqlite as full
timestamp strings. However because the date component wasn't being
normalized this meant that when they were read back they were being
prefixed with 2001-01-01 by ActiveModel::Type::Time. This had a twofold
result: first, the fast code path wasn't being used because the string
was invalid; second, the fractional second component read by the
Date._parse code path was being corrupted.
Fix this by a combination of normalizing the timestamps on writing
and also changing Active Model to be more lenient when detecting
whether a string starts with a date component before creating the
dummy time value for parsing.