| Commit message | Author | Age | Files | Lines |
|
Use raw time string from DB to generate ActiveRecord#cache_version
|
When an `updated_at` column exists on the model but is not available on the instance (likely due to a select), we should raise an error rather than silently not generating a cache_version. Without this behavior, it's likely that cache entries cannot be invalidated, and this will happen without notice.
This behavior was reported and described by @lsylvester in https://github.com/rails/rails/pull/34197#issuecomment-429668759.
|
Currently, the `updated_at` field is used to generate a `cache_version`. Some database adapters return this timestamp value as a string that must then be converted to a Time value. This process requires a lot of memory and even more CPU time. In the case where this value is only being used for a cache version, we can skip the Time conversion by using the string value directly.
- This PR preserves existing cache format by converting a UTC string from the database to `:usec` format.
- Some databases return an already converted Time object; in those instances, we can use `created_at` directly.
- The `updated_at_before_type_cast` can be a value that comes from either the database or the user. We only want to optimize the case where it is from the database.
- If the format of the cache version has been changed, we cannot apply this optimization, and it is skipped.
- If the format of the time in the database is not UTC, then we cannot use this optimization, and it is skipped.
Some databases (notably PostgreSQL) return a variable-length fractional-seconds value in the time string. If the value ends in a zero, it is truncated: for instance, instead of `2018-10-12 05:00:00.000000` the value `2018-10-12 05:00:00` is returned. We detect this case and pad the missing zeros to ensure consistent cache version generation.
Before: Total allocated: 743842 bytes (6626 objects)
After: Total allocated: 702955 bytes (6063 objects)
(743842 - 702955) / 743842.0 # => 5.5% ⚡️⚡️⚡️⚡️⚡️
Using the CodeTriage application and derailed benchmarks, this PR shows a 9-11% (statistically significant) performance improvement over the commit before it.
Special thanks to @lsylvester for helping to figure out a way to preserve the usec format and for helping with many implementation details.
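A minimal sketch of the padding idea described above (hypothetical helper, not the actual Rails implementation):

```ruby
# Pad a DB timestamp string so truncated trailing zeros don't change the cache version.
def pad_fractional_seconds(timestamp)
  whole, fraction = timestamp.split(".")
  "#{whole}.#{(fraction || "").ljust(6, "0")}"
end

pad_fractional_seconds("2018-10-12 05:00:00")     # => "2018-10-12 05:00:00.000000"
pad_fractional_seconds("2018-10-12 05:00:00.123") # => "2018-10-12 05:00:00.123000"
```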
|
Additional types of ResultSet should not contain keys of #attributes_to_define_after_schema_loads
|
This reverts commit f2ab8b64d4d46d7199f94c3e21021f414a4d259a, reversing
changes made to b9c7305dbe57931a153a540d49ae5d469af61a14.
Reason: `scope.take` is not the same as `scope.to_a.first`.
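For context, a rough illustration of the difference (model and scope are assumed for the example):

```ruby
# `take` pushes the limit into SQL; `to_a.first` loads every matching row first.
Post.where(published: true).take       # SELECT ... LIMIT 1
Post.where(published: true).to_a.first # SELECT ... (all rows loaded), then Array#first
```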
|
Reuse code in AR::Association#find_target
|
Before this patch, singular and collection associations
had different implementations of the #find_target method.
This patch reuses the code properly by extending the low-level
methods.
|
Make it possible to override the implicit order column
|
When calling ordered finder methods such as +first+ or +last+ without an
explicit order clause, ActiveRecord sorts records by primary key. This
can result in unpredictable and surprising behaviour when the primary
key is not an auto-incrementing integer, for example when it's a UUID.
This change makes it possible to override the column used for implicit
ordering such that +first+ and +last+ will return more predictable
results. For example:

  class Project < ActiveRecord::Base
    self.implicit_order_column = "created_at"
  end
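With that configuration in place, the implicitly ordered finders use the configured column instead of the primary key, roughly:

```ruby
Project.first # ORDER BY "projects"."created_at" ASC LIMIT 1
Project.last  # ORDER BY "projects"."created_at" DESC LIMIT 1
```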
|
This reverts commit d52f74480ae46cd3de7ce697093136b01c7a2172.
Since 24adc20, the `Helpers` constant in the `ActiveRecord` namespace is
not referenced anymore.
|
It should be referenced by its fully qualified name from Active Record.
|
Bump the minimum version of PostgreSQL to 9.3
|
https://www.postgresql.org/support/versioning/
- 9.1 was EOLed in September 2016.
- 9.2 was EOLed in September 2017.
9.3 has also been unsupported since Nov 8, 2018. https://www.postgresql.org/about/news/1905/
I think it may be a little bit early to drop PostgreSQL 9.3 yet.
* Deprecate `supports_ranges?` since no other database supports the range data type
* Add `supports_materialized_views?` to the abstract adapter (see the sketch after this list)
  Materialized views themselves are supported by other databases, so other connection adapters may support them
* Remove `with_manual_interventions`
  It was only necessary for PostgreSQL 9.1 or earlier
* Drop CI against PostgreSQL 9.2
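A minimal illustration of the new capability flag mentioned above (the return value depends on the adapter in use):

```ruby
# True on the PostgreSQL adapter; other adapters can override this as they add support.
ActiveRecord::Base.connection.supports_materialized_views?
```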
|
When running `exec_query` with `INSERT` (or other write commands), MySQL
returns `ActiveRecord::Result`.
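A small sketch of the behaviour described above (table and column names are made up):

```ruby
result = ActiveRecord::Base.connection.exec_query(
  "INSERT INTO posts (title) VALUES ('hello')"
)
result.class # => ActiveRecord::Result
```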
|
Redact SQL in errors
|
Move `ActiveRecord::StatementInvalid` SQL to error property.
Also add bindings as an error property.
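A sketch of how the redacted error might be inspected, assuming the properties are exposed as `sql` and `binds` as the commit describes:

```ruby
begin
  Post.where("nonexistent_column = 1").load
rescue ActiveRecord::StatementInvalid => error
  error.message # no longer embeds the full statement
  error.sql     # the statement that failed
  error.binds   # the bind parameters, if any
end
```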
|
Before:
```
LOG: execute <unnamed>: SELECT t.oid, t.typname
FROM pg_type as t
WHERE t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'bool')
LOG: execute <unnamed>: SELECT t.oid, t.typname, t.typelem, t.typdelim, t.typinput, r.rngsubtype, t.typtype, t.typbasetype
FROM pg_type as t
LEFT JOIN pg_range as r ON oid = rngtypid
WHERE
t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'text', 'varchar', 'char', 'name', 'bpchar', 'bool', 'bit', 'varbit', 'timestamptz', 'date', 'money', 'bytea', 'point', 'hstore', 'json', 'jsonb', 'cidr', 'inet', 'uuid', 'xml', 'tsvector', 'macaddr', 'citext', 'ltree', 'interval', 'path', 'line', 'polygon', 'circle', 'lseg', 'box', 'time', 'timestamp', 'numeric')
OR t.typtype IN ('r', 'e', 'd')
OR t.typinput::varchar = 'array_in'
OR t.typelem != 0
LOG: statement: SHOW TIME ZONE
LOG: statement: SELECT 1
LOG: execute <unnamed>: SELECT COUNT(*)
FROM pg_class c
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','m') -- (r)elation/table, (v)iew, (m)aterialized view
AND c.relname = 'accounts'
AND n.nspname = ANY (current_schemas(false))
```
After:
```
LOG: execute <unnamed>: SELECT t.oid, t.typname
FROM pg_type as t
WHERE t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'bool')
LOG: execute <unnamed>: SELECT t.oid, t.typname, t.typelem, t.typdelim, t.typinput, r.rngsubtype, t.typtype, t.typbasetype
FROM pg_type as t
LEFT JOIN pg_range as r ON oid = rngtypid
WHERE
t.typname IN ('int2', 'int4', 'int8', 'oid', 'float4', 'float8', 'text', 'varchar', 'char', 'name', 'bpchar', 'bool', 'bit', 'varbit', 'timestamptz', 'date', 'money', 'bytea', 'point', 'hstore', 'json', 'jsonb', 'cidr', 'inet', 'uuid', 'xml', 'tsvector', 'macaddr', 'citext', 'ltree', 'interval', 'path', 'line', 'polygon', 'circle', 'lseg', 'box', 'time', 'timestamp', 'numeric')
OR t.typtype IN ('r', 'e', 'd')
OR t.typinput::varchar = 'array_in'
OR t.typelem != 0
LOG: statement: SHOW TIME ZONE
LOG: statement: SELECT 1
LOG: execute <unnamed>: SELECT COUNT(*)
FROM pg_class c
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','m') -- (r)elation/table, (v)iew, (m)aterialized view
AND c.relname = 'accounts'
AND n.nspname = ANY (current_schemas(false))
```
|
in indexdef to be wrapped up by double quotes
Fixes #34493.
*Thomas Bianchini*
|
Fix query cache for multiple connections
|
Currently the query cache is only aware of one connection handler, so once we added
multiple databases switching on the handler, we broke the query cache for
those reading connections.
While #34054 is the proper fix, that fix is not straightforward, and I
want to make sure that the query cache isn't just broken for all other
connections not in the main handler.
|
The connection handler was using the RuntimeRegistry, which implies it's
a per-thread registry. But it's actually per fiber.
If you have an application that uses fibers and you're using multiple
databases, when you switch the connection handler to swap connections,
new fibers running on the same thread used to get a different connection
id. This PR changes the code to use truly per-thread storage so that we get
the same connection.
Fixes https://github.com/rails/rails/issues/30047
[Eileen M. Uchitelle, Aaron Patterson, & Arthur Neeves]
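The underlying Ruby distinction, shown as a general illustration rather than the exact Rails code:

```ruby
# Thread.current[] storage is fiber-local; thread_variable_get/set is per-thread.
Thread.current[:connection_id] = 1
Thread.current.thread_variable_set(:connection_id, 1)

Fiber.new do
  Thread.current[:connection_id]                     # => nil (a new fiber gets fresh fiber-local storage)
  Thread.current.thread_variable_get(:connection_id) # => 1   (still the same thread)
end.resume
```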
|
febeling/inconsistent-assignment-has-many-through-33942
Fix handling of duplicates for `replace` on has_many-through
|
There was a bug in the handling of duplicates when
assigning (replacing) associated records, which made the result
dependent on whether a given record was already associated before
being assigned anew. E.g.

  post.people = [person, person]
  post.people.count
  # => 2

while

  post.people = [person]
  post.people = [person, person]
  post.people.count
  # => 1
This change adds a test to provoke the former incorrect behavior, and
fixes it.
The cause of the bug was the handling of record collections as sets, and
using `-` (difference) and `&` (intersection) operations on them
indiscriminately. This temporary conversion to sets would eliminate
duplicates.
The fix is to decorate record collections for these operations, and
only for the `has_many :through` case. It is done by counting
occurrences and using the record together with its occurrence number as
the element, in order to make them work well in sets. Given

  a, b = *Person.all

then the collection used for finding the difference or intersection of
records would be internally changed from

  [a, b, a]

to

  [[a, 1], [b, 1], [a, 2]]

for these operations. So a first occurrence and a second occurrence
would be distinguishable, which is all that is necessary for this
task.
Fixes #33942.
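A rough sketch of the occurrence-counting decoration described above (hypothetical helper, not the actual Rails code):

```ruby
# Pair each record with how many times it has appeared so far, so that
# duplicates survive set-like operations such as `-` and `&`.
def with_occurrence_counts(records)
  counts = Hash.new(0)
  records.map { |record| [record, counts[record] += 1] }
end

with_occurrence_counts([a, b, a]) # => [[a, 1], [b, 1], [a, 2]]
```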
|
Exercise `connected_to` and `connects_to` methods
|
Since both methods are public API, I think it makes sense to add these tests
in order to prevent any regression in their behavior after the 6.0 release.
Exercise `connected_to`:
- Ensure that the method raises with both `database` and `role` arguments
- Ensure that the method raises without `database` and `role`
Exercise `connects_to`:
- Ensure that the method returns an array of established connections (as mentioned
  in the docs of the method)
Related to #34052
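A sketch of the calls being exercised (the abstract class and database names are assumed; the exact error class is not spelled out in the commit message):

```ruby
ActiveRecord::Base.connected_to(role: :reading) do
  # queries in this block use the reading connection
end

ActiveRecord::Base.connected_to(database: :animals, role: :reading) # raises: both arguments given
ActiveRecord::Base.connected_to                                     # raises: neither argument given

AnimalsBase.connects_to(database: { writing: :animals, reading: :animals_replica })
# => array of the established connections
```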
|
This commit fixes a small typo in documentation of the
"UNLOGGED" table option for PostgreSQL databases, and
clarifies the documentation slightly.
|
[ci skip]
|
* Arel: Implemented DB-aware NULL-safe comparison
* Fixed where clause inversion for NULL-safe comparison
* Renaming "null_safe_eq" to "is_not_distinct_from", "null_safe_not_eq" to "is_distinct_from"
[Dmytro Shteflyuk + Rafael Mendonça França]
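A sketch of the renamed predicates (table and values assumed; the exact SQL depends on the adapter-aware visitor):

```ruby
users = Arel::Table.new(:users)
users[:name].is_not_distinct_from(nil) # e.g. "users"."name" IS NOT DISTINCT FROM NULL on PostgreSQL
users[:name].is_distinct_from("Aaron") # the inverse predicate
# On MySQL the visitor emits the NULL-safe operator (<=>) instead of IS [NOT] DISTINCT FROM.
```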
|
Fix: Arel now emits a single pair of parens for UNION and UNION ALL
|
The MySQL adapter has a great implementation that suppresses multiple parens for UNION
SQL statements.
This moves that functionality to the generic implementation.
This also introduces that functionality for UNION ALL.
|
Adjust bind length of SQLite to default (999)
|
Change `#bind_params_length` in SQLite adapter to return the default
maximum amount (999). See https://www.sqlite.org/limits.html
|
This commit adds support for the
`ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.create_unlogged_tables`
setting, which turns `CREATE TABLE` SQL statements into
`CREATE UNLOGGED TABLE` statements.
This can improve PostgreSQL performance but at the
cost of data durability, and thus it is highly recommended
that you *DO NOT* enable this in a production environment.
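For example, the setting named above could be enabled where data durability doesn't matter, such as a test environment:

```ruby
# config/environments/test.rb (example placement; not recommended for production)
ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.create_unlogged_tables = true
```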
|
The test added in 12b0b26df7560ab5199ba830586864085441508f passes even
without this code since 9b8c7796a9c2048208aa843ad3dc477dffa8bdee, as the
call to `id` in `remember_transaction_record_state` now triggers a
`sync_with_transaction_state` which discards the leftover state from the
previous transaction.
This issue had already been fixed for `save!`, `destroy` and `touch` in
caae79a385ce112245262a17414bcd96bea013c2, but continued to affect `save`
because the call to `rollback_active_record_state!` in that method would
increment the transaction level before `add_to_transaction` could clear
it, preventing the fix from working correctly.
As `rollback_active_record_state!` was removed entirely in
48007d5390db47fc1223f57c8e7ab3ebb7c3a3d7, this code is no longer needed.
|
Use `t.index ...` instead.
|
Since quoted `Infinity` and `NaN` are valid data for PostgreSQL.
|
[fatkodima & Stefan Kanev]
|
Adds support for multiple databases to `rails db:schema:cache:dump`
and `rails db:schema:cache:clear`.
|
When a record with transactional callbacks is saved, it's attached to
the current transaction so that the callbacks can be run when the
transaction is committed. Records can also be added manually with
`add_transaction_record`, even if they have no transactional callbacks.
When a nested transaction is committed, its records are transferred to
the parent transaction, as transactional callbacks should only be run
when the outermost transaction is committed (the "real" transaction).
However, this currently only happens when the record has transactional
callbacks, and not when added manually with `add_transaction_record`.
If a record is added to a nested transaction, we should always attach it
to the parent transaction when the nested transaction is committed,
regardless of whether it has any transactional callbacks.
[Eugene Kenny & Ryuta Kamizono]
|
The `read_attribute` method always returns the primary key when asked to
read the `id` attribute, even if the primary key isn't named `id`, and
even if another attribute named `id` exists.
For the `inspect`, `attribute_for_inspect` and `pretty_print` methods,
this behaviour is undesirable, as they're used to examine the internal
state of the record. By using `_read_attribute` instead, we'll get the
real value of the `id` attribute.
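A sketch of the difference (model and column names assumed):

```ruby
class Thing < ActiveRecord::Base
  self.primary_key = "thing_code" # primary key is not named id, but an id column also exists
end

thing.read_attribute(:id)   # always returns the primary key value (thing_code)
thing._read_attribute("id") # returns the real value of the id attribute
```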
|
The `model_metadata` is only used if `model_class` is given.
If `model_class` is given, the `table_name` is always
`model_class.table_name`.
|
The `@connection` is no longer used since ee5ab22.
Originally the `@connection` was useless because it is only used in
`timestamp_column_names`, which is only used if `model_class` is given.
If `model_class` is given, the `@connection` is always
`model_class.connection`.
|