Load YAML for rake tasks without parsing ERB
This change adds a new method that loads the YAML for the database
config without parsing the ERB. This may seem odd, but bear with me:
When we added the ability to have rake tasks for multiple databases, we
started looping through the configurations to collect the namespaces so
we could do `rake db:create:my_second_db`. See #32274.
This caused a problem: if you had `Rails.config.max_threads` set in
your database.yml, it would blow up because the environment that defines
`max_threads` isn't loaded during `rake -T`. See #35468.
We tried to fix this by adding the ability to just load the YAML and
ignore ERB altogether, but that caused a bug in GitHub's YAML loading
where multi-line ERB made the YAML invalid. That led us to reverting
some changes in #33748.
After trying to resolve this a bunch of ways, `@tenderlove` came up with
replacing the ERB values so that we don't need to load the environment
but can still load the YAML.
This change adds a DummyCompiler for ERB that will replace all the
values so we can load the database YAML and create the rake tasks.
Nothing else uses this method, so it's "safe".
DO NOT use this method in your application.
Fixes #35468
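A minimal sketch of the idea, using a hypothetical helper (the actual change lives in the DummyCompiler, not shown here): stub out the ERB tags so the YAML can be parsed without loading the environment.

    require "yaml"

    # Hypothetical helper illustrating the approach, not the Rails API.
    def load_database_yaml_without_erb(path)
      raw = File.read(path)
      # Output tags become a placeholder scalar; other ERB tags are dropped.
      sanitized = raw.gsub(/<%=.*?%>/m, "dummy").gsub(/<%.*?%>/m, "")
      YAML.load(sanitized)
    end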
Multiple foreign keys can be created to the same table, so
`remove_foreign_key :from_table, :to_table` is sometimes ambiguous.
This allows `remove_foreign_key` to remove the selected one on the same
table by giving both `to_table` and `options`.
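For example, with hypothetical table and column names, a table holding two foreign keys to the same target can now disambiguate the removal:

    # transactions has both sender_id and receiver_id pointing at accounts
    remove_foreign_key :transactions, :accounts, column: :sender_id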
Since #23461, all adapters support prepared statements, so clearing the
prepared statements cache is no longer database specific.
Actually, I struggled to identify the cause of a random CI failure in
#23461; it was a missing `@statements.clear` in `clear_cache!`.
This extracts `clear_cache!` into the abstract adapter to ensure the
common concern is handled there.
Adds a method to ActiveRecord allowing records to be inserted in bulk without instantiating ActiveRecord models. This method supports options for handling uniqueness violations by skipping duplicate records or overwriting them in an UPSERT operation.
ActiveRecord already supports bulk-update and bulk-destroy actions that execute SQL UPDATE and DELETE commands directly. It also supports bulk-read actions through `pluck`. It makes sense for it also to support bulk-creation.
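The commit message doesn't name the methods; assuming the `insert_all`/`upsert_all` API this shipped as (model and attributes hypothetical), usage looks roughly like:

    # Bulk insert without instantiating models; rows hitting a unique index are skipped.
    Book.insert_all([{ title: "Rework" }, { title: "Refactoring" }])

    # UPSERT: rows conflicting on the given unique index overwrite the existing records.
    Book.upsert_all([{ isbn: "1", title: "Rework" }], unique_by: :isbn)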
* Add `ActiveRecord::Base.connection.truncate` for the SQLite3 adapter.
  SQLite doesn't support `TRUNCATE TABLE`, but the SQLite3 adapter can support
  `ActiveRecord::Base.connection.truncate` by using `DELETE FROM`.
  `DELETE` without `WHERE` uses "The Truncate Optimization",
  see https://www.sqlite.org/lang_delete.html.
* Add `rails db:seed:replant` that truncates the database tables and loads the seeds.
Closes #34765
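A short usage sketch (table name hypothetical):

    # On SQLite this issues DELETE FROM "posts" with no WHERE clause,
    # which the engine handles via the truncate optimization.
    ActiveRecord::Base.connection.truncate("posts")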
In MySQL, the default collation is case insensitive. Since the
uniqueness validator enforces case sensitive comparison by default, it
frequently causes mismatched collation issues (performance, weird
behavior, etc) for MySQL users.
https://grosser.it/2009/12/11/validates_uniqness_of-mysql-slow/
https://github.com/rails/rails/issues/1399
https://github.com/rails/rails/pull/13465
https://github.com/gitlabhq/gitlabhq/commit/c1dddf8c7d947691729f6d64a8ea768b5c915855
https://github.com/huginn/huginn/pull/1330#discussion_r55152573
I'd like to deprecate enforcing the implicit default, since I have
frequently run into these problems in code reviews.
Note that this change has no effect on the sqlite3, postgresql, and
oracle-enhanced adapters, which are implemented as case sensitive by
default; it only affects the mysql2 adapter (I can take on the work if
the sqlserver adapter will support Rails 6.0).
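Once the implicit default is deprecated, applications can state the comparison explicitly; a sketch with a hypothetical model:

    class User < ApplicationRecord
      # Match MySQL's case-insensitive collation instead of relying on the old default.
      validates :email, uniqueness: { case_sensitive: false }
    end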
Relax table name detection in `from` to allow any extension like INDEX hint
#35360 allows columns to be qualified by the table name if `from` has
the original table name. But that is still too strict. We have a valid
use case for `from` with an INDEX hint (e.g.
`from("comments USE INDEX (PRIMARY)")`).
So I've relaxed the table name detection in `from` to allow any
extension like an INDEX hint.
Fixes #35359.
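Spelled out as a relation (the index hint is the one from the message above; the model and condition are hypothetical):

    # With the relaxed detection, `comments` is still recognized as the
    # relation's table even though the FROM clause carries an index hint.
    Comment.from("comments USE INDEX (PRIMARY)").where(post_id: 1)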
Enable SQL statement cache for `find` on base class as with `find_by`
Related to and follows d333d85254d27cd572e6ecce8ee850c107a4f340.
Add reselect method
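A sketch of the new method (attribute names hypothetical):

    # reselect replaces the previously selected columns in one step,
    # instead of relation.unscope(:select).select(:created_at).
    Post.select(:title, :body).reselect(:created_at)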
`primary key` is integer
It's mentioned everywhere as `ActiveRecord::RecordNotFound`, so to be
coherent with the rest of the documentation I've applied it here.
Also, the doc was saying that if the parameter is an integer it coerces
it, which is the other way around.
`ClassSpecificRelation`
When the middle association doesn't have any records and the inner
association is not an empty scope, the owner will be `nil`, so we can't
try to reset the inverse association.
Add negative scopes for all enum values
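A sketch of what this adds (model and enum values hypothetical):

    class Conversation < ApplicationRecord
      enum status: [:active, :archived]
    end

    # In addition to Conversation.active, a negative scope is now defined:
    Conversation.not_active   # roughly Conversation.where.not(status: :active)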
Caching `find_by` statements on STI subclasses is unsafe, since the
`type IN (?,?,?,?)` part is dynamic, and for now we don't have SQL
statement cache invalidation when an STI subclass is created or removed.
DISTINCT
When using `select` with `'DISTINCT( ... )'`, if you call `size` on a
non-loaded relation it overrides the selected column by passing `:all`,
so it returns a different value than `count`.
This fixes #35214
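A sketch of the mismatch being fixed (column name hypothetical):

    relation = Post.select("DISTINCT(author_id)")
    relation.count  # counts the DISTINCT selection
    relation.size   # before the fix, ignored the manual select on a non-loaded relation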
Support read queries with leading characters while preventing writes
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
* The READ_QUERY regex would consider reads to be writes if they started with
spaces or parens. For example, a UNION query might have parens around each
SELECT - (SELECT ...) UNION (SELECT ...).
* It will now correctly treat these queries as reads.
Also, this improves the argument error message for `limit`, extracts
the code around `type_to_sql` into schema statements, and adds more
tests to exercise it.
Related: cbcdecd, 2a56b2d.
This is a regression caused by cbcdecd.
If query caching is enabled, prepared statement handles are never
re-used: we missed that a query is preprocessed when query caching is
enabled but the `preparable` flag isn't kept.
We should handle that case.
Ensure the `update_all` series doesn't care about optimistic locking
|
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | |
| | | | | | | |
Incrementing the lock version invalidates any other process's optimistic
lock, which is the desired outcome: the record no longer looks the same
as it did when they loaded it.
This is caused by 0ee96d1.
Since #18744, `select` columns aren't qualified by the table name when
using `from`. 0ee96d1 follows that for `pluck` as well.
But people depend on `pluck` columns being qualified even when using
`from`.
So I've fixed `pluck` columns to be qualified when `from` has the
original table name, to keep the behavior as close to before as
possible.
Fixes #35359.
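A sketch of the behavior being restored (model name hypothetical):

    # Because the FROM clause names the original table, the plucked column
    # is qualified again, roughly: SELECT comments.id FROM comments
    Comment.from("comments").pluck(:id)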
The argument of `_update_record` and `_create_record` is
`attribute_names`, which is reserved for being overridden with the
partial-writes attribute names.
https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/persistence.rb#L719
https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/persistence.rb#L737
https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/attribute_methods/dirty.rb#L171
https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/attribute_methods/dirty.rb#L177
The reason extra args don't cause a failure is that the `Timestamp`
module, which is the outermost module around `_update_record`, discards
the extra args.
https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/timestamp.rb#L104
But that is an odd dependency. These methods should not be passed extra
args; they should only be passed attribute names.
Extract `default_uniqueness_comparison` to ease handling mismatched
collation issues
In MySQL, the default collation is case insensitive. Since the
uniqueness validator enforces case sensitive comparison by default, it
frequently causes mismatched collation issues (performance, weird
behavior, etc) for MySQL users.
https://grosser.it/2009/12/11/validates_uniqness_of-mysql-slow/
https://github.com/rails/rails/issues/1399
https://github.com/rails/rails/pull/13465
https://github.com/gitlabhq/gitlabhq/commit/c1dddf8c7d947691729f6d64a8ea768b5c915855
https://github.com/huginn/huginn/pull/1330#discussion_r55152573
This extracts `default_uniqueness_comparison` to make it easier to
handle the mismatched collation issues on the connection.
Revert "Fix lint `ShadowingOuterLocalVariable`"
This reverts commit 38bd45a48992b500478a82d56d31468a322937a8.
Change of variable name
Fix lint `ShadowingOuterLocalVariable`
Reduce unused allocations when casting UUIDs for Postgres
Using the subscript method `#[]` on a string has several overloads and
rather complex implementation. One of the overloads is the capability to
accept a regular expression and then run a match, then return the
receiver (if it matched) or one of the groups from the MatchData.
The function of the `UUID#cast` method is to cast a UUID to a type and
format acceptable to postgres. Naturally UUIDs are supposed to be
strings of a certain format, but it had been determined that it was
not ideal for the framework to send just any old string to Postgres and
allow the engine to complain when "foobar" or "" was sent, being
obviously of the wrong format for a valid UUID. Therefore this code was
written to facilitate the checking, and if it were not of the correct
format, a `nil` would be returned as is conventional in Rails.
Now, the subscript method will allocate one or more strings on a match
and return one of them, based on the index parameter. However, there
is no need for a new string, as a UUID of the correct format is already
such, and so long as the format was verified then the string supplied is
adequate for consumption by the database.
The subscript method also creates a MatchData object which will never be
used, and so must eventually be garbage collected.
Garbage collection indeed. This innocuous method tends to be called
quite a lot, for example if the primary key of a table is a uuid, then
this method will be called. If the foreign key of a relation is a UUID,
once again this method is called. If that foreign key is belonging to
a has_many relationship with dozens of objects, then again dozens of
UUIDs shall be cast to a dup of themselves, and spawn dozens of
MatchData objects, and so on.
So, for users that:
* Use UUIDs as primary keys
* Use Postgres
* Operate on collections of objects
This accomplishes a significant savings in total allocations, and may
save many garbage collections.
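A sketch of the kind of change described (the regex and method shape are illustrative, not the exact Rails code):

    ACCEPTABLE_UUID = /\A\{?\h{8}-\h{4}-\h{4}-\h{4}-\h{12}\}?\z/

    # Before: String#[] allocates a MatchData and a new string on every cast.
    def cast(value)
      value.to_s[ACCEPTABLE_UUID, 0]
    end

    # After: verify the format with match? (no MatchData) and return the original string.
    def cast(value)
      value = value.to_s
      value if value.match?(ACCEPTABLE_UUID)
    end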
Replaced usage of where.delete/destroy_all with delete/destroy_by
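For illustration (model and attribute hypothetical):

    Post.where(spam: true).destroy_all   # before
    Post.destroy_by(spam: true)          # after: the new shorthand
    Post.delete_by(spam: true)           # likewise for delete_all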
Fix reset of the source association when through association is loaded
|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | | |
The special case happens when the through association has a custom
scope that is applied to the source association when loading. In this
case, the source association would need to be reset after the main
association is loaded. See tests.