If an sqlite3 table contains a decimal column behind columns with a collation
definition, then parsing the collation of all preceding columns will fail --
the collation is silently dropped.
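
A minimal sketch of a schema that triggers the bug (hypothetical table and column names):

```ruby
create_table :items do |t|
  t.string  :name, collation: "NOCASE"      # this column's collation was silently lost...
  t.decimal :price, precision: 10, scale: 2 # ...because a decimal column follows it
end
```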
guigs/fix-invalid-schema-when-pk-column-has-comment
Fix invalid schema dump when primary key column has a comment
Before this fix, the dumper would either generate an invalid schema, passing the `comment` option twice to `create_table`, or move the comment from the primary key column to the table if the table had no comment when the dump was generated.

Now a comment on the primary key column is ignored (not present in the schema dump).

Fixes #29966
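
A sketch of the failure mode, with hypothetical names; the invalid dump in the comments is illustrative:

```ruby
# Both the table and its primary key column carry a comment:
create_table :posts, comment: "table comment" do |t|
  t.string :title
end
change_column_comment :posts, :id, "pk comment"

# Before the fix, schema.rb could be dumped with a duplicated option:
#   create_table "posts", comment: "pk comment", comment: "table comment" do |t|
# After the fix, the primary key comment is simply omitted from the dump.
```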
`create_table` and `t.column` accept the same named options (e.g.
`:comment`, `:primary_key`), so table options should be kept separate
from column options.

Related to #36373.
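
For illustration, the same option name means different things at the two levels:

```ruby
create_table :posts, comment: "a table-level comment" do |t|   # table option
  t.column :title, :string, comment: "a column-level comment"  # column option
end
```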
Prior to 3e2e8eeb9ea552bd4782538cf9348455f3d0e14a, the Reaper thread
would hold a reference to connection pools indefinitely, preventing the
connection pools from being garbage collected and also leaking the
thread.

Since 3e2e8eeb9ea552bd4782538cf9348455f3d0e14a there is only one Reaper
thread for all pools; however, all pools are still stored in a class
variable, preventing them from being garbage collected.

This commit instead holds references to the pools through a WeakRef. This
way, connection pools that are referenced elsewhere will still be reaped,
and any others can be garbage collected.

I don't love resorting to WeakRef to solve this, but I believe it's the
simplest way to accomplish the desired behaviour.
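
A simplified sketch of the approach (not the actual Rails code): the registry holds `WeakRef`s, so registration alone no longer keeps a pool alive:

```ruby
require "weakref"

class Reaper
  @pools = []
  @mutex = Mutex.new

  class << self
    # Register a pool through a WeakRef so this registry never prevents
    # the pool itself from being garbage collected.
    def register_pool(pool)
      @mutex.synchronize { @pools << WeakRef.new(pool) }
    end

    # Reap pools that are still referenced elsewhere; drop references
    # whose pools have already been collected.
    def reap_all
      @mutex.synchronize do
        @pools.select! do |ref|
          begin
            ref.reap # delegates to the underlying pool while it is alive
            true
          rescue WeakRef::RefError
            false
          end
        end
      end
    end
  end
end
```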
Almost all database statement methods except `explain` were moved into
`DatabaseStatements` in #35922. This moves the last remaining one.
Use a single thread for all ConnectionPool Reapers
Previously we would spawn one thread per connection pool, which ends up
being wasteful for apps with several connection pools.
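
A sketch of the shared-thread shape (hypothetical class, not the Rails implementation):

```ruby
class SharedReaper
  def initialize(frequency)
    @frequency = frequency
    @pools = []
    @mutex = Mutex.new
    @thread = nil
  end

  # All pools share one lazily started background thread,
  # instead of spawning one thread per pool.
  def register(pool)
    @mutex.synchronize do
      @pools << pool
      @thread ||= Thread.new do
        loop do
          sleep @frequency
          @mutex.synchronize { @pools.each(&:reap) }
        end
      end
    end
  end
end
```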
Since d1a74c1e012ed96f7179e53b9190b7da0a369e11, Active Record requires
SQLite version 3.8.0 or greater, so savepoints and partial indexes are
always available.
That commit also added a runtime version check, so we can remove the
minimum version requirement from the internal adapter documentation.
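
The runtime check is along these lines (a sketch, not the exact Rails code; `Gem::Version` just provides a correct version comparison):

```ruby
MINIMUM_SQLITE_VERSION = Gem::Version.new("3.8.0")

def check_sqlite_version!(version_string)
  if Gem::Version.new(version_string) < MINIMUM_SQLITE_VERSION
    raise "SQLite 3.8.0 or newer is required (found #{version_string})"
  end
end

check_sqlite_version!("3.7.17") # => raises
check_sqlite_version!("3.24.0") # => passes
```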
This commit adds the "TRANSACTION" label to savepoint, commit, and rollback
statements, because none of the savepoint statements were removed by #36153:
they are not "SCHEMA" statements.

Although only the savepoint statements strictly needed the "TRANSACTION"
label, I think all transaction-related methods should add it.

Follow-up to #36153
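
A sketch of the change; `execute`'s second argument is the label shown in the query logs:

```ruby
# Before: the statement appears unlabeled in the logs.
def create_savepoint(name = current_savepoint_name)
  execute("SAVEPOINT #{name}")
end

# After: labeled like other transaction statements, e.g.
#   TRANSACTION (0.1ms)  SAVEPOINT active_record_1
def create_savepoint(name = current_savepoint_name)
  execute("SAVEPOINT #{name}", "TRANSACTION")
end
```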
Attempt `committed!`/`rolledback!` on all records enrolled in the transaction

Currently, `committed!`/`rolledback!` will only be attempted for the
first record enrolled in the transaction, which causes some problematic
behaviors.

The first problem: `clear_transaction_record_state` won't be called for
any record except the first enrolled one, even when the transaction is
finalized. This means that de-duplicated records in the transaction won't
reflect the latest state (e.g. record state won't be rolled back).

The second problem: the enrollment order is not always the order in which
the actions actually happened, and the first enrolled record may not have
performed any action (e.g. `destroy` has already succeeded on another
record during `before_destroy`), so it will fail to fire any transactional
callbacks.

To avoid both problems, we should attempt `committed!`/`rolledback!` on
all records enrolled in the transaction.
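
A simplified sketch of the direction of the fix (the real implementation also deals with callback flags and error handling):

```ruby
def commit_records(records)
  records.uniq.each do |record|
    # Previously only records.first was finalized; now every enrolled
    # record clears its transaction state and fires its callbacks.
    record.committed!
  end
end
```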
Document algorithm: concurrent option for PostgreSQL [ci skip]
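
The documented pattern looks like this; `CREATE INDEX CONCURRENTLY` cannot run inside a transaction, hence `disable_ddl_transaction!`:

```ruby
class AddIndexToUsersEmail < ActiveRecord::Migration[6.0]
  disable_ddl_transaction!

  def change
    add_index :users, :email, unique: true, algorithm: :concurrently
  end
end
```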
Cache full MySQL version in schema cache
* Remove `AbstractMysqlAdapter::Version`, since `full_version_string` will always be set.
* Remove `:nodoc:` from private methods, because private methods are not exposed in the docs.
* The database version is cached in all the adapters, but this didn't include the full MySQL version. Anything that uses the full MySQL version would need to query the database for it, even when using the schema cache.
* Now the full MySQL version will be cached in the schema cache via the Version object (sketched below).
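
A simplified sketch of such a version value object (hypothetical, trimmed down); carrying the full string lets full-version checks hit the cache instead of the server:

```ruby
class Version
  include Comparable

  attr_reader :full_version_string

  def initialize(version_string, full_version_string = nil)
    @segments = version_string.scan(/\d+/).map(&:to_i)
    @full_version_string = full_version_string
  end

  def <=>(other)
    segments <=> other.segments
  end

  protected

  attr_reader :segments
end

v = Version.new("5.7.25", "5.7.25-0ubuntu0.18.04.2-log")
v.full_version_string # => "5.7.25-0ubuntu0.18.04.2-log"
```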
We can revert migrations that use `change_column_comment` or
`change_table_comment` on current master. However, the results are not
what we expect: the comments keep their new values.

This change passes the previous comment to these methods, in the same way
`change_column_default` receives the previous default.
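
With the previous value supplied, the migration is reversible, mirroring `change_column_default`:

```ruby
def change
  change_table_comment :posts, from: "old table comment", to: "new table comment"
  change_column_comment :posts, :state, from: "old comment", to: "new comment"
end
```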
Running this migration on mysql at current master fails
because `add_references_for_alter` is missing:

```
change_table :users, bulk: true do |t|
  t.references :article
end
```

The same is true for the postgresql adapter, but its `bulk_alter_table`
implementation can fall back to non-bulk statements in that case.
Adopting postgresql's implementation is desirable to prevent unknown
failures like this.
* remove useless `@type_metadata` and `@array`
* move the compatibility code (for array) into column
* etc.
michaelglass/move-sqlite-3-database-statements-into-database-statements
make SQLite3 `last_inserted_id` private and organize DatabaseStatement methods
Move sqlite3 database statements into `Sqlite3::DatabaseStatements`, and make those that are private in `Abstract::DatabaseStatements` private for sqlite3
* Add a `type` option example to the documentation [ci skip]

It was hard for me, looking at https://api.rubyonrails.org/, to find that there was a `type` option. Adding this to the docs should be helpful, especially for applications with old tables where the references are still `integer`, not `bigint`.

* Update activerecord/lib/active_record/connection_adapters/abstract/schema_definitions.rb

Co-Authored-By: robertomiranda <rjmaltamar@gmail.com>
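
The kind of example being added; `:type` overrides the default `bigint` for the reference column:

```ruby
create_table :taggings do |t|
  # The legacy `tags` table uses 4-byte integer ids, so match it here:
  t.references :tag, type: :integer
end
```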
Refer to #35875.
None of the adapters (sqlite3, mysql2, postgresql, oracle-enhanced, sqlserver)
use `sequence_name` in `sql_for_insert`.

https://github.com/rsim/oracle-enhanced/blob/4e0db270a93859c9713fd079dbb315b9fe550e57/lib/active_record/connection_adapters/oracle_enhanced/database_statements.rb#L79-L85
https://github.com/rails-sqlserver/activerecord-sqlserver-adapter/blob/959fe8f49744460b876bc205c73259f8d4f37629/lib/active_record/connection_adapters/sqlserver/database_statements.rb#L226-L249

It can be handled in `exec_insert`, as the postgresql adapter does, if we want.
Improve == and hash methods on various schema cache structs to be allocation free.
Improve `==` and `hash` methods on various schema cache structs to be allocation free.

The previous implementation would allocate two arrays per comparison.
I tried relying on Struct, but it allocates one Hash inside `Struct#hash`.
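
The shape of the change, on a hypothetical two-member struct-like class:

```ruby
class Column
  attr_reader :name, :type

  def initialize(name, type)
    @name = name
    @type = type
  end

  # Before (sketch): two arrays were allocated per comparison:
  #   [name, type] == [other.name, other.type]
  # After: member-by-member comparison, allocation free.
  def ==(other)
    other.is_a?(Column) && name == other.name && type == other.type
  end
  alias eql? ==

  # Combine member hashes without building an intermediate array.
  def hash
    Column.hash ^ name.hash ^ type.hash
  end
end
```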
Bring back postgresql_version as an alias.
Wrap MySQL count of deleted rows in a lock block to avoid a conflict in test
Raise `ArgumentError` for invalid `:limit` and `:precision`, as with other options
Raise `ArgumentError` for invalid `:limit` and `:precision`, as with other options

When I added the new `:size` option in #35071, I found that invalid
`:limit` and `:precision` values raise `ActiveRecordError`, unlike other
invalid options.

That makes it hard to distinguish argument errors from statement-invalid
errors, since `StatementInvalid` is a subclass of `ActiveRecordError`:
https://github.com/rails/rails/blob/c9e4c848eeeb8999b778fa1ae52185ca5537fffe/activerecord/lib/active_record/errors.rb#L103

```ruby
begin
  # execute any migration
rescue ActiveRecord::StatementInvalid
  # statement invalid
rescue ActiveRecord::ActiveRecordError, ArgumentError
  # an `ActiveRecordError` other than `StatementInvalid` is probably an argument error
end
```

I'd say this is an inconsistency worth fixing.

Before:

```ruby
add_column :items, :attr1, :binary,   size: 10      # => ArgumentError
add_column :items, :attr2, :decimal,  scale: 10     # => ArgumentError
add_column :items, :attr3, :integer,  limit: 10     # => ActiveRecordError
add_column :items, :attr4, :datetime, precision: 10 # => ActiveRecordError
```

After:

```ruby
add_column :items, :attr1, :binary,   size: 10      # => ArgumentError
add_column :items, :attr2, :decimal,  scale: 10     # => ArgumentError
add_column :items, :attr3, :integer,  limit: 10     # => ArgumentError
add_column :items, :attr4, :datetime, precision: 10 # => ArgumentError
```
Exclude `table_name` from column objects
`table_name` was added to column objects in #23677 to correctly detect
whether a column is a serial column.

We can do that detection before initializing the column object; that
makes column objects smaller and probably helps column object
de-duplication.
Fix `insert_all` when a value for an AUTONUMBER column is not specified

If `id` is an `AUTONUMBER` column, then my former strategy of assigning `no_op_column` to an arbitrary column would fail in this specific scenario:

1. `model.columns.first` is an AUTONUMBER column
2. `model.columns.first` is not assigned in the insert attributes

I added three tests: the first covers the actual error; the second documents that this _isn't_ a problem when a value is given for the AUTONUMBER column; and the third ensures that this no-op strategy isn't secretly doing an UPSERT.
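
The failing scenario in `insert_all` terms (hypothetical model; `id` is the auto-increment column and is absent from the attribute lists):

```ruby
# books.id is the first column and auto-incremented, and no row supplies it:
Book.insert_all([
  { title: "Rework",      author_id: 1 },
  { title: "Refactoring", author_id: 2 }
])
```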
Follow-up to c9e4c848eeeb8999b778fa1ae52185ca5537fffe.
Upgrade sqlite3 to 1.4.0 to fix the performance regression for fixture loading

d8d6bd5 made fixture loading use bulk statements via `execute_batch` in
the sqlite3 adapter. But `execute_batch` is slower, and it caused a
performance regression for fixture loading.

sqlite3 1.4.0 has a new batch method, `execute_batch2`. I've confirmed
that `execute_batch2` is dramatically faster than `execute_batch`, so I
think it is worth upgrading sqlite3 to 1.4.0 to use that method.
Before:
```
% ARCONN=sqlite3 bundle exec ruby -w -Itest test/cases/associations/eager_test.rb -n test_eager_loading_too_may_ids
Using sqlite3
Run options: -n test_eager_loading_too_may_ids --seed 35790
# Running:
.
Finished in 202.437406s, 0.0049 runs/s, 0.0049 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
ARCONN=sqlite3 bundle exec ruby -w -Itest -n test_eager_loading_too_may_ids 142.57s user 60.83s system 98% cpu 3:27.08 total
```
After:
```
% ARCONN=sqlite3 bundle exec ruby -w -Itest test/cases/associations/eager_test.rb -n test_eager_loading_too_may_ids
Using sqlite3
Run options: -n test_eager_loading_too_may_ids --seed 16649
# Running:
.
Finished in 8.471032s, 0.1180 runs/s, 0.1180 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
ARCONN=sqlite3 bundle exec ruby -w -Itest -n test_eager_loading_too_may_ids 10.71s user 1.36s system 95% cpu 12.672 total
```
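
A sketch of preferring the faster API when it is available (`execute_batch2` ships with sqlite3 1.4.0):

```ruby
require "sqlite3"

db = SQLite3::Database.new(":memory:")
db.execute("CREATE TABLE fixtures (id INTEGER PRIMARY KEY, name TEXT)")

sql = <<~SQL
  INSERT INTO fixtures (name) VALUES ('a');
  INSERT INTO fixtures (name) VALUES ('b');
SQL

# Use execute_batch2 on sqlite3 >= 1.4.0, fall back to execute_batch.
if db.respond_to?(:execute_batch2)
  db.execute_batch2(sql)
else
  db.execute_batch(sql)
end
```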
Cache database version in schema cache
* The database version will get cached in the schema cache file during the schema cache dump. When the database version check happens, the version will be pulled from the schema cache, which avoids querying the database for the version.
* If the schema cache file doesn't exist, we'll query the database for the version and cache it on the schema cache object.
* To facilitate this change, all connection adapters now implement `#get_database_version` and `#database_version`; `#database_version` returns the value from the schema cache (see the sketch below).
* To take advantage of the cached database version, the database version check now happens after the schema cache is set on the connection in the connection pool.
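
A sketch of the adapter/schema-cache split described above (method names from the commit message, bodies simplified):

```ruby
class SchemaCache
  def initialize(connection)
    @connection = connection
  end

  # Memoized: queries the adapter only on a cache miss; the value is
  # carried in the schema cache dump alongside tables and columns.
  def database_version
    @database_version ||= @connection.get_database_version
  end
end

class AbstractAdapter
  attr_accessor :schema_cache

  # Reads go through the cache...
  def database_version
    schema_cache.database_version
  end

  # ...and only this adapter-specific hook actually hits the database,
  # e.g. SELECT sqlite_version() or SELECT VERSION().
  def get_database_version
    raise NotImplementedError
  end
end
```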
* s/Postgres/PostgreSQL/
* s/MYSQL/MySQL/, s/Mysql/MySQL/
* s/Sqlite/SQLite/

Replaces all occurrences newly added after 6089b31.
That is probably useful for other features as well.
Internal usage of the method as a public method was removed in #29623.
* Remove the redundant `table_names.empty?` check
* Return early in `truncate_tables`, since it is already deeply nested
* Move `truncate_tables` out from between `exec_delete` and `exec_update`