| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before:
```
(16.4ms) TRUNCATE TABLE `author_addresses`
(20.5ms) TRUNCATE TABLE `authors`
(19.4ms) TRUNCATE TABLE `posts`
```
After:
```
Truncate Tables (19.5ms) TRUNCATE TABLE `author_addresses`;
TRUNCATE TABLE `authors`;
TRUNCATE TABLE `posts`
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The transaction used to restore fixtures is an implementation detail
that should be abstracted away. Ideally a test should behave the same
whether or not transactional fixtures are enabled.
However, since transactions have been made lazy, the fixture
transaction started leaking into test cases. e.g. consider the
following (oversimplified) test:
```ruby
class SQLSubscriber
  attr_accessor :sql

  def initialize
    @sql = []
  end

  def call(*, event)
    sql << event[:sql]
  end
end

subscriber = SQLSubscriber.new
ActiveSupport::Notifications.subscribe("sql.active_record", subscriber)

User.connection.execute('SELECT 1', 'Generic name')

assert_equal ['SELECT 1'], subscriber.sql
```
On Rails 6 this starts to break because the `sql` array ends up as `['BEGIN', 'SELECT 1']`.
Several things are wrong here:
- That transaction is not generated by the tested code, so it shouldn't be visible.
- The transaction is not even closed yet, which again doesn't reflect reality.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In an application that has a primary and a replica database, data
inserted through the primary connection cannot be read through the
replica connection.
In a test like this:
```ruby
test "creating a home and then reading it" do
home = Home.create!(owner: "eileencodes")
ActiveRecord::Base.connected_to(role: :default) do
assert_equal 3, Home.count
end
ActiveRecord::Base.connected_to(role: :readonly) do
assert_equal 3, Home.count
end
end
```
The home inserted at the beginning of the test can't be read by the
replica database because when the test starts a transaction is
opened by `setup_fixtures`. That transaction remains open for the
remainder of the test, until we are done and run `teardown_fixtures`.
Because the data isn't actually committed to the database, the replica
database cannot see the data insertion.
I considered a couple of ways to fix this. I could have written a
database-cleaner-like class that would allow the data to be committed
and then clean up that data afterwards. But database cleaners can make
the database slow, and the point of the fixtures is to be fast.
At GitHub we solve this by sharing the connection pool for the replicas
with the primary (writing) connection. This is a bit hacky but it works.
Additionally, since we define `replica? || preventing_writes?` as the
code that blocks writes to the database, this will still prevent writing
on the replica / readonly connection. So we get all the behavior of
multiple connections for the same database without slowing down the
database.
In this PR the code loops through the handlers. If a handler doesn't
match the default handler, it retrieves the connection pool from the
default / writing handler and assigns the reading handler's connections
to that pool.
Then in `enlist_fixture_connections` we map all the connections for the
default handler, because all the connections are now available on that
handler, so we don't need to loop through them again.
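A rough sketch of that loop, with method names only approximating the real connection handler internals (illustrative, not the exact patch):
```ruby
# Hypothetical sketch: point every non-writing handler at the writing
# handler's pool so reads and writes in a test share one connection.
writing_handler = ActiveRecord::Base.connection_handler

ActiveRecord::Base.connection_handlers.each_value do |handler|
  next if handler == writing_handler

  handler.connection_pool_list.each do |pool|
    name = pool.spec.name
    writing_pool = writing_handler.retrieve_connection_pool(name)
    # Reuse the writing pool for the reading handler (names illustrative).
    handler.send(:owner_to_pool)[name] = writing_pool
  end
end
```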
The test uses a temporary connection pool so we can test this with
sqlite3_mem. This adapter doesn't behave the same as the others and
after looking over how the query cache test works I think this is the
most correct. The issue comes when calling `connects_to` because that
establishes new connections and confuses the sqlite3_mem adapter. I'm
not entirely sure why, but I wanted to make sure we tested all adapters
for this change, and I checked that it wasn't the shared connection code
that was causing issues - it was the `connects_to` code.
|
|
|
|
|
|
|
|
|
|
| |
Currently we sometimes find a redundant begin block in code review
(e.g. https://github.com/rails/rails/pull/33604#discussion_r209784205).
I'd like to enable the `Style/RedundantBegin` cop to avoid that. Since
rescue/else/ensure are allowed inside do/end blocks in Ruby 2.5
(https://bugs.ruby-lang.org/issues/12906), we'll probably run into
that situation more often than before.
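For example (illustrative code; `with_retries` and `fetch_remote_config` are made-up helpers):
```ruby
# Redundant begin inside a method body:
def read_config
  begin
    File.read("config.yml")
  rescue Errno::ENOENT
    "{}"
  end
end

# The begin/end can simply be dropped, and since Ruby 2.5 the same
# works directly inside do/end blocks too:
def read_config
  File.read("config.yml")
rescue Errno::ENOENT
  "{}"
end

with_retries do
  fetch_remote_config
rescue Timeout::Error
  nil
end
```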
|
|
|
|
|
|
|
|
|
|
| |
The `@connection` has not been used since ee5ab22.
Even originally the `@connection` was redundant: it was only used in
`timestamp_column_names`, which is only called if `model_class` is given,
and if `model_class` is given, the `@connection` is always
`model_class.connection`.
|
|
|
|
|
|
|
|
|
| |
When loading fixtures, Ruby 1.9's hash key ordering means that HABTM
join table rows are always loaded before the parent table rows,
violating foreign key constraints that may be in place. This very
simple change ensures that the parent table's key appears first in the
hash. Violations may still occur if fixtures are loaded in the wrong
order, but those cases can be avoided, unlike this one.
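A toy illustration of the ordering guarantee (table names are just examples):
```ruby
# Ruby hashes (1.9+) preserve insertion order, so putting the parent table's
# key into the fixtures hash before the HABTM join table's key means the
# parent rows are inserted first and the join rows' foreign keys are valid.
fixtures = {}
fixtures["authors"]       = [{ "id" => 1, "name" => "David" }]
fixtures["authors_books"] = [{ "author_id" => 1, "book_id" => 1 }]

fixtures.each_key { |table| puts table }
# => authors
#    authors_books
```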
|
|
|
|
| |
[Gannon McGibbon + Max Albrecht]
|
| |
|
|\
| |
| | |
Use MethodCallAssertions instead of Mocha#expects
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Many calls to `Mocha#expects` preceded the introduction of
`ActiveSupport::Testing::MethodCallAssertions` in 53f64c0fb,
and many are simple to replace with `MethodCallAssertions`.
This patch makes all these simple replacements.
Step 5 in #33162
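A typical replacement looks roughly like this (illustrative test, not an actual hunk from the patch):
```ruby
require "active_support/testing/method_call_assertions"

class FixtureInsertionTest < ActiveSupport::TestCase
  include ActiveSupport::Testing::MethodCallAssertions

  test "execute is called with the expected SQL" do
    connection = ActiveRecord::Base.connection

    # Before, with Mocha:
    #   connection.expects(:execute).with("SELECT 1")
    # After, with MethodCallAssertions:
    assert_called_with(connection, :execute, ["SELECT 1"]) do
      connection.execute("SELECT 1")
    end
  end
end
```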
|
|/
|
|
|
|
| |
#33363 has two regressions. The first is that `insert_fixtures_set`
fails if the connection flags are given as an array. The second is that the
connection flags are not restored if `set_server_option` is not supported.
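A sketch of the intended behaviour using the mysql2 client API directly (`client` and `fixture_statements` are placeholders, not the Rails internals):
```ruby
# Turn multi-statement support on only around the fixture insert and always
# switch it back off, however the original connection flags were given
# (string, integer, or array).
client.set_server_option(Mysql2::Client::OPTION_MULTI_STATEMENTS_ON)
begin
  client.query(fixture_statements.join(";\n"))
  client.store_result while client.next_result
ensure
  client.set_server_option(Mysql2::Client::OPTION_MULTI_STATEMENTS_OFF)
end
```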
|
|\
| |
| | |
use set_server_option if possible
|
| | |
|
| |
| |
| |
| | |
Missed these in preparing #33337
|
|/
|
|
| |
Step 4 in #33162
|
|
|
|
|
|
|
|
|
|
| |
Remove the `false` return value from the stubbed `lock_thread=` methods,
since there is no need for it.
Remove the unnecessary `true` return value from the stubbed `drop_database` method.
Follow-up to #33309.
Related to #33162, #33326.
|
|
|
|
|
|
|
| |
While preparing this I realised that some stubbed return values
serve no purpose, so this patch drops those as well.
Step 3 in #33162
|
|
|
|
| |
Follow up of #32605.
|
|
|
|
|
| |
This autocorrects the violations after adding a custom cop in
3305c78dcd.
|
| |
|
| |
|
|
|
|
|
| |
Some places we can't remove because Ruby still doesn't have a method
equivalent to strip_heredoc that can be called on an existing string.
|
| |
|
| |
|
|
|
|
|
|
| |
Since #29504, the mysql2 adapter has lost the ability to insert a zero value
into a primary key because `NO_AUTO_VALUE_ON_ZERO` is forcibly disabled.
That mode is disabled so that `DEFAULT` can be used on auto-increment columns,
but we can use `NULL` instead in that case.
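A rough illustration of the idea, assuming `NO_AUTO_VALUE_ON_ZERO` is in effect for the insert (table and column names are just examples):
```ruby
conn = ActiveRecord::Base.connection
# With NO_AUTO_VALUE_ON_ZERO, an explicit 0 is stored as-is, while NULL still
# asks MySQL for the next auto-increment value, so NULL can stand in for
# DEFAULT in the generated bulk insert.
conn.execute(<<~SQL)
  INSERT INTO aircraft (id, name) VALUES (0, 'zero id kept'), (NULL, 'auto-generated id')
SQL
```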
|
|
|
|
| |
Since #31422, `insert_fixtures` is deprecated.
|
|\
| |
| | |
Build a multi-statement query when inserting fixtures
|
| |
| |
| |
| |
| | |
- MySQL adds a 2-byte margin to the statement, so given a `max_allowed_packet` of 1024 bytes, a 1024-byte fixture will not be inserted (MySQL throws an error)
- Prevent this by subtracting 2 bytes from the `max_allowed_packet` when comparing it with the actual statement size (see the sketch below)
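A sketch of that check (names are made up, not the Rails internals):
```ruby
# Leave room for the 2-byte margin MySQL adds to each statement when testing
# whether a fixture statement still fits into max_allowed_packet.
MYSQL_STATEMENT_MARGIN = 2

def fits_in_packet?(statement, max_allowed_packet)
  statement.bytesize <= max_allowed_packet - MYSQL_STATEMENT_MARGIN
end

fits_in_packet?("INSERT INTO `authors` (`id`) VALUES (1)", 1024) # => true
```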
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
- The `insert_fixtures` method can be optimized by building a single multi-statement query for all fixtures sharing the same connection instead of doing one query per table
- The previous code bulk-inserted fixtures one table at a time, making X queries for X fixture files
- This patch builds a single **multi-statement query** for all tables. Given a set of 3 fixtures (authors, dogs, computers):
```ruby
# before
%w(authors dogs computers).each do |table|
  sql = build_sql(table)
  connection.query(sql)
end

# after
sql = build_sql(authors, dogs, computers)
connection.query(sql)
```
- `insert_fixtures` is now deprecated; `insert_fixtures_set` is the new way to go, with the performance improvement
- My tests were done with an app having more than 700 fixtures; inserting all of them used to take around 15 seconds. Using a single multi-statement query, it took on average 8 seconds
- In order for a multi-statement query to be executed, mysql needs to be connected with the `MULTI_STATEMENTS` [flag](https://dev.mysql.com/doc/refman/5.7/en/c-api-multiple-queries.html), which is done before inserting the fixtures by reconnecting to the database with the flag declared. Reconnecting to the database creates some caveats:
  1. We lose all open transactions. In the original code, when inserting fixtures, a transaction is opened, multiple delete statements are [executed](https://github.com/rails/rails/blob/a681eaf22955734c142609961a6d71746cfa0583/activerecord/lib/active_record/fixtures.rb#L566) and finally the fixtures are inserted. The problem with this patch is that we need to open the transaction only after we reconnect to the DB; otherwise reconnecting drops the open transaction, the delete statements are never committed, and inserting the fixtures fails with a primary key duplicate exception...
     - To fix this, the transaction is now opened directly inside the `insert_fixtures` method, right after we reconnect to the db
     - As a consequence, since the transaction is opened inside the `insert_fixtures` method, the DELETE statements need to be executed there as well
  2. The same problem happens for `disable_referential_integrity`: since we reconnect, `FOREIGN_KEY_CHECKS` is reset to its original value
     - Same solution as 1., `disable_referential_integrity` can be called after we reconnect to the database
  3. When the multi-statement query is executed, no other queries can be performed until we iterate over the set of results, otherwise mysql throws a "Commands out of sync" error ([Ref](https://dev.mysql.com/doc/refman/5.7/en/commands-out-of-sync.html)); see the sketch after this list
     - We iterate over the result sets until `mysql_client.next_result` returns false ([Ref](https://github.com/brianmario/mysql2#multiple-result-sets))
- Removed the `active_record.sql "Fixture delete"` notification; the delete statements are now part of the INSERT's notification
- On mysql the `max_allowed_packet` is looked up:
  1. Before executing the multi-statement query, we check the packet length of each statement; if a packet is bigger than the `max_allowed_packet` config, an `ActiveRecordError` is raised
  2. Otherwise we concatenate the current sql statement onto the previous ones, and so on, as long as the packet stays `< max_allowed_packet`
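A condensed sketch of the flow described above, using the mysql2 client directly (illustrative only; the database name and `statements` stand in for the real configuration and generated DELETE/INSERT statements):
```ruby
require "mysql2"

client = Mysql2::Client.new(
  database: "app_test",
  flags: Mysql2::Client::MULTI_STATEMENTS # required for multi-statement queries
)

statements = [
  "DELETE FROM authors",
  "INSERT INTO authors (id, name) VALUES (1, 'David'), (2, 'Mary')",
  "INSERT INTO posts (id, author_id, title) VALUES (1, 1, 'Welcome')"
]

client.query(statements.join(";\n"))

# Drain every remaining result set, otherwise the next query raises
# "Commands out of sync" (see the mysql2 README on multiple result sets).
client.store_result while client.next_result
```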
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
- The mysql `NO_AUTO_VALUE_ON_ZERO` mode should be disabled when inserting fixtures in bulk; this PR adds a test to make sure we don't remove that by mistake
- If we leave this mode enabled, a statement like the one below wouldn't work and a `Duplicate entry '0' for key 'PRIMARY'` error would be raised. That's because `DEFAULT` on an auto_increment column would return 0
```sql
INSERT INTO `aircraft` (`id`, `name`, `wheels_count`) VALUES (DEFAULT, 'first', 2), (DEFAULT, 'second', 3)
```
|
|/
|
|
| |
Follow up of #31432.
|
| |
|
|\ |
|
| | |
|
|\| |
|
| |
| |
| |
| |
| | |
This reverts commit 3420a14590c0e6915d8b6c242887f74adb4120f9, reversing
changes made to afb66a5a598ce4ac74ad84b125a5abf046dcf5aa.
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
Improves the performance from O(n) to O(1).
Previously it would require 50 queries to
insert 50 fixtures. Now it takes only one query.
Disabled on sqlite, which doesn't support multiple inserts.
|
|\ \
| | |
| | | |
Fix random minitest error: database_statements_test
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The primary key sequence is reset for each test in
FixturesResetPkSequenceTest.
The state in some of the FixturesResetPkSequenceTest tests is leaking,
causing failures in others. Using a seed of `48104`,
`FixturesResetPkSequenceTest#test_resets_to_min_pk_with_specified_pk_and_sequence`
runs before the `DatabaseStatementsTest` tests, and
the tests fail with duplicate primary key errors:
```
Run options: --seed 48104 -n
"/^(?:FixturesResetPkSequenceTest#(?:test_resets_to_min_pk_with_specified_pk_and_sequence)|DatabaseStatementsTest#(?:test_create_should_return_the_inserted_id|test_exec_insert|test_insert_should_return_the_inserted_id))$/"
.EEE
1) Error:
DatabaseStatementsTest#test_exec_insert:
ActiveRecord::RecordNotUnique: PG::UniqueViolation: ERROR: duplicate
key value violates unique constraint "accounts_pkey"
DETAIL: Key (id)=(2) already exists.
2) Error:
DatabaseStatementsTest#test_create_should_return_the_inserted_id:
ActiveRecord::RecordNotUnique: PG::UniqueViolation: ERROR: duplicate
key value violates unique constraint "accounts_pkey"
DETAIL: Key (id)=(3) already exists.
3) Error:
DatabaseStatementsTest#test_insert_should_return_the_inserted_id:
ActiveRecord::RecordNotUnique: PG::UniqueViolation: ERROR: duplicate
key value violates unique constraint "accounts_pkey"
DETAIL: Key (id)=(4) already exists.
```
|
|/
|
|
|
|
|
| |
Ensure that the fixtures are properly loaded for FoxyFixturesTest
When tests are randomized, FoxyFixturesTest often fails due to unloaded
fixtures.
|
|
|
|
|
| |
This change was reverted in eac6f369, but it is needed for data integrity.
See #25328.
|
|
|
|
|
|
|
|
|
| |
mtsmfm/disable-referential-integrity-without-superuser-privilege-take-2"
This reverts commit c1faca6333abe4b938b98fedc8d1f47b88209ecf, reversing
changes made to 8c658a0ecc7f2b5fc015d424baf9edf6f3eb2b0b.
See https://github.com/rails/rails/pull/27636#issuecomment-297534129
|
|
|
|
| |
fixtures, not an empty array.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
privileges (take 2)
Re-create https://github.com/rails/rails/pull/21233
eeac6151a5 was reverted (127509c071b4) because it broke tests.
----------------
ref: 72c1557254
- We must use the `authors` fixture together with `author_addresses` because of its foreign key constraint.
- Tests require PostgreSQL >= 9.4.2 because earlier versions had a bug with `ALTER CONSTRAINTS` that was fixed in 9.4.2.
|
|
|
|
|
|
|
| |
`supports_migrations?` was added in 4160b518 to determine whether schema
statements (`create_table`, `drop_table`, etc.) are implemented in the
adapter. But all tested databases have supported migrations since
at least a4fc93c3.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This ensures that multiple threads inside a transactional test see consistent
database state.
When a system test starts, Puma spins up one thread and Capybara spins up
another thread. Because of this, when tests are run, the database cannot
see what was inserted into the database on teardown. This is because
there are two threads using two different connections.
This change uses the statement cache to lock the threads to a single
connection, instead of each thread using its own connection and not being
able to see the other's work.
This code only runs in the fixture setup and teardown, so it does not
affect real production databases.
When a transaction is opened we set `lock_thread` to `Thread.current` so
we can keep track of which connection the thread is using. When we
roll back the transaction we unlock the thread, and then there is no
left-over data in the database because the transaction rolls back
the correct connections.
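A minimal sketch of the setup/teardown behaviour described above (simplified, not the exact fixture code):
```ruby
connection = ActiveRecord::Base.connection

# setup_fixtures: pin every thread to this connection while the test
# transaction is open, so Puma's and Capybara's threads see the same state.
connection.pool.lock_thread = true
connection.begin_transaction joinable: false

# ... run the test ...

# teardown_fixtures: roll the transaction back and release the lock so no
# stale data or pinned connection leaks into the next test.
connection.rollback_transaction if connection.transaction_open?
connection.pool.lock_thread = false
```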
[ Eileen M. Uchitelle, Matthew Draper ]
|
|
|
|
| |
Actually, private methods cannot be called with `self.`, so it's not just redundant; it's a bad habit in Ruby.
|