This is obviously all very internal, but sometimes you have to look at
it... and when you do, it'll save a lot of confusion if it doesn't lie
about its identity.
|
The subtype will (quite reasonably) ignore the possibility that it has
`changed_in_place?` by becoming nil.
Fixes #19467
|
Fix documentation for find_or_create_by
|
The code in the comment fails on concurrent inserts when run inside a transaction.
The fix is to force a savepoint so that, if the database raises a unique-violation exception, only the savepoint is rolled back rather than aborting the whole transaction. Otherwise, you'll get errors like:
```
(0.3ms) BEGIN
Cart Load (0.5ms) SELECT "carts".* FROM "carts" WHERE "carts"."uuid" = '12345' LIMIT 1
# Another process inserts a cart with uuid of '12345' right now
SQL (4371.7ms) INSERT INTO "carts" ("created_at", "updated_at", "uuid") VALUES ('2015-03-21 01:05:07.833231', '2015-03-21 01:05:07.833231', '12345') RETURNING "id" [["created_at", Sat, 21 Mar 2015 01:05:07 PDT -07:00], ["updated_at", Sat, 21 Mar 2015 01:05:07 PDT -07:00], ["uuid", "12345"]]
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "carts_uuid_idx1"
DETAIL: Key (uuid)=(12345) already exists.
: INSERT INTO "carts" ("created_at", "updated_at", "uuid") VALUES ('2015-03-21 01:05:07.833231', '2015-03-21 01:05:07.833231', '12345') RETURNING "id"
# Retrying the find
Cart Load (0.8ms) SELECT "carts".* FROM "carts" WHERE "carts"."uuid" = '12345' LIMIT 1
PG::InFailedSqlTransaction: ERROR: current transaction is aborted, commands ignored until end of transaction block
: SELECT "carts".* FROM "carts" WHERE "carts"."uuid" = '12345' LIMIT 1
(0.1ms) ROLLBACK
ActiveRecord::StatementInvalid: PG::InFailedSqlTransaction: ERROR: current transaction is aborted, commands ignored until end of transaction block
: SELECT "carts".* FROM "carts" WHERE "carts"."uuid" = '12345' LIMIT 1
```
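A sketch of the retry pattern the corrected comment describes, reusing the `Cart`/`uuid` names from the log above (the exact wording that ended up in the docs may differ):

```
begin
  # requires_new: true forces a savepoint, so a unique violation only rolls
  # back the savepoint instead of poisoning the surrounding transaction.
  cart = Cart.transaction(requires_new: true) do
    Cart.find_or_create_by(uuid: "12345")
  end
rescue ActiveRecord::RecordNotUnique
  # Another process won the race; the row now exists, so retrying finds it.
  retry
end
```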
|
As described in https://github.com/rails/rails/issues/19420: when using
the PostgreSQL bigint[] column type, the bigint limit was not being
carried into schema.rb. This caused the column to become a regular
integer column when the database was rebuilt from schema.rb. This fix
addresses that by delegating the limit from the subtype to the Array
type.
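For illustration, a hypothetical migration like the following previously lost the bigint limit in the dump; with the fix, schema.rb keeps `limit: 8` (bigint) for the array column instead of degrading it to a plain integer array (table and column names are made up):

```
class CreateMeasurements < ActiveRecord::Migration
  def change
    create_table :measurements do |t|
      # Dumped with limit: 8 after the fix; previously dumped as a plain
      # integer array and silently narrowed when loading schema.rb.
      t.column :sample_ids, :bigint, array: true
    end
  end
end
```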
|
Preserving RACK_ENV behavior.
This reverts commit 7bdc7635b885e473f6a577264fd8efad1c02174f, reversing
changes made to 45786be516e13d55a1fca9a4abaddd5781209103.
|
Fixes #19389.
|
[ci skip]
|
Updated MySQL documentation link for STRICT_ALL_TABLES [ci skip]
|
Don't fall back to RACK_ENV when RAILS_ENV is not present
|
ActiveRecord: Add a changelog entry for issue #17680. [ci skip]
Conflicts:
activerecord/CHANGELOG.md
|
This is a better test for 51660f0. It is testing that the SQL is the
same before and after the previously leaky scope is called. Before, if
`hotel.drink_designers` was called first, then `hotel.recipes` would
incorrectly get the scope applied. We want to be sure that the
polymorphic hm:t association is not leaking into or affecting the
SQL for the hm:t association on `Hotel`.
The reason I couldn't do this before was that the SQL was getting
cached; I wanted to resolve that later and then fix the test properly.
Because of the caching, this test requires that
`Hotel.reflect_on_association(:recipes).clear_association_scope_cache`
be called after the first call to `hotel.recipes`, to clear the
association scope chain so it doesn't interfere with the rest of the test.
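A rough sketch of that test flow; `capture_sql` is the Active Record test-suite helper, while the record setup and exact assertions are illustrative:

```
hotel = Hotel.create!
sql_before = capture_sql { Hotel.find(hotel.id).recipes.to_a }

# Clear the cached association scope so the first call doesn't pin the SQL.
Hotel.reflect_on_association(:recipes).clear_association_scope_cache

# Exercise the previously leaky polymorphic scope...
Hotel.find(hotel.id).drink_designers.to_a

# ...then make sure it did not bleed into the recipes association's SQL.
assert_equal sql_before, capture_sql { Hotel.find(hotel.id).recipes.to_a }
```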
|
In the tests, if I called `post.categorizations.to_a` and then later called
`post.categorizations.to_a` expecting different results, the two queries
would be the same because of the caching involved in
`@association_scope_cache`. The chain gets cached and the queries will
be the same even when they are not supposed to be (i.e. when testing an
order-dependent scoping issue).
I found this issue because I was working on a bug with cached scopes
in hm:t and hm:t polymorphic relationships, but `capture_sql` was
outputting the wrong SQL to write a good test against.
|
Reuse the CollectionAssociation#reader proxy cache if the foreign key is present from the start.
Conflicts:
activerecord/CHANGELOG.md
|
Reuse the CollectionAssociation#reader proxy cache if the foreign key is
present from the start.
When a new record has the necessary information prior to save, we can
avoid busting the cache.
We could simply clear the @proxy on #reset or #reset_scope, but that
would clear the cache more often than necessary.
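A hedged sketch of the memoization being described, not the actual Rails source; `stale_target?` and `CollectionProxy.create` are existing association internals assumed here:

```
def reader(force_reload = false)
  reload if force_reload || stale_target?

  # Keep the proxy across resets when possible instead of rebuilding it,
  # so a record that had its foreign key from the start reuses the proxy.
  @proxy ||= CollectionProxy.create(klass, self)
end
```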
|
Update link in comments to point to latest MySQL production version documentation. See here for reference: http://dev.mysql.com/doc/refman/5.0/en/choosing-version.html
|
Fixes db:structure:dump when using schema_search_path and PostgreSQL
extensions.
Closes #17157.
|
It is redundant with the checks in `eager_loading?`, except for the
difference between `includes_values.present?` and `includes_values.any?`,
which is a distinction without a difference because `false` has no
meaning for `includes`.
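For reference, the only case where the two checks diverge is an array of falsy values, which never occurs for `includes`:

```
[false].present?  # => true  (the array is non-empty)
[false].any?      # => false (no truthy element)
```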
|
Materialize subqueries by adding `DISTINCT` to support MySQL 5.7.6 and later
|
Materialize subqueries by adding `DISTINCT`, to support MySQL 5.7.6 `optimizer_switch='derived_merge=on'`
|
I’m renaming all instances of `use_transactional_fixtures` to
`use_transactional_tests` and “transactional fixtures” to
“transactional tests”.
I’m deprecating `use_transactional_fixtures=`. So anyone who is
explicitly setting this will get a warning telling them to use
`use_transactional_tests=` instead.
I’m maintaining backwards compatibility—both forms will work.
`use_transactional_tests` will check to see if
`use_transactional_fixtures` is set and use that, otherwise it will use
itself. But because `use_transactional_tests` is a class attribute
(created with `class_attribute`) this requires a little bit of hoop
jumping. The writer method that `class_attribute` generates defines a
new reader method that returns the value being set, which means we can’t
set the default of `true` using `use_transactional_tests=` as was done
previously because that won’t take into account anyone using
`use_transactional_fixtures`. Instead I defined the reader method
manually and it checks `use_transactional_fixtures`. If it was set then
it should be used, otherwise it should return the default, which is
`true`. If someone uses `use_transactional_tests=` then it will
overwrite the backwards-compatible method with whatever they set.
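A hedged sketch of that reader shim (the host class name is illustrative, and the real change also adds a deprecation warning to the legacy writer):

```
require "active_support/core_ext/class/attribute"

class FixtureTestCase
  class_attribute :use_transactional_fixtures   # legacy flag, nil until set
  class_attribute :use_transactional_tests

  # Manually defined reader: an explicitly set legacy flag wins, otherwise
  # the new default of true applies. Assigning use_transactional_tests=
  # later redefines this method with the given value (class_attribute's
  # writer behaviour), matching the description above.
  def self.use_transactional_tests
    return use_transactional_fixtures unless use_transactional_fixtures.nil?
    true
  end
end

FixtureTestCase.use_transactional_tests            # => true (default)
FixtureTestCase.use_transactional_fixtures = false
FixtureTestCase.use_transactional_tests            # => false (legacy wins)
```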
|
If a model had a polymorphic hm:t association with a scope and a second,
non-scoped hm:t association, the polymorphic scope would leak
through into the call for the non-polymorphic hm:t association.
This would only break if `hotel.drink_designers` was called before
`hotel.recipes`. If `hotel.recipes` was called first there was
no problem with the SQL.
Before (employable_type should not be here):
```
SELECT COUNT(*) FROM "drink_designers" INNER JOIN "chefs" ON
"drink_designers"."id" = "chefs"."employable_id" INNER JOIN
"departments" ON "chefs"."department_id" = "departments"."id" WHERE
"departments"."hotel_id" = ? AND "chefs"."employable_type" = ?
[["hotel_id", 1], ["employable_type", "DrinkDesigner"]]
```
After:
```
SELECT COUNT(*) FROM "recipes" INNER JOIN "chefs" ON "recipes"."chef_id"
= "chefs"."id" INNER JOIN "departments" ON "chefs"."department_id" =
"departments"."id" WHERE "departments"."hotel_id" = ? [["hotel_id", 1]]
```
From the SQL you can see that `employable_type` was leaking through when
calling recipes. The solution is to dup the chain of the polymorphic
association so it doesn't get cached. Additionally, this mirrors
`scope_chain`, which dups the `source_reflection`'s `scope_chain`.
This required another model/table/relationship because the leak only
happens on a hm:t polymorphic that's called before another hm:t on the
same model.
I am specifically testing the SQL here instead of the number of records
because the test could pass if there was one drink designer recipe for
the drink designer chef even though `employable_type` was leaking
through. The test needs to specifically check that `employable_type` is
not in the SQL statement.
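For orientation, the associations involved look roughly like this; the model names come from the message, but the exact test fixtures may differ:

```
class Hotel < ActiveRecord::Base
  has_many :departments
  has_many :chefs, through: :departments
  # Polymorphic hm:t, carries the employable_type condition...
  has_many :drink_designers, through: :chefs, source: :employable,
           source_type: "DrinkDesigner"
  # ...and a plain hm:t that must not inherit that condition.
  has_many :recipes, through: :chefs
end

class Department < ActiveRecord::Base
  belongs_to :hotel
  has_many :chefs
end

class Chef < ActiveRecord::Base
  belongs_to :department
  belongs_to :employable, polymorphic: true
  has_many :recipes
end
```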
|
Isolate access to .default_scopes in ActiveRecord::Scoping::Default
|
Instead use .scope_attributes? consistently in ActiveRecord to check
whether there are attributes currently associated with the scope.
Move the implementation of .scope_attributes? and .scope_attributes to
ActiveRecord::Scoping, because they don't have anything specifically to
do with named scopes, and their only dependency (in the case of
.scope_attributes?) and only caller (in the case of .scope_attributes)
are contained in Scoping.
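A small illustration of the check, assuming a `Comment` model; return values are described loosely since the predicate only needs to be truthy or falsy:

```
Comment.scope_attributes?      # falsy: no scope attributes outside a scoping block

Comment.where(post_id: 1).scoping do
  Comment.scope_attributes?    # truthy: the current scope carries attributes
  Comment.scope_attributes     # includes "post_id" => 1
end
```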
|
Versions of the pg gem earlier than 0.18.0 cannot be used safely with Ruby 2.2.
Specifically, pg 0.17 when used with Ruby 2.2 has a known bug that causes
random bits to be added to the end of strings. Further explanation here:
https://bitbucket.org/ged/ruby-pg/issue/210/crazy-bytes-being-added-to-record
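In an application this amounts to a Gemfile constraint along these lines (the exact bound used by the Rails generator may differ):

```
# Require a pg release that is safe to use with Ruby 2.2.
gem "pg", ">= 0.18.0"
```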
|
This introduces undesirable `Rails.logger` formatters (such as the syslog
formatter) onto the `Logger.new(STDERR)` used for the console. The production
logger may be writing somewhere other than standard IO, so we can't presume to
reuse its formatter.
With syslog, this causes missing newlines in the console, so irb prompts
start at the end of the last log message.
We can work to expose the console formatter in another way to address
the original issue.
This reverts commit 026ce5ddf11c4cda0aae7f33a9266e54117db318, reversing
changes made to 6f0a69c5899ebdc892e2aa23e68e2604fa70fb73.
|
This change was prompted by 598b841.
|
There was a typo in the `:requires_new` option. This led
to `#<ArgumentError: unknown keyword: require_new>`, leaving all the
triggers in a disabled state.
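In isolation, the difference the fix makes (the actual call sits inside the adapter's referential-integrity handling):

```
ActiveRecord::Base.transaction(require_new: true) do   # ArgumentError: unknown keyword
  # never reached
end

ActiveRecord::Base.transaction(requires_new: true) do  # correct: runs inside a savepoint
  # ...
end
```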
|
As of Ruby 2.2, Psych can handle any object which is marshallable. This
was not true on previous versions of Ruby, so our delegator types had to
provide their own implementation of `init_with` and `encode_with`.
Unfortunately, this doesn't match up with what Psych will do today.
Since by the time we hit this layer, the objects will have already been
created, I think it makes the most sense to just grab the current type
from the class.
|
I should have done this in the first place. We are now serializing an
explicit version so we can make more careful changes in the future. This
will load Active Record objects which were serialized in Rails 4.1.
There will be bugs, as YAML serialization was at least partially broken
back then. There will also be edge cases that we might not be able to
handle, especially if the type of a column has changed.
In addition, we're passing this as `from_database`, since that is
required for serialized columns at minimum. All other types were
serializing the cast value. At a glance, there should be no types for
which this is a problem.
Finally, dirty checking information will be lost on records serialized
in 4.1, so no columns will be marked as changed.
|
remove unnecessary autoload path parameters
|
The table is being modified in tests, without reloading the column
information on the appropriate class. This is leading to incorrect
column information in many cases.
The failures fixed by this commit can be replicated with:
ARCONN=postgresql ruby -Itest test/cases/adapters/postgresql/hstore_test.rb --seed 21574
|
The default value of `"pg_arrays"."tags"` is being changed to `[]` in
one test, but the column information on the model isn't reset after it's
changed back. As such, we think the default value is `[]` when in the
database it's actually `nil`. That means any test which was assigning
`[]` to a new record would have that key skipped with partial writes, as
it hasn't changed from the default. However since the *actual* default
value is `nil`, we get back values that the test doesn't expect, and it
fails.
|
This is a follow-up to #19257
|
Fix rollback of frozen records
|