Fix connection leak when a thread checks in additional connections.
The code in `ConnectionPool#release` assumed that a single thread only
ever holds a single connection, and thus that releasing a connection
only requires the owning thread_id.

There is a trivial counterexample to this assumption: code that checks
out additional connections from the pool in the same thread. For
instance:

    connection_1 = ActiveRecord::Base.connection
    connection_2 = ActiveRecord::Base.connection_pool.checkout
    ActiveRecord::Base.connection_pool.checkin(connection_2)
    connection_3 = ActiveRecord::Base.connection

At this point, connection_1 has been removed from the
`@reserved_connections` hash, causing a NEW connection to be returned as
connection_3 and the loss of any tracking info on connection_1. As long
as the thread in this example lives, connection_1 will be inaccessible
and un-reapable.

If this block of code runs more times than the size of the connection
pool in a single thread, every subsequent connection attempt will time
out, as all of the available connections have been leaked.

Reverts parts of 9e457a8654fa89fe329719f88ae3679aefb21e56 and
essentially all of 4367d2f05cbeda855820e25a08353d4b7b3457ac.
Here you go, @senny. :grin:
Add information about "allow_destroy" requiring an ID. [ci skip]
I just wasted an absurd amount of time trying to figure out why my model
wasn't being deleted even though I was setting `_destroy` to true like
the instructions said. Making the documentation a little bit clearer so
that someone like me doesn't waste their time in the future.
Conflicts:
activerecord/CHANGELOG.md
This bug occurs when an attribute of an ActiveRecord model is an
ActiveRecord::Type::Integer type or an ActiveRecord::Type::Decimal type (or any
other type that includes the ActiveRecord::Type::Numeric module). When the value
of the attribute is negative and is set to the same negative value, it is marked
as changed.

Take the following example of a Person model with the integer attribute age:

    class Person < ActiveRecord::Base
      # age :integer(4)
    end

The following will produce the error:

    person = Person.new(age: -1)
    person.age = -1
    person.changes
    # => { "age" => [-1, -1] }
    person.age_changed?
    # => true

The problematic code is here:

    module ActiveRecord
      module Type
        module Numeric
          ...

          def non_numeric_string?(value)
            # 'wibble'.to_i will give zero, we want to make sure
            # that we aren't marking int zero to string zero as
            # changed.
            value.to_s !~ /\A\d+\.?\d*\z/
          end
        end
      end
    end

The regex match doesn't accept numbers with a leading '-'.
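A hedged sketch of the kind of fix this points at (the actual patch in Rails may differ): accept an optional leading minus sign in the pattern.

    def non_numeric_string?(value)
      # also accept a leading '-', so "-1" still counts as a numeric string
      value.to_s !~ /\A-?\d+\.?\d*\z/
    end

    non_numeric_string?(-1)  # => false; re-assigning -1 no longer marks the attribute as changed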
`ActiveRecord::Base#[]` has overhead that was introduced in 4.2. The
`foo["id"]` working with PKs other than ID isn't really a case that we
want to support publicly, but deprecating it was painful enough that we
avoid it. `_read_attribute` was introduced as the faster alternative for
use internally. By using that, we can save a lot of overhead. We also
save some overhead by reading the attribute one fewer time in
`stale_state`.

Fixes #18151
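For illustration (hypothetical model; `_read_attribute` is internal API, so application code should keep using the public readers):

    post = Post.first

    post["id"]                  # public path, goes through ActiveRecord::Base#[]
    post._read_attribute("id")  # internal fast path now used in stale_state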
If there is a method defined such as `find_and_do_stuff(id)`, which then
gets called on an association, we will perform statement caching and the
parent ID will not change on subsequent calls.
Fixes #18117
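A hedged illustration of the scenario (model, association, and method names are hypothetical):

    class Post < ActiveRecord::Base
      def self.find_and_do_stuff(id)
        find(id) # statement-cached when reached through an association
      end
    end

    alice.posts.find_and_do_stuff(1) # caches a statement scoped to alice's id
    bob.posts.find_and_do_stuff(2)   # before the fix, the cached statement kept alice's id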
Calling `changed_attributes` will ultimately check if every mutable
attribute has changed in place. Since this gets called whenever an
attribute is assigned, it's extremely slow. Instead, we can avoid this
calculation until we actually need it.
Fixes #18029
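Roughly the effect, with hypothetical model and attribute names:

    post = Post.first
    post.title = "New title"  # assignment no longer scans every mutable attribute
    post.changed_attributes   # the in-place change check happens here, on demand
    # => { "title" => "Old title" }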
PG will warn without it, but mysql2 errors out.
Changes `rails g model Post user:references` from

    def change
      create_table :posts do |t|
        t.references :user, index: true
      end
      add_foreign_key :posts, :users
    end

to

    def change
      create_table :posts do |t|
        t.references :user, index: true, foreign_key: true
      end
    end

Changes `rails g migration add_user_to_posts user:references` from

    def change
      add_reference :posts, :users, index: true
      add_foreign_key :posts, :users
    end

to

    def change
      add_reference :posts, :users, index: true, foreign_key: true
    end
This has the same comments as 9af90ffa00ba35bdee888e3e1ab775ba0bdbe72c;
however, it affects the `add_reference` method, and `t.references` in the
context of a `change_table` block. Both forms are sketched below.

There is a lot of duplication of code between creating and updating
tables. We should re-evaluate the structure of this code from a high
level so changes like this don't need to be made in two places. (Note to
self)
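For illustration, the two forms this change covers (table and column names are arbitrary):

    add_reference :posts, :user, index: true, foreign_key: true

    change_table :posts do |t|
      t.references :user, index: true, foreign_key: true
    end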
While we still aren't accepting PRs that only make changes like this,
it's fine when we're actively working on a method if it makes our lives
easier.
Rather than having to do:

    create_table :posts do |t|
      t.references :user
    end
    add_foreign_key :posts, :users

You can instead do:

    create_table :posts do |t|
      t.references :user, foreign_key: true
    end

Similar to the `index` option, you can also pass a hash. This will be
passed as the options to `add_foreign_key`. e.g.:

    create_table :posts do |t|
      t.references :user, foreign_key: { primary_key: :other_id }
    end

is equivalent to

    create_table :posts do |t|
      t.references :user
    end
    add_foreign_key :posts, :users, primary_key: :other_id
While we aren't taking PRs with these kinds of changes just yet, they
are fine if we're actively working on the method and it makes things
easier.
Without this option, if the test is interrupted in a way that keeps the
teardown block from running, the tests will fail to run until the table
is removed manually.
PG doesn't register its types using the `int(4)` format that others do.
As such, if we alias `int8` to the other integer types, the range
information is lost. This is fixed by simply registering it separately.

The other option (which I specifically chose to avoid) is to pass the
information of the original type that was being aliased as an argument.
I'd rather avoid that, since an alias should truly be treated the same.
If we need different behavior for a different type, we should explicitly
register it with that, and not have a conditional based on aliasing.

Fixes #18144

[Sean Griffin & ysbaddaden]
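A rough sketch of the idea, assuming the 4.2-era PostgreSQL adapter type map and its `register_type` method (the exact registration code may differ):

    # Registering int8 with its own limit keeps the 64-bit range check;
    # aliasing it to int2/int4 would inherit their smaller ranges.
    m.register_type "int2", Type::Integer.new(limit: 2)
    m.register_type "int4", Type::Integer.new(limit: 4)
    m.register_type "int8", Type::Integer.new(limit: 8)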
Fixes #18122
1. Specify that you need to create the test databases, and that no special
Rails command needs to be run to do that.
2. Although the underscore style of `rake test_mysql` works, make the
documentation of running the tests in RUNNING_UNIT_TESTS.rdoc
consistent with the "Contributing..." guide.
3. Promote "Testing Active Record" to not be a subsection of
"Running a Single Test," since it doesn't make sense as a subsection
of that.
Update description for `start` parameter.
I find that `Specifies the starting point for the batch processing.`
does not give enough information for me to understand what this
parameter actually does.
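For context, a hedged example of the option being described (4.2-era batches API; model and method are hypothetical):

    # Resume the batch walk at primary keys >= 2000 instead of from the
    # first record, e.g. to pick up an interrupted run.
    Person.find_each(start: 2000) do |person|
      person.do_something
    end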
Add nodoc to some constants [skip ci]
Closes #17945

`db:test:prepare` still purges the database to always keep the test
database in a consistent state.

This patch introduces new problems with `db:schema:load`. Prior
to the introduction of foreign keys, we could run this file against
a non-empty database. Since every `create_table` contained the
`force: true` option, this would recreate tables when loading the schema.
However, with foreign keys in place, `force: true` won't work anymore and
the task will crash.

/cc @schneems
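A hedged sketch of why `force: true` stops working once foreign keys appear in `schema.rb` (table names are illustrative):

    ActiveRecord::Schema.define(version: 20141220000000) do
      create_table "users", force: true do |t|
        t.string "name"
      end

      create_table "posts", force: true do |t|
        t.integer "user_id"
      end

      # Against a non-empty database, `force: true` can no longer just drop and
      # recreate "users": the foreign key from "posts" depends on it, so the
      # drop raises and the task crashes.
      add_foreign_key "posts", "users"
    end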
Apparently PG does not validate against RFC 4122. The intent of the original
patch is just to protect against PG errors (which potentially break txns, etc.)
because of bad user input, so we shouldn't try any harder than PG itself.

Closes #17931
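For illustration, inputs PostgreSQL itself accepts as `uuid` values even though they are not canonical RFC 4122 text form (per PG's documented parsing rules):

    "{a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11}"  # braces around the standard form
    "a0eebc999c0b4ef8bb6d6bb9bd380a11"        # hyphens omitted entirely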
In the case of serialized columns, we would expect the unserialized
value as input, not the serialized value. The original issue which made
this distinction, #14163, introduced a bug. If you passed serialized
input to the method, it would double serialize when it was sent to the
database. You would see the wrong input upon reloading, or get an error
if you had a specific type on the serialized column.
To put it another way, `update_column` is a special case of
`update_all`, which would take `['a']` and not `['a'].to_yaml`, but you
would not pass data from `params` to it.
Fixes #18037
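A hedged illustration (model and column are hypothetical; `serialize` stores YAML by default):

    class User < ActiveRecord::Base
      serialize :tags
    end

    user.update_column(:tags, ["a"])          # expected: pass the unserialized value
    user.update_column(:tags, ["a"].to_yaml)  # the bug: this got serialized a second time on write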
Because we're only using the `connection`, passing the entire tracker
isn't necessary. Eventually only the `connection` will be passed to
`add_constraints` with later refactoring, but currently that's not
possible because of the `construct_tables` method.
https://github.com/rails/rails/commit/39542fba54328ca048fb75a5d5b37f8e1d4c1f37#commitcomment-8938379
Fix ProtocolViolation/bind message for polymorphic + pluck or group+calc
Fix undesirable RangeError by Type::Integer. Add Type::UnsignedInteger.
Move microseconds formatting to `AbstractAdapter`.
Add foreign_type option for polymorphic has_one and has_many.
To make it possible to use a custom column name to save/read the polymorphic
associated type in a has_many or has_one polymorphic association, users can
now use the :foreign_type option to specify the column in which the associated
object's type will be saved.
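A hedged sketch of the option in use (model and column names are hypothetical):

    class Comment < ActiveRecord::Base
      belongs_to :commentable, polymorphic: true, foreign_type: :object_type
    end

    class Post < ActiveRecord::Base
      # store/read the associated type from the custom `object_type` column
      # instead of the default `commentable_type`
      has_many :comments, as: :commentable, foreign_type: :object_type
    end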
Remove unused "Developer" fixtures from tests
The `RecursiveCallbackDeveloper` and `ImmutableMethodDeveloper` classes
are not used anymore in tests, and neither is the `@cancelled` variable.
The test added in 42418cfc94d1356d35d28d786f63e7fab9406ad6 wasn't
actually testing anything, since the bug was with TZ aware attributes
only.
PostgreSQL, for example, allows infinity as a valid value for date/time
columns. The PG type has explicit handling for that case. However, time
zone conversion will end up trampling that handling. Unfortunately, we
can't call super and then convert time zones.

However, if we get back nil from `.in_time_zone`, it's something we
didn't expect, so we can let the superclass handle it.

Fixes #17971
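A hedged repro of the behavior being preserved (hypothetical model with a PostgreSQL timestamp column):

    event = Event.create!(ends_at: Float::INFINITY)
    event.reload.ends_at  # => Infinity, rather than nil after time zone conversion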
`freeze` will ultimately end up freezing the `AttributeSet`, which in
turn freezes its `@attributes` hash. However, we actually insert a
special object to lazily instantiate the values of the hash on demand.
When it does need to actually instantiate all of them for iteration (the
only case is `ActiveRecord::Base#attributes`, which calls
`AttributeSet#to_h`), it sets an instance variable as a performance
optimization.

Since it's just an optimization for subsequent calls, and that method
being called at all is a very uncommon case, we can just leave the ivar
alone if we're frozen, as opposed to coming up with some overly
complicated mechanism for freezing which allows us to continue to modify
ourselves.

Fixes #17960
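A hedged repro of the symptom (hypothetical model):

    post = Post.new(title: "frozen")
    post.freeze
    # Previously this tried to memoize on the frozen AttributeSet and raised;
    # skipping the memoization when frozen lets it work.
    post.attributes  # => { "id" => nil, "title" => "frozen", ... }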
The user is able to pass PG string literals in 4.1, and have it
converted to an array. This is also possible in 4.2, but it would remain
in string form until saving and reloading, which breaks our
`attr = save.reload.attr` contract. I think we should deprecate this in
5.0, and only allow array input from user sources. However, this
currently constitutes a breaking change to public API that did not go
through a deprecation cycle.
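For illustration, the 4.1-style input in question (assuming a PostgreSQL `text[]` column named `tags` on a hypothetical model):

    user.tags = "{foo,bar,baz}"  # PG array literal as user input
    user.tags                    # => ["foo", "bar", "baz"] once type cast, as in 4.1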