| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
| |
This will allow us to pass the predicate builder into the constructor of
these handlers. The procs had to be changed to objects, because the
`PredicateBuilder` needs to be marshalable. If we ever decide to make
`register_handler` part of the public API, we should come up with a
better solution which allows procs.
/cc @mrgilman
[Sean Griffin & Melanie Gilman]
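A hedged sketch of the shape of this change (class and method names are
illustrative, not necessarily the exact Rails internals): a proc handler
becomes an object exposing the same #call interface, so it can be built
with the predicate builder and marshalled along with it.

    # Roughly replaces: register_handler(BasicObject, ->(attribute, value) { attribute.eq(value) })
    class BasicObjectHandler
      def initialize(predicate_builder)
        @predicate_builder = predicate_builder
      end

      def call(attribute, value)
        attribute.eq(value)  # `attribute` is an Arel attribute
      end
    end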
|
|
|
|
|
| |
We're accidentally documenting `PredicateBuilder` and `ArrayHandler`
since there's a constant which is missing `# :nodoc:`
|
|
|
|
|
|
|
| |
Construction of relations can be a hotspot, so we don't want to create
one of these in the constructor. This also allows us to do more
expensive things in the predicate builder's constructor, since it's
created once per AR::Base subclass.
|
|
|
|
| |
We don't memoize the relation instance
|
|\ |
|
|/ |
|
|
|
|
|
|
|
|
| |
I think we should deprecate this behavior and just error if you tell us
to do a case-insensitive comparison for types which are not case
sensitive. Partially reverts 35592307
Fixes #18195
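For context, a sketch of the situation in question (the model is
hypothetical): a case-insensitive uniqueness check requested for a
column whose type has no notion of case, such as an integer.

    class Account < ActiveRecord::Base
      # `account_number` is an integer column, so lower-casing it is meaningless
      validates :account_number, uniqueness: { case_sensitive: false }
    end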
|
| |
|
|
|
|
| |
reference to past scope
|
|
|
|
|
| |
`Rails` constant is added by rails-html-sanitizer, leading to bugs in
non-Rails apps using ActiveRecord and ActionMailer
|
| |
|
|\
| |
| |
| |
| | |
M7/docs-active_record-update_query_method_docs_with_full_description
Describe full behaviour of Active Record's attribute query methods
|
| |
| |
| |
| |
| | |
value is present. [ci skip]
The way Active Record query methods handle numeric values is a special
case, and is not part of Rails's standard definition of present. This
update attempts to make this clearer in the docs, so that people don't
expect Object#present? to return false when used on a number that is
zero.
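A quick illustration of the distinction (the model is hypothetical):

    0.present?   # => true  -- Object#present?: zero is not blank

    person = Person.new(age: 0)
    person.age?  # => false -- attribute query methods treat a zero number as absent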
|
| |
| |
| | |
Update Active Record's attribute query methods documentation to clarify that whether an attribute is present is based on Object#present?. This gives people a place to go see what the exact definition of presence is. [ci skip]
|
| |
| |
| |
| | |
full behaviour. [ci skip]
|
|\ \
| | |
| | | |
Remove unneeded special case to calculate size for has_many :through
|
| |/
| |
| |
| |
| | |
All cases are properly handled in CollectionAssociation
for all subclasses of this association
|
| |
| |
| |
| |
| |
| |
| | |
We were ignoring the `default_value?` escape clause in the serialized
type, which caused the default value to always be treated as changed.
Fixes #18169
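An illustration of the symptom, assuming a hypothetical model whose
serialized column has a database default:

    class Preference < ActiveRecord::Base
      serialize :settings, Hash  # the column has a default in the schema
    end

    pref = Preference.create!
    pref = Preference.find(pref.id)
    pref.changed?  # expected: false -- the untouched default should not count as a change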
|
| |
| |
| |
| |
| |
| |
| |
| | |
The code for `TableDefinition#references` and
`SchemaStatements#add_reference` was almost identical, both
structurally and in terms of domain knowledge. This removes that
duplication into a common class, using the `Table` API as the expected
interface of its collaborator.
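A rough sketch of the shape of that extraction (names are assumed for
illustration, not the exact Rails class): one object builds the columns
and index for a reference, and both entry points hand it a Table-like
collaborator.

    class ReferenceDefinitionSketch
      def initialize(name, type: :integer, polymorphic: false, index: false)
        @name, @type, @polymorphic, @index = name, type, polymorphic, index
      end

      def add_to(table)
        table.column("#{@name}_id", @type)
        table.column("#{@name}_type", :string) if @polymorphic
        table.index(column_names) if @index
      end

      private

      def column_names
        ["#{@name}_id", *("#{@name}_type" if @polymorphic)]
      end
    end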
|
| |
| |
| |
| | |
This isn't Seattle.rb, @senny. ;)
|
|\ \
| | |
| | | |
Fix connection leak when a thread checks in additional connections.
|
| |/
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The code in `ConnectionPool#release` assumed that a single thread only
ever holds a single connection, and thus that releasing a connection
only requires the owning thread_id.
There is a trivial counterexample to this assumption: code that checks
out additional connections from the pool in the same thread. For
instance:
    connection_1 = ActiveRecord::Base.connection
    connection_2 = ActiveRecord::Base.connection_pool.checkout
    ActiveRecord::Base.connection_pool.checkin(connection_2)
    connection_3 = ActiveRecord::Base.connection
At this point, connection_1 has been removed from the
`@reserved_connections` hash, causing a NEW connection to be returned as
connection_3 and the loss of any tracking info on connection_1. As long
as the thread in this example lives, connection_1 will be inaccessible
and un-reapable. If this block of code runs more times than the size of
the connection pool in a single thread, every subsequent connection
attempt will timeout, as all of the available connections have been
leaked.
Reverts parts of 9e457a8654fa89fe329719f88ae3679aefb21e56 and
essentially all of 4367d2f05cbeda855820e25a08353d4b7b3457ac
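A hedged sketch of the invariant the fix needs to maintain (not the
exact Rails code): only forget a thread's tracked connection when the
connection being released really is the tracked one.

    class PoolSketch
      def initialize
        @reserved_connections = {}
      end

      def release(conn, owner_thread_id)
        # Extra checkouts from the same thread no longer clobber the
        # tracked connection: delete only on a match.
        if @reserved_connections[owner_thread_id].equal?(conn)
          @reserved_connections.delete(owner_thread_id)
        end
      end
    end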
|
|\ \
| | |
| | | |
Add information about "allow_destroy" requiring an ID. [ci skip]
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
I just wasted an absurd amount of time trying to figure out why my model
wasn't being deleted even though I was setting `_destroy` to true like
the instructions said. Making the documentation a little bit clearer so
that someone like me doesn't waste their time in the future.
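A usage sketch of the requirement being documented (the model names are
hypothetical): `_destroy` only takes effect when the nested attributes
also include the record's `id`.

    class Member < ActiveRecord::Base
      has_one :avatar
      accepts_nested_attributes_for :avatar, allow_destroy: true
    end

    member = Member.first
    # Without `id`, `_destroy` is silently ignored and nothing is deleted:
    member.update(avatar_attributes: { id: member.avatar.id, _destroy: "1" })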
|
|\ \ \
| | | |
| | | |
| | | |
| | | | |
Conflicts:
activerecord/CHANGELOG.md
|
| | |/
| |/| |
|
| | | |
|
| | | |
|
|/ /
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This bug occurs when an attribute of an ActiveRecord model is an
ActiveRecord::Type::Integer type or an ActiveRecord::Type::Decimal type
(or any other type that includes the ActiveRecord::Type::Numeric
module). When the value of the attribute is negative and is set to the
same negative value, it is marked as changed.
Take the following example of a Person model with the integer attribute age:
    class Person < ActiveRecord::Base
      # age :integer(4)
    end

The following will produce the error:

    person = Person.new(age: -1)
    person.age = -1
    person.changes
    # => { "age" => [-1, -1] }
    person.age_changed?
    # => true
The problematic line is here:
    module ActiveRecord
      module Type
        module Numeric
          ...
          def non_numeric_string?(value)
            # 'wibble'.to_i will give zero, we want to make sure
            # that we aren't marking int zero to string zero as
            # changed.
            value.to_s !~ /\A\d+\.?\d*\z/
          end
        end
      end
    end
The regex match doesn't accept numbers with a leading '-'.
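A minimal sketch of the kind of fix implied above (not necessarily the
exact patch that was merged): allow an optional leading '-' in the
pattern, so negative numbers are no longer treated as non-numeric
strings.

    def non_numeric_string?(value)
      value.to_s !~ /\A-?\d+\.?\d*\z/
    end

    non_numeric_string?(-1)        # => false, so re-assigning -1 is not a change
    non_numeric_string?("wibble")  # => true, the original guard still holds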
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
`ActiveRecord::Base#[]` has overhead that was introduced in 4.2. The
`foo["id"]` working with PKs other than ID isn't really a case that we
want to support publicly, but deprecating was painful enough that we
avoid it. `_read_attribute` was introduced as the faster alternative for
use internally. By using that, we can save a lot of overhead. We also
save some overhead by reading the attribute one fewer time in
`stale_state`.
Fixes #18151
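A small sketch of the two paths (the model is hypothetical, and
`_read_attribute` is internal API, shown only to illustrate the
reasoning above):

    post = Post.first
    post["id"]                  # public reader, goes through the slower `[]` path
    post._read_attribute("id")  # internal reader now used by the association code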
|
| |
| |
| |
| |
| |
| |
| |
| | |
If there is a method defined such as `find_and_do_stuff(id)`, which then
gets called on an association, we will perform statement caching and the
parent ID will not change on subsequent calls.
Fixes #18117
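An illustration of the scenario (the models and the method are
hypothetical):

    class Comment < ActiveRecord::Base
      def self.find_and_do_stuff(id)
        find(id)
      end
    end

    class Post < ActiveRecord::Base
      has_many :comments
    end

    Post.first.comments.find_and_do_stuff(1)
    # Before the fix, the cached statement could keep the first post's id:
    Post.last.comments.find_and_do_stuff(2)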
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Calling `changed_attributes` will ultimately check if every mutable
attribute has changed in place. Since this gets called whenever an
attribute is assigned, it's extremely slow. Instead, we can avoid this
calculation until we actually need it.
Fixes #18029
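A generic sketch of the deferral pattern being described (not the actual
Rails internals): assignment only invalidates a cached result, and the
expensive comparison runs on the first read instead.

    class LazyChangeTracker
      def initialize(attributes)
        @original = attributes.dup
        @current  = attributes
        @changed  = nil
      end

      def write(name, value)
        @current[name] = value
        @changed = nil  # cheap: just forget the cached answer
      end

      def changed_attributes
        # The expensive work happens here, only when somebody asks.
        @changed ||= @current.select { |name, value| value != @original[name] }
      end
    end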
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This has the same comments as 9af90ffa00ba35bdee888e3e1ab775ba0bdbe72c;
however, it affects the `add_reference` method and `t.references` in the
context of a `change_table` block.
There is a lot of duplication of code between creating and updating
tables. We should re-evaluate the structure of this code from a high
level so changes like this don't need to be made in two places. (Note to
self)
|
| |
| |
| |
| |
| |
| | |
While we still aren't accepting PRs that only make changes like this,
it's fine when we're actively working on a method if it makes our lives
easier.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Rather than having to do:

    create_table :posts do |t|
      t.references :user
    end
    add_foreign_key :posts, :users

You can instead do:

    create_table :posts do |t|
      t.references :user, foreign_key: true
    end

Similar to the `index` option, you can also pass a hash. This will be
passed as the options to `add_foreign_key`. e.g.:

    create_table :posts do |t|
      t.references :user, foreign_key: { primary_key: :other_id }
    end

is equivalent to

    create_table :posts do |t|
      t.references :user
    end
    add_foreign_key :posts, :users, primary_key: :other_id
|
| |
| |
| |
| |
| |
| | |
While we aren't taking PRs with these kinds of changes just yet, they
are fine if we're actively working on the method and it makes things
easier.
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
PG doesn't register its types using the `int(4)` format that others do.
As such, if we alias `int8` to the other integer types, the range
information is lost. This is fixed by simply registering it separately.
The other option (which I specifically chose to avoid) is to pass the
information of the original type that was being aliased as an argument.
I'd rather avoid that, since an alias should truly be treated the same.
If we need different behavior for a different type, we should explicitly
register it with that, and not have a conditional based on aliasing.
Fixes #18144
[Sean Griffin & ysbaddaden]
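A small, hedged illustration of what the separate registration preserves
(this uses the attribute type directly rather than the adapter's exact
type-map code): an 8-byte integer type keeps its own range instead of
inheriting a narrower one from an alias.

    require "active_record"

    int8 = ActiveRecord::Type::Integer.new(limit: 8)
    int8.type_cast_from_user((2**63) - 1)  # a legitimate bigint value casts without a RangeError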
|
|
|
|
| |
Fixes #18122
|
| |
|
|\
| |
| | |
Update description for `start` parameter.
|
| |
| |
| |
| |
| |
| | |
I find that `Specifies the starting point for the batch processing.`
does not give enough information for me to understand what this
parameter actually does.
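A usage sketch of the option being documented (the model and column are
hypothetical): `start` is the primary key value to begin the batches at,
which also makes it possible to split the work between workers.

    Person.find_each(start: 10_000, batch_size: 1_000) do |person|
      person.update(newsletter: true)
    end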
|
| |
| |
| |
| | |
Add nodoc to some constants [skip ci]
|
| | |
|
|/
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Closes #17945
`db:test:prepare` still purges the database to always keep the test
database in a consistent state.
This patch introduces new problems with `db:schema:load`. Prior
to the introduction of foreign keys, we could run this file against
a non-empty database. Since every `create_table` contained the
`force: true` option, this would recreate tables when loading the schema.
However, with foreign keys in place, `force: true` won't work anymore and
the task will crash.
/cc @schneems
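A simplified, hypothetical `db/schema.rb` showing where the crash comes
from: `force: true` has to drop each table before recreating it, and
dropping `users` fails while the existing `posts` table still holds a
foreign key that references it.

    ActiveRecord::Schema.define(version: 1) do
      create_table "posts", force: true do |t|
        t.integer "user_id"
      end

      create_table "users", force: true do |t|
        t.string "name"
      end

      add_foreign_key "posts", "users"
    end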
|
|
|
|
|
|
|
|
| |
Apparently PG does not validate against RFC 4122. The intent of the original
patch is just to protect against PG errors (which potentially break txns, etc.)
caused by bad user input, so we shouldn't try any harder than PG itself.
Closes #17931
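An illustration of input that PostgreSQL itself accepts even though it
is not in canonical RFC 4122 form (the model and column are
hypothetical), which is why the Rails-side check shouldn't be stricter:

    Device.find_by(identifier: "{a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11}")  # braces are accepted by PG
    Device.find_by(identifier: "a0eebc999c0b4ef8bb6d6bb9bd380a11")        # omitted hyphens are too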
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In the case of serialized columns, we would expect the unserialized
value as input, not the serialized value. The original issue which made
this distinction, #14163, introduced a bug. If you passed serialized
input to the method, it would be double serialized when sent to the
database. You would see the wrong value upon reloading, or get an error
if you had a specific type on the serialized column.
To put it another way, `update_column` is a special case of
`update_all`, which would take `['a']` and not `['a'].to_yaml`, but you
would not pass data from `params` to it.
Fixes #18037
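An illustration of the expected input (the model is hypothetical):

    class User < ActiveRecord::Base
      serialize :tags, Array
    end

    user = User.first
    user.update_column(:tags, ["a"])          # pass the unserialized value
    user.update_column(:tags, ["a"].to_yaml)  # don't: this used to be serialized a second time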
|
|
|
|
|
|
|
|
|
| |
We're only using the `connection`, so passing the entire tracker
isn't necessary.
Eventually only the `connection` will be passed to `add_constraints`
with later refactoring, but currently that's not possible because of
the `construct_tables` method.
|
|\
| |
| | |
Fix ProtocolViolation/bind message for polymorphic + pluck or group+calc
|
| | |
|
|\ \
| | |
| | | |
Fix undesirable RangeError by Type::Integer. Add Type::UnsignedInteger.
|