| Commit message | Author | Age | Files | Lines |
kamipo/all_of_queries_should_return_correct_result
All queries should return correct results even when including large numbers
|
Since 31ffbf8d, finder methods no longer raise `RangeError`, so
`StatementCache#execute` is the only remaining place that raises the
exception for finder queries.
`StatementCache` is used for simple equality queries in the codebase.
This means that whenever `StatementCache#execute` raises `RangeError`, the
result can always be regarded as empty.
So `StatementCache#execute` now simply returns nil in that range error
case, and callers treat nil as an empty result, which avoids catching the
exception in many places.
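A minimal sketch of the pattern described above; the class and method names are illustrative, not the actual Active Record internals:
```ruby
# Sketch: a cached finder that treats out-of-range bind values as "no rows"
# instead of letting RangeError escape to every caller.
class CachedFinder
  def initialize(model)
    @model = model
  end

  # Returns matching records, or nil when a bind value cannot fit the column
  # type; callers regard nil as an empty result.
  def execute(id)
    @model.where(id: id).to_a
  rescue RangeError
    nil
  end
end

records = CachedFinder.new(Topic).execute(9223372036854775808) || []
```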
|
Currently several queries cannot return correct results due to incorrect
`RangeError` handling.
First example:
```ruby
assert_equal true, Topic.where(id: [1, 9223372036854775808]).exists?
assert_equal true, Topic.where.not(id: 9223372036854775808).exists?
```
The first example should obviously be true, but currently it returns
false.
Second example:
```ruby
assert_equal topics(:first), Topic.where(id: 1..9223372036854775808).find(1)
```
The second example should also return the object, but currently it
raises `RecordNotFound`.
As the examples show, assuming an empty result for queries that include an
out-of-range number is not always correct.
Therefore, this change handles `RangeError` by generating executable SQL
instead of raising it to users, so the queries always return correct
results. With this change, `RangeError` is no longer raised to users.
|
Fix error message when adapter is not specified
|
When we added support for multiple databases through a 3-tiered config
and configuration objects, this error message got a bit convoluted.
Previously, if you had an application with a missing configuration and
multiple databases, the error message would look like this:
```
'doesnexist' database is not configured. Available: development,
development, test, test, production, production
(ActiveRecord::AdapterNotSpecified)
```
That's not very descriptive, since it duplicates the environments
(because there are multiple databases per environment for this
application).
To fix this I've constructed a more readable error message, which now
reads like this if you have a multi-db app:
```
The `doesntexist` database is not configured for the `production`
environment. (ActiveRecord::AdapterNotSpecified)
Available databases configurations are:
development: primary, primary_readonly
test: primary, primary_readonly
production: primary, primary_readonly
```
And like this if you have a single db app:
```
The `doesntexist` database is not configured for the `production`
environment. (ActiveRecord::AdapterNotSpecified)
Available databases configurations are:
development
test
```
This makes the error message more readable and presents the user with all
available options for the database connections.
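A rough sketch of how such a grouped listing could be assembled from the configuration objects; the `env_name` and `spec_name` accessors follow the Rails 6.0 `DatabaseConfigurations` API, but treat the exact method names as assumptions:
```ruby
# Sketch: list each environment once, with its database (spec) names,
# instead of repeating the environment per database.
configs = ActiveRecord::Base.configurations.configurations

listing = configs.group_by(&:env_name).map do |env, cfgs|
  names = cfgs.map(&:spec_name)
  names == ["primary"] ? env : "#{env}: #{names.join(', ')}"
end

puts "Available databases configurations are:\n#{listing.join("\n")}"
```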
|
`unboundable?` behaves like `infinite?`.
```ruby
inf = Topic.predicate_builder.build_bind_attribute(:id, Float::INFINITY)
inf.infinite? # => 1
oob = Topic.predicate_builder.build_bind_attribute(:id, 9999999999999999999999999999999)
oob.unboundable? # => 1
inf = Topic.predicate_builder.build_bind_attribute(:id, -Float::INFINITY)
inf.infinite? # => -1
oob = Topic.predicate_builder.build_bind_attribute(:id, -9999999999999999999999999999999)
oob.unboundable? # => -1
```
|
dylanahsmith/better-composed-of-single-field-query
activerecord: Use a simpler query condition for aggregates with one mapping
|
methods by default
Co-Authored-By: dylanahsmith <dylan.smith@shopify.com>
|
`ActiveRecord::ConnectionAdapters::SQLite3Adapter#valid_alter_table_type?`
|
class
|
We need this in order to be able to add this migration for users that
use ActiveStorage while updating their apps from Rails 5.2 to Rails 6.0.
Related to #33405.
`rake app:update` should update active_storage: it should execute
`rake active_storage:update` if Active Storage is used in the app that is
being updated, adding the new active_storage migrations to users' apps
during the Rails update.
Context: https://github.com/rails/rails/pull/33405#discussion_r204239399
Also, see a related discussion in Campfire:
https://3.basecamp.com/3076981/buckets/24956/chats/12416418@1236713081
|
(#28078)
This PR addresses the issue described in #28025. With the `dependent: :nullify` strategy, only the foreign key of the relation is nullified. However, on polymorphic associations the `*_type` column is not nullified, leaving the record with a NULL `*_id` while the `*_type` column is still present.
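For illustration, a hedged sketch of the behaviour being fixed (the models are hypothetical):
```ruby
# Sketch: with dependent: :nullify on a polymorphic association, destroying the
# owner should clear both the *_id and the *_type columns on the dependent row.
class Picture < ActiveRecord::Base
  belongs_to :imageable, polymorphic: true, optional: true
end

class Product < ActiveRecord::Base
  has_many :pictures, as: :imageable, dependent: :nullify
end

product = Product.create!
picture = product.pictures.create!
product.destroy

picture.reload
picture.imageable_id   # => nil
picture.imageable_type # => nil (previously this column kept its old value)
```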
|
`@prevent_writes` should be updated only inside `while_preventing_writes`,
so it is not necessary to expose the attr writer.
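A usage sketch; the receiver of `while_preventing_writes` moved around during the 6.0 development cycle, so treat the exact call target as an assumption:
```ruby
# Sketch: writes are blocked only inside the block; @prevent_writes is toggled
# internally by while_preventing_writes, so no public attr writer is needed.
ActiveRecord::Base.connection_handler.while_preventing_writes do
  Topic.first                  # reads are allowed
  Topic.create!(title: "nope") # raises, because writes are prevented
end
```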
|
This attr writer was introduced in 7db90aa, but its usage was already
removed in bd2f5c0, before v3.2.0.rc1 was released.
If we'd like to customize the visitor in the connection, `arel_visitor`,
which is implemented in all adapters (mysql2, postgresql, sqlite3,
oracle-enhanced, sqlserver), could be used for that purpose (#23515).
|
This class is no longer used since 9cbfc8a370bf6537a02a2f21e7246dc21ba4cf1f.
|
Allow strong params in ActiveRecord::Base#exists?
|
Allow `ActionController::Params` as argument of
`ActiveRecord::Base#exists?`
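A hedged example of what this allows (the params shown are illustrative):
```ruby
# Sketch: permitted ActionController::Parameters can be passed to exists?
# just like a plain conditions hash.
params = ActionController::Parameters.new(topic: { title: "Ruby on Rails" })
Topic.exists?(params.require(:topic).permit(:title))
```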
|
And support endless ranges for `not_between`, as for `between`.
Follow-up to #34906.
|
Support endless ranges in where
|
This commit adds support for endless ranges, e.g. (1..), that were added
in Ruby 2.6. They're functionally equivalent to explicitly specifying
Float::INFINITY as the end of the range.
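For example (the generated SQL is shown approximately):
```ruby
# Sketch: an endless range produces an open-ended comparison.
Topic.where(id: 10..).to_sql
# => SELECT "topics".* FROM "topics" WHERE "topics"."id" >= 10  (approximately)

# Functionally the same as the explicit form:
Topic.where(id: 10..Float::INFINITY)
```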
|
Since #26002, `id_value` is no longer passed to `sql_for_insert`.
|
predicate construction
|
* Enable the `Lint/UselessAssignment` cop to avoid unused variable warnings,
since we've addressed the "assigned but unused variable" warning frequently:
370537de05092aeea552146b42042833212a1acc
3040446cece8e7a6d9e29219e636e13f180a1e03
5ed618e192e9788094bd92c51255dda1c4fd0eae
76ebafe594fc23abc3764acc7a3758ca473799e5
And also, I've found the unused args in c1b14ad, which raise no warnings,
by the cop; it shows the value of the cop.
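For reference, the kind of code the cop flags (an illustrative snippet, not taken from the Rails codebase):
```ruby
# Lint/UselessAssignment: `subtotal` is assigned but never used afterwards.
def total_price(items)
  subtotal = items.sum(&:price) # flagged by the cop
  items.sum(&:price) * 1.1
end
```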
|
This slightly changes the code in Arel to allow +/-INFINITY as open-ended,
since Active Record expects that behavior. See 5ecbeda.
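Roughly the behaviour Active Record expects (SQL shown approximately):
```ruby
# Sketch: an infinite bound is treated as open-ended rather than as a literal value.
Topic.where(id: 1..Float::INFINITY).to_sql
# => ... WHERE "topics"."id" >= 1   (approximately)
Topic.where(id: -Float::INFINITY..5).to_sql
# => ... WHERE "topics"."id" <= 5   (approximately)
```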
|
eileencodes/share-fixture-connections-with-multiple-handlers
For fixtures, share the connection pool when there are multiple handlers
|
In an application that has a primary and a replica database, the data
inserted on the primary connection cannot be read by the replica
connection.
In a test like this:
```
test "creating a home and then reading it" do
  home = Home.create!(owner: "eileencodes")

  ActiveRecord::Base.connected_to(role: :default) do
    assert_equal 3, Home.count
  end

  ActiveRecord::Base.connected_to(role: :readonly) do
    assert_equal 3, Home.count
  end
end
```
The home inserted at the beginning of the test can't be read by the
replica database because, when the test is started, a transaction is
opened by `setup_fixtures`. That transaction remains open for the
remainder of the test, until we are done and run `teardown_fixtures`.
Because the data isn't actually committed to the database, the replica
database cannot see the data insertion.
I considered a couple of ways to fix this. I could have written a
database-cleaner-like class that would allow the data to be committed and
then clean up that data afterwards. But database cleaners can make the
database slow, and the point of the fixtures is to be fast.
At GitHub we solve this by sharing the connection pool for the replicas
with the primary (writing) connection. This is a bit hacky but it works.
Additionally, since we define `replica? || preventing_writes?` as the
code that blocks writes to the database, this will still prevent writing
on the replica / readonly connection. So we get all the behavior of
multiple connections for the same database without slowing down the
database.
In this PR the code loops through the handlers. If a handler doesn't
match the default handler, it retrieves the connection pool from the
default / writing handler and assigns the reading handler's connections
to that pool.
Then, in `enlist_fixture_connections`, it maps all the connections for the
default handler, because all the connections are now available on that
handler, so we don't need to loop through them again.
The test uses a temporary connection pool so we can test this with
sqlite3_mem. This adapter doesn't behave the same as the others, and after
looking over how the query cache test works I think this is the most
correct approach. The issue comes when calling `connects_to`, because that
establishes new connections and confuses the sqlite3_mem adapter. I'm
not entirely sure why, but I wanted to make sure we tested all adapters
for this change, and I checked that it wasn't the shared connection code
that was causing issues - it was the `connects_to` code.
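A heavily simplified sketch of the pool-sharing loop described above; `assign_pool` is a hypothetical helper standing in for the internal assignment, and the handler/pool accessors follow the Rails 6.0 API:
```ruby
# Sketch: every non-writing handler reuses the writing handler's pool for the
# same connection spec, so the fixtures' open transaction is visible to all roles.
def share_fixture_pools(writing_handler = ActiveRecord::Base.connection_handler)
  ActiveRecord::Base.connection_handlers.each_value do |handler|
    next if handler == writing_handler

    handler.connection_pool_list.each do |pool|
      writing_pool = writing_handler.retrieve_connection_pool(pool.spec.name)
      assign_pool(handler, pool.spec.name, writing_pool) # hypothetical helper
    end
  end
end
```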
|
albertoalmagro/make-rails-compatible-accross-ruby-versions
Make average compatible across ruby versions
|
Since Ruby 2.6.0, NilClass#to_d returns `BigDecimal` 0.0; this breaks
`average` compatibility with prior Ruby versions. This patch makes
`average` return `nil` in all Ruby versions when there are no rows.
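A hedged illustration of the intended behaviour (the `Account` model and `credit_limit` column are just examples):
```ruby
# Sketch: averaging over zero rows yields nil on every Ruby version,
# rather than BigDecimal("0.0") on Ruby >= 2.6.
Account.where("1 = 0").average(:credit_limit) # => nil
```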
|
`nil`, `Numeric`, and `String` are the most basic objects passed to
`type_cast`. But currently each `when *types_which_need_no_typecasting`
evaluation allocates two extra arrays, which makes `type_cast` slower.
`types_which_need_no_typecasting` was introduced in #15351, but the
method isn't useful (it is never used by any adapter) since all adapters
(sqlite3, mysql2, postgresql, oracle-enhanced, sqlserver) still
override `_type_cast`.
Just expanding the method makes `type_cast` 2x faster.
```ruby
module ActiveRecord
  module TypeCastFast
    def type_cast_fast(value, column = nil)
      value = id_value_for_database(value) if value.is_a?(Base)

      if column
        value = type_cast_from_column(column, value)
      end

      _type_cast_fast(value)
    rescue TypeError
      to_type = column ? " to #{column.type}" : ""
      raise TypeError, "can't cast #{value.class}#{to_type}"
    end

    private
      def _type_cast_fast(value)
        case value
        when Symbol, ActiveSupport::Multibyte::Chars, Type::Binary::Data
          value.to_s
        when true then unquoted_true
        when false then unquoted_false
        # BigDecimals need to be put in a non-normalized form and quoted.
        when BigDecimal then value.to_s("F")
        when nil, Numeric, String then value
        when Type::Time::Value then quoted_time(value)
        when Date, Time then quoted_date(value)
        else raise TypeError
        end
      end
  end
end

conn = ActiveRecord::Base.connection
conn.extend ActiveRecord::TypeCastFast

require "benchmark/ips" # benchmark-ips is needed for Benchmark.ips

Benchmark.ips do |x|
  x.report("type_cast") { conn.type_cast("foo") }
  x.report("type_cast_fast") { conn.type_cast_fast("foo") }
  x.compare!
end
```
```
Warming up --------------------------------------
type_cast 58.733k i/100ms
type_cast_fast 101.364k i/100ms
Calculating -------------------------------------
type_cast 708.066k (± 5.9%) i/s - 3.583M in 5.080866s
type_cast_fast 1.424M (± 2.3%) i/s - 7.197M in 5.055860s
Comparison:
type_cast_fast: 1424240.0 i/s
type_cast: 708066.0 i/s - 2.01x slower
```
|
Only define attribute methods from schema cache
|
To define the attribute methods for a model, Active Record needs to know
the schema of the underlying table, which is usually achieved by making
a request to the database. This is undesirable behaviour while the app
is booting, for two reasons: it makes the boot process dependent on the
availability of the database, and it means every new process will make
one query for each table, which can cause issues for large applications.
However, if the application is using the schema cache dump feature, then
the schema cache already contains the necessary information, and we can
define the attribute methods without causing any extra database queries.
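A hedged sketch of the idea: with a schema cache dump loaded (e.g. generated via `bin/rails db:schema:cache:dump`), column information is available without a live schema query; the cache accessors below follow the Rails 6.0 API but should be treated as assumptions:
```ruby
# Sketch: read column names for a table from the schema cache rather than
# issuing a schema query, which is what allows attribute methods to be
# defined at boot without hitting the database.
cache = ActiveRecord::Base.connection.schema_cache
if cache.data_source_exists?("books")
  cache.columns("books").map(&:name) # no extra query if the dump is loaded
end
```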
|
Restore the ability to call class-level `update` without giving ids
|
That ability was introduced in #11898 as `Relation#update` without
giving ids, so the ability on the class level is not documented and not
tested.
c83e30d, which fixes #33470, lost two undocumented abilities.
One was fixed in 5c65688, but I missed the ability on the class level.
Removing a feature should not happen suddenly in a stable version,
even if it is undocumented.
I've restored the ability and added a test case to avoid any regression in
the future.
Fixes #34743.
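A brief sketch of the restored behaviour (the attribute is an example):
```ruby
# Sketch: class-level update without ids updates every record, running
# validations and callbacks per record, mirroring Relation#update without ids.
Topic.update(approved: true)
# roughly equivalent to:
Topic.all.each { |topic| topic.update(approved: true) }
```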
|
Since we already bumped the minimum version of MySQL to 5.5.8 in #33853.
|