path: root/activerecord/test/cases
* Merge pull request #35073 from eileencodes/db-selection (Eileen M. Uchitelle, 2019-01-30, 1 file, -0/+77)
|\  Part 8: Multi db improvements, adds basic automatic database switching to Rails
| * Adds basic automatic database switching to Rails (Eileen Uchitelle, 2019-01-30, 1 file, -0/+77)
The following PR adds behavior to Rails to allow an application to automatically switch its connection from the primary to the replica.

A request will be sent to the replica if:

* The request is a read request (`GET` or `HEAD`)
* AND it's been 2 seconds since the last write to the database (because we don't want to send a user to a replica if the write hasn't made it to the replica yet)

A request will be sent to the primary if:

* It's not a GET/HEAD request (i.e. it is a POST, PATCH, etc.)
* OR it has been less than 2 seconds since the last write to the database

The implementation that decides when to switch reads (the 2 seconds) is "safe" to use in production but not recommended without adequate testing with your infrastructure. At GitHub, in addition to a 5 second delay, we have a circuit breaker that checks the replication delay and will send the query to a replica before the 5 seconds have passed. This is specific to our application and therefore not something Rails should be doing for you. You'll need to test and implement more robust handling of when to switch based on your infrastructure. The auto switcher in Rails is meant to be a basic implementation / API that acts as a guide for how to implement autoswitching.

The implementation here is meant to be strict enough that you know how to implement your own resolver and operations classes, but flexible enough that we're not telling you how to do it.

The middleware is not included automatically and can be installed in your application with the classes you want to use for the resolver and operations passed in. If you don't pass any classes into the middleware, the Rails default Resolver and Session classes will be used. The Resolver decides what parameters define when to switch; Operations sets timestamps for the Resolver to read from. For example, you may want to use cookies instead of a session, so you'd implement a Resolver::Cookies class and pass that into the middleware via configuration options:

```
config.active_record.database_selector = { delay: 2.seconds }
config.active_record.database_resolver = MyResolver
config.active_record.database_operations = MyResolver::MyCookies
```

Your classes can inherit from the existing classes and reimplement the methods (or implement more methods) that you need to do the switching. You only need to implement methods that you want to change. For example, if you wanted to set the session token for the last read from a replica, you would reimplement the `read_from_replica` method in your resolver class and implement a method that updates a new timestamp in your operations class.
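
To make the "inherit and reimplement" suggestion above concrete, here is a rough sketch rather than code from the commit; the superclass constant is an assumption about where the default Resolver lives, and the logging body is purely illustrative.

```ruby
# Hypothetical custom resolver: override the replica-read hook mentioned above,
# add custom behavior, then defer to the default implementation.
class MyResolver < ActiveRecord::Middleware::DatabaseSelector::Resolver
  def read_from_replica(&blk)
    Rails.logger.info("serving this request from the replica") # illustrative only
    super
  end
end
```

It would then be wired in through the `database_resolver` configuration option shown above.
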
* | Fix case when we want a UrlConfig but the URL is nil (Eileen Uchitelle, 2019-01-30, 1 file, -0/+8)
Previously if the `url` key in a config hash was nil we'd ignore the configuration as invalid. This can happen when you're relying on a `DATABASE_URL` in the env and that is not set in the environment.

```
production:
  <<: *default
  url: ENV['DATABASE_URL']
```

This PR fixes that case by checking if there is a `url` key in the config instead of checking if the `url` is not nil in the config. In addition to changing the conditional, we then need to build a url hash to merge with the original hash in the `UrlConfig` object.

Fixes #35091
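
A minimal sketch of the conditional change described above; `url_config?` is a hypothetical helper, not the actual Rails internals.

```ruby
# Treat a config hash as URL-style whenever it contains a "url" key,
# even when ENV['DATABASE_URL'] left the value nil.
def url_config?(config)
  config.key?("url")     # after: presence of the key is enough
  # !config["url"].nil?  # before: a nil url made the config look invalid
end
```
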
* Allow changing text and blob size without giving the `limit` option (Ryuta Kamizono, 2019-01-29, 2 files, -9/+27)
In MySQL, the text column size is 65,535 bytes by default (1 GiB in PostgreSQL). It is sometimes too short when people want to use a text column, so they sometimes change the text size to mediumtext (16 MiB) or longtext (4 GiB) by giving the `limit` option.

Unlike MySQL, PostgreSQL doesn't allow the `limit` option for a text column (it raises ERROR: type modifier is not allowed for type "text"), so `limit: 4294967295` (longtext) couldn't be used in Action Text.

I've allowed changing text and blob size without giving the `limit` option, which prevents that migration failure on PostgreSQL.
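
A sketch of the migration-level usage this enables, assuming the new option is exposed as `size:` on text and blob columns; the table and column names are illustrative, and no adapter-specific `limit:` value is needed.

```ruby
class CreateDocuments < ActiveRecord::Migration[6.0]
  def change
    create_table :documents do |t|
      t.text :body, size: :long        # longtext on MySQL, plain text elsewhere
      t.binary :payload, size: :medium # mediumblob on MySQL
    end
  end
end
```
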
* PostgreSQL: Use native timestamp decoders of pg-1.1 (Lars Kanis, 2019-01-26, 1 file, -3/+3)
This improves performance of timestamp conversion and avoids additional string allocations.
* Make `t.timestamps` with precision by default (Ryuta Kamizono, 2019-01-26, 2 files, -0/+122)
|
* Fix `t.timestamps` missing `null: false` in `change_table bulk: true` (Ryuta Kamizono, 2019-01-26, 2 files, -0/+32)
|
* Allow `column_exists?` giving options without type (Ryuta Kamizono, 2019-01-26, 2 files, -12/+12)
|
* Merge pull request #35042 from eileencodes/fix-error-message-for-missing-handler (Eileen M. Uchitelle, 2019-01-25, 2 files, -3/+11)
|\  Fix error raised when handler doesn't exist
| * Fix error raised when handler doesn't exist (Eileen Uchitelle, 2019-01-25, 2 files, -3/+11)
While working on another feature for multiple databases (auto-switching) I observed that in development the first request won't autoload the application record connection for the primary database and may not yet know about the replica connection.

In my test application this caused the application to throw an error if I tried to send the first request to the replica before the replica was connected. This wouldn't be an issue in production because the application is preloaded.

In order to fix this I decided to leave the original error message and delete the new error message. I updated the original error message to include the `role` to make it a bit clearer that the connection isn't established for that particular role. The error now reads:

```
No connection pool with 'primary' found for the 'reading' role.
```

A single database application will continue using the original error message:

```
No connection pool with 'primary' found.
```
* | Make `And` and `Case` into expression nodes (Kevin Deisz, 2019-01-24, 1 file, -0/+9)
Allows aliasing, predications, ordering, and various other functions on `And` and `Case` nodes. This brings them in line with other nodes like `Binary` and `Unary`.
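
A rough sketch of what this enables, assuming the `Arel::Nodes::Case` when/then/else API; the table, column, and alias names are illustrative.

```ruby
posts = Arel::Table.new(:posts)

status = Arel::Nodes::Case.new(posts[:state])
  .when(0).then("draft")
  .else("published")
  .as("status") # aliasing now works on Case nodes

# The aliased expression can then be projected like any other node,
# e.g. posts.project(status).
```
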
* Allow `column_exists?` to be passed `type` argument as a string (Ryuta Kamizono, 2019-01-24, 1 file, -9/+4)
Currently `conn.column_exists?("testings", "created_at", "datetime")` returns false even if the table has the `created_at` column. The reason is that `column.type` is a symbol but the passed `type` is not normalized to a symbol, unlike `column_name`; that is surprising behavior to me. I've improved that to normalize the value before comparison.
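
An illustration of the behavior described above, assuming a `testings` table with a datetime `created_at` column and `conn` being an adapter connection.

```ruby
conn.column_exists?("testings", "created_at", "datetime") # string type: now true
conn.column_exists?(:testings, :created_at, :datetime)    # symbol type: true as before
```
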
* activerecord: Fix statement cache for strictly cast attributes (Dylan Thacker-Smith, 2019-01-23, 1 file, -0/+6)
|
* Merge pull request #35006 from kddeisz/alias-case-nodes (Ryuta Kamizono, 2019-01-22, 1 file, -0/+10)
|\  Alias case nodes
| * Alias case nodes (Kevin Deisz, 2019-01-21, 1 file, -0/+10)
When `Arel` was merged into `ActiveRecord` we lost the ability to alias case nodes. This adds it back.
* | Fix year value when casting a multiparameter time hash (Andrew White, 2019-01-21, 1 file, -0/+22)
When assigning a hash to a time attribute that's missing a year component (e.g. a `time_select` with `:ignore_date` set to `true`) then the year defaults to 1970 instead of the expected 2000. This results in the attribute changing as a result of the save.

Before:

    event = Event.new(start_time: { 4 => 20, 5 => 30 })
    event.start_time # => 1970-01-01 20:30:00 UTC
    event.save
    event.reload
    event.start_time # => 2000-01-01 20:30:00 UTC

After:

    event = Event.new(start_time: { 4 => 20, 5 => 30 })
    event.start_time # => 2000-01-01 20:30:00 UTC
    event.save
    event.reload
    event.start_time # => 2000-01-01 20:30:00 UTC
* Fix type casting column default in `change_column` (Ryuta Kamizono, 2019-01-20, 1 file, -1/+13)
Since #31230, `change_column` is executed as a bulk statement. That caused the column default to be type cast incorrectly, because the cast looked up the type before the change rather than the type after the change. In a bulk statement, we can't use `change_column_default_for_alter` if the statement changes the column type. This fixes the type casting to use the constructed target `sql_type`. Fixes #34938.
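
A sketch of the kind of migration affected, where the default has to be cast against the new column type rather than the old one; the table and column names are illustrative.

```ruby
change_table :posts, bulk: true do |t|
  # The default 0 must be cast as an integer (the target type),
  # not as the column's previous string type.
  t.change :state, :integer, default: 0
end
```
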
* activerecord: Fix where nil condition on composed_of attribute (Dylan Thacker-Smith, 2019-01-18, 1 file, -0/+18)
|
* Ensure `StatementCache#execute` never raises `RangeError` (Ryuta Kamizono, 2019-01-18, 2 files, -7/+13)
Since 31ffbf8d, finder methods no longer raise `RangeError`, so `StatementCache#execute` is the only place that raises the exception for finder queries.

`StatementCache` is used for simple equality queries in the codebase. This means that if `StatementCache#execute` raises `RangeError`, the result can always be regarded as empty.

So `StatementCache#execute` now just returns nil in that range error case, and the caller side treats that as empty; then we can avoid catching the exception in many places.
* All queries should return the correct result even if they include a large number (Ryuta Kamizono, 2019-01-18, 4 files, -2/+31)
Currently several queries cannot return the correct result due to incorrect `RangeError` handling.

First example:

```ruby
assert_equal true, Topic.where(id: [1, 9223372036854775808]).exists?
assert_equal true, Topic.where.not(id: 9223372036854775808).exists?
```

The first example should obviously be true, but currently it returns false.

Second example:

```ruby
assert_equal topics(:first), Topic.where(id: 1..9223372036854775808).find(1)
```

The second example should also return the object, but currently it raises `RecordNotFound`.

As the examples show, assuming an empty result for queries that include a large number is not always correct. Therefore, this change handles `RangeError` by generating executable SQL instead of raising `RangeError` to users, so that queries always return the correct result. With this change, `RangeError` is no longer raised to users.
* Merge pull request #34963 from dylanahsmith/better-composed-of-single-field-query (Rafael França, 2019-01-17, 1 file, -0/+1)
|\  activerecord: Use a simpler query condition for aggregates with one mapping
| * activerecord: Use a simpler query condition for aggregates with one mapping (Dylan Thacker-Smith, 2019-01-17, 1 file, -0/+1)
| |
* | Merge pull request #34966 from bogdanvlviv/ensure-ar-relation-exists-allows-permitted-params (Rafael Mendonça França, 2019-01-17, 1 file, -2/+6)
|\ \  Ensure that AR::Relation#exists? allows only permitted params
| * | Ensure that AR::Relation#exists? allows only permitted params (bogdanvlviv, 2019-01-17, 1 file, -2/+6)
| |/  Clarify changelog entry. Related to #34891.
* | Remove deprecated `#set_state` from the transaction object (Rafael Mendonça França, 2019-01-17, 1 file, -11/+0)
| |
* | Remove deprecated `#supports_statement_cache?` from the database adapters (Rafael Mendonça França, 2019-01-17, 1 file, -4/+0)
| |
* | Remove deprecated `#insert_fixtures` from the database adapters (Rafael Mendonça França, 2019-01-17, 2 files, -22/+0)
| |
* | Remove deprecated `ActiveRecord::ConnectionAdapters::SQLite3Adapter#valid_alter_table_type?` (Rafael Mendonça França, 2019-01-17, 1 file, -4/+0)
* | Do not allow passing the column name to `sum` when a block is passed (Rafael Mendonça França, 2019-01-17, 1 file, -3/+3)
| |
* | Do not allow passing the column name to `count` when a block is passed (Rafael Mendonça França, 2019-01-17, 1 file, -3/+3)
| |
* | Remove delegation of missing methods in a relation to arel (Rafael Mendonça França, 2019-01-17, 1 file, -16/+0)
| |
* | Remove delegation of missing methods in a relation to private methods of the class (Rafael Mendonça França, 2019-01-17, 1 file, -7/+0)
* | Change `SQLite3Adapter` to always represent boolean values as integers (Rafael Mendonça França, 2019-01-17, 1 file, -13/+1)
| |
* | Remove ability to specify a timestamp name for `#cache_key` (Rafael Mendonça França, 2019-01-17, 1 file, -15/+0)
| |
* | Remove deprecated `ActiveRecord::Migrator.migrations_path=` (Rafael Mendonça França, 2019-01-17, 1 file, -10/+0)
| |
* | Remove deprecated `expand_hash_conditions_for_aggregates` (Rafael Mendonça França, 2019-01-17, 1 file, -6/+0)
|/
* Refs #28025 nullify *_type column on polymorphic associations on :nu… (#28078) (Laerti, 2019-01-15, 2 files, -0/+34)
This PR addresses the issue described in #28025. With the `dependent: :nullify` strategy, only the foreign key of the relation is nullified. However, on polymorphic associations the `*_type` column is not nullified, leaving the record with a NULL `*_id` while the `*_type` column is still present.
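
A sketch of the association shape this fix affects; the model names are illustrative.

```ruby
class Comment < ApplicationRecord
  belongs_to :commentable, polymorphic: true # adds commentable_id and commentable_type
end

class Post < ApplicationRecord
  # Destroying a Post now nullifies both commentable_id and commentable_type on its
  # comments, instead of leaving a stale commentable_type behind.
  has_many :comments, as: :commentable, dependent: :nullify
end
```
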
* Merge pull request #34891 from gmcgibbon/ac_params_exists (Rafael França, 2019-01-14, 1 file, -0/+9)
|\  Allow strong params in ActiveRecord::Base#exists?
| * Merge branch 'master' into ac_params_exists (Aaron Patterson, 2019-01-11, 7 files, -15/+39)
| |\
| * | Allow strong params in ActiveRecord::Base#exists? (Gannon McGibbon, 2019-01-07, 1 file, -0/+9)
Allow `ActionController::Parameters` as an argument of `ActiveRecord::Base#exists?`.
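
A sketch of what this enables in a controller; the model and parameter names are illustrative, and only permitted parameters are accepted.

```ruby
class TopicsController < ApplicationController
  def index
    # Permitted params can be passed straight to exists? instead of converting to a hash first.
    @any_matching = Topic.exists?(params.permit(:title, :approved))
  end
end
```
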
* | | More exercise test cases for `not_between` (Ryuta Kamizono, 2019-01-12, 1 file, -29/+82)
And support endless ranges for `not_between`, like `between`. Follow-up to #34906.
* | Merge pull request #34906 from gregnavis/add-endless-ranges-to-activerecord (Aaron Patterson, 2019-01-11, 1 file, -0/+12)
|\ \  Support endless ranges in where
| * | Support endless ranges in where (Greg Navis, 2019-01-11, 1 file, -0/+12)
This commit adds support for endless ranges, e.g. (1..), that were added in Ruby 2.6. They're functionally equivalent to explicitly specifying Float::INFINITY as the end of the range.
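
A sketch of the queries this enables on Ruby 2.6+; the model and column names are illustrative.

```ruby
Topic.where(id: 10..)     # WHERE id >= 10, same as id: 10..Float::INFINITY
Topic.where.not(id: 10..) # endless ranges for not_between, per the follow-up commit above
```
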
* | | Fix `test_case_insensitiveness` to follow up eb5fef5 (Ryuta Kamizono, 2019-01-11, 1 file, -9/+8)
| | |
* | | Merge pull request #34900 from gmcgibbon/fix_test_find_only_some_columns (Eileen M. Uchitelle, 2019-01-09, 1 file, -2/+11)
|\ \ \  Reset column info on original Topic in serialized attr test
| * | Reset column info on original Topic in serialized attr test (Gannon McGibbon, 2019-01-09, 1 file, -2/+11)
Call .reset_column_information on ::Topic in serialized attribute test so that attribute methods are safely undefined for all topics.
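
A sketch of the cleanup described above; the teardown shape is illustrative, and only `reset_column_information` comes from the commit message itself.

```ruby
def teardown
  # Drop Topic's cached column info so stale attribute methods defined during
  # this test are rebuilt for the tests that follow.
  Topic.reset_column_information
end
```
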
* | | Enable `Lint/UselessAssignment` cop to avoid unused variable warnings (#34904) (Ryuta Kamizono, 2019-01-09, 3 files, -4/+4)
We've addressed the "assigned but unused variable" warning frequently:

370537de05092aeea552146b42042833212a1acc
3040446cece8e7a6d9e29219e636e13f180a1e03
5ed618e192e9788094bd92c51255dda1c4fd0eae
76ebafe594fc23abc3764acc7a3758ca473799e5

Also, the cop found the unused args in c1b14ad, which raise no Ruby warnings; that shows the value of the cop.
* / :recycle: Fix mysql type map for enum and set (bannzai, 2019-01-08, 1 file, -0/+4)
|/
* Merge pull request #34773 from eileencodes/share-fixture-connections-with-multiple-handlers (Eileen M. Uchitelle, 2019-01-04, 1 file, -0/+34)
|\  For fixtures share the connection pool when there are multiple handlers
| * Share the connection pool when there are multiple handlers (Eileen Uchitelle, 2019-01-03, 1 file, -0/+34)
In an application that has a primary and replica database the data inserted on the primary connection will not be able to be read by the replica connection. In a test like this:

```
test "creating a home and then reading it" do
  home = Home.create!(owner: "eileencodes")

  ActiveRecord::Base.connected_to(role: :default) do
    assert_equal 3, Home.count
  end

  ActiveRecord::Base.connected_to(role: :readonly) do
    assert_equal 3, Home.count
  end
end
```

The home inserted at the beginning of the test can't be read by the replica database because when the test is started a transaction is opened by `setup_fixtures`. That transaction remains open for the remainder of the test until we are done and run `teardown_fixtures`. Because the data isn't actually committed to the database, the replica database cannot see the data insertion.

I considered a couple of ways to fix this. I could have written a database-cleaner-like class that would allow the data to be committed and then clean up that data afterwards. But database cleaners can make the database slow, and the point of the fixtures is to be fast.

At GitHub we solve this by sharing the connection pool for the replicas with the primary (writing) connection. This is a bit hacky but it works. Additionally, since we define `replica? || preventing_writes?` as the code that blocks writes to the database, this will still prevent writing on the replica / readonly connection. So we get all the behavior of multiple connections for the same database without slowing down the database.

In this PR the code loops through the handlers. If the handler doesn't match the default handler, then it retrieves the connection pool from the default / writing handler and assigns the reading handler's connections to that pool. Then in `enlist_fixture_connections` it maps all the connections for the default handler, because all the connections are now available on that handler, so we don't need to loop through them again.

The test uses a temporary connection pool so we can test this with sqlite3_mem. This adapter doesn't behave the same as the others, and after looking over how the query cache test works I think this is the most correct. The issue comes when calling `connects_to` because that establishes new connections and confuses the sqlite3_mem adapter. I'm not entirely sure why, but I wanted to make sure we tested all adapters for this change, and I checked that it wasn't the shared connection code that was causing issues; it was the `connects_to` code.