path: root/activerecord/lib/active_record/persistence.rb
* Don't assign to `@changed_attributes` in `becomes` (Ryuta Kamizono, 2019-04-03, 1 file, -1/+0)
  `@changed_attributes` is no longer used since #30985.
* Use official database name [ci skip] (Ryuta Kamizono, 2019-04-03, 1 file, -5/+5)
  * s/Postgres/PostgreSQL/
  * s/MYSQL/MySQL/, s/Mysql/MySQL/
  * s/Sqlite/SQLite/

  Replaced all occurrences newly added after 6089b31.
* Fix the markup for `insert_all` and `upsert_all` docs [ci skip] (Ryuta Kamizono, 2019-04-03, 1 file, -7/+7)
* Format 'RETURNING' text in the docs [ci skip] (Sharang Dashputre, 2019-03-25, 1 file, -3/+3)
* [ci skip] Documentation pass for insert_all etc. (Kasper Timm Hansen, 2019-03-20, 1 file, -105/+107)
* Bulk Insert: Reuse indexes for unique_by (Kasper Timm Hansen, 2019-03-20, 1 file, -41/+24)
  I found `:unique_by` with `:columns` and `:where` inside it tough to grasp. The
  documentation only mentioned indexes and partial indexes. So why duplicate a
  model's indexes in an insert_all/upsert_all call when we can just look it up?

  This has the added benefit of raising if no index is found, such that people
  can't insert thousands of records without relying on an index of some form.
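To make the `:unique_by` change concrete, here is a minimal sketch of the resulting call style, assuming a hypothetical `Book` model whose table has a unique index on `isbn`:

```ruby
# Sketch only: Book, its columns, and the index name are assumed for illustration.
Book.upsert_all(
  [
    { title: "Rework",        author_id: 1, isbn: "0307463745" },
    { title: "Eloquent Ruby", author_id: 2, isbn: "0321584104" }
  ],
  unique_by: :isbn # or by index name, e.g. unique_by: :index_books_on_isbn
)
# Rows whose isbn already exists are updated in place; if no matching unique
# index can be found, the call raises instead of inserting blindly.
```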
* Update upsert_all documentation [ci skip] (Sharang Dashputre, 2019-03-09, 1 file, -3/+2)
* Merge pull request #35531 from boblail/issue-35519 (Ryuta Kamizono, 2019-03-09, 1 file, -3/+7)
  Update documentation on upsert_all so that it is correct for Postgres [ci skip]
  * Update documentation on upsert_all so that it is correct for Postgres (Bob Lail, 2019-03-08, 1 file, -3/+7)
    Details in https://github.com/rails/rails/issues/35519

    In short, MySQL and Sqlite3 allow a record to be both inserted _and_ replaced
    in the same operation. Postgres (and the SQL-2003 rules for MERGE) do not.
    Postgres's rationale seems to be that the operation would be nondeterministic.

    I think it's OK for Rails users to have a different experience with this
    feature depending on their database; but I think you should be able to follow
    the examples in the docs on any database.
* Minor documentation fixes related to bulk insert [skip ci] (Vishal Telangre, 2019-03-09, 1 file, -10/+13)
* [ci skip] Fix typo: constaint -> constraint (willnet, 2019-03-07, 1 file, -2/+2)
* [ci skip] Fix typo beacuse -> because (Abhay Nikam, 2019-03-06, 1 file, -1/+1)
* Add insert_all to ActiveRecord models (#35077) (Bob Lail, 2019-03-05, 1 file, -0/+197)
  Adds a method to ActiveRecord allowing records to be inserted in bulk without
  instantiating ActiveRecord models. This method supports options for handling
  uniqueness violations by skipping duplicate records or overwriting them in an
  UPSERT operation.

  ActiveRecord already supports bulk-update and bulk-destroy actions that
  execute SQL UPDATE and DELETE commands directly. It also supports bulk-read
  actions through `pluck`. It makes sense for it also to support bulk-creation.
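As a rough illustration of the API this introduces (the `Article` model and its columns are assumed; option details settled over the follow-up commits above, shown here as the methods read in Rails 6.0):

```ruby
rows = [
  { title: "First post",  slug: "first-post" },
  { title: "Second post", slug: "second-post" }
]

# One INSERT statement, no model instantiation, no validations or callbacks.
Article.insert_all(rows)   # rows that violate a unique index are skipped
Article.insert_all!(rows)  # same, but raises on the first uniqueness violation
Article.upsert_all(rows)   # conflicting rows are overwritten instead (an UPSERT)
```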
* Should not pass extra args to `_update_record` (Ryuta Kamizono, 2019-02-21, 1 file, -2/+2)
  The argument of `_update_record` and `_create_record` is `attribute_names`,
  which is reserved for overriding with the partial-writes attribute names.

  https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/persistence.rb#L719
  https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/persistence.rb#L737
  https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/attribute_methods/dirty.rb#L171
  https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/attribute_methods/dirty.rb#L177

  The reason passing extra args does not fail is that the `Timestamp` module,
  the outermost module wrapping `_update_record`, discards the extra args.

  https://github.com/rails/rails/blob/67e20d1d4854d834e9e43e56486d37cd98983f0d/activerecord/lib/active_record/timestamp.rb#L104

  But that is an odd dependency. `_update_record` should not be passed extra
  args; it should only be passed attribute names.
* Replaced usage of where.delete/destroy_all with delete/destroy_by (Abhay Nikam, 2019-02-20, 1 file, -1/+1)
* Fix typo a -> an, an -> a [ci skip] (Ryuta Kamizono, 2019-02-11, 1 file, -1/+1)
* Tell the user what to use instead of update_attributes/! (Xavier Noria, 2019-01-23, 1 file, -2/+2)
* Restore the ability to call class level `update` without giving ids (Ryuta Kamizono, 2019-01-02, 1 file, -1/+3)
  That ability was introduced in #11898 as `Relation#update` without giving ids,
  so the class level form was neither documented nor tested. c83e30d, which
  fixes #33470, lost two undocumented abilities. One was fixed in 5c65688, but I
  missed the one on the class level.

  Removing a feature should not happen suddenly in a stable version, even if it
  is undocumented. I've restored the ability and added a test case to avoid any
  regression in the future.

  Fixes #34743.
* Fix failing test (Sean Griffin, 2018-10-30, 1 file, -0/+1)
  b63701e moved the assignment before the query, but we need to capture our old
  id before assignment in case we are assigning the id.
* `update_columns` raises if the column is unknown (Sean Griffin, 2018-10-30, 1 file, -4/+4)
  Previously, `update_columns` would just take whatever keys you gave it and try
  to run the update query. Most likely this would result in an error from the
  database. However, if the column actually did exist, but was in
  `ignored_columns`, this would result in the method returning successfully when
  it should have raised, and an attribute that should not exist being written to
  `@attributes`.
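A small sketch of the behavior change, assuming a `User` model that lists `legacy_flag` in `ignored_columns`; in recent releases the unknown-attribute case surfaces as `ActiveModel::MissingAttributeError`:

```ruby
# Sketch only: the User model and its ignored legacy_flag column are assumed.
user = User.first

user.update_columns(name: "new name")   # still writes directly, skipping callbacks
user.update_columns(legacy_flag: true)  # ignored column: now raises instead of
                                        # silently writing to @attributes
user.update_columns(no_such_column: 1)  # unknown column: raises in Ruby rather
                                        # than failing in the database
```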
* Consolidate duplicated code that initializes an empty model object (Ryuta Kamizono, 2018-10-17, 1 file, -1/+1)
  `init_with` and `init_from_db` are almost the same code except for decoding
  `coder`. The name `init_from_db` is also a little misleading: a raw values hash
  from the database is already converted to an attributes object by
  `attributes_builder.build_from_database`, so the `attributes` passed to that
  method is just an attributes object. I renamed the method to
  `init_with_attributes`, since it is shared with `init_with` to initialize an
  empty model object.
* Merge pull request #34117 from aergonaut/docs/ActiveRecord--Persistence-belongs_to_touch_method (Ryuta Kamizono, 2018-10-10, 1 file, -0/+2)
  Add docs to ActiveRecord::Persistence#belongs_to_touch_method [ci skip]
  * Add docs to ActiveRecord::Persistence#belongs_to_touch_method (Chris Fung, 2018-10-07, 1 file, -0/+2)
    [ci skip]
* Don't expose `instantiate_instance_of` for internal use (Ryuta Kamizono, 2018-09-11, 1 file, -11/+7)
* Refactor `attributes_for_{create,update}` to avoid an extra allocation (Ryuta Kamizono, 2018-08-31, 1 file, -2/+0)
  Use `delete_if` instead of `reject` to avoid an extra allocation.
* Remove `attributes_with_values_for_{create,update}` for internal use (Ryuta Kamizono, 2018-08-30, 1 file, -2/+5)
  `attributes_with_values_for_update` is no longer used since ae2d36c, and
  `attributes_with_values_for_create` is used internally in only one place.
* Avoid extra scoping when using `Relation#update` (Ryuta Kamizono, 2018-07-31, 1 file, -3/+1)
  Since 9ac7dd4, class level `update`, `destroy`, and `delete` have been placed
  in the `Persistence` module as class methods. But `Relation#update` without
  passing ids, which was introduced in #11898, is not a class method, and that
  caused the extra scoping regression #33470.

  I moved the relation method back into `Relation` to fix the regression.

  Fixes #33470.
* Fix documentation based on feedback (Aaron Patterson, 2018-06-26, 1 file, -2/+6)
* Speed up homogeneous AR lists / reduce allocations (Aaron Patterson, 2018-06-25, 1 file, -1/+7)
  This commit speeds up allocating homogeneous lists of AR objects. We can know
  if the result set contains an STI column before initializing every AR object,
  so this change pulls the "does this result set contain an STI column?" test
  up, then uses a specialized instantiation function. This way we only have to
  check for an STI column once rather than N times.

  This change also introduces a new initialization function that is meant for
  use when allocating AR objects that come from the database. Doing this allows
  us to eliminate one hash allocation per AR instance.

  Here is a benchmark:

  ```ruby
  require 'active_record'
  require 'benchmark/ips'

  ActiveRecord::Base.establish_connection adapter: "sqlite3", database: ":memory:"

  ActiveRecord::Migration.verbose = false

  ActiveRecord::Schema.define do
    create_table :users, force: true do |t|
      t.string :name
      t.timestamps null: false
    end
  end

  class User < ActiveRecord::Base; end

  2000.times do
    User.create!(name: "Gorby")
  end

  Benchmark.ips do |x|
    x.report("find") do
      User.limit(2000).to_a
    end
  end
  ```

  Results:

  Before:

  ```
  [aaron@TC activerecord (master)]$ be ruby -I lib:~/git/allocation_tracer/lib speed.rb
  Warming up --------------------------------------
                  find     5.000  i/100ms
  Calculating -------------------------------------
                  find     56.192  (± 3.6%) i/s -    285.000  in   5.080940s
  ```

  After:

  ```
  [aaron@TC activerecord (homogeneous-allocation)]$ be ruby -I lib:~/git/allocation_tracer/lib speed.rb
  Warming up --------------------------------------
                  find     7.000  i/100ms
  Calculating -------------------------------------
                  find     72.204  (± 2.8%) i/s -    364.000  in   5.044592s
  ```
* `becomes` should clear the mutation tracker which is created in `after_initialize` (Ryuta Kamizono, 2018-05-11, 1 file, -1/+1)
  `becomes` creates a new object and copies attributes from the receiver. If the
  new object has a mutation tracker which was created in `after_initialize`, it
  should be cleared, since it refers to discarded attributes. But if the receiver
  doesn't have a mutation tracker yet, it will not be cleared properly. It should
  be cleared regardless of whether the receiver has a mutation tracker or not.

  Fixes #32867.
* Allow `primary_key` argument to `empty_insert_statement_value` (Yasuo Honda, 2018-04-20, 1 file, -1/+1)
  This is to let the Oracle database support the identity data type.

  Oracle database does not support `INSERT .. DEFAULT VALUES`, so every insert
  statement needs at least one column name specified. When
  `prefetch_primary_key?` returns `true`, the insert statement always has the
  primary key name, since the primary key value is selected from the associated
  sequence. However, supporting the identity data type will make
  `prefetch_primary_key?` return `false`, so no primary key column name is
  added. As a result, `empty_insert_statement_value` raises `NotImplementedError`.

  To address this error, `empty_insert_statement_value` can take one argument,
  `primary_key`, to generate an insert statement like this:

  `INSERT INTO "POSTS" ("ID") VALUES(DEFAULT)`

  It needs an arity change for the public method, but no actual behavior changes
  for the bundled adapters. The Oracle enhanced adapter's
  `empty_insert_statement_value` implementation will be like this:

  ```
  def empty_insert_statement_value(primary_key)
    raise NotImplementedError unless primary_key
    "(#{quote_column_name(primary_key)}) VALUES(DEFAULT)"
  end
  ```

  [Raise NotImplementedError when using empty_insert_statement_value with Oracle](https://github.com/rails/rails/pull/28029)
  [Add support for INSERT .. DEFAULT VALUES](https://community.oracle.com/ideas/13845)
* Fix that `touch(:updated_at)` causes multiple assignments on the column (Ryuta Kamizono, 2018-03-23, 1 file, -1/+1)
  The multiple assignments were caused by 37a1dfa, which lost the `to_s`
  normalization for given names.

  Fixes #32323.
* | Revert "PERF: Recover `changes_applied` performance (#31698)"Sean Griffin2018-03-061-0/+1
| | | | | | | | This reverts commit a19e91f0fab13cca61acdb1f33e27be2323b9786.
* Fix that after commit callbacks on update are not triggered when optimistic locking is enabled (Ryuta Kamizono, 2018-03-06, 1 file, -10/+14)
  This issue is caused by `@_trigger_update_callback` not being updated, because
  `_update_record` in `Locking::Optimistic` doesn't call `super` when optimistic
  locking is enabled.

  The optimistic locking concern for updates is now supported by the
  `_update_row` low level API, so overriding `_update_record` is no longer
  necessary. Removing the method fixes the issue.

  Closes #29096. Closes #29321. Closes #30823.
* Introduce `_update_row` to decouple optimistic locking concern from `Persistence` module (Ryuta Kamizono, 2018-03-05, 1 file, -28/+14)
* Introduce `_delete_record` and use it for deleting a record (Ryuta Kamizono, 2018-03-05, 1 file, -8/+14)
* Prefer `_update_record` over `update_all` for updating a record (Ryuta Kamizono, 2018-03-05, 1 file, -23/+33)
* Refactor `_substitute_values` to be passed attribute names and values (Ryuta Kamizono, 2018-03-05, 1 file, -7/+7)
* `id_in_database` should be respected as primary key value for persisted records (Ryuta Kamizono, 2018-03-05, 1 file, -1/+1)
  Currently the primary key value cannot be updated if a record has a locking
  column, because `_update_record` in `Locking::Optimistic` doesn't respect
  `id_in_database` as the primary key value, unlike the one in `Persistence`.
  Also, if a record has a dirty primary key value, it may destroy some other
  record matching that dirty value together with the record's own lock version.

  When updating or destroying persisted records, they should be identified by
  `id_in_database`, not by a dirty primary key value.
* `id_in_database` does not return a nil value for a persisted record (Ryuta Kamizono, 2018-03-04, 1 file, -3/+3)
  This removes `|| id`, which was added in #9963 and #23887, since it is no
  longer necessary.
* Ensure we don't write virtual attributes on update, too (Sean Griffin, 2018-02-26, 1 file, -1/+1)
  See 948b931925febac3c965ab13470065ced68f7b53 for context.
* Never attempt to write virtual attributes to the database (Sean Griffin, 2018-02-26, 1 file, -1/+3)
  Currently the place where we limit what gets sent to the database is in the
  implementation for `partial_writes`. We should also be restricting it to
  column names when partial writes are turned off.

  Note that we're using `&` instead of just defaulting to
  `self.class.column_names`, as the instance version of `attribute_names` does
  not include attributes which are uninitialized (were not included in the
  select clause).
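For context, a virtual attribute here is one declared through the Attributes API without a backing column; a minimal sketch of the situation this guards against, with the model and attribute names assumed:

```ruby
class User < ApplicationRecord
  # No users.signup_token column exists; this is a virtual attribute.
  attribute :signup_token, :string
end

user = User.new(name: "Ada", signup_token: "abc123")
user.save!
# The generated INSERT contains only real columns (name, timestamps, ...);
# signup_token is never sent to the database, with or without partial writes.
```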
* Deprecate update_attributes and update_attributes! (Eddie Lebow, 2018-02-17, 1 file, -0/+2)
  Closes #31998.
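The replacements are the existing `update`/`update!` methods; a quick sketch, assuming a persisted `User` record:

```ruby
user = User.find(1)

user.update_attributes(name: "Eddie")   # deprecated by this commit
user.update_attributes!(name: "Eddie")  # deprecated by this commit

user.update(name: "Eddie")              # preferred: returns false if validations fail
user.update!(name: "Eddie")             # preferred: raises ActiveRecord::RecordInvalid instead
```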
* Ignores a default subclass when `becomes(Parent)` (Leonel Galan, 2018-01-22, 1 file, -1/+2)
  Fixes the issue described in #30399: a default value on the inheritance column
  prevented `child.becomes(Parent)` from returning an instance of `Parent` as
  expected; instead it returned an instance of the default subclass. The change
  was introduced by #17169 and was meant to affect initialization alone, where
  `Parent.new` is expected to return an instance of the default subclass.
* PERF: Recover `changes_applied` performance (#31698) (Ryuta Kamizono, 2018-01-22, 1 file, -1/+0)
  #30985 caused an `object.save` performance regression, since calling `changes`
  in `changes_applied` is very slow. We don't need to call the expensive method
  in `changes_applied` as long as `@attributes` is tracked by the mutation
  tracker.

  https://gist.github.com/kamipo/1a9f4f3891803b914fc72ede98268aa2

  Before:

  ```
  Warming up --------------------------------------
  create_string_columns    73.000  i/100ms
  Calculating -------------------------------------
  create_string_columns   722.256  (± 5.8%) i/s -   3.650k in   5.073031s
  ```

  After:

  ```
  Warming up --------------------------------------
  create_string_columns    96.000  i/100ms
  Calculating -------------------------------------
  create_string_columns   950.224  (± 7.7%) i/s -   4.800k in   5.084837s
  ```
* Don't allow destroyed object mutation after `save` or `save!` is called (Ryuta Kamizono, 2018-01-15, 1 file, -0/+1)
  Currently `object.save` will unfreeze the object, because `changes_applied`
  replaces the frozen `@attributes` with a new `@attributes`. Since destroyed
  objects are not allowed to be mutated, `save` and `save!` should not return
  success in that case.

  Fixes #28563.
* Save attributes changed by callbacks after update_attribute (Mike Busch, 2017-12-22, 1 file, -5/+1)
  update_attribute previously stopped execution, before saving and before
  running callbacks, if the record's attributes hadn't changed.
  [The documentation](http://api.rubyonrails.org/classes/ActiveRecord/Persistence.html#method-i-update_attribute)
  says that "Callbacks are invoked", which was not happening if the persisted
  attributes hadn't changed.
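A minimal sketch of the fixed behavior, with the `Post` model and its `edited_at` column assumed for illustration:

```ruby
class Post < ApplicationRecord
  # Hypothetical callback that touches another attribute on every save.
  before_save { self.edited_at = Time.current }
end

post = Post.first
post.update_attribute(:title, post.title)
# Even though :title did not change, the save and its callbacks now run,
# so the edited_at value written by the callback is persisted.
```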
* Class level `update` and `destroy` check that all the records exist before making changes (#31306) (Ryuta Kamizono, 2017-12-01, 1 file, -9/+4)
  It makes more sense than ignoring invalid IDs.
* Maintain raising `RecordNotFound` for class level `update` and `destroy` (Ryuta Kamizono, 2017-12-01, 1 file, -4/+9)
  In 35836019, class level `update` and `destroy` suppressed `RecordNotFound` to
  ensure returning the affected objects. But `RecordNotFound` is a common
  exception caught by a `rescue_from` handler, so changing the behavior when a
  typical `params[:id]` is passed has a compatibility problem. The previous
  behavior should not be changed.

  Fixes #31301.
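In other words, the behavior a typical `rescue_from ActiveRecord::RecordNotFound` handler relies on is preserved; a small sketch with an assumed `Person` model:

```ruby
# Raises ActiveRecord::RecordNotFound if no Person with id 42 exists,
# instead of silently skipping the missing record.
Person.update(42, name: "Ada")
Person.destroy(42)
```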
* Avoid creating extra `relation` and `build_arel` in `_create_record` and `_update_record` (#29999) (Ryuta Kamizono, 2017-11-17, 1 file, -2/+41)
  Currently `_create_record` and `_update_record` in `Persistence` create an
  extra `unscoped` relation and call `build_arel` on it. But `compile_insert`
  and `compile_update` can be done without those expensive `SelectManager`
  creation operations. So I moved the implementation into `Persistence` to avoid
  creating the extra relation, and refactored it to avoid calling `build_arel`.

  https://gist.github.com/kamipo/8ed73d760112cfa5f6263c9413633419

  Before:

  ```
  Warming up --------------------------------------
        _update_record   150.000  i/100ms
  Calculating -------------------------------------
        _update_record     1.548k (±12.3%) i/s -   7.650k in   5.042603s
  ```

  After:

  ```
  Warming up --------------------------------------
        _update_record   201.000  i/100ms
  Calculating -------------------------------------
        _update_record     2.002k (±12.8%) i/s -   9.849k in   5.027681s
  ```

  30% faster for STI classes.