path: root/activerecord/test/cases/adapters
Commits (most recent first); each entry shows the commit message, author, date, and files/lines changed.
* Fix index_exists? for PostgreSQL expression indexes (Stan Hu, 2019-07-24; 1 file, -2/+4)
    Previously Rails expected indexes to be an array of columns, but for PostgreSQL an expression index can just be a string of text. Handle this by forcing `Index#columns` to be an Array inside `index_exists?`. Closes #36739
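    As a quick illustration of the fixed behaviour (connection, table, and column names are made up; assumes a PostgreSQL connection):

    ```ruby
    conn = ActiveRecord::Base.connection

    # An expression index: Index#columns comes back as the raw expression string,
    # not as an Array of column names.
    conn.add_index :users, "lower(email)", name: "index_users_on_lower_email"

    # Before the fix this could blow up on the String columns value;
    # now it simply answers true or false.
    conn.index_exists?(:users, "lower(email)", name: "index_users_on_lower_email")
    ```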
* Merge pull request #36665 from jmschneider/master (Rafael Mendonça França, 2019-07-23; 1 file, -0/+4)
|\
    Make currency symbols optional for money column type in PostgreSQL
| * Make currency symbols optional for money column type in PostgreSQL (Joel Schneider, 2019-07-12; 1 file, -0/+4)
| |
* | [ActiveRecord] Superclass for aborted queries (Kir Shatrov, 2019-07-16; 2 files, -2/+5)
| |
* | Raise specific exception on Mysql2::Error::TimeoutError (Kir Shatrov, 2019-07-16; 1 file, -0/+14)
|/
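    Assuming these two commits correspond to `ActiveRecord::QueryAborted` and `ActiveRecord::AdapterTimeout` from the Rails 6.0 error hierarchy (the class names are not spelled out in this log), application code can rescue aborted queries generically; a sketch:

    ```ruby
    begin
      Post.where("SLEEP(10) = 0").to_a # illustrative query that exceeds a server-side time limit
    rescue ActiveRecord::QueryAborted => e
      # Catches ActiveRecord::AdapterTimeout (raised for Mysql2::Error::TimeoutError)
      # as well as other aborted-query subclasses such as ActiveRecord::QueryCanceled.
      Rails.logger.warn("query aborted: #{e.class}")
    end
    ```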
* Merge branch 'master' into add_database_exist_method (Guillermo Iguaran, 2019-07-08; 5 files, -111/+46)
|\
| * Should `Regexp.escape` quoted table name in regex (Ryuta Kamizono, 2019-07-08; 3 files, -111/+0)
    This is for an agnostic test case, since a quoted table name may include `.` for all adapters, and `[` / `]` for the sqlserver adapter.
| * MySQL: Fix schema dumping `enum` and `set` columns correctly (Ryuta Kamizono, 2019-07-05; 2 files, -0/+46)
    `enum` and `set` are type cast as `:string`, but currently the `:string` type is incorrectly reused for schema dumping. A cast type on a column is not always the same as its `sql_type`; this fixes schema dumping of `enum` and `set` columns to correctly use `sql_type` instead of `type`.
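    For illustration, a MySQL table like the following (table and column names made up) should now dump with the full `sql_type` rather than a bare `t.string`:

    ```ruby
    create_table :shirts, force: true do |t|
      t.column :size, "enum('small','medium','large')"
      t.column :tags, "set('red','green','blue')"
    end

    # Expected shape in db/schema.rb after this fix:
    #   t.column "size", "enum('small','medium','large')"
    #   t.column "tags", "set('red','green','blue')"
    ```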
* | Add database_exists? method to connection adapters (Guilherme Mansur, 2019-06-17; 3 files, -0/+40)
|/
    When SQLite connects it will silently create a database if the database does not exist. This behaviour causes different issues because of inconsistent behaviour between adapters: #36383, #32914. This commit adds a `database_exists?` method as a way to check the database without creating it. This is a stepping stone to fully resolving the above issues.
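    Usage sketch; the exact receiver and signature are an assumption here (a class-level check on the adapter that takes a config hash), and the file path is made up:

    ```ruby
    config = { adapter: "sqlite3", database: "db/not_created_yet.sqlite3" }

    # Answers true/false without implicitly creating the database file,
    # unlike establishing a SQLite connection.
    ActiveRecord::ConnectionAdapters::SQLite3Adapter.database_exists?(config)
    ```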
* Ensure to reset actually used `@connection.schema_migration`'s table name (Ryuta Kamizono, 2019-06-15; 1 file, -4/+4)
    https://buildkite.com/rails/rails/builds/61744#f12cc6cf-7458-4131-917a-9735615f6259/999-1010
* Fix `test_schema_names` to include "hint_plan" schema (Ryuta Kamizono, 2019-06-15; 1 file, -1/+5)
|
* Move while_preventing_writes from conn to handler (eileencodes, 2019-06-14; 3 files, -21/+25)
    If we put `while_preventing_writes` on the connection, then the middleware that sends reads to the primary and ensures they can't write will not work. `while_preventing_writes` will only be applied to the connection it's called on, which in the case of the middleware is AR::Base. This worked fine if you called it directly, like `OtherDbConn.connection.while_preventing_writes`, but Rails didn't have a way of knowing you wanted to call it on all the connections. The change here moves the `while_preventing_writes` method from the connection to the handler so that it can block writes for all queries on that handler. This applies to all the connections associated with that handler.
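    A sketch of the difference in scope (model and connection names made up; the blocked write is presumably rejected with `ActiveRecord::ReadOnlyError`):

    ```ruby
    # Connection-level guard: only AR::Base's own connection refuses writes.
    ActiveRecord::Base.connection.while_preventing_writes do
      OtherDbConn.connection.execute("UPDATE posts SET title = 'x'") # was still allowed
    end

    # Handler-level guard: every connection owned by the handler refuses writes.
    ActiveRecord::Base.connection_handler.while_preventing_writes do
      Post.create!(title: "x") # raises, on whichever connection it runs
    end
    ```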
* Move SchemaMigration to migration_context (eileencodes, 2019-06-14; 3 files, -8/+8)
    This PR moves the `schema_migration` to `migration_context` so that we can access the `schema_migration` per connection. This does not change behavior of the SchemaMigration if you are using one database. This also does not change behavior of any public APIs. `Migrator` is private, as is `MigrationContext`, so we can change these as needed.

    We now need to pass a `schema_migration` to `Migrator` so that we can run migrations on the right connection outside the context of a rake task.

    The bugs this fixes were discovered while debugging the issues around the SchemaCache on initialization with multiple databases. It was clear that `get_all_versions` wouldn't work without these changes outside the context of a rake task (because in the rake task we establish a connection and change AR::Base.connection to the db we're running on). Because the `SchemaCache` relies on the `SchemaMigration` information, we need to make sure we store it per-connection rather than on ActiveRecord::Base.

    [Eileen M. Uchitelle & Aaron Patterson]
* Fix rubocop violations (Ryuta Kamizono, 2019-06-14; 1 file, -2/+2)
|
* Ensure to reset migration version after testing migration (Ryuta Kamizono, 2019-06-13; 1 file, -2/+2)
    The "schema_migrations" table may have been hard dropped earlier, so resetting the migration version should be done in an ensure block.
    https://buildkite.com/rails/rails/builds/61697#18d6f3ac-2257-4f4b-8efc-4010464c4d9a/999-1011
* Reset migration version before testing migration (Ryuta Kamizono, 2019-06-13; 1 file, -0/+2)
    https://buildkite.com/rails/rails/builds/61695#373bb1a7-677f-49ec-95e7-a92467fefd60/1076-1084
* Avoid implicit rollback when testing migration (Ryuta Kamizono, 2019-06-13; 1 file, -0/+2)
    The "schema_migrations" table is hard dropped by some existing tests, so running migration tests inside transactional tests may cause an implicit creation and rollback of the "schema_migrations" table, which makes the migration tests flaky.
    https://buildkite.com/rails/rails/builds/61692#42383249-30be-4508-b1fb-a7bb27600c8e/999-1010
    https://buildkite.com/rails/rails/builds/61694#6e462ad3-41d8-4e26-95ce-728495b0ac64/999-1010
* Enable `Layout/EmptyLinesAroundAccessModifier` cop (Ryuta Kamizono, 2019-06-13; 10 files, -10/+0)
    We sometimes say "✂️ newline after `private`" in a code review (e.g. https://github.com/rails/rails/pull/18546#discussion_r23188776, https://github.com/rails/rails/pull/34832#discussion_r244847195).
    The `Layout/EmptyLinesAroundAccessModifier` cop now has a new enforced style, `EnforcedStyle: only_before` (https://github.com/rubocop-hq/rubocop/pull/7059). That cop and enforced style will reduce our code review cost.
* Fix sqlite3 collation parsing when using decimal columns. (Martin Schuster, 2019-06-04; 1 file, -0/+9)
    If an sqlite3 table contains a decimal column behind columns with a collation definition, then parsing the collation of all preceding columns will fail -- the collation will be missed without notice.
* Use `capture_sql` instead of `assert_sql` with no pattern (Ryuta Kamizono, 2019-05-22; 1 file, -5/+5)
    Also, there is no longer any need to manually exclude SCHEMA SQLs, since 0810c07.
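    For context, `capture_sql` and `assert_sql` are Active Record test-suite helpers; the swap looks roughly like this (the test body is illustrative):

    ```ruby
    # Before: assert_sql with no pattern only collected the statements.
    assert_sql { Post.first }

    # After: capture_sql states the intent and returns the captured statements.
    queries = capture_sql { Post.first }
    assert_equal 1, queries.size
    ```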
* Remove ignored_sql from SQLCounter by adding "TRANSACTION" to log name (Yasuo Honda, 2019-05-08; 1 file, -2/+2)
    This commit adds "TRANSACTION" to savepoint, commit, and rollback statements, because none of the savepoint statements were removed by #36153 since they are not "SCHEMA" statements. Although only savepoint statements need to be labeled as "TRANSACTION", I think all transaction-related methods should add this label.
    Follow up to #36153
* Remove redundant `test_too_many_binds` (Yasuo Honda, 2019-05-03; 1 file, -20/+0)
    It is redundant with `ActiveRecord::BindParameterTest#test_too_many_binds`: the sqlite adapter has its own `bind_params_length`, and `ActiveRecord::BindParameterTest#test_too_many_binds` respects it.

    * Modified `ActiveRecord::BindParameterTest#test_too_many_binds` to show the `bind_params_length` value

    ```
    $ git diff
    diff --git a/activerecord/test/cases/bind_parameter_test.rb b/activerecord/test/cases/bind_parameter_test.rb
    index 85685d1d00..83cd07f1d7 100644
    --- a/activerecord/test/cases/bind_parameter_test.rb
    +++ b/activerecord/test/cases/bind_parameter_test.rb
    @@ -108,6 +108,7 @@ def test_statement_cache_with_sql_string_literal
       def test_too_many_binds
         bind_params_length = @connection.send(:bind_params_length)
    +    p bind_params_length
         topics = Topic.where(id: (1 .. bind_params_length).to_a << 2**63)
         assert_equal Topic.count, topics.count
    $
    ```

    * Executed the modified `ActiveRecord::BindParameterTest#test_too_many_binds`

    ```
    $ bin/test test/cases/bind_parameter_test.rb -n test_too_many_binds
    Using sqlite3
    Run options: -n test_too_many_binds --seed 47321

    999
    .

    Finished in 0.075249s, 13.2892 runs/s, 26.5784 assertions/s.
    1 runs, 2 assertions, 0 failures, 0 errors, 0 skips
    $
    ```
* Remove database specific sql statements from SQLCounter (Yasuo Honda, 2019-05-01; 1 file, -3/+3)
    Every database executes a different type of sql statement to get metadata, and `ActiveRecord::TestCase` ignores these database specific sql statements to make `assert_queries` or `assert_no_queries` work consistently. The connection adapters already label these statements by setting the "SCHEMA" argument; this pull request makes use of the "SCHEMA" argument to ignore metadata queries.

    Here are the details of these changes:

    * PostgresqlConnectionTest
      Each modified PostgresqlConnectionTest just executes the corresponding methods
      https://github.com/rails/rails/blob/fef174f5c524edacbcad846d68400e7fe114a15a/activerecord/lib/active_record/connection_adapters/postgresql/schema_statements.rb#L182-L195

    ```ruby
    # Returns the current database encoding format.
    def encoding
      query_value("SELECT pg_encoding_to_char(encoding) FROM pg_database WHERE datname = current_database()", "SCHEMA")
    end

    # Returns the current database collation.
    def collation
      query_value("SELECT datcollate FROM pg_database WHERE datname = current_database()", "SCHEMA")
    end

    # Returns the current database ctype.
    def ctype
      query_value("SELECT datctype FROM pg_database WHERE datname = current_database()", "SCHEMA")
    end
    ```

    * BulkAlterTableMigrationsTest
      The mysql2 adapter executes `SHOW KEYS FROM ...` to see if there is an index already created, as below. I think the main concern of these tests is how each database adapter creates or drops indexes, so ignoring the `SHOW KEYS FROM` statement makes sense.
      https://github.com/rails/rails/blob/fef174f5c524edacbcad846d68400e7fe114a15a/activerecord/lib/active_record/connection_adapters/mysql/schema_statements.rb#L11

    ```ruby
    execute_and_free("SHOW KEYS FROM #{quote_table_name(table_name)}", "SCHEMA") do |result|
    ```

    * Temporary change not included in this commit to show which statements executed

    ```diff
    $ git diff
    diff --git a/activerecord/test/cases/migration_test.rb b/activerecord/test/cases/migration_test.rb
    index 8e8ed494d9..df05f9bd16 100644
    --- a/activerecord/test/cases/migration_test.rb
    +++ b/activerecord/test/cases/migration_test.rb
    @@ -854,7 +854,7 @@ def test_adding_indexes
           classname = ActiveRecord::Base.connection.class.name[/[^:]*$/]
           expected_query_count = {
    -        "Mysql2Adapter" => 3, # Adding an index fires a query every time to check if an index already exists or not
    +        "Mysql2Adapter" => 1, # Adding an index fires a query every time to check if an index already exists or not
             "PostgreSQLAdapter" => 2,
           }.fetch(classname) {
             raise "need an expected query count for #{classname}"
    @@ -886,7 +886,7 @@ def test_removing_index
           classname = ActiveRecord::Base.connection.class.name[/[^:]*$/]
           expected_query_count = {
    -        "Mysql2Adapter" => 3, # Adding an index fires a query every time to check if an index already exists or not
    +        "Mysql2Adapter" => 1, # Adding an index fires a query every time to check if an index already exists or not
             "PostgreSQLAdapter" => 2,
           }.fetch(classname) {
             raise "need an expected query count for #{classname}"
    $
    ```

    * Executed these modified tests

    ```
    $ ARCONN=mysql2 bin/test test/cases/migration_test.rb -n /index/
    Using mysql2
    Run options: -n /index/ --seed 8462

    F
    Failure:
    BulkAlterTableMigrationsTest#test_adding_indexes [/home/yahonda/git/rails/activerecord/test/cases/migration_test.rb:863]:
    3 instead of 1 queries were executed.
    Queries:
    SHOW KEYS FROM `delete_me`
    SHOW KEYS FROM `delete_me`
    ALTER TABLE `delete_me` ADD UNIQUE INDEX `awesome_username_index` (`username`), ADD INDEX `index_delete_me_on_name_and_age` (`name`, `age`).
    Expected: 1
      Actual: 3

    bin/test test/cases/migration_test.rb:848

    F
    Failure:
    BulkAlterTableMigrationsTest#test_removing_index [/home/yahonda/git/rails/activerecord/test/cases/migration_test.rb:895]:
    3 instead of 1 queries were executed.
    Queries:
    SHOW KEYS FROM `delete_me`
    SHOW KEYS FROM `delete_me`
    ALTER TABLE `delete_me` DROP INDEX `index_delete_me_on_name`, ADD UNIQUE INDEX `new_name_index` (`name`).
    Expected: 1
      Actual: 3

    bin/test test/cases/migration_test.rb:879

    ..

    Finished in 0.379245s, 10.5473 runs/s, 7.9105 assertions/s.
    4 runs, 3 assertions, 2 failures, 0 errors, 0 skips
    $
    ```

    * ActiveRecord::ConnectionAdapters::Savepoints
      Left `self.ignored_sql` in place to ignore savepoint related statements, because these SQL statements are not related to "SCHEMA"

    ```
    self.ignored_sql = [/^SAVEPOINT/, /^ROLLBACK TO SAVEPOINT/, /^RELEASE SAVEPOINT/]
    ```

      https://github.com/rails/rails/blob/fef174f5c524edacbcad846d68400e7fe114a15a/activerecord/lib/active_record/connection_adapters/abstract/savepoints.rb#L10-L20

    ```ruby
    def create_savepoint(name = current_savepoint_name)
      execute("SAVEPOINT #{name}")
    end

    def exec_rollback_to_savepoint(name = current_savepoint_name)
      execute("ROLLBACK TO SAVEPOINT #{name}")
    end

    def release_savepoint(name = current_savepoint_name)
      execute("RELEASE SAVEPOINT #{name}")
    end
    ```
* use PostgreSQL's bulk_alter_table implementation (Yoshiyuki Kinjo, 2019-04-13; 1 file, -7/+17)
    Running this migration on mysql at current master fails because `add_references_for_alter` is missing.

    ```
    change_table :users, bulk: true do |t|
      t.references :article
    end
    ```

    This is also true for the postgresql adapter, but its `bulk_alter_table` implementation can fall back in such a case. postgresql's implementation is desirable to prevent unknown failures like this.
* Merge pull request #34800 from mqchau/mysqlCountDeleteRowInLock (Matthew Draper, 2019-04-09; 1 file, -0/+28)
|\
    Wrap Mysql count of deleted rows in lock block to avoid conflict in test
| * Wrap Mysql count of deleted rows in lock block to avoid conflict in test (Quan Chau, 2019-04-08; 1 file, -0/+28)
| |
* | Raise `ArgumentError` for invalid `:limit` and `:precision` like as other options (Ryuta Kamizono, 2019-04-07; 2 files, -2/+2)
|/
    When I added the new `:size` option in #35071, I found that invalid `:limit` and `:precision` raise `ActiveRecordError`, unlike other invalid options. I think that makes it hard to distinguish argument errors from statement invalid errors, since `StatementInvalid` is a subclass of `ActiveRecordError`.
    https://github.com/rails/rails/blob/c9e4c848eeeb8999b778fa1ae52185ca5537fffe/activerecord/lib/active_record/errors.rb#L103

    ```ruby
    begin
      # execute any migration
    rescue ActiveRecord::StatementInvalid
      # statement invalid
    rescue ActiveRecord::ActiveRecordError, ArgumentError
      # `ActiveRecordError` except `StatementInvalid` is maybe an argument error
    end
    ```

    I'd say this is an inconsistency worth fixing.

    Before:

    ```ruby
    add_column :items, :attr1, :binary,   size: 10      # => ArgumentError
    add_column :items, :attr2, :decimal,  scale: 10     # => ArgumentError
    add_column :items, :attr3, :integer,  limit: 10     # => ActiveRecordError
    add_column :items, :attr4, :datetime, precision: 10 # => ActiveRecordError
    ```

    After:

    ```ruby
    add_column :items, :attr1, :binary,   size: 10      # => ArgumentError
    add_column :items, :attr2, :decimal,  scale: 10     # => ArgumentError
    add_column :items, :attr3, :integer,  limit: 10     # => ArgumentError
    add_column :items, :attr4, :datetime, precision: 10 # => ArgumentError
    ```
* Optimizer hints should be applied on Top level query as much as possible (Ryuta Kamizono, 2019-04-04; 2 files, -0/+16)
    I've experienced this issue in our app: some hints only work on the top level query (e.g. `MAX_EXECUTION_TIME`).
* Ensure `reset_table_name` when table name prefix/suffix is changed (Ryuta Kamizono, 2019-04-04; 1 file, -5/+8)
    Also, `reset_column_information` is unnecessary since `reset_table_name` does that too.
* Merge pull request #35795 from alimi/cache-database-version (Eileen M. Uchitelle, 2019-04-03; 4 files, -6/+3)
|\
    Cache database version in schema cache
| * Cache database version in schema cache (Ali Ibrahim, 2019-04-03; 4 files, -6/+3)
    * The database version will get cached in the schema cache file during the schema cache dump. When the database version check happens, the version will be pulled from the schema cache and thus avoid querying the database for the version.
    * If the schema cache file doesn't exist, we'll query the database for the version and cache it on the schema cache object.
    * To facilitate this change, all connection adapters now implement #get_database_version and #database_version. #database_version returns the value from the schema cache.
    * To take advantage of the cached database version, the database version check will now happen after the schema cache is set on the connection in the connection pool.
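    The resulting adapter API, roughly (behaviour per the description above):

    ```ruby
    conn = ActiveRecord::Base.connection

    conn.database_version     # served from the schema cache when a cache dump has been loaded
    conn.get_database_version # always asks the database; used to populate the cache
    ```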
* | Use official database name [ci skip] (Ryuta Kamizono, 2019-04-03; 1 file, -1/+1)
|/
    * s/Postgres/PostgreSQL/
    * s/MYSQL/MySQL/, s/Mysql/MySQL/
    * s/Sqlite/SQLite/

    Replaced all newly added ones after 6089b31.
* Merge pull request #19333 from palkan/dirty-store (Kasper Timm Hansen, 2019-03-31; 1 file, -0/+16)
|\
    Add dirty methods for store accessors
| * Add dirty methods for store accessors (palkan, 2019-03-25; 1 file, -0/+16)
| |
* | Don't change `collation_connection` in the current connection (Ryuta Kamizono, 2019-03-27; 1 file, -1/+1)
    We have a test case for the `collation_connection` session variable, so it should not be changed in any other test. Fixes #35458.
* | Add Relation#annotate for SQL commenting (Matt Yoho, 2019-03-21; 3 files, -0/+111)
|/
    This patch has two main portions:

    1. Add SQL comment support to Arel via Arel::Nodes::Comment.
    2. Implement a Relation#annotate method on top of that.

    == Adding SQL comment support

    Adds a new Arel::Nodes::Comment node that represents an optional SQL comment and teaches the relevant visitors how to handle it.

    Comment nodes may be added to the basic CRUD statement nodes and set through any of the four (Select|Insert|Update|Delete)Manager objects. For example:

        manager = Arel::UpdateManager.new
        manager.table table
        manager.comment("annotation")
        manager.to_sql # UPDATE "users" /* annotation */

    This new node type will be used by ActiveRecord::Relation to enable query annotation via SQL comments.

    == Implementing the Relation#annotate method

    Implements `ActiveRecord::Relation#annotate`, which accepts a comment string that will be appended to any queries generated by the relation. Some examples:

        relation = Post.where(id: 123).annotate("metadata string")
        relation.first
        # SELECT "posts".* FROM "posts" WHERE "posts"."id" = 123
        # LIMIT 1 /* metadata string */

        class Tag < ActiveRecord::Base
          scope :foo_annotated, -> { annotate("foo") }
        end
        Tag.foo_annotated.annotate("bar").first
        # SELECT "tags".* FROM "tags" LIMIT 1 /* foo */ /* bar */

    Also wires up the plumbing so this works with `#update_all` and `#delete_all` as well.

    This feature is useful for instrumentation and general analysis of queries generated at runtime.
* Add test case to prevent possible SQL injection (Ryuta Kamizono, 2019-03-18; 2 files, -0/+20)
|
* Add test case for unscoping `:optimizer_hints` (Ryuta Kamizono, 2019-03-18; 2 files, -0/+12)
|
* Extract `truncate` and `truncate_tables` into database statements (Ryuta Kamizono, 2019-03-17; 3 files, -37/+0)
    This is to make it easier to turn `truncate_tables` into bulk statements.
* Support Optimizer Hints (Ryuta Kamizono, 2019-03-16; 2 files, -0/+52)
    We at Arm Treasure Data are using Optimizer Hints with a monkey patch (https://gist.github.com/kamipo/4c8539f0ce4acf85075cf5a6b0d9712e), especially in order to use `MAX_EXECUTION_TIME` (refer to #31129).

    Example:

    ```ruby
    class Job < ApplicationRecord
      default_scope { optimizer_hints("MAX_EXECUTION_TIME(50000) NO_INDEX_MERGE(jobs)") }
    end
    ```

    Optimizer Hints are supported not only by MySQL but also by most databases (PostgreSQL on RDS, Oracle, SQL Server, etc.), so this is really helpful for tuning heavy queries in large scale applications.
* Allow `truncate` for SQLite3 adapter and add `rails db:seed:replant` (#34779) (Bogdan, 2019-03-04; 1 file, -0/+18)
    * Add `ActiveRecord::Base.connection.truncate` for SQLite3 adapter. SQLite doesn't support `TRUNCATE TABLE`, but SQLite3 adapter can support `ActiveRecord::Base.connection.truncate` by using `DELETE FROM`. `DELETE` without `WHERE` uses "The Truncate Optimization", see https://www.sqlite.org/lang_delete.html.
    * Add `rails db:seed:replant` that truncates database tables and loads the seeds

    Closes #34765
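    Usage sketch (table name made up):

    ```ruby
    # On the SQLite3 adapter this issues `DELETE FROM "posts"` (the truncate
    # optimization), mirroring TRUNCATE TABLE on other adapters.
    ActiveRecord::Base.connection.truncate("posts")

    # From the command line:
    #   bin/rails db:seed:replant   # truncate the application's tables, then load db/seeds.rb
    ```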
* Update READ_QUERY regex (Ali Ibrahim, 2019-02-25; 3 files, -0/+28)
    * The READ_QUERY regex would consider reads to be writes if they started with spaces or parens. For example, a UNION query might have parens around each SELECT: (SELECT ...) UNION (SELECT ...).
    * It will now correctly treat these queries as reads.
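    An illustrative consequence, assuming the handler-level write guard introduced elsewhere in this log (the exact guard API is an assumption for this example):

    ```ruby
    ActiveRecord::Base.connection_handler.while_preventing_writes do
      # Each branch of the UNION is wrapped in parens, so the statement starts with "(".
      # Previously this was mis-classified as a write and rejected; now it counts as a read.
      ActiveRecord::Base.connection.select_all("(SELECT 1) UNION (SELECT 2)")
    end
    ```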
* Remove duplicated protected params definitions (Ryuta Kamizono, 2019-02-24; 1 file, -7/+2)
    Use "support/stubs/strong_parameters" instead.
* Merge pull request #35263 from hatch-carl/reduce-postgres-uuid-allocations (Ryuta Kamizono, 2019-02-21; 1 file, -1/+13)
|\
    Reduce unused allocations when casting UUIDs for Postgres
| * Reduce unused allocations when casting UUIDs for Postgres (Carl Thuringer, 2019-02-20; 1 file, -1/+13)
    Using the subscript method `#[]` on a string has several overloads and a rather complex implementation. One of the overloads is the capability to accept a regular expression, run a match, and then return the receiver (if it matched) or one of the groups from the MatchData.

    The function of the `UUID#cast` method is to cast a UUID to a type and format acceptable by postgres. Naturally UUIDs are supposed to be strings of a certain format, but it had been determined that it was not ideal for the framework to send just any old string to Postgres and allow the engine to complain when "foobar" or "" was sent, being obviously of the wrong format for a valid UUID. Therefore this code was written to facilitate the checking, and if the value were not of the correct format, a `nil` would be returned, as is conventional in Rails.

    Now, the subscript method will allocate one or more strings on a match and return one of them, based on the index parameter. However, there is no need for a new string, as a UUID of the correct format is already such, and so long as the format was verified the string supplied is adequate for consumption by the database.

    The subscript method also creates a MatchData object which will never be used, and so must eventually be garbage collected. Garbage collection indeed: this innocuous method tends to be called quite a lot. For example, if the primary key of a table is a uuid, then this method will be called. If the foreign key of a relation is a UUID, once again this method is called. If that foreign key belongs to a has_many relationship with dozens of objects, then again dozens of UUIDs shall be cast to a dup of themselves, and spawn dozens of MatchData objects, and so on.

    So, for users that:

    * Use UUIDs as primary keys
    * Use Postgres
    * Operate on collections of objects

    this accomplishes a significant savings in total allocations, and may save many garbage collections.
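    The shape of the optimization, as a simplified sketch (the pattern below is looser than the adapter's real `ACCEPTABLE_UUID` regexp): validate with `Regexp#match?`, which allocates no MatchData, and hand back the original string instead of slicing a copy out of a match.

    ```ruby
    UUID_FORMAT = /\A\h{8}-\h{4}-\h{4}-\h{4}-\h{12}\z/ # simplified for illustration

    # Before (roughly): value[ACCEPTABLE_UUID, 0]  # builds a MatchData and a new String
    def cast_uuid(value)
      value if value.is_a?(String) && UUID_FORMAT.match?(value)
    end

    cast_uuid("5f0ad029-27e4-4eff-8cbb-d38e62bbbd2f") # => the same string, no extra allocations
    cast_uuid("foobar")                               # => nil
    ```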
* | PostgreSQL: Support endless range values for range types (Ryuta Kamizono, 2019-02-20; 1 file, -0/+16)
|/
* Don't allow `where` with invalid value matches to nil values (Ryuta Kamizono, 2019-02-18; 1 file, -0/+6)
    That is considered as silently leaking information. If type casting doesn't return any actual value, it should not be matched to any record.
    Fixes #33624. Closes #33946.
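    For illustration (model and column are made up; `published_on` is a date column that is NULL for drafts):

    ```ruby
    Post.where(published_on: "not-a-date")
    # Before: the value type-cast to nil, producing `WHERE published_on IS NULL`
    #         and silently returning the drafts.
    # After:  a value that cannot be cast matches no records at all.
    ```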
* Merge pull request #35299 from kamipo/fix_mismatched_foreign_key (Ryuta Kamizono, 2019-02-17; 1 file, -3/+80)
|\
    Fix the regex that extracts mismatched foreign key information
| * Fix the regex that extracts mismatched foreign key information (Ryuta Kamizono, 2019-02-17; 1 file, -3/+80)
|/
    The CI failure for `test_errors_for_bigint_fks_on_integer_pk_table` is due to the poor regex that extracts all ``` `(\w+)` ``` like parts from the message (`:foreign_key` should be `"old_car_id"`, but was `"engines"`):
    https://travis-ci.org/rails/rails/jobs/494123455#L1703

    I've made the regex stricter and exercised the mismatched foreign key tests more.

    Fixes #35294
* Allow `column_exists?` to be passed `type` argument as a string (Ryuta Kamizono, 2019-01-24; 1 file, -9/+4)
    Currently `conn.column_exists?("testings", "created_at", "datetime")` returns false even if the table has the `created_at` column. The reason is that `column.type` is a symbol but the passed `type` is not normalized to a symbol, unlike `column_name`; that is surprising behavior to me. I've improved this to normalize the value before comparison.
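    The behaviour described above, as a short before/after (table and column come from the commit message):

    ```ruby
    conn = ActiveRecord::Base.connection

    conn.column_exists?("testings", "created_at", :datetime)  # => true
    conn.column_exists?("testings", "created_at", "datetime") # => was false, now true
    ```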