path: root/activerecord/lib/active_record/connection_adapters/postgresql
Commit message  (Author, Date, Files changed, Lines -/+)
* Revert "Allow `:precision` option for time type columns"Sean Griffin2015-02-171-0/+7
| | | | | | | | | | This reverts commit 1502caefd30b137fd1a0865be34c5bbf85ba64c1. The test suite for the mysql adapter broke when this commit was used with MySQL 5.6. Conflicts: activerecord/CHANGELOG.md
* Register adapter specific types with the global type registry  (Sean Griffin, 2015-02-15, 1 file, -38/+0)
|   We do this in the adapter classes specifically, so the types aren't registered if we don't use that adapter. Constants under the PostgreSQL namespace, for example, are never loaded if we're using mysql.
* Allow `:precision` option for time type columns  (Ryuta Kamizono, 2015-02-12, 1 file, -7/+0)
|
* Merge pull request #18888 from kamipo/refactor_quote_default_expression  (Rafael Mendonça França, 2015-02-11, 2 files, -12/+6)
|\   Refactor `quote_default_expression`
| * Refactor `quote_default_expression`  (Ryuta Kamizono, 2015-02-11, 2 files, -12/+6)
| |   `quote_default_expression` and `quote_default_value` handle almost the same thing: not quoting the default function of `:uuid` columns. Rename `quote_default_value` to `quote_default_expression` and remove the duplicate code.
* | Remove an unused option that I didn't mean to commit [ci skip]  (Sean Griffin, 2015-02-11, 1 file, -2/+1)
| |
* | Remove most PG specific type subclasses  (Sean Griffin, 2015-02-11, 11 files, -83/+14)
|/   The latest version of the PG gem can actually convert the primitives for us in C code, which gives a pretty substantial speed up. A few cases were only there to add the `infinity` method, which I just put on the range type (which is the only place it was used). Floats also needed to parse `Infinity` and `NaN`, but it felt reasonable enough to put that on the generic form.
* Refactor microsecond precision to be database agnostic  (Sean Griffin, 2015-02-10, 1 file, -9/+0)
|   The various databases don't actually need significantly different handling for this behavior, and they can achieve it without knowing about the type of the object. The old implementation was returning a string, which will cause problems such as breaking TZ aware attributes, and making it impossible for the adapters to supply their logic for time objects.
* Merge pull request #18849 from kamipo/array_type_is_a_part_of_sql_type  (Sean Griffin, 2015-02-09, 1 file, -18/+3)
|\   An array type is a part of `sql_type`
| * An array type is a part of `sql_type`  (Ryuta Kamizono, 2015-02-08, 1 file, -18/+3)
| |   `sql_type` is reused in `lookup_cast_type`. If the array modifier is made a part of `sql_type` when the array option is first handled, it doesn't need to be handled again.
* | Fix rounding problem for PostgreSQL timestamp column  (Ryuta Kamizono, 2015-02-08, 1 file, -0/+9)
|/   If a timestamp column has a precision, the value needs to be formatted according to that precision.
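[Example] To illustrate the fix, a minimal hypothetical sketch (the `Event` model and `happened_at` column are not from the log; assume the column was declared with `precision: 3`):
    event = Event.create!(happened_at: Time.utc(2015, 2, 8, 12, 0, 0, 123_456))
    # The value is formatted with three fractional digits before being sent to
    # PostgreSQL, so the reloaded value carries only millisecond precision.
    event.reload.happened_at.usec  # => 123000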
* rm `Type#text?`  (Sean Griffin, 2015-02-07, 1 file, -4/+0)
|   This predicate was only to figure out if it's safe to do case insensitive comparison, which is only a problem on PG. Turns out, PG can just tell us whether we are able to do it or not. If the query turns out to be a problem, let's just replace that method with checking the SQL type for `text` or `character`. I'd rather not burden the type objects with adapter specific knowledge.
|   The *real* solution is to deprecate this behavior entirely. The only reason we need it is because the `:case_sensitive` option for `validates_uniqueness_of` is documented as "this option is ignored for non-strings". It makes no sense for us to do that. If the type can't be compared in a case insensitive way, the user shouldn't tell us to do case insensitive comparison.
* Move non-type objects into the `Type::Helpers` namespace  (Sean Griffin, 2015-02-07, 4 files, -4/+4)
|   The type code is actually quite accessible, and I'm planning to encourage people to look at the files in the `type` folder to learn more about how it works. This will help reduce the noise from code that is less about type casting, and more about random AR nonsense.
* Allow a symbol to be passed to `attribute`, in place of a type object  (Sean Griffin, 2015-02-06, 3 files, -1/+54)
|   The same is not true of `define_attribute`, which is meant to be the low-level, no-magic API that sits underneath. The differences between the two APIs are:
|     - `attribute`: lazy (the attribute will be defined after the schema has loaded); allows either a type object or a symbol
|     - `define_attribute`: runs immediately (might get trampled by schema loading); requires a type object
|   This was the last blocker in terms of public interface requirements originally discussed for this feature back in May. All the implementation blockers have been cleared, so this feature is probably ready for release (pending one more look-over by me).
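[Example] A minimal sketch of the two APIs as described above (not from the log; the `Product` model and attribute name are hypothetical, and the `define_attribute` call assumes the low-level signature takes a type object):
    class Product < ActiveRecord::Base
      # Lazy: resolved after the schema has loaded; accepts a symbol or a type object.
      attribute :price_in_cents, :integer

      # Low level: runs immediately and requires an explicit type object.
      define_attribute :price_in_cents, ActiveRecord::Type::Integer.new
    end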
* Add default options to 'bit' and 'bit_varying' methods  (Melody, 2015-02-03, 1 file, -2/+2)
|
* Adds default options hash for postgres money type  (Melody Berton, 2015-02-03, 1 file, -1/+1)
|
* rm `Column#cast_type`  (Sean Griffin, 2015-02-03, 3 files, -18/+53)
|   The type from the column is never used, except when being passed to the attributes API. While leaving the type on the column wasn't necessarily a bad thing, I worry that its existence there implies that it is something which should be used.
|   During the design and implementation process of the attributes API, there have been plenty of cases where getting the "right" type object was hard, but I had easy access to the column objects. For any contributor who isn't intimately familiar with the intents behind the type casting system, grabbing the type from the column might easily seem like the "correct" thing to do. As such, the goal of this change is to express that the column is not something that should be used for type casting.
|   The only places that are "valid" (at the time of this commit) uses of acquiring a type object from the column are fixtures (as the YAML file is going to mirror the database more closely than the AR object), and looking up the type during schema detection to pass to the attributes API.
|   Many of the failing tests were removed, as they've been made obsolete over the last year. All of the PG column tests were testing nothing beyond polymorphism. The Mysql2 tests were duplicating the mysql tests, since they now share a column class.
|   The implementation is a little hairy, and slightly verbose, but it felt preferable to going back to 20 constructor options for the columns. If you are git blaming to figure out wtf I was thinking with them, and have a better idea, go for it. Just don't use a type object for this.
* Remove most uses of `Column#cast_type`  (Sean Griffin, 2015-01-30, 3 files, -9/+17)
|   The goal is to remove the type object from the column, and remove columns from the type casting process entirely. The primary motivation for this is clarity. The connection adapter does not have sufficient type information, since the type we want to work with might have been overridden at the class level. By taking this object from the column, it is easy to mistakenly think that the column object which exists on the connection adapter is sufficient. It isn't.
|   A concrete example of this is `serialize`. In 4.2 and earlier, `where` worked in a very inconsistent and confusing manner. If you passed a single value to `where`, it would serialize it before querying, and do the right thing. However, passing it as part of an array, hash, or range would cause it to not work. This is because it would stop using prepared statements, so the type casting would come from Arel. Arel would have no choice but to get the column from the connection adapter, which would treat it as any other string column, and query for the wrong value.
|   There are a handful of cases where using the column object to find the cast type is appropriate. These are cases where there is not actually a class involved, such as the migration DSL, or fixtures. For all other cases, the API should be designed as such that the type is provided before we get to the connection adapter. (For an example of this, see the work done to decorate the Arel table object with a type caster, or the introduction of `QueryAttribute` to `Relation`.)
|   There are times that it is appropriate to use information from the column to change behavior in the connection adapter. These cases are when the primitive used to represent that type before it goes to the database does not sufficiently express what needs to happen. An example of this that affects every adapter is binary vs varchar, where the primitive used for both is a string. In this case it is appropriate to look at the column object to determine which quoting method to use, as this is something schema dependent. An example of something which would not be appropriate is to look at the type and see that it is a datetime, and performing string parsing when given a string instead of a date. This is the type of logic that should live entirely on the type. The value which comes out of the type should be a sufficiently generic primitive that the adapter can be expected to know how to work with it.
|   The one place that is still using the column for type information which should not be necessary is the connection adapter type caster which is sometimes given to the Arel table when we can't find the associated table. This will hopefully go away in the near future.
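[Example] A hypothetical sketch of the `serialize`/`where` inconsistency described above (the `User` model and `preferences` column are illustrative, not from the log):
    class User < ActiveRecord::Base
      serialize :preferences   # stored as YAML in a text column
    end

    prefs = { theme: "dark" }
    User.where(preferences: prefs)          # 4.2: the value is serialized before querying
    User.where(preferences: [prefs, nil])   # 4.2: prepared statements are skipped, Arel falls
                                            # back to the adapter's plain string column and
                                            # queries for the wrong value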
* Don't error when invalid json is assigned to a JSON column  (Sean Griffin, 2015-01-21, 1 file, -1/+1)
|   Keeping with our behavior elsewhere in the system, invalid input is assumed to be `nil`. Fixes #18629.
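[Example] A brief hypothetical sketch of the behavior (the `Report` model and json `payload` column are not from the log):
    report = Report.new
    report.payload = "{ not valid json"
    report.payload  # => nil, invalid input is treated as nil rather than raising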
* Add an `:if_exists` option to `drop_table`  (Stefan Kanev, 2015-01-19, 1 file, -1/+1)
|   If set to `if_exists: true`, it generates a statement like:
|
|       DROP TABLE IF EXISTS posts
|
|   This syntax is supported by the popular SQL servers, that is (at least) SQLite, PostgreSQL, MySQL, Oracle and MS SQL Server. Closes #16366.
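[Example] For reference, a minimal migration sketch (the `posts` table is just an example):
    class DropPosts < ActiveRecord::Migration
      def up
        drop_table :posts, if_exists: true  # no error if the table is already gone
      end
    end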
* Time columns should support time zone aware attributes  (Sean Griffin, 2015-01-15, 1 file, -1/+1)
|   The types that are affected by `time_zone_aware_attributes` (which is on by default) have been made configurable, in case this is a breaking change for existing applications.
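[Example] A sketch of the configuration this enables; the `time_zone_aware_types` setting name is an assumption, not taken from the log:
    # config/application.rb
    # Restrict time-zone-aware casting to datetime columns only, opting time
    # columns out if the new behavior would break an existing application.
    config.active_record.time_zone_aware_types = [:datetime]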
* remove deprecated support for PG ranges with exclusive lower bounds.  (Yves Senn, 2015-01-05, 1 file, -10/+1)
|   addresses https://github.com/rails/rails/commit/91949e48cf41af9f3e4ffba3e5eecf9b0a08bfc3#commitcomment-9144563
* Merge pull request #18318 from kamipo/stop_passing_the_column_when_quoting_defaults  (Rafael Mendonça França, 2015-01-04, 1 file, -1/+2)
|\   Stop passing the column to the `quote` method when quoting defaults
| * Stop passing the column to the `quote` method when quoting defaults  (Ryuta Kamizono, 2015-01-04, 1 file, -1/+2)
| |   Related to commit 8f8f8058e58dda20259c1caa61ec92542573643d.
* | Prefer `array?` rather than `array`  (Ryuta Kamizono, 2015-01-04, 1 file, -5/+4)
|/   Slightly refactors `PostgreSQLColumn`: `array` should be read-only, `default_function` should be initialized by `super`, and `sql_type` has had `[]` removed. Since we already chose to remove it, we should not change that.
* Merge pull request #17820 from fw42/restore_query_cache_on_rollback  (Rafael Mendonça França, 2015-01-02, 1 file, -1/+1)
|\   Clear query cache on rollback
| * Restore query cache on rollback  (Florian Weingarten, 2014-12-01, 1 file, -1/+1)
| |
* | Merge pull request #18228 from kamipo/correctly_dump_primary_key  (Rafael Mendonça França, 2015-01-02, 1 file, -0/+4)
|\ \   Improve schema dumping support for primary keys. Conflicts: activerecord/CHANGELOG.md
| * | Improve schema dumping support for primary keys.  (Ryuta Kamizono, 2014-12-29, 1 file, -0/+4)
| | |   If it is not a default primary key, correctly dump the type and options.
* | | Extract the index length validation to an auxiliary method  (Rafael Mendonça França, 2014-12-30, 1 file, -3/+2)
| | |
* | | Merge pull request #17680 from larskanis/fix_bytea_change_detection  (Sean Griffin, 2014-12-30, 1 file, -0/+1)
|\ \ \   PostgreSQL, Fix change detection caused by superfluous bytea unescaping
| * | | PostgreSQL, Fix change detection caused by wrong data for bytea unescaping.  (Lars Kanis, 2014-12-29, 1 file, -0/+1)
| |/ /   This showed up when running BinaryTest#test_load_save with the more restrictive input string handling of pg-0.18.0.pre20141117110243.gem. Bytea values sent to the server are in binary format, but are returned back as escaped text. To fulfill the assumption that type_cast_from_database(type_cast_for_database(binary)) == binary, we unescape only if the value was really received from the server.
* / / Ensure that the `primary_key` method returns nil for composite primary keys  (Arthur Neves, 2014-12-30, 1 file, -4/+4)
|/ /   When a table has a composite primary key, the `primary_key` method for sqlite3 and postgresql was only returning the first field of the key. It now returns nil instead, as AR doesn't support composite primary keys.
* | Support for any type of primary key.  (Ryuta Kamizono, 2014-12-28, 1 file, -9/+0)
| |
* | Refactor `PostgreSQL::TableDefinition#primary_key`  (Ryuta Kamizono, 2014-12-27, 1 file, -4/+2)
| |   Because calling the `column` method and setting `options[:primary_key]` are handled by `super`, this only needs to handle `options[:default]`.
* | Correctly ignore `case_sensitive` for UUID uniqueness validation  (Sean Griffin, 2014-12-26, 1 file, -0/+4)
| |   I think we should deprecate this behavior and just error if you tell us to do a case insensitive comparison for types which are not case sensitive. Partially reverts 35592307. Fixes #18195.
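[Example] For context, a hypothetical sketch of the validation in question (the `Document` model and uuid `token` column are not from the log):
    class Document < ActiveRecord::Base
      # :case_sensitive is ignored for the uuid column rather than producing a
      # LOWER(...) comparison that PostgreSQL rejects for the uuid type.
      validates_uniqueness_of :token, case_sensitive: false
    end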
* | `force: :cascade` to recreate tables referenced by foreign-keys.  (Yves Senn, 2014-12-19, 1 file, -0/+4)
| |
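[Example] A brief sketch of the option (table and column names are illustrative only):
    # Drops any existing "posts" table with CASCADE first, so foreign keys in
    # other tables that reference "posts" don't block recreating it.
    create_table :posts, force: :cascade do |t|
      t.string :title
    end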
* | Relax the UUID regex  (Godfrey Chan, 2014-12-18, 1 file, -9/+2)
| |   Apparently PG does not validate against RFC 4122. The intent of the original patch is just to protect against PG errors (which potentially break txns, etc.) because of bad user input, so we shouldn't try any harder than PG itself. Closes #17931.
* | Refactor `quoted_date`  (Ryuta Kamizono, 2014-12-11, 1 file, -7/+3)
| |   Move microseconds formatting to `AbstractAdapter`.
* | Revert to 4.1 behavior for casting PG arrays  (Sean Griffin, 2014-12-08, 1 file, -0/+3)
| |   The user is able to pass PG string literals in 4.1, and have it converted to an array. This is also possible in 4.2, but it would remain in string form until saving and reloading, which breaks our `attr = save.reload.attr` contract. I think we should deprecate this in 5.0, and only allow array input from user sources. However, this currently constitutes a breaking change to public API that did not go through a deprecation cycle.
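[Example] To illustrate the restored behavior, a hypothetical sketch (the `tags` text[] column is not from the log):
    user.tags = "{ruby,rails}"   # a PG array string literal
    user.tags                    # => ["ruby", "rails"], cast on assignment as in 4.1,
                                 #    not only after save and reload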
* | Correctly respect subtypes for PG arrays and ranges  (Sean Griffin, 2014-12-05, 1 file, -7/+19)
| |   The type registration was simply looking for the OID, and eagerly fetching/constructing the subtype when it was registered. However, numeric types have additional parameters which are extracted from the actual SQL string of the type during lookup, and can have their behavior change based on the result. We simply need to use the block form of registration, and look up the subtype lazily instead. Fixes #17935.
* | Merge pull request #17799 from kamipo/refactor_add_column_options  (Rafael Mendonça França, 2014-11-28, 1 file, -4/+12)
|\ \   Refactor `add_column_options!` to move the quoting of the default value for :uuid into `quote_value`.
| * | Rename to `quote_default_expression` from `quote_value`  (Ryuta Kamizono, 2014-11-28, 1 file, -1/+1)
| | |
| * | Refactor `add_column_options!` to move the quoting of the default value for :uuid into `quote_value`.  (Ryuta Kamizono, 2014-11-28, 1 file, -4/+12)
| |/
* / Refactor `SchemaCreation#visit_AddColumn`  (Ryuta Kamizono, 2014-11-27, 1 file, -6/+0)
|/
* Move PG float quoting to the correct location  (Sean Griffin, 2014-11-25, 1 file, -16/+6)
|   Not sure how we missed this case when we moved everything else to the `_quote` method.
* allow types to be passed in for USING casts  (Aaron Patterson, 2014-11-24, 1 file, -0/+3)
|   This allows us to abstract the migration from the type that is actually used by Rails. For example, ":string" may be a varchar or something, but the framework does that translation, and the app shouldn't need to know.
* allow the "USING" statement to be specified on change column callsAaron Patterson2014-11-241-1/+3
|
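[Example] Taken together with the previous entry, a hedged sketch of how these might appear in a migration; the `:using` and `:cast_as` option names are assumptions based on the commit subjects, and the table and columns are hypothetical:
    # Supply a raw USING expression to convert existing data during the type change.
    change_column :users, :sign_in_count, :integer, using: "sign_in_count::integer"

    # Or pass an abstract Rails type and let the adapter build the USING cast.
    change_column :users, :settings, :jsonb, cast_as: :jsonb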
* Rename the primary key index when renaming a table in pg  (Sean Griffin, 2014-11-22, 1 file, -0/+3)
|   Also checked to make sure this does not affect foreign key constraints. (It doesn't.) Fixes #12856. Closes #14088.
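[Example] A small sketch of the effect (the index name shown is PostgreSQL's default naming and is illustrative):
    rename_table :posts, :articles
    # The primary key's backing index is renamed along with the table,
    # e.g. "posts_pkey" becomes "articles_pkey".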
* raise a better exception for renaming long indexes  (Aaron Patterson, 2014-11-20, 1 file, -0/+3)
|