Commit messages
...
* In non-strict mode it is '', but if someone is in strict mode then we
should honour the strict semantics.
Also, this removes the need for a completely horrible hack in dirty.rb.
Closes #7780
* Remove parsing of character type default values for 8.1 formatting,
since Rails doesn't support PostgreSQL 8.1 anymore.
Remove a misleading comment unrelated to the code.
* According to the PostgreSQL documentation
(http://www.postgresql.org/docs/8.2/static/catalog-pg-attrdef.html),
we should not be using the 'adsrc' field, because it is unaware of
outside changes that could affect the way that default values are
represented. Thus, I changed the queries to use
"pg_get_expr(adbin, adrelid)" instead of the historical "adsrc" field.
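A sketch of the kind of catalog query this change implies (illustrative
only; the connection setup and table name are assumptions, not the
adapter's actual code):
```
require "pg"

conn = PG.connect(dbname: "mydb") # hypothetical connection

# Read column defaults via pg_get_expr(adbin, adrelid) instead of the
# historical adsrc column, so the expression is rebuilt from the parsed
# form and reflects outside changes.
sql = <<-SQL
  SELECT a.attname, pg_get_expr(d.adbin, d.adrelid) AS default_expr
  FROM pg_attribute a
  LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum
  WHERE a.attrelid = 'defaults'::regclass
    AND a.attnum > 0 AND NOT a.attisdropped
  ORDER BY a.attnum
SQL

conn.exec(sql).each do |row|
  puts "#{row['attname']}: #{row['default_expr'].inspect}"
end
```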
* The PostgreSQL adapter now properly parses default values when using
multiple schemas and domains. When using domains across schemas,
PostgreSQL prefixes the type of the default value with the name of the
schema where that type (or domain) is defined. For example, this query:
```
SELECT a.attname, d.adsrc
FROM pg_attribute a LEFT JOIN pg_attrdef d
ON a.attrelid = d.adrelid AND a.attnum = d.adnum
WHERE a.attrelid = 'defaults'::regclass
AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
```
could return something like "'<default_value>'::pg_catalog.text" or
"('<default_value>'::pg_catalog.text)::text" for text columns with
defaults. I modified the regexp used to parse this value so that it
ignores anything between :: and \b(?:character varying|bpchar|text),
and allows optional parentheses, as in the second example above.
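A sketch of a regexp in that spirit (the pattern and helper are
illustrative, not the adapter's actual code):
```
# Strip an optional wrapping paren pair, an optional schema qualifier
# such as pg_catalog., and an optional outer ::text cast from a text
# column's default expression, keeping only the quoted value.
TEXT_DEFAULT = /\A\(?'(.*)'::(?:\w+\.)?(?:character varying|bpchar|text)\)?(?:::text)?\z/m

def extract_text_default(expr)
  # un-double the escaped single quotes inside the value
  expr =~ TEXT_DEFAULT ? $1.gsub("''", "'") : nil
end

extract_text_default("'hello'::pg_catalog.text")         # => "hello"
extract_text_default("('hello'::pg_catalog.text)::text") # => "hello"
```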
* Conflicts:
  activerecord/lib/active_record/persistence.rb
  railties/lib/rails/generators/rails/resource_route/resource_route_generator.rb
* When inserting new records, only the fields which have been changed
from the defaults will actually be included in the INSERT statement.
The other fields will be populated by the database.
This is more efficient, and also means that it will be safe to
remove database columns without getting subsequent errors in running
app processes (so long as the code in those processes doesn't
contain any references to the removed column).
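For illustration, with a hypothetical Post model whose status column
has a database default of 'draft':
```
post = Post.new(title: "Hello") # status is untouched, still its default
post.save

# The generated SQL now omits the unchanged column:
#   INSERT INTO "posts" ("title") VALUES ('Hello')
# rather than:
#   INSERT INTO "posts" ("title", "status") VALUES ('Hello', 'draft')
# The database fills in status from its own default.
```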
* If your database supports setting the isolation level for a
transaction, you can set it like so:
```
Post.transaction(isolation: :serializable) do
  # ...
end
```
Valid isolation levels are:
* `:read_uncommitted`
* `:read_committed`
* `:repeatable_read`
* `:serializable`
You should consult the documentation for your database to understand
the semantics of these different levels:
* http://www.postgresql.org/docs/9.1/static/transaction-iso.html
* https://dev.mysql.com/doc/refman/5.0/en/set-transaction.html
An `ActiveRecord::TransactionIsolationError` will be raised if:
* The adapter does not support setting the isolation level
* You are joining an existing open transaction
* You are creating a nested (savepoint) transaction
The mysql, mysql2 and postgresql adapters support setting the
transaction isolation level. However, support is disabled for MySQL
versions below 5, because they are affected by a bug
(http://bugs.mysql.com/bug.php?id=39170) which means the isolation
level gets persisted outside the transaction.
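For example, per the rules above, asking for an isolation level while
joining an already-open transaction raises (hypothetical usage):
```
Post.transaction do
  # We are already inside an open transaction, so requesting an
  # isolation level here is one of the documented error cases:
  Post.transaction(isolation: :serializable) do
    # never reached
  end
end
# => raises ActiveRecord::TransactionIsolationError
```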
* Adds migration and type casting support for PostgreSQL Array datatype
* Moves column-related schema dumper code into the AbstractAdapter. The
code remains the same, but by placing it in the AbstractAdapter we can
then override it with adapter-specific methods that help with
adapter-specific data types.
The goal of moving this code here is to create a new migration key for
PostgreSQL's array type. Since any datatype can be an array, the goal
is to have ':array => true' as a migration option, turning the datatype
into an array. I've implemented this in postgres_ext; the syntax is
shown here: https://github.com/dockyard/postgres_ext#arrays
* Adds array migration support
* Adds array_test.rb outlining the test cases for the array data type
* Adds pg_array_parser to the Gemfile for testing
* Adds pg_array_parser to postgresql_adapter (unused in this commit)
* Adds schema dump support for arrays
* Adds PostgreSQL array type casting support
* Updates the changelog; adds a note for inet and cidr support, which I
forgot to add before
* Removes a debugger call; adds pg_array_parser to the JRuby platform
* Removes the pg_array_parser requirement; creates an ArrayParser
module used by PostgreSQLAdapter
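A sketch of the migration option described above (table and column
names are hypothetical):
```
class AddTagsToPosts < ActiveRecord::Migration
  def change
    # Any datatype can become an array via the new :array => true option
    add_column :posts, :tags, :string, :array => true
  end
end

post = Post.create(:tags => ["ruby", "rails"])
post.tags # => ["ruby", "rails"] -- cast to and from a Ruby Array
```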
* Accidentally checked in commented test code. Fail. >_<
* This method was first seen in 045713ee240fff815edb5962b25d668512649478,
and subsequently reimplemented in
fb2325e35855d62abd2c76ce03feaa3ca7992e4f.
According to @jeremy, this is okay to remove. He thinks it was added
because at the time we didn't have much transaction state to keep track
of, and he viewed it as a hack for us to track it internally, thinking
it was better to ask the connection for the transaction state.
Over the years we have added more and more state to track, a lot of
which is impossible to ask the connection for. So it seems that this is
just a relic of the past, and we will now track the state internally
only.
* The caller needs to have knowledge of the rollback either way, so do
it all in the caller (#transaction).
* This avoids us having to manually increment and decrement it.
* during a"
This reverts commit c24c885209ac2334dc6f798c394a821ee270bec6.
Here's the explanation I just sent to @tenderlove:
Hey,
I've been thinking about the transaction memory leak thing that we were
discussing.
Example code:
```
post = nil
Post.transaction do
  N.times { post = Post.create }
end
```
Post.transaction is going to create a real transaction, and there will
also be a (savepoint) transaction inside each Post.create.
In an ideal world, we'd like all but the last Post instance to be GC'd,
and for the last Post instance to receive its after_commit callback when
Post.transaction returns.
I can't see how this can work using your solution, where the Post itself
holds a reference to the transaction it is in; when Post.transaction
returns, control does not switch to any of Post's instance methods, so
it can't trigger the callbacks itself.
What we really want is for the transaction itself to hold weak
references to the objects within the transaction. Those objects can then
be GC'd, but if they are not GC'd the transaction can iterate them and
execute their callbacks.
I've looked into the WeakRef implementations that are available. On
1.9.3, the stdlib weakref library is broken and we shouldn't use it.
There is a better implementation here:
https://github.com/bdurand/ref/blob/master/lib/ref/weak_reference/pure_ruby.rb
We could use that, either by pulling in the gem or just copying the code
in, but it still suffers from the limitation that it uses ObjectSpace
finalizers.
In my testing, these finalizers make GC quite expensive:
https://gist.github.com/3722432
Ruby 2.0 will have a native WeakRef implementation (via
ObjectSpace::WeakMap), hence won't be reliant on finalizers:
http://bugs.ruby-lang.org/issues/4168
So the ultimate solution will be for everyone to use Ruby 2.0, and for
us to just use ObjectSpace::WeakMap.
In the meantime, we have basically three options:
The first is to leave it as it is.
The second is to use a finalizer-based weakref implementation and take
the GC perf hit.
The final option is to store object ids rather than the actual objects,
then use ObjectSpace._id2ref to dereference the objects at the end of
the transaction, if they still exist. This won't stop memory use growing
within the transaction, but it'll grow more slowly.
I benchmarked the performance of _id2ref when the object does or does
not exist: https://gist.github.com/3722550
If the object does exist it seems decent, but it's hugely more expensive
when it doesn't, probably because we have to do the rescue nil.
Probably most of the time the objects will exist. However, the point of
doing this optimisation is to allow people to create a large number of
objects inside a transaction and have them be GC'd. So for that use
case, we'd be replacing one problem with another. I'm not sure which of
the two problems is worse.
My feeling is that we should just leave this for now and come back to it
when Ruby 2.0 is out.
I'm going to revert your commit because I can't see how it solves this.
Hope you don't mind... if I've misunderstood then let me know!
Jon
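A sketch of that third option (hypothetical, not code that shipped):
the transaction keeps object ids and dereferences whatever survived GC
when it commits:
```
class IdTrackingTransaction
  def initialize
    @record_ids = []
  end

  def add_record(record)
    @record_ids << record.object_id # no strong reference is kept
  end

  def commit_records
    @record_ids.each do |id|
      # _id2ref raises RangeError once the object has been recycled,
      # hence the (relatively expensive) rescue mentioned above.
      record = ObjectSpace._id2ref(id) rescue nil
      # the id may have been reused by an unrelated object, so check
      record.committed! if record.is_a?(ActiveRecord::Base)
    end
  end
end
```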
* Fixed support for DATABASE_URL for rake db tasks
* Added tests to confirm establish_connection uses DATABASE_URL and
Rails.env correctly even when no arguments are passed in.
- Updated the rake db tasks to support DATABASE_URL, and added tests to
confirm correct behavior for these rake tasks. (Removed the
establish_connection call from some tasks, since in those cases the
:environment task already made sure it would be called.)
- Updated the Resolver so that when it resolves the database URL, it
removes hash values with empty strings from the config spec (e.g. to
support connecting to PostgreSQL when no username is specified); see
the sketch below.
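A sketch of that last point (illustrative, not the Resolver's actual
code):
```
# A URL like postgresql://localhost/mydb yields no username, which can
# surface as an empty string; dropping empty values lets the adapter
# fall back to its own defaults instead of trying username "".
spec = { adapter: "postgresql", host: "localhost",
         database: "mydb", username: "" }
spec = spec.reject { |_, value| value == "" }
# => { adapter: "postgresql", host: "localhost", database: "mydb" }
```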
* Fixes two warnings: an unused variable, and a possibly useless use of
a variable in void context.
* As a result of different commits, ConnectionPool had become of two
minds about exceptions, sometimes using PoolFullError and sometimes
using ConnectionTimeoutError. In fact, it was using
ConnectionTimeoutError internally, but then rescuing and re-raising as
a PoolFullError.
There's no reason for this bifurcation; standardize on
ConnectionTimeoutError, which is the Rails 2 name and still accurately
describes the semantics at this point.
History:
In Rails 2, ConnectionPool raised a ConnectionTimeoutError if it
couldn't get a connection within the timeout.
Originally in master/rails3, @tenderlove had planned on removing
wait/blocking in ConnectionPool entirely; at that point he changed the
exception to PoolFullError.
But then later wait/blocking came back, while the exception remained
PoolFullError.
Then in 02b233556377, pmahoney introduced fair waiting logic and
brought back ConnectionTimeoutError, introducing the weird bifurcation.
ConnectionTimeoutError accurately describes the semantics as of this
point and is backwards compatible with Rails 2. There's no reason for
PoolFullError to exist, no reason for two different exception types to
be used internally, and no reason to rescue one and re-raise as
another. Unify!
* Conflicts:
  activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
* <pre> block.
* postgres, map scaled intervals to string datatype
* transaction.
* This implements support for encoding and decoding JSON data to and
from the database, and for creating columns of type JSON, using a
native type [1] supported by PostgreSQL from version 9.2.
[1] http://www.postgresql.org/docs/9.2/static/datatype-json.html
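A sketch of the resulting usage (model and column names are
hypothetical):
```
class CreateEvents < ActiveRecord::Migration
  def change
    create_table :events do |t|
      t.json :payload # native PostgreSQL json column (9.2+)
    end
  end
end

event = Event.create(payload: { "kind" => "signup", "count" => 1 })
event.payload # => {"kind"=>"signup", "count"=>1}, decoded back to a Hash
```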
* Fix for time type columns with invalid time value
* The string_to_dummy_time method was blindly parsing the dummy time
string with Date._parse, which returns a hash for the date part even
when the time part is an invalid time string.
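To illustrate the failure mode (assuming the dummy-date prefixing
described above; the output shown is illustrative):
```
require "date"

# Even with garbage in the time portion, Date._parse still returns the
# date fields, so the value looked "parsed" when it wasn't:
Date._parse("2000-01-01 not a time")
# => {:year=>2000, :mon=>1, :mday=>1}
```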
* We don't need separate @class_to_pool and @connection_pool hashes.
* Loop rather than recurse in retrieve_connection_pool
* Key the hash by class rather than class name. This avoids creating
unnecessary strings.