| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
| |
Adds the ability to dump the schema or structure files for multiple
databases. Loops through the configs for a given env and sets a filename
based on the format, then establishes a connection for that config and
dumps into the file.
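A rough sketch of that loop, assuming the `configs_for`/`spec_name` struct API
described later in this log; the task body and filename convention are
illustrative, not the exact Rails implementation:
```
# Hypothetical sketch: dump one schema/structure file per configured database.
namespace :db do
  namespace :schema do
    task dump: :load_config do
      ActiveRecord::Base.configurations.configs_for(Rails.env).each do |db_config|
        # One file per config, named after the spec (e.g. schema.rb, animals_schema.rb).
        filename = db_config.spec_name == "primary" ? "schema.rb" : "#{db_config.spec_name}_schema.rb"
        ActiveRecord::Base.establish_connection(db_config.config)
        File.open(File.join("db", filename), "w:utf-8") do |file|
          ActiveRecord::SchemaDumper.dump(ActiveRecord::Base.connection, file)
        end
      end
    end
  end
end
```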
|
| |
`each_current_configuration` is used by create, drop, and other methods
to find the configs for a given environment and return them to the
calling method.
The change here allows the database commands to operate on all the
configs in the environment. Previously we couldn't slice the hashes and
iterate over them because they could be two tier or three tier. By using
the database config structs we don't need to care whether we're dealing
with a three tier or two tier config; we can just parse all the configs
based on the environment.
This makes it possible to run `bin/rails db:create` and have it create
all the configs for the dev and test environments just like it does for
a two tier config - it creates the db for dev and test. Now `db:create`
will create `primary` for dev and test, and `animals` for dev and test
if our database.yml looks like:
```
development:
  primary:
    etc
  animals:
    etc
test:
  primary:
    etc
  animals:
    etc
```
This means that `bin/rails db:create`, `bin/rails db:drop`, and
`bin/rails db:migrate` will operate on the dev and test env for both
the primary and animals databases.
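As a rough sketch of the iteration described above (not the exact Rails code;
the detail that development also covers test mirrors the two-tier behavior
mentioned above):
```
# Hypothetical sketch: yield a config struct for every config in the
# requested environment(s).
def each_current_configuration(environment)
  environments = [environment]
  environments << "test" if environment == "development"

  environments.each do |env|
    ActiveRecord::Base.configurations.configs_for(env).each do |db_config|
      yield db_config.config, db_config.spec_name, env
    end
  end
end
```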
|
| |
If we have a three-tier yaml file like this:
```
development:
  primary:
    database: "development"
  animals:
    database: "development_animals"
    migrations_paths: "db/animals_migrate"
```
This will add db create, drop, and migrate tasks for each named config
under that environment.
```
bin/rails db:drop:primary
bin/rails db:drop:animals
bin/rails db:create:primary
bin/rails db:create:animals
bin/rails db:migrate:primary
bin/rails db:migrate:animals
```
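A hedged sketch of how such namespaced tasks could be generated from the
configs (migrate is omitted for brevity; the task bodies are illustrative
rather than the exact Rails implementation):
```
# Hypothetical sketch: define db:create:<spec> and db:drop:<spec> per config.
namespace :db do
  %w[create drop].each do |action|
    namespace action do
      ActiveRecord::Base.configurations.configs_for(Rails.env).each do |db_config|
        desc "#{action.capitalize} the #{db_config.spec_name} database for the current environment"
        task db_config.spec_name => :load_config do
          ActiveRecord::Tasks::DatabaseTasks.public_send(action, db_config.config)
        end
      end
    end
  end
end
```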
|
| |
Passing around and parsing hashes is easy if you know that it's a two
tier config and each key will be named after the environment and each
value will be the config for that environment key.
This falls apart pretty quickly with three-tier configs. We have no idea
what the second tier will be named (we know the first is primary but we
don't know the second), we have no easy way of figuring out
how deep a hash we have without iterating over it, and we'd have to do
this a lot throughout the code since it breaks all of Active Record's
assumptions regarding configurations.
These methods allow us to pass around objects instead. This will allow
us to more easily parse the configs for the rake tasks. Eventually I'd
like to replace the Active Record connection management that passes
around config hashes to use these methods as well, but that's much
farther down the road.
`walk_configs` takes an environment, specification name, and a config
and turns them into DatabaseConfig struct objects so we can ask the
configs questions like:
```
db_config.spec_name
=> animals
db_config.env_name
=> development
db_config.config
{ :adapter => mysql etc }
```
`db_configs` loops through all given configurations and returns an array
of DatabaseConfig structs, one for each config in the yaml file.
Lastly, `configs_for` takes an environment and either yields the spec
name and config if a block is given or returns an array of
DatabaseConfig structs just for the given environment.
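A rough sketch of the struct-based approach (the method bodies are
illustrative, not the exact Rails code):
```
# Hypothetical sketch of DatabaseConfig and the three helpers described above.
DatabaseConfig = Struct.new(:env_name, :spec_name, :config)

def walk_configs(env_name, spec_name, config)
  if config["database"] || config["adapter"] || config["url"]
    # Two-tier leaf: this is already a connection config.
    DatabaseConfig.new(env_name, spec_name, config)
  else
    # Three-tier: recurse into each named config (primary, animals, ...).
    config.map { |sub_spec_name, sub_config| walk_configs(env_name, sub_spec_name, sub_config) }
  end
end

def db_configs(configs)
  configs.flat_map { |env_name, config| walk_configs(env_name, "primary", config) }
end

def configs_for(env)
  db_configs(ActiveRecord::Base.configurations).select { |db_config| db_config.env_name == env }
end
```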
|
|
|
|
|
|
| |
The current documentation explicitly mentions only PostgreSQL (hstore/json)
for use with `.store_accessor`, making it unclear what to use on a
MySQL 5.7+ setup (which introduced a json data type).
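For example (the model and column names here are hypothetical), `store_accessor`
works the same way on a MySQL 5.7+ json column as it does on PostgreSQL's
json/hstore types:
```
# `settings` is assumed to be a json column on MySQL 5.7+.
class User < ActiveRecord::Base
  store_accessor :settings, :color, :homepage
end

user = User.new(color: "black", homepage: "37signals.com")
user.color    # => "black"
user.settings # => { "color" => "black", "homepage" => "37signals.com" }
```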
|
|\
| |
| | |
Fix default connection handling with three-tier config
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
If you had a three-tier config, the `establish_connection` that's called
in the Railtie on load can't figure out how to access the default
configuration.
This is because Rails assumes that the config is the first value in the
hash and always associated with the key from the environment. With a
three-tier config, however, we need to go one level deeper.
This commit includes two changes. The first removes a line from
`resolve_all` that was parsing the environment out of the config, so
instead of getting:
```
{
  :development => {
    :primary => {
      :database => "whatever"
    },
    :animals => {
      :database => "whatever-animals"
    }
  },
  etc with test / prod
}
```
We'd instead end up with a config that had no attachment to its
environment:
```
{
  :primary => {
    :database => "whatever"
  },
  :animals => {
    :database => "whatever-animals"
  }
  etc - without test and prod
}
```
Not only did this mean that Active Record didn't know how to establish a
connection, it also didn't have the other necessary configs in the
configs list.
To fix this I removed the line that deletes these configs.
The second thing this commit changes is adding this line to
`establish_connection`
```
spec = spec[spec_name.to_sym] if spec[spec_name.to_sym]
```
When you have a three-tier config and don't pass any hash/symbol/env etc.
to `establish_connection`, the resolver will automatically return both
the primary and secondary (in this case the animals db) configurations.
We'll get a `database configuration does not specify adapter` error
because AR will try to establish a connection on the `primary` key
rather than the `primary` key's config. It assumes that the
`development` or default env will automatically return a config hash,
but with a three-tier config we actually get a key and config (`primary
=> config`).
This fix is a bit of a bandaid because it's not the "correct" way to
handle this situation, but it does solve our immediate problem. The new
code here is saying "if the config returned from the resolver (I know
it's called spec in here but we interchange our meanings a LOT and what
is returned is a three-tier config) has a key matching the "primary"
spec name, grab the config from the spec and pass that to the
`establish_connection` method".
This works because if we pass `:animals`, a hash, or `:primary` we'll
already have the correct configuration to connect with.
This fixes the case where we want Rails to connect with the default
connection.
Coming soon is a refactoring that should eliminate the need to do this
but I need this fix in order to write the multi-db rake tasks that I
promised in my RailsConf submission. `@tenderlove` and I are working on
the refactoring of the internals for connection management but it won't
be ready for a few weeks and this issue has been blocking progress.
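To illustrate what the guard is doing (the config values here are made up):
```
# With a three-tier config and no explicit spec, the resolver hands back the
# whole environment entry, keyed by spec name:
spec = {
  primary: { adapter: "sqlite3", database: "db/development.sqlite3" },
  animals: { adapter: "sqlite3", database: "db/animals_development.sqlite3" }
}

spec_name = "primary"
# The bandaid: dig one level deeper when the hash is keyed by the spec name.
spec = spec[spec_name.to_sym] if spec[spec_name.to_sym]
spec # => { adapter: "sqlite3", database: "db/development.sqlite3" }
```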
|
| |
| |
| |
| | |
[ci skip]
|
| | |
|
|\ \
| | |
| | |
| | |
| | | |
lsylvester/only-preload-misses-on-multifetch-cache
Only preload misses on multifetch cache
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
In #24542, quoted_time was introduced to strip the leading date
component for time columns because it was having a significant
effect in MariaDB. However, it assumed that the date component
was always 2000-01-01, which isn't the case, especially if the
source wasn't another time column.
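A sketch of the idea behind the fix (assuming a `quoted_date` helper like the
one in the abstract adapter; not necessarily the exact Rails code):
```
# Strip whatever leading date component is present instead of assuming it is
# always 2000-01-01.
def quoted_time(value)
  quoted_date(value).sub(/\A\d{4}-\d{2}-\d{2} /, "")
end
```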
|
| |/
|/|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
For legacy reasons Rails stores time columns on SQLite as full
timestamp strings. However, because the date component wasn't being
normalized, when they were read back they were being prefixed with
2001-01-01 by ActiveModel::Type::Time. This had a twofold result:
first, the fast code path wasn't being used because the string was
invalid, and second, it was corrupting the fractional second component
read by the Date._parse code path.
Fix this by a combination of normalizing the timestamps on writing
and changing Active Model to be more lenient when detecting whether a
string starts with a date component before creating the dummy time
value for parsing.
|
|\ \
| | |
| | |
| | |
| | |
| | | |
yujideveloper/feature/delegate-ar-base-pick-to-all
Add `delegate :pick, to: :all`
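For illustration (the model and values are hypothetical), the delegation lets
`pick` be called directly on the class:
```
Person.where(id: 1).pick(:name) # => "David"
Person.pick(:name)              # with the delegation, same as Person.all.pick(:name)
Person.pick(:id, :name)         # => [1, "David"]
```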
|
|/ / |
|
| |
| |
| |
| | |
This reverts commit a19e91f0fab13cca61acdb1f33e27be2323b9786.
|
|/
| |
When a class has a belongs_to or has_one relationship with the
dependent: :destroy option enabled, objects of this class should not be
deleted if its dependents cannot be deleted.
Example:
  class Parent
    has_one :child, dependent: :destroy
  end

  class Child
    belongs_to :parent, inverse_of: :child
    before_destroy { throw :abort }
  end

  c = Child.create
  p = Parent.create(child: c)
  p.destroy
  p.destroyed? # expected: false; actual: true;
Fixes #32022
|
| |
| |
locking is enabled
This issue is caused by `@_trigger_update_callback` not being updated,
because `_update_record` in `Locking::Optimistic` doesn't call `super`
when optimistic locking is enabled.
Now the optimistic locking concern when updating is handled by the
low-level `_update_row` API, so overriding `_update_record` is no
longer necessary.
Removing the method fixes the issue.
Closes #29096.
Closes #29321.
Closes #30823.
|
|
|
|
| |
`Persistence` module
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently the primary key value can not be updated if a record has a
locking column, because `_update_record` in `Locking::Optimistic` doesn't
respect `id_in_database` as the primary key value, unlike `Persistence`.
Also, if a record has a dirty primary key value, it may destroy another
record matched by the dirty record's own lock version.
When updating/destroying persisted records, they should identify
themselves by `id_in_database`, not by the dirty primary key value.
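An illustrative example of the intended behavior (model and values are
hypothetical):
```
# The UPDATE/DELETE should be scoped by the value still in the database
# (`id_in_database`), not by the dirty, not-yet-saved primary key.
person = Person.find(1)  # Person has a lock_version column
person.id = 42
person.id_in_database    # => 1
person.save!             # UPDATE people ... WHERE id = 1 AND lock_version = ...
```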
|
|
|
|
|
| |
This removes `|| id`, which was added in #9963 and #23887, since it is
no longer necessary.
|
|
|
|
|
|
|
|
|
|
| |
This reverts the ignoring of the polymorphic error introduced at 02da8ae.
What the ignoring was meant to solve was caused by forced eager loading
regardless of whether it was necessary, but that has been fixed by #29043.
The ignoring now only causes a mismatch between the behavior of `exists?`
and that of `to_a`, `count`, etc. They should behave consistently.
|
|
|
|
|
|
|
| |
This is an alternative to #29722, and a follow-up to #32048.
This does not change the current behavior, but makes it easier to modify
all polymorphic names consistently.
|
|
|
|
|
| |
Ruby 2.4+ provides `Hash#compact` and `Hash#compact!` natively,
so `active_support/core_ext/hash/compact` is no longer necessary.
|
|
|
|
|
| |
https://bugs.ruby-lang.org/issues/12752
https://ruby-doc.org/core-2.4.0/String.html#method-i-unpack1
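For example:
```
# unpack1 returns the first decoded value directly instead of a one-element array.
"foo".unpack("C")  # => [102]
"foo".unpack1("C") # => 102
```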
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
This comment was added at 070dda2. Those arguments have since been
changed (which is fine, since these are internal nodoc classes), but the
comment does not reflect the current state.
I decided to remove the stale comment since it is not useful for
understanding what the class does.
[ci skip]
|
|
|
|
|
|
| |
Follow up of b988ecb99ff6c8854e4b74ef8a7ade8d9ef5d954.
This was added for internal usage; it doesn't need to be public.
|
|
|
|
| |
Duplicated method name list is no longer needed.
|
|
|
|
| |
There is no reason `attributes=` shouldn't simply delegate to `assign_attributes`.
|
|
|
|
| |
This was added in 9bfa13b, but it has never been used.
|
|\
| |
| |
| |
| | |
Expand AR::Base.abstract_class documentation
[ci skip]
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The previous documentation is somewhat unclear about the use case for an
abstract ActiveRecord class.
This clears it up by highlighting the following points:
- table_name is not derived from the abstract class' name
- type is not derived on direct descendants of the abstract class
- validations, not abstract_class, should be used to specify whether
the parent model can be instantiated or not
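A small illustration of those points (class names are hypothetical):
```
# The abstract class has no table of its own and is not meant to be instantiated.
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true
end

class Shape < ApplicationRecord
  # table_name is "shapes", derived from Shape rather than the abstract parent,
  # and Shape needs no `type` column since STI is not assumed for direct
  # descendants of an abstract class.
end
```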
|
| |
| |
| |
| |
| |
| |
| | |
Fix `ActiveRecord::FinderMethods#limited_ids_for` to use correct primary
key values even if `ORDER BY` columns include another table's primary key.
Fixes #28364.
|
|\ \
| | |
| | |
| | | |
Active Record distinct & order #count regression
|
| | | |
|
|\ \ \
| | | |
| | | |
| | | |
| | | | |
tgxworld/raise_error_when_advisory_lock_is_not_releases
Raise an error if advisory lock in migrator was not released.
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Actually `reflection.klass` should be a valid AR model unless
`polymorphic?`. Previously it worked in a limited way by ignoring
`NameError` even if `reflection.klass` was invalid, and our isolated
testing depends on that limited behavior.
Probably we should also check the klass validity in `check_validity!`
properly. Until then, I have restored the error suppression for now.
Closes #32113.
|
| | | |
| | | |
| | | |
| | | | |
See 948b931925febac3c965ab13470065ced68f7b53 for context
|
| |/ /
|/| |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Currently the place where we limit what gets sent to the database is in
the implementation for `partial_writes`. We should also be restricting
it to column names when partial writes are turned off.
Note that we're using `&` instead of just defaulting to
`self.class.column_names`, as the instance version of `attribute_names`
does not include attributes which are uninitialized (were not included
in the select clause)
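A rough sketch of the restriction described above (the helper name is
illustrative, not the real Rails internals):
```
class Post < ActiveRecord::Base
  # Hypothetical helper: intersect with the table's columns so only real,
  # initialized column attributes are ever sent to the database.
  def attributes_for_database_write
    # `&` also drops uninitialized attributes (e.g. ones excluded from the
    # SELECT), which defaulting to self.class.column_names would not.
    attribute_names & self.class.column_names
  end
end
```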
|
|\ \ \
| | | |
| | | |
| | | |
| | | | |
kamipo/do_not_attempt_to_find_inverse_of_polymorphic
Make `reflection.klass` raise if `polymorphic?` not to be misused
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
`belongs_to` association
We can't automatically find the inverse of a polymorphic `belongs_to`
association without context.
[Ryuta Kamizono & Eric K Idema]
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This is an alternative to #31877 to fix #31876, caused by #28808.
This issue was caused by a combination of several loose implementations:
* finding the automatic inverse association of a polymorphic without context (caused by #28808)
* returning `klass` even if `polymorphic?` (existed before #28808)
* loose verification by `valid_inverse_reflection?` (existed before #28808)
This makes `klass` raise if `polymorphic?` so it can not be misused.
This issue will not happen unless the polymorphic `klass` is misused.
Fixes #31876.
Closes #31877.
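Illustrative behavior (the model and the exact error are assumptions about the
shape of the change, not quoted from it):
```
class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

reflection = Comment.reflect_on_association(:commentable)
reflection.polymorphic? # => true
reflection.klass        # now raises, since the class can't be known without context
```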
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This is an alternative to #29722, and a revert of #29601 and a1fcbd9.
Currently, association creation and normal association finding don't
respect `store_full_sti_class`, but eager loading and preloading do
respect the setting. This means that if `store_full_sti_class = false`
is set (it is `true` by default), eager loading and preloading can not
find created polymorphic records.
Association creation and finding should work consistently.
|