| Commit message | Author | Age | Files | Lines |
We need to test that the same method defined more than once only registers
one subscriber for it. We can safely remove the duplicate because the method
body is the same and Subscriber uses the method_added hook to register the
subscriber.
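For illustration, a minimal sketch of the mechanism (not the actual
`ActiveSupport::Subscriber` implementation): `method_added` fires on every
definition, but a set-based registry keeps a single entry per method.
```ruby
# Minimal sketch, names invented; not the real ActiveSupport::Subscriber code.
require "set"

class FakeSubscriber
  def self.subscribed_events
    @subscribed_events ||= Set.new
  end

  # Fires once per `def`; the Set silently drops duplicate registrations.
  def self.method_added(event)
    subscribed_events << event
    super
  end

  def sql(event); end
  def sql(event); end # redefinition fires method_added again, Set is unchanged
end

FakeSubscriber.subscribed_events.size # => 1
```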
|
|\
| |
| | |
Fixed serialization for records with an attribute named `format`.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
2d73f5a forces AR to enter the `define_attribute_methods` method whenever it
instantiates a record from the `init_with` entry point. This is a potential
performance hotspot, because `init_with` is called from all `find*` family
methods, and `define_attribute_methods` is slow because it tries to acquire
a lock on the mutex every time it is entered.
By using [DCL](http://en.wikipedia.org/wiki/Double-checked_locking), we can
avoid grabbing the lock most of the time when the attribute methods are already
defined (the common case). This is made possible by the fact that reading an
instance variable is an atomic operation in Ruby.
Credit goes to Aaron Patterson for pointing me to DCL and filling me in on the
atomicity guarantees in Ruby.
[*Godfrey Chan*, *Aaron Patterson*]
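For illustration, a minimal double-checked locking sketch in plain Ruby (not
the actual Active Record code): the unsynchronized instance-variable read
skips the lock in the common case, and the second check inside the mutex
guards against two threads racing past the first one.
```ruby
# Illustrative DCL sketch only; not the ActiveRecord::AttributeMethods code.
class Model
  @attribute_methods_generated = false
  @generation_mutex = Mutex.new

  class << self
    def define_attribute_methods
      # First (unsynchronized) check: reading an ivar is atomic in Ruby,
      # so in the common case we never touch the mutex.
      return if @attribute_methods_generated

      @generation_mutex.synchronize do
        # Second check: another thread may have won the race while we
        # were waiting for the lock.
        return if @attribute_methods_generated
        generate_attribute_methods
        @attribute_methods_generated = true
      end
    end

    private

    def generate_attribute_methods
      # ...define one reader/writer per column here...
    end
  end
end
```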
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* * *
This bug can be triggered when serializing record R (the instance) of type C
(the class), provided that the following conditions are met:
1. The name of one or more columns/attributes on C/R matches an existing private
method on C (e.g. those defined by `Kernel`, such as `format`).
2. The attribute methods have not yet been generated on C.
In this case, the matching private methods will be called by the serialization
code (with no arguments) and their return values will be serialized instead. If
the method requires one or more arguments, it will result in an `ArgumentError`.
This regression was introduced in d1316bb.
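A self-contained illustration of the underlying issue, with no Active Record
involved (the `FakeRecord` class is made up):
```ruby
# `send` ignores visibility, so an attribute named like a private Kernel
# method is routed to Kernel instead of falling through to method_missing.
class FakeRecord
  def read_attribute_for_serialization(name)
    send(name) # like the Rails alias, this does NOT respect visibility
  end
end

begin
  FakeRecord.new.read_attribute_for_serialization(:format)
rescue ArgumentError => e
  e.message # Kernel#format (private) was reached and requires arguments
end
```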
* * *
Attribute methods (e.g. `#name` and `#format`, assuming the class has columns
named `name` and `format` in its database table) are lazily defined. Instead of
defining them when the class is defined (e.g. in the `inherited` hook on
`ActiveRecord::Base`), this operation is deferred until they are first accessed.
The reason behind this is that defining those methods requires knowing which
columns are defined on the database table, which usually requires a round-trip
to the database. Deferring their definition until the last possible moment helps
reduce unnecessary work, especially in development mode where classes are
redefined and thrown away between requests.
Typically, when an attribute is first accessed (e.g. `a_book.format`), it will
fire the `method_missing` hook on the class, which triggers the definition of
the attribute methods. This even works for methods like `format`, because
calling a private method with an explicit receiver will also trigger that hook.
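A simplified, self-contained sketch of that lazy-definition mechanism (all
names invented; the real Active Record code is considerably more involved):
```ruby
class FakeRecord
  COLUMNS = %w[name format]

  def self.define_attribute_methods
    return if @attribute_methods_defined
    COLUMNS.each do |column|
      define_method(column) { @attributes[column] } # overrides Kernel#format
    end
    @attribute_methods_defined = true
  end

  def initialize(attributes)
    @attributes = attributes
  end

  # First access via normal dispatch lands here (even for `format`, because
  # calling a private method with an explicit receiver also falls through to
  # method_missing), defines the attribute methods, then retries the call.
  def method_missing(name, *args, &block)
    if COLUMNS.include?(name.to_s)
      self.class.define_attribute_methods
      send(name, *args, &block)
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    COLUMNS.include?(name.to_s) || super
  end
end

book = FakeRecord.new("name" => "Ruby", "format" => "paperback")
book.format # => "paperback"
```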
Unfortunately, `read_attribute_for_serialization` is simply an alias to `send`,
which does not respect method visibility. As a result, when serializing a record
with those conflicting attributes, the `method_missing` hook is not fired, and
consequently the attribute methods are not defined as one would expect.
Before d1316bb, this was negated by the fact that calling the `run_callbacks`
method also triggers a call to `respond_to?`, which is another trigger point
for the class to define its attribute methods. Therefore, when Active Record
tried to run the `after_find` callbacks, it would also define all the attribute
methods, thus masking the problem.
* * *
The proper fix for this problem is probably to restrict `read_attribute_for_serialization`
to call public methods only (i.e. alias `read_attribute_for_serialization` to
`public_send` instead of `send`). This however would be quite risky to change
in a patch release and would probably require a full deprecation cycle.
Another approach would be to override `read_attribute_for_serialization` inside
Active Record to force the definition of attribute methods:
    def read_attribute_for_serialization(attribute)
      self.class.define_attribute_methods
      send(attribute)
    end
Unfortunately, this is quite likely going to cause a performance degradation.
This patch therefore restores the behaviour from the 4-0-stable branch by
explicitly forcing the class to define its attribute methods in a similar spot
(when records are initialized). This should not cause any extra roundtrips to
the database because the `@columns` should already be cached on the class.
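In spirit, the restored behaviour amounts to something like the following
sketch (not the literal patch; `init_with` is the entry point mentioned above):
```ruby
# Sketch of the idea only: make sure the attribute methods exist as soon as a
# record comes out of the database, before serialization can `send` to them.
def init_with(coder)
  # ... existing initialization of @attributes, callbacks, etc. ...
  self.class.define_attribute_methods
  self
end
```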
Fixes #15188.
|
|\ \
| | |
| | | |
Reorder query in the Active Record Querying guide [ci skip]
|
|/ / |
|
| |
| |
| |
| | |
this decouples our code from the env hash a bit.
|
| | |
|
| | |
|
|/ |
|
|\
| |
| | |
Removed unused code
|
| | |
|
| | |
|
| | |
|
|\ \
| | |
| | | |
Push limit to type objects
|
|/ /
| |
| |
| |
| | |
Columns and injected types no longer have any conditionals based on the
format of SQL type strings! Hooray!
|
|\ \
| | |
| | | |
Push precision to type objects
|
|/ / |
|
|\ \
| | |
| | | |
Push scale to type objects
|
| | |
| | |
| | |
| | |
| | |
| | | |
Ideally types will be usable without having to specify an SQL type
string, so we should keep the information related to parsing them on the
adapter or another object.
|
|\ \ \
| |/ /
|/| | |
Rename `stack` to `queue`
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Because it is used as a queue (FIFO), not as a stack (LIFO).
* http://en.wikipedia.org/wiki/Stack_(abstract_data_type)
* http://en.wikipedia.org/wiki/Queue_(data_structure)
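In Array terms, the distinction looks like this:
```ruby
items = []
items.push(1)
items.push(2)

items.shift # => 1  (queue / FIFO: the oldest element comes out first)
items.pop   # => 2  (stack / LIFO: the newest element comes out first)
```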
|
|\ \ \
| | | |
| | | | |
Move `extract_precision` onto type objects
|
|/ / / |
|
|\ \ \
| |_|/
|/| | |
Remove unnecessary `Hash#to_a` call
|
| |/
| | |
Inspired by https://github.com/rails/rails/commit/931ee4186b877856b212b0085cd7bd7f6a4aea67
```ruby
def stat(num)
start = GC.stat(:total_allocated_object)
num.times { yield }
total_obj_count = GC.stat(:total_allocated_object) - start
puts "#{total_obj_count / num} allocations per call"
end
h = { 'x' => 'y' }
stat(100) { h.each { |pair| pair } }
stat(100) { h.to_a.each { |pair| pair } }
__END__
1 allocations per call
2 allocations per call
```
|
|\ \
| | |
| | | |
Use `break` instead of `next` in AD::Journey::Formatter#match_route
|
| |/
| |
| |
| |
| |
| | |
The array is sorted in descending order, so there is no point in
iterating further once we meet a negative item - all the rest will be
negative too.
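A generic illustration (not the actual AD::Journey::Formatter code) of why
`break` fits here: with a descending-sorted array, `break` stops at the first
negative value, whereas `next` would keep scanning entries that can no longer
match.
```ruby
scores = [5, 3, 1, -2, -7] # sorted in descending order

matches = []
scores.each do |score|
  break if score < 0 # everything after this point is negative too
  matches << score
end
matches # => [5, 3, 1]
```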
|
|\ \
| | |
| | | |
Use the generic type map for all PG type registrations
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
We're going to want all of the benefits of the type map object for
registrations, including block registration and real aliasing. This moves
type name registrations to the adapter, and aliases the OIDs to the
named types.
|
|\ \ \
| | | |
| | | | |
Allow additional arguments to be used during type map lookups
|
| |/ /
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Determining things like precision and scale in PostgreSQL will require
the given blocks to take additional arguments besides the OID, as sketched below.
- Adds the ability to handle additional arguments to `TypeMap`
- Passes the column type to blocks when looking up PG types
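A rough, hypothetical sketch of what block registration with extra lookup
arguments could look like (`TinyTypeMap` is invented for illustration; the
real `TypeMap` API differs):
```ruby
class TinyTypeMap
  def initialize
    @mapping = {}
  end

  # Register either a ready-made type object or a block that builds one.
  def register_type(key, type = nil, &block)
    @mapping[key] = type || block
  end

  def alias_type(key, target_key)
    register_type(key) { |_key, *args| lookup(target_key, *args) }
  end

  # Extra arguments (e.g. the raw sql_type string) are forwarded to blocks,
  # so precision and scale can be derived at lookup time.
  def lookup(key, *args)
    entry = @mapping[key]
    entry.respond_to?(:call) ? entry.call(key, *args) : entry
  end
end

map = TinyTypeMap.new
map.register_type(1700) { |_oid, sql_type| "decimal(#{sql_type})" }
map.alias_type(790, 1700)  # money reuses the decimal registration
map.lookup(1700, "10,2")   # => "decimal(10,2)"
```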
|
|\ \ \
| | | |
| | | | |
[Guides] Do not gsub non-ASCII characters in header anchor.
|
| | | | |
|
| |_|/
|/| |
| | |
| | | |
It was changed by mistake in c5d64b2b86aa42f57881091491ee289b3c489c7e.
|
| | | |
|
| | | |
|
| | | |
|
|/ / |
|
|\ \
| |/
|/| |
Form full URI as string to be parsed in Rack::Test.
|
|/
|
|
| |
There are performance gains to be made by avoiding URI setter methods.
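For illustration (not the actual integration-test code), the difference
between mutating a URI via setters and parsing one preassembled string:
```ruby
require "uri"

# Several setter calls, each of which re-validates its component:
slow = URI.parse("http://www.example.com")
slow.path  = "/articles"
slow.query = "page=2"

# One string, parsed once:
fast = URI.parse("http://www.example.com/articles?page=2")
```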
|
|\
| |
| |
| | |
Fixed a problem where `sum` used with a `group` was not returning a Hash.
|
| |
| |
| |
| | |
`sum` used with a grouping was not returning a Hash.
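For reference, the intended behaviour (`Order` and its columns are made up
for illustration):
```ruby
# Illustrative only; the model and columns are hypothetical.
Order.group(:status).sum(:amount)
# => { "paid" => 150, "pending" => 40 }  -- a Hash keyed by group value

Order.sum(:amount)
# => 190                                 -- a single numeric when ungrouped
```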
|
| |
| |
| | |
Most recent change should be moved to the top
|
|\ \
| | |
| | | |
Documentation: Rename Posts to Articles
|
|/ / |
|
|\ \
| | |
| | | |
Move extract_scale to decimal type
|
| | |
| | |
| | |
| | |
| | |
| | | |
The only type that has a scale is decimal. There's a special case where
decimal columns with 0 scale are type cast to integers if the scale is
not specified. It appears to only affect schema dumping.
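An illustration of that special case (a sketch, not the adapter's actual
casting code):
```ruby
require "bigdecimal"

# Sketch of the scale-0 special case; not the real Active Record type object.
def cast_decimal(value, scale)
  decimal = BigDecimal(value.to_s)
  scale == 0 ? decimal.to_i : decimal
end

cast_decimal("10.0", 0) # => 10      (Integer, as it appears in schema dumps)
cast_decimal("10.0", 2) # => 0.1e2   (BigDecimal)
```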
|
|\ \ \
| | | |
| | | | |
Move PG OID types to their own files
|