Do not create a hash key when calling ActiveModel::Errors#include?
From: https://github.com/rails/rails/issues/24279
Problem:
Calling `record.errors.include? :foo` adds a new key to the `@messages`
hash, defaulting to an empty array.
This happens because of a combination of two commits:
https://github.com/rails/rails/commit/b97035df64f5b2f912425c4a7fcb6e6bb3ddab8d
(added in Rails 4.1)
and
https://github.com/rails/rails/commit/6ec8ba16d85d5feaccb993c9756c1edcbbf0ba13#diff-fdcf8b65b5fb954372c6fe1ddf284c78R76
(Rails 5.0).
Because the hash has a default proc that assigns an array for
non-existing keys, merely reading a key makes Ruby add it to the hash.
Solution:
Change `#include?` to check with `has_key?` first, and only then check
whether the value is `present?`.
Add a test case for ActiveModel::Errors#include?
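The mechanics are plain Ruby, so a small sketch outside Rails can show both the bug and the shape of the fix; `messages` here stands in for the `@messages` hash, and the fixed check is paraphrased rather than quoted from the Rails source:

```ruby
# A default proc that writes into the hash turns a bare lookup into a mutation.
messages = Hash.new { |hash, key| hash[key] = [] }

# The old check read the value directly, so the default proc fired:
messages[:foo].any? # => false
messages            # => {:foo=>[]} -- the key now exists

# Shape of the fix: test for the key first, and only then inspect the value.
messages.key?(:bar) && !messages[:bar].empty? # => false
messages            # => {:foo=>[]} -- :bar was never added
```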
Since precision is always larger than scale, applying the precision
first can actually change the rounding behavior. Given a precision of 5
and a scale of 3, when you apply the precision of 5 to `1.25047` the
result is `1.2505`, which when the scale is applied becomes `1.251`
instead of the expected `1.250`.
This issue appears to only occur with floats, as scale doesn't apply to
other numeric types, and the `BigDecimal` constructor ignores precision
entirely when working with strings. There's no way we could handle this
for the "unknown object which responds to `to_d`" case, as we can't
assume an interface for applying the scale.
Fixes #24235
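The double rounding can be reproduced with BigDecimal alone, using `mult` with a digit limit to mimic applying the precision:

```ruby
require "bigdecimal"

value = BigDecimal("1.25047")

# Applying precision (significant digits) first, then scale, rounds twice:
value.mult(1, 5)          # => 1.2505 (limited to 5 significant digits)
value.mult(1, 5).round(3) # => 1.251, the double-rounded result

# Applying only the scale gives the expected result:
value.round(3)            # => 1.25 (i.e. the expected 1.250)
```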
Follow up to #24079
In order to fix issue #17621 we added a check to validations that
determined whether a record should be validated. Based on the existing
tests and the behavior we wanted, we decided the best way to do that was
by checking
`!record.persisted? || record.changed? || record.marked_for_destruction?`.
This change didn't make it into a release until now. When #23790 was
opened we realized that `valid?` and `invalid?` were broken and did not
work on persisted records because of the `!record.persisted?` check.
While there is still a bug that #17621 brought up, this change was too
drastic and should not be an RC blocker. I will work on fixing this so
that we don't break `valid?` but also aren't validating parent records
through child records when the parent is set to `validate: false`. This
change removes the code changes to validations and the corresponding
tests. It adds tests for two of the bugs found since the Rails 5 beta2
release.
Fixes #17621
Remove unused parameter from method
The `attribute` parameter is not used inside the `normalize_detail`
method. This does not need to go through a deprecation cycle, since the
method is private.
Fixes #23645
When you're using an `attr_accessor` on a record instead of an
attribute in the database, there's no way for the record to know that it
has `changed?` unless you tell it with `attribute_will_change!("attribute")`.
The change made in 27aa4dd updated validations to check whether a record
was `changed?`, `marked_for_destruction?`, or not `persisted?`. It did
not take into account virtual attributes, which do not affect the
model's dirty status.
The only way to fix this is to always validate the record if the
attribute does not belong to the set of attributes the record expects
(in `record.attributes`), because virtual attributes will never be in
that hash.
I think we should consider deprecating this particular behavior in the
future and requiring that the user mark the record dirty by noting that
the virtual attribute will change. Unfortunately this isn't easy,
because we have no way of knowing whether you did the "right thing" in
your application by marking it dirty, so you would get the deprecation
warning even if you are doing the correct thing.
For now this restores the expected behavior when using a virtual
attribute by always validating the record, and adds tests for this case.
I was going to add the `!record.attributes.include?(attribute)` check to
the `should_validate?` method, but `uniqueness` cannot validate a
virtual attribute since there is no stored value to hold on to. Because
of this, `should_validate?` was about to become a very messy method, so
I decided to split the checks up so we can handle each case specifically.
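A sketch of the situation, with `Profile` and `nickname` as hypothetical names: the virtual attribute is invisible to dirty tracking unless its writer marks it changed, which is the manual step the message mentions.

```ruby
class Profile < ActiveRecord::Base
  # nickname has no column, so it never appears in record.attributes
  validates :nickname, presence: true

  attr_reader :nickname

  # Manual dirty tracking for a virtual attribute: tell the record that
  # the attribute will change, so changed? reflects the assignment.
  def nickname=(value)
    attribute_will_change!("nickname") unless value == @nickname
    @nickname = value
  end
end
```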
* 5-0-beta-sec:
bumping version
fix version update task to deal with .beta1.1
Eliminate instance level writers for class accessors
allow :file to be outside rails root, but anything else must be inside the rails view directory
Don't short-circuit reject_if proc
stop caching mime types globally
use secure string comparisons for basic auth username / password
Instance level writers can have an impact on how the Active Model /
Record objects are saved. Specifically, they can be used to bypass
validations. This is a problem if mass assignment protection is
disabled and specific attributes are passed to the constructor.
CVE-2016-0753
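A sketch of the accessor behavior at issue, with `Record`, `validators`, and `checks` as illustrative names: `cattr_accessor` defines an instance-level writer unless told otherwise, so class-wide state can be overwritten through a single instance.

```ruby
require "active_support/core_ext/module/attribute_accessors"

class Record
  # By default cattr_accessor also defines an instance-level writer,
  # which attributes passed to a constructor could reach.
  cattr_accessor :validators

  # The mitigation: skip the instance writer for class-level state.
  cattr_accessor :checks, instance_writer: false
end

Record.new.validators = [] # instance writer exists by default
Record.new.checks = []     # raises NoMethodError: no instance writer defined
```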
This is an alternate implementation to #22875, that generalizes a lot of
the logic that type decorators are going to need, in order to have them
work with arrays, ranges, etc. The types have the ability to map over a
value, with the default implementation being to just yield that given
value. Array and Range give more appropriate definitions.
This does not automatically make ranges time zone aware, as they need to
be added to the `time_zone_aware` types config, but we could certainly
make that change if we feel it is appropriate. I do think this would be
a breaking change however, and should at least have a deprecation cycle.
Closes #22875.
/cc @matthewd
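A simplified sketch of the generalization, not the exact Rails internals: each type can map a block over a value, and container types recurse into their contents.

```ruby
class Value
  # Default: just yield the given value.
  def map(value)
    yield value
  end
end

class ArrayType < Value
  def map(value, &block)
    value.map { |item| block.call(item) }
  end
end

class RangeType < Value
  def map(value, &block)
    Range.new(block.call(value.begin), block.call(value.end))
  end
end

# A decorator (e.g. a time-zone-aware wrapper) can now treat all of
# these uniformly:
RangeType.new.map(1..5) { |v| v * 10 } # => 10..50
```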
A regression (#22744) introduced in 7500dae caused certain numericality
validations to raise an error when run against an attribute with a
string value. Previously, these validations would successfully run
against string values because the value was cast to a numeric class.
This commit resolves the regression by converting string values to
floats before performing numericality comparison validations.
[fixes #22744]
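A sketch of the conversion step, where `parse_raw_value` is a hypothetical name rather than the validator's real method: strings are parsed into floats before the comparison runs, while values that are already numeric pass through.

```ruby
# Hypothetical helper: convert string input to a Float for comparison.
def parse_raw_value(raw_value)
  raw_value.is_a?(String) ? Kernel.Float(raw_value) : raw_value
rescue ArgumentError
  nil # non-numeric strings fail validation elsewhere
end

parse_raw_value("12.5") # => 12.5
parse_raw_value(12)     # => 12 (non-strings pass through untouched)
```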
:tada: :beers:
We currently generate an unbounded number of prepared statements when
`limit` or `offset` are called with a dynamic argument. This changes
`LIMIT` and `OFFSET` to use bind params, eliminating the problem.
`Type::Value#hash` needed to be implemented, as it turns out we busted
the query cache if the type object used wasn't exactly the same object.
This drops support for passing an `Arel::Nodes::SqlLiteral` to `limit`.
Doing this relied on AR internals, and was never officially supported
usage.
Fixes #22250.
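Why `Type::Value#hash` matters can be shown with a generic value object: a Hash-based cache only hits when `eql?` and `hash` agree, so equivalent type instances must define both. `DecimalType` below is illustrative, not the Rails class.

```ruby
class DecimalType
  attr_reader :precision, :scale

  def initialize(precision: nil, scale: nil)
    @precision = precision
    @scale = scale
  end

  # eql? and hash must agree for Hash-based caches to treat two
  # equivalent instances as the same key.
  def eql?(other)
    other.class == self.class &&
      other.precision == precision &&
      other.scale == scale
  end
  alias_method :==, :eql?

  def hash
    [self.class, precision, scale].hash
  end
end

a = DecimalType.new(precision: 5, scale: 3)
b = DecimalType.new(precision: 5, scale: 3)
{ a => :cached }[b] # => :cached, only because eql?/hash are defined
```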
Update and fix forbidden attributes test issues caused by AC::Parameters change
Add AC::Parameters tests for WhereChain#not
[ci skip] Update dirty.rb: documentation fix.
ActiveModel::Dirty module documentation fix.
Use the post-type-cast version of the attribute to validate numericality
This fixes the issue where you may be comparing (using a numeric
validator such as `greater_than`) numbers of a specific Numeric type
such as `BigDecimal`.
The previous behavior took the numeric value to be validated and
unconditionally converted it to a Float. Due to floating point
precision, this can cause issues when comparing a Float to a BigDecimal.
Consider the following:
```ruby
validates :sub_total, numericality: {
  greater_than: BigDecimal('97.18')
}
```
If the `:sub_total` value BigDecimal.new('97.18') was validated against
the above, the record would pass as valid, since `:sub_total` is
converted to a Float regardless of its original type. The comparison
therefore becomes Kernel.Float(97.18) > BigDecimal.new('97.18').
This patch corrects the illustrated behavior by conditionally
converting the value under validation to a Float.
[Roque Pinel & Trevor Wistaff]
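A sketch of the conditional conversion the message describes; `comparable_number` is a hypothetical helper, not the validator's real method name. Values that are already Numeric, such as the post-type-cast BigDecimal, are compared as-is.

```ruby
require "bigdecimal"

# Hypothetical helper: only non-Numeric values go through Float().
def comparable_number(value)
  value.is_a?(Numeric) ? value : Kernel.Float(value)
end

sub_total = BigDecimal("97.18")
comparable_number(sub_total) > BigDecimal("97.18") # => false, as expected
```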
I seriously don't even know why we handle booleans, but those strings
should technically be frozen. Additionally, we don't need to actually
check the class in the mutable string type, since the `cast_value`
function will always return a string.
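Roughly the shape of the cast in question, as a sketch rather than the verbatim Rails source: the boolean literals are the same for every caller, so they can safely be frozen.

```ruby
def cast_value(value)
  case value
  when true  then "t".freeze
  when false then "f".freeze
  else value.to_s
  end
end
```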
This type adds an escape hatch for apps in which string duping causes
unacceptable memory growth. The reason we are duping the strings is to
detect mutation, a feature added to 4.2 in #15674. The string type was
modified to support this behavior in #15788.
Memory growth is really only a concern for string types, as it's the
only mutable type where the act of coercion does not create a new object
regardless (since we're usually returning an object of a different
class).
I do feel strongly that if we are going to support detecting mutation,
we should do it universally for any type which is mutable. While it is
less common and idiomatic to mutate strings than arrays or hashes, there
shouldn't be rules or gotchas to understanding our behavior.
However, I also appreciate that for apps which are using a lot of string
columns, this would increase the number of allocations by a large
factor. To ensure that we keep our contract, if you'd like to opt out of
mutation detection on strings, you'll also be opting out of mutation of
those strings.
I'm not completely married to the thought that strings coming out of
this type actually need to be frozen -- and I think the name is correct
either way, as the purpose of this type is to provide a string type
which does not detect mutation.
In the new implementation, I'm only overriding `cast_value`. I did not
port over the duping in `serialize`. I cannot think of a reason we'd
need to dup the string there, and the tests pass without it.
Unfortunately that line was introduced at a time when I was not nearly
as good about writing my commit messages, so I have no context as to
why I added it. Thanks, past Sean. You are a jerk.
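How the escape hatch reads at the model level, assuming the type is registered under the name `:immutable_string`; `User` and `token` are illustrative.

```ruby
class User < ActiveRecord::Base
  # Opt this column out of mutation detection (and, per the contract
  # above, out of mutation).
  attribute :token, :immutable_string
end

user = User.new(token: "abc123")
user.token << "!" # raises RuntimeError: can't modify frozen String
```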
Use the documented module instead of ActiveModel::Model.
This makes the example more focused.
[ci skip] Fix explanation of `ActiveModel::Serialization`
This explanation was changed by https://github.com/rails/rails/commit/7a27de2b,
which mistakenly swapped the including module (`ActiveModel::Serializers::JSON`)
and the included module (`ActiveModel::Serialization`).
The implementation of `attribute_method?` on Active Record requires
establishing a database connection and querying the schema. As a general
rule, we don't want to require database connections for any class macro,
as the class should be able to be loaded without a database (e.g. for
things like compiling assets).
Instead of eagerly defining these methods, we do it lazily the first
time they are accessed via `method_missing`. This should not cause any
performance hits, as it will only hit `method_missing` once for the
entire class.
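The general pattern, sketched outside Rails with hypothetical names: nothing expensive runs when the class is loaded, and the first call to a missing attribute method defines them all.

```ruby
class Model
  def self.attribute_names
    @attribute_names ||= %w[name email] # imagine a schema query here
  end

  def self.define_attribute_methods!
    attribute_names.each do |attr_name|
      define_method(attr_name) { @attributes && @attributes[attr_name] }
    end
  end

  def method_missing(name, *args, &block)
    if self.class.attribute_names.include?(name.to_s)
      self.class.define_attribute_methods! # happens once per class
      public_send(name, *args, &block)
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    self.class.attribute_names.include?(name.to_s) || super
  end
end

Model.new.name # method_missing fires once; #name exists from now on
```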
The biggest source of the performance regression in these methods
occurred because dirty tracking required eagerly materializing and type
casting the assigned values. In the previous commits, I've changed dirty
tracking to perform the comparisons lazily. However, all of this is moot
when calling `save`, since `changes_applied` will be called, which just
ends up eagerly materializing everything, anyway. With the new mutation
tracker, it's easy to just compare the previous two hashes in the same
lazy fashion.
We will not have aliasing issues with this setup, which is proven by the
fact that we're able to detect nested mutation.
Before:
User.create! 2.007k (± 7.1%) i/s - 10.098k
After:
User.create! 2.557k (± 3.5%) i/s - 12.789k
Fixes #19859
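A generic sketch of the lazy comparison idea; the class name is illustrative and not the real tracker API. Instead of eagerly materializing and type casting everything inside `changes_applied`, the before/after attribute sets are kept and individual entries are compared only on demand.

```ruby
class MutationTracker
  def initialize(attributes_before, attributes_after)
    @before = attributes_before
    @after  = attributes_after
  end

  # Values for an attribute are only examined when someone asks.
  def changed?(name)
    @before[name] != @after[name]
  end

  def changes
    @after.each_key.select { |name| changed?(name) }
  end
end
```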
This moves a bit more of the logic required for dirty checking into the
attribute objects. I had hoped to remove the `with_value_from_database`
stuff, but unfortunately just calling `dup` on the attribute objects
isn't enough, since the values might contain deeply nested data
structures. I think this can be cleaned up further.
This makes most dirty checking lazy, and reduces the number of object
allocations and the amount of CPU time when assigning a value. This
opens the door (but doesn't quite get there) to improving the
performance of writes to a level comparable to 4.1.
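Why `dup` alone wouldn't cut it, in plain Ruby: `dup` is shallow, so nested structures stay shared with the original.

```ruby
before = { "tags" => ["ruby"] }
after  = before.dup

after["tags"] << "rails" # mutates the array both hashes share
before # => {"tags"=>["ruby", "rails"]}, so the "original" changed too
```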
WIP: Fix the AS::Callbacks terminator regression from 4.2.3
In Rails 4.2.3, AS::Callbacks will not halt the chain when `false` is
returned. Halting on `false` is, however, the expected behavior of
specific callback layers such as AR::Callbacks and AM::Callbacks.
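A sketch of what those specific callback layers expect, using a hand-wired terminator; `Runner` is illustrative, and the terminator signature assumes the 5.0-era `(target, result_lambda)` API.

```ruby
require "active_support"

class Runner
  include ActiveSupport::Callbacks

  # Halt the chain when a callback returns false, as AR/AM wire up for
  # their own callbacks.
  define_callbacks :work,
    terminator: ->(_target, result_lambda) { result_lambda.call == false }

  set_callback :work, :before, -> { false }

  def run
    run_callbacks(:work) { puts "never reached" }
  end
end

Runner.new.run # the before callback returned false, so the body is skipped
```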
AR: take precision into account when assigning a value to a timestamp
attribute
A timestamp column can have less precision than a Ruby timestamp,
limiting how large a fraction of a second can be stored in the
database.

    m = Model.create!
    m.created_at.usec == m.reload.created_at.usec
    # => false
    # due to different sub-second precision in Time.now and the database column

If the column's precision is low enough (the MySQL default is 0, so by
default it is always low enough), the value changes when the model is
reloaded from the database. This patch fixes that issue by ensuring that
any timestamp assigned as an attribute is converted to the column's
precision within the attribute.
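A sketch of the rounding being applied; the helper name paraphrases the Active Record code rather than quoting it. The digits of `usec` that the column cannot store are dropped at assignment time.

```ruby
require "active_support/core_ext/time/calculations" # for Time#change

def apply_seconds_precision(time, precision)
  round_power = 10**(6 - precision)
  time.change(usec: time.usec - time.usec % round_power)
end

now = Time.utc(2016, 3, 1, 12, 0, 0, 123_456)
apply_seconds_precision(now, 0).usec # => 0      (the MySQL default)
apply_seconds_precision(now, 3).usec # => 123000
```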
In Active Record, it appears these requires were only working
incidentally, likely due to test ordering, since calling the method
`Float#to_d` alone wouldn't trigger them. This makes the requires
explicit, and unlikely to fail in the future.
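The dependency in question, made explicit:

```ruby
# Float#to_d lives in bigdecimal/util, so relying on something else
# having loaded it is fragile.
require "bigdecimal"
require "bigdecimal/util"

1.5.to_d # => a BigDecimal, only because the requires above ran
```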
The `override` option is only a thing for Active Record registrations.
We should figure out how to make this properly error out without doing
anything too weird to the code.
Any tests for a type which is not overridden by Active Record, and does
not test the specifics of the attributes API interacting in more complex
ways have no reason to be in the Active Record suite. Doing this
revealed that the implementation of the date and time types in AM was
actually completely broken, and incapable of returning any value other
than `nil`.
Things like decorations, overrides, and priorities only matter for
Active Record, so the Active Model registry can be implemented much more
simply. At this point, I wonder if having Active Record's registry
inherit from Active Model's is even worth the trouble?
The Active Model class was also missing test cases, which have been
backfilled.
This removes the error when two types are registered with the same name,
but given that Active Model is meant to be significantly more generic, I
do not think this is an issue for now. If we want, we can raise an error
at the point that someone tries to register it.
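A sketch of how small such a registry can be once decorations, overrides, and priorities are dropped; this is illustrative, not the exact implementation. It is essentially a name-to-constructor table.

```ruby
class SimpleRegistry
  def initialize
    @registrations = {}
  end

  # Register either a class to instantiate or a block to call.
  def register(type_name, klass = nil, &block)
    block ||= proc { |_name, *args| klass.new(*args) }
    @registrations[type_name] = block
  end

  def lookup(type_name, *args)
    registration = @registrations[type_name]
    raise ArgumentError, "Unknown type #{type_name}" unless registration
    registration.call(type_name, *args)
  end
end

registry = SimpleRegistry.new
registry.register(:string) { |_name| "a string type instance" }
registry.lookup(:string) # => "a string type instance"
```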