Fix the documentation of the `Hash#except` method [ci skip]
Fix minor problems
We can save a few objects by freezing the `replacement` string, and a few more by downcasing the string in place instead of allocating a new one. We save far more objects by checking for the default separator `"-"` and using pre-generated regular expressions.
In total, this saves 209,231 bytes and 1,322 objects.
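A minimal sketch of what that looks like (the constant names and exact regexps below are illustrative, not the actual Rails internals):

```ruby
# Hedged sketch: freeze the defaults and pre-build the regexps for the
# common "-" separator so the hot path allocates no new strings or regexps.
DEFAULT_SEPARATOR = "-".freeze
DEFAULT_DUPLICATE_SEPARATOR_REGEX = /-{2,}/
DEFAULT_LEADING_TRAILING_REGEX    = /^-|-$/

def parameterize(string, separator: DEFAULT_SEPARATOR)
  parameterized = string.gsub(/[^a-z0-9\-_]+/i, separator)

  if separator == DEFAULT_SEPARATOR
    # Hot path: reuse the pre-generated regular expressions.
    parameterized.gsub!(DEFAULT_DUPLICATE_SEPARATOR_REGEX, separator)
    parameterized.gsub!(DEFAULT_LEADING_TRAILING_REGEX, "".freeze)
  else
    re_sep = Regexp.escape(separator)
    parameterized.gsub!(/#{re_sep}{2,}/, separator)
    parameterized.gsub!(/^#{re_sep}|#{re_sep}$/, "".freeze)
  end

  parameterized.downcase!   # downcase in place instead of allocating a copy
  parameterized
end

parameterize("Donald E. Knuth") # => "donald-e-knuth"
```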
In `apply_inflections`, a string is downcased and has its leading whitespace stripped (both of which allocate new strings). This would normally be fine; however, `uncountables` is a fairly small array (10 elements out of the box) and this method gets called a huge number of times. Instead, we can keep an array of regexes, one per uncountable word, so we don't have to allocate new strings.
This change buys us 325,106 bytes of memory and 3,251 fewer objects per request.
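Roughly, the idea is to trade two string allocations per call for a set of regexes built once. A hedged sketch, with illustrative names rather than the exact Active Support internals:

```ruby
# Hedged sketch: build one case-insensitive regexp per uncountable word up
# front, then match against the original string instead of allocating a
# downcased, stripped copy on every call.
class Uncountables
  def initialize(words)
    @regexes = words.map { |word| /\b#{Regexp.escape(word)}\Z/i }
  end

  def uncountable?(str)
    @regexes.any? { |regex| regex.match?(str) }
  end
end

UNCOUNTABLES = Uncountables.new(%w(equipment information money species))

def apply_inflections(word, rules)
  return word if word.empty? || UNCOUNTABLES.uncountable?(word)

  rules.each do |(rule, replacement)|
    return word.sub(rule, replacement) if word.match?(rule)
  end
  word
end

apply_inflections("posts",     [[/s\z/i, ""]]) # => "post"
apply_inflections("equipment", [[/s\z/i, ""]]) # => "equipment"
```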
We need stricter locking before we can unload
* only test the upgrade path,
* add a test to verify non-upgrades can’t preempt,
* add a reentrancy assertion.
Specifically, clean up if the thread is killed while it's blocked
awaiting the lock... if we get killed on some other arbitrary line, the
result remains quite undefined.
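A hypothetical sketch of the kind of cleanup meant here (not the actual ShareLock code): the `ensure` clause still runs when `Thread#kill` interrupts the wait, so a killed waiter removes its own bookkeeping:

```ruby
# Hypothetical sketch, not the actual ShareLock implementation.
class ToyLock
  def initialize
    @mutex   = Mutex.new
    @cv      = ConditionVariable.new
    @waiting = {}
  end

  # Block the calling thread until the condition holds. If the thread is
  # killed while it sleeps in ConditionVariable#wait, the ensure clause
  # still runs, so no stale entry is left behind in @waiting.
  def wait_until(&condition)
    @mutex.synchronize do
      @waiting[Thread.current] = true
      begin
        @cv.wait(@mutex) until condition.call
      ensure
        @waiting.delete(Thread.current)
      end
    end
  end
end
```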
I accidentally discovered `assert_threads_not_stuck` couldn't fail, so
the simplest solution was to prove they're all now working in both
directions.
If the thread isn't yet holding any form of lock, it has no claim over
what may / may not run while it's blocked.
Specifically, the "loose upgrades" behaviour that allows us to obtain an
exclusive right to load things while other requests are in progress (but
waiting on the exclusive lock for themselves) prevents us from treating
load & unload interchangeably: new things appearing is fine, but they do
*not* expect previously-present constants to vanish.
We can still use loose upgrades for unloading -- once someone has
decided to unload, they don't really care if someone else gets there
first -- it just needs to be tracked separately.
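As a toy illustration of that idea (not the real ShareLock API): tag each pending exclusive request with its purpose, and only let a newcomer jump ahead if every waiter declared that purpose compatible:

```ruby
# Toy illustration only: a waiter for the exclusive lock records which
# purposes may overtake it. A "load" waiter tolerates other loads (new
# constants appearing is fine) but not an "unload", which would make
# previously-present constants vanish underneath it.
ExclusiveRequest = Struct.new(:thread, :purpose, :compatible)

def may_preempt?(purpose, waiting_requests)
  waiting_requests.all? { |request| request.compatible.include?(purpose) }
end

waiting = [ExclusiveRequest.new(Thread.current, :load, [:load])]

may_preempt?(:load,   waiting) # => true
may_preempt?(:unload, waiting) # => false
```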
That's outside our remit, and dangerous... if a caller has their own
locking to protect against the natural race danger, we'll deadlock
against it.
Freeze string literals when not mutated.
I wrote a utility that helps find areas where you could optimize your program by using a frozen string instead of a string literal; it's called [let_it_go](https://github.com/schneems/let_it_go). After going through the output and adding `.freeze`, I was able to eliminate the creation of 1,114 string objects on EVERY request to [codetriage](codetriage.com). How does this impact execution?
To look at memory:
```ruby
require 'get_process_mem'

mem = GetProcessMem.new
GC.start
GC.disable

before = mem.mb
1_114.times { " " }
after = mem.mb

GC.enable
puts "Diff: #{after - before} mb"
```
Creating 1,114 string objects results in `Diff: 0.03125 mb` of RAM allocated on every request, or 1 MB every 32 requests.
To look at raw speed:
```ruby
require 'benchmark/ips'
number_of_objects_reduced = 1_114
Benchmark.ips do |x|
  x.report("freeze")    { number_of_objects_reduced.times { " ".freeze } }
  x.report("no-freeze") { number_of_objects_reduced.times { " " } }
end
```
We get these results:
```
Calculating -------------------------------------
freeze 1.428k i/100ms
no-freeze 609.000 i/100ms
-------------------------------------------------
freeze 14.363k (± 8.5%) i/s - 71.400k
no-freeze 6.084k (± 8.1%) i/s - 30.450k
```
Now we can do some maths:
```ruby
ips = 6_226 # iterations / 1 second
call_time_before = 1.0 / ips # seconds per iteration
ips = 15_254 # iterations / 1 second
call_time_after = 1.0 / ips # seconds per iteration
diff = call_time_before - call_time_after
number_of_objects_reduced * diff * 100
# => 0.4530373333993266 milliseconds saved per request
```
So we're shaving off 1 second of execution time for every 2,208 requests.
Is this going to be an insane speed boost to any Rails app? Nope. Should we merge it? Yep.
p.s. If you know of a method call that doesn't modify a string input such as [String#gsub](https://github.com/schneems/let_it_go/blob/b0e2da69f0cca87ab581022baa43291cdf48638c/lib/let_it_go/core_ext/string.rb#L37) please [give me a pull request to the appropriate file](https://github.com/schneems/let_it_go/blob/b0e2da69f0cca87ab581022baa43291cdf48638c/lib/let_it_go/core_ext/string.rb#L37), or open an issue in LetItGo so we can track and freeze more strings.
Keep those strings Frozen
![](https://www.dropbox.com/s/z4dj9fdsv213r4v/let-it-go.gif?dl=1)
Fix `TimeWithZone#eql?` to handle `TimeWithZone` created from `DateTime`
Before:
```ruby
twz = DateTime.now.in_time_zone
twz.eql?(twz.dup) # => false
```
Now:
```ruby
twz = DateTime.now.in_time_zone
twz.eql?(twz.dup) # => true
```
Please note that this fixes the `TimeWithZone` comparison to itself,
not to `DateTime`. Based on #3725, `DateTime` should not be equal to
`TimeWithZone`.
TheBlasfem/added_examples_dateandtime_calculations
Added examples to DateAndTime::Calculations [ci skip]
enumerator if called without block
Various grammar corrections and wrap to 80 characters.
Revert "Revert "Reduce allocations when running AR callbacks.""
This reverts commit bdc1d329d4eea823d07cf010064bd19c07099ff3.
Before:
Calculating -------------------------------------
22.000 i/100ms
-------------------------------------------------
229.700 (± 0.4%) i/s - 1.166k
Total Allocated Object: 9939
After:
Calculating -------------------------------------
24.000 i/100ms
-------------------------------------------------
246.443 (± 0.8%) i/s - 1.248k
Total Allocated Object: 7939
```ruby
begin
  require 'bundler/inline'
rescue LoadError => e
  $stderr.puts 'Bundler version 1.10 or later is required. Please update your Bundler'
  raise e
end

gemfile(true) do
  source 'https://rubygems.org'
  # gem 'rails', github: 'rails/rails', ref: 'bdc1d329d4eea823d07cf010064bd19c07099ff3'
  gem 'rails', github: 'rails/rails', ref: 'd2876141d08341ec67cf6a11a073d1acfb920de7'
  gem 'arel', github: 'rails/arel'
  gem 'sqlite3'
  gem 'benchmark-ips'
end

require 'active_record'
require 'benchmark/ips'

ActiveRecord::Base.establish_connection('sqlite3::memory:')
ActiveRecord::Migration.verbose = false

ActiveRecord::Schema.define do
  create_table :users, force: true do |t|
    t.string :name, :email
    t.boolean :admin
    t.timestamps null: false
  end
end

class User < ActiveRecord::Base
  default_scope { where(admin: true) }
end

admin = true
1000.times do
  attributes = {
    name: "Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
    email: "foobar@email.com",
    admin: admin
  }
  User.create!(attributes)
  admin = !admin
end

GC.disable

Benchmark.ips(5, 3) do |x|
  x.report { User.all.to_a }
end

key =
  if RUBY_VERSION < '2.2'
    :total_allocated_object
  else
    :total_allocated_objects
  end

before = GC.stat[key]
User.all.to_a
after = GC.stat[key]
puts "Total Allocated Object: #{after - before}"
```
The concurrent-ruby gem is a toolset containing many concurrency
utilities. Many of these utilities include runtime-specific
optimizations when possible. Rather than clutter the Rails codebase with
concurrency utilities separate from the core task, such tools can be
superseded by similar tools in the more specialized gem. This commit
replaces `ActiveSupport::Concurrency::Latch` with
`Concurrent::CountDownLatch`, which is functionally equivalent.
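For reference, the replacement reads roughly like this (where the old latch used `#release` and `#await`, the concurrent-ruby latch uses `#count_down` and `#wait`):

```ruby
require 'concurrent'

# One-shot latch: the main thread blocks until the worker signals.
latch = Concurrent::CountDownLatch.new(1)

worker = Thread.new do
  # ... do some work ...
  latch.count_down   # roughly equivalent to the old latch.release
end

latch.wait           # roughly equivalent to the old latch.await
worker.join
```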
Improve duplicable documentation [ci skip]
Concurrent load interlock (rm Rack::Lock)
We don't need to fully disable concurrent requests: just ensure that
loads are performed in isolation.
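A rough sketch of that shape using a plain read/write lock (the real interlock is more nuanced than this): requests share the lock, and a load takes it exclusively, so loading happens in isolation without serializing every request the way `Rack::Lock` did:

```ruby
require 'concurrent'

# Rough sketch only; the actual Rails interlock has more states than this.
INTERLOCK = Concurrent::ReadWriteLock.new

def run_request
  INTERLOCK.with_read_lock do
    # Handle the request; other requests proceed concurrently.
  end
end

def load_missing_constants
  INTERLOCK.with_write_lock do
    # Perform autoloading alone; no request is mid-flight here.
  end
end
```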
See 2f26f611 for more info.
Same fix as 109e71d2bb6d2305a091fe7ea96d4f6e9c7cd52d but after
mocha got removed in 2f28e5b6417fd4e5d6060983b36262737558b613.
active_support/indifferent_access: fix not raising when default_proc does