| Commit message | Author | Age | Files | Lines |
File renaming should be the last operation of an atomic write
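The usual shape of an atomic write is: write to a temporary file, fix up ownership and permissions, and only then rename the temp file over the target, so a reader can never observe a half-written file. A minimal sketch using ActiveSupport's `File.atomic_write` (assuming the activesupport gem is available; the file name is made up):
```ruby
require 'active_support/core_ext/file/atomic'

# All writes go to a temp file; the rename onto the final path happens last,
# so other processes only ever see the old contents or the complete new ones.
File.atomic_write('config/settings.yml') do |file|
  file.write("greeting: hello\n")
end
```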
Correct cache store superclass in comment [ci skip]
Cleaned up generators tests using internal assertion helper
The perf gain is relatively minor but consistent:
```
Calculating -------------------------------------
0.zero? 137.091k i/100ms
1.zero? 137.350k i/100ms
0 == 0 142.207k i/100ms
1 == 0 144.724k i/100ms
-------------------------------------------------
0.zero? 8.893M (± 6.5%) i/s - 44.280M
1.zero? 8.751M (± 6.4%) i/s - 43.677M
0 == 0 10.033M (± 7.0%) i/s - 49.915M
1 == 0 9.814M (± 8.0%) i/s - 48.772M
```
And `try!` is quite a big hotspot for us, so every little gain is appreciable.
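For reference, a benchmark/ips script along these lines (my reconstruction, not necessarily the exact script used for the numbers above) reproduces the comparison:
```ruby
require 'benchmark/ips'

Benchmark.ips do |x|
  x.report("0.zero?") { 0.zero? }
  x.report("1.zero?") { 1.zero? }
  x.report("0 == 0")  { 0 == 0 }
  x.report("1 == 0")  { 1 == 0 }
end
```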
Add method_source dependency to activesupport
Added helper methods to stub any instance
Assert that the `:prefix` option of `number_to_human_size` is deprecated
[ci skip] Documentation: Switch around a common phrase for readability
Reload I18n.load_path in development
[ci skip] Fix the AS::Callbacks terminator docs
The second argument of the terminator lambda is no longer the result
of the callback, but the result lambda.
https://github.com/rails/rails/blob/3a7609e2bafee4b071fe35136274e6ccbae8cacd/activesupport/test/callbacks_test.rb#L553
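A small sketch of the documented behaviour (the class and callback here are invented for illustration): the terminator receives the target and a lambda wrapping the callback's result, and has to call that lambda to get at the actual value.
```ruby
require 'active_support'

class Order
  include ActiveSupport::Callbacks

  # The terminator's second argument is not the callback's result itself but a
  # lambda wrapping it; calling it yields the value the callback returned.
  define_callbacks :save, terminator: ->(_target, result_lambda) { result_lambda.call == false }

  set_callback(:save, :before) { true } # returning false here would halt the chain

  def save
    run_callbacks(:save) { 'saved' }
  end
end

Order.new.save # => "saved"
```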
Using `each_key` is faster and more intention-revealing:
```
Calculating -------------------------------------
each 31.378k i/100ms
each_key 33.790k i/100ms
-------------------------------------------------
each 450.225k (± 7.0%) i/s - 2.259M
each_key 494.459k (± 6.3%) i/s - 2.467M
Comparison:
each_key: 494459.4 i/s
each: 450225.1 i/s - 1.10x slower
```
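A script along the following lines (the hash contents are my assumption) reproduces the comparison, with `compare!` producing the summary shown above:
```ruby
require 'benchmark/ips'

hash = { name: "Rails", version: "5.0", language: "Ruby" }

Benchmark.ips do |x|
  x.report("each")     { hash.each { |key, _value| key } }
  x.report("each_key") { hash.each_key { |key| key } }
  x.compare!
end
```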
Discussion https://github.com/JuanitoFatas/fast-ruby/pull/59#issuecomment-128513763
cause issues if it is not idempotent
Fix the documentation of Hash#except method [ci skip]
fix minor problems
We can save a few objects by freezing the `replacement` string. We save a few more by down-casing the string in memory instead of allocating a new one. We save far more objects by checking for the default separator `"-"`, and using pre-generated regular expressions.
We will save 209,231 bytes and 1,322 objects.
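The shape of the change is roughly the following (an illustrative sketch, not the actual Rails code; all names are made up):
```ruby
# Cache the regexps for the common default separator so the hot path
# allocates no new Regexp or String objects.
DEFAULT_SEPARATOR        = "-".freeze
DEFAULT_DUPLICATE_RUNS   = /-{2,}/      # squeeze runs of "-"
DEFAULT_LEADING_TRAILING = /\A-+|-+\z/  # trim "-" from both ends

def squish_and_trim(string, separator = DEFAULT_SEPARATOR)
  if separator == DEFAULT_SEPARATOR
    duplicates, edges = DEFAULT_DUPLICATE_RUNS, DEFAULT_LEADING_TRAILING
  else
    escaped    = Regexp.escape(separator)
    duplicates = /#{escaped}{2,}/
    edges      = /\A#{escaped}+|#{escaped}+\z/
  end
  string.gsub!(duplicates, separator)
  string.gsub!(edges, "".freeze)
  string.downcase! # down-case in place instead of allocating a new string
  string
end
```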
In `apply_inflections` a string is downcased and has leading whitespace stripped (both of which allocate new strings). This would normally be fine; however, `uncountables` is a fairly small array (10 elements out of the box) and this method gets called a TON. Instead, we can keep an array of valid regexes for each uncountable so we don't have to allocate new strings.
This change buys us 325,106 bytes of memory and 3,251 fewer objects per request.
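A rough sketch of that idea (the class and method names are illustrative, not the exact Rails implementation):
```ruby
# Store one case-insensitive, anchored regexp per uncountable word, so checking
# a candidate never needs to downcase or strip a fresh copy of the input.
class Uncountables
  def initialize
    @regexes = []
  end

  def add(word)
    @regexes << /\b#{Regexp.escape(word.downcase)}\z/i
    self
  end

  def uncountable?(str)
    @regexes.any? { |regex| regex.match?(str) }
  end
end

uncountables = Uncountables.new.add("sheep").add("equipment")
uncountables.uncountable?("Heavy Equipment") # => true
uncountables.uncountable?("shipment")        # => false
```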
We need stricter locking before we can unload
* only test the upgrade path,
* add a test to verify that non-upgrades can’t preempt,
* add a reentrancy assertion.
Specifically, clean up if the thread is killed while it's blocked
awaiting the lock... if we get killed on some other arbitrary line, the
result remains quite undefined.
I accidentally discovered `assert_threads_not_stuck` couldn't fail, so
the simplest solution was to prove they're all now working in both
directions.
If the thread isn't yet holding any form of lock, it has no claim over
what may / may not run while it's blocked.
Specifically, the "loose upgrades" behaviour that allows us to obtain an
exclusive right to load things while other requests are in progress (but
waiting on the exclusive lock for themselves) prevents us from treating
load & unload interchangeably: new things appearing is fine, but they do
*not* expect previously-present constants to vanish.
We can still use loose upgrades for unloading -- once someone has
decided to unload, they don't really care if someone else gets there
first -- it just needs to be tracked separately.
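A hedged illustration of the distinction, written against `ActiveSupport::Concurrency::ShareLock`; the purposes and compatibility lists below are my own illustrative values, not necessarily the ones the framework ends up using:
```ruby
require 'active_support'
require 'active_support/concurrency/share_lock'

lock = ActiveSupport::Concurrency::ShareLock.new

loader = Thread.new do
  lock.sharing do
    # A loader waiting to upgrade is happy to let other pending :load
    # upgrades go first -- constants appearing early is harmless.
    lock.exclusive(purpose: :load, compatible: [:load]) do
      # autoload something
    end
  end
end

unloader = Thread.new do
  lock.sharing do
    # Unloading is tracked as its own purpose: it will yield to pending
    # :load upgrades, but a pending loader never lists :unload as
    # compatible, so its constants can't vanish underneath it.
    lock.exclusive(purpose: :unload, compatible: [:load, :unload]) do
      # remove autoloaded constants
    end
  end
end

[loader, unloader].each(&:join)
```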
That's outside our remit, and dangerous... if a caller has their own
locking to protect against the natural race danger, we'll deadlock
against it.
Freeze string literals when not mutated.
I wrote a utility that helps find areas where you could optimize your program by using a frozen string instead of a string literal; it's called [let_it_go](https://github.com/schneems/let_it_go). After going through the output and adding `.freeze`, I was able to eliminate the creation of 1,114 string objects on EVERY request to [codetriage](codetriage.com). How does this impact execution?
To look at memory:
```ruby
require 'get_process_mem'
mem = GetProcessMem.new
GC.start
GC.disable
before = mem.mb
1_114.times { " " }
after = mem.mb
GC.enable
puts "Diff: #{after - before} mb"
```
Creating 1,114 string objects results in `Diff: 0.03125 mb` of RAM allocated on every request, or 1 mb every 32 requests.
To look at raw speed:
```ruby
require 'benchmark/ips'
number_of_objects_reduced = 1_114
Benchmark.ips do |x|
x.report("freeze") { number_of_objects_reduced.times { " ".freeze } }
x.report("no-freeze") { number_of_objects_reduced.times { " " } }
end
```
We get the results:
```
Calculating -------------------------------------
freeze 1.428k i/100ms
no-freeze 609.000 i/100ms
-------------------------------------------------
freeze 14.363k (± 8.5%) i/s - 71.400k
no-freeze 6.084k (± 8.1%) i/s - 30.450k
```
Now we can do some maths:
```ruby
ips_no_freeze = 6_084  # iterations / 1 second (each iteration = 1,114 allocations)
ips_freeze    = 14_363 # iterations / 1 second
call_time_before = 1.0 / ips_no_freeze # seconds per request's worth of allocations
call_time_after  = 1.0 / ips_freeze
diff = call_time_before - call_time_after
diff * 1_000
# => ~0.095 milliseconds saved per request
```
So we're shaving off about 1 second of execution time for every 10,500 requests.
Is this going to be an insane speed boost to any Rails app: nope. Should we merge it: yep.
p.s. If you know of a method call that doesn't modify a string input, such as [String#gsub](https://github.com/schneems/let_it_go/blob/b0e2da69f0cca87ab581022baa43291cdf48638c/lib/let_it_go/core_ext/string.rb#L37), please [give me a pull request to the appropriate file](https://github.com/schneems/let_it_go/blob/b0e2da69f0cca87ab581022baa43291cdf48638c/lib/let_it_go/core_ext/string.rb#L37) or open an issue in LetItGo so we can track and freeze more strings.
Keep those strings Frozen
![](https://www.dropbox.com/s/z4dj9fdsv213r4v/let-it-go.gif?dl=1)
Fix `TimeWithZone#eql?` to handle `TimeWithZone` created from `DateTime`