Dylan Thacker-Smith (dylanahsmith) · Shopify · Ottawa, ON

airbrake/node-airbrake 182

node-airbrake is no longer maintained. Please visit https://airbrake.io/docs/performance-monitoring/updating-from-deprecated-libraries-for-node/

dylanahsmith/ar_transaction_changes 66

Store transaction changes for active record objects

csfrancis/harb 15

Ruby 2.x objspace heap dump analyzer

dylanahsmith/elasticsearch-examples 15

Examples of ElasticSearch usage

dylanahsmith/dotfiles 2

Home directory configuration

dylanahsmith/errbit-reporter-python 2

Python Errbit Client

dylanahsmith/airbrake 1

The official Airbrake library for Ruby on Rails. Links to other Airbrake libraries are in the ReadMe.

dylanahsmith/app-proxy-test 1

Shopify application for testing application proxies

dylanahsmith/batman 1

Fighting Crime and Kicking Apps

push event dylanahsmith/rubocop

Dylan Thacker-Smith

commit sha 22d5d6ae632e09a430695379a5b77390f691d076

[Fix #6918] Add support for variable alignment to Layout/RescueEnsureAlignment

view details

push time in 7 days

push event dylanahsmith/rubocop

Koichi ITO

commit sha d54e15cd4e09cb74be98c5906ec944b88f64051f

Add the Style Guide URL for `Gemspec/RubyVersionGlobalsUsage`

Follow up of https://github.com/rubocop-hq/ruby-style-guide/pull/782 and https://github.com/rubocop-hq/ruby-style-guide/pull/784. This PR adds the Style Guide URL for `Gemspec/RubyVersionGlobalsUsage`.

view details

khiav reoy

commit sha 72768726b4a82078795b0013c87419af3c403ca6

Support EnforcedStyleForExponentOperator for SpaceAroundOperators cop

view details

Sander Verdonschot

commit sha cc7e221b0e4723a75b59c4bd725a5555a13068dd

Add new Lint/NonDeterministicRequireOrder cop (#7528)

Dir[...] and Dir.glob(...) make no guarantees about the order files are returned in:

> Case sensitivity depends on your system [...], as does the order in which the results are returned.

This becomes a problem when they are used for applications in which the order can matter, such as requiring files. At worst, this can lead to bugs that only happen intermittently in production and can't be reproduced in development. This cop suggests adding a .sort when requiring the files:

```ruby
# bad
Dir["./lib/middleware/*.rb"].each do |file|
  require file
end

# good
Dir["./lib/middleware/*.rb"].sort.each do |file|
  require file
end
```
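The behaviour the cop guards against can be demonstrated on its own, independently of RuboCop (the temporary directory and file names here are illustrative):

```ruby
require "tmpdir"

# Dir.glob's ordering is filesystem-dependent, so sort before requiring.
Dir.mktmpdir do |dir|
  # Create files in a deliberately non-alphabetical order.
  %w[b.rb a.rb c.rb].each { |name| File.write(File.join(dir, name), "") }

  files = Dir[File.join(dir, "*.rb")].sort
  basenames = files.map { |f| File.basename(f) }
  # After .sort the order is deterministic regardless of the filesystem.
  basenames # => ["a.rb", "b.rb", "c.rb"]
end
```

Without the `.sort`, the same code may pass on one machine and fail on another, which is exactly the intermittent-production-bug scenario the commit message describes.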

view details

Koichi ITO

commit sha dff7052e6311c05a3e1fec58f6799878c7f99912

Replace "can not" with "cannot" This PR unifies wording. cf. https://github.com/rails/rails/pull/35503

view details

Andreas Bühmann

commit sha b88ff7511cb12d2b16440eb8bea90e50617102b0

[Fix #7574] Fix corner case in Style/GuardClause

view details

Andreas Bühmann

commit sha acea172af6d0a2b7e2f293350da111c06b76dc86

Fix an oversight in specs

view details

Koichi ITO

commit sha d2d8c663de1bbc8b0839dc9d3a09f2841b74d81c

Merge pull request #7575 from buehmann/guard-empty-begin/7574

[Fix #7574] Fix corner case in Style/GuardClause

view details

Bozhidar Batsov

commit sha 7aa37664e4d877cfb522b6af153e5d4c16ca9a5a

Cut 0.78

view details

Ian

commit sha 18b55599846d72b9458dfc553c72e83f212f5f4d

analized -> analyzed

view details

Koichi ITO

commit sha 161dcd0eb7ec189ef4a0fee7923c0c8bee156975

Merge pull request #7580 from ianfixes/patch-1

analized -> analyzed

view details

Koichi ITO

commit sha 76e07a29782acd884a5ee73f70a284e985ef0351

Add how to write a breaking change entry to the CONTRIBUTING.md

This change is based on:

> All breaking changes are clearly marked with **(Breaking)** in the Changelog.

https://metaredux.com/posts/2019/12/15/a-uniform-rubocop.html

view details

Andreas Bühmann

commit sha 3d63f19d5007353a0ea3c7fdabd319b93a36d770

Remove unnecessary use of regexp

view details

Andreas Bühmann

commit sha 3bcfb9373d8e7df8bcfe6fbe67018afeca0208ac

[Fix #7193] Fix string_source for symbol case (%i)

This actually follows up on https://github.com/rubocop-hq/rubocop/pull/5020

view details

Koichi ITO

commit sha 46e412e27cf1000d8cd7ba2e13b110693180305d

Suppress a deprecation warning when using Ruby 2.7.0-dev

This PR suppresses the following deprecation warning when using Ruby 2.7.0-dev.

```console
% ruby -v
ruby 2.7.0dev (2019-12-23T02:48:54Z master 048f797bf0) [x86_64-darwin17]
% bundle exec rake
(snip)
/Users/koic/.rbenv/versions/2.7.0-dev/lib/ruby/gems/2.7.0/gems/rspec-core-3.9.0/lib/rspec/core/shared_example_group.rb:36: warning: The last argument is used as keyword parameters; maybe ** should be added to the call
/Users/koic/src/github.com/rubocop-hq/rubocop/spec/rubocop/cop/style/numeric_predicate_spec.rb:12: warning: The called method is defined here
```

https://bugs.ruby-lang.org/issues/14183

view details

Phil Pirozhkov

commit sha 5fb9db3ca1fea463f75503cac0a327c57d59fb11

[Fix #5979] Introduce cops with special status (#7567)

Previously, new cops were introduced as either enabled or disabled. The ones enabled were bothering users with new offences, while disabled cops were often left out and remained under their radar, while still being useful.

By introducing this special status, users have to decide how to handle new cops, by explicitly enabling or disabling them. Cops are to be introduced with pending status between major releases of RuboCop and its extensions, and they eventually become enabled or disabled on major releases.

Co-authored-by: Phil Pirozhkov <hello@fili.pp.ru>

view details

Bozhidar Batsov

commit sha b0c6efdc5f542e4901e3b75a104edd95d40c55e8

Fix the changelog

view details

Andreas Bühmann

commit sha ed083fb167352c2817e3b6cd86c63ef5f9ebddd4

[Fix #7593] Do not Kernel#abort on validation errors

view details

Koichi ITO

commit sha c66efb96904abe008bab81ee5de046704070227e

[Fix #7590] Fix an error for `Layout/SpaceBeforeBlockBraces`

Fixes #7590. This PR fixes an error for `Layout/SpaceBeforeBlockBraces` when used with `EnforcedStyle: line_count_based` of the `Style/BlockDelimiters` cop.

view details

Koichi ITO

commit sha a7dcd1c7c5eac967603b5a819d59c10daea0eb58

[Fix #7569] Make `Style/YodaCondition` accept `__FILE__ == $0`

Fixes #7569. This PR makes the `Style/YodaCondition` cop accept `__FILE__ == $0`. I think that both operands of `__FILE__ == $0` can be assumed to be read-only, as mentioned in #7569. If users try to assign `__FILE__`, an error will occur. Also, `$0` is not usually assigned by users. Therefore, this PR makes the `Style/YodaCondition` cop accept both `__FILE__` and `$0` on the left side. Also, this PR changes only the idiom `__FILE__ == $0`.

view details

Koichi ITO

commit sha a960b74d7be709b11e4b811328bb553246036ffd

Fix an error when using Parser 2.7.0.0

This PR fixes the following error when using Parser 2.7.0.0. https://circleci.com/gh/rubocop-hq/rubocop/78846

```console
% cd path/to/repo/rubocop
% bundle exec rake internal_investigation
Running RuboCop...
invalid byte sequence in UTF-8
/Users/koic/src/github.com/whitequark/parser/lib/parser/lexer/dedenter.rb:40:in `split'
/Users/koic/src/github.com/whitequark/parser/lib/parser/lexer/dedenter.rb:40:in `dedent'
/Users/koic/src/github.com/whitequark/parser/lib/parser/builders/default.rb:288:in `block in dedent_string'
/Users/koic/src/github.com/whitequark/parser/lib/parser/builders/default.rb:285:in `each'
/Users/koic/src/github.com/whitequark/parser/lib/parser/builders/default.rb:285:in `dedent_string'
/Users/koic/src/github.com/whitequark/parser/lib/parser/ruby23.rb:5713:in `_reduce_435'
/Users/koic/.rbenv/versions/2.6.5/lib/ruby/2.6.0/racc/parser.rb:259:in `_racc_do_parse_c'
/Users/koic/.rbenv/versions/2.6.5/lib/ruby/2.6.0/racc/parser.rb:259:in `do_parse'
/Users/koic/src/github.com/whitequark/parser/lib/parser/base.rb:189:in `parse'
/Users/koic/src/github.com/whitequark/parser/lib/parser/base.rb:236:in `tokenize'
/Users/koic/src/github.com/rubocop-hq/rubocop/lib/rubocop/processed_source.rb:163:in `tokenize'
/Users/koic/src/github.com/rubocop-hq/rubocop/lib/rubocop/processed_source.rb:158:in `parse'
/Users/koic/src/github.com/rubocop-hq/rubocop/lib/rubocop/processed_source.rb:36:in `initialize'
/Users/koic/src/github.com/rubocop-hq/rubocop/lib/rubocop/processed_source.rb:17:in `new'
/Users/koic/src/github.com/rubocop-hq/rubocop/lib/rubocop/processed_source.rb:17:in `from_file'
/Users/koic/src/github.com/rubocop-hq/rubocop/lib/rubocop/runner.rb:365:in `get_processed_source'
/Users/koic/src/github.com/rubocop-hq/rubocop/lib/rubocop/runner.rb:118:in `block in file_offenses'
/Users/koic/src/github.com/rubocop-hq/rubocop/lib/rubocop/runner.rb:143:in `file_offense_cache'
/Users/koic/src/github.com/rubocop-hq/rubocop/lib/rubocop/runner.rb:117:in `file_offenses'
```

The percent_string_array_spec.rb file contains a test case for binary encoded source, but the default file encoding is UTF-8.

```ruby
%W[\xC0 "foo"]
```

https://github.com/rubocop-hq/rubocop/blob/v0.78.0/spec/rubocop/cop/lint/percent_string_array_spec.rb#L95

This error is affected by the following Parser gem changes: https://github.com/whitequark/parser/pull/641

I'm not confident about the solution; anyway, I opened the following patch as a PR to the whitequark/parser repository. https://github.com/whitequark/parser/pull/642

view details

push time in 7 days

pull request comment rubocop-hq/rubocop

Add support for variable alignment to Layout/RescueEnsureAlignment

It would make sense to add support for `start_of_line` as well, but perhaps that should be done in another PR to keep this one focused.

dylanahsmith

comment created time in 7 days

push event Shopify/identity_cache

Dylan Thacker-Smith

commit sha 214efcd87a194a1aeb70b019935d5b1813c2bb3f

Add support for fill lock with lock wait to avoid thundering herd problem

view details

push time in 7 days

push event Shopify/identity_cache

Dylan Thacker-Smith

commit sha 4554368752a07c21e09a54401e4663a94478af00

Avoid exposing primary index methods on models including WithoutPrimaryIndex (#414)

view details

Dylan Thacker-Smith

commit sha 8945a94daf5e6095756a747bb3de9ace06ae8c49

Move primary cache index implementation to a Cached::PrimaryIndex class (#415)

Similar to the cached association objects.

view details

Dylan Thacker-Smith

commit sha 83522d0de8843bb45332f82c33bfd395c2fd3e8d

Extract cache attribute implementation into a separate class (#416)

view details

Dylan Thacker-Smith

commit sha 97866057b288d5681631e3e9d9ba50b08bdc063e

Lazily query the primary key for cache_index again (#417)

It was accidentally made eager when refactoring the cache attribute code into IdentityCache::Cached::Attribute

view details

Gannon McGibbon

commit sha 8354c9d5b8882041d3f5630f81ae0ec8e52f806f

Fix method redefinition error on IdentityCache::Cached::Attribute#attribute

view details

Gannon McGibbon

commit sha 2eaeb5c4d792bf1113003c4a467fdba74ed3d2ce

Merge pull request #418 from Shopify/fix_warn

Fix method redefinition error on IdentityCache::Cached::Attribute#attribute

view details

Edouard CHIN

commit sha 605604ff5c48cab5a9cbee91adf63d2b3c51bc73

Calling mattr_* methods on singleton now raises an error:

  • ref https://github.com/rails/rails/pull/38144/commits/b5b1b02087cac08d43a3174cfb8c0909ec6bb6ea

view details

Edouard Chin

commit sha c1dddedb70842654ad272e80047c694739674485

Merge pull request #423 from Shopify/mattr-accessor-outside-singleton

Calling mattr_* methods on singleton now raises an error:

view details

Gannon McGibbon

commit sha 68d5b85d477873e4ef5c5c40690ffceedf87bb62

Use IdentityCache::RecordNotFound in place of ActiveRecord::RecordNotFound

view details

Gannon McGibbon

commit sha 0cbe7ec09516569f79fdee7a3ac25c9e16084229

Merge pull request #424 from Shopify/introduce_identity_cache_record_not_found

Use IdentityCache::RecordNotFound in place of ActiveRecord::RecordNotFound

view details

Dylan Thacker-Smith

commit sha e3fb3b53cbae0f92c093b4c2a28cc5d5b87ddef7

Refactor cached attribute to handle composite keys separately (#419)

We only support multi fetching for attributes by a single key column, so they don't really support the same API as a cached attribute with a composite key.

view details

Dylan Thacker-Smith

commit sha d5abdffff42879eda5d9a941b6e3ef3912c33f5b

Refactor to separate generic cache key loading logic

view details

Dylan Thacker-Smith

commit sha 978b348b56a1cfab21e3bba2445dad57182c97bd

Use the CacheKeyLoader for cache attributes' fetch multi

view details

Dylan Thacker-Smith

commit sha 45ceb171b657d2fe216d1065e88e719ca9e348ce

Merge pull request #420 from Shopify/cache-key-loader

Refactor to separate generic cache key loading logic

view details

Gannon McGibbon

commit sha 64ade223a1e3efe730ddba9dd46ec8e5e241b23d

Fix IdentityCache::WithPrimaryIndex#fetch docs

Clarify exception throwing behaviour in docs of #fetch.

view details

Gannon McGibbon

commit sha 3f626e01d3f22ce454a42f29e241719e850512d6

Merge pull request #425 from Shopify/fix_fetch_docs

Fix IdentityCache::WithPrimaryIndex#fetch docs

view details

Dylan Thacker-Smith

commit sha cb370bac1f71cefbb9f311a149c008b436ecd39e

Add CacheKeyLoader.batch_load to batch cache loading across cache fetchers

view details

Dylan Thacker-Smith

commit sha 602d04c05bedfb3393c3b232c4b4d88e5d7011e9

[temp] Implement load_multi using batch_load to test their equivalence

However, I think we will want to revert this commit to keep load_multi as an optimization.

view details

Dylan Thacker-Smith

commit sha 3e585315b7b502bdc489a2abc39026f669e69998

Revert "Merge pull request #413 from Shopify/fetch_batch"

This reverts commit 10c771a477b91a9496102f86f19c7633b2fb51a9, reversing changes made to b2b69be53d138b407e3220ff10ce0189569016d2.

view details

Dylan Thacker-Smith

commit sha 0014ca7d4e517a27c76a856194f2f3dcd3b4a20d

Implement batch fetching

  • Add #fetch_async to cached associations for lazy loading/fetching
  • Introduce load strategy and request constants
  • Add cached prefetcher module to lazily or eagerly load associations

Co-authored-by: Gannon McGibbon <gannon.mcgibbon@shopify.com>

view details

push time in 7 days

created tag dylanahsmith/ar_transaction_changes

tag v1.1.7

Store transaction changes for active record objects

created time in 8 days

push event dylanahsmith/ar_transaction_changes

Dylan Thacker-Smith

commit sha dd6822674d763fce0ec2f64fe8a5f74d57a6cbc2

Freeze literal strings

view details

Dylan Thacker-Smith

commit sha d75168cc97e1e4a7c863ff0d924ffc1a2fbcd197

Release v1.1.7

view details

push time in 8 days

push event dylanahsmith/ar_transaction_changes

Jean byroot Boussier

commit sha e4f8b5d96fd43ab6ab2fc397c4bfa8d7f0730176

Avoid to read serialized attributes as it clones the value (#29)

view details

push time in 8 days

PR merged dylanahsmith/ar_transaction_changes

Avoid to read serialized attributes as it clones the value

read_attribute goes through Attribute#cast, which, for serialized attributes, ultimately goes through deserialize(serialize(value)), which is akin to cloning the attribute value.

The problem with this is that, by doing so, you might lose all current mutations to the serialized value.

I'm not a super fan of the implementation, so if you have a better idea of how to handle this, I'm all ears.

@dylanahsmith @Edouard-chin @rafaelfranca @etiennebarrie
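The deserialize(serialize(value)) round trip described above can be sketched in isolation (YAML stands in for the attribute's coder here, and round_trip is a hypothetical helper, not part of Active Record):

```ruby
require "yaml"

# Hypothetical stand-in for ActiveRecord::Type::Serialized coercion:
# deserialize(serialize(value)) round-trips the value through the coder,
# producing a fresh copy rather than returning the original object.
def round_trip(value)
  YAML.load(YAML.dump(value))
end

original = { "ips" => ["1.1.1.1"] }
copy = round_trip(original)

copy == original             # equal in value...
copy.equal?(original)        # ...but a different object
original["ips"] << "2.2.2.2" # a later in-place mutation...
copy["ips"]                  # ...is not reflected in the copy
```

This is why reading an attribute through the cast path can silently detach the stored value from the object the application code is still mutating.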

+52 -3

1 comment

4 changed files

casperisfine

pr closed time in 8 days

push event Shopify/identity_cache

Jean Boussier

commit sha 3539264e01b5b8fd812154e5cea7c7d83be037ef

Add support for the default MemCacheStore from ActiveSupport

view details

Jean byroot Boussier

commit sha c72992237e089d8bb0b2cbbee7a7f4abfd446519

Merge pull request #465 from Shopify/dalli-support

Add support for the default MemCacheStore from ActiveSupport

view details

Dylan Thacker-Smith

commit sha bdc5b9c9fec4c0ab715e5ee6fbb29cfc2870393c

Use the same version of the pg gem in the default Gemfile and CI gemfiles

view details

Dylan Thacker-Smith

commit sha 364ae204798b23c9fee59462567f3ae33a0945cd

test: Add timestamps columns to associated_records table

view details

Dylan Thacker-Smith

commit sha 843d28268200e1d23b044fc702c72a12dc18559b

Skip expiring the primary cache key index on save without changes.

view details

Dylan Thacker-Smith

commit sha 07a736d7edbaf784d2d7cc02e793f309c0c6bbea

Skip expiring parent primary cache index on save without changes.

view details

push time in 9 days

Pull request review comment dylanahsmith/ar_transaction_changes

Avoid to read serialized attributes as it clones the value

```diff
@@ def write_attribute(attr_name, value)
   def _store_transaction_changed_attributes(attr_name)
     attr_name = attr_name.to_s
-    old_value = read_attribute(attr_name)
+    old_value = _read_attribute_for_transaction(attr_name)
     ret = yield
-    new_value = read_attribute(attr_name)
+    new_value = _read_attribute_for_transaction(attr_name)
     unless transaction_changed_attributes.key?(attr_name) || new_value == old_value
-      transaction_changed_attributes[attr_name] = old_value
+      attribute = @attributes[attr_name]
+      transaction_changed_attributes[attr_name] = if attribute.type.is_a?(::ActiveRecord::Type::Serialized)
+        attribute.type.deserialize(old_value)
+      else
+        old_value
+      end
+      transaction_changed_attributes
     end
     ret
   end
+
+  def _read_attribute_for_transaction(attr_name)
+    attribute = @attributes[attr_name]
+    if attribute.type.is_a?(::ActiveRecord::Type::Serialized)
```

```ruby
    # Avoid causing an earlier memoized type cast of mutable serialized user values,
    # since that could prevent mutations of that user value from affecting the attribute
    # value, as they would without this library.
    if attribute.type.is_a?(::ActiveRecord::Type::Serialized)
```
casperisfine

comment created time in 9 days

Pull request review comment dylanahsmith/ar_transaction_changes

Avoid to read serialized attributes as it clones the value

```diff
@@ def test_transaction_changes_type_cast
     end
     assert_empty @user.stored_transaction_changes
   end
+
+  def test_serialized_attributes_value
+    @user.connection_details = [User::ConnectionDetails.new(client_ip: '1.1.1.1')]
+    @user.save!
+    assert_instance_of Array, @user.stored_transaction_changes['connection_details']
```

This is basically just asserting that the change was detected, without actually testing the old or new value in this array. The other added test doesn't test the old or new values returned from transaction_changed_attributes either. So I think we need more specific assertions to properly test the code:

```ruby
    old_value, new_value = @user.stored_transaction_changes['connection_details']
    assert_equal([], old_value)
    assert_equal(['1.1.1.1'], new_value.map(&:client_ip))
```
casperisfine

comment created time in 9 days

Pull request review comment Shopify/identity_cache

Add support for the default MemCacheStore from ActiveSupport

```diff
@@ def initialize(cache_adaptor = nil)
     end

     def cache_backend=(cache_adaptor)
+      if cache_adaptor.class.name == 'ActiveSupport::Cache::MemCacheStore'
+        cache_adaptor.extend(MemCacheStoreCAS)
```

Oh right, refinements are statically scoped, which doesn't work as well for extending an optional dependency. Maybe we should just raise if the AS::Cache::MemCacheStore object responds to cas or cas_multi, then handle that compatibility in this library if/when CAS support is added upstream.
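The static scoping can be sketched with a minimal example (the module and method names here are hypothetical, not part of IdentityCache): a refinement is only visible inside the lexical scope that activates it with `using`, so it could not patch the cache store for callers elsewhere in an application.

```ruby
# A refinement only applies inside the lexical scope that activates it.
module StringShout
  refine String do
    def shout
      upcase + "!"
    end
  end
end

module InsideScope
  using StringShout # active only within this module definition

  def self.call(str)
    str.shout # the refined method is visible here
  end
end

def outside_scope(str)
  str.respond_to?(:shout) # false: the refinement is not active here
end
```

This is why `extend`ing the store instance (or wrapping it) is needed to make CAS methods visible to arbitrary callers, at the cost of mutating the object.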

casperisfine

comment created time in 9 days

Pull request review comment Shopify/identity_cache

Add support for the default MemCacheStore from ActiveSupport

```diff
@@ def initialize(cache_adaptor = nil)
     end

     def cache_backend=(cache_adaptor)
+      if cache_adaptor.class.name == 'ActiveSupport::Cache::MemCacheStore'
+        cache_adaptor.extend(MemCacheStoreCAS)
```

> I'm not sure the interface would be the same anyway.

Yeah, the fact that the interface might not be the same is why I'm thinking it being implemented upstream could cause a conflict.

> I thought about that, but we need to access a bunch of private methods like merged_options, etc. So it makes the code much more terse.

Good point. Maybe this is a good use case for a refinement.

casperisfine

comment created time in 9 days

Pull request review comment Shopify/identity_cache

Add support for the default MemCacheStore from ActiveSupport

```diff
+# frozen_string_literal: true
+require 'dalli/cas/client'
+
+module IdentityCache
+  module MemCacheStoreCAS
+    def cas(name, options = nil)
+      options = merged_options(options)
+
+      instrument(:cas, name, options) do
+        @data.cas(name, options[:expires_in].to_i, options) do |raw_value|
```

Based on what MemCacheStore and MemcachedStore does internally, it looks like you are missing:

  • support for connection pooling by wrapping @data calls with a @data.with do |client| block
  • support for namespacing by using normalize_key
  • handling of exceptions with rescue_error_with
```ruby
        rescue_error_with(false) do
          @data.with do |client|
            key = normalize_key(name, options)
            client.cas(key, options[:expires_in].to_i, options) do |raw_value|
```

These changes are needed for cas_multi as well.

casperisfine

comment created time in 10 days

Pull request review comment Shopify/identity_cache

Add support for the default MemCacheStore from ActiveSupport

```diff
+# frozen_string_literal: true
+require 'dalli/cas/client'
+
+module IdentityCache
+  module MemCacheStoreCAS
+    def cas(name, options = nil)
+      options = merged_options(options)
+
+      instrument(:cas, name, options) do
+        @data.cas(name, options[:expires_in].to_i, options) do |raw_value|
+          entry = deserialize_entry(raw_value)
+          value = yield entry.value
+          entry = ActiveSupport::Cache::Entry.new(value, options)
+          options[:raw] ? entry.value.to_s : entry
+        end
+      end
+    end
+
+    def cas_multi(*names)
+      options = names.extract_options!
```

```ruby
    def cas_multi(*names, **options)
```
casperisfine

comment created time in 10 days

Pull request review comment Shopify/identity_cache

Add support for the default MemCacheStore from ActiveSupport

```diff
@@ def initialize(cache_adaptor = nil)
     end

     def cache_backend=(cache_adaptor)
+      if cache_adaptor.class.name == 'ActiveSupport::Cache::MemCacheStore'
+        cache_adaptor.extend(MemCacheStoreCAS)
```

Should we wrap the cache adapter rather than mutating it with extend? Since ideally we would add CAS support to ActiveSupport::Cache::MemCacheStore itself, in which case we could have a conflict from this extension affecting code using the adapter independently of this library.

It means we would need to get the underlying dalli client using cache_store.instance_variable_get(:@data), but the coupling would be the same.

@rafaelfranca should ActiveSupport::Cache::MemCacheStore expose the underlying dalli client? That would be consistent with the database adapters which expose the underlying client through raw_connection.

casperisfine

comment created time in 10 days

Pull request review comment dylanahsmith/ar_transaction_changes

Avoid to read serialized attributes as it clones the value

```diff
@@ def write_attribute(attr_name, value)
   def _store_transaction_changed_attributes(attr_name)
     attr_name = attr_name.to_s
-    old_value = read_attribute(attr_name)
+    old_value = _read_attribute_for_transaction(attr_name)
     ret = yield
-    new_value = read_attribute(attr_name)
+    new_value = _read_attribute_for_transaction(attr_name)
     unless transaction_changed_attributes.key?(attr_name) || new_value == old_value
-      transaction_changed_attributes[attr_name] = old_value
+      attribute = @attributes[attr_name]
+      transaction_changed_attributes[attr_name] = if attribute.type.is_a?(::ActiveRecord::Type::Serialized)
+        attribute.type.deserialize(old_value)
+      else
+        old_value
+      end
+      transaction_changed_attributes
     end
     ret
   end
+
+  def _read_attribute_for_transaction(attr_name)
+    attribute = @attributes[attr_name]
+    if attribute.type.is_a?(::ActiveRecord::Type::Serialized)
```

If we need to special case serialized attributes for performance reasons, then maybe that's fine, but we should at least be explicit about that being the reason (e.g. with a comment in the code).

casperisfine

comment created time in 15 days

Pull request review comment dylanahsmith/ar_transaction_changes

Avoid to read serialized attributes as it clones the value

```diff
@@ def write_attribute(attr_name, value)
   def _store_transaction_changed_attributes(attr_name)
     attr_name = attr_name.to_s
-    old_value = read_attribute(attr_name)
+    old_value = _read_attribute_for_transaction(attr_name)
     ret = yield
-    new_value = read_attribute(attr_name)
+    new_value = _read_attribute_for_transaction(attr_name)
     unless transaction_changed_attributes.key?(attr_name) || new_value == old_value
-      transaction_changed_attributes[attr_name] = old_value
+      attribute = @attributes[attr_name]
+      transaction_changed_attributes[attr_name] = if attribute.type.is_a?(::ActiveRecord::Type::Serialized)
+        attribute.type.deserialize(old_value)
+      else
+        old_value
+      end
+      transaction_changed_attributes
     end
     ret
   end
+
+  def _read_attribute_for_transaction(attr_name)
+    attribute = @attributes[attr_name]
+    if attribute.type.is_a?(::ActiveRecord::Type::Serialized)
```

Are serialized attributes actually a special case? It seems like the same problem can be seen with string attributes, which also seem to get duplicated on reads.

For example, before the regression was introduced to this library, the following modified test passes

```diff
   def test_transaction_changes_for_update
-    @user.name = "Dillon"
+    name = "Dillon"
+    @user.name = name
+    name.upcase!
     @user.save!
-    assert_equal ["Dylan", "Dillon"], @user.stored_transaction_changes["name"]
+    assert_equal ["Dylan", "DILLON"], @user.stored_transaction_changes["name"]
   end
```

The same test against this branch results in a failure:

```
  1) Failure:
TransactionChangesTest#test_transaction_changes_for_update [test/transaction_changes_test.rb:25]:
Expected: ["Dylan", "DILLON"]
  Actual: ["Dylan", "Dillon"]
```

due to the same regression. So perhaps it doesn't make sense to special case serialized attributes.

casperisfine

comment created time in 15 days

Pull request review comment dylanahsmith/ar_transaction_changes

Avoid to read serialized attributes as it clones the value

```diff
@@ def test_transaction_changes_type_cast
     end
     assert_empty @user.stored_transaction_changes
   end
+
+  def test_serialized_attributes_value
+    @user.connection_details = [User::ConnectionDetails.new(client_ip: '1.1.1.1')]
+    @user.save!
+    assert_instance_of Array, @user.stored_transaction_changes['connection_details']
+  end
+
+  def test_serialized_attributes_mutation
+    details = User::ConnectionDetails.new(client_ip: '1.1.1.1')
+    @user.connection_details = [details]
+    details.client_ip = '2.2.2.2'
+    @user.save!
+    assert_equal '2.2.2.2', @user.connection_details.first.client_ip
```

This behaviour is very subtle and is kind of a gotcha within Active Record itself for application code. Basically, this is showing that lazy casting of attribute values is a leaky abstraction for writes from application code. That is, the same problem can be seen (even without using ArTransactionChanges) by adding an attribute read in the application code:

```diff
     details = User::ConnectionDetails.new(client_ip: '1.1.1.1')
     @user.connection_details = [details]
+    @user.connection_details
     details.client_ip = '2.2.2.2'
     @user.save!
     assert_equal '2.2.2.2', @user.connection_details.first.client_ip
```

which would result in the same failure:

```
  1) Failure:
TransactionChangesTest#test_serialized_attributes_mutation [/Users/dylants/src/ar_transaction_changes/test/transaction_changes_test.rb:126]:
Expected: "2.2.2.2"
  Actual: "1.1.1.1"
```

These mutating reads don't seem like ideal behaviour for Active Record. It seems like it would be easier to reason about the code if any attribute value duplication happened eagerly on attribute assignment. Of course, any attempt to fix this behaviour upstream in Active Record would be a breaking change, which would require deprecating mutations to attribute assignment arguments between the assignment and when the attribute is first read.

We should also fix any application code affected by this regression, since that code is fragile: introducing an attribute read could subtly prevent the serialized value mutation from affecting what is being persisted. So if the intention is to mutate an assigned attribute value, then it should be done after reading the value back from the attribute. E.g.:

```diff
     details = User::ConnectionDetails.new(client_ip: '1.1.1.1')
     @user.connection_details = [details]
-    details.client_ip = '2.2.2.2'
+    @user.connection_details.first.client_ip = '2.2.2.2'
     @user.save!
```

I'm not trying to say we shouldn't fix the regression introduced by this library, just trying to clarify the problem.

casperisfine

comment created time in 15 days

pull request comment dylanahsmith/ar_transaction_changes

Avoid to read serialized attributes as it clones the value

For context, it looks like this is addressing a regression from https://github.com/dylanahsmith/ar_transaction_changes/pull/26.

casperisfine

comment created time in 15 days

issue comment lgierth/promise.rb

Closing my Github account

Sounds good

lgierth

comment created time in 20 days

push event Shopify/ar_transaction_changes

Edouard Chin

commit sha 4fdb93f5549b292a87b8f44134c6f58f0c7c4dbf

`write_attribute` no longer delegates to `_write_attribute` (#28)

  • Since rails/rails@27a1ca2bfeda4298bbf44da17d07fac4147a4b1c, doing `model[:attr_name] = ...` will no longer go through `_write_attribute`

view details

Dylan Thacker-Smith

commit sha 47f10f9b8f277f6079c93d9a4dc31f8e0cfe9ce4

Release v1.1.6

view details

push time in 23 days

created tag dylanahsmith/ar_transaction_changes

tag v1.1.6

Store transaction changes for active record objects

created time in 23 days

push event dylanahsmith/ar_transaction_changes

Dylan Thacker-Smith

commit sha 47f10f9b8f277f6079c93d9a4dc31f8e0cfe9ce4

Release v1.1.6

view details

push time in 23 days

push event dylanahsmith/ar_transaction_changes

Edouard Chin

commit sha 4fdb93f5549b292a87b8f44134c6f58f0c7c4dbf

`write_attribute` no longer delegates to `_write_attribute` (#28)

  • Since rails/rails@27a1ca2bfeda4298bbf44da17d07fac4147a4b1c, doing `model[:attr_name] = ...` will no longer go through `_write_attribute`

view details

push time in 23 days

PR merged dylanahsmith/ar_transaction_changes

`write_attribute` no longer delegates to `_write_attribute`

`write_attribute` no longer delegates to `_write_attribute`:

  • Since rails/rails@27a1ca2bfeda4298bbf44da17d07fac4147a4b1c, doing model[:attr_name] = ... will no longer go through _write_attribute

cc/ @dylanahsmith @casperisfine @etiennebarrie @rafaelfranca

+24 -1

3 comments

2 changed files

Edouard-chin

pr closed time in 23 days

PR merged dylanahsmith/rails-demo

Bump websocket-extensions from 0.1.3 to 0.1.4 (labels: dependencies, javascript)

Bumps websocket-extensions from 0.1.3 to 0.1.4.

Changelog (sourced from websocket-extensions's changelog):

> 0.1.4 / 2020-06-02
>   • Remove a ReDoS vulnerability in the header parser (CVE-2020-7662, reported by Robert McLaughlin)
>   • Change license from MIT to Apache 2.0

Commits:

  • 8efd0cd Bump version to 0.1.4
  • 3dad4ad Remove ReDoS vulnerability in the Sec-WebSocket-Extensions header parser
  • 4a76c75 Add Node versions 13 and 14 on Travis
  • 44a677a Formatting change: {...} should have spaces inside the braces
  • f6c50ab Let npm reformat package.json
  • 2d211f3 Change markdown formatting of docs.
  • 0b62083 Update Travis target versions.
  • 729a465 Switch license to Apache 2.0.
  • See full diff in the compare view: https://github.com/faye/websocket-extensions-node/compare/0.1.3...0.1.4

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


<details> <summary>Dependabot commands and options</summary> <br />

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.

</details>

+3 -3

0 comment

1 changed file

dependabot[bot]

pr closed time in 24 days

push eventdylanahsmith/rails-demo

dependabot[bot]

commit sha 34004c7fdd245628642e204df45cab8616e6ae08

Bump websocket-extensions from 0.1.3 to 0.1.4 (#2) Bumps [websocket-extensions](https://github.com/faye/websocket-extensions-node) from 0.1.3 to 0.1.4. - [Release notes](https://github.com/faye/websocket-extensions-node/releases) - [Changelog](https://github.com/faye/websocket-extensions-node/blob/master/CHANGELOG.md) - [Commits](https://github.com/faye/websocket-extensions-node/compare/0.1.3...0.1.4) Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

view details

push time in 24 days

issue commentShopify/graphql-batch

How to Implement Pagination with AssociationLoader using Connections?

This seems like a duplicate of https://github.com/Shopify/graphql-batch/issues/114 . I think you have to start by figuring out what you even want the SQL to be based on what is efficient for your datastore.

If you're on a sufficiently recent version of MySQL/Postgres/whatever you can use window functions, roughly similar to the explanation at https://www.the-art-of-web.com/sql/partition-over/.
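As a rough sketch of the window-function approach (not from the issue thread; the "comments"/"post_id" names are illustrative assumptions), the idea is to rank rows within each group and keep only the first N per group:

```ruby
# Build SQL that loads the first `limit` comments per post in one query
# using ROW_NUMBER() OVER (PARTITION BY ...), which requires
# PostgreSQL or MySQL 8+. Table/column names are hypothetical.
def top_n_per_group_sql(limit)
  <<~SQL
    SELECT * FROM (
      SELECT comments.*,
             ROW_NUMBER() OVER (PARTITION BY post_id ORDER BY id) AS row_num
      FROM comments
    ) ranked
    WHERE row_num <= #{Integer(limit)}
  SQL
end
```

The resulting SQL can then be run per batch of post ids, which is what makes it usable from a batch loader.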

toyhammered

comment created time in 24 days

issue commentdylanahsmith/ar_transaction_changes

Multiple instances of the same object in a transaction

That seems like a huge gotcha in active record itself. I would have expected it to avoid calling after_commit on the same object twice, but not to do this across objects. That seems like an upstream problem I wouldn't want to work around.

I checked whether that is still the current behaviour of rails, and it looks like rails 6.0 changed the duplicate removal to records.uniq(&:object_id), so it seems like that rails issue might actually have been fixed.
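A minimal plain-Ruby illustration of the difference (not rails code; Record here is a stand-in Struct): value-based uniq collapses two in-memory instances that represent the same row, while uniq(&:object_id) keeps each distinct instance:

```ruby
# Struct defines value equality, so two instances wrapping the same id
# compare equal and plain `uniq` deduplicates them. Deduplicating by
# `object_id` instead keeps both in-memory instances.
Record = Struct.new(:id)

a = Record.new(1)
b = Record.new(1) # same database row, different in-memory instance

records = [a, b]
records.uniq.size              # => 1 (deduplicated by value equality)
records.uniq(&:object_id).size # => 2 (both instances kept)
```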

naveedkakal

comment created time in a month

issue commentrails/rails

after_commit uses incorrect final object when changing same object twice while reloading it from the DB

I think this was fixed in rails 6.0 by https://github.com/rails/rails/pull/36190

marktermaat

comment created time in a month

Pull request review commentShopify/identity_cache

Skip expiring record cache on save with no db update

 def test_cached_attribute_values_are_expired_from_the_cache_when_an_existing_rec
     assert_queries(1) { assert_equal 'foo', AssociatedRecord.fetch_name_by_id(1) }
     assert_queries(0) { assert_equal 'foo', AssociatedRecord.fetch_name_by_id(1) }
-    @record.save!
+    @record.update!(updated_at: @record.updated_at + 1)

I wanted to avoid having the test fail because the record's updated_at matched the current time, in which case we are trying to skip expiring the cache.

dylanahsmith

comment created time in a month

Pull request review commentShopify/identity_cache

Only expire the cache when the record has changed, or it is forced

 def test_touch_will_expire_the_caches
     @record.touch
   end

-  def test_expire_cache_works_in_a_transaction
+  def test_expire_cache_works_in_a_transaction_when_forced
     expect_cache_delete("#{NAMESPACE}attr:Item:id:id/title:#{cache_hash('"1"/"bob"')}")
     expect_cache_delete("#{NAMESPACE}attr:Item:id:title:#{cache_hash('"bob"')}")
     expect_cache_delete(@blob_key)

+    ActiveRecord::Base.transaction do
+      @record.expire_cache(force: true)

The fact that this test needs to be updated in this way shows the breaking change I mentioned in the change to expire_cache

DougEdey

comment created time in a month

Pull request review commentShopify/identity_cache

Only expire the cache when the record has changed, or it is forced

 def test_update
     @record.save
   end

+  def test_update_no_change
+    # Regular flow, write index id, write index id/tile, delete data blob since Record has changed
+    expect_cache_delete("#{NAMESPACE}attr:Item:id:id/title:#{cache_hash('"1"/"bob"')}").never
+    expect_cache_delete("#{NAMESPACE}attr:Item:id:title:#{cache_hash('"bob"')}").never
+    expect_cache_delete(@blob_key).never
+
+    @record.save

If the return value of the save isn't going to be checked, then use save! instead to make sure the test fails in an easy-to-debug way if there are unexpected validation errors. You can also make this change when refactoring existing tests that have this anti-pattern.
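A toy model (not ActiveRecord) of why this matters: save silently returns false on a validation failure, while save! raises with the error message, so the test fails at the source of the problem instead of on some later assertion:

```ruby
# ToyRecord is a hypothetical stand-in for an ActiveRecord model.
class ToyRecord
  attr_accessor :title

  def valid?
    !title.nil?
  end

  def save
    valid? # returns false instead of persisting when invalid
  end

  def save!
    raise "Validation failed: title can't be blank" unless valid?
    true
  end
end

record = ToyRecord.new
record.save # => false, easy for a test to ignore accidentally
begin
  record.save! # raises, pointing directly at the validation error
rescue RuntimeError => e
  puts e.message
end
```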

DougEdey

comment created time in a month

Pull request review commentShopify/identity_cache

Only expire the cache when the record has changed, or it is forced

 def test_update
     @record.save
   end

+  def test_update_no_change
+    # Regular flow, write index id, write index id/tile, delete data blob since Record has changed
+    expect_cache_delete("#{NAMESPACE}attr:Item:id:id/title:#{cache_hash('"1"/"bob"')}").never
+    expect_cache_delete("#{NAMESPACE}attr:Item:id:title:#{cache_hash('"bob"')}").never
+    expect_cache_delete(@blob_key).never

This type of test is quite fragile. Not only is it coupled to the internals of the library, but changes to those internals could also leave this test no longer testing anything relevant, by it deleting a slightly different cache key instead.

A way to test this with less internal coupling would be to do a fetch surrounded by assert_no_queries to make sure the cache wasn't expired.

DougEdey

comment created time in a month

Pull request review commentShopify/identity_cache

Only expire the cache when the record has changed, or it is forced

 def test_nil_is_stored_in_the_cache_on_cache_misses
     end
   end

-  def test_cached_attribute_values_are_expired_from_the_cache_when_an_existing_record_is_saved
+  def test_cached_attribute_values_are_not_expired_from_the_cache_when_an_existing_record_is_saved_with_no_changes

This is changing the purpose of this test. Unless the test was redundant, we should add a new test for the new behaviour and update this one to keep testing the behaviour it was intended for. This can be done by updating some attribute instead of calling save! without changing any attributes.

DougEdey

comment created time in a month

Pull request review commentShopify/identity_cache

Only expire the cache when the record has changed, or it is forced

 module WithPrimaryIndex

     include WithoutPrimaryIndex

-    def expire_cache
-      expire_primary_index
-      super
+    def expire_cache(force: false)
+      expire_primary_index if force || transaction_changed_attributes.any? || destroyed?
+      super(force: force)

Calling super without arguments will pass along the arguments passed into this method.
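A plain-Ruby demonstration of this point: bare super forwards the current method's arguments (including keywords) automatically, so an explicit super(force: force) is redundant. The classes here are stand-ins, not IdentityCache code:

```ruby
class Base
  def expire_cache(force: false)
    "base(force=#{force})"
  end
end

class Child < Base
  def expire_cache(force: false)
    super # equivalent to the more verbose `super(force: force)`
  end
end

Child.new.expire_cache(force: true) # => "base(force=true)"
```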

DougEdey

comment created time in a month

Pull request review commentShopify/identity_cache

Only expire the cache when the record has changed, or it is forced

 def cache_fetch_includes
     end

     # Invalidate the cache data associated with the record.
-    def expire_cache
-      expire_attribute_indexes
+    def expire_cache(force: false)

If expire_cache gets called manually, then I don't think it should be conditional on transaction_changed_attributes. In fact, I don't think it makes sense to use transaction_changed_attributes outside of the after_commit callback.

DougEdey

comment created time in a month

PR opened Shopify/identity_cache

Reviewers
Skip expiring record cache on save with no db update

Problem

When ActiveRecord::Base#save! is called without modifying any attributes, active record will skip the database update. However, it will still result in the record being expired in identity cache, because active record still calls the after_commit callback in this case.

Solution

Make the cache expiry on update conditional on transaction_changed_attributes.present? so that these unnecessary cache expiries are skipped.

+51 -22

0 comment

13 changed files

pr created time in a month

push eventShopify/identity_cache

Dylan Thacker-Smith

commit sha 3ab45d888fc5511727fcabc5ce0128566c9aae5c

test: Add timestamps columns to associated_records table

view details

Dylan Thacker-Smith

commit sha d5e05b7fba800d6d09104d273f3ad30e0908893c

Skip expiring the primary cache key index on save without changes.

view details

Dylan Thacker-Smith

commit sha f2f87d08d6bd112f2985e4826503ae6249a588bd

Skip expiring parent primary cache index on save without changes.

view details

push time in a month

create branchShopify/identity_cache

branch : skip-cache-invalidation-if-no-changes

created branch time in a month

push eventdylanahsmith/rails-demo

dependabot[bot]

commit sha b8b9d5b07d07203fd935f83b42c79d080bffb61d

Bump puma from 4.3.3 to 4.3.5 (#3) Bumps [puma](https://github.com/puma/puma) from 4.3.3 to 4.3.5. - [Release notes](https://github.com/puma/puma/releases) - [Changelog](https://github.com/puma/puma/blob/master/History.md) - [Commits](https://github.com/puma/puma/commits) Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

view details

push time in a month

PR merged dylanahsmith/rails-demo

Bump puma from 4.3.3 to 4.3.5 dependencies ruby

Bumps puma from 4.3.3 to 4.3.5. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/puma/puma/blob/master/History.md">puma's changelog</a>.</em></p> <blockquote> <h2>4.3.4/4.3.5 and 3.12.5/3.12.6 / 2020-05-22</h2> <p>Each patchlevel release contains a separate security fix. We recommend simply upgrading to 4.3.5/3.12.6.</p> <ul> <li>Security <ul> <li>Fix: Fixed two separate HTTP smuggling vulnerabilities that used the Transfer-Encoding header. CVE-2020-11076 and CVE-2020-11077.</li> </ul> </li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li>See full diff in <a href="https://github.com/puma/puma/commits">compare view</a></li> </ul> </details> <br />


+3 -3

0 comment

2 changed files

dependabot[bot]

pr closed time in a month

push eventdylanahsmith/rails-demo

dependabot[bot]

commit sha f234cc13726aa654223caf6dfd954377778500c3

Bump websocket-extensions from 0.1.4 to 0.1.5 (#1) Bumps [websocket-extensions](https://github.com/faye/websocket-extensions-ruby) from 0.1.4 to 0.1.5. - [Release notes](https://github.com/faye/websocket-extensions-ruby/releases) - [Changelog](https://github.com/faye/websocket-extensions-ruby/blob/master/CHANGELOG.md) - [Commits](https://github.com/faye/websocket-extensions-ruby/compare/0.1.4...0.1.5) Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

view details

push time in a month

PR merged dylanahsmith/rails-demo

Bump websocket-extensions from 0.1.4 to 0.1.5 dependencies

Bumps websocket-extensions from 0.1.4 to 0.1.5. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/faye/websocket-extensions-ruby/blob/master/CHANGELOG.md">websocket-extensions's changelog</a>.</em></p> <blockquote> <h3>0.1.5 / 2020-06-02</h3> <ul> <li>Remove a ReDoS vulnerability in the header parser (CVE-2020-7663)</li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/faye/websocket-extensions-ruby/commit/8108e77333026634eda1a6a32f32da3a7a1da8c4"><code>8108e77</code></a> Bump version to 0.1.5</li> <li><a href="https://github.com/faye/websocket-extensions-ruby/commit/c36eb3e010dce9eabc7415dbe05cafaa0ae83cd4"><code>c36eb3e</code></a> Remove ReDoS vulnerability in the Sec-WebSocket-Extensions header parser</li> <li><a href="https://github.com/faye/websocket-extensions-ruby/commit/8174a4a0f95b8f35ea42595d9d4d88debf492521"><code>8174a4a</code></a> Test on JRuby 9.{0,1,2} rather than "head"</li> <li><a href="https://github.com/faye/websocket-extensions-ruby/commit/96059802a6649ad3ca63625ffc5b5dbcd9ea91d9"><code>9605980</code></a> Update Ruby versions 2.4 to 2.7 on Travis</li> <li><a href="https://github.com/faye/websocket-extensions-ruby/commit/bd6d0acc01fa985f014d37183f0c7854b86b60f9"><code>bd6d0ac</code></a> Mention license change in the changelog</li> <li><a href="https://github.com/faye/websocket-extensions-ruby/commit/a8c847876b2242d562e6186b6fd90dd073b9fcd2"><code>a8c8478</code></a> Formatting change: {...} should have spaces inside the braces</li> <li>See full diff in <a href="https://github.com/faye/websocket-extensions-ruby/compare/0.1.4...0.1.5">compare view</a></li> </ul> </details> <br />


+1 -1

0 comment

1 changed file

dependabot[bot]

pr closed time in a month

Pull request review commentrails/rails

activerecord: No warning for return out of transaction block without writes

 def transaction(requires_new: nil, isolation: nil, joinable: true)

               :commit_transaction, :rollback_transaction, :materialize_transactions,
               :disable_lazy_transactions!, :enable_lazy_transactions!, to: :transaction_manager

+      def mark_transaction_written_if_write(sql) #:nodoc:

Pushed this minor change

dylanahsmith

comment created time in a month

push eventdylanahsmith/rails

Eugene Kenny

commit sha 8ae6626b654dded1e3a29702bfc5784d33aa30cb

Override clear_cache in custom path resolver https://buildkite.com/rails/rails/builds/69620#ffaa78f1-aba4-42b9-91c8-0fe5d333263a Since 096d143c8c41c8231c32717372b1bb9c861c739a, when the tests run in parallel it's possible for `ActionView::LookupContext::DetailsKey.clear` to be called while this test is running, which wouldn't work correctly.

view details

Dylan Thacker-Smith

commit sha 65b703d6b5b136a4492d38d206a05e67f53546a2

activerecord: Remove mention of raising on a return out of a transaction If the transaction block is exited due to a timeout, we don't want to change what exception is raised. Also, not raising will allow the transaction to be conveniently aborted by a `return` or `break` statement.

view details

Dylan Thacker-Smith

commit sha 4332613b6d1cd7e69e5ee161cd52c943abf3f4a8

activerecord: No warning for return out of transaction block without writes It doesn't matter if the transaction is rolled back or committed if it wasn't written to, so we can avoid warning about a breaking change.

view details

push time in a month

pull request commentrails/rails

activerecord: No warning for return out of transaction block without writes

Looks like the CI failures are a problem on master, since they are in actionview and railties, neither of which depends on activerecord (the only gem changed by this PR)

dylanahsmith

comment created time in a month

delete branch dylanahsmith/rails

delete branch : deprecate-transaction-return

delete time in a month

PR opened rails/rails

activerecord: No warning for return out of transaction block without writes

Summary

https://github.com/rails/rails/pull/29333 introduced a deprecation warning for transaction blocks that are exited with break, return or throw, which has a couple of problems that @eileencodes brought to my attention (https://github.com/rails/rails/pull/29333#issuecomment-634236156).

That deprecation warning was introduced for code like

Timeout.timeout(1) do
  Example.transaction do
    example.update_attributes(value: 1)
    sleep 3 # simulate something slow
  end
end

because it wasn't expected that the timeout would commit the transaction. However, the warning says the next version of rails will raise an exception, and it would also be unexpected for that code to not result in a Timeout::Error exception. So the first commit in this PR changes the deprecation warning message to remove the mention of raising.

The other problem is with deprecation warnings from returns out of a transaction block that hasn't made any writes to the transaction, where it doesn't matter if the transaction is rolled back or committed. E.g.

with_lock do
  return if some_criteria_met?

  # do work
end

The second commit in this PR avoids that deprecation warning by marking an open transaction as written on its first write query, then making the warning conditional on the current transaction having been written to.
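The write-marking described above can be sketched in plain Ruby (this is a toy, not the Rails implementation; Rails' actual allowlist lives in the adapter's write_query? logic, and the regex here is an illustrative assumption):

```ruby
# Classify a SQL statement as a write unless it matches an allowlist
# of read-only statement keywords, and mark the transaction written
# on the first write it executes.
READ_QUERY = /\A\s*(select|show|explain|begin|commit|release|savepoint)\b/i

def write_query?(sql)
  !READ_QUERY.match?(sql)
end

class ToyTransaction
  attr_reader :written

  def initialize
    @written = false
  end

  def execute(sql)
    @written = true if write_query?(sql)
    sql
  end
end

tx = ToyTransaction.new
tx.execute("SELECT * FROM users")
tx.written # => false, so an early return needn't warn
tx.execute("UPDATE users SET name = 'x'")
tx.written # => true, so an early return would warn
```

Because the allowlist only names known read statements, unrecognized statements are treated as writes, which errs on the side of false-positive warnings rather than false negatives.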

Other Information

Since the https://github.com/rails/rails/pull/29333 hasn't been released, I've just edited the existing CHANGELOG entry for it.

+28 -3

0 comment

9 changed files

pr created time in a month

pull request commentrails/rails

Deprecate committing a transaction exited with return or throw

Yeah, I'm with you now about not raising in rails 6.2

If the deprecation warning is ignored because it isn't appropriate in some cases, then it won't be effective in letting developers know where to update their code to avoid being affected by the breaking change.

I realize that write_query? might not be perfect, but I think it will be accurate enough to make the deprecation warning useful. It is implemented using a whitelist of read queries, so it might produce false-positive warnings, but it shouldn't produce false negatives. Without it, we would have a lot more false positives, and no easy way of fixing them by improving the read query whitelist.

I'll open a PR with my proposed path forward.

dylanahsmith

comment created time in a month

push eventdylanahsmith/rails

Dylan Thacker-Smith

commit sha 0090de4638723535f3126d98cebe2f6b720b40df

activerecord: No warning for return out of transaction block without writes It doesn't matter if the transaction is rolled back or committed if it wasn't written to, so we can avoid warning about a breaking change.

view details

push time in a month

create branchdylanahsmith/rails

branch : transaction-return-no-raise

created branch time in a month

pull request commentrails/rails

Deprecate committing a transaction exited with return or throw

That is an interesting use case: in your early-return example it doesn't really matter whether the transaction is committed or rolled back, since there hasn't been a write to the database. As a result, there is also no ambiguity about whether the early return is supposed to mean the transaction should be committed or rolled back.

I'm more concerned about the case where the application writes to the database and then does an early return with the intention of committing the transaction, since we can't support that without also committing the transaction in the case of a timeout from a Timeout.timeout(duration) do block surrounding the transaction.

To support your use case, perhaps we should use write_query? on database queries in a transaction to detect the first write to it, then we could only give a deprecation warning where we need to make a breaking change.

The other question is whether we should even bother to raise an exception in Rails 6.2 in place of the deprecation warning after rolling back the transaction. If we allow early returns out of a transaction without writes, then it would be simpler to remove the write detection code and just always roll back the transaction. The simpler behaviour would avoid the need to explain the special case to the user in documentation. It would also avoid translating a timeout into a different exception. I think the only advantage of raising an exception would be to make the breaking change more visible, and thus easier to notice if the warning wasn't; but a rollback from an early return seems pretty likely to be noticed from test suites even if the deprecation isn't. The ability to roll back the transaction through break or return inside the transaction might also be quite convenient for transactions with writes.

dylanahsmith

comment created time in a month

Pull request review commentShopify/liquid

[StaticRegisters] Remove registers attr_reader

 class StaticRegistersUnitTest < Minitest::Test
   include Liquid

-  def set
-    static_register        = StaticRegisters.new
-    static_register[nil]   = true
-    static_register[1]     = :one
-    static_register[:one]  = "one"
-    static_register["two"] = "three"
-    static_register["two"] = 3
-    static_register[false] = nil
-
-    assert_equal({ nil => true, 1 => :one, :one => "one", "two" => 3, false => nil }, static_register.registers)
-
-    static_register
+  def test_set
+    static_register = StaticRegisters.new(a: 1, b: 2)
+    static_register[:b] = 22
+    static_register[:c] = 33
+
+    assert_equal(1, static_register[:a])
+    assert_equal(22, static_register[:b])
+    assert_equal(33, static_register[:c])
   end

   def test_get
-    static_register = set
-
-    assert_equal(true, static_register[nil])
-    assert_equal(:one, static_register[1])
-    assert_equal("one", static_register[:one])
-    assert_equal(3, static_register["two"])
-    assert_nil(static_register[false])
-    assert_nil(static_register["unknown"])
+    static_register = StaticRegisters.new(a: 1, b: 2)
+    static_register[:b] = 22
+    static_register[:c] = 33
+
+    assert_equal(1, static_register[:a])
+    assert_equal(22, static_register[:b])
+    assert_equal(33, static_register[:c])
+    assert_nil(static_register[:d])
   end

   def test_delete
-    static_register = set
+    static_register = StaticRegisters.new(a: 1, b: 2)
+    static_register[:b] = 22
+    static_register[:c] = 33

-    assert_equal(true, static_register.delete(nil))
-    assert_equal(:one, static_register.delete(1))
-    assert_equal("one", static_register.delete(:one))
-    assert_equal(3, static_register.delete("two"))
-    assert_nil(static_register.delete(false))
-    assert_nil(static_register.delete("unknown"))
+    assert_nil(static_register.delete(:a))
+    assert_equal(1, static_register[:a])

For this PR, maybe we should just remove the assertions for the value after it is deleted, rather than making it seem like the current behaviour is intentional.

tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Remove registers attr_reader

 class StaticRegistersUnitTest < Minitest::Test
   include Liquid

-  def set
-    static_register        = StaticRegisters.new
-    static_register[nil]   = true
-    static_register[1]     = :one
-    static_register[:one]  = "one"
-    static_register["two"] = "three"
-    static_register["two"] = 3
-    static_register[false] = nil
-
-    assert_equal({ nil => true, 1 => :one, :one => "one", "two" => 3, false => nil }, static_register.registers)
-
-    static_register
+  def test_set
+    static_register = StaticRegisters.new(a: 1, b: 2)
+    static_register[:b] = 22
+    static_register[:c] = 33
+
+    assert_equal(1, static_register[:a])
+    assert_equal(22, static_register[:b])
+    assert_equal(33, static_register[:c])
   end

   def test_get
-    static_register = set
-
-    assert_equal(true, static_register[nil])
-    assert_equal(:one, static_register[1])
-    assert_equal("one", static_register[:one])
-    assert_equal(3, static_register["two"])
-    assert_nil(static_register[false])
-    assert_nil(static_register["unknown"])
+    static_register = StaticRegisters.new(a: 1, b: 2)
+    static_register[:b] = 22
+    static_register[:c] = 33
+
+    assert_equal(1, static_register[:a])
+    assert_equal(22, static_register[:b])
+    assert_equal(33, static_register[:c])
+    assert_nil(static_register[:d])
   end

   def test_delete
-    static_register = set
+    static_register = StaticRegisters.new(a: 1, b: 2)
+    static_register[:b] = 22
+    static_register[:c] = 33

-    assert_equal(true, static_register.delete(nil))
-    assert_equal(:one, static_register.delete(1))
-    assert_equal("one", static_register.delete(:one))
-    assert_equal(3, static_register.delete("two"))
-    assert_nil(static_register.delete(false))
-    assert_nil(static_register.delete("unknown"))
+    assert_nil(static_register.delete(:a))
+    assert_equal(1, static_register[:a])

This also has implications for how key? behaves.

Yeah, to maintain the current API, we would either need to keep track of deleted keys separately or have a deleted-key marker (e.g. similar to the UNDEFINED constant). That would mean we would always have to retrieve the value and special-case this marker value. That makes me wonder if it is worth supporting that complexity to preserve this approach.
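A hypothetical sketch of that deleted-key marker (LayeredRegisters and DELETED are illustrative names, not Liquid's API) shows the complexity being weighed: every lookup and key? check has to special-case the sentinel so deletions stop falling through to the static layer:

```ruby
class LayeredRegisters
  DELETED = Object.new # sentinel marking a key deleted in the write layer

  def initialize(static)
    @static = static
    @registers = {}
  end

  def [](key)
    value = @registers.fetch(key) { @static[key] }
    value.equal?(DELETED) ? nil : value
  end

  def delete(key)
    old = self[key]
    @registers[key] = DELETED
    old
  end

  def key?(key)
    if @registers.key?(key)
      !@registers[key].equal?(DELETED)
    else
      @static.key?(key)
    end
  end
end

regs = LayeredRegisters.new(a: 1)
regs.delete(:a) # => 1
regs[:a]        # => nil
regs.key?(:a)   # => false
```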

I'm not sure how these methods were chosen vs some others that were removed when moving from Hash to StaticRegisters (eg.: .each or empty?).

Good question. Perhaps it means we actually rely on this delete method, but if we don't need it, it would be simpler not to support it.

This might be going back to "StaticRegisters" shouldn't exist in the first place.

Indeed. This would be much simpler if we just had two separate hashes on the context, where context.registers could be initialized with context.static_registers.dup. If this lazy approach is a performance optimization, then it seems like it was added prematurely.
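The two-hash alternative can be sketched in a few lines (Context here is a stand-in class, not Liquid's actual Context): seed the mutable registers by duplicating the static hash up front instead of layering lookups lazily.

```ruby
class Context
  attr_reader :static_registers, :registers

  def initialize(static_registers)
    @static_registers = static_registers.freeze
    # Mutable copy: writes and deletes never leak back to the static hash.
    @registers = static_registers.dup
  end
end

ctx = Context.new(a: 1)
ctx.registers[:a] = 2
ctx.registers[:b] = 3
ctx.static_registers # => { a: 1 }, untouched
```

With this shape, delete and key? get ordinary Hash semantics for free, at the cost of an eager dup per render.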

tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Remove registers attr_reader

 class StaticRegistersUnitTest < Minitest::Test
   include Liquid

-  def set
-    static_register        = StaticRegisters.new
-    static_register[nil]   = true
-    static_register[1]     = :one
-    static_register[:one]  = "one"
-    static_register["two"] = "three"
-    static_register["two"] = 3
-    static_register[false] = nil
-
-    assert_equal({ nil => true, 1 => :one, :one => "one", "two" => 3, false => nil }, static_register.registers)
-
-    static_register
+  def test_set
+    static_register = StaticRegisters.new(a: 1, b: 2)
+    static_register[:b] = 22
+    static_register[:c] = 33
+
+    assert_equal(1, static_register[:a])
+    assert_equal(22, static_register[:b])
+    assert_equal(33, static_register[:c])
   end

   def test_get
-    static_register = set
-
-    assert_equal(true, static_register[nil])
-    assert_equal(:one, static_register[1])
-    assert_equal("one", static_register[:one])
-    assert_equal(3, static_register["two"])
-    assert_nil(static_register[false])
-    assert_nil(static_register["unknown"])
+    static_register = StaticRegisters.new(a: 1, b: 2)
+    static_register[:b] = 22
+    static_register[:c] = 33
+
+    assert_equal(1, static_register[:a])
+    assert_equal(22, static_register[:b])
+    assert_equal(33, static_register[:c])

Doesn't this do the same thing as test_set? If the only new thing tested in this method is getting a missing key, then maybe the test should be renamed to test_get_missing_key and just do assert_nil(StaticRegisters.new({})[:missing])

tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Remove registers attr_reader

The new test this comment refers to:

    def test_initialization_reused_static_same_memory_object
      static_register_1 = StaticRegisters.new(a: 1, b: 2)
      static_register_1[:b] = 22
      static_register_1[:c] = 33

      static_register_2 = StaticRegisters.new(static_register_1)

      assert_equal(1, static_register_2[:a])
      assert_equal(2, static_register_2[:b])
      assert_nil(static_register_2[:c])

      static_register_1.static[:b] = 222
      static_register_1.static[:c] = 333

      assert_equal(222, static_register_2[:b])
      assert_equal(333, static_register_2[:c])

We could use assert_same to ensure the static is exactly the same object

    assert_same(static_register_1.static, static_register_2.static)
  end
tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Remove registers attr_reader

The new test_delete code under review:

    def test_delete
      static_register = StaticRegisters.new(a: 1, b: 2)
      static_register[:b] = 22
      static_register[:c] = 33

      assert_nil(static_register.delete(:a))
      assert_equal(1, static_register[:a])

Is this behaviour intentional? It wasn't previously covered by test_delete. If the registers are supposed to behave like a hash, then having delete not actually behave like a delete seems like unexpected behaviour.
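
For reference, a plain Hash's delete removes the key outright and returns the deleted value, which is the behaviour the registers would be expected to mirror:

```ruby
# Hash#delete removes the key and returns its value;
# reading the key afterwards yields nil, not an older fallback value.
registers = { a: 1 }
deleted = registers.delete(:a)
deleted        # => 1
registers[:a]  # => nil
```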

tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Cache for reads

The new caching code under review:

    # frozen_string_literal: true

    require 'forwardable'

    module Liquid
      class StaticRegisters
        extend Forwardable

        attr_reader :static, :registers

        def_delegators :@cache, :[], :key?

        def initialize(registers = {})
          @static    = registers.is_a?(StaticRegisters) ? registers.static : registers
          @registers = {}

          @cache = @static.dup

Why bother having yet another cache? If we are going to eagerly duplicate the static registers, then we could just do that for @registers:

      @static = registers.is_a?(StaticRegisters) ? registers.static : registers
      @registers = @static.dup

However, if there are a lot of registers that aren't used in the rendered snippet, that could add extra overhead. Was Liquid::StaticRegisters added to do this lazily for performance reasons? Otherwise, why do we even bother having it in the first place, when we could just store the static registers in the currently unused Liquid::Context#static_registers attribute?

tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Cache for reads

The delete change under review:

    def delete(key)
      @registers.delete(key).tap do
        @static.dup.merge(@registers)

The @static.dup.merge(@registers) line creates a new hash but then doesn't use it.

tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Cache for reads

The delete change under review:

    def delete(key)
      @registers.delete(key).tap do
        @static.dup.merge(@registers)

I don't understand this tap block. Since tap returns the receiver rather than the result of the block, it looks like the block is just building an object that won't get used.
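
To illustrate why the block's result is discarded: Object#tap yields the receiver to the block and then returns the receiver itself.

```ruby
# tap returns the receiver, not the block's result, so the hash
# built inside the block below is created and immediately discarded.
registers = { a: 1 }
result = registers.delete(:a).tap { |v| { merged: v } }
result  # => 1, the value returned by delete; the inner hash is unused
```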

tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Cache for reads

The delegation change under review:

    module Liquid
      class StaticRegisters
        extend Forwardable

        attr_reader :static, :registers

        def_delegators :@cache, :[], :key?

If you are trying to avoid *args, it looks like def_delegators has the same problem (https://github.com/ruby/ruby/blob/38a4f617de157586668dd726d518eadcebf1bca2/lib/forwardable.rb#L207-L233), although perhaps that could be fixed upstream to leverage ...

tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Fetch raise on missing

The fetch change under review:

    def fetch(key, *args, &block)
      if @registers.key?(key)
        @registers.fetch(key)
      else
        @static.fetch(key, *args, &block)
      end
    end

If we really wanted to go after performance, we could consider generating a single unified Hash whenever there is a mutation and avoid checking both of them on every call.

Yeah, this might make more sense. If we typically mutate the registers, we could just unconditionally merge them ahead of time, which would be quite simple
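
A sketch of that merge-on-mutation approach (a hypothetical class for illustration, not Liquid's actual code): every write updates a single unified hash, so the hot read path does one Hash lookup instead of checking two.

```ruby
# Hypothetical sketch: a unified hash maintained on every write,
# so [] and fetch never need to consult static and mutable hashes separately.
class MergedRegisters
  attr_reader :static

  def initialize(static = {})
    @static = static
    @merged = static.dup  # unified view, kept current on each mutation
  end

  def []=(key, value)
    @merged[key] = value
  end

  def [](key)
    @merged[key]  # single lookup on the read path
  end

  def fetch(key, *args, &block)
    @merged.fetch(key, *args, &block)
  end
end
```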

tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Fetch raise on missing

The fetch change under review:

    def fetch(key, *args, &block)
      if @registers.key?(key)
        @registers.fetch(key)
      else
        @static.fetch(key, *args, &block)
      end
    end

If we want to avoid the performance overhead, then you could use

    UNDEFINED = Object.new

    def fetch(key, default = UNDEFINED, &block)
      if @registers.key?(key)
        @registers.fetch(key)
      elsif default != UNDEFINED
        @static.fetch(key, default, &block)
      else
        @static.fetch(key, &block)
      end
    end
tjoyal

comment created time in 2 months

Pull request review commentShopify/liquid

[StaticRegisters] Fetch raise on missing

The fetch change under review:

    def fetch(key, *args, &block)
      if @registers.key?(key)
        @registers.fetch(key)
      else
        @static.fetch(key, *args, &block)
      end
    end

Using a *args parameter will end up unnecessarily allocating an array, so it could cause a performance problem if it is called often enough. Although it doesn't look like we use this method internally in liquid, so it might not matter.
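
The per-call allocation is easy to observe: every call to a method with a *args parameter builds a fresh Array, even for identical arguments (the method name below is just for illustration):

```ruby
# Each call allocates a new Array to hold the splatted arguments.
def fetch_with_splat(key, *args)
  args
end

a = fetch_with_splat(:k, 1)
b = fetch_with_splat(:k, 1)
a.equal?(b)  # => false: two distinct Array objects were allocated
```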

tjoyal

comment created time in 2 months

PR merged Shopify/liquid

Fix ParseTreeVisitorTest for ruby-head

Problem

I noticed all the test/integration/parse_tree_visitor_test.rb tests were failing on liquid-c CI and was able to reproduce the same problem in liquid itself locally. The failures look like

  1) Error:
ParseTreeVisitorTest#test_variable:
ArgumentError: wrong number of arguments (given 1, expected 0)
    /Users/dylansmith/src/liquid/lib/liquid/parse_tree_visitor.rb:28:in `block in visit'
    /Users/dylansmith/src/liquid/lib/liquid/parse_tree_visitor.rb:27:in `map'
    /Users/dylansmith/src/liquid/lib/liquid/parse_tree_visitor.rb:27:in `visit'
    /Users/dylansmith/src/liquid/lib/liquid/parse_tree_visitor.rb:31:in `block in visit'
    /Users/dylansmith/src/liquid/lib/liquid/parse_tree_visitor.rb:27:in `map'
    /Users/dylansmith/src/liquid/lib/liquid/parse_tree_visitor.rb:27:in `visit'
    test/integration/parse_tree_visitor_test.rb:245:in `visit'
    test/integration/parse_tree_visitor_test.rb:11:in `test_variable'

The relevant ruby change that caused this to happen is https://bugs.ruby-lang.org/issues/16260.

Previously, the arity of the symbol proc (&:name in this case) given to ParseTreeVisitor#add_callback_for was -1, so it would pass the node to the symbol proc.

Now, the arity is -2, which more accurately reflects the fact that the first argument to the proc is required. This causes ParseTreeVisitor#add_callback_for to decide to pass two arguments to the block. The first argument is used as the receiver on which the method referenced by the symbol is called. The new Ruby feature forwards any additional arguments to that method, so the second block argument gets passed to the name method, but the name method doesn't take any arguments.
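
The behaviour can be seen directly on a symbol proc, using String#length here as a stand-in for the zero-argument name method:

```ruby
# A symbol proc uses its first argument as the receiver of the method call.
# Per https://bugs.ruby-lang.org/issues/16260 its arity became -2 (receiver
# required) instead of -1, and any extra block arguments are forwarded to the
# method -- which breaks methods that take no arguments.
sym_proc = :length.to_proc
sym_proc.call("ab")  # => 2, the receiver is the first argument
# On rubies with the change, a second argument is forwarded to #length:
#   sym_proc.call("ab", :extra)  # ArgumentError: wrong number of arguments
```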

Solution

Avoid using a symbol proc in the test to avoid the future ruby incompatibility.

Note that there was another unrelated ruby-head failure in a test counting object allocations. So I'm ignoring ruby-head failures for now. That looks like something that might need fixing upstream, but still requires deeper investigation.

+3 -1

1 comment

2 changed files

dylanahsmith

pr closed time in 2 months

delete branch Shopify/liquid

delete branch : remove-bad-arity-assumption

delete time in 2 months

push eventShopify/liquid

Dylan Thacker-Smith

commit sha 81149344a5ba53b30e8ab7d77d605dc484a0a3ff

Fix ParseTreeVisitorTest for ruby-head

view details

Dylan Thacker-Smith

commit sha c2f67398d05fcdd10db35a7f9541668323fedd06

Allow ruby-head failures Ignore an object allocation test failure on ruby-head for now.

view details

Dylan Thacker-Smith

commit sha bd0e53bd2e0dade901dc6fe013a5bba9a5dde02d

Merge pull request #1239 from Shopify/remove-bad-arity-assumption Fix ParseTreeVisitorTest for ruby-head

view details

push time in 2 months

issue commentShopify/graphql-batch

Loaders that are nested / based on another

I'm not able to reproduce this problem. Why do you say "the inner one is never executed"? What is the result that you are seeing? The then block should be called unless the promise resolves to an error.

23tux

comment created time in 2 months

issue closedShopify/liquid

Removing carriage returns

Hi,

This issue is related to https://github.com/Shopify/liquid/issues/460. As suggested in the issue by @fw42 , I used "strip_newline" filter but it is not removing the "\r" carriage return character while generating the XML input template. My sample text input is something like "ABC \r\n DEF" but the rendered template has the text as "ABC \r DEF". We are using DotLiquid v 2.0.314.0

closed time in 2 months

pravinkarthy

issue commentShopify/liquid

Removing carriage returns

We are using DotLiquid v 2.0.314.0

Looks like you are reporting this to the wrong repo. http://dotliquidmarkup.org/ says:

DotLiquid is a templating system ported to the .net framework from Ruby’s Liquid Markup.

This was fixed in the Ruby implementation a long time ago (https://github.com/Shopify/liquid/pull/203), so it should be easy to convince DotLiquid to port the change

pravinkarthy

comment created time in 2 months

issue commentShopify/identity_cache

Dalli integration

It doesn't look like Active Support's ActiveSupport::Cache::MemCacheStore provides CAS support or exposes the underlying Dalli::Client instance, so I think we need to add CAS support to Active Support if we want to integrate with ActiveSupport::Cache::MemCacheStore.

Alternatively, we can provide a cache adapter that integrates with a Dalli::Client instance directly. The cache adapter will be used by IdentityCache::CacheFetcher if it provides cas and cas_multi methods that behave like those provided by the memcached_store gem. In addition to those methods, it will also need to provide a write method that behaves like ActiveSupport::Cache::Store.

saiqulhaq

comment created time in 2 months

Pull request review commentShopify/liquid

add filters, adjust namespacing, add requires to initializer

+module Liquid+  module String+    def titleize(input)+      input.titleize+    end

This is another implicit Active Support dependency.

gatesporter8

comment created time in 2 months

Pull request review commentShopify/liquid

add filters, adjust namespacing, add requires to initializer

+module Liquid+  module String+    def titleize(input)+      input.titleize+    end++    def pluralize(num, singular, plural)+      num == 1 ? singular : plural

This assumption won't be valid when dealing with other languages

gatesporter8

comment created time in 2 months

Pull request review commentShopify/liquid

add filters, adjust namespacing, add requires to initializer

+module Liquid+  module Mixed+    def present(input)+      input.present?+    end++    def blank(input)+      input.blank?

These methods aren't defined in plain Ruby, so this is implicitly dependent on Active Support.

gatesporter8

comment created time in 2 months

Pull request review commentShopify/liquid

add filters, adjust namespacing, add requires to initializer

+module Liquid+  module Number+    def precision(input, *args)+      sigdigs = args[0] || 2+      sprintf("%.#{sigdigs}f", input.to_s.gsub(/[^0-9.]/, ''))+    end++    def ordinalize(input)+      input.ordinalize+    end++    # Given a number/float/string, convert it to a comma-separate, two-decimal-+    # place number. Make sure to maintain any currency symbols as well.+    # e.g. "$-250100" => "$-250,100.00"; 1234.5 => "1,234.50"+    # @param [Mixed] input - the value to filter.+    # @return [String] the money-fied result.+    def money(input)

This filter is quite hard coded to a specific money format

gatesporter8

comment created time in 2 months

Pull request review commentShopify/liquid

add filters, adjust namespacing, add requires to initializer

+module Liquid+  module Number+    def precision(input, *args)+      sigdigs = args[0] || 2+      sprintf("%.#{sigdigs}f", input.to_s.gsub(/[^0-9.]/, ''))+    end++    def ordinalize(input)+      input.ordinalize

This is another implicit dependency on Active Support

gatesporter8

comment created time in 2 months

Pull request review commentShopify/liquid

add filters, adjust namespacing, add requires to initializer

+module Liquid+  module Array+    def empty(input)+      input.empty?+    end++    def count(input)

Why not use the size filter?

gatesporter8

comment created time in 2 months

Pull request review commentShopify/liquid

add filters, adjust namespacing, add requires to initializer

+require 'utils'+module Liquid+  module Date+    def date(input, format)

There is already a date filter

gatesporter8

comment created time in 2 months

push eventlgierth/promise.rb

Dylan Thacker-Smith

commit sha 8cd9efe145630037a733ed4ca1518be758b7faf4

Fix tests

view details

push time in 2 months

push eventlgierth/promise.rb

Dylan Thacker-Smith

commit sha 533b4aea81a86cac5f1bfac83c2861bc9d530d40

Fix tests

view details

push time in 2 months

push eventlgierth/promise.rb

Dylan Thacker-Smith

commit sha 2be3992a966a6d4c3a667a4fe28f615841f20bb2

Fix tests

view details

push time in 2 months

create branch lgierth/promise.rb

branch : fix-tests

created branch time in 2 months

issue commentShopify/graphql-batch

Ruby 2.7 keyword arguments warning

Pushed a release (https://github.com/Shopify/graphql-batch/releases/tag/v0.4.3)

barthez

comment created time in 2 months

release Shopify/graphql-batch

v0.4.3

released time in 2 months

push eventShopify/graphql-batch

Dylan Thacker-Smith

commit sha de001c84f20d041dc3a5964ba886ac79545d993e

Add allowed_push_host to gemspec to fix release script

view details

push time in 2 months

push eventShopify/graphql-batch

Dylan Thacker-Smith

commit sha 8902644e4353228669ac2e94112a31e63f24348c

Release version 0.4.3

view details

push time in 2 months

push eventShopify/identity_cache

Dylan Thacker-Smith

commit sha 0e54659766453a35e502127b7912783e3ebdaf08

Fix broken prefetch_associations of a polymorphic cache_belongs_to (#461)

view details

push time in 2 months

delete branch Shopify/identity_cache

delete branch : fix-prefetch-polymorphic-belongs-to

delete time in 2 months

PR merged Shopify/identity_cache

Fix broken prefetch_associations of a polymorphic cache_belongs_to

cc @casperisfine

Problem

We noticed a weird test failure in https://github.com/Shopify/identity_cache/pull/459 which turned out to be a problem with the implementation of prefetch_associations for a polymorphic cache_belongs_to. It was making incorrect assumptions about what the load_batch load strategy method is supposed to be given as input: it was passing a nested hash when it was only supposed to pass a single hash.

I changed the failing tests, which were just asserting on the number of queries performed, to actually assert on the result of the prefetch. This resulted in the following test failures without the corresponding fix:

  1) Failure:
FetchMultiTest#test_fetch_multi_with_polymorphic_has_one [test/fetch_multi_test.rb:304]:
--- expected
+++ actual
@@ -1 +1 @@
-[#<Item id: 4, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<ItemTwo id: 1, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">]
+[#<PolymorphicRecord id: 1, owner_type: "Item", owner_id: 4, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<PolymorphicRecord id: 2, owner_type: "ItemTwo", owner_id: 1, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">]


  2) Failure:
FetchMultiTest#test_fetch_multi_with_polymorphic_has_many [test/fetch_multi_test.rb:322]:
--- expected
+++ actual
@@ -1 +1 @@
-[#<Item id: 4, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<Item id: 4, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<ItemTwo id: 1, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">]
+[#<Item id: 4, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<PolymorphicRecord id: 2, owner_type: "Item", owner_id: 4, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<PolymorphicRecord id: 3, owner_type: "ItemTwo", owner_id: 1, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">]

Solution

Fix the argument to load_strategy.load_batch. This also meant changing the code that handles the result of that call, since it previously relied on the result holding the owner record. I also renamed some variables that seemed confusing (e.g. a cache fetcher being called a cache key).
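The grouping part of the fix can be sketched as follows, using simplified hashes as stand-ins for the real reflection and record objects (and the type name standing in for the type fetcher):

```ruby
# Simplified owner records: each knows the type and id of its polymorphic target.
owner_records = [
  { foreign_type: "Item",    foreign_key: 4 },
  { foreign_type: "ItemTwo", foreign_key: 1 },
  { foreign_type: "Item",    foreign_key: 7 },
]

# Group plain id arrays per type fetcher, instead of nesting
# owner records inside the hash.
type_fetcher_to_db_ids_hash = {}
owner_records.each do |record|
  (type_fetcher_to_db_ids_hash[record[:foreign_type]] ||= []) << record[:foreign_key]
end

type_fetcher_to_db_ids_hash # => {"Item"=>[4, 7], "ItemTwo"=>[1]}
```

After the batch load, the owner records are walked again to pair each one with its fetched record, rather than smuggling the owners through the load_batch argument.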

+26 -17

0 comment

2 changed files

dylanahsmith

pr closed time in 2 months

push event Shopify/identity_cache

Dylan Thacker-Smith

commit sha 7e4cee0fc54e0dbd5b3c055b117cd72dd6269a06

Split line into two lines for readability

view details

push time in 2 months

Pull request review comment Shopify/identity_cache

Fix broken prefetch_associations of a polymorphic cache_belongs_to

-      def fetch(records)
+      def fetch_async(load_strategy, records)
         if reflection.polymorphic?
-          cache_keys_to_associated_ids = {}
+          type_fetcher_to_db_ids_hash = {}

           records.each do |owner_record|
             associated_id = owner_record.send(reflection.foreign_key)
             next unless associated_id && !owner_record.instance_variable_defined?(records_variable_name)
-            associated_cache_key = Object.const_get(
+            foreign_type_fetcher = Object.const_get(
               owner_record.send(reflection.foreign_type)
             ).cached_model.cached_primary_index
-            unless cache_keys_to_associated_ids[associated_cache_key]
-              cache_keys_to_associated_ids[associated_cache_key] = {}
-            end
-            cache_keys_to_associated_ids[associated_cache_key][associated_id] = owner_record
+            (type_fetcher_to_db_ids_hash[foreign_type_fetcher] ||= []) << associated_id
           end

-          load_strategy.load_batch(cache_keys_to_associated_ids) do |associated_records_by_cache_key|
+          load_strategy.load_batch(type_fetcher_to_db_ids_hash) do |batch_load_result|
             batch_records = []
-            associated_records_by_cache_key.each do |cache_key, associated_records|
-              associated_records.keys.each do |id, associated_record|
-                owner_record = cache_keys_to_associated_ids.fetch(cache_key).fetch(id)
-                batch_records << owner_record
-                write(owner_record, associated_record)
-              end
+
+            records.each do |owner_record|
+              associated_id = owner_record.send(reflection.foreign_key)

Wouldn't that just mean it would fail if the attribute method were marked private? I'm not sure what advantage that would provide, considering that this is meant to be internal to the model's implementation.
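For context on the comment above: send bypasses method visibility in Ruby, so calls like owner_record.send(reflection.foreign_key) keep working even when the attribute reader is private, whereas public_send would raise. A toy example:

```ruby
class Owner
  def owner_id
    4
  end
  private :owner_id # attribute reader marked private
end

Owner.new.send(:owner_id) # => 4 (send ignores visibility)

begin
  Owner.new.public_send(:owner_id) # raises NoMethodError for private methods
rescue NoMethodError
  # public_send enforces visibility, which is what the reviewer suggested
end
```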

dylanahsmith

comment created time in 2 months

push event Shopify/identity_cache

Dylan Thacker-Smith

commit sha d9938909109f2576489b8c3b244c2fcba21aa3c4

Remove a couple of internal tests for load_multi_from_db coercion

These shouldn't be needed, since we should be coercing before building the cache keys. It looks like these tests were added to test a line that is now removed.

view details

push time in 2 months

Pull request review comment Shopify/identity_cache

Stop passing the column to Connection#type_cast

 def load_one_from_db(id)       def load_multi_from_db(ids)         return {} if ids.empty? -        ids = ids.map { |id| model.connection.type_cast(id, id_column) }

I tried running the tests with puts caller if ids != ids.map { |id| model.connection.type_cast(id, id_column) } added, to see if we were failing to coerce the ids before building the cache keys, since that would cause more problems. In the process, I found a couple of tests for this line, but they were just calling this private method directly with heavy mocking/stubbing, so they don't represent real usage. I'm pushing a commit to remove those useless tests.
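A minimal sketch of why coercing ids before building cache keys matters (illustrative only, not IdentityCache's code): rows fetched from the database carry integer ids, so results keyed by uncoerced string ids would never match them.

```ruby
# Stand-in for rows fetched from the database, keyed by their integer ids.
rows_by_id = { 4 => :item_a, 1 => :item_b }

# Ids as they might arrive from a caller: uncoerced strings.
requested_ids = ["4", "1"]

without_coercion = requested_ids.map { |id| rows_by_id[id] }
with_coercion    = requested_ids.map { |id| rows_by_id[Integer(id)] }

without_coercion # => [nil, nil] -- string keys never match integer keys
with_coercion    # => [:item_a, :item_b]
```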

casperisfine

comment created time in 2 months

push event Shopify/identity_cache

Pierre Grimaud

commit sha b918c105a6d4f2877949ec30359f5df9600670a0

Fix typos

view details

Camilo Lopez

commit sha 088269e18dba182b099abc05c5f2183820be96f5

Merge pull request #457 from pgrimaud/master

Fix typo in README.md

view details

Dylan Thacker-Smith

commit sha daaba7351346c9ed67a146dd7e6720c6ad5ec7d8

Fix should_use_cache check to avoid calling it on the wrong class. (#454)

view details

Jean Boussier

commit sha cf771996e669d5436322225550ea9f9980c69810

Clear some warnings and test against Ruby 2.7

view details

Jean byroot Boussier

commit sha 95c3e3d4f3274976550389ded2f6243904e29029

Merge pull request #460 from Shopify/update-test-suite

Clear some warnings and test against Ruby 2.7

view details

Dylan Thacker-Smith

commit sha a4eeff986599815813465ed2c0ee8d69ac267ab5

Fix broken prefetch_associations of a polymorphic cache_belongs_to

view details

push time in 2 months

PR opened Shopify/identity_cache

Reviewers
Fix broken prefetch_associations of a polymorphic cache_belongs_to

cc @casperisfine

Problem

We noticed a weird test failure in https://github.com/Shopify/identity_cache/pull/459 which turned out to be a problem with the implementation of prefetch_associations for a polymorphic cache_belongs_to. The code was making incorrect assumptions about the input that the load_batch load strategy method expects: it was passing a nested hash when load_batch is only supposed to receive a single hash.

I changed the failing tests, which were only asserting on the number of queries performed, to assert on the actual result of the prefetch. Without the corresponding fix, this produced the following test failures:

  1) Failure:
FetchMultiTest#test_fetch_multi_with_polymorphic_has_one [test/fetch_multi_test.rb:304]:
--- expected
+++ actual
@@ -1 +1 @@
-[#<Item id: 4, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<ItemTwo id: 1, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">]
+[#<PolymorphicRecord id: 1, owner_type: "Item", owner_id: 4, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<PolymorphicRecord id: 2, owner_type: "ItemTwo", owner_id: 1, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">]


  2) Failure:
FetchMultiTest#test_fetch_multi_with_polymorphic_has_many [test/fetch_multi_test.rb:322]:
--- expected
+++ actual
@@ -1 +1 @@
-[#<Item id: 4, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<Item id: 4, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<ItemTwo id: 1, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">]
+[#<Item id: 4, item_id: nil, title: nil, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<PolymorphicRecord id: 2, owner_type: "Item", owner_id: 4, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">, #<PolymorphicRecord id: 3, owner_type: "ItemTwo", owner_id: 1, created_at: "2020-05-05 14:35:54", updated_at: "2020-05-05 14:35:54">]

Solution

Fix the argument to load_strategy.load_batch. This also meant changing the code that handles the result of that call, since it previously relied on the result holding the owner record. I also renamed some variables that seemed confusing (e.g. a cache fetcher being called a cache key).

+25 -17

0 comment

2 changed files

pr created time in 2 months

create branch Shopify/identity_cache

branch : fix-prefetch-polymorphic-belongs-to

created branch time in 2 months

pull request comment Shopify/identity_cache

Stop passing the column to Connection#type_cast

Yeah, that's what I mean about it looking like a bug. I'll open another PR with a fix.

casperisfine

comment created time in 2 months
