# ⚡️ Speed up function `unmarshal_json` by 238% (#125)

**Open** · codeflash-ai[bot] wants to merge 1 commit into `main` from `codeflash/optimize-unmarshal_json-mh4jmtfq`
## Conversation
📄 **238% (2.38x) speedup** for `unmarshal_json` in `src/mistralai/utils/serializers.py`

⏱️ **Runtime:** 24.3 milliseconds → 7.19 milliseconds (best of 110 runs)

📝 **Explanation and details**
The optimization introduces **LRU caching for Pydantic model creation**, which eliminates the expensive overhead of repeatedly creating the same unmarshaller models.

**Key changes:**

- Extracted model creation into a `_get_unmarshaller()` function decorated with `@lru_cache(maxsize=64)` (a minimal sketch of this pattern appears below)
- The `create_model()` call, which accounted for 93.8% of execution time in the original code, is now cached and reused for identical types

**Why this optimization works:**

- `create_model()` is computationally expensive because it dynamically creates new Pydantic model classes with validation logic
- The line profiler shows the original `create_model()` call took ~55.8 ms of the 59.5 ms total (93.8% of the time)
- With caching, subsequent calls for the same `typ` retrieve the pre-built model in ~0.44 ms instead of recreating it
- The cache hit ratio is high, since applications typically unmarshal the same types repeatedly

**Performance benefits:**

- **237% speedup** overall (24.3 ms → 7.19 ms, roughly 3.4x)
- Individual test cases show **4,000–10,000% improvements** for simple types, which benefit most from caching
- Large data structures (1,000-item lists and dicts) show more modest but still significant gains (300–1,000% faster)

This optimization is particularly effective for workloads that repeatedly deserialize the same data types, which is common in API clients, data processing pipelines, and serialization-heavy applications.
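For illustration, here is a minimal sketch of the cached pattern. It assumes the unmarshaller wraps the payload in a single `body` field of a dynamically created Pydantic model; the exact field layout and signatures in `src/mistralai/utils/serializers.py` may differ.

```python
import json
from functools import lru_cache
from typing import Any, List

from pydantic import create_model


@lru_cache(maxsize=64)
def _get_unmarshaller(typ: Any):
    # create_model() dynamically builds a Pydantic model class with
    # validation logic, the step that took ~93.8% of the original runtime.
    # Caching means each distinct `typ` pays this cost only once.
    # (Typing objects such as List[int] are hashable, so they work as
    # lru_cache keys.)
    return create_model("Unmarshaller", body=(typ, ...))


def unmarshal_json(raw: str, typ: Any) -> Any:
    # On a cache hit this retrieves the pre-built model (~0.44 ms in the
    # profile above) instead of rebuilding it on every call.
    unmarshaller = _get_unmarshaller(typ)
    return unmarshaller(body=json.loads(raw)).body


# First call for List[int] builds and caches the model; repeat calls reuse it.
print(unmarshal_json("[1, 2, 3]", List[int]))  # -> [1, 2, 3]
```

The `maxsize=64` bound keeps memory predictable while still covering the handful of distinct response types a typical client touches; the cache stores model classes, not payload data.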
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
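As a quick, hypothetical way to observe the effect locally (this is not the generated regression suite), one could time repeated calls against the sketch above:

```python
import timeit
from typing import List

payload = "[1, 2, 3]"

# Warm-up: the first call pays the create_model() cost and caches the result.
unmarshal_json(payload, List[int])

# Subsequent calls hit the LRU cache, so this loop measures mostly JSON
# parsing and validation rather than model construction.
print(timeit.timeit(lambda: unmarshal_json(payload, List[int]), number=1000))
```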
To edit these changes, run `git checkout codeflash/optimize-unmarshal_json-mh4jmtfq` and push.