Instead of doing data + parity recovery in separate passes, do everything in one pass.

```
benchmark                            old MB/s     new MB/s     speedup
BenchmarkReconstruct50x5x50000-32    74908.09     109926.03    1.47x
BenchmarkReconstruct10x2x1M-32       165523.19    249533.53    1.51x
BenchmarkReconstruct5x2x1M-32        141100.72    217592.22    1.54x
BenchmarkReconstruct10x4x1M-32       144233.98    239901.83    1.66x
BenchmarkReconstruct50x20x1M-32      39208.33     52027.88     1.33x
BenchmarkReconstruct10x4x16M-32      40617.55     54814.64     1.35x
```
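To make the single-pass idea concrete, here is a minimal sketch of the matrix composition it relies on. This uses float64 linear algebra as a stand-in for the library's GF(2^8) arithmetic, and every name here is hypothetical, not code from the package. The point is that the output rows for missing data shards (rows of the inverted decode matrix) and for missing parity shards (parity generator rows composed with that inverse) can be stacked into one matrix and applied to the surviving shards in a single multiplication, instead of first decoding data and then re-encoding parity.

```go
// Illustrative sketch only: float64 stands in for GF(2^8), and all names
// are hypothetical — this is not code from the reedsolomon package.
package main

import "fmt"

// invert computes the inverse of a square matrix via Gauss-Jordan
// elimination. Assumes the matrix is invertible.
func invert(m [][]float64) [][]float64 {
	n := len(m)
	aug := make([][]float64, n) // augmented matrix [m | I]
	for i := range aug {
		aug[i] = make([]float64, 2*n)
		copy(aug[i], m[i])
		aug[i][n+i] = 1
	}
	for c := 0; c < n; c++ {
		p := c
		for aug[p][c] == 0 { // find a usable pivot row
			p++
		}
		aug[c], aug[p] = aug[p], aug[c]
		pv := aug[c][c]
		for j := range aug[c] { // scale the pivot row
			aug[c][j] /= pv
		}
		for i := 0; i < n; i++ { // eliminate the column elsewhere
			if i != c && aug[i][c] != 0 {
				f := aug[i][c]
				for j := range aug[i] {
					aug[i][j] -= f * aug[c][j]
				}
			}
		}
	}
	inv := make([][]float64, n)
	for i := range inv {
		inv[i] = aug[i][n:]
	}
	return inv
}

// matmul returns a×b for a (r×k) and b (k×c).
func matmul(a, b [][]float64) [][]float64 {
	r, k, c := len(a), len(b), len(b[0])
	out := make([][]float64, r)
	for i := range out {
		out[i] = make([]float64, c)
		for j := 0; j < c; j++ {
			for t := 0; t < k; t++ {
				out[i][j] += a[i][t] * b[t][j]
			}
		}
	}
	return out
}

func main() {
	// 2 data shards, 2 parity shards; generator rows: identity + parity.
	gen := [][]float64{
		{1, 0}, // data shard 0
		{0, 1}, // data shard 1
		{1, 1}, // parity shard 0: d0 + d1
		{1, 2}, // parity shard 1: d0 + 2*d1
	}
	data := [][]float64{{3}, {5}} // one-element shards for brevity

	// Data shard 1 and parity shard 1 are lost; rows 0 and 2 survive.
	survivors := [][]float64{gen[0], gen[2]}
	decode := invert(survivors) // maps surviving shard values back to data

	// One pass: stack output rows for BOTH the missing data shard (a row of
	// decode) and the missing parity shard (its generator row composed with
	// decode), then apply them to the survivors in one multiplication.
	parityOut := matmul([][]float64{gen[3]}, decode)
	outMatrix := [][]float64{decode[1], parityOut[0]}

	surviving := matmul(survivors, data) // shard values we still hold
	restored := matmul(outMatrix, surviving)
	fmt.Println("restored data shard 1:", restored[0][0])   // expect 5
	fmt.Println("restored parity shard 1:", restored[1][0]) // expect 3+2*5 = 13
}
```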
This will apply whenever parity is recovered; pure data recovery remains the same.
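For context, a minimal caller-side example using the public reedsolomon API (`New`, `Encode`, `Reconstruct`, `Verify`); the shard counts and sizes here are arbitrary. When a parity shard is among the missing ones, `Reconstruct` is the call that hits the combined path, while `ReconstructData` keeps the unchanged pure-data path.

```go
package main

import (
	"bytes"
	"log"

	"github.com/klauspost/reedsolomon"
)

func main() {
	enc, err := reedsolomon.New(10, 4) // 10 data shards, 4 parity shards
	if err != nil {
		log.Fatal(err)
	}

	// Allocate 14 equal-size shards and fill the data shards with payload;
	// Encode overwrites the 4 parity shards.
	shards := make([][]byte, 14)
	for i := range shards {
		shards[i] = bytes.Repeat([]byte{byte(i)}, 1024)
	}
	if err := enc.Encode(shards); err != nil {
		log.Fatal(err)
	}

	// Lose one data shard and one parity shard (nil marks a missing shard).
	shards[3], shards[12] = nil, nil

	// Reconstruct rebuilds both; with a parity shard missing this exercises
	// the single-pass recovery.
	if err := enc.Reconstruct(shards); err != nil {
		log.Fatal(err)
	}
	ok, err := enc.Verify(shards)
	if err != nil || !ok {
		log.Fatal("verification failed")
	}
	log.Println("all shards restored")
}
```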