MLflow
MLflow Models can package models from many frameworks, including PyTorch, ONNX, and scikit-learn, and deploy them to a variety of targets such as a local REST server, Docker containers, or cloud platforms, making it a versatile tool for model deployment across frameworks.
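As a minimal sketch of the Models workflow, the snippet below trains a scikit-learn classifier and logs it in the MLflow Model format; the model, data, and artifact path are illustrative assumptions, not details from this text.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a simple scikit-learn model (any supported framework works similarly).
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run() as run:
    # Package the model plus its dependencies in the standard MLflow Model format.
    mlflow.sklearn.log_model(model, "model")
    print(f"Logged model in run {run.info.run_id}")

# The packaged model can then be served as a local REST endpoint, e.g.:
#   mlflow models serve -m runs:/<run_id>/model
```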
MLflow Tracking lets users record, visualize, and compare different experiment runs, making it easier to analyze the performance of various models and configurations.
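A minimal sketch of logging runs for later comparison in the tracking UI follows; the hyperparameter names and metric values are hypothetical, chosen only to illustrate the API.

```python
import mlflow

# Log two runs with different hyperparameters so they can be compared
# side by side in the tracking UI. The parameter and metric names/values
# below are illustrative assumptions, not taken from this text.
for lr in (0.01, 0.1):
    with mlflow.start_run(run_name=f"lr={lr}"):
        mlflow.log_param("learning_rate", lr)
        mlflow.log_metric("accuracy", 0.93 if lr == 0.01 else 0.88)

# Launch the UI with `mlflow ui` and open http://localhost:5000 to compare runs.
```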
Summary
MLflow's components support tracking experiments, packaging reproducible projects, and deploying models in a standardized way.
Key Points
• Tracking – Logs parameters, metrics, and models to monitor runs
• Projects – Standardizes ML code organization and dependencies
• Models – Packages models and their dependencies for deployment
Reflection Questions
1. How could you use the tracking UI to compare model experiments?
2. Why is it useful for MLflow Projects to specify software environments?
3. What model deployment platforms does MLflow support?
Challenge Exercises
4. Use MLflow Tracking with local model-training experiments
5. Containerize an MLflow Project to ensure software consistency
6. Deploy a registered MLflow model and request real-time inferences
Key Terms
• Conda Environment – Specifies the dependencies and software required to recreate the runtime environment
• Entry Points – Define the scripts that can be executed within the project workflow
• Git Repository – A remote repository containing an MLflow project, enabling portability
• mlflow run – Executes an MLflow project locally or from a Git repository with given parameters (see the sketch below)
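As a sketch of how these pieces fit together, the call below runs a project directly from a Git repository via MLflow's Python API. The repository URI and the alpha parameter follow MLflow's public example project and are assumptions here, not details from this text.

```python
import mlflow

# Run an MLflow Project from a remote Git repository. MLflow clones the
# repo, recreates the software environment the project declares (e.g. its
# conda environment), and invokes the requested entry point with the given
# parameters. The URI and parameter below are illustrative assumptions.
submitted = mlflow.projects.run(
    uri="https://github.com/mlflow/mlflow-example",
    entry_point="main",
    parameters={"alpha": 0.5},
)
print(f"Run {submitted.run_id} finished with status {submitted.get_status()}")
```

The equivalent CLI invocation would be `mlflow run https://github.com/mlflow/mlflow-example -P alpha=0.5`.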