Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to evaluate, defend, certify and verify Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, generation, certification, etc.).
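As a quick sketch of the red-team workflow, the example below wraps a scikit-learn model in an ART estimator and crafts adversarial examples with the Fast Gradient Method. It assumes a recent ART release where `SklearnClassifier` and `FastGradientMethod` live under `art.estimators.classification` and `art.attacks.evasion`; the Iris dataset and the `eps` value are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary scikit-learn model (any framework ART supports would do).
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model in an ART estimator so attacks and defences can query it.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# Craft adversarial examples with the Fast Gradient Method (an evasion attack).
attack = FastGradientMethod(classifier, eps=0.2)
X_adv = attack.generate(x=X)

# Measure how much accuracy drops on the perturbed inputs.
clean_acc = np.mean(np.argmax(classifier.predict(X), axis=1) == y)
adv_acc = np.mean(np.argmax(classifier.predict(X_adv), axis=1) == y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The same pattern applies to the other supported frameworks: wrap the trained model in the matching ART estimator, then hand it to any attack or defence.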
Features
- Adversarial threats covered: Evasion, Poisoning, Extraction, and Inference
- ART for Red and Blue Teams (selection; see the defence sketch after this list)
- The library is under continuous development
- Documentation available
- Examples available
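On the blue-team side, a minimal sketch of a preprocessing defence is shown below. It assumes the `FeatureSqueezing` preprocessor in `art.defences.preprocessor`; the random input batch and the 4-bit depth are placeholder choices for illustration.

```python
import numpy as np

from art.defences.preprocessor import FeatureSqueezing

# Placeholder input batch scaled to [0, 1]; substitute your own data.
x = np.random.rand(16, 28, 28, 1).astype(np.float32)

# Reduce input precision to 4 bits per feature, removing the small
# perturbations many evasion attacks rely on.
squeezer = FeatureSqueezing(clip_values=(0.0, 1.0), bit_depth=4)
x_squeezed, _ = squeezer(x)

print(x_squeezed.min(), x_squeezed.max())  # values stay within [0, 1]
```

Preprocessor defences like this one can also be attached to an ART estimator so they are applied automatically before every prediction.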
License
MIT License