Improve explanations for AutoML image classification
When you are working with AutoML image models, you can configure
specific parameters to improve your explanations.
The Vertex Explainable AI feature attribution
methods are all based on variants of Shapley values. Because computing
exact Shapley values is computationally expensive, Vertex Explainable AI
provides approximations instead of the exact values.
You can reduce the approximation error and get closer to the exact values by
increasing the number of integral steps (used by the integrated gradients and
XRAI methods) or the number of paths (used by the sampled Shapley method), as
shown in the sketch below.
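The following minimal sketch uses the Vertex AI SDK for Python
(google-cloud-aiplatform) to construct explanation parameters with a larger
approximation budget. The step_count and path_count values shown are
illustrative choices, not recommended defaults, and how you attach the
resulting object depends on your workflow (for example, custom-trained models
accept it through the explanation_parameters argument of
aiplatform.Model.upload).

```python
from google.cloud import aiplatform

# Illustrative values only: a higher step_count (integrated gradients / XRAI)
# or path_count (sampled Shapley) reduces the approximation error, but each
# explanation request costs more compute and takes longer.
ig_parameters = aiplatform.explain.ExplanationParameters(
    {"integrated_gradients_attribution": {"step_count": 50}}
)

shapley_parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 25}}
)
```

Because the error shrinks as these values grow while cost rises roughly
linearly, a common approach is to increase them until attributions stop
changing meaningfully between runs, then stop.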