Global explainability aims at making the ML model's overall behavior transparent and comprehensible
Local explainability focuses on explaining the model’s individual predictions
Interpretability refers to how accurately a machine learning model can associate a cause with an effect, whereas explainability refers to the extent to which the parameters hidden in deep neural nets can be explained in order to justify the model's results
Feature importance is a technique that scores (i.e., assigns an importance to) each feature in the training data to indicate how useful or valuable that feature is relative to the others; a minimal sketch of one way to compute such scores follows this list. Related concepts and methods include:
Feature attribution - indicates how much each feature in your model contributed to the prediction for a given instance
Sampled Shapley - provides a sampling approximation of exact Shapley values to determine black-box feature importance (see the second sketch after this list)

Integrated gradients - computes the gradients of the prediction with respect to the input features along a path from a baseline to the actual input
XRAI (eXplanation with Ranked Area Integrals) - builds on integrated gradients to identify which regions of an image, rather than individual pixels, contribute most to a prediction
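
As a concrete illustration of the scoring idea above, here is a minimal sketch using permutation importance from scikit-learn; the dataset, model, and scoring setup are placeholder assumptions, not ones prescribed by these notes:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy dataset and model; any fitted estimator works the same way.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature column at a time and measure the drop in score;
# the bigger the drop, the more the model relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five highest-scoring features.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.4f}")
```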

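A minimal, model-agnostic sketch of the sampling idea behind Sampled Shapley: features are added in a random order, each feature is credited with the change in prediction it causes, and the credits are averaged over many permutations. The helper name, the baseline-replacement scheme, and `predict_fn` are illustrative assumptions, not the exact managed implementation:

```python
import numpy as np

def sampled_shapley(predict_fn, instance, baseline, n_samples=200, seed=0):
    """Monte Carlo approximation of Shapley values for one instance.

    Features the permutation has not yet 'added' keep their baseline
    values, so the model is treated as a black box (no gradients needed).
    """
    rng = np.random.default_rng(seed)
    n_features = instance.shape[0]
    attributions = np.zeros(n_features)

    for _ in range(n_samples):
        current = baseline.astype(float).copy()
        prev = predict_fn(current[None, :])[0]
        # Add features in a random order; credit each feature with the
        # change in the prediction caused by switching it on.
        for i in rng.permutation(n_features):
            current[i] = instance[i]
            pred = predict_fn(current[None, :])[0]
            attributions[i] += pred - prev
            prev = pred

    return attributions / n_samples

# Usage with any black-box scoring function, e.g. a fitted model's
# probability for the positive class (model, x, x_baseline are assumed):
# phi = sampled_shapley(lambda X: model.predict_proba(X)[:, 1], x, x_baseline)
```
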
Differentiable models - every operation in the TensorFlow graph can be differentiated; use the integrated gradients method (a sketch follows below)
Nondifferentiable models - contain nondifferentiable operations, such as decoding and rounding; use the sampled Shapley method
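
A minimal sketch of integrated gradients for a differentiable, Keras-style model with a single scalar output (an assumption; for a multi-class model, index the target class inside the tape). It approximates the path integral of the prediction's gradients from a baseline to the input:

```python
import tensorflow as tf

def integrated_gradients(model, baseline, inputs, steps=50):
    """Integrated gradients along a straight-line path baseline -> input."""
    # Interpolation coefficients in [0, 1], shaped to broadcast over inputs.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1),
                        [-1] + [1] * len(inputs.shape))
    # Interpolated inputs along the path from the baseline to the input.
    path = baseline[None, ...] + alphas * (inputs - baseline)[None, ...]

    with tf.GradientTape() as tape:
        tape.watch(path)
        preds = model(path)
    grads = tape.gradient(preds, path)

    # Trapezoidal approximation of the path integral of the gradients.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (inputs - baseline) * avg_grads

# Example with a hypothetical Keras model and an all-zeros baseline:
# attributions = integrated_gradients(model, tf.zeros_like(x), x, steps=100)
```
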
Detecting bias and ensuring fairness in data