Model Interpretability

Categories: notes

Understanding, and being able to justify, what your machine learning model has learned is essential.

In this post, I cover:

  • understanding models through their weights
  • understanding how individual features interact, using partial dependence plots
  • justifying individual predictions with SHAP and LIME
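
The first technique above, weight-based interpretation, can be sketched with a scikit-learn linear model (the dataset and model choice here are illustrative, not from the post): after fitting, each coefficient indicates how strongly its feature pushes the prediction, provided the features are on a comparable scale.

```python
# Minimal sketch: interpreting a linear model via its learned weights.
# Assumes scikit-learn; dataset and model are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()

# Standardize first so coefficient magnitudes are comparable across features.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(data.data, data.target)

# Pair each feature with its coefficient and sort by absolute magnitude;
# larger |weight| means stronger influence on the model's output.
coefs = model[-1].coef_[0]
weights = sorted(zip(data.feature_names, coefs),
                 key=lambda pair: abs(pair[1]), reverse=True)

for name, w in weights[:5]:
    print(f"{name}: {w:+.3f}")
```

Note the standardization step: without it, raw coefficient sizes reflect feature units rather than importance, which is a common pitfall when reading weights directly.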
