Interpreting Machine Learning Models with SHAP
Curious about what drives your model’s predictions?
This focused workshop dives deep into SHAP (SHapley Additive exPlanations), one of the most powerful tools for local and global model interpretability.
✅ What you’ll learn:
- The theory behind SHAP and Shapley values
- How to use SHAP for local and global interpretability
- Visualizing SHAP values effectively
- Common pitfalls, best practices, and performance tips
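To give a taste of the theory covered in the workshop, here is a minimal sketch of the classic Shapley value formula computed exactly by brute force over all coalitions. This is illustrative code written for this page, not workshop material: it uses the common simplification of replacing absent features with a fixed baseline value, and the toy model `f` is a made-up example.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for the prediction f(x).

    Features outside a coalition S are set to their baseline value
    (a common simplification of the conditional expectation).
    Runtime is exponential in the number of features, which is why
    SHAP uses model-specific approximations in practice.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear model: for f(x) = 2*x0 + 3*x1 the Shapley values are
# w_i * (x_i - baseline_i), and attributions sum to f(x) - f(baseline).
f = lambda x: 2 * x[0] + 3 * x[1]
phi = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
print(phi)  # [2.0, 3.0]
```

In the workshop we start from this exact formulation and work up to the efficient estimators in the `shap` library.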
🧑‍💻 Format:
- Focused theory sessions on SHAP
- Hands-on coding with Jupyter Notebooks
- Interactive exercises and discussions
🎯 Who it’s for:
ML practitioners, researchers, and data scientists with basic Python & ML knowledge.
Interested?
Get in touch at chris@christophmolnar.com to discuss scope, format, and timing; the workshop can be adapted to your team’s needs. For in-house workshops, I can travel to any location easily reachable from Munich. I look forward to helping your team!
See also my other workshops