Interpretable Machine Learning for Tabular Data
Want to make your machine learning models more transparent and trustworthy?
In this interactive workshop, you’ll learn practical techniques to interpret ML models, understand their decisions, and communicate results with confidence.
✅ What you’ll learn:
- Key concepts and methods in interpretable ML
- Local interpretability methods such as SHAP and LIME
- Global interpretability methods such as Partial Dependence Plots and Permutation Feature Importance (see the short code sketch after this list)
- Limitations, pitfalls, and best practices
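To give you a flavor of the hands-on part, here is a minimal sketch of the kind of code we work with in the notebooks. It is illustrative only, assuming scikit-learn and the shap package; the dataset, model, and parameters are placeholders, not the actual workshop material.

```python
# Illustrative sketch: fit a model on tabular data, then explain it
# locally with SHAP and globally with permutation feature importance.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Local explanation: SHAP values for a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[[0]])
print("SHAP values for one prediction:", shap_values)

# Global view: permutation feature importance on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In the workshop we go beyond producing such numbers and plots: we discuss what they do and do not tell you, and how to communicate them.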
🧑‍💻 Format:
- Concise theory segments, grounded in practice
- Hands-on coding with Jupyter Notebooks
- Interactive exercises and group discussions
🎯 Who it’s for:
ML practitioners, researchers, and data scientists with basic Python & ML knowledge.
Interested?
Get in touch at chris@christophmolnar.com to discuss scope, format, and timing. The workshop can be tailored to your team's needs. For in-house workshops, I can travel to any location easily reachable from Munich. I look forward to helping your team!
See also my other workshops