Appendix D — References

Aas, Kjersti, Martin Jullum, and Anders Løland. 2021. “Explaining Individual Predictions When Features Are Dependent: More Accurate Approximations to Shapley Values.” Artificial Intelligence 298: 103502. https://doi.org/10.1016/j.artint.2021.103502.
Adadi, Amina, and Mohammed Berrada. 2018. “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI).” IEEE Access 6: 52138–60. https://doi.org/10.1109/ACCESS.2018.2870052.
Adebayo, Julius, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. “Sanity Checks for Saliency Maps.” In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 9525–36. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
Alain, Guillaume, and Yoshua Bengio. 2018. “Understanding Intermediate Layers Using Linear Classifier Probes.” arXiv. https://doi.org/10.48550/arXiv.1610.01644.
Alber, Maximilian, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, and Pieter-Jan Kindermans. 2019. “iNNvestigate Neural Networks!” Journal of Machine Learning Research 20 (93): 1–8. http://jmlr.org/papers/v20/18-540.html.
Alberto, Túlio C., Johannes V. Lochter, and Tiago A. Almeida. 2015. “TubeSpam: Comment Spam Filtering on YouTube.” In 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), 138–43. IEEE.
Allaire, JJ, Yihui Xie, Christophe Dervieux, Jonathan McPherson, Javier Luraschi, Kevin Ushey, Aron Atkins, et al. 2024. rmarkdown: Dynamic Documents for R. https://github.com/rstudio/rmarkdown.
Alvarez-Melis, David, and Tommi S. Jaakkola. 2018. “On the Robustness of Interpretability Methods.” arXiv. https://doi.org/10.48550/arXiv.1806.08049.
Apley, Daniel W., and Jingyu Zhu. 2020. “Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models.” Journal of the Royal Statistical Society Series B: Statistical Methodology 82 (4): 1059–86. https://doi.org/10.1111/rssb.12377.
Athalye, Anish, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. 2018. “Synthesizing Robust Adversarial Examples.” In International Conference on Machine Learning, 284–93. PMLR.
Bach, Sebastian, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. “On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.” PLOS ONE 10 (7): e0130140. https://doi.org/10.1371/journal.pone.0130140.
Barrett, Tyson, Matt Dowle, Arun Srinivasan, Jan Gorecki, Michael Chirico, Toby Hocking, and Benjamin Schwendinger. 2024. data.table: Extension of data.frame. https://CRAN.R-project.org/package=data.table.
Bau, David, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2017. “Network Dissection: Quantifying Interpretability of Deep Visual Representations.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3319–27. https://doi.org/10.1109/CVPR.2017.354.
Biecek, Przemyslaw. 2018. “DALEX: Explainers for Complex Predictive Models in R.” Journal of Machine Learning Research 19 (84): 1–5. https://jmlr.org/papers/v19/18-416.html.
———. 2020. ceterisParibus: Ceteris Paribus Profiles. https://CRAN.R-project.org/package=ceterisParibus.
Biggio, Battista, and Fabio Roli. 2018. “Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning.” Pattern Recognition 84 (December): 317–31. https://doi.org/10.1016/j.patcog.2018.07.023.
Bilodeau, Blair, Natasha Jaques, Pang Wei Koh, and Been Kim. 2024. “Impossibility Theorems for Feature Attribution.” Proceedings of the National Academy of Sciences 121 (2): e2304406120. https://doi.org/10.1073/pnas.2304406120.
Biran, Or, and Courtenay V. Cotton. 2017. “Explanation and Justification in Machine Learning: A Survey.” In Proceedings of the IJCAI-17 Workshop on Explainable Artificial Intelligence (XAI). https://www.cs.columbia.edu/~orb/papers/xai_survey_paper_2017.pdf.
Borgelt, Christian. 2005. “An Implementation of the FP-Growth Algorithm.” In Proceedings of the 1st International Workshop on Open Source Data Mining: Frequent Pattern Mining Implementations, 1–5. OSDM ’05. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1133905.1133907.
Breiman, Leo. 2001. “Random Forests.” Machine Learning 45 (1): 5–32. https://doi.org/10.1023/A:1010933404324.
Brown, Tom B., Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. 2018. “Adversarial Patch.” arXiv. https://doi.org/10.48550/arXiv.1712.09665.
Bühlmann, Peter, and Torsten Hothorn. 2007. “Boosting Algorithms: Regularization, Prediction and Model Fitting.” Statistical Science 22 (4): 477–505. https://doi.org/10.1214/07-STS242.
Caruana, Rich, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. 2015. “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission.” In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721–30. KDD ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2783258.2788613.
Chen, Zhi, Yijie Bei, and Cynthia Rudin. 2020. “Concept Whitening for Interpretable Image Recognition.” Nature Machine Intelligence 2 (12): 772–82. https://doi.org/10.1038/s42256-020-00265-z.
Cohen, William W. 1995. “Fast Effective Rule Induction.” In Machine Learning Proceedings 1995, edited by Armand Prieditis and Stuart Russell, 115–23. San Francisco (CA): Morgan Kaufmann. https://doi.org/10.1016/B978-1-55860-377-6.50023-2.
Cook, R. Dennis. 1977. “Detection of Influential Observation in Linear Regression.” Technometrics 19 (1): 15–18. https://doi.org/10.1080/00401706.1977.10489493.
Dandl, Susanne, Christoph Molnar, Martin Binder, and Bernd Bischl. 2020. “Multi-Objective Counterfactual Explanations.” In Parallel Problem Solving from Nature – PPSN XVI, edited by Thomas Bäck, Mike Preuss, André Deutz, Hao Wang, Carola Doerr, Michael Emmerich, and Heike Trautmann, 448–69. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-58112-1_31.
Deb, K., A. Pratap, S. Agarwal, and T. Meyarivan. 2002. “A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II.” IEEE Transactions on Evolutionary Computation 6 (2): 182–97. https://doi.org/10.1109/4235.996017.
DeLMA, and Will Cukierski. 2013. “The ICML 2013 Whale Challenge - Right Whale Redux.” https://kaggle.com/competitions/the-icml-2013-whale-challenge-right-whale-redux.
Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. “ImageNet: A Large-Scale Hierarchical Image Database.” In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–55. https://doi.org/10.1109/CVPR.2009.5206848.
Doshi-Velez, Finale, and Been Kim. 2017. “Towards a Rigorous Science of Interpretable Machine Learning.” arXiv Preprint arXiv:1702.08608.
Fanaee-T, Hadi, and Joao Gama. 2014. “Event Labeling Combining Ensemble Detectors and Background Knowledge.” Progress in Artificial Intelligence 2 (2): 113–27. https://doi.org/10.1007/s13748-013-0040-3.
Feinerer, Ingo, and Kurt Hornik. 2024. tm: Text Mining Package. https://CRAN.R-project.org/package=tm.
Feinerer, Ingo, Kurt Hornik, and David Meyer. 2008. “Text Mining Infrastructure in R.” Journal of Statistical Software 25 (5): 1–54. https://doi.org/10.18637/jss.v025.i05.
Fisher, Aaron, Cynthia Rudin, and Francesca Dominici. 2019. “All Models Are Wrong, but Many Are Useful: Learning a Variable’s Importance by Studying an Entire Class of Prediction Models Simultaneously.” Journal of Machine Learning Research 20 (177): 1–81. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8323609/.
Flora, Montgomery, Corey Potvin, Amy McGovern, and Shawn Handler. 2022. “Comparing Explanation Methods for Traditional Machine Learning Models Part 1: An Overview of Current Methods and Quantifying Their Disagreement.” arXiv. http://arxiv.org/abs/2211.08943.
Fokkema, Marjolein. 2020. “Fitting Prediction Rule Ensembles with R Package pre.” Journal of Statistical Software 92 (12): 1–30. https://doi.org/10.18637/jss.v092.i12.
Freedman, David, and Persi Diaconis. 1981. “On the Histogram as a Density Estimator: L2 Theory.” Zeitschrift für Wahrscheinlichkeitstheorie Und Verwandte Gebiete 57 (4): 453–76. https://doi.org/10.1007/BF01025868.
Freiesleben, Timo, Gunnar König, Christoph Molnar, and Álvaro Tejero-Cantero. 2024. “Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena.” Minds and Machines 34 (3): 32. https://doi.org/10.1007/s11023-024-09691-z.
Friedman, Jerome H. 2001. “Greedy Function Approximation: A Gradient Boosting Machine.” The Annals of Statistics 29 (5): 1189–1232. https://doi.org/10.1214/aos/1013203451.
Friedman, Jerome H., and Bogdan E. Popescu. 2008. “Predictive Learning via Rule Ensembles.” The Annals of Applied Statistics 2 (3): 916–54. https://www.jstor.org/stable/30245114.
Friedman, Jerome, Robert Tibshirani, and Trevor Hastie. 2010. “Regularization Paths for Generalized Linear Models via Coordinate Descent.” Journal of Statistical Software 33 (1): 1–22. https://doi.org/10.18637/jss.v033.i01.
Fürnkranz, Johannes, Dragan Gamberger, and Nada Lavrač. 2012. Foundations of Rule Learning. Cognitive Technologies. Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-540-75197-7.
Garnier, Simon, Noam Ross, Robert Rudis, Antônio Camargo, et al. 2024. viridis(Lite) - Colorblind-Friendly Color Maps for R. https://doi.org/10.5281/zenodo.4679423.
Gauss, Carl Friedrich. 1877. Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientium. Vol. 7. FA Perthes.
Ghorbani, Amirata, Abubakar Abid, and James Zou. 2019. “Interpretation of Neural Networks Is Fragile.” Proceedings of the AAAI Conference on Artificial Intelligence 33 (01): 3681–88. https://doi.org/10.1609/aaai.v33i01.33013681.
Ghorbani, Amirata, James Wexler, James Zou, and Been Kim. 2019. “Towards Automatic Concept-Based Explanations.” In Proceedings of the 33rd International Conference on Neural Information Processing Systems, 32:9277–86. Red Hook, NY, USA: Curran Associates Inc.
Goldstein, Alex, Adam Kapelner, Justin Bleich, and Emil Pitkin. 2015. “Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation.” Journal of Computational and Graphical Statistics 24 (1): 44–65. https://doi.org/10.1080/10618600.2014.907095.
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2015. “Explaining and Harnessing Adversarial Examples.” arXiv. https://doi.org/10.48550/arXiv.1412.6572.
Gorman, Kristen B., Tony D. Williams, and William R. Fraser. 2014. “Ecological Sexual Dimorphism and Environmental Variability Within a Community of Antarctic Penguins (Genus Pygoscelis).” PLOS ONE 9 (3): e90081. https://doi.org/10.1371/journal.pone.0090081.
Greenwell, Brandon M., Bradley C. Boehmke, and Andrew J. McCarthy. 2018. “A Simple and Effective Model-Based Variable Importance Measure.” arXiv. https://doi.org/10.48550/arXiv.1805.04755.
Grömping, Ulrike. 2020. “Model-Agnostic Effects Plots for Interpreting Machine Learning Models.” Reports in Mathematics, Physics and Chemistry, Department II, Beuth University of Applied Sciences Berlin, Report 1/2020.
Hahsler, Michael, Christian Buchta, Bettina Gruen, and Kurt Hornik. 2024. arules: Mining Association Rules and Frequent Itemsets. https://CRAN.R-project.org/package=arules.
Hahsler, Michael, Sudheer Chelluboina, Kurt Hornik, and Christian Buchta. 2011. “The arules R-Package Ecosystem: Analyzing Interesting Patterns from Large Transaction Datasets.” Journal of Machine Learning Research 12: 1977–81. https://jmlr.csail.mit.edu/papers/v12/hahsler11a.html.
Hahsler, Michael, Bettina Gruen, and Kurt Hornik. 2005. “Arules – A Computational Environment for Mining Association Rules and Frequent Item Sets.” Journal of Statistical Software 14 (15): 1–25. https://doi.org/10.18637/jss.v014.i15.
Hamner, Ben, and Michael Frasco. 2018. Metrics: Evaluation Metrics for Machine Learning. https://CRAN.R-project.org/package=Metrics.
Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. New York: Springer.
Heider, Fritz, and Marianne Simmel. 1944. “An Experimental Study of Apparent Behavior.” The American Journal of Psychology 57 (2): 243–59. https://doi.org/10.2307/1416950.
Holte, Robert C. 1993. “Very Simple Classification Rules Perform Well on Most Commonly Used Datasets.” Machine Learning 11 (1): 63–90. https://doi.org/10.1023/A:1022631118932.
Hooker, Giles. 2004. “Discovering Additive Structure in Black Box Functions.” In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 575–80.
———. 2007. “Generalized Functional ANOVA Diagnostics for High-Dimensional Functions of Dependent Variables.” Journal of Computational and Graphical Statistics 16 (3): 709–32. https://doi.org/10.1198/106186007X237892.
Hornik, Kurt, Christian Buchta, and Achim Zeileis. 2009. “Open-Source Machine Learning: R Meets Weka.” Computational Statistics 24 (2): 225–32. https://doi.org/10.1007/s00180-008-0119-7.
Horst, Allison Marie, Alison Presmanes Hill, and Kristen B. Gorman. 2020. palmerpenguins: Palmer Archipelago (Antarctica) Penguin Data. https://doi.org/10.5281/zenodo.3960218.
Hothorn, Torsten, Kurt Hornik, and Achim Zeileis. 2006. “Unbiased Recursive Partitioning: A Conditional Inference Framework.” Journal of Computational and Graphical Statistics 15 (3): 651–74. https://doi.org/10.1198/106186006X133933.
Hothorn, Torsten, and Achim Zeileis. 2015. “partykit: A Modular Toolkit for Recursive Partytioning in R.” Journal of Machine Learning Research 16: 3905–9. https://jmlr.org/papers/v16/hothorn15a.html.
Inglis, Alan, Andrew Parnell, and Catherine B. Hurley. 2022. “Visualizing Variable Importance and Variable Interaction Effects in Machine Learning Models.” Journal of Computational and Graphical Statistics 31 (3): 766–78. https://doi.org/10.1080/10618600.2021.2007935.
Janzing, Dominik, Lenon Minorics, and Patrick Blöbaum. 2020. “Feature Relevance Quantification in Explainable AI: A Causal Problem.” In International Conference on Artificial Intelligence and Statistics, 2907–16. PMLR.
Kahneman, Daniel, and Amos Tversky. 1982. “The Simulation Heuristic.” In Judgment Under Uncertainty: Heuristics and Biases, edited by Amos Tversky, Daniel Kahneman, and Paul Slovic, 201–8. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511809477.015.
Karimi, Amir-Hossein, Gilles Barthe, Borja Balle, and Isabel Valera. 2020. “Model-Agnostic Counterfactual Explanations for Consequential Decisions.” In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, 895–905. PMLR. https://proceedings.mlr.press/v108/karimi20a.html.
Karpathy, Andrej, Justin Johnson, and Li Fei-Fei. 2015. “Visualizing and Understanding Recurrent Networks.” arXiv. https://doi.org/10.48550/arXiv.1506.02078.
Kaufmann, Emilie, and Shivaram Kalyanakrishnan. 2013. “Information Complexity in Bandit Subset Selection.” In Proceedings of the 26th Annual Conference on Learning Theory, 228–51. PMLR. https://proceedings.mlr.press/v30/Kaufmann13.html.
Kim, Been, Rajiv Khanna, and Oluwasanmi Koyejo. 2016. “Examples Are Not Enough, Learn to Criticize! Criticism for Interpretability.” In Proceedings of the 30th International Conference on Neural Information Processing Systems, 2288–96. NIPS’16. Red Hook, NY, USA: Curran Associates Inc.
Kim, Been, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. 2018. “Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV).” In Proceedings of the 35th International Conference on Machine Learning, 2668–77. PMLR. https://proceedings.mlr.press/v80/kim18d.html.
Kindermans, Pieter-Jan, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. 2019. “The (Un)reliability of Saliency Methods.” In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, edited by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller, 267–80. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-28954-6_14.
Koh, Pang Wei, Kai-Siang Ang, Hubert H. K. Teo, and Percy Liang. 2019. “On the Accuracy of Influence Functions for Measuring Group Effects.” In Proceedings of the 33rd International Conference on Neural Information Processing Systems, 32:5254–64. Red Hook, NY, USA: Curran Associates Inc.
Koh, Pang Wei, and Percy Liang. 2017. “Understanding Black-Box Predictions via Influence Functions.” In Proceedings of the 34th International Conference on Machine Learning - Volume 70, 1885–94. ICML’17. Sydney, NSW, Australia: JMLR.org.
Koh, Pang Wei, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. 2020. “Concept Bottleneck Models.” In Proceedings of the 37th International Conference on Machine Learning, 5338–48. PMLR. https://proceedings.mlr.press/v119/koh20a.html.
Kuhn, Max. 2008. “Building Predictive Models in R Using the caret Package.” Journal of Statistical Software 28 (5): 1–26. https://doi.org/10.18637/jss.v028.i05.
Kuźba, Michał, Ewa Baranowska, and Przemysław Biecek. 2019. “pyCeterisParibus: Explaining Machine Learning Models with Ceteris Paribus Profiles in Python.” Journal of Open Source Software 4 (37): 1389. https://doi.org/10.21105/joss.01389.
Lapuschkin, Sebastian, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. 2019. “Unmasking Clever Hans Predictors and Assessing What Machines Really Learn.” Nature Communications 10 (1): 1096. https://doi.org/10.1038/s41467-019-08987-4.
Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. “Inverse Classification for Comparison-based Interpretability in Machine Learning.” arXiv. https://doi.org/10.48550/arXiv.1712.08443.
Legendre, Adrien Marie. 1806. Nouvelles méthodes pour la détermination des orbites des comètes: avec un supplément contenant divers perfectionnemens de ces méthodes et leur application aux deux comètes de 1805. Paris: Courcier.
Lei, Jing, Max G’Sell, Alessandro Rinaldo, Ryan J. Tibshirani, and Larry Wasserman. 2018. “Distribution-Free Predictive Inference for Regression.” Journal of the American Statistical Association 113 (523): 1094–1111. https://doi.org/10.1080/01621459.2017.1307116.
Letham, Benjamin, Cynthia Rudin, Tyler H. McCormick, and David Madigan. 2015. “Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Prediction Model.” The Annals of Applied Statistics 9 (3): 1350–71. https://doi.org/10.1214/15-AOAS848.
Liaw, Andy, and Matthew Wiener. 2002. “Classification and Regression by randomForest.” R News 2 (3): 18–22. https://CRAN.R-project.org/doc/Rnews/.
Lipton, Peter. 1990. “Contrastive Explanation.” Royal Institute of Philosophy Supplements 27 (March): 247–66. https://doi.org/10.1017/S1358246100005130.
Long, Jacob A. 2024. interactions: Comprehensive, User-Friendly Toolkit for Probing Interactions. https://doi.org/10.32614/CRAN.package.interactions.
Lundberg, Scott M., Gabriel G. Erion, and Su-In Lee. 2019. “Consistent Individualized Feature Attribution for Tree Ensembles.” arXiv. https://doi.org/10.48550/arXiv.1802.03888.
Lundberg, Scott M., and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In Proceedings of the 31st International Conference on Neural Information Processing Systems, 4768–77. NIPS’17. Red Hook, NY, USA: Curran Associates Inc.
Ma, Chiyu, Jon Donnelly, Wenjun Liu, Soroush Vosoughi, Cynthia Rudin, and Chaofan Chen. 2024. “Interpretable Image Classification with Adaptive Prototype-based Vision Transformers.” arXiv. http://arxiv.org/abs/2410.20722.
Mahmoudi, Amin, and Dariusz Jemielniak. 2024. “Proof of Biased Behavior of Normalized Mutual Information.” Scientific Reports 14 (1): 9021. https://doi.org/10.1038/s41598-024-59073-9.
Merriam-Webster. 2017. “Definition of Algorithm.” https://www.merriam-webster.com/dictionary/algorithm.
Meschiari, Stefano. 2022. Latex2exp: Use LaTeX Expressions in Plots. https://CRAN.R-project.org/package=latex2exp.
Meyer, David, Evgenia Dimitriadou, Kurt Hornik, Andreas Weingessel, and Friedrich Leisch. 2024. E1071: Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien. https://CRAN.R-project.org/package=e1071.
Meyer, Patrick E. 2022. infotheo: Information-Theoretic Measures. https://CRAN.R-project.org/package=infotheo.
Miller, Tim. 2019. “Explanation in Artificial Intelligence: Insights from the Social Sciences.” Artificial Intelligence 267 (February): 1–38. https://doi.org/10.1016/j.artint.2018.07.007.
Mitchell, Rory, Joshua Cooper, Eibe Frank, and Geoffrey Holmes. 2022. “Sampling Permutations for Shapley Value Estimation.” Journal of Machine Learning Research 23 (43): 1–46. http://jmlr.org/papers/v23/21-0439.html.
Molnar, Christoph, Giuseppe Casalicchio, and Bernd Bischl. 2018. “iml: An R Package for Interpretable Machine Learning.” Journal of Open Source Software 3 (26): 786. https://doi.org/10.21105/joss.00786.
———. 2020a. “Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges.” In ECML PKDD 2020 Workshops, edited by Irena Koprinska, Michael Kamp, Annalisa Appice, Corrado Loglisci, Luiza Antonie, Albrecht Zimmermann, Riccardo Guidotti, et al., 417–31. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-65965-3_28.
———. 2020b. “Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability.” In Machine Learning and Knowledge Discovery in Databases, edited by Peggy Cellier and Kurt Driessens, 193–204. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-43823-4_17.
Molnar, Christoph, Timo Freiesleben, Gunnar König, Julia Herbinger, Tim Reisinger, Giuseppe Casalicchio, Marvin N. Wright, and Bernd Bischl. 2023. “Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process.” In Explainable Artificial Intelligence, edited by Luca Longo, 456–79. Communications in Computer and Information Science. Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-44064-9_24.
Molnar, Christoph, Gunnar König, Bernd Bischl, and Giuseppe Casalicchio. 2023. “Model-Agnostic Feature Importance and Effects with Dependent Features – A Conditional Subgroup Approach.” Data Mining and Knowledge Discovery, January. https://doi.org/10.1007/s10618-022-00901-9.
Mothilal, Ramaravind K., Amit Sharma, and Chenhao Tan. 2020. “Explaining Machine Learning Classifiers Through Diverse Counterfactual Explanations.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–17. FAT* ’20. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3351095.3372850.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. 2019. “Definitions, Methods, and Applications in Interpretable Machine Learning.” Proceedings of the National Academy of Sciences 116 (44): 22071–80. https://doi.org/10.1073/pnas.1900654116.
Muschalik, Maximilian, Hubert Baniecki, Fabian Fumagalli, Patrick Kolpaczki, Barbara Hammer, and Eyke Hüllermeier. 2024. “Shapiq: Shapley Interactions for Machine Learning.” arXiv. https://doi.org/10.48550/arXiv.2410.01649.
Nguyen, Anh, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. 2017. “Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3510–20. IEEE Computer Society. https://doi.org/10.1109/CVPR.2017.374.
Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. 2016. “Synthesizing the Preferred Inputs for Neurons in Neural Networks via Deep Generator Networks.” In Proceedings of the 30th International Conference on Neural Information Processing Systems, 3395–3403. NIPS’16. Red Hook, NY, USA: Curran Associates Inc.
Crookston, Nicholas L., and Andrew O. Finley. 2007. “yaImpute: An R Package for kNN Imputation.” Journal of Statistical Software 23 (10). https://doi.org/10.18637/jss.v023.i10.
Nickerson, Raymond S. 1998. “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.” Review of General Psychology 2 (2): 175–220. https://doi.org/10.1037/1089-2680.2.2.175.
Olah, Chris, Alexander Mordvintsev, and Ludwig Schubert. 2017. “Feature Visualization.” Distill. https://doi.org/10.23915/distill.00007.
Olah, Chris, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. 2018. “The Building Blocks of Interpretability.” Distill. https://doi.org/10.23915/distill.00010.
Papernot, Nicolas, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. “Practical Black-Box Attacks Against Machine Learning.” In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, 506–19. ASIA CCS ’17. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3052973.3053009.
Pedersen, Thomas Lin. 2024. patchwork: The Composer of Plots. https://CRAN.R-project.org/package=patchwork.
R Core Team. 2024. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
Rousseeuw, Peter J., and Leonard Kaufman. 1987. “Clustering by Means of Medoids.” In Proceedings of the Statistical Data Analysis Based on the L1 Norm Conference, Neuchatel, Switzerland. Vol. 31.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016a. “Model-Agnostic Interpretability of Machine Learning.” arXiv Preprint arXiv:1606.05386.
———. 2016b. “"Why Should I Trust You?": Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. KDD ’16. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2939672.2939778.
———. 2018. “Anchors: High-Precision Model-Agnostic Explanations.” Proceedings of the AAAI Conference on Artificial Intelligence 32 (1). https://doi.org/10.1609/aaai.v32i1.11491.
Robnik-Šikonja, Marko, and Marko Bohanec. 2018. “Perturbation-Based Explanations of Prediction Models.” In Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent, edited by Jianlong Zhou and Fang Chen, 159–75. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-90403-0_9.
Roscher, Ribana, Bastian Bohn, Marco F. Duarte, and Jochen Garcke. 2020. “Explainable Machine Learning for Scientific Insights and Discoveries.” IEEE Access 8: 42200–42216. https://doi.org/10.1109/ACCESS.2020.2976199.
Rudin, Cynthia. 2019. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1 (5): 206–15. https://doi.org/10.1038/s42256-019-0048-x.
Schloerke, Barret, Di Cook, Joseph Larmarange, Francois Briatte, Moritz Marbach, Edwin Thoen, Amos Elberg, and Jason Crowley. 2024. GGally: Extension to ggplot2. https://CRAN.R-project.org/package=GGally.
Schmidhuber, Jürgen. 2015. “Deep Learning in Neural Networks: An Overview.” Neural Networks 61 (January): 85–117. https://doi.org/10.1016/j.neunet.2014.09.003.
Scholbeck, Christian A., Christoph Molnar, Christian Heumann, Bernd Bischl, and Giuseppe Casalicchio. 2020. “Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations.” In Machine Learning and Knowledge Discovery in Databases, edited by Peggy Cellier and Kurt Driessens, 205–16. Communications in Computer and Information Science. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-43823-4_18.
Selvaraju, Ramprasaath R., Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.” In 2017 IEEE International Conference on Computer Vision (ICCV), 618–26. https://doi.org/10.1109/ICCV.2017.74.
Shapley, Lloyd S. 1953. “A Value for n-Person Games.” In Contributions to the Theory of Games, 2:307–17. Princeton: Princeton University Press.
Shrikumar, Avanti, Peyton Greenside, and Anshul Kundaje. 2017. “Learning Important Features Through Propagating Activation Differences.” In Proceedings of the 34th International Conference on Machine Learning - Volume 70, 3145–53. ICML’17. Sydney, NSW, Australia: JMLR.org.
Simon, Noah, Jerome Friedman, Robert Tibshirani, and Trevor Hastie. 2011. “Regularization Paths for Cox’s Proportional Hazards Model via Coordinate Descent.” Journal of Statistical Software 39 (5): 1–13. https://doi.org/10.18637/jss.v039.i05.
Simonyan, Karen, Andrea Vedaldi, and Andrew Zisserman. 2014. “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps.” arXiv. https://doi.org/10.48550/arXiv.1312.6034.
Simonyan, Karen, and Andrew Zisserman. 2015. “Very Deep Convolutional Networks for Large-Scale Image Recognition.” arXiv. https://doi.org/10.48550/arXiv.1409.1556.
Slack, Dylan, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. 2020. “Fooling LIME and SHAP: Adversarial Attacks on Post Hoc Explanation Methods.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 180–86. AIES ’20. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3375627.3375830.
Smilkov, Daniel, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. 2017. “SmoothGrad: Removing Noise by Adding Noise.” arXiv. https://doi.org/10.48550/arXiv.1706.03825.
Staniak, Mateusz, and Przemyslaw Biecek. 2018. “Explanations of Model Predictions with Live and breakDown Packages.” arXiv. https://doi.org/10.48550/arXiv.1804.01955.
Strobl, Carolin, Anne-Laure Boulesteix, Thomas Kneib, Thomas Augustin, and Achim Zeileis. 2008. “Conditional Variable Importance for Random Forests.” BMC Bioinformatics 9 (1): 307. https://doi.org/10.1186/1471-2105-9-307.
Štrumbelj, Erik, and Igor Kononenko. 2011. “A General Method for Visualizing and Explaining Black-Box Regression Models.” In Adaptive and Natural Computing Algorithms, edited by Andrej Dobnikar, Uroš Lotrič, and Branko Šter, 21–30. Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-20267-4_3.
———. 2014. “Explaining Prediction Models and Individual Predictions with Feature Contributions.” Knowledge and Information Systems 41 (3): 647–65. https://doi.org/10.1007/s10115-013-0679-x.
Su, Jiawei, Danilo Vasconcellos Vargas, and Kouichi Sakurai. 2019. “One Pixel Attack for Fooling Deep Neural Networks.” IEEE Transactions on Evolutionary Computation 23 (5): 828–41. https://doi.org/10.1109/TEVC.2019.2890858.
Sudjianto, Agus, Aijun Zhang, Zebin Yang, Yu Su, and Ningzhou Zeng. 2023. “PiML Toolbox for Interpretable Machine Learning Model Development and Diagnostics.” arXiv Preprint arXiv:2305.04214.
Sundararajan, Mukund, and Amir Najmi. 2020. “The Many Shapley Values for Model Explanation.” In Proceedings of the 37th International Conference on Machine Learning, 9269–78. PMLR. https://proceedings.mlr.press/v119/sundararajan20b.html.
Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. 2017. “Axiomatic Attribution for Deep Networks.” In Proceedings of the 34th International Conference on Machine Learning - Volume 70, 3319–28. ICML’17. Sydney, NSW, Australia: JMLR.org.
Szegedy, Christian, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. “Rethinking the Inception Architecture for Computer Vision.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2818–26. https://doi.org/10.1109/CVPR.2016.308.
Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. “Intriguing Properties of Neural Networks.” arXiv. https://doi.org/10.48550/arXiv.1312.6199.
Tay, J. Kenneth, Balasubramanian Narasimhan, and Trevor Hastie. 2023. “Elastic Net Regularization Paths for All Generalized Linear Models.” Journal of Statistical Software 106 (1): 1–31. https://doi.org/10.18637/jss.v106.i01.
Therneau, Terry, and Beth Atkinson. 2023. rpart: Recursive Partitioning and Regression Trees. https://CRAN.R-project.org/package=rpart.
Tomsett, Richard, Dave Braines, Dan Harborne, Alun Preece, and Supriyo Chakraborty. 2018. “Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems.” arXiv. https://doi.org/10.48550/arXiv.1806.07552.
Tomsett, Richard, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, and Alun Preece. 2020. “Sanity Checks for Saliency Metrics.” Proceedings of the AAAI Conference on Artificial Intelligence 34 (04): 6021–29. https://doi.org/10.1609/aaai.v34i04.6064.
Tufte, Edward R. 1983. The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.
Urbanek, Simon. 2022. jpeg: Read and Write JPEG Images. https://CRAN.R-project.org/package=jpeg.
———. 2024. rJava: Low-Level R to Java Interface. https://CRAN.R-project.org/package=rJava.
Van Looveren, Arnaud, and Janis Klaise. 2021. “Interpretable Counterfactual Explanations Guided by Prototypes.” In Machine Learning and Knowledge Discovery in Databases. Research Track, edited by Nuria Oliver, Fernando Pérez-Cruz, Stefan Kramer, Jesse Read, and Jose A. Lozano, 650–65. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-86520-7_40.
Van Noorden, Richard, and Jeffrey M. Perkel. 2023. “AI and Science: What 1,600 Researchers Think.” Nature 621 (7980): 672–75. https://doi.org/10.1038/d41586-023-02980-0.
Venables, W. N., and B. D. Ripley. 2002. Modern Applied Statistics with S. 4th ed. New York: Springer. https://www.stats.ox.ac.uk/pub/MASS4/.
von Jouanne-Diedrich, Holger. 2017. OneR: One Rule Machine Learning Classification Algorithm with Enhancements. https://CRAN.R-project.org/package=OneR.
Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2018. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harvard Journal of Law and Technology 31 (2): 841–87.
Watson, David S., and Marvin N. Wright. 2021. “Testing Conditional Independence in Supervised Learning Algorithms.” Machine Learning 110 (8): 2107–29. https://doi.org/10.1007/s10994-021-06030-6.
Wei, Pengfei, Zhenzhou Lu, and Jingwen Song. 2015. “Variable Importance Analysis: A Comprehensive Review.” Reliability Engineering & System Safety 142 (October): 399–432. https://doi.org/10.1016/j.ress.2015.05.018.
Wickham, Hadley. 2007. “Reshaping Data with the reshape Package.” Journal of Statistical Software 21 (12): 1–20. http://www.jstatsoft.org/v21/i12/.
Wickham, Hadley, Mara Averick, Jennifer Bryan, Winston Chang, Lucy D’Agostino McGowan, Romain François, Garrett Grolemund, et al. 2019. “Welcome to the tidyverse.” Journal of Open Source Software 4 (43): 1686. https://doi.org/10.21105/joss.01686.
Witten, Ian H., and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. 2nd ed. San Francisco: Morgan Kaufmann.
Wood, S. N. 2003. “Thin-Plate Regression Splines.” Journal of the Royal Statistical Society, Series B 65 (1): 95–114.
———. 2004. “Stable and Efficient Multiple Smoothing Parameter Estimation for Generalized Additive Models.” Journal of the American Statistical Association 99 (467): 673–86.
———. 2011. “Fast Stable Restricted Maximum Likelihood and Marginal Likelihood Estimation of Semiparametric Generalized Linear Models.” Journal of the Royal Statistical Society, Series B 73 (1): 3–36.
———. 2017. Generalized Additive Models: An Introduction with R. 2nd ed. Boca Raton, Florida: Chapman and Hall/CRC.
Wood, S. N., N. Pya, and B. Säfken. 2016. “Smoothing Parameter and Model Selection for General Smooth Models (with Discussion).” Journal of the American Statistical Association 111: 1548–75.
Wright, Marvin N., and Andreas Ziegler. 2017. “ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R.” Journal of Statistical Software 77 (1): 1–17. https://doi.org/10.18637/jss.v077.i01.
Xie, Yihui. 2014. “knitr: A Comprehensive Tool for Reproducible Research in R.” In Implementing Reproducible Computational Research, edited by Victoria Stodden, Friedrich Leisch, and Roger D. Peng. Boca Raton, Florida: Chapman and Hall/CRC.
———. 2015. Dynamic Documents with R and knitr. 2nd ed. Boca Raton, Florida: Chapman and Hall/CRC. https://yihui.org/knitr/.
———. 2024. knitr: A General-Purpose Package for Dynamic Report Generation in R. https://yihui.org/knitr/.
Xie, Yihui, J. J. Allaire, and Garrett Grolemund. 2018. R Markdown: The Definitive Guide. Boca Raton, Florida: Chapman and Hall/CRC. https://bookdown.org/yihui/rmarkdown.
Xie, Yihui, Christophe Dervieux, and Emily Riederer. 2020. R Markdown Cookbook. Boca Raton, Florida: Chapman and Hall/CRC. https://bookdown.org/yihui/rmarkdown-cookbook.
Yang, Hongyu, Cynthia Rudin, and Margo Seltzer. 2016. sbrl: Scalable Bayesian Rule Lists Model. https://CRAN.R-project.org/package=sbrl.
———. 2017. “Scalable Bayesian Rule Lists.” In International Conference on Machine Learning, 3921–30. PMLR.
Yang, Zebin, Agus Sudjianto, Xiaoming Li, and Aijun Zhang. 2024. “Inherently Interpretable Tree Ensemble Learning.” arXiv. https://doi.org/10.48550/arXiv.2410.19098.
Zeileis, Achim, Torsten Hothorn, and Kurt Hornik. 2008. “Model-Based Recursive Partitioning.” Journal of Computational and Graphical Statistics 17 (2): 492–514. https://doi.org/10.1198/106186008X319331.
Zeiler, Matthew D., and Rob Fergus. 2014. “Visualizing and Understanding Convolutional Networks.” In Computer Vision – ECCV 2014, edited by David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, 818–33. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-10590-1_53.
Zhang, Zhou, Yufang Jin, Bin Chen, and Patrick Brown. 2019. “California Almond Yield Prediction at the Orchard Level With a Machine Learning Approach.” Frontiers in Plant Science 10 (July): 809. https://doi.org/10.3389/fpls.2019.00809.
Zhao, Qingyuan, and Trevor Hastie. 2019. “Causal Interpretations of Black-Box Models.” Journal of Business & Economic Statistics. https://doi.org/10.1080/07350015.2019.1624293.
Zhu, Hao. 2024. kableExtra: Construct Complex Table with kable and Pipe Syntax. https://CRAN.R-project.org/package=kableExtra.