DKE research theme

Explainable and Reliable Artificial Intelligence (ERAI)

Two of the major challenges for Artificial Intelligence are to provide ‘explanations’ for recommendations made by intelligent systems and to give guarantees of their ‘reliability’. Explanations are important to help the people involved understand why a recommendation was made. Reliability is important when decisions concern people’s safety or profoundly affect their lives.

This is a long-standing research theme of the Department of Data Science and Knowledge Engineering (DKE), which has since become the Department of Advanced Computing Sciences.

Recent high-profile successes in Machine Learning have mostly been achieved using deep neural networks, which yield ‘black-box’ input-output mappings that can be challenging to explain to users. Especially in the medical, military and legal fields, black-box machine-learning techniques are unacceptable, since decisions may have a profound impact on people’s lives. As recent news reports have shown, without strict ethical standards, unbiased data collection and attention to algorithmic bias, AI may amplify the biases that already pervade our society.

Research focus and application

Research within the ERAI theme investigates different ways to make intelligent systems more explainable and more reliable. Some of the research foci are:

  • Analyzing whether the input-output relations of a system can provide high-level, correlation-based explanations of a black-box AI system (a minimal permutation-importance sketch follows this list).
  • Logic-based systems can provide explanations and are able to reason with (legal) regulations that should be adhered to. Integrating logic-based approaches with machine-learning approaches is one possible way to realize explainable artificial intelligence, and is an important challenge for the near future.
  • Learning explainable models instead of learning a plain input-output mapping, which may improve explainability with little or no loss in the quality of predictions, decisions and recommendations (a minimal surrogate-model sketch follows this list).
  • Improving the understanding of deep neural networks, especially the influence of the input and intermediate layers on the outputs, the flow of information through the network, the detection of changes that can make the system collapse, and the tolerance of the system to such changes.
  • Using machine learning and causal inference to explain and understand the relationship between variables in high-dimensional and complex data.
  • Developing model-checking tools for properties of artificial intelligence and control systems in safety-critical engineering, medical interventions, and legal compliance.
  • Mediating effects of individual characteristics on explanation effectiveness: how to adapt explanations to user-specific factors, such as how individual characteristics (e.g., cognitive abilities, expertise) influence the effectiveness of interactive explanation interfaces.
  • New evaluation criteria and methodologies. It is also important to study the effects of explanations over time; decision support and recommender systems research to date has primarily studied the impact of recommending individual items, or conducted analyses based on historical data. Simulations, for example, can be used to model interactions with recommendations and the effects of different algorithms and algorithm parameters.
  • Explanation interfaces that deal with biases. Human decision-making is rarely completely rational and suffers from biases, such as confirmation and availability biases. These biases can be taken into account in the design of explanation interfaces. For example, automatically generated explanations can be used to support credibility assessments, helping people who have trouble assessing controversial content they find online.
  • Explanations for groups. Explanations can also be helpful in situations where it is difficult to reach an agreement, e.g., when people have diverging preferences in a group of tourists. This also raises interesting questions about the tension between privacy and transparency of explanations in group decision making.
  • Making reliable predictions with confidence bounds on the error (a minimal conformal-prediction sketch follows this list).
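
As an illustration of the first focus above, the sketch below computes a correlation-based, model-agnostic explanation by measuring how much shuffling each input degrades a black-box model's held-out performance (permutation importance). The dataset, model and library choices are illustrative assumptions, not part of any specific ERAI project.

```python
# Minimal sketch (assumed data/model): permutation importance as a
# correlation-based, model-agnostic explanation of a black-box regressor.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Treat the boosted ensemble as the black box to be explained.
black_box = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out performance.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=20, random_state=0)
ranking = sorted(zip(data.feature_names, result.importances_mean),
                 key=lambda pair: -pair[1])
for name, importance in ranking:
    print(f"{name:>4s}: importance {importance:.3f}")
```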
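For the focus on learning explainable models, a common starting point is to distil a black box into an interpretable surrogate, such as a shallow decision tree trained to imitate the black box's predictions. The sketch below assumes scikit-learn and a standard toy dataset; it illustrates the general idea rather than any particular ERAI system.

```python
# Minimal sketch (assumed data/model): distilling a black-box classifier
# into an interpretable surrogate, here a depth-3 decision tree trained on
# the black box's own predictions rather than on the true labels.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The surrogate imitates the black box, so its rules explain that model.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```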
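For reliable predictions with confidence bounds, the highlighted publications by Morsomme, Neeven, Warnes and Smirnov use conformal prediction. The sketch below is a minimal split-conformal regression example under assumed data and an assumed base model; it is not the method of any particular paper.

```python
# Minimal sketch (assumed data/model): split conformal prediction turns a
# point regressor into prediction intervals with ~(1 - alpha) coverage.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

alpha = 0.1  # target miscoverage rate, i.e. 90% prediction intervals

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = Ridge().fit(X_train, y_train)

# Nonconformity scores on the held-out calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))
n = len(scores)
# Finite-sample corrected quantile of the calibration scores.
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n), method="higher")

# Prediction intervals on new data, and their empirical coverage.
pred = model.predict(X_test)
covered = (y_test >= pred - q) & (y_test <= pred + q)
print(f"empirical coverage: {covered.mean():.2f} (target {1 - alpha:.2f})")
```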

Reliable Artificial Intelligence in practice

Dr. Pieter Collins discusses Ariadne, a software package for verifying properties of continuous dynamical systems

Highlighted publications

  • Collins, P. (2020). Computable analysis with applications to dynamic systems. Mathematical Structures in Computer Science, 30(2), 173-233. https://doi.org/10.1017/S096012952000002X

  • Mehrkanoon, S. (2018). Indefinite kernel spectral learning. Pattern Recognition, 78, 144-153.

  • Mehrkanoon, S. (2019). Deep neural-kernel blocks. Neural Networks, 116, 46-55. https://doi.org/10.1016/j.neunet.2019.03.011

  • Mehrkanoon, S., & Suykens, J. A. K. (2018). Deep hybrid neural-kernel networks using random Fourier features. Neurocomputing, 298, 46-54. https://doi.org/10.1016/j.neucom.2017.12.065

  • Morsomme, R., & Smirnov, E. (2019). Conformal Prediction for Students' Grades in a Course Recommender System. In Proceedings of Machine Learning Research: Conformal and Probabilistic Prediction and Applications, 9-11 September 2019, Golden Sands, Bulgaria (Vol. 105, pp. 196-213). Proceedings of Machine Learning Research. http://proceedings.mlr.press/v105/morsomme19a/morsomme19a.pdf

  • Neeven, J., & Smirnov, E. (2018). Conformal stacked weather forecasting. In Proceedings of the 7th Symposium on Conformal and Probabilistic Prediction and Applications, COPA 2018, 11-13 June 2018, Maastricht, The Netherlands. (Vol. 91, pp. 220-233). Proceedings of Machine Learning Research.

  • Seiler, C., Ferreira, A-M., Kronstad, L. M., Simpson, L. J., Le Gars, M., Vendrame, E., Blish, C. A., & Holmes, S. (2021). CytoGLMM: conditional differential analysis for flow and mass cytometry experiments. BMC Bioinformatics, 22(1), [137]. https://doi.org/10.1186/s12859-021-04067-x

  • Sun, Z., & Roos, N. (2018). Dynamically stable walk control of biped humanoid on uneven and inclined terrain. Neurocomputing, 280, 111 - 122. https://doi.org/10.1016/j.neucom.2017.08.077

  • Tintarev, N., & Masthoff, J. (2021). Explaining Recommendations: Beyond single items. In F. Ricci, L. Rokach, B. Shapira, & P. B. Kantor (Eds.), Recommender Systems Handbook (3rd ed.).

  • Trebing, K., Stanczyk, T., & Mehrkanoon, S. (2021). SmaAt-UNet: Precipitation nowcasting using a small attention-UNet architecture. Pattern Recognition Letters, 145, 178-186. https://doi.org/10.1016/j.patrec.2021.01.036

  • van der Heijden, K., & Mehrkanoon, S. (2020). Modelling human sound localization with deep neural networks. In ESANN 2020 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (pp. 521-526) https://www.esann.org/sites/default/files/proceedings/2020/ES2020-59.pdf

  • Warnes, Z., & Smirnov, E. (2020). Course Recommender Systems with Statistical Confidence. In Proceedings of the 13th International Conference on Educational Data Mining, EDM 2020, Fully virtual conference, July 10-13, 2020 (pp. 509-515). educationaldatamining.org. https://educationaldatamining.org/files/conferences/EDM2020/papers/paper_103.pdf

  • Jin, Y., Htun, N. N., Tintarev, N., & Verbert, K. (2019). Effects of personal characteristics in control-oriented user interfaces for music recommender systems. User Modeling and User-Adapted Interaction.

See all DKE publications

Software tools

Ariadne - A C++ library for formal verification of cyber-physical systems, using reachability analysis for nonlinear hybrid automata http://www.ariadne-cps.org/
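
To give a flavour of what reachability-based verification does, the sketch below propagates an interval enclosure of the state through a made-up discrete-time system and checks that it never leaves a safe set. It deliberately does not use Ariadne's actual C++ API; it only illustrates the underlying idea of over-approximating reachable sets.

```python
# Minimal sketch (NOT Ariadne's API): naive interval reachability for the
# made-up discrete-time system  x' = 0.9*x + 0.1*sin(x).
def step(lo, hi):
    """Over-approximate the image of the interval [lo, hi] under one step."""
    # 0.9*x is monotone increasing; 0.1*sin(x) is bounded by [-0.1, 0.1].
    return 0.9 * lo - 0.1, 0.9 * hi + 0.1

def provably_safe(lo, hi, safe_lo, safe_hi, steps=50):
    """True if the over-approximated reachable set stays inside the safe set."""
    for _ in range(steps):
        lo, hi = step(lo, hi)
        if lo < safe_lo or hi > safe_hi:
            return False  # safety cannot be concluded from this enclosure
    return True

# All trajectories starting in [-0.5, 0.5] stay within [-2, 2] for 50 steps.
print(provably_safe(-0.5, 0.5, safe_lo=-2.0, safe_hi=2.0))
```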