ABSTRACT
Causal reasoning is the primary tool humans use to learn about and explain the world. For AI systems to be deployed in the real world with trust and reliability, they should possess similar causal reasoning capabilities. Introducing ideas from causality into machine learning yields models that both learn better and explain themselves better. Explainability and causal disentanglement are two important properties of any machine learning model: causal explanations are needed to trust a model's decisions, and causally disentangled representations matter for transfer learning. We exploit ideas from causality within deep learning models to obtain better, causally explainable models, with applications to fairness, disentangled representation learning, and related problems.
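To make the notion of a causal explanation concrete, the sketch below computes an interventional attribution for a toy model: the average causal effect (ACE) of a feature on the output, estimated by intervening on that feature (setting it to fixed values) rather than merely conditioning on it. This is an illustrative sketch only, not the method proposed in the abstract; the toy `model` and the helper `average_causal_effect` are assumptions introduced here for exposition.

```python
import numpy as np

# Toy "model": the output depends only on feature 0; feature 1 is spurious.
def model(x):
    return 3.0 * x[:, 0] + 0.0 * x[:, 1]

def average_causal_effect(model, X, i, v_low, v_high):
    """Estimate the ACE of feature i on the model output.

    Intervene do(x_i = v_high) and do(x_i = v_low) while leaving the
    other features at their observed values, then average the difference
    in outputs over the dataset.
    """
    X_low, X_high = X.copy(), X.copy()
    X_low[:, i] = v_low
    X_high[:, i] = v_high
    return float((model(X_high) - model(X_low)).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
print(average_causal_effect(model, X, 0, -1.0, 1.0))  # 6.0: feature 0 is causal
print(average_causal_effect(model, X, 1, -1.0, 1.0))  # 0.0: feature 1 has no effect
```

The key design point is the intervention: because we overwrite the feature rather than filter the data, a spurious feature that is merely correlated with the output in the observed data still receives zero attribution.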