Boston University's Kate Saenko discusses explainable AI and interpreting decisions made by deep learning algorithms and neural networks.
DARPA's XAI initiative aims to shed light on what happens inside the black box of artificial intelligence algorithms. Program manager Dave Gunning explains how the agency is working to create explainable AI tools that will build trust and reliability into AI models.
Deep learning and neural networks are becoming increasingly accurate at performing complicated tasks. But are they robust as well? Researchers at the MIT-IBM Watson AI Lab have developed methods to evaluate the robustness of neural networks against adversarial examples.
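To illustrate what an adversarial example is, here is a minimal sketch of the fast gradient sign method (FGSM), a standard attack used in robustness evaluations. The toy logistic-regression model, weights, and the large step size ε are hypothetical choices for illustration, not the lab's actual method:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """Craft an adversarial input via the fast gradient sign method.

    For logistic regression, the gradient of the cross-entropy loss
    with respect to the input x is (sigmoid(w.x) - y) * w; FGSM
    steps in the sign direction of that gradient.
    """
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)

# Hypothetical toy model and a confidently classified input (label 1).
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, -1.0, 1.0])        # w @ x = 3.5 -> p ~ 0.97

# A large eps is needed to flip this toy model; real attacks use
# much smaller, imperceptible perturbations on image inputs.
x_adv = fgsm_perturb(x, y=1.0, w=w, eps=1.5)

print(sigmoid(w @ x) > 0.5)      # True: original is classified as class 1
print(sigmoid(w @ x_adv) > 0.5)  # False: the perturbed input flips the label
```

A robustness evaluation of the kind described above would, roughly, measure how small ε can be while still flipping the model's predictions.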
Pascal Kaufmann, cofounder of Starmind, talks about the current progress of artificial intelligence, how the human brain works, and what it takes to create human-grade AI.