Boston University's Kate Saenko discusses explainable AI and interpreting decisions made by deep learning algorithms and neural networks.
DARPA's XAI initiative aims to shed light on what happens inside the black box of artificial intelligence algorithms. Program Manager Dave Gunning explains how the agency is working to create explainable AI tools that will build trust and reliability into AI models.
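For a flavor of what such interpretability tools do, here is a minimal gradient-saliency sketch in PyTorch. It is a generic illustration of one common explainability technique, not DARPA's or any specific XAI tool; the `model` and the single-image input shape `[1, C, H, W]` are assumptions.

```python
import torch

def saliency_map(model, x, target_class):
    """Gradient saliency: which input pixels most affect the class score?

    Assumes `model` maps a [1, C, H, W] image batch to class logits.
    """
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]  # scalar logit for the class of interest
    score.backward()
    # Per-pixel gradient magnitude, taking the max over color channels.
    return x.grad.abs().max(dim=1)[0].squeeze(0)  # shape [H, W]
```

Bright regions in the returned map mark pixels whose small changes would most move the class score, which is the simplest answer to "what was the network looking at?"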
Deep learning and neural networks are becoming increasingly accurate at performing complicated tasks. But are they robust as well? Researchers at the MIT-IBM Watson AI Lab have developed methods to evaluate the robustness of neural networks against adversarial examples.
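One standard way to probe robustness is to measure accuracy on adversarially perturbed inputs, for example with the fast gradient sign method (FGSM). The sketch below illustrates that general idea, not the MIT-IBM lab's specific evaluation method; the `model`, `loader`, and `epsilon` budget are assumptions.

```python
import torch
import torch.nn as nn

def fgsm_robust_accuracy(model, loader, epsilon=0.03):
    """Estimate robustness as accuracy on FGSM adversarial examples.

    Assumes `model` is a classifier over inputs scaled to [0, 1] and
    `loader` yields (inputs, integer labels) batches.
    """
    model.eval()
    loss_fn = nn.CrossEntropyLoss()
    correct, total = 0, 0
    for x, y in loader:
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        # Perturb each input in the direction that increases the loss.
        x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```

A large gap between clean accuracy and this adversarial accuracy is the usual signal that a network is accurate but not robust.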
In his book "Augmented Mind," Alex Bates argues that the real opportunities of AI lie in augmenting humans, not replacing them.
Artificial intelligence provides a unique opportunity to give back the gift of time to doctors and patients.
Stina Ehrensvärd, CEO and co-founder of Yubico, explains how the authentication space has evolved over the past 12 years.
GoPractice CEO and founder Oleg Yakubenkov discusses how he created a unique product management simulator course.
Does artificial intelligence require moral values? We spoke to Patricia Churchland, neurophilosopher and author of "Conscience: The Origins of Moral Intuition."
Pascal Kaufmann, cofounder of Starmind, talks about the current progress of artificial intelligence, how the human brain works, and what it takes to create human-grade AI.
Neuroscience-inspired deep learning architectures are more resilient to adversarial attacks, researchers at MIT and IBM have found.