Boston University's Kate Saenko discusses explainable AI and interpreting decisions made by deep learning algorithms and neural networks.
DARPA's XAI initiative aims to shed light inside the black box of artificial intelligence algorithms. Program Manager Dave Gunning explains how the agency is working to create explainable AI tools that will build trust and reliability into AI models.
Deep learning and neural networks are becoming increasingly accurate at performing complicated tasks. But are they robust as well? Researchers at the MIT-IBM Watson AI Lab have developed methods to evaluate the robustness of neural networks against adversarial examples.
In his book Augmented Mind, Alex Bates argues that the real opportunities of AI lie in augmenting humans, not replacing them.
Artificial intelligence provides a unique opportunity to give back the gift of time to doctors and patients.
GoPractice CEO and founder Oleg Yakubenkov discusses how he created a unique product management simulator course.
Kaufmann, cofounder of Starmind, talks about the current progress of artificial intelligence, how the human brain works, and what it takes to create human-grade AI.
Neuroscience-inspired deep learning architectures are more resilient to adversarial attacks, researchers at MIT and IBM have found.