Neuroscience-inspired deep learning architectures are more resilient to adversarial attacks, researchers at MIT and IBM have found.
AI will transform our world and the businesses leading the future, but only if it is easily accessible to everyone.
Alex Momot, Founder and CEO of REMME, shares insights on the opportunities and challenges of IoT and which direction the industry should take.
If you’ve been following technology news, you’ve probably heard of end-to-end encryption. It’s the technology that makes sure the data you send—whether it’s a file, an email, or...
Boston University's Kate Saenko discusses explainable AI and interpreting decisions made by deep learning algorithms and neural networks.
Harvard Medical School Professor Gabriel Kreiman discusses biological and computer vision and explains what separates current AI systems from the human visual cortex.
DARPA's XAI initiative aims to shed light inside the black box of artificial intelligence algorithms. Project Manager Dave Gunning explains how the agency is working to create explainable AI tools that will build trust and reliability into AI models.
Researchers at the University of Maryland discuss why security must be part of the machine learning research process.
Deep learning and neural networks are becoming increasingly accurate at performing complicated tasks. But are they robust as well? Researchers at the MIT-IBM Watson AI Lab have developed methods to evaluate the robustness of neural networks against adversarial examples.