Researchers at the University of Maryland discuss why security must be part of the machine learning research process.
Neuroscientist Daeyeol Lee discusses different modes of reinforcement learning in humans and animals, AI and natural intelligence, and future directions of research.
Cerebras CEO Andrew Feldman discusses the hardware challenges of LLMs and his vision to reduce the costs and complexity of training and running large neural networks.
Neuroscience-inspired deep learning architectures are more resilient to adversarial attacks, researchers at MIT and IBM have found.
In his book Augmented Mind, Alex Bates argues that the real opportunities of AI lie in augmenting humans, not replacing them.
Deep learning and neural networks are becoming increasingly accurate at performing complicated tasks. But are they also robust? Researchers at the MIT-IBM Watson AI Lab have developed methods to evaluate the robustness of neural networks against adversarial examples.
The team behind IBM's RoboRXN platform explains how AI, robotics, and the cloud can change the future of drug and chemicals research.
IBM Watson's CTO describes the opportunities, prospects, and challenges that lie ahead for artificial intelligence–powered chatbots.
Stina Ehrensvärd, CEO and co-founder of Yubico, explains how the authentication space has evolved in the past 12 years.
GoPractice CEO and founder Oleg Yakubenkov discusses how he created a unique product management simulator course.