Neuroscientist Terry Sejnowski discusses the early struggles of deep learning, its explosion into the mainstream, and the lessons learned from decades of research and development.
AI21 Labs chief scientist Yoav Levine explains how retrieval-augmented language modeling can solve one of the biggest problems of LLMs.
Cerebras CEO Andrew Feldman discusses the hardware challenges of LLMs and his vision to reduce the costs and complexity of training and running large neural networks.
Neuroscientist Daeyeol Lee discusses different modes of reinforcement learning in humans and animals, AI and natural intelligence, and future directions of research.
Nenshad Bardoliwalla, chief product officer at DataRobot, discusses the challenges of applying machine learning in different sectors and how no-code platforms are helping democratize AI.
Mailchimp has released a new AI tool to provide recommendations for email marketing campaigns. It might be its most impactful foray into AI content marketing.
In his book, "The Myth of Artificial Intelligence," computer scientist Erik Larson discusses how widely publicized misconceptions have led AI research down narrow paths that are limiting innovation and scientific discoveries.
The Self-Assembling Brain, a book by neurobiology professor Peter Robin Hiesinger, sheds light on a largely ignored aspect of intelligence.
Researchers at the University of Maryland discuss why security must be part of the machine learning research process.
Harvard Medical School professor Gabriel Kreiman discusses biological and computer vision and explains what separates current AI systems from the human visual cortex.