Reinforcement learning from human feedback (RLHF) is the technique behind ChatGPT's impressive capabilities. But there is more to RLHF than large language models (LLMs).
In a new NeurIPS paper, Geoffrey Hinton introduced the "forward-forward algorithm," a brain-inspired learning algorithm for artificial neural networks.
By Mona Eslamijam
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.
The transformer model has become one of the main highlights of advances in deep learning and deep neural networks.
Neural architecture search (NAS) is a family of machine learning techniques that help discover optimal neural network architectures for a given problem.
Support vector machines (SVMs) are a class of machine learning algorithms that can be used for classification and regression tasks.
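As a rough illustration of the idea (not the implementation any particular article describes), a linear SVM can be trained by minimizing the regularized hinge loss with subgradient descent; the synthetic data and hyperparameters below are assumptions chosen for the sketch.

```python
import numpy as np

# Minimal sketch: linear SVM via subgradient descent on the hinge loss.
# Two well-separated Gaussian clusters, labels are +1 / -1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1          # regularization strength and learning rate (assumed)
for _ in range(200):
    margins = y * (X @ w + b)
    mask = margins < 1       # points inside or on the wrong side of the margin
    grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(y)
    grad_b = -y[mask].sum() / len(y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = (np.sign(X @ w + b) == y).mean()
```

Only margin-violating points contribute to the gradient, which is what makes the decision boundary depend on the "support vectors" near it.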
Data augmentation improves machine learning performance by generating new training examples from existing data.
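To make the idea concrete, here is a minimal sketch of two common augmentation transforms for image-like arrays, horizontal flipping and additive Gaussian noise; the batch shape and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(images, noise_std=0.05):
    """Return the original batch plus flipped and noise-perturbed copies."""
    flipped = images[:, :, ::-1]                           # horizontal flip
    noisy = images + rng.normal(0.0, noise_std, images.shape)
    return np.concatenate([images, flipped, noisy], axis=0)

batch = rng.random((8, 16, 16))    # 8 fake 16x16 grayscale images (assumed)
augmented = augment(batch)         # 24 training examples from 8 originals
```

The labels of the new examples are inherited from the originals, since these transforms do not change what the image depicts.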
Graph neural networks (GNNs) are a type of machine learning algorithm that can extract important information from graphs and make useful predictions.
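At their core, most GNNs rely on message passing: each node aggregates its neighbors' features and transforms the result. A minimal sketch of one graph-convolution layer, with an assumed 3-node graph and a hypothetical weight matrix:

```python
import numpy as np

# One message-passing (GCN-style) layer on a 3-node path graph: 0 - 1 - 2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])           # adjacency matrix
A_hat = A + np.eye(3)                  # add self-loops so a node keeps its own features
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # degree normalization

X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])               # node features (assumed)
W = np.array([[1., -1.],
              [0.5, 0.5]])             # hypothetical learnable weights

H = np.maximum(D_inv @ A_hat @ X @ W, 0)   # aggregate neighbors, transform, ReLU
```

Stacking such layers lets information propagate across multi-hop neighborhoods, which is what the predictions are built on.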
Deep reinforcement learning is one of the most interesting branches of AI, responsible for achievements in areas such as complex games, self-driving cars, and robotics.
Federated learning is a technique that helps train machine learning models without sending sensitive user data to the cloud.
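The core loop can be sketched with federated averaging (FedAvg): each client takes a gradient step on its own private data, and the server averages only the resulting model weights. The linear-regression objective, client data, and hyperparameters below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One gradient-descent step on a client's private least-squares loss."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Three clients, each holding data that never leaves the "device".
clients = [(rng.random((20, 3)), rng.random(20)) for _ in range(3)]

w = np.zeros(3)                       # global model on the server
for _ in range(50):                   # 50 communication rounds (assumed)
    local = [local_step(w, X, y) for X, y in clients]
    w = np.mean(local, axis=0)        # server averages client weights only

loss = np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])
```

Only the weight vectors travel between clients and server; the raw `(X, y)` pairs stay local, which is the privacy argument behind the technique.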