Gradient descent is the main optimization technique for training machine learning and deep learning models. Read all about it.
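As a rough, self-contained sketch (not taken from the article; the toy data, learning rate, and step count are invented for illustration), a single-parameter least-squares fit updated by gradient descent looks like this:

```python
import numpy as np

# Toy data: y is roughly 3 * x, so the fitted weight should settle near 3.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 5.9, 9.2, 11.8])

w = 0.0    # single model parameter, initialized at zero
lr = 0.01  # learning rate (step size), chosen arbitrarily for the demo

for step in range(200):
    pred = w * x                         # model prediction
    grad = 2 * np.mean((pred - y) * x)   # gradient of mean squared error w.r.t. w
    w -= lr * grad                       # gradient descent update: move against the gradient

print(round(w, 2))  # approximately 3.0
```

Each step nudges the weight in the direction that reduces the loss, which is all the update rule does regardless of how large the model is.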
Everything you need to know about LLM fine-tuning, supervised fine-tuning, reinforcement learning from human feedback (RLHF), and parameter-efficient fine-tuning (PEFT).
Low-rank adaptation (LoRA) is a technique that cuts the cost of fine-tuning large language models (LLMs) to a fraction of what full fine-tuning requires.
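To show where the savings come from (a minimal sketch under assumed shapes; the layer size and rank below are made-up numbers, not figures from the article), LoRA freezes the pretrained weight matrix and trains only a low-rank update, so the trainable parameter count drops from d·k to r·(d + k):

```python
import numpy as np

d, k, r = 1024, 1024, 8           # hypothetical layer shape and LoRA rank

W = np.random.randn(d, k)         # pretrained weight, kept frozen
A = np.random.randn(r, k) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))              # trainable low-rank factor, zero-initialized

def forward(x):
    # Effective weight is W + B @ A; only A and B would receive gradient updates.
    return x @ (W + B @ A).T

x = np.random.randn(2, k)
print(forward(x).shape)           # (2, 1024): same interface as the frozen layer

full = d * k
lora = r * (d + k)
print(full, lora, f"{lora / full:.1%}")  # the adapter is ~1.6% of the full matrix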
Large language models suffer from fundamental problems, such as failing at math and reasoning. Augmented language models address some of these problems.
Multimodal language models bring together text, images, and other data types to solve some of the problems that current artificial intelligence systems suffer from.
Reinforcement learning from human feedback (RLHF) is the technique that has made ChatGPT very impressive. But there is more to RLHF than large language models (LLMs).
In a new NeurIPS paper, Geoffrey Hinton introduced the “forward-forward algorithm,” a new learning algorithm for artificial neural networks inspired by the brain.
By Mona Eslamijam
Image credit: 123RF
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.
The transformer model has become one of the main highlights of advances in deep learning and deep neural networks.
Neural architecture search (NAS) is a family of machine learning techniques that can help discover optimal neural network architectures for a given problem.
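In its most naive form, the search amounts to training candidate architectures and keeping the one that validates best. The sketch below does that with a plain grid search over depth and width on scikit-learn's digits dataset (the search space and settings are arbitrary choices for illustration; real NAS methods use far more efficient search strategies than exhaustive enumeration):

```python
import itertools
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Tiny, hand-picked search space: number of hidden layers and units per layer.
depths = [1, 2, 3]
widths = [16, 64, 128]

best_score, best_arch = -1.0, None
for depth, width in itertools.product(depths, widths):
    arch = (width,) * depth
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    model.fit(X_train, y_train)            # train this candidate architecture
    score = model.score(X_val, y_val)      # evaluate it on held-out data
    if score > best_score:
        best_score, best_arch = score, arch

print(best_arch, round(best_score, 3))     # the architecture the search selects
```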