By David Benigson
We’ve all witnessed the meteoric rise of generative AI, with conversations skyrocketing by a staggering 522 percent in just six months. Programs like OpenAI’s ChatGPT and DALL-E have captured imaginations, while industry leaders and pioneers – like the “Godfather of AI,” Geoffrey Hinton – voice concerns about potential misuses of this family of technologies. But even as exciting new applications for AI emerge every day, the core question facing businesses is far more basic: Can you trust what generative AI gives you enough to make big decisions with it?
While industry and governments chart regulatory, legal, and ethical frameworks, we already have a complementary tool that can mitigate generative AI’s downsides while enhancing its creative potential. This tool, called discriminative AI, is another branch of machine learning specifically designed to evaluate content and categorize new information. If generative AI is your creative, artistic friend who spontaneously shoots wild ideas from the hip, discriminative AI is your no-nonsense buddy with a laser focus on facts. Together, these two forms of artificial intelligence make a formidable team.
Discriminative AI excels in differentiating ideas or entities, making it exceptionally skilled at categorization. For instance, it can discern whether a news article discusses the fruit “apple” or the technology company “Apple,” or whether a writer’s sentiment is positive, negative or neutral. Because of its power of discernment, discriminative AI excels at decision-making. While generative AI creates something new, discriminative AI determines if something is correct.
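To make the apple-versus-Apple example concrete, here is a deliberately simple sketch of discriminative classification: scoring a piece of text against contextual cues for each category and choosing the better-supported label. The cue lists and function name are illustrative assumptions, not a real product's model; production systems learn these signals from data rather than hand-coding them.

```python
# Toy discriminative classifier: decide whether a mention of "apple"
# refers to the fruit or the technology company by counting contextual
# cue words. Cue lists here are hand-picked for illustration only.

FRUIT_CUES = {"orchard", "pie", "juice", "tree", "harvest", "ripe"}
COMPANY_CUES = {"iphone", "stock", "ceo", "shares", "mac", "earnings"}

def classify_apple(text: str) -> str:
    """Label an 'apple' mention as 'fruit', 'company', or 'unknown'."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    fruit_score = len(words & FRUIT_CUES)
    company_score = len(words & COMPANY_CUES)
    if fruit_score == company_score:
        return "unknown"  # no clear contextual evidence either way
    return "fruit" if fruit_score > company_score else "company"
```

The same pattern — score candidate labels, pick the strongest — underlies sentiment classification (positive, negative, neutral) as well, just with learned weights instead of keyword sets.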
While this form of machine learning may not headline today’s news, it holds the potential to address several of the challenges swirling around its generative AI cousin – and markedly increase trust in AI’s creative and strategic outputs.
First, discriminative AI can help us more skillfully navigate a world of ever-increasing AI-generated content by distinguishing what was written by humans from what was produced by machines. Alongside existing provenance tools like blockchain, which can establish origination and authenticity, discriminative AI can track content not just by where it was published but by how it originated – helping indicate whether a human or a machine created it.
Second, discriminative AI can expand use cases for generative AI. By isolating metrics like share of voice, sentiment and salience, discriminative AI already helps C-suite leaders decide how to position their brands for greatest impact. Advanced analytics, powered by discriminative AI, helps communicators identify whitespace, high-velocity topics and emerging risks, leading to smarter strategies. Add generative AI to the mix, and we can accelerate smart, strategic decision-making and execution. We’re just beginning to imagine those possibilities, from rapid creative prototyping, to outside-the-box thinking on risk mitigation, to new forms of stakeholder mapping, and much more.
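Of the metrics named above, share of voice is the most mechanical to illustrate: a brand's fraction of total mentions across a set of monitored articles. The sketch below is a minimal, assumed implementation – real platforms first run discriminative models to extract and disambiguate the mentions themselves.

```python
# Minimal "share of voice" computation: each brand's fraction of the
# total brand mentions detected across a corpus. Input is a flat list
# of mentions (one entry per detected mention), e.g. from a classifier.

from collections import Counter

def share_of_voice(mentions: list[str]) -> dict[str, float]:
    """Map each brand to its fraction of all mentions (values sum to 1)."""
    counts = Counter(mentions)
    total = sum(counts.values())
    return {brand: count / total for brand, count in counts.items()}
```

Sentiment and salience follow the same shape: discriminative models label each mention, and the labels are aggregated into a metric a strategist can act on.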
Together, these two forms of machine learning can enhance data-driven decision-making. In the past, rebuilding and retraining data sets to accommodate new sources like Twitter required significant investments of time, finances and energy. Generative AI can now perform this task swiftly. It can also enhance existing data-driven insights, summarize key themes and begin to translate data into a proposed course of action. As we’ve already seen, generative AI can also enhance user experience for all kinds of applications – from web browsing and search to data-driven intelligence services like the one I lead – facilitating more intuitive and frictionless interactions with data and insights.
AI companies are also working on solutions to counter a known risk in generative AI: hallucinations. By drawing only on high-quality data sources, deliberately excluding lower-quality ones, applying advanced information retrieval methods, and constraining prompts, generative AI can produce answers that are accurate and reliable. The end result is content far more trustworthy than what out-of-the-box solutions, like ChatGPT, can produce.
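One common way to combine those ingredients – vetted sources, retrieval, and a restrictive prompt – is to assemble the model's input so it can only answer from approved passages. The sketch below mocks that assembly step; the source store, identifiers, and function names are hypothetical, and the actual model call is omitted since it depends on the vendor's API.

```python
# Hedged sketch of grounding a generative model in vetted sources:
# build a restrictive prompt that includes only trusted passages and
# instructs the model to refuse when the answer is not present.
# All data and names below are illustrative placeholders.

TRUSTED_SOURCES = {
    "reuters-2023-q2": "Company X reported revenue of $4.2B in Q2 2023.",
    "sec-filing-10k": "Company X lists 12,000 employees in its 10-K.",
}

def build_grounded_prompt(question: str, source_ids: list[str]) -> str:
    """Assemble a prompt restricted to the given trusted passages."""
    passages = [f"[{sid}] {TRUSTED_SOURCES[sid]}" for sid in source_ids]
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in them, reply 'not found'.\n"
        + "\n".join(passages)
        + f"\nQuestion: {question}"
    )
```

In a full pipeline, a retrieval step (itself a discriminative task: ranking passages by relevance) would choose the `source_ids` before the prompt is built.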
Of course, the marriage of discriminative and generative AI cannot solve every challenge associated with the field. Artificial intelligence remains nascent and rapidly developing, with new obstacles and potential solutions emerging every day. Legal, economic, ethical and business frameworks need to complement technical solutions to build comprehensive governance structures. And there are risks in trusting one form of AI to balance another – consolidating still more power in the hands of machines. Nonetheless, artificial intelligence is here to stay, and discriminative AI can complement its creative counterpart in exciting ways.
As we chart AI’s future, let’s remember the full suite of tools at our disposal. By weaving together generative AI and its more bookish cousin, discriminative AI, we can start to balance creativity with accuracy. We can move closer to an AI future that aligns with our collective values and aspirations.
About the author
David Benigson is the CEO of Signal AI.