<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>MIT &#8211; MartechView</title>
	<atom:link href="https://martechview.com/tag/mit/feed/" rel="self" type="application/rss+xml" />
	<link>https://martechview.com</link>
	<description>Where Technology Powers Customer Experience</description>
	<lastBuildDate>Fri, 22 Aug 2025 14:04:35 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://martechview.com/wp-content/uploads/2023/10/Fevicon.png</url>
	<title>MIT &#8211; MartechView</title>
	<link>https://martechview.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>MIT: 95% of AI Pilots Fail, But New AI Shows Promise</title>
		<link>https://martechview.com/mit-95-of-ai-pilots-fail-but-new-ai-shows-promise/</link>
		
		<dc:creator><![CDATA[MartechView Editors]]></dc:creator>
		<pubDate>Fri, 22 Aug 2025 14:04:35 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[AI and Machine Learning in Marketing]]></category>
		<category><![CDATA[Airbus]]></category>
		<category><![CDATA[Data Analytics and Marketing Metrics]]></category>
		<category><![CDATA[emerging technologies]]></category>
		<category><![CDATA[Enterprise AI]]></category>
		<category><![CDATA[generative AI]]></category>
		<category><![CDATA[MIT]]></category>
		<category><![CDATA[NASA]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[PromptQL]]></category>
		<category><![CDATA[Siemens]]></category>
		<guid isPermaLink="false">https://martechview.com/?p=31947</guid>

					<description><![CDATA[<p>MIT finds 95% of generative AI pilots deliver no ROI; PromptQL’s transparent, uncertainty-aware AI offers a path to scalable, trustworthy enterprise AI.</p>
<p>The post <a rel="nofollow" href="https://martechview.com/mit-95-of-ai-pilots-fail-but-new-ai-shows-promise/">MIT: 95% of AI Pilots Fail, But New AI Shows Promise</a> appeared first on <a rel="nofollow" href="https://martechview.com">MartechView</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>MIT finds 95% of generative AI pilots deliver no ROI; PromptQL’s transparent, uncertainty-aware AI offers a path to scalable, trustworthy enterprise AI.</h2>
<p><span style="font-weight: 400;">A </span><a href="https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf" target="_blank" rel="noopener"><span style="font-weight: 400;">new report</span></a><span style="font-weight: 400;"> from MIT has sent shockwaves through the enterprise AI world. According to the </span><i><span style="font-weight: 400;">State of AI in Business 2025</span></i><span style="font-weight: 400;"> study, 95% of generative AI pilots deliver zero return on investment. The findings, based on 300 public deployments and more than 150 executive interviews, suggest that billions of dollars have been spent on </span><a href="https://martechview.com/tag/ai-and-machine-learning-in-marketing/"><span style="font-weight: 400;">AI experiments</span></a><span style="font-weight: 400;"> that never scale — and that most organizations are stuck on what MIT researchers call the “GenAI Divide.”</span></p>
<p><span style="font-weight: 400;">The numbers are stark. Forty percent of organizations say they’ve deployed AI tools, but only 5% have managed to integrate them into workflows at scale. Most projects die in pilot purgatory. Meanwhile, headlines are warning of an “AI bubble,” and investors are shorting AI stocks on the idea that generative AI’s big enterprise moment is already stalling out.</span></p>
<h3><span style="font-weight: 400;">But not everyone agrees with that reading.</span></h3>
<p><span style="font-weight: 400;">“Confidently wrong is the problem,” says Tanmai Gopal, co-founder and CEO of </span><a href="https://promptql.io/" target="_blank" rel="noopener"><span style="font-weight: 400;">PromptQL</span></a><span style="font-weight: 400;">, a unicorn AI company that counts OpenAI, Airbus, Siemens, and NASA as customers. “If the system is not always accurate even the tiniest percent of the time, I need to know when it’s not. Otherwise, my minutes turn into hours; the ROI disappears.”</span></p>
<h3><span style="font-weight: 400;">The Verification Tax</span></h3>
<p><span style="font-weight: 400;">In his </span><a href="https://promptql.io/blog/being-confidently-wrong-is-holding-ai-back" target="_blank" rel="noopener"><span style="font-weight: 400;">blog post</span></a><span style="font-weight: 400;">, </span><i><span style="font-weight: 400;">Being “Confidently Wrong” Is Holding AI Back</span></i><span style="font-weight: 400;">, Gopal describes what he calls the “verification tax.”</span></p>
<p><span style="font-weight: 400;">“I don’t know when I might get an incorrect response from my AI. So I have to forensically check every response.”</span></p>
<p><span style="font-weight: 400;">This tax explains much of what MIT labeled as the GenAI Divide. Enterprises eagerly launch pilots, but employees end up spending so much time double-checking outputs that the promised efficiencies never materialize.</span></p>
<p><span style="font-weight: 400;">It’s not that generative AI lacks raw horsepower — the models can be dazzling. It’s that their confidence is uncalibrated. In regulated or high-stakes industries, one bad answer can outweigh ten good ones. As Gopal puts it: “For serious work, one high-confidence miss costs more credibility than ten successes earn.”</span></p>
<h3><span style="font-weight: 400;">The Learning Gap</span></h3>
<p><span style="font-weight: 400;">MIT’s researchers framed the same issue differently. They found that most enterprise AI tools don’t retain feedback, adapt to workflows, or improve over time. Without those qualities, they stall.</span></p>
<p><span style="font-weight: 400;">Gopal agrees. “Without high-quality uncertainty information, I don’t know whether a result is wrong because of ambiguity, missing context, stale data, or a model mistake. If I don’t know why it’s wrong, I’m not invested in making it successful.”</span></p>
<p><span style="font-weight: 400;">That insight matters because it reframes the entire conversation. If AI isn’t failing due to lack of capability, but because it hasn’t been designed to communicate its limits and learn from corrections, then the fix is less about building bigger models — and more about building humbler ones.</span></p>
<h3><span style="font-weight: 400;">How PromptQL Solves It</span></h3>
<p><span style="font-weight: 400;">PromptQL has built its entire platform around solving this exact problem — what Gopal calls the difference between being “confidently wrong” and “tentatively right.”</span></p>
<p><span style="font-weight: 400;">Instead of presenting outputs as gospel, PromptQL calibrates confidence at the response level:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Quantifies uncertainty. Every answer comes with a confidence score. If the system is unsure, it abstains — effectively saying </span><i><span style="font-weight: 400;">“I don’t know.”</span></i></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Surfaces context gaps. Rather than hiding uncertainty, the system flags </span><i><span style="font-weight: 400;">why</span></i><span style="font-weight: 400;"> an answer may be unreliable: missing data, ambiguity, or lack of context.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Builds an accuracy flywheel. Each abstention or correction becomes training fuel. PromptQL captures those signals, letting the system improve continuously — closing the “learning gap” MIT identified as the number one cause of pilot failure.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Integrates into workflows. Instead of sitting in a chatbox, PromptQL embeds directly into enterprise processes like contracts, engineering, or procurement, so uncertainty flags and corrections appear exactly where the work is happening.</span></li>
</ul>
<p><span style="font-weight: 400;">“The starting point of this loop is if an AI system could tell the user when it’s not certain about its accuracy in a concrete and native way,” Gopal writes. That loop — abstain, get corrected, learn — is what he calls the accuracy flywheel. “We don’t need perfection; we need a loop that tightens.”</span></p>
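<p><span style="font-weight: 400;">As an illustration only — PromptQL’s internals are not public, and every name below is hypothetical — the abstain-correct-learn loop described above can be sketched in a few lines:</span></p>

```python
# Hypothetical sketch of a confidence-gated "abstain, get corrected, learn" loop.
# Class and field names are illustrative, not PromptQL's actual API.
from dataclasses import dataclass, field

@dataclass
class CalibratedAnswer:
    text: str
    confidence: float   # 0.0-1.0, calibrated per response
    reason: str = ""    # why confidence is low: "missing data", "ambiguity", ...

@dataclass
class AccuracyFlywheel:
    threshold: float = 0.8
    corrections: list = field(default_factory=list)

    def respond(self, answer: CalibratedAnswer) -> str:
        # Below the threshold, abstain and surface the context gap
        # instead of guessing ("tentatively right" over "confidently wrong").
        if answer.confidence < self.threshold:
            return f"I don't know ({answer.reason or 'low confidence'})"
        return answer.text

    def correct(self, answer: CalibratedAnswer, truth: str) -> None:
        # Each abstention or correction becomes training signal
        # for the next iteration of the loop.
        self.corrections.append((answer.text, truth))

flywheel = AccuracyFlywheel()
sure = CalibratedAnswer("Q3 revenue was $4.2M", confidence=0.95)
unsure = CalibratedAnswer("Q3 revenue was $9.9M", confidence=0.4, reason="stale data")
print(flywheel.respond(sure))    # high confidence: answer passes through
print(flywheel.respond(unsure))  # low confidence: abstain, with a reason
flywheel.correct(unsure, "Q3 revenue was $4.2M")
```

<p><span style="font-weight: 400;">The design choice the sketch captures is that abstention is a first-class output, not a failure mode: low-confidence responses are routed to a human and logged, which is what lets the loop tighten over time.</span></p>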
<h3><span style="font-weight: 400;">Tentatively Right Beats Confidently Wrong</span></h3>
<p><span style="font-weight: 400;">This humility-first approach has led to adoption in some of the most skeptical corners of the enterprise market. While 95% of pilots stall, PromptQL is closing seven- and eight-figure contracts with Fortune 500s, government agencies, and regulated industries — the exact places MIT says AI has struggled to gain traction.</span></p>
<p><span style="font-weight: 400;">The company is living proof that enterprise AI is not failing. The wrong kind of enterprise AI is.</span></p>
<p><span style="font-weight: 400;">As Gopal puts it: “No amount of solving any other problem — integration, data readiness, organizational readiness — will change the fact that AI’s tendency to be confidently wrong keeps it out of real-world use cases.”</span></p>
<h3><span style="font-weight: 400;">A Different Conclusion</span></h3>
<p><span style="font-weight: 400;">The takeaway, then, is not that AI is doomed to fail. It’s that enterprises must demand a different kind of AI: one that is transparent about its uncertainty, tightly integrated into workflows, and capable of improving with every interaction.</span></p>
<p><span style="font-weight: 400;">The MIT report is right to highlight the GenAI Divide. But if we only focus on the 95% that failed, we miss the 5% that are actually scaling — and why.</span></p>
<p><span style="font-weight: 400;">The companies that build and adopt AI that admits when it doesn’t know are quietly rewriting the story. PromptQL is one of them.</span></p>
<p><span style="font-weight: 400;">And if their traction holds, the conclusion isn’t that enterprise AI is a bubble. It’s that a small handful of companies have already figured out how to burst it.</span></p>
<p>The post <a rel="nofollow" href="https://martechview.com/mit-95-of-ai-pilots-fail-but-new-ai-shows-promise/">MIT: 95% of AI Pilots Fail, But New AI Shows Promise</a> appeared first on <a rel="nofollow" href="https://martechview.com">MartechView</a>.</p>
]]></content:encoded>
	</item>
	</channel>
</rss>
