A former attorney turned technology chief on accountability, explainability, and why the companies cutting corners on AI governance will pay for it later.
There is no shortage of executives willing to talk about responsible AI. There is a considerably shorter list of those willing to be honest about what it actually costs — and what it demands of the people whose names are on the org chart when things go wrong.
Peter Yeung, Chief Information Officer at Optimizely, is in the second group. A former practicing attorney with 18 years at the bar before moving into technology leadership, he brings an unusual combination of legal precision and operational candor to questions that the industry too often answers with carefully worded reassurance.
In a wide-ranging conversation, Yeung addresses the governance paradox at the heart of enterprise AI — how to move quickly without moving recklessly — and argues that accountability, far from being a legal fiction, is a structure that leaders must be willing to sign their name to. He also takes on explainability under GDPR, the data minimization debate, and the accelerating fragmentation of the global data landscape.
His answers are not always comfortable. That is precisely what makes them worth reading.
Excerpts from the interview:
Companies are rolling out AI faster than governance can keep up. Is ‘responsible AI’ just a story businesses tell to move quickly, or do you truly think governance can match the pace of deployment?
The companies actually getting value from AI aren’t treating governance as a brake; they’re building it into how they scale. Most of us started broadly: put the tools in people’s hands, see what sticks. That phase served its purpose, but what’s working now is the opposite — picking a handful of high-impact use cases and making sure the data, controls, and workflows behind them are genuinely solid, secure, and trustworthy. Done right, governance accelerates things by cutting rework, risk, inaccuracies, and fragmentation.
That said, I’d be lying if I said governance doesn’t have a cost. The fastest innovation I’ve seen on AI happens in the messy middle — small teams shipping fast, breaking things, learning in days rather than quarters. The moment you wrap that in review boards, data classifications, and approval workflows, you do slow it down. That’s just the reality. The trick isn’t pretending the trade-off doesn’t exist; it’s finding the right balance for where you are. Too little governance and you end up with a graveyard of pilots and a compliance problem. Too much and you kill the energy that made AI exciting in the first place.
Responsible AI isn’t a layer you bolt on top of performance; it’s what allows AI to graduate from experimentation into something the business can actually rely on. But you have to be honest that getting the balance right is the work.
When AI systems use flawed or unclear data and cause harm, responsibility is often spread among teams and vendors. Right now, isn’t the idea of clear accountability in AI mostly just a legal fiction?
As CIO at Optimizely, with both the CISO and Trust organization reporting into me, I’d push back on the idea that accountability is a legal fiction — but I understand why people frame it that way. AI accountability is more complex than in traditional systems because it spans multiple teams: the people sourcing the data, the people building or selecting the models, and the people deciding how outputs are actually used in the business. Spread that across vendors, too, and yes, it can feel diffuse. Add to that my earlier point about empowering individuals within the business to innovate at speed, and the task becomes daunting.
But the way I look at it, regardless of the actor — vendor, third-party model, internal team, or individual employee — we are ultimately accountable, both internally and to our customers, for the end result. That accountability can’t be outsourced. The vendor contract doesn’t absolve us. The model provider doesn’t absolve us. If something goes wrong, our customers don’t care about the seven hops in the supply chain; they care that we own it.
What makes that real, rather than rhetorical, is structure. We treat AI like any other critical business process: explicit ownership of data inputs, clear responsibility for model deployment, and a named, accountable owner for outcomes in production. Without that, accountability genuinely does dilute across vendors and teams, and that’s where the “legal fiction” critique starts to land. With it, you create a clear line of responsibility even in a distributed system, and you give the CISO and Trust functions something concrete to govern against.
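What that structure can look like in practice: a minimal sketch of a registry that refuses to promote an AI system without named owners for data, deployment, and outcomes. All field names and values here are illustrative, not a description of Optimizely’s actual tooling.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AISystemRecord:
    """One entry in a hypothetical AI accountability registry."""
    system_name: str
    data_owner: str        # accountable for data inputs and their lineage
    deployment_owner: str  # accountable for model selection and deployment
    outcome_owner: str     # named owner for outcomes in production
    vendors: tuple[str, ...] = ()  # third parties involved; they do not dilute ownership


def ready_for_production(record: AISystemRecord) -> bool:
    """Block promotion unless every accountability role has a named owner."""
    return all((record.data_owner, record.deployment_owner, record.outcome_owner))


churn_model = AISystemRecord(
    system_name="churn-prediction",
    data_owner="jane.d",
    deployment_owner="ml-platform-team",
    outcome_owner="cio-office",  # accountability stays internal, even with vendors
    vendors=("model-provider-x",),
)
assert ready_for_production(churn_model)
```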
So it’s not a fiction. It’s just harder, and it requires leaders to actually sign their names.
Rules like GDPR require that automated decisions be explainable. But big AI systems often cannot give real reasons for their choices. Are we trying to enforce laws that no longer fit the world we live in?
Having practiced as an attorney for 18 years, I’d say the question is sharper than the framing suggests — but the answer isn’t quite “the laws no longer fit.” It’s that the laws were never as clear as people assume.
GDPR’s intent is absolutely still relevant: to protect individuals and hold companies accountable for automated decisions that affect them. That hasn’t aged. But read Article 22 alongside Articles 13–15 and Recital 71, and what you find is a requirement to provide “meaningful information about the logic involved” — with genuine, ongoing debate among regulators and legal scholars about what that actually means in practice. GDPR doesn’t even explicitly grant a “right to explanation”; it’s inferred. The framework was contested before modern AI arrived. Large models didn’t break a clean framework; they stress-tested an ambiguous one.
That matters because, in the absence of clear guidelines, the standards organizations actually have to meet are believability and traceability. Can you credibly describe how the system reached its decision? Can you trace the data, the controls, and the human checkpoints? Have you documented it clearly enough to walk a regulator, a customer, or a court through it without flinching? That’s the real test today.
So no, I don’t think we’re enforcing laws that no longer fit. We’re operating in a gap that regulators and industry need to close together. Until they do, the burden is on companies to set their own bar: traceable data, auditable decisions, guardrails on outputs, and documentation you’d be comfortable defending.
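Concretely, “traceable data, auditable decisions” often reduces to a decision record you could replay for a regulator. A minimal sketch, assuming a simple append-only audit log; every name and value below is hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """Enough context to walk a regulator through one automated decision."""
    model_version: str
    input_fingerprint: str        # hash of the inputs, not the raw personal data
    logic_summary: str            # "meaningful information about the logic involved"
    guardrails_passed: list[str]
    human_checkpoint: str | None  # reviewer ID if a human was in the loop
    timestamp: str


def record_decision(model_version: str, inputs: dict, logic_summary: str,
                    guardrails: list[str], reviewer: str | None) -> DecisionRecord:
    # Fingerprint the inputs so the decision is traceable without storing PII.
    fingerprint = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return DecisionRecord(
        model_version=model_version,
        input_fingerprint=fingerprint,
        logic_summary=logic_summary,
        guardrails_passed=guardrails,
        human_checkpoint=reviewer,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )


record = record_decision(
    model_version="credit-screen-v3.2",
    inputs={"income_band": "B", "tenure_months": 18},
    logic_summary="Gradient-boosted score over 12 documented features, threshold 0.7",
    guardrails=["pii-scrub", "bias-check-q3"],
    reviewer="analyst-042",
)
```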
AI works best with lots of data, but privacy rules call for using as little data as possible. If companies have to choose, will they prioritize performance over principle? Are we already seeing this happen?
There’s real tension here, but framing it as a binary choice between performance and principle is limiting. The premise that AI works best with “lots of data” is itself worth challenging. More data isn’t automatically better. If it’s poor quality, incomplete, or stripped of the right context, you’re just feeding the model noise, and noise in produces worse outcomes out: hallucinations, bias amplification, and decisions you can’t defend. I’d rather have a smaller, well-governed, well-contextualized data set than a sprawling lake of mixed-quality inputs, an approach that also honors GDPR’s tenet of Privacy by Design.
That reframes the privacy question. Privacy rules pushing companies toward data minimization aren’t necessarily working against AI performance — in many cases, they’re forcing the discipline that actually improves it. The companies getting this right are being deliberate about their data strategy: prioritizing quality, relevance, and governance over volume. That’s not a compromise position; that’s just better engineering.
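Data minimization as “better engineering” can be as literal as an allow-list applied before anything reaches a model. A hypothetical sketch of such a privacy-by-design filter:

```python
# Hypothetical privacy-by-design filter: only fields with a documented,
# model-relevant purpose ever reach the training pipeline.
ALLOWED_FIELDS = {"plan_tier", "feature_usage_30d", "support_tickets_90d"}


def minimize(record: dict) -> dict:
    """Drop everything without a documented purpose; noise and PII never enter."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


raw = {
    "plan_tier": "enterprise",
    "feature_usage_30d": 412,
    "support_tickets_90d": 3,
    "email": "user@example.com",  # personal data with no modeling purpose
    "free_text_notes": "...",     # unvetted context: likely noise, possible PII
}
assert minimize(raw) == {
    "plan_tier": "enterprise",
    "feature_usage_30d": 412,
    "support_tickets_90d": 3,
}
```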
Are we seeing companies cut corners on privacy for short-term performance? Yes, and it tends to come back to bite them through regulatory exposure, customer trust erosion, or models that don’t generalize the way they thought. Trust is becoming a genuine differentiator, particularly in customer-facing and enterprise use cases, and you can’t retrofit it.
The right answer is to design systems where privacy and performance are engineered in from the start, rather than treated as a trade-off you settle later. When done well, they reinforce each other rather than compete.
With decisions like Schrems II and laws like the CCPA, are we heading toward a split internet where data cannot move freely across countries? If so, what will break first: innovation or trust?
What’s interesting about the question is that it frames the split as a US–Europe divergence, when the more consequential fault line is East versus West — between western frameworks debating how to balance rights and commerce, and an eastern framework where the state’s relationship to data is structurally different. That gap isn’t closing through a Privacy Shield successor like the EU–US Data Privacy Framework.
So yes, we’re already in a split internet. Between Schrems II, CCPA, the EU AI Act, India’s DPDP, China’s PIPL, and a patchwork of US state laws, any global business is operating across fifteen-plus regulatory environments. Having worked on both the technology and legal sides, and having had to adjust to both business and customer needs, I can say this isn’t hypothetical anymore; it’s the operating environment. We architect for it: data residency, regional processing, and model deployment choices that respect where data can and can’t go.
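Architecting for that fragmentation often comes down to routing rules: where a record lives determines where it may be processed, and data without an approved transfer mechanism stays put. A simplified sketch; the regions, endpoints, and transfer pairs are invented for illustration:

```python
# Hypothetical residency router: processing follows the data's home region
# unless an approved transfer mechanism (SCCs, adequacy decision) exists.
REGIONAL_ENDPOINTS = {
    "eu": "https://eu.models.example.com",
    "us": "https://us.models.example.com",
    "in": "https://in.models.example.com",
}

APPROVED_TRANSFERS = {("us", "eu")}  # e.g., covered by standard contractual clauses


def processing_endpoint(data_region: str, requested_region: str) -> str:
    if requested_region == data_region:
        return REGIONAL_ENDPOINTS[data_region]
    if (data_region, requested_region) in APPROVED_TRANSFERS:
        return REGIONAL_ENDPOINTS[requested_region]
    # No legal basis to move the data: process it in place instead.
    return REGIONAL_ENDPOINTS[data_region]


assert processing_endpoint("eu", "us") == REGIONAL_ENDPOINTS["eu"]  # Schrems II caution
assert processing_endpoint("us", "eu") == REGIONAL_ENDPOINTS["eu"]  # approved transfer
```

The design choice worth noting is the fallback: when no legal basis exists to move the data, the processing moves to the data rather than the reverse.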
On what breaks first — innovation and trust fail together, even if one precedes the other. If regulation becomes so prescriptive that nothing can cross borders without months of legal review, innovation slows. If companies route around the rules, trust collapses, and regulators tighten further. It’s a doom loop either way.
The companies that come through this well won’t bet on innovation at all costs at the expense of trust, nor on trust so cumbersome it smothers innovation. They’ll invest in both, and accept that regulatory complexity is now part of the engineering, product, and support lifecycle, not separate from it.