Sep 23, 2025

Edwin Poot of Thredd on Agentic AI, Governance, and Building the Infrastructure of Trust

Most people only talk about what Agentic AI can do, but they don’t think about how to ready systems to support it at scale. That’s where the real challenge is.
By Mikayla Lewis, Edwin Poot

8 minutes

We connected at the end of his workday, camera flickering, a golden retriever barking somewhere off-screen, and Edwin Poot got straight to it. He doesn’t pitch sparkly demos; he talks about failure modes. “We’re not giving our support agents the ability to delete production databases,” he said, a line that doubles as a thesis for how he thinks about Agentic AI.

Poot is Dutch and unapologetically direct, a trait he says he’s learned to calibrate across cultures; he also has two golden retrievers with distinct personalities that, on some days, join his Teams calls. The combination of plain-spoken risk focus and a human, at-home cadence frames his larger point: most people fixate on what agents can do, not on how to make them behave at scale.

Inside Thredd, he’s building for that behavior: policy-as-code governance, fine-grained IAM, an agent identity registry, and a simulation environment so clients can test before anything touches production, because 10,000 users might switch agents on at once. Costs, latency, and explainability aren’t footnotes; they’re the product.

As Poot puts it, the work is “about keeping it secure, governed, and cost-effective” before the flashy use cases hit the wild, which is why our first question was about what most teams miss when they talk about Agentic AI, and how Thredd’s guardrails change the conversation.

My read, after an hour with him: Poot is not trying to impress you with AI. He is trying to make sure it behaves. That clarity is why the next part matters. What follows is our conversation.


Q&A with Edwin Poot

Q: You’ve described Agentic AI as something most people misunderstand. What makes it so different from traditional AI, and why is it so misunderstood?
Edwin Poot: Most people only talk about the external factors, like what Agentic AI can do for you. But they don’t think about how to actually ready your systems to support it at scale. That’s where the real challenge is.
An agent acts like a person doing a task, accessing APIs, and sometimes replicating itself multiple times. It can carry out those tasks in parallel, hitting your systems far faster than a human ever could. That’s where performance pressure comes in. And it’s not just about performance. It’s about permissioning. What if the agent does something it’s not supposed to do? You have to think about the granularity of those permissions.
We’re not giving our support agents the ability to delete production databases. That’s a basic principle, but with agents, if you’re not careful, that’s exactly what could happen.
Q: What infrastructure are you building at Thredd to support this new reality?
Edwin Poot: We are building the entire foundation needed to support agents across use cases, from the core language model to the interface layer, which is partially live today. We use Amazon Bedrock to run it, and we’ve built a chat-style agent UI that looks and feels like ChatGPT, where a user can type in a task and the agent can perform it. But under the hood, we’ve implemented MCP—Model Context Protocol—to make those interactions context-aware when accessing APIs. That’s not something you get out of the box. And we’re layering that with authentication, permissioned credentials, real-time knowledge APIs, and service compute that can elastically scale per region.
We also feed it carefully selected company documentation, so the agent has actual context when performing tasks. But we don’t just turn it loose. We scope every agent with what I call “policy-driven scoping of permissions/credentials for agents.” That means each agent is restricted to only the specific data and operations it needs to do its job, and nothing more.
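To make that concrete, here is a minimal sketch of what policy-driven scoping of permissions could look like in code. Everything in it, from the AgentPolicy class to the endpoint names, is a hypothetical illustration rather than Thredd’s actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Declarative, per-agent scope: the only data and operations it may touch."""
    agent_id: str
    allowed_endpoints: frozenset  # e.g. {"GET /v1/config"}
    allowed_datasets: frozenset   # e.g. {"client_config"}

def is_allowed(policy: AgentPolicy, method: str, endpoint: str, dataset: str) -> bool:
    """Enforce the policy on every call; anything not explicitly granted is denied."""
    return (
        f"{method} {endpoint}" in policy.allowed_endpoints
        and dataset in policy.allowed_datasets
    )

# A support agent may read client configuration, and nothing else.
support_agent = AgentPolicy(
    agent_id="support-config-agent",
    allowed_endpoints=frozenset({"GET /v1/config"}),
    allowed_datasets=frozenset({"client_config"}),
)

assert is_allowed(support_agent, "GET", "/v1/config", "client_config")
assert not is_allowed(support_agent, "DELETE", "/v1/databases", "client_config")
```

The deny-by-default shape is the point: an agent that was never granted a destructive operation cannot stumble into one.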
Q: Can you give an example of how Agentic AI will show up in the real world, specifically in the context of financial services?
Edwin Poot: One of the first examples we’re targeting is configuration support. A lot of support tickets come from simple misconfigurations. So we’re designing an agent that can access the client’s Thredd API setup, read their configuration, analyze it, and proactively offer feedback like, “Hey, I think this setting is off, did you mean to configure it this way?” That’s a huge lift off the support desk, and unlike a human, that agent has access to all our documentation instantly. No human could possibly know everything ever published in our knowledge base, but the agent can.
We’re also looking at agents for autonomous dispute management, fraud reversals, and even virtual card optimization in tricky onboarding flows. There’s a concept floating around called “commerce agents,” which is basically AI that has access to your wallet and can make purchases for you. But again, that only works if you build in the right constraints. You probably don’t want your agent to spend more than $200 a week without approval, right? That requires building in approval steps, transaction limits, and other safeguards before the agent ever makes a payment.
We’re designing those now: templates for agents that clients can tweak, enable, disable, and even simulate in test environments before going live. And we’re thinking about the pricing models too, because every agent running consumes compute and language model tokens. If 10,000 users all activate agents, that’s a lot of simultaneous AI calls. The cost structure has to make sense for us and our clients.
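To illustrate the constraint he describes, here is a minimal sketch of a weekly spending cap with a human-approval step. The $200 figure comes from his example; the SpendGuardrail class and its method names are assumptions made for the sketch.

```python
from datetime import datetime, timedelta

class SpendGuardrail:
    """Tracks an agent's rolling 7-day spend and escalates anything over the cap."""

    def __init__(self, weekly_limit: float = 200.00):
        self.weekly_limit = weekly_limit
        self.history: list[tuple[datetime, float]] = []  # (timestamp, amount)

    def _spent_last_week(self, now: datetime) -> float:
        cutoff = now - timedelta(days=7)
        return sum(amount for ts, amount in self.history if ts >= cutoff)

    def authorize(self, amount: float, now: datetime | None = None) -> str:
        """Return 'approved' or 'needs_user_approval'; never silently exceed the cap."""
        now = now or datetime.utcnow()
        if self._spent_last_week(now) + amount > self.weekly_limit:
            return "needs_user_approval"  # route to the human before any payment
        self.history.append((now, amount))
        return "approved"

guard = SpendGuardrail()
print(guard.authorize(150.00))  # approved
print(guard.authorize(75.00))   # needs_user_approval: would exceed $200 this week
```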
Q: You emphasize explainability, especially in fraud prevention. How does Agentic AI change how we think about fraud detection and trust?
Edwin Poot: We currently use a third party called Featurespace for behavioral profiling and fraud detection. It works by analyzing transaction patterns to flag anomalies, but the problem is that sometimes what’s flagged isn’t fraud. It’s just unusual behavior. That’s where, in the future, Explainable AI (XAI) comes in. Let’s say your card is declined while you’re trying to pay at a restaurant. With XAI, we don’t just block the transaction… we tell you why it was blocked. And if you say, “No, that was me,” the system should allow you to identify yourself biometrically and approve the transaction in real time.
That’s the future we’re building toward. Right now, networks don’t support that kind of dynamic approval process. But with XAI, we can extract metadata from the decision and offer feedback loops. It’s not just about saying “yes” or “no,” but rather about being able to prove why you said yes or no, and offering a short window for challenge and reversal. We’re also thinking about adaptive trust scoring. That means learning from session history across payment contexts, not just looking at isolated behavior. Because if we want to move toward trusted autonomy, we can’t rely on brittle rules. We need real-time, explainable models that adjust and respond intelligently.
Let’s say your card is declined while you’re trying to pay at a restaurant. With XAI, we don’t just block the transaction… we tell you why it was blocked. – Edwin Poot
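As a sketch of the pattern he describes, the snippet below pairs an explainable decision record with an adaptive trust score that learns from confirmed feedback. The class names, thresholds, and scoring rule are illustrative assumptions, not Featurespace’s or Thredd’s actual models.

```python
from dataclasses import dataclass

@dataclass
class FraudDecision:
    """An explainable outcome: not just block or allow, but why, with a challenge path."""
    allowed: bool
    reason: str              # human-readable explanation surfaced to the cardholder
    challenge_window_s: int  # how long the user may contest, e.g. via biometrics

def score_transaction(trust: float, anomaly: float) -> FraudDecision:
    # Illustrative rule: block only when the anomaly clearly outweighs earned trust.
    if anomaly - trust > 0.5:
        return FraudDecision(False, "Unusual merchant and location for this card", 120)
    return FraudDecision(True, "Consistent with session history", 0)

def update_trust(trust: float, user_confirmed: bool) -> float:
    """Adaptive trust: a confirmed 'that was me' raises the score; fraud lowers it."""
    return min(1.0, trust + 0.1) if user_confirmed else max(0.0, trust - 0.2)

decision = score_transaction(trust=0.2, anomaly=0.9)
print(decision.reason)                              # tell the user why, not just "declined"
new_trust = update_trust(0.2, user_confirmed=True)  # biometric confirmation feeds back
```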
Q: It sounds like governance is baked into every part of your AI strategy. How are you rethinking governance and compliance in this new landscape?
Edwin Poot: It starts with seeing AI policy as code. Every token, every API call, and every action an agent takes must be enforceable by rules built into the architecture, not managed through a manual process. That only works if you have fine-grained identity and access controls built into your APIs. That’s why we’re rolling out a brand-new IAM solution that leverages Amazon’s native features and runs regionally to reduce latency. When someone is paying in a restaurant using a client app connected to our system, even half a second of delay can create friction in the customer experience. So performance is a governance issue, too.
And governance doesn’t stop there. We’re planning to register every agent instance in what we call an “agent identity registry.” That way, every agent is traceable, auditable, and revocable. If a client has 5 million cardholders and 10% use agents, that’s potentially 500,000 active agent instances. How do you track them? How do you revoke access if something goes wrong? You need systems that scale, not just in terms of tech, but in terms of trust. That’s why we’re even exploring the idea of an audit agent that monitors other agents. If one behaves unexpectedly, it can trigger a deactivation and notify the user. We want to build for the future, not just the demo.
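For illustration, an agent identity registry could start as simply as the sketch below: every instance gets a unique identity, every lifecycle event is logged, and revocation is a first-class operation. The names and structure are hypothetical, not Thredd’s design.

```python
import uuid
from datetime import datetime

class AgentRegistry:
    """Every agent instance gets an identity that is traceable, auditable, revocable."""

    def __init__(self):
        self._agents: dict[str, dict] = {}
        self._audit_log: list[tuple[datetime, str, str]] = []

    def register(self, owner: str, template: str) -> str:
        agent_id = str(uuid.uuid4())
        self._agents[agent_id] = {"owner": owner, "template": template, "active": True}
        self._log(agent_id, "registered")
        return agent_id

    def revoke(self, agent_id: str, reason: str) -> None:
        """Kill switch: deactivate the agent and leave an auditable trail."""
        self._agents[agent_id]["active"] = False
        self._log(agent_id, f"revoked: {reason}")

    def is_active(self, agent_id: str) -> bool:
        return self._agents.get(agent_id, {}).get("active", False)

    def _log(self, agent_id: str, event: str) -> None:
        self._audit_log.append((datetime.utcnow(), agent_id, event))

registry = AgentRegistry()
aid = registry.register(owner="client-123", template="config-support")
registry.revoke(aid, reason="unexpected API call pattern flagged by audit agent")
assert not registry.is_active(aid)
```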
Q: Beyond technology, you’ve made big changes to how your teams are structured. Why did you shift toward a cross-functional model?
Edwin Poot: The old way wasn’t working anymore. When decisions are centralized, they don’t scale. You start with a small team, maybe 50 people, and the CEO is still making most decisions. But when you grow past 150, 200, 300 people, that model breaks down. So we moved to a cross-functional team setup led by what I call a “single-threaded leader”—someone responsible for delivery across departments; in our case, the Technical Product Managers we have recently recruited. We embed people from different functions into one unit, and we use feature-driven involvement and OKRs to keep them aligned.
That way, each person owns a piece of the whole and drives it forward, end to end. It’s a mindset shift. You go from siloed departments that throw things over the fence to a team that’s accountable for the entire lifecycle. And because we’ve automated so much of our software delivery process—from serverless architecture to continuous deployment—we can release multiple times a day. That kind of speed only works if decision-making is decentralized enough for teams to act without waiting for a steering committee.
Q: You mentioned that people only talk about the user-facing side of AI. What do you think they’re missing?
Edwin Poot: Everyone talks about what the agent can do—checkout automation, e-commerce, support tickets—but they don’t talk about what that does to your infrastructure. It’s easy to write a prompt. It’s much harder to build a system that secures it, governs it, and keeps it cost-effective. What happens when an agent spawns 20 versions of itself and starts hammering your APIs? What happens if that behavior isn’t aligned with your data policies, or it causes a surge in compute costs you weren’t expecting? That’s the stuff I think about. That’s what keeps it scalable, or breaks it.
And I don’t hear that at the conferences. I hear about the flashy demos and use cases, not about the registry, or the enforcement model, or the synthetic data training required for privacy compliance. We’re not just building a cool feature, we’re building the backbone for AI in fintech. The goal is for clients to enable agents with complete confidence that we’ve already handled the performance, security, and compliance challenges they don’t see.
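One concrete defense against the self-replication scenario he raises is to rate-limit agents by lineage rather than by instance, so spawned copies share a single budget. The sketch below is an assumption about how that might look, not Thredd’s enforcement model.

```python
import time
from collections import defaultdict, deque

class AgentRateLimiter:
    """Caps requests per agent lineage, so 20 spawned copies share one budget."""

    def __init__(self, max_calls: int, per_seconds: float):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self._calls: dict[str, deque] = defaultdict(deque)

    def allow(self, lineage_id: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        window = self._calls[lineage_id]
        while window and now - window[0] > self.per_seconds:
            window.popleft()          # drop calls that fell out of the window
        if len(window) >= self.max_calls:
            return False              # the whole lineage is throttled together
        window.append(now)
        return True

# Copies of the same agent inherit the parent's lineage id, so replication
# cannot multiply the API budget.
limiter = AgentRateLimiter(max_calls=100, per_seconds=60.0)
if not limiter.allow(lineage_id="agent-7f3"):
    print("throttled: agent lineage exceeded its API budget")
```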
Q: What’s a fun fact about you that people might not know?
Edwin Poot: I’m Dutch, so directness is just part of how I communicate. In the Netherlands, that’s normal. You say what you mean, and people appreciate it. But in other cultures, that same directness can come across differently, so I’ve had to be mindful of how it’s received and adapt when needed.
Aside from that, other fun facts would be that I split my time between the Netherlands, Spain, and the UK, and I have two golden retrievers.




About Edwin Poot

Edwin Poot is a global CTO who partners with boards and CEOs to align technology with growth and M&A goals. He has led VC- and PE-backed companies through international expansion and platform modernization, building cloud-native platforms that support fast go-to-market and scale. He develops high-performing distributed teams and instills clear, accountable ways of working. Product-focused, he works with product and design to shape customer-driven roadmaps and new revenue. He also established a Data and AI platform, rolling out agentic AI for onboarding, support, and decision making. Known for transforming complex, regulated estates into resilient, data-driven platforms, he helps high-growth companies scale with speed and stability.

Mikayla Lewis
Executive Author

Features Editor, Strixus

Mikayla Lewis is a seasoned editor, writer, and creative visionary who brings the perspectives of the world’s top executives to life through in-depth interviews and compelling storytelling.
