Today on the AI Daily Brief: what more than a thousand executives told us about AI agents.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
All right, friends, quick announcements before we dive in.
First of all, thank you to today's sponsors Notion, Blitzy, KPMG, and Robots and Pencils.
To get an ad-free version of the show, go to patreon.com/aidailybrief or you can subscribe
on Apple Podcasts.
And if you are interested in sponsoring the show, send us a note at
[email protected].
We're sold out for 2025 and about halfway sold out for the beginning of 2026.
Send us a note and we can get you all the information you need.
Now with that, we turn to today's topic.
And since this is a long read slash big think type of weekend episode, with an even longer weekend for many of you in the US, I thought it would be fun to go beyond the headlines and actually dig into the data around what we are hearing about AI and agents in the enterprise, based on our conversations with thousands of executives.
Now the context for this, for those of you who don't know, is my startup, Superintelligent. Superintelligent is effectively an AI business intelligence startup that focuses on AI and agent planning in the enterprise.
We use voice agents to totally transform the process of discovery, opportunity mapping,
and figuring out what AI and agent opportunities are most pertinent for your organization,
and what you're going to need to do to get ready to take advantage of them.
Over the last six months, we have done thousands and thousands of these interviews, and this analysis represents a subset of what we've learned.
We're going to talk about the most common challenges we see, what the biggest blockers
are, as well as some of the interesting opportunities and what the biggest enablers are.
Hopefully this is the type of presentation that can be extremely practical and useful
for those of you who are inside businesses figuring out how to harness AI and agents.
To kick us off, let me give you a few grounding statistics.
As part of this opportunity mapping, we calculate what we call an agent readiness score.
It's on a scale of 100, and it's divided into quartiles with the bottom quartile being
called agent initiate, the next quartile being called agent explorer, the third quartile
being agent pilot, and the fourth being agent ready.
Now, there's no shame in any of these quartiles.
They simply reflect different levels of preparation and organizational development when it comes
to AI and agents more specifically.
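To make that tiering concrete, here is a minimal sketch in Python of how a 100-point score could map to those four labels. The 25/50/75 cutoffs are an assumption inferred from the quartile framing, not necessarily Superintelligent's actual methodology.

```python
# Hypothetical sketch: bucketing a 100-point agent readiness score into the
# four tiers described above. Cutoffs at 25/50/75 are assumed from the
# quartile framing; the real scoring methodology may differ.

def readiness_tier(score: float) -> str:
    """Return the agent readiness tier for a score on a 0-100 scale."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 25:
        return "Agent Initiate"   # bottom quartile: just getting started
    if score < 50:
        return "Agent Explorer"   # some experimentation, little infrastructure
    if score < 75:
        return "Agent Pilot"      # pilots and partial foundations in place
    return "Agent Ready"          # systematic readiness for agents

print(readiness_tier(52.1))  # -> "Agent Pilot", the average reported below
```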
Overall, the average readiness score was 52.1, which is in that agent pilot category, and
58% of all the organizations we work with are at the low end of that agent pilot category.
This means pretty much exactly what you think it would mean.
There are probably some AI pilots and AI infrastructure there, but it's not necessarily super robust.
Maybe there's been some dabbling even with agents, but not necessarily a lot of full
scale deployments.
There are probably lingering issues around either governance or data or something else,
but there is also a foundation to be built upon.
That's the agent pilot stage.
Explorer, which represented 39% of organizations, is a step back from that.
Maybe there's even less infrastructure ready and even fewer pilots going on, but we're
still not at a null state.
Initiate is really on that low end and refers to organizations that are just getting up
and running when it comes to anything AI and agents.
Again, no shame in that game.
There are plenty of organizations that fit in that mold.
One other interesting overview is that, on average, we find that big organizations are
a little bit farther along than small organizations.
You might think that this would be the opposite, given that smaller organizations feel like
they are better able to dynamically adapt, but it turns out this is hard for everyone,
and large organizations have just had a mandate from on high to get out ahead of this compared
to others.
When it comes to the top use cases, once again, if you are a regular listener here or if you spend any time on these issues inside your company, I don't think they will seem super surprising.
At the very top is Enterprise Knowledge Search.
That came up as a recommendation in 48% of our audits.
There's just a ton of information locked inside corporations that, if it were more easily
accessible, would help people inside those organizations do their job better.
Agent-assisted coding is, once again, very high on that list.
In fact, the only reason I think it's as low as 45% of cases is that, in addition to auditing full organizations, we frequently work with just specific lines of business or departments or even functions, and on average, those have tended not to be coding organizations.
Some others that you might expect are showing up pretty frequently: customer service agents, sales support, back-office reporting automation. You can kind of get a feel for the distribution of use cases that are coming up.
It's pretty wide.
It's pretty cross-cutting.
Let's move now into the challenges that we found.
You've always got to start with the hard stuff and then come out to the opportunities, right?
One really interesting note is that, in general, tech readiness has not been the issue, and
what I mean by that is that most organizations at this point, even smaller organizations,
have relatively modern, sophisticated technology platforms.
They're working with some of the major cloud providers.
They've been updated for new possibilities over the last five years.
Basically, the issues with AI are not, in general, the clunkiness of legacy platforms.
Where the issues do come in, by far the number one blocker and the number one challenge across a huge cross-section of these audits, is data fragmentation.
Really, it's data everything. Fragmentation is an issue. How usable the data is, even if it's not fragmented, is an issue. Compatibility is an issue. Data access is an issue.
We often see organizations that have spent a bunch of time trying to organize all their
data, but still have huge issues around who can access what types of data.
This is particularly the case in highly regulated industries where there are really strict barriers
between different people having different types of data access within the organization.
We also see it in the finance industry where different data sets are relevant for different
types of investment or financial activities.
We talk a lot right now on this show about context engineering, context orchestration,
context in general, as the big unlocker of all the next opportunities of AI and agents.
I think this is going to be a massive theme for 2026, and part of why I think so is that
we just see this over and over and over again.
Even among companies that have spent time on this challenge, no one is fully there. It remains an issue, and treating it as foundational and important is, I think, going to benefit organizations in the year to come.
Next up, we move to some of the perhaps more surprising findings.
One is that enthusiasm can inadvertently create resistance.
What I mean by this is that in many organizations, executives have jumped out into the lead when
it comes to the AI opportunity.
They are pushing their people to go out and learn AI, to use AI tools, to integrate AI into their workflows, to design entirely new workflows around AI.
And even when those executives have good intentions, that rapid, top-down push for multiple new
AI tools and processes can create change fatigue.
It can create a feeling of overwhelm for employees who are just trying to do their job while also
trying to adapt to this whole new reality.
Which brings us to a related and very important point, something that we see echoes of in far more than half of our audits: employees very frequently report some variant of being too busy to learn the thing that saves time.
This is one of the great paradoxes of AI inside the enterprise right now.
In many cases, there actually isn't skepticism about the idea that AI tools could make one's life better by making their work more efficient.
The challenge is that there is a learning curve.
There is a barrier to entry.
And unless an organization has structured time for people to go learn those new processes, it just becomes another category of work to do and another item on a to-do list that was already too long.
It is far more often employee bandwidth, rather than budget, that is the constraining factor for AI adoption right now.
And there is such a common mistake of executives providing excitement, providing the tools,
but not providing mandated time to learn and experiment with those tools.
And it's that third missing element that becomes the big blocker.
Another challenge that comes up really frequently in well over half of our audits is some type
of a policy awareness gap.
Now the outcome of this, of people not knowing exactly what the AI policies are, how they are and aren't supposed to use the tools, what they are and aren't allowed to do, is either, one, avoidance, or, two, shadow usage; in other words, using the tools without telling colleagues or managers.
Now shadow AI isn't always bad a priori.
In some cases, it's just a chance for people to learn how to use new tools, sometimes on
their own time and even on their own budget.
And they bring that knowledge back into the workplace.
The problem is that if people are using these tools, and especially if they are bringing
sensitive data to those tools outside of the enterprise ecosystem, there can be real challenges
in that sort of usage.
What we find over and over and over again is that people are not using these tools externally
because they specifically want to be breaking the rules.
It's because they don't really know what the rules are, and they're not exactly sure
how to go find out.
Now there are other reasons for shadow AI, probably the biggest one being that historically
there has been a gap between the quality of the tools that are available to the enterprise
user, as opposed to those that consumers get to use on their own.
More recently that gap has started to close, thank goodness.
And now this policy awareness gap is one of the biggest drivers, at least that we see, of that shadow AI.
A couple of other patterns come up pretty frequently.
The first is that we see organizations stuck trying to figure out whether they're supposed to buy or build, when actually, in the context of AI, that is a false dichotomy.
There really is no such thing as off the shelf when it comes to AI and especially when it
comes to agents.
Even the most off the shelf agent is still going to involve some amount of customization,
some amount of wiring into your existing systems.
And so getting out of the mindset of buying versus building and just thinking about an
integrated process that's going to live somewhere in the middle there is really important.
Relatedly, one of the most interesting anti-patterns that we see, in other words, something where
you would think it would be good for an organization, but actually ends up really dragging them back,
is an overabundance of a DIY mindset.
We will frequently see really strong resistance from the people you might think would be the most tech-forward: IT, and, earlier on in these surveys, although much less so now, engineering departments more broadly.
But whatever the reason for it, pride of purpose, or past experience, or a sense that you know your systems best, when we see this DIY mindset of always having to build everything in-house, those organizations tend to be less far along on their agent journey than those who don't have it.
Overall, unsurprisingly, across all of these interviews and all of these audits, co-pilots
and assistants are increasingly table stakes.
They are everywhere.
They have proliferated.
There have even been some agent pilots and agent experiments, but full agent platforms, in other words, thinking systematically about agents, are extremely, extremely rare.
Let's sum up some of the big blockers that we see.
Fragmented data, like I said, remains one of the biggest problems, if not the biggest, for all organizations.
Even the organizations who are higher in agent readiness still deal with issues of fragmented,
unstructured, unorganized data.
There tend not to be platforms; instead, we still live in pilot world.
There is a governance fog where people aren't exactly sure about the rules of using these
tools.
There is the change fatigue that we talked about, and consequently, skills gaps.
Over 70% of the organizations that we surveyed reported big skills gaps in their workforce when it came to AI.
I actually have a thesis that because there has been such a big emphasis this year on agents, on automations that replace entire categories of work or workflows, the two-year journey toward upskilling people, which had started right after ChatGPT and was proceeding bigly at the end of 2024, got waylaid and kind of kicked back over to HR and learning and development, as opposed to being a front-and-center focus for the main organization.
I think one of the things that you'll see in 2026 is a recalibration, where you're not going to have such a strict divide between augmented AI, i.e., employees using tools, versus agentic AI, in other words, entire workflows being automated.
Organizations are going to try to do that all at once, and to do so, one of the things
they're going to need is better documentation around their processes.
This is the last big blocker that came up in something like 44 or 45% of the interviews
that we've done.
Turns out it's very, very hard to automate workflows when those workflows live exclusively
inside people's heads and are not articulated anywhere that an agent can read and learn.
Chatbots are great, but they can only take you so far.
I've recently been testing Notion's new AI agents, and they are a very different type
of experience.
These are agents that actually complete entire workflows for you in your style, and best
of all, they work in a channel that you already know and love because they are purpose-built
Notion super users.
Notion's new AI agents completely expand the range of what Notion can do.
It can now build documents from your entire company's knowledge base, organize scattered information into structured reports, and basically do tasks that used to take days and get them done in minutes.
These agents don't just help with work, they finish it.
Getting started with building on Notion is easier than ever.
Notion agents are now your very own super user to help you onboard in minutes.
Your AI teammates are ready to work.
Try Notion AI for free at the link in our show notes.
Enterprise engineering leaders start every development sprint with the Blitzy platform
bringing in their development requirements.
The Blitzy platform provides a plan, then generates and pre-compiles code for each task.
Blitzy delivers 80% plus of the development work autonomously while providing a guide
for the final 20% of human development work required to complete the sprint.
Public companies are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org.
Blitzy is providing a limited time, 30-day free proof of concept for qualifying enterprises.
The team will provide a 5x velocity increase on a real development project in your org.
Visit blitzy.com and press Book Demo to learn how Blitzy transforms your SDLC from AI-assisted to AI-native. That's blitzy.com.
What if AI wasn't just a buzzword, but a business imperative?
On You Can with AI, we take you inside the boardrooms and strategy sessions at the world's
most forward-thinking enterprises.
Hosted by me, Nathania Winnemore, and powered by KPMG, this seven-part series delivers real
world insights from leaders who are scaling AI with purpose, from aligning culture and
leadership to building trust, data readiness, and deploying AI agents.
Whether you're a C-suite executive, strategist, or innovator, this podcast is your front row
seat to the future of enterprise AI.
So go check it out at www.kpmg.us/aipodcasts or search You Can with AI on Spotify, Apple Podcasts, or wherever you get your podcasts.
AI isn't a one-off project.
It's a partnership that has to evolve as the technology does.
Robots and Pencils works side-by-side with clients to bring practical AI into every phase: automation, personalization, decision support, and optimization.
They prove what works through applied experimentation and build systems that amplify human potential.
As an AWS-certified partner with global delivery centers, Robots and Pencils combines reach with high-touch service.
Where others hand off, they stay engaged.
Because partnership isn't a project plan.
It's a commitment.
As AI advances, so will their solutions.
That's long-term value.
Progress starts with the right partner.
Start with Robots and Pencils at robotsandpencils.com/aidailybrief.
But let's shift over now to opportunities.
First, let's talk about some of the observations from organizations that are doing well, or, even where organizations are struggling, the parts that were bright spots.
One thing that came up pretty frequently is that AI is so high leverage that you can in
many cases get massive ROI from a single individual.
A number of times across these thousands of interviews, we saw or heard examples where someone inside a particular function had figured out a new AI or agent-enabled workflow for a core piece of what they did, one that was not only able to help them do their work better, but, because there were a bunch of other people like them in a similar role, was able to spread across the organization, leading to hundreds of thousands or even millions of dollars of benefit, again, all from the genesis of a single individual.
I think this speaks to the need to have good information sharing systems where people are
actively encouraged not just to use AI, but to share their successes so they can be disseminated
across the organization.
Here's one that people are sometimes surprised about.
There are so many flashier ways to use agents that it might surprise you that internal support
bots are very frequently one of the use cases that really get internal teams, including skeptics,
on board with AI.
This comes back to that idea that there is a ton of knowledge, information, data all locked
up in various silos and pockets across an organization.
And if you had better tools to access that information, perhaps in a conversational chatbot sort of way, it could make a big difference to people's jobs in the here and now.
And that is exactly what we see.
The organizations that did some version of an internal support bot as one of their early
use cases had real success putting that early win on the board, and it had a double ROI.
The first ROI was the benefit that it provided for people who were just trying to do their
jobs and who were able to do so more effectively and more quickly.
But the other benefit was less resistance to other AI deployments that would come later.
Next we have sort of the inverse of that anti-pattern we talked about before of the DIY mindset.
Zero prior automation is in our experience actually an advantage.
One of the things that has come out across these interviews is that if organizations had
spent a bunch of time with previous approaches to automation, think RPA, they in many cases
had to unlearn those systems, both in terms of humans unlearning old knowledge, but also in terms of ripping the old systems out and dismantling them.
Organizations that were leapfrogging any sort of automation 1.0 type of strategy like
RPA had an advantage because they were able to go straight to the new UI/UX patterns and experience patterns that are different about AI and agents.
So if you haven't done any automation and you think that puts you farther behind, you could actually turn that into an advantage.
Now, if internal support bots deliver those psychological benefits and reduced resistance, the place where we are starting to see the first ROI in actual, practical financial terms is in and around back-office finance automation and support roles. So finance and support are where we're seeing the first ROI.
This does not mean, of course, that that is the right place for your organization to start.
But to the extent that you are trying to put early wins on the board, especially early wins
where you can actually calculate the benefit, there's a lot to be said for looking into
those categories.
We have another inverse here. If one of our big blockers was undocumented processes, the reverse is obviously true as well: better-documented processes mean faster pilots and more results, more quickly.
One of the things that's great about this, too, is that AI can help. In addition to services like Superintelligent that do at least some part of this, and other services that go even more directly at having AI observe what people are doing and turn that into workflows that agents can understand, you can also do this without any custom software at all.
For example, have someone fire up a Loom that just screen-captures them as they go about some particular job or task that they do day in and day out. When they're done, paste that Loom into ChatGPT and, A, have it turn the walkthrough into a document explaining the process, or, B, even from there, have it start to make suggestions on where there might be efficiencies just in the human version of it.
The point is that there's no such thing as too documented when it comes to processes. The better documented the process is, the faster you're going to start to see value from the early implementations of AI.
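For those who want to script that step, here is a minimal sketch assuming the OpenAI Python SDK, an API key in the environment, and a plain-text transcript exported from the recording; the model name and prompt are illustrative, not a prescribed workflow.

```python
# Hypothetical sketch: turn a screen-recording transcript into process
# documentation plus efficiency suggestions. Assumes the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def document_process(transcript: str) -> str:
    """Ask a model to write up the process, then flag possible efficiencies."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Turn this walkthrough transcript into a step-by-step "
                    "process document that an agent could follow, then list "
                    "places where the human version could be streamlined."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Usage: export the transcript from your recording tool as a text file.
with open("walkthrough_transcript.txt") as f:
    print(document_process(f.read()))
```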
Now moving over into the governance idea, if we see the organizations that don't have
clear governance struggling with shadow AI, there is a very clear pattern among organizations
that are more successful with their governance, which we might call a sandbox with guardrails.
The idea here is pretty simple.
You can't be so restrictive with AI and agents that people can't go figure out how to get
value out of these things.
There has to be some tolerance for and even encouragement of experimentation.
That's the sandbox part.
The guardrails are the systems you put around it to make sure that there aren't potentially
deleterious effects and negative externalities or consequences that come out of that sandbox
experimentation.
This doesn't have to be super complicated.
It could be as simple as saying, you can play around with this type of tool with this portion
of our work and our data, but not this portion.
These datasets are excluded from that sandbox.
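As a purely hypothetical illustration, a guardrail like that can be as simple as an exclusion list checked before any sandbox experiment touches a dataset; the names here are invented, and real governance tooling would be far more involved.

```python
# Hypothetical sketch of a "sandbox with guardrails": experimentation is
# allowed by default, but specific datasets are excluded. All names invented.
EXCLUDED_DATASETS = {"customer_pii", "payroll", "deal_room_docs"}

def allowed_in_sandbox(dataset: str) -> bool:
    """Permit sandbox experimentation on any dataset not on the exclusion list."""
    return dataset.lower() not in EXCLUDED_DATASETS

for ds in ["marketing_assets", "payroll"]:
    verdict = "allowed" if allowed_in_sandbox(ds) else "excluded from the sandbox"
    print(f"{ds}: {verdict}")
```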
Whatever the specifics, we see this pattern in over two-thirds of the organizations that have established governance regimes for AI and agents.
Now, once again, we did top blockers before. Now let's talk about the top enablers.
Although they can be overexuberant, and we talked about that before, committed execs are still one of the key enablers across all the organizations we audited. Without that, it is very hard to get momentum around an AI strategy.
A second top enabler: organizations that tended to score higher on agent readiness were much more likely to have some sort of AI task force or center of excellence or other centralized organization that could both absorb internal needs and disseminate potential AI strategies, that could serve as a resource as new tools were rolled out, and that could think about the different experiments that were happening and try to see what should be scaled across the whole organization.
Number three, you've heard me talk a number of times about quick wins.
Like anything, AI is a momentum game.
When people see the benefit of it, they want to do more of it.
So looking for those quick wins, where you can quickly realize either really tangible ROI for the organization as a whole or just really clear benefits to individuals and how they work, can build the momentum that you need for broader, more sophisticated, and more comprehensive strategies. Related to the task forces is the idea of AI champions and training.
The training part is obvious. We're way behind on this; organizations, simply put, just need to spend more resources on training their people.
Now, some of this is the market's fault.
I've complained numerous times about some of the deficits that I see in training products
right now.
But if you really want your teams operating at their highest function now, you have to give them the resources to learn how to use these tools, even if that means some amount of customized design and consultation.
And one of the best strategies around training is to specifically elevate the people who
are getting out ahead and getting farther than their peers when it comes to AI.
Almost every organization has some cohort of people who, out of sheer personal interest, naturally and natively and without being asked, are spending their nights or weekends learning these tools.
Maybe they're learning them in a different context and not using them the same way they would at work, but these folks are getting used to the patterns: they're learning prompting, they're learning how to organize context.
Organizations that score higher on the agent readiness assessment tend to have some formalization of those people as, for example, AI champions who can be a resource for their peers as those peers get up to speed on AI as well.
Lastly, while we found in general that a solid, API-driven, modern core tech foundation is pretty common across all these organizations, and that tended not to be the big blocker, the small number of organizations that didn't have that were dead in the water.
So our assumption is that while most organizations are there, if you happen to be working off really clunky legacy tech infrastructure, you've got to change it.
There's just no way of getting around it.
Now, one really interesting and practical takeaway from all of this: the single biggest lift that we see, basically the single biggest factor that shows a difference in average agent readiness score between organizations that have it and those that don't, is an established governance framework.
Organizations with established AI governance frameworks were 6.6% more agent ready on average
than those without.
And I think that this speaks to the idea that governance is not just about the rules of
the road.
It's about creating safe space where people can experiment and build and try new things
within their core work streams rather than off on the side without people knowing.
So if you take away anything else from this, if you don't have a governance framework right
now, you can probably make a big dent in how well AI is working in your organization by
just focusing on that.
Now the last thing I want to go through is sort of a summary of some of the archetypes
we see of organizations in case these help you place yourself in context.
The first we'll call the visionary bottleneck.
These organizations are vision rich but plumbing poor.
They tend to have strong executive intent.
They have modern SaaS systems, but they have weak data set up.
They're dealing with those data fragmentation issues.
The risks for this type of organization are the change fatigue that we talked about and, practically, pilot purgatory: lots and lots of things starting but never adding up to something greater than the sum of their parts.
Next we have the cautious incumbent.
These organizations are often in a regulated or conservative industry where they almost
by definition prioritize governance, risk management, and security above all else.
These are almost the inverse of the last archetype: they have prioritized formal AI policies and a diligent review process, but to the detriment of just getting out and trying things.
In these organizations, their caution creates a culture of low trust and slow experimentation.
Employees are often wary of AI's accuracy to the point where they don't even try it
and their fear of compliance missteps holds them back from doing anything.
The risks here are analysis paralysis, competitive lag, and stifled innovation.
The next archetype we'll call grassroots tinkerers.
This is where there's broad general support for AI experimentation, but in the absence
of a central strategy.
So in this case, you see lots of people being encouraged to use GPTs or co-pilots, but without a real roadmap and without a lot of support in the form of upskilling or long-term strategy.
The risks here are really inconsistent quality and ultimately being left behind as organizations
more systematically bring on agentic workflows.
The grassroots tinkerers jumped out ahead when it came to some of the co-pilot type uses
but are falling behind in the realm of agents without that central strategy.
Lastly, and aspirationally, we have the foundation builder.
These are the organizations that take a deliberate infrastructure first approach to AI.
They tend to have a strong central IT or data team that is focused on the plumbing of AI: consolidating data into a unified lakehouse, establishing a secure AI gateway, developing an enterprise agent platform that others can plug into.
The biggest risk here is that while they are extremely technologically sound, the approach
can be slow to deliver tangible business value, which can sometimes create frustration among
business units that are eager to just get those quick wins.
And so what you can see about all of these archetypes is that there is no such thing as purely negative. In each of them, the organizations have particular strengths, but in many cases, vis-a-vis AI, those strengths are weaknesses and vice versa.
Understanding which of these archetypes or other archetypes your organization fits into
might be a way to help you better identify where your remediations and next steps are
best going to be.
Wrapping up: right now, I think that there are two big contenders for "year of" titles for 2026. By this, I mean the big enterprise themes that we're going to see, and it's not necessarily an either/or. In fact, I think it's going to be a both/and. The two that I am seeing discussed most often, and that resonate most with me based on all of these interviews and surveys and also the conversations that we have, are the idea that '26 is going to be the year of context and the year of ROI.
ROI is easy to say and hard to figure out.
One of the things that comes up over and over and over again is that CIOs right now understand
that ROI is important, but that it also does not fit into traditional frameworks from a
pre-AI era.
Organizations are not only trying to measure ROI, they're trying to figure out how to
measure ROI in the first place.
This is something that Superintelligent is thinking deeply about. As a preview, we are about to launch a performance pulse product that helps organizations better track the results of their AI and agent deployments, because I think this is just a complex, challenging area that every organization is going to be thinking about and needing help with.
But I will also say that one of the things that is going to lead organizations to be
successful when it comes to ROI is better context.
I think that there is going to be a bunch of narrative air cover going into next year for organizations to pull back from flashy agent pilots and instead see data foundation work as sexy, exciting, something that organizations should be really invested in and thrilled about.
Basically, whereas six months ago it might have been cooler to release some pilot agent than it was to get five MCP servers set up, I think that flips going into next year, and I think the organizations that really lean into that year of context and work on their data foundations are going to find, by the end of the year, that it has also been the year of ROI.
And in any case, that's going to do it for this particular episode.
I hope this was interesting.
Let me know if this type of ground level feedback and data is useful.
For now, I hope you're having a great long weekend.
Appreciate you listening or watching as always and until next time, peace.