Afsheen Afshar, Founder of Pilot Wave Holdings – Interview Series
Afsheen Afshar, Founder of Pilot Wave Holdings, is a veteran data science and investment leader whose career spans senior roles at Goldman Sachs, J.P. Morgan, and Cerberus Capital Management, where he helped pioneer large-scale data science and AI initiatives within financial institutions. With a technical foundation rooted in neuroscience and machine learning research at Stanford University, Afshar has built a career at the intersection of advanced analytics, private equity, and operational transformation, ultimately focusing on applying AI to real-world business performance. At Pilot Wave, he brings together investment expertise, operational leadership, and deep technical knowledge to identify, acquire, and scale companies using data-driven strategies and modern technology.
Pilot Wave Holdings is an acquisition and growth platform focused on transforming small and mid-sized businesses through artificial intelligence and advanced technology. The firm leverages proprietary AI systems to analyze operations, uncover inefficiencies, and drive performance improvements across its portfolio companies. By combining hands-on operational involvement with modern data infrastructure, Pilot Wave aims to modernize traditional businesses that have historically lacked access to advanced tools, positioning them for scalable, long-term growth in an increasingly technology-driven economy.
You’ve held pioneering AI leadership roles at firms like JPMorgan and Cerberus, and later founded Pilot Wave Holdings to bring AI into traditional industries. What core insight or frustration led you to shift from building AI inside large institutions to acquiring and transforming companies directly?
The core issue was the need for empowerment to move with speed. Inside large institutions, even when people agree on the opportunity, there are usually too many layers between identifying the problem and actually doing something about it. AI does not usually fail because the technical work is impossible; it fails because the organization is too slow, too political, or too fragmented to act with urgency. I wanted to work in an environment where strategy, operations, and technology could be aligned rapidly. Acquiring and building companies directly creates that kind of empowerment. If you actually want to change how a business runs with speed and volition, being the owner is important.
Much of the industry still celebrates successful pilots, yet real value comes from production systems. Why do AI initiatives so often break down at that transition point, and what separates organizations that successfully operationalize AI from those that stall?
A lot of pilots are designed to succeed, which is exactly why so many companies fool themselves. They happen in clean environments, with extra attention, limited scope, and none of the friction that shows up in production. The deeper issue is often an empathy gap. Technologists often do not take the time, or frankly do not have the desire, to learn the operator experience, so they build something that works in theory or in a demo but does not fit the reality of the job. The companies that operationalize AI successfully are the ones that take the human workflow seriously from the beginning and build for the messiness of real operations instead of trying to avoid it. Everyone says they want production value, but many teams are still optimizing for pilot applause.
Your work focuses on embedding AI into sectors like infrastructure, manufacturing, and e-commerce. How does deploying AI in these environments differ fundamentally from deploying it in digital-native or software-first companies?
The difference is that in more traditional Main Street businesses, the empathy and human element are even more important than people in the AI world usually want to admit. In software-first environments, teams can often move quickly and patch problems later. In infrastructure, manufacturing, and e-commerce, the work is tied to physical systems, real constraints, and people who know immediately when something does not fit the way the business actually runs. That means you cannot just show up with a technically elegant solution and expect adoption. If you do not understand the operator experience, your AI strategy is probably already broken. These environments expose shallow thinking very quickly, which is part of why they matter so much.
You’ve argued that AI adoption should start with business priorities rather than tools. What does that look like in practice, and how should leadership teams reframe their approach to AI transformation?
Most leadership teams are starting in the wrong place. They begin with a what-can-this-tech-do-for-us conversation because it sounds exciting and current, when the right place to start is asking what our most important business priorities are. Once you know that, you can talk honestly about the best tools to address those priorities, and it does not always have to be AI. That sounds obvious, but most companies are still chasing technology first and hoping the business case will somehow appear afterward. It is backwards, and it leads to a lot of wasted motion. If leadership wants real outcomes, they need to stop treating AI strategy like a shopping exercise.
At Pilot Wave, you’re not just advising companies, you’re reshaping them post-acquisition. What are the first structural or cultural changes you implement to make AI adoption actually stick?
The first thing is finding both senior and junior sponsors. The junior sponsors know the day-to-day reality and can make sure the rank and file actually do what needs to be done, while the senior sponsors make sure politics is minimized and the effort does not get quietly strangled. A lot of companies lean too heavily on top-down support and then wonder why nothing changes in practice. The truth is that AI adoption usually fails either because the organization resists it at the ground level or because leadership lets interference pile up around it. You need both forms of support in place early. Otherwise, the initiative becomes another executive talking point that never really lands.
As AI agents become more capable and infrastructure becomes increasingly abstracted, what strategic risks emerge for companies that don’t control their own data and AI stack?
I would argue that companies always need foundational control. That requires instrumenting every system, which is how Pilot Wave approaches system design, because if you cannot see what is happening, measure it, and place guard rails around it, then you are taking on risk you do not understand. That does not mean you should not delegate tasks, because delegation will absolutely continue at scale, but delegation without measurement is not an executable strategy. A lot of the market is getting seduced by abstraction because it makes things feel easier and faster, but that convenience can hide real systemic fragility. If the right instrumentation, measurement, and guard rails are in place, the potential systemic risk can be minimized. If they are not, you are building dependency before you have earned trust.
There’s a growing gap between how AI is marketed and how it performs in real-world environments. What signals should technical leaders and operators look for to distinguish meaningful AI capabilities from superficial claims?
Always ask for real value measurement. I have been religious about measuring value for my entire career, down to individual projects, because without that discipline it becomes very easy to confuse excitement with results. Every effort should be held to an ROI and tracked. If someone cannot explain clearly how the system affects revenue, cost, throughput, labor efficiency, or some other real business metric, then there is a good chance they are selling theater. The industry has become far too comfortable rewarding polished demos and vague claims. Without rigorous value measurement, there is real risk of throwing away time and money.
You’ve built and led large-scale data science organizations. How do you see the role of AI teams evolving as automation increases and agent-based systems take on more responsibilities?
AI will take on higher and higher level tasks. At Pilot Wave, we are already developing AI that can take as input something like “grow my revenue by 10 percent” rather than “redo my website”, which is much closer to where a lot of AI still sits today. That shift changes the role of AI teams in a serious way because the work becomes less about isolated tasks and more about how systems reason across actual business goals. A lot of teams are still thinking too narrowly about automation and are underestimating how quickly technology is moving up the stack. The center of gravity is going to shift from task execution toward business delegation. That is a much bigger change than most enterprises are preparing for.
Many enterprises are investing heavily in AI, yet struggle to generate measurable ROI. What are the most common failure patterns you’ve observed, and how can they be avoided?
Most AI efforts, especially at large enterprises, are still too focused on sexy dashboards, buzzwords, and things that are easy to present internally but hard to tie to real value. Companies spend a lot of time making the work look sophisticated instead of making it useful. The failure pattern is usually not mysterious; it is just a lack of discipline around actionable value creation. If there is no clear economic objective, no owner, and no measurement framework, the effort should not move forward. Being religiously focused on value creation at every step along the way is critical. Otherwise enterprise AI becomes a very expensive branding exercise.
Looking ahead, which AI capabilities or system-level breakthroughs do you believe will have the greatest impact on physical-world industries over the next five to ten years?
The ability to give very high-level goals to an AI system and delegate major parts of the business is going to become very real very soon. That is the capability that will matter most, because it moves AI beyond narrow task execution and into actual operating leverage. As a consequence, people will focus more on the relationship and trust aspects of business, along with the actual physical nature of the work at hand, whether that is construction or another field-based industry. A lot of people still talk about AI as a productivity layer sitting off to the side, but that view is already starting to feel outdated. The systems are becoming capable of taking on much broader responsibility. The future is very exciting, but it is also going to be much more disruptive than many incumbents want to admit.
Thank you for the great interview; readers who wish to learn more should visit Pilot Wave Holdings.
