Rajan Sethuraman, CEO of LatentView Analytics – Interview Series
Rajan Sethuraman
, CEO of LatentView Analytics, is a seasoned executive whose career spans consulting, talent leadership, and enterprise transformation, with leadership roles at Accenture and KPMG before joining LatentView. His progression from Chief People Officer to CEO reflects a strong emphasis on talent development, organizational design, and scalable operating models, which now shape his approach to AI and analytics. With deep experience in recruitment, learning, and business strategy, he has consistently focused on aligning people, culture, and technology to drive measurable outcomes, culminating in leading LatentView through its IPO and global expansion while positioning AI as a business capability rather than a standalone function.
LatentView Analytics
is a global data analytics and digital transformation firm that helps enterprises harness data, artificial intelligence, and advanced analytics to improve decision-making and drive growth. The company provides services such as data engineering, predictive analytics, and AI-driven consulting across industries including financial services, retail, and technology, working with Fortune 500 clients worldwide. Its core value lies in transforming raw data into actionable insights that enable businesses to optimize operations, anticipate trends, and create competitive advantage in an increasingly data-driven economy.
You began your career in consulting and talent leadership at Accenture and KPMG before becoming CEO of LatentView Analytics, where you led the company through its IPO and first acquisition. How has your background in talent development and organizational strategy shaped the way you scale AI and analytics today?
My early career focused heavily on talent, leadership development, and building organizations that could scale. That experience continues to shape how I think about AI today. Technology alone does not scale an organization. What matters is how teams adopt it, how leaders align around it, and how clearly the business problem is defined. At LatentView, we spend a lot of time helping organizations build the operating model, skills, and culture needed to turn analytics and AI into everyday decision-making.
Because of that, I tend to think about AI through the lens of organizational readiness. Scaling AI requires strong domain expertise, data foundations, and teams that can translate insights into action. My focus has always been on building those capabilities together versus treating AI as a siloed capability.
You’ve spoken about AI minimalism — prioritizing clarity, curiosity, and culture over chasing every new GenAI trend. What does AI minimalism look like in practice for enterprise leaders?
AI minimalism begins with focus. Enterprise leaders do not need to pursue every new model or capability that appears. They need a small number of meaningful problems where AI can improve decisions or productivity in a measurable way. That might be pricing decisions, supply chain planning, or how knowledge moves across the organization. Starting with a well-defined problem helps teams build confidence and learn what responsible scaling actually looks like.
It also means embedding AI into real workflows instead of treating it as an isolated experiment. When teams see the technology helping them solve everyday problems, adoption tends to grow naturally. Curiosity and experimentation are still important, but they work best when they are anchored in a clear sense of purpose.
Many organizations are rushing into generative AI without strengthening their data foundations first. What are the warning signs that a company is building on unstable ground?
One thing I often notice is when the AI conversation moves much faster than the data conversation. If leaders are talking about copilots and generative models but there is still confusion about where key data lives, who owns it, or which metrics the business actually trusts, that’s usually a sign the foundation isn’t ready yet. AI systems depend heavily on reliable and well-governed data. Without that, it becomes difficult for people to trust the outputs.
Another signal is when companies have many pilots running but very few that influence real decisions. Generative AI can produce impressive demonstrations, but the real test is whether it becomes part of how the organization operates.
Since stepping into the CEO role, you’ve driven measurable impact for global clients. What differentiates companies that successfully operationalize AI from those that remain stuck in pilot mode?
The companies that scale AI successfully treat it as an operating discipline, not an innovation side project. They assign executive ownership, connect use cases to measurable business outcomes, and design for integration from the beginning. They also invest in the less glamorous work, such as data pipelines, governance, process redesign, and user adoption. That is usually the difference between a pilot that gets attention and a capability that actually changes how decisions are made.
At LatentView, we have seen that companies move faster when they anchor AI in a business problem that already matters to the enterprise, such as improving planning accuracy, inventory outcomes, or supplier visibility. AI that is tied to metrics the business already cares about has a much better chance of getting funded, governed, and adopted at scale.
How do you approach scaling AI responsibly across a large organization while maintaining governance, security, and accountability?
Responsible scaling starts with acknowledging that AI decisions eventually affect real customers, employees, and business outcomes. That means governance cannot be an afterthought. Organizations need clear policies around data access, model oversight, and monitoring once systems are in production.
In practice, the most effective governance models are cross-functional. Business leaders, technology teams, and risk or compliance groups all need to be involved. AI systems also benefit from transparency around how outputs are generated and where human judgment remains essential. With guardrails established early, organizations can expand adoption while maintaining trust.
LatentView works with enterprises at very different levels of digital maturity. How does your AI strategy differ when advising a mature organization versus one still early in its analytics journey?
With a mature organization, the conversation is usually about acceleration. They already have meaningful data assets, so we focus on prioritizing high-value use cases, improving accessibility, and embedding AI into workflows where the business can act on it quickly. That could mean enterprise knowledge retrieval, connected planning in supply chain, or domain-specific models that improve decision velocity across functions.
For organizations earlier in the journey, the starting point is different. We spend more time on data readiness, governance, BI modernization, and capability building so the company can support AI in a sustainable way. In those situations, maturity assessment and sequencing matter a lot. You do not want to promise an agentic future to a business that still lacks trusted data, common KPIs, or executive alignment on the problem it is trying to solve.
Given your deep experience in talent acquisition and learning, what skills should companies prioritize developing internally in the AI era versus hiring externally?
Internally, I think companies should focus on building broad AI and data literacy. Not everyone needs to become a data scientist, but people across the business should learn to ground decisions in insights, ask better questions, and use AI tools in their daily workflows. When this practice spreads across teams, it becomes much easier to identify where AI can genuinely help and where it probably shouldn’t be used.
Externally, the hires tend to be more specialized. Roles like data engineering, machine learning architecture, and AI governance require deep expertise that organizations may not always have in-house. The companies that do this well usually combine those specialists with business teams that understand the context and the decisions that need to improve.
Cultural resistance often slows transformation. What leadership behaviors have you found most effective in building trust and momentum around AI adoption?
Clear communication from leadership makes a big difference. Employees want to understand why new technologies are being introduced and how they connect to the company’s strategy. Explaining the purpose behind AI initiatives and tying them to real business goals helps build confidence across the organization.
Learning is just as important. Automation and AI are already reshaping many roles, so companies need to actively support employees in developing new capabilities. People engage much more openly with change when they see real opportunities to build new skills and grow alongside the technology.
As AI becomes embedded in decision-making processes, how should boards and executive teams rethink performance metrics and accountability?
AI changes how decisions are made, so leadership teams need to look beyond traditional project metrics. The real question becomes whether AI is improving the quality and speed of decisions in areas that matter to the business. That might mean better demand forecasts, more accurate pricing decisions, or faster responses to changes in the market.
If those outcomes are improving, AI is doing its job. AI performance cannot sit in a separate dashboard from business performance for very long.
Accountability also needs to be clearer. Someone still owns the data, someone is responsible for monitoring models in production, and someone ultimately makes the decision. AI can support those decisions, but governance and oversight remain essential.
Over the next three to five years, what shifts in enterprise AI adoption will matter most — and what should leaders start doing now to stay ahead?
Over the next few years, AI will start showing up much more inside everyday business decisions. Many companies have spent time experimenting with pilots and proofs of concept. The next step is making sure those capabilities actually support how teams plan, forecast demand, manage supply chains, or make marketing decisions.
Work will also evolve as AI becomes more capable. As routine tasks become more automated, many roles will shift toward guiding, interpreting, and working alongside AI systems. Organizations that strengthen their data foundations and help employees build those capabilities will adapt much more easily as AI becomes part of everyday operations.
Thank you for the great interview. Readers who wish to learn more should visit
LatentView Analytics
.
