Over my seven-plus-year career in data science, working on projects ranging from customer-value measurement to product analytics and personalization, one question has remained constant through it all: Do we have the right data, and can we trust it?
With the rapid rise of Generative AI, that question hasn’t disappeared; it’s become even more urgent. As AI systems evolve from proof-of-concept assistive chatbots to autonomous agents capable of reasoning and acting, their success increasingly depends not on how complex or powerful they are, but on how well they understand the context in which they operate.
In recent weeks, leaders like Tobi Lütke (CEO of Shopify), Andrej Karpathy (former Director of AI at Tesla), and others have spotlighted this shift. Lütke’s tweet was widely reshared, including by Karpathy, who elaborated on it further. He emphasized that context engineering is not about simple prompting, but about carefully curating, compressing, and sequencing the right mix of task instructions, examples, data, tools, and system states to guide intelligent behavior. This emerging discipline, still poorly understood in most organizations, is quickly becoming foundational to any serious application of generative AI.
This growing attention to context engineering signals a broader shift underway in the AI landscape. For much of the past year, prompt engineering dominated the conversation, shaping new job titles and driving a surge in hiring interest. But that momentum is tapering. A Microsoft survey across 31 countries recently ranked “Prompt Engineer” near the bottom of roles companies plan to hire (Source). Job search trends reflect the change as well: according to Indeed, prompt-related job searches have dropped from 144 per million to just 20–30 (Source).
But this decline doesn’t signal the death of prompt engineering by any means. Instead, it reflects a field in transition. As use cases evolve from assistive to agentic AI, systems that can plan, reason, and act autonomously, the core challenge is no longer just phrasing a good prompt. It’s whether the model has the right information, at the right time, to reason and take meaningful action.
This is where Context Engineering comes in!
If prompt engineering is about writing the recipe, carefully phrased, logically structured, and goal-directed, then context engineering is about stocking the pantry, prepping the key ingredients, and ensuring the model remembers what’s already been cooked. It’s the discipline of designing systems that feed the model relevant data, documentation, code, policies, and prior knowledge, not just once, but continuously and reliably.
In enterprises, where critical knowledge is often proprietary and fragmented across various platforms, including SharePoint folders, Jira tickets, Wiki pages, Slack threads, Git Repositories, emails, and dozens of internal tools, the bottleneck for driving impact with AI is rarely the prompt. It’s the missing ingredients from the pantry, the right data, delivered at the right moment, in the right format. Even the most carefully crafted prompt will fall flat if the model lacks access to the organizational context that makes the request meaningful, relevant, and actionable.
And as today’s LLMs evolve into Large Reasoning Models (LRMs), and agentic systems begin performing real, business-critical tasks, context becomes the core differentiator. Models like OpenAI’s o3 and Anthropic’s Claude Opus 4 can handle hundreds of thousands of tokens in one go. But sheer capacity is not enough to guarantee success. What matters is selectively injecting the right slices of enterprise knowledge: source code, data schemas, metrics, KPIs, compliance rules, naming conventions, internal policies, and more.
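To make that idea concrete, here is a minimal, hypothetical sketch of selective injection: retrieved snippets of enterprise knowledge are ranked by relevance and greedily packed into a fixed token budget. The function names, the relevance scores, and the 4-characters-per-token heuristic are all illustrative assumptions, not a production recipe.

```python
# Illustrative sketch: selectively packing enterprise context into a
# fixed token budget. All names, scores, and heuristics are hypothetical.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def pack_context(snippets, budget_tokens):
    """Greedily select the highest-relevance snippets that fit the budget.

    `snippets` is a list of (relevance_score, text) pairs, e.g. produced
    by a retrieval step over schemas, policies, or code.
    """
    selected, used = [], 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget_tokens:
            selected.append(text)
            used += cost
    return "\n---\n".join(selected)

snippets = [
    (0.9, "orders table schema: order_id, customer_id, total, created_at"),
    (0.4, "naming convention: metrics use snake_case prefixed by domain"),
    (0.8, "KPI definition: net_revenue = gross_revenue - refunds"),
]
context = pack_context(snippets, budget_tokens=30)
```

Real systems replace the heuristic with an actual tokenizer and the static scores with embedding-based retrieval, but the shape of the problem is the same: a budget, a ranking, and a packing decision.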
This orchestration of context is not just document retrieval; it’s evolving into a new systems layer. Instead of simply fetching files, these systems now organize and deliver the right information at the right step, sequencing knowledge, tracking intermediate decisions, and managing memory across interactions. In more advanced setups, supporting models handle planning, summarization, or memory compression behind the scenes, helping the primary model stay focused and efficient. These architectural shifts are making it possible for AI systems to reason more effectively over time and across tasks.
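The orchestration pattern described above can be sketched in a few lines: a toy context layer that records intermediate decisions and folds older ones into a compressed summary, so the window handed to the primary model stays focused. The class and its string-concatenation “summarizer” are stand-ins for a real planning and summarization model, not an actual framework API.

```python
# Hypothetical sketch of a context-orchestration layer: record
# intermediate decisions, compress older memory, and assemble the
# prompt the primary model sees at each step.

class ContextOrchestrator:
    def __init__(self, memory_limit: int = 3):
        self.memory = []       # recent intermediate decisions, verbatim
        self.summary = ""      # compressed record of older steps
        self.memory_limit = memory_limit

    def record(self, step_note: str) -> None:
        self.memory.append(step_note)
        if len(self.memory) > self.memory_limit:
            # Stand-in for a summarization model: fold the oldest note
            # into a running summary instead of dropping it outright.
            oldest = self.memory.pop(0)
            self.summary += ("; " if self.summary else "") + oldest

    def build_prompt(self, task: str, retrieved: list) -> str:
        parts = [f"Task: {task}"]
        if self.summary:
            parts.append(f"Earlier steps (compressed): {self.summary}")
        parts += [f"Recent step: {m}" for m in self.memory]
        parts += [f"Context: {r}" for r in retrieved]
        return "\n".join(parts)
```

The point of the sketch is the division of labor: the orchestrator, not the primary model, decides what survives verbatim, what gets compressed, and what gets retrieved at each step.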
Without this context layer, even the best models stall on incomplete or siloed inputs. With it, they can reason fluidly across tasks, maintain continuity, and deliver compounding value with every interaction.
Case in point: This isn’t just theory. One standout example comes from McKinsey. Their internal GenAI tool, Lilli, is context engineering in action. The tool unifies over 40 knowledge repositories and 100,000+ documents into a single searchable graph. When a consultant poses a question, it retrieves the five to seven most relevant artifacts, generates an executive summary, and even points to in-house experts for follow-up. This retrieval-plus-synthesis loop has driven ~72% firm-wide adoption and saves teams ~30% of the time they once spent hunting through SharePoint, wikis, and email threads, proof that the decisive edge isn’t just a bigger model, but a meticulously engineered stream of proprietary context (Source).
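Lilli itself is proprietary, but the retrieval-plus-synthesis loop it illustrates can be sketched generically. In this toy version, all names and data are invented, relevance is a naive shared-word count, and the synthesis step is a stand-in for an LLM call that writes the executive summary and lists the experts attached to the retrieved artifacts.

```python
# Toy retrieval-plus-synthesis loop (illustrative only, not McKinsey's
# actual implementation): rank artifacts against a query, take the
# top-k, then synthesize a report that also surfaces in-house experts.

def score(query: str, doc_text: str) -> int:
    # Naive relevance: number of words shared between query and document.
    return len(set(query.lower().split()) & set(doc_text.lower().split()))

def retrieve(query, corpus, k=5):
    ranked = sorted(corpus, key=lambda d: score(query, d["text"]), reverse=True)
    return ranked[:k]

def synthesize(query, artifacts):
    # Stand-in for an LLM call that drafts the executive summary and
    # points the consultant to the experts behind each artifact.
    experts = sorted({a["expert"] for a in artifacts})
    bullets = "\n".join(f"- {a['title']}" for a in artifacts)
    return f"Summary for: {query}\n{bullets}\nExperts: {', '.join(experts)}"

corpus = [
    {"title": "Pricing study 2023", "expert": "A. Rao",
     "text": "retail pricing elasticity analysis"},
    {"title": "Churn playbook", "expert": "B. Chen",
     "text": "telecom churn reduction tactics"},
    {"title": "Pricing war games", "expert": "A. Rao",
     "text": "pricing strategy simulation for retail"},
]
report = synthesize("retail pricing strategy",
                    retrieve("retail pricing strategy", corpus, k=2))
```

A production version would swap the word-overlap scorer for a knowledge-graph or embedding search and the f-string for a model call, but the loop, retrieve a handful of artifacts, then synthesize across them, is the pattern the case study describes.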
What Does Context Actually Mean in the Enterprise?
By now, it’s clear that providing the right context is key to unlocking the full potential of AI and agentic systems inside organizations. But “context” isn’t just a document or a code snippet; it’s a multi-layered, fragmented, and evolving ecosystem. In real-world settings, it spans everything from database schemas to team ownership metadata, each layer representing a different slice of what an intelligent system needs to reason, act, and adapt effectively.
Based on my experience working across hundreds of data sources and collaborating with cross-functional product, engineering, and data teams, I’ve found that most enterprise context and information fall into nine broad categories. These aren’t just a checklist; they form a mental model: each category captures a dimension of the environment that AI agents must understand, depending on the use case, to operate safely, accurately, and effectively within your organization.