Getting on the Agentic automation bus: Why your business isn't ready
- Richard Tabassi
- May 5
- 2 min read

The transformative potential of Generative AI (GenAI) and Large Language Models (LLMs) in business is undeniable, yet their effectiveness is consistently hindered by one critical barrier: the lack of contextual understanding embedded in data. While businesses have made significant strides in collecting and federating high-quality data, the real challenge lies in translating this data into actionable insights that align with the nuanced expectations of domain experts and operators. Without human translation of tribal knowledge and a robust framework to address ambiguity in business rules, even the most advanced GenAI systems fall short of delivering on their promises.
A common misconception is that data quality alone determines the success of AI systems. While essential, data quality is only part of the equation. The broader issue is the lack of context and specificity within existing data models. GenAI systems need more than just raw information; they require carefully designed schemas and mechanisms to surface edge cases and prompt for additional details when ambiguity arises. In many cases, business rules are incomplete, inconsistently applied, or overly reliant on implicit knowledge held by experts. To address this, organizations must prioritize systems and APIs that do more than manage CRUD operations: they should include wizard-like capabilities that document what the data and operations are intended to achieve, down to the behavior of individual table functions and known API limitations.
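To make this concrete, here is a minimal sketch of what "documenting what the data is intended to achieve" could look like in practice: a schema where each field carries its intent, known edge cases, and limitations in a machine-readable form an agent can read before acting. The field names, caveat text, and the FieldContext helper are illustrative assumptions, not a reference to any specific product or standard.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FieldContext:
    """Machine-readable documentation attached to a single data field."""
    intent: str                                        # what the field is meant to capture
    edge_cases: list[str] = field(default_factory=list)   # known ambiguities to surface
    limitations: list[str] = field(default_factory=list)  # data or API constraints

# Hypothetical example: an invoice schema whose fields carry the tribal
# knowledge an operator would otherwise hold in their head.
INVOICE_SCHEMA = {
    "payment_terms": FieldContext(
        intent="Net payment window agreed with the customer, in days.",
        edge_cases=["Legacy accounts store 0 to mean 'due on receipt'."],
        limitations=["Upstream API truncates values above 365."],
    ),
    "region_code": FieldContext(
        intent="Sales region used for tax and routing decisions.",
        edge_cases=["'EU-OTHER' was applied inconsistently by older imports."],
    ),
}

def context_for(field_name: str) -> Optional[FieldContext]:
    """Return the documented intent and caveats an agent should read
    before acting on this field, or None if the field is undocumented."""
    return INVOICE_SCHEMA.get(field_name)
```

The point is not the specific structure but that the context lives next to the data, where both an agent and a wizard-style authoring tool can reach it.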
The 80/20 principle is a useful framework for understanding the future of AI-enabled workflows. Businesses should aim for GenAI agents to handle the routine 80% of tasks, while reserving the remaining 20% for domain experts equipped with deep contextual knowledge. This division requires early investment in systems that capture and model business context more effectively. GenAI prompt engineering must evolve beyond guiding what the system should do to explicitly define what it should avoid doing. Ambiguity and subjective patterns, which often derail automated systems, must be clearly documented and incorporated into AI workflows to reduce errors and maintain operator trust.
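A rough sketch of that 80/20 split follows, assuming a policy prompt with an explicit "avoid" section and a simple routing rule that escalates anything ambiguous to a domain expert. The policy wording, the ESCALATE convention, and the confidence threshold are illustrative assumptions rather than a specific product's configuration.

```python
# The agent handles routine work under a policy that states what it must
# NOT do; anything it cannot resolve unambiguously goes to a human expert.

AGENT_POLICY = """
You reconcile routine purchase orders.
Do:
- Match orders to invoices when supplier, amount, and currency agree exactly.
Avoid:
- Never approve discounts that are not present in the source record.
- Never guess a missing currency; flag the record instead.
If any rule above cannot be applied unambiguously, respond with ESCALATE.
"""

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for routing to a human

def route(agent_reply: str, confidence: float) -> str:
    """Send clear-cut cases (the routine ~80%) through automation;
    everything else lands in the expert queue with the agent's partial work."""
    if agent_reply.strip() == "ESCALATE" or confidence < CONFIDENCE_THRESHOLD:
        return "expert_review_queue"
    return "automated_processing"
```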
Another critical consideration is how businesses design their APIs and data models to support these advanced workflows. APIs should not merely act as gateways to data but as intelligent intermediaries that guide AI systems in interpreting and applying the data. These APIs must include structured prompts for capturing edge cases, identifying known limitations, and documenting subjective decision-making processes. By embedding these capabilities into the design of APIs and data systems, businesses can reduce friction and ensure that their GenAI implementations are not only efficient but also adaptable to complex, real-world scenarios.
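One way to picture an API acting as an intelligent intermediary is a response envelope that pairs the data with the guidance an agent needs to interpret it. The endpoint name, fields, and caveat text below are hypothetical placeholders; the shape of the envelope is the idea being sketched.

```python
# Hypothetical endpoint: every response carries the record plus the context
# an agent should consult before acting on it.

def get_customer_record(customer_id: str) -> dict:
    """Return the record together with machine-readable guidance:
    known limitations, edge cases to check, and questions to ask first."""
    record = {"customer_id": customer_id, "credit_limit": 50000, "segment": "SMB"}
    return {
        "data": record,
        "context": {
            "limitations": [
                "credit_limit is refreshed nightly and may lag same-day changes.",
            ],
            "edge_cases": [
                "segment 'SMB' includes sole traders migrated from the old CRM.",
            ],
            "ask_before_acting": [
                "If credit_limit is 0, confirm whether the account is frozen or simply new.",
            ],
        },
    }
```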
The next frontier in AI-driven business transformation lies not in the collection of data but in the way it informs choices and actions. High-quality, federated data is a tremendous achievement, but it is not the endpoint. To fully realize the potential of GenAI, businesses must rethink how they design systems to accommodate both routine automation and the sophisticated needs of operators. By focusing on the human translation of tribal knowledge, reducing ambiguity, and building systems that anticipate and address edge cases, organizations can unlock a new era of efficiency and innovation. The time to act is now—before the gaps in context and design hinder the very progress AI promises to deliver.