AI and AI Agents are the new kids on the IT block, sparking interest and excitement, and firmly embedding themselves in the zeitgeist. They offer much promise and opportunity for businesses to further transform their data and automate manual processes.
The ability of any organisation to get the most out of AI will depend on how well it can integrate these components into its current enterprise and securely expose data and processes to the new capability. A strategic integration approach is key.
What are Agentic Systems?
AI Agents, or Agentic Systems, are part of the evolution of LLM-based solutions. With a Large Language Model (LLM), such as ChatGPT, Copilot or Grok, the user fires text (or images, or audio) at the model. The LLM can understand and process the data and then, via a massive model of similarly processed data gathered from the internet up to a point in time, or from other sources (see RAG), provide an answer via generative AI – which means it generates a response that makes sense to a human being.
The text inbound from the user is called a “prompt”.

Prompts
When I say: “What is the most populous city on Earth?”, the LLM can understand what I am asking, and then generate an answer based on its model.
The answer I get back will be based on what the LLM has interpreted my requirement to be, and its reply will be a generative string of text to meet the requirement. This is generated on the fly, so the reply may well differ slightly if I ask the same question again – although (hopefully) it will always tell me that Tokyo is the city.
If I want the LLM to be more specific in its response, then I add more to my prompt:
“What is the most populous city on Earth? Tell me in one sentence.”
This shapes the LLM’s answer to my more specific need.
For more complexity, I could instruct it thus:
“Ask me what the most populous city is and then check that my answer is correct. If my answer is correct, ask me what country the city is in, and then confirm if my answer is correct.”
As you can see, we are being more and more prescriptive about what we require from the LLM and building in a step-by-step scenario.
Now, consider a yet more complex scenario, perhaps we want the LLM to collect information for a customer onboarding form. We have a specific number of fields, with specific formatting requirements (such as date of birth, NI number, address, etc).
How do we tell the LLM we need to collect all this data from a user, and what the data structures must be in some cases?
Prompt Engineering (system prompts and user prompts)
We do it with prompts, still.
But far more complex and detailed prompts.
We are now specifying the prompt as paragraphs of information to the LLM, and we are using capitalisation and formatting to be very precise about the instructions.
These are sometimes called system prompts because they are instructing the LLM on what we want it to do over a series of steps, in this case interactions with the user. It has several things it must consider and perform, and specific questions we want it to ask.
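As a minimal sketch, a system prompt for the onboarding scenario might be paired with the user conversation like this. The field names, formats and message structure here are illustrative assumptions, following the chat-message format most LLM APIs use:

```python
# A minimal, illustrative system prompt for the onboarding scenario.
# Field names and formats are assumptions, not a real specification.
SYSTEM_PROMPT = """You are a customer onboarding assistant.
Collect the following fields from the user, one at a time:
- full_name: free text
- date_of_birth: format DD/MM/YYYY
- ni_number: two letters, six digits, one letter (e.g. AB123456C)
If a value does not match the required format, explain the problem
and ask again. When all fields are collected, reply with JSON only."""

def build_messages(chat_history, user_input):
    """Assemble the full payload sent to the LLM on every call:
    the system prompt, the prior conversation, and the new input."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + chat_history
        + [{"role": "user", "content": user_input}]
    )

messages = build_messages([], "Hi, I'd like to open an account.")
print(messages[0]["role"])  # the system prompt always goes first
```

Note that the system prompt travels with every single call, which leads directly to the point below about the LLM having no memory of its own.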
The LLM doesn’t “remember” anything, so it is told this every time we call it, with the entire chat history (i.e., all previous answers) also sent on every call. It processes this data from scratch every single time. As you can imagine, this becomes complex.
Note: You wouldn’t see this using an LLM UI, such as Copilot, because the UI caches and sends the chat history for you each time. The processing cost increases with each exchange because the chat history grows as you chat. This is why the LLM sometimes seems to “forget” what has occurred: the chat history buffer (context window) is only so large, and the UI truncates it to save processing overhead. Other techniques, such as summarising the chat history, are also used to maintain consistency.
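The truncation described in the note can be sketched in a few lines. This is a simplified illustration: the token budget is a made-up number, and word count stands in for real tokenisation, which is model-specific:

```python
# A sketch of client-side history truncation: the full chat is kept
# locally, but only the most recent turns that fit a token budget are
# sent. Word count approximates tokens; the budget is illustrative.
MAX_TOKENS = 50

def truncate_history(chat_history):
    """Keep the newest messages whose combined size fits the budget,
    dropping the oldest first - which is why the LLM 'forgets'."""
    kept, used = [], 0
    for message in reversed(chat_history):
        cost = len(message["content"].split())
        if used + cost > MAX_TOKENS:
            break
        kept.insert(0, message)
        used += cost
    return kept
```

Summarisation works similarly, except the oldest turns are condensed into a short synopsis rather than dropped outright.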
Now, extending the scenario further, after we have collected the information, we want the LLM to check the data against a credit check service, and a fraud check service, check the user against company customer acceptance policy documentation, and save the information into our CRM.
The size and complexity of the prompt is now a significant problem for maintenance and for the reliability of the LLM’s output. We are building a complex LLM prompt that combines data formatting, orchestration and external calls into a single massive instruction. This complexity introduces huge scope for interpretation, and can result in unreliable or inconsistent results as the LLM tries to “understand” the entirety of what it is being asked to do.
Inevitably, this becomes nigh-on impossible to achieve as the complexity increases exponentially.
The solution is to break down the prompts into focused tasks and marshal the tasks as individual calls to the LLM. These specialisations are referred to as agents.

Agents (specialisation of system prompts)
To simplify the LLM solution, and enable more consistency from a probabilistic actor such as the LLM, we split the single large prompt into several simplified prompts defining very targeted tasks, and then orchestrate calls between these agents using workflow as they complete their tasks. Effectively, we create a “team” of agents, each specialised in a single activity, that integrate like services.
As you can see from the above language, we are moving into more traditional IT integration territory. We are once again orchestrating calls via workflow and effectively treating agents as services encapsulating business logic.
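A simple sketch makes the point concrete. Here the single giant prompt is split into three focused agents, each a short system prompt, marshalled in sequence by ordinary workflow code. The agent names and the stub `call_llm` function are illustrative, standing in for real LLM calls:

```python
# A sketch of splitting one large prompt into specialised agents and
# orchestrating them as a workflow. call_llm is a stub; a real
# implementation would call an LLM API here.
def call_llm(system_prompt, user_input):
    return f"[{system_prompt}] processed: {user_input}"

# Each agent is just a focused system prompt with a single job.
AGENTS = {
    "collector": "Collect and validate the onboarding fields.",
    "credit_checker": "Summarise the credit check result.",
    "writer": "Format the final record for the CRM.",
}

def run_workflow(user_input):
    """Orchestrate the agents in sequence, each receiving the
    previous agent's output - a 'team' of single-purpose services."""
    result = user_input
    for name in ("collector", "credit_checker", "writer"):
        result = call_llm(AGENTS[name], result)
    return result
```

The orchestration logic lives in plain code, not in the prompt, which is exactly the shift back towards traditional integration that the paragraph above describes.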
In breaking up the LLM calls into “agents” we have simplified the solution, but there are still elements of it that an LLM cannot perform alone. Specifically, these are tasks that require either information the LLM doesn’t have (e.g. company policy information) or access to external systems, such as saving CRM data or credit checking.
This is where RAG and Tools come in, the final two building blocks of agentic systems.
RAG
RAG stands for Retrieval Augmented Generation. Essentially, RAG is a solution to an LLM limitation, such that the LLM only has knowledge of what it is trained on, which, in most cases (notwithstanding fine-tuning), will be publicly available information from the internet, up to a fixed point in the past. This means that the LLM has no knowledge of anything “recent” and no access to information that was not publicly available – such as internal company documents.
RAG is one solution that seeks to fix this by giving the LLM this information, as supplementary data added to a prompt (often from a vector database), or by giving it knowledge of sources it can use to get the information, such as a REST endpoint.
Note: the LLM doesn’t call the endpoint itself. It simply knows about the endpoint, instructs the coordinating framework that the call should be made, and the endpoint’s response is then given back to the LLM for interpretation.
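The retrieval half of RAG can be sketched as follows. This is a toy keyword search over an in-memory list, where a production system would typically query a vector database; the policy documents are invented for illustration:

```python
# A sketch of the retrieval step in RAG: find relevant internal
# documents and prepend them to the prompt as supplementary context.
# The documents and keyword matching are illustrative; real systems
# usually use embedding similarity against a vector database.
POLICY_DOCS = [
    "Customers must be 18 or over to open an account.",
    "Applications from sanctioned countries are declined.",
]

def retrieve(query):
    """Return documents sharing at least one word with the query."""
    words = set(query.lower().split())
    return [d for d in POLICY_DOCS if words & set(d.lower().split())]

def augment_prompt(question):
    """Build the final prompt: retrieved context plus the question,
    so the LLM can answer from data it was never trained on."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The LLM never sees the document store itself; it only sees whatever the framework retrieves and pastes into the prompt.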
Tools
Tools are like RAG, in that they are a resource the LLM is told about and can then instruct the client/framework to use on its behalf.
In this case the tool could be RAG (e.g. look-up customer data), or it could be making a change, such as a CRM save REST endpoint, file system access, SharePoint access, etc.
Again, like prompts, we do not want to overload one agent with many tools. Adding too many tools to a single LLM call results in erratic and unpredictable behaviours from the LLM as it tries to work out which tool to use when.
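The framework’s side of tool use can be sketched like this. The LLM returns a structured request naming a tool; the framework looks it up, runs it, and hands the result back. The tool names, request format and stub services here are all illustrative:

```python
# A sketch of the tool-calling loop: the LLM does not call tools
# itself; it emits a structured request and the framework dispatches
# it. Both tools below are stubs standing in for real services.
def save_to_crm(record):
    return {"status": "saved", "record": record}

def credit_check(name):
    return {"name": name, "score": 720}

TOOLS = {"save_to_crm": save_to_crm, "credit_check": credit_check}

def dispatch(tool_request):
    """Look up the tool the LLM asked for, run it, and return the
    result so it can be fed back to the LLM for interpretation."""
    tool = TOOLS.get(tool_request["name"])
    if tool is None:
        return {"error": f"unknown tool {tool_request['name']}"}
    return tool(tool_request["argument"])
```

Keeping each agent’s `TOOLS` registry small is the practical expression of the point above: the fewer tools in scope, the less the LLM has to guess about which one to use when.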
With this capability we can complete the complex scenario that requires our agentic system to collect, verify and store data that it is requesting from the user. But the agents must be integrated and orchestrated.
Integration
Going forward, these scenarios will become more complex and integrated, whereby agents, triggered semi- or fully-autonomously, will be able to run workflows that trigger other agents. Agents will be required to perform more complex interactions with services via REST calls, or with each other via web services or other (new and emerging, such as Model Context Protocol) means of agent-to-agent and agent-to-service communication.
For agents to reliably communicate, and for their activity to be orchestrated in a consistent and understandable manner, there needs to be an integration capability.
Tools are being developed, such as Microsoft Semantic Kernel and LangChain, that are intended to provide frameworks that make defining and building agents simpler, as well as providing mechanisms for co-ordinating activity between agents, such as the Semantic Kernel Process Framework and LangGraph.
However, alongside this new world, the fundamentals of enterprise integration – event-driven, scalable architectures, services and interfaces – will still be central in supporting these agents. The need for structured reliability and observability remains because, fundamentally, AI is just another piece of compute within the enterprise environment.

Agents offer a leap in capability from a business logic and data processing point of view, but they are still part of the enterprise landscape and need to be integrated into it like any other component.
If anything, it is more important now than ever for organisations to own and understand how to integrate their data and systems, if they are to take full advantage of what an AI enabled enterprise estate can offer.
This is more than the next step. It’s a new era of enterprise innovation. Contact us today to learn more about how our AI Agent Audit Service can innovate your world and unlock new possibilities through AI agents.
If you would like to find out about how we could support your company with your integration needs then get in touch!