
November 14, 2025 – Reading time: 5 minutes
What does it take to move beyond playing with Generative AI (GenAI) and step into creating repeatable AI applications? In our first article on GenAI Integration, we tackled the different levels of using AI for your business, delving into each level and its role in this structured progression. We also provided a step-by-step approach for how organizations can explore, refine, automate, and optimize their use of AI, and align their efforts with their innovation strategies and operational goals.

Figure 1: The four levels of Generative AI Integration
This is the second article in our four-part series on Generative AI Integration, wherein we discuss how your organization can go from Level 1 to Level 2. If you want to know more about all levels of GenAI Integration, we recommend reading our first article in this series.
The Value of Reaching Level 2 in GenAI Integration
To give a brief recap of what Levels 1 and 2 are in GenAI Integration:
- Level 1 focuses on experimentation with standalone AI assistants, where individuals use them for simple tasks. However, results at this stage can be unpredictable since user prompts may be unstructured and a consistent workflow has not yet been established.
- At Level 2, by contrast, use cases have been defined and are primed for refinement. Repeatable tasks are automated (or being automated) because key workflow elements, such as well-thought-out prompts and engineered processes, have been established.
The benefit of moving beyond simply exploring generative AI tools is that, at Level 2, you lay the groundwork for scaling. Here, the foundations (e.g., processes, governance, user training, defined tasks) are set up so that you can move into fuller automation, embedded workflows, and enterprise-scale integration.
To reach Level 2 in GenAI Integration, follow five key steps: create AI use case criteria, select your use cases, define them, implement them through training, and evaluate the success of your implementation with specific metrics.

Figure 2: The steps to reaching Level 2 from Level 1 of Generative AI Integration
AI Use Case Criteria
The first step is to create the criteria for evaluating your potential use cases. This will determine whether an AI use case works for your team or company goals. If you’re unsure about what to consider, you can refer to the following points when developing your criteria:
- Expected benefits vs. costs – Are the benefits or ROI measurable? Which KPIs must be met?
- Alignment with business objectives – Does this grow revenue, reduce costs, or increase productivity?
- Process suitability – Is the process repetitive? Knowledge‑intensive? Will you work with structured or unstructured data?
- Technical feasibility – Can you use an existing GenAI model, or do you need to customize one?
- Risk/Compliance – Are there regulatory, ethical or privacy impediments?
Following these criteria will not only guide your selection but also help your team avoid future problems such as lawsuits, technical bottlenecks, and unexpected costs. A particular use case may not meet every criterion. What matters is that your team is aware of, and prepared for, the challenges that come with pursuing it.
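The criteria above can be turned into a simple weighted scorecard for comparing candidate use cases. The following is a minimal Python sketch; the weights, rating scale, and example ratings are invented for illustration, not a prescribed methodology:

```python
# Hypothetical weighted scorecard for rating an AI use case against the
# five criteria above. Weights are illustrative and should reflect your
# own priorities; ratings run from 1 (poor fit) to 5 (excellent fit).

CRITERIA_WEIGHTS = {
    "benefit_vs_cost": 0.30,
    "business_alignment": 0.25,
    "process_suitability": 0.20,
    "technical_feasibility": 0.15,
    "risk_compliance": 0.10,
}

def score_use_case(ratings: dict) -> float:
    """Combine per-criterion ratings (1-5) into a weighted score (1.0-5.0)."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Example: rating a hypothetical "report summarization" use case.
ratings = {
    "benefit_vs_cost": 4,
    "business_alignment": 5,
    "process_suitability": 4,
    "technical_feasibility": 3,
    "risk_compliance": 2,
}
print(round(score_use_case(ratings), 2))
```

A low score on a single criterion (here, risk/compliance) does not have to disqualify a use case, but it flags exactly where your team needs to prepare.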
AI Use Case Selection
The second step is to select the use cases that fit your created criteria. To do so, here are some insights that can help in your selection:
- Benefit vs. cost should always be considered; otherwise, there is no (internal) business case and no budget for implementation
- Start by evaluating the benefit of the use cases and pre-select from there
- Then dive into the complexity and cost of the preselected use cases
- Finally, focus on quick wins. You want your AI automation project to succeed, and early successes are a great foundation for implementing further use cases
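The selection steps above can be sketched as a small filter-and-sort routine: keep the high-benefit candidates, then rank them by complexity so that quick wins surface first. The candidate names and scores below are hypothetical examples:

```python
# Illustrative pre-selection: filter candidates by benefit first, then
# rank the shortlist by complexity/cost so quick wins come out on top.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    benefit: int      # 1 (low) .. 5 (high)
    complexity: int   # 1 (low) .. 5 (high), proxy for cost

def preselect(candidates, min_benefit=4):
    # Step 1: pre-select by benefit only.
    shortlist = [c for c in candidates if c.benefit >= min_benefit]
    # Step 2: within the shortlist, the low-complexity items are quick wins.
    return sorted(shortlist, key=lambda c: c.complexity)

pipeline = [
    Candidate("Meeting summarization", benefit=4, complexity=1),
    Candidate("Contract drafting", benefit=5, complexity=4),
    Candidate("Internal chatbot", benefit=3, complexity=3),
]
for c in preselect(pipeline):
    print(c.name)
```

In this toy example, the low-benefit chatbot drops out, and the low-complexity summarization use case ranks ahead of the more ambitious contract-drafting one.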
AI Use Case Definition
The third step is to define your selected use cases. When defining a use case, it should contain the following:
- Task description which is supported or automated by AI
- Required input to be provided
- Required context to be provided
- Defined prompts and how to use them, possibly in a sequence
- Expected results (possibly illustrated with examples)
- Potential AI mistakes to watch out for
- Defined tool and LLM model (results may vary from model to model)
- Defined quality criteria / KPIs for later evaluation
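One way to keep such definitions consistent across use cases is to capture the checklist above in a structured, machine-readable form. The sketch below uses a Python dataclass; the field names mirror the checklist, while the example values are placeholders, not recommendations:

```python
# A structured template for a use-case definition. Every field corresponds
# to one item of the definition checklist; the sample values are invented.

from dataclasses import dataclass

@dataclass
class UseCaseDefinition:
    task: str                 # task supported or automated by AI
    required_input: list      # input the user must provide
    required_context: list    # context the user must provide
    prompts: list             # defined prompts, possibly applied in sequence
    expected_result: str      # expected results, possibly with examples
    known_pitfalls: list      # AI mistakes to watch out for
    tool_and_model: str       # results may vary from model to model
    quality_kpis: list        # quality criteria for later evaluation

summarizer = UseCaseDefinition(
    task="Summarize weekly status reports",
    required_input=["raw status report text"],
    required_context=["project glossary"],
    prompts=["Summarize the report in five bullet points."],
    expected_result="Five-bullet summary covering progress and risks",
    known_pitfalls=["invented figures", "omitted risks"],
    tool_and_model="<your chosen tool / LLM>",
    quality_kpis=["reviewer acceptance rate", "rework count"],
)
print(summarizer.task)
```

Keeping definitions in one format makes them easy to review, version, and later hook into the KPI evaluation in the final step.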
Implementation of Defined AI Use Cases
The fourth step is to implement your defined use cases. This will involve building a training strategy that will improve your team’s AI competence and should cover each member’s role, their AI skills and knowledge, and the effectiveness of the strategy:
- Identify key roles needed at Level 2 such as the GenAI product owner, prompt engineer, data steward, domain subject matter expert, and more
- Assess your organization's current skills and what needs to be added or improved (e.g., prompt writing, AI-tool usage, monitoring)
- Develop a training roadmap. This includes AI awareness training for all team members, role-specific training, knowledge-sharing hubs, and hands-on workshops where GenAI tools are used in controlled scenarios
- Monitor whether the training improved your team's output quality, tool usage, and effort reduction
Quality Evaluation through Metrics or KPIs
The last step is to evaluate how successful your team has been in adopting GenAI for your identified use cases. This entails defining key metrics or KPIs such as:
- Usage: How many users are actively using your GenAI application? How many use-case deployments have been executed?
- Quality: How accurate were the outputs? How much rework was needed? Was the quality up to standard?
- Productivity/efficiency: How much time and money were saved?
- Business impact metrics: How much revenue was generated? Did using the GenAI application give more opportunities to work on other high-priority tasks or projects?
Beyond these metrics, your team should continuously monitor output quality by reviewing sample outputs, establishing human-in-the-loop checks, and tracking error/hallucination rates. That way, you ensure a smooth integration and the continuous development of your GenAI use cases.
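These metrics can be computed from straightforward review logs. A minimal sketch, assuming your team records how many outputs were reviewed, reworked, and flagged for hallucinations (the sample numbers are invented):

```python
# Minimal KPI summary from logged review data. The inputs are counts your
# human-in-the-loop reviewers would record; the example values are invented.

def kpi_summary(outputs_reviewed: int, reworked: int,
                hallucinations: int, minutes_saved_per_task: float) -> dict:
    return {
        # Quality: share of outputs that needed rework or contained errors.
        "rework_rate": reworked / outputs_reviewed,
        "hallucination_rate": hallucinations / outputs_reviewed,
        # Productivity: total time saved across all reviewed tasks.
        "hours_saved": outputs_reviewed * minutes_saved_per_task / 60,
    }

print(kpi_summary(outputs_reviewed=200, reworked=18,
                  hallucinations=5, minutes_saved_per_task=12))
```

Tracking these numbers per use case over time shows whether refinements to prompts, training, or tooling are actually moving quality and productivity in the right direction.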
Your Next Move
Is your organization taking the necessary action to move up the AI adoption ladder? Our software and data technology experts are looking forward to discussing this in a personal meeting. Meet up with our Head of Software and Data Technologies, Dr. Marc Großerüsckamp, for an in-depth discussion of your challenges and ideas.
Learn more


