February 24, 2026

Posted by Aline Puhan-Schulz

Part 3: Generative AI as a structural project in B2B sales

Generative AI has become part of the everyday sales routine of many medium-sized B2B companies. Tools such as ChatGPT are used to formulate texts, condense information or prepare conversations. In practice, this often happens informally, without clear rules and without organizational integration into existing processes.

The central problem lies less in the technology itself than in the lack of structure. AI is used without clarifying where in the sales process it should operate, which tasks it takes over and where human responsibility is deliberately retained. In an increasingly volatile competitive environment, this is no longer a question of innovation, but one of operational performance.

“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”

Bill Gates, The Road Ahead

AI doesn't replace processes. It reinforces what already exists. Functioning processes become more efficient, while messy or unclear structures become more visible and more problematic. Anyone who wants to integrate generative AI meaningfully into everyday sales work should therefore start with an analysis of how sales work is actually done. After all, sales is not one uniform block of activity but a variety of different tasks: conducting customer meetings, compiling information, preparing offers, coordinating internally or maintaining CRM systems. Generative AI therefore delivers its value not at the level of “sales” as a whole, but at the level of these individual activities, especially where the work is language-based, repeatable or highly formalized.

Clarifying specific objectives is therefore crucial. The goal of “using AI” remains abstract and unhelpful. Goals that are directly linked to operational processes, such as reducing the time required for drafting offers, consistent documentation of discussions or more reliable follow-ups after customer appointments, are more meaningful. Such goals describe concrete changes in everyday working life and create a reliable basis for further decisions.

Against this background, it becomes clear that tasks differ in how well they are suited to generative AI. The researcher Ethan Mollick distinguishes four categories: activities that deliberately remain entirely with humans (“Human Only”), tasks in which AI serves as a cognitive sparring partner, delegated activities in the sense of a “human-in-the-loop” approach, and tasks that can be fully automated. This differentiation is less a theoretical model than a decision-making tool for allocating responsibility, control and the degree of automation.

Certain activities deliberately remain in human hands. These include, for example, conducting key customer meetings, prioritizing accounts or making strategic decisions. This distinction is not a statement about the technical capabilities of AI, but a normative decision about where companies want to locate responsibility, value judgments and final decisions. Especially in situations where consequences have to be borne or trade-offs have to be weighed, many organizations deliberately decide against delegating to systems.

There are also tasks for which AI produces preparatory work that humans then review and remain accountable for, such as meeting summaries, drafted offers or follow-up texts. Other activities can be fully automated in the future, provided they are clearly standardized and rule-based, such as filling predefined CRM fields. Between these poles, AI can be used as co-intelligence, for example when preparing complex conversations, playing through lines of argument or exploring variants. In these cases it functions not as an autopilot, but as an aid to thinking and structuring.

Only when tasks, responsibilities and control points are clearly defined can reliable use cases be developed. A functional use case requires clearly described process steps: when data is collected, what role AI plays in the process, how far human review goes, and how the results are used in downstream systems. If this process clarity is missing, the result is either additional friction or operational risk from results being passed on unchecked.

The selection of suitable tools is therefore secondary. In practice, this order is often reversed: applications are introduced before their place in the work process has been defined, and the benefits remain correspondingly limited. What is decisive is not a system's feature set, but how well it fits into clearly described processes and the existing system landscape.

Large language models such as ChatGPT or Google Gemini play a particularly important role here. Their strength lies in processing and generating language, which makes them suitable for tasks where content needs to be structured, condensed or varied; in these roles, they act as productive partners in writing and thinking. They are unsuitable wherever they are mistaken for a reliable source of knowledge or a decision-making authority: language models have no understanding of truth of their own, but generate plausible answers based on statistical patterns. Their use therefore requires clear embedding in processes and consistent review, and they are only suitable to a limited extent for tasks that require exact data or binding information.

Empowering employees is crucial when introducing AI. One aspect of this is how to deal with prompts, the work instructions given to language models. Good prompting is often misunderstood as specialized technical knowledge. In practice, it is less about sophisticated wording than about structured thinking. Anyone who uses AI in everyday sales work must learn to describe tasks precisely: What role should the AI take on? What goal is to be achieved? What information is available? What output format is expected? Vague prompts inevitably produce vague results, regardless of the model's performance. In sales in particular, this ability determines whether AI provides meaningful support or creates additional rework.
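A minimal sketch of what such a structured prompt could look like in practice (the scenario and details below are purely illustrative, not taken from a real case):

Role: You are an assistant supporting a B2B sales team.
Goal: Draft a follow-up email after yesterday's meeting with a prospective customer.
Information: Use the attached meeting notes; the customer's main concern was delivery times, and a pilot order was discussed.
Format: A factual email of no more than 150 words that ends with a concrete proposal for the next step.

Each of the four elements mentioned above appears explicitly, so the model does not have to guess the role, the goal, the available context or the expected output.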

In the end, the success of generative AI in B2B sales is determined not by the performance of the models, but by their organizational integration. Companies that treat AI as an isolated tool fall short of their expectations. Those who systematically reorganize work, distribute responsibility clearly and involve leadership, on the other hand, create the basis for sustainable productivity gains. Generative AI does not replace sales. It does, however, change the division of labor between people and systems and frees up capacity where salespeople provide the greatest value: in conversation with the customer.

As great as the potential of generative AI in sales is, it also raises new questions that go beyond efficiency and work design. The handling of sensitive customer information, the processing of personal data and the use of external AI services touch on key data protection and IT security requirements. Companies that want to use generative AI productively must therefore also clarify which data may be entered into AI systems, how results may be reused and which technical and organizational measures are required to comply with the GDPR and avoid security risks. The next part of the GenAI Playbook series is therefore dedicated to the question of how companies can use generative AI responsibly: legally sound, organizationally controlled and without jeopardizing the integrity of their data.
