
Artificial intelligence is seen as a beacon of hope for efficiency, automation and better decisions. But why do so many AI projects fail despite state-of-the-art technology? In his guest article, Dr Tim Wiegels shows why it is not the algorithms that are the real problem, but inadequate data structures and a lack of clarity in processes. He makes it clear: without a stable basis, even the best AI only produces "smart mistakes" - and bad data quickly becomes an expensive risk.
This is not a provocative slogan, but a sober observation from practice. Many companies have high hopes for artificial intelligence (AI): automation, increased efficiency, better forecasts, faster decisions. But reality shows that AI projects do not automatically deliver added value - sometimes they even create new problems.
The reason rarely lies in the technology itself, but in the foundations: in data quality, in processes and in structural clarity.
Artificial intelligence recognises patterns, but it cannot judge whether data is "good", "fair" or "clean". If chaotic, unclear or contradictory data is fed in, an intelligent model will also produce erroneous results - only faster. Three examples from practice illustrate this:
A major AI-powered real estate platform had to shut down its business model because the underlying data failed to accurately reflect real market dynamics.
Commercial facial recognition systems showed significantly higher error rates for certain demographic groups — not because of the AI logic itself, but due to imbalanced training data.
In a project involving automated segmentation, the use of AI failed because key KPIs such as “conversion” or “closure” were defined differently across the organization.
Many companies start their AI initiatives with tools, platforms or new models. However, the real question is:
How stable is the data foundation?
A solid foundation for AI consists of three key elements:
Clean data:
Standardised formats, clearly defined values, no duplicates, and no room for interpretation (code sketches for all three elements follow after this list).
Binding definitions:
Everyone involved understands the same thing by a KPI — regardless of team or system.
Transparent processes:
It is clearly traceable where data originates, who maintains it, and how it is further processed.
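What these three elements can look like in practice is easiest to show in code. First, clean data: the following minimal Python sketch standardises formats, turns ambiguous values into an explicit "unknown" state and removes duplicates. It is an illustration, not a production pipeline; the table and its column names (customer_id, signup_date, country) are invented.

```python
import pandas as pd

# Hypothetical raw export - the column names are illustrative only.
raw = pd.DataFrame({
    "customer_id": ["A1", "a1 ", "B2", "C3"],
    "signup_date": ["2024-01-05", "05.01.2024", "2024-02-10", "n/a"],
    "country":     ["DE", "de", "FR", "FR"],
})

clean = raw.copy()

# Standardised formats: one canonical spelling per value.
clean["customer_id"] = clean["customer_id"].str.strip().str.upper()
clean["country"] = clean["country"].str.upper()

# Clearly defined values: dates that do not match the agreed format
# become an explicit NaT instead of lingering as free text.
clean["signup_date"] = pd.to_datetime(
    clean["signup_date"], format="%Y-%m-%d", errors="coerce"
)

# No duplicates: "A1" and "a1 " are now recognised as the same customer.
clean = clean.drop_duplicates(subset="customer_id", keep="first")

# Make the remaining problems visible instead of hiding them.
print(clean[clean["signup_date"].isna()])
```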
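Second, binding definitions can live in code as well as in documents. In this sketch (the function name and its inputs are hypothetical), "conversion" is defined exactly once; every team and every dashboard imports the same function instead of re-deriving the formula in its own spreadsheet.

```python
def conversion_rate(orders: int, visitors: int) -> float:
    """Conversion = completed orders / unique visitors.

    The one binding definition - change it here, and every
    report changes with it.
    """
    if visitors == 0:
        return 0.0
    return orders / visitors

# Marketing and sales now necessarily report the same number.
print(f"{conversion_rate(orders=42, visitors=1_000):.1%}")  # 4.2%
```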
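Third, transparent processes become tangible when provenance travels with the data. A minimal sketch assuming a simple in-house convention rather than any particular lineage tool; all values are made up for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetLineage:
    """Answers the three questions above: where does the data
    originate, who maintains it, and how is it processed further?"""
    source: str                                     # where it originates
    owner: str                                      # who maintains it
    steps: list[str] = field(default_factory=list)  # how it is processed
    last_reviewed: date | None = None

crm_contacts = DatasetLineage(
    source="CRM export, nightly batch",
    owner="data-team@example.com",
    steps=["deduplicated by customer_id",
           "countries normalised to ISO codes"],
    last_reviewed=date(2024, 6, 1),
)
```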
A stable data structure for AI does not require a budget in the millions; it requires discipline and prioritisation.
Start with the most important KPIs:
Define 3 to 5 key metrics and analyze them backwards:
How are they calculated? Where does the data come from? Who is responsible?
Document definitions in writing:
A KPI must not have multiple meanings. Documented definitions ensure consistency and comparability.
Define clear responsibilities:
Data quality is not a side task. Roles and responsibilities must be clearly assigned and defined.
Review reality regularly:
Does the data actually reflect operational reality? Or does the dashboard merely create an illusion of certainty? (A sketch of an automated check follows after these steps.)
Step-by-step improvement:
Don’t try to rebuild everything at once. But whatever you improve, keep it consistently clean and well-maintained.
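One way to make the "review reality" step above routine is an automated plausibility check: recompute a KPI directly from the raw source and compare it with the dashboard figure. A minimal sketch with placeholder values; in practice, both numbers would come from your own systems.

```python
def check_kpi_against_source(dashboard_value: float,
                             recomputed_value: float,
                             tolerance: float = 0.01) -> None:
    """Fail loudly if the dashboard drifts away from the raw data."""
    deviation = abs(dashboard_value - recomputed_value)
    if deviation > tolerance * max(abs(recomputed_value), 1e-9):
        raise ValueError(
            f"Dashboard shows {dashboard_value}, source data yields "
            f"{recomputed_value} - investigate before trusting the number."
        )

# Hypothetical usage: 4.20% on the dashboard vs. 4.21% recomputed - passes.
check_kpi_against_source(dashboard_value=0.0420, recomputed_value=0.0421)
```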
AI can speed up processes and open up new perspectives. However, it cannot assume responsibility; decisions, priorities and ethical considerations remain human tasks.
Data strategy and data quality are therefore not purely technical issues - they are management issues.
AI is not a shortcut but an amplifier - and it makes visible where structures are missing.