Despite rapid investment and widespread experimentation with artificial intelligence (AI), a significant trust gap continues to undermine the technology’s impact across the Middle East and North Africa (MENA).
Research from Alteryx reveals that only 28% of professionals trust AI to support decision-making, and just 27% are confident using it for forecasting or planning, two of the most critical business functions.
According to Sabya Sen, vice-president and head of IMEA and APAC at Alteryx, the issue is not resistance to AI itself, but a deeper concern around reliability and transparency.
“The confidence gap is being driven by a lack of visibility, consistency and business context,” says Sen. “Many professionals are comfortable using AI for routine support, but confidence falls when the output affects judgement, forecasting or long-term planning because the consequences are far greater.”
This challenge is particularly acute in the United Arab Emirates (UAE), where 94% of data leaders report they lack complete visibility into how AI systems reach decisions. Across the broader MENA region, fragmented data environments, uneven governance frameworks and strict compliance requirements further complicate adoption.
“Leaders are unlikely to rely on systems they cannot properly trace, interrogate or align with the realities of how the business operates,” Sen adds.
Building trust through transparency and governance
For organisations looking to close this trust gap, the solution lies not in deploying more advanced AI models, but in strengthening the foundations that underpin them. Transparency, testability and alignment with business logic are key.
“Organisations can build confidence in AI by making outputs transparent, testable and grounded in business logic,” Sen explains. “The key is to start small. Focus on defined use cases like forecasting or demand planning, where results can be measured against real outcomes.”
Rather than rushing AI into high-stakes environments, companies should prioritise incremental adoption, supported by clear data lineage, consistent metric definitions and human oversight. Guardrails are essential to ensure that outputs can be validated and challenged when necessary.
Alteryx’s research underscores the importance of data quality: 49% of leaders identify high-quality, well-governed data as the most important factor for AI success, while 28% are prioritising stronger governance frameworks.
“Ultimately, confidence grows when AI is transparent, repeatable and aligned to how the business already makes decisions,” says Sen.
While AI adoption is widespread globally, reaching an estimated 84% of organisations according to McKinsey & Company, only 31% have successfully scaled use cases and just 11% are realising meaningful value. In MENA, the gap between ambition and execution is particularly pronounced.
Sen points to a combination of structural and organisational barriers. “The barriers to scaling AI in MENA are interconnected, which is why progress often stalls despite strong ambition,” she says.
Talent shortages remain a critical constraint, especially in fast-growing markets such as the UAE and Saudi Arabia, where demand for AI and data expertise continues to outpace supply. At the same time, many organisations are still operating on fragmented data architectures, legacy systems, and inconsistent governance models.
There is also a strategic misalignment in many AI initiatives. “There’s a tendency to launch AI initiatives without clear alignment to business priorities, leading to pilot fatigue and limited enterprise impact,” says Sen.
The risks of AI democratisation
As AI tools become more accessible, organisations are increasingly moving away from centralised data teams towards broader deployment across business units. While this democratisation can accelerate innovation, it also introduces new risks.
Mitigating those risks, Sen argues, starts with the data itself. “AI-ready data is data you can trust,” she says. “It’s accurate, timely, well-governed and tied to clear ownership. It’s consistently defined across the business, traceable back to source, and accessible with the right controls in place.”
Without this foundation, decentralised AI adoption can exacerbate existing challenges. Common governance pitfalls include inconsistent metric definitions, unclear data ownership and limited visibility into data lineage. In some cases, organisations deploy AI models to production before data has been properly standardised, resulting in short-term gains but long-term inefficiencies.
“These shortcuts may accelerate early progress, but they ultimately erode trust and lead to costly rework,” Sen warns.
With 89% of organisations maintaining or increasing their AI budgets, the pressure is mounting on CIOs and technology leaders to deliver measurable returns. However, higher spending does not automatically translate into better outcomes.
“Companies waste AI investment when they start with technology instead of the business problem,” says Sen. “The better approach is to focus on a small set of high-impact use cases (improving forecast accuracy, reducing delays, controlling cost leakage, or strengthening risk visibility) and build from there.”
Platform selection is another critical factor. CIOs should prioritise solutions that integrate seamlessly with existing data environments, support robust governance, and enable collaboration between technical and business users.
“Higher budgets alone don’t drive return on investment,” Sen concludes. “Value comes from clear use cases, strong data foundations and platforms that deliver consistent, governed outcomes at scale.”