At the World Economic Forum in Davos, global executives told reporters they see artificial intelligence as central to their long‑term strategies but remain frustrated by how unreliable and immature the technology feels in day‑to‑day operations. Business leaders described a widening gap between ambitious boardroom plans and what current AI tools can actually deliver for customers and employees right now, creating a mix of urgency and impatience that is shaping investment decisions across industries.
Boardroom optimism: AI as an unavoidable future bet
Executives at Davos repeatedly framed artificial intelligence as a strategic necessity rather than an optional experiment, a consensus Reuters captured in the line "business leaders agree AI is the future," with any company ignoring it seen as risking falling behind. They are treating AI as a core capability on par with cloud computing or global supply chains, not a side project for innovation labs. In boardrooms, that conviction is translating into multi‑year commitments that tie AI to revenue growth, cost savings and competitive positioning, especially in sectors such as finance, manufacturing and consumer technology where digital transformation is already well advanced.
Leaders interviewed in Davos said they are embedding AI into investment plans, product roadmaps and long‑term competitiveness strategies, even as they acknowledge that current tools are far from perfect. Capital budgets now include line items for model development, data infrastructure and specialized hardware, while product teams are being asked to design new services that assume AI will be reliable enough to sit in front of customers. The stakes are high for shareholders and employees, because these decisions lock in priorities for years and signal that management is willing to absorb short‑term friction in order to secure what they view as an inevitable AI‑driven future.
On-the-ground frustration: ‘They just wish it worked right now’
Behind the confident rhetoric, executives also voiced sharp frustration that current AI systems are too unreliable or brittle to handle many core business processes without heavy human supervision. Leaders described tools that perform impressively in controlled demos but stumble when exposed to messy real‑world data, legacy IT systems and the nuanced judgment calls that define customer service or risk management. In conversations at Davos, many companies that tried to automate workflows such as claims processing, loan underwriting or complex technical support reported that staff had to double‑check AI outputs, eroding the efficiency gains that were supposed to justify the investment.
Several executives quoted in the Davos reporting said they “just wish it worked right now,” a phrase that captures their impatience with hallucinations, integration headaches and regulatory uncertainty that slow deployment. Hallucinated answers in customer‑facing chatbots, for example, can generate incorrect billing information or misleading medical guidance, forcing firms to add extra review layers that blunt the technology’s speed advantage. Integration with decades‑old enterprise software often requires costly custom engineering, while compliance teams warn that unclear rules around data use and accountability make it risky to let AI act autonomously. For investors and employees, that disconnect between promise and performance means AI projects can feel like sunk costs rather than immediate productivity engines.
What has changed since earlier AI hype cycles
Executives in Davos were careful to distinguish the current generative AI moment from earlier automation waves, stressing that this time they are committing real capital and C‑suite attention rather than running small pilots on the margins. In previous cycles, machine learning projects often lived inside research groups or innovation hubs, with limited impact on core operations. By contrast, the leaders who spoke in Davos described AI initiatives that report directly to chief executives, chief technology officers and boards, with explicit mandates to reshape how products are designed, marketed and supported. That structural shift signals to employees and investors that AI is no longer a speculative bet but a central pillar of corporate strategy.
According to the Davos reporting, the novelty now is not belief in AI’s potential but the tension between that conviction and the short‑term disappointment with tools that still fail in visible, costly ways. Generative systems that can draft contracts, summarize research or generate code have expanded the range of tasks that executives imagine automating, yet the same systems can produce subtle errors that are hard to detect at scale. That combination of breadth and brittleness is new, and it forces leaders to weigh the reputational and financial risks of early deployment against the fear of ceding ground to faster‑moving rivals. For workers, the shift raises questions about how quickly job roles will change and whether training programs can keep pace with the technology’s uneven progress.
Risk, regulation, and reputational concerns
Enthusiasm for AI in Davos was consistently tempered by worries about legal exposure, data privacy and compliance in markets where regulators are only beginning to set rules. Executives described a patchwork of emerging standards that differ across jurisdictions, making it difficult to design a single AI strategy that works in the United States, the European Union and Asia without constant legal review. Data protection laws, sector‑specific rules in areas like healthcare and finance, and new AI‑focused proposals all shape what companies can do with customer information and how transparently they must explain automated decisions. For global firms, that uncertainty adds cost and slows experimentation, because every new AI feature must be vetted not only for technical soundness but also for regulatory risk.
Leaders in the Davos reporting also warned that reputational damage from visible AI failures could slow adoption, even as competitors race ahead with more aggressive deployments. A chatbot that gives offensive answers, a recommendation engine that appears biased or a risk model that misclassifies customers can trigger public backlash, regulatory scrutiny and internal morale problems. Companies that have spent years building trusted brands are particularly wary of handing sensitive interactions to systems that might behave unpredictably under pressure. The result is a cautious posture: many firms limit AI to behind‑the‑scenes roles, even when they believe more prominent use could unlock significant value, because the cost of a single high‑profile mistake could outweigh months of incremental gains.
Short-term workarounds and long-term bets
Faced with this mix of strategic urgency and operational friction, companies interviewed in Davos are responding by ring‑fencing AI pilots to lower‑risk internal tasks while they wait for more robust models and clearer regulation. Many executives described using generative tools to draft internal reports, summarize meeting notes or assist software developers, where errors can be caught by colleagues before they reach customers. Others are experimenting with AI to optimize logistics, forecast demand or flag anomalies in sensor data, applications that improve efficiency but do not directly alter contractual commitments or medical advice. These workarounds allow firms to build familiarity with AI, gather performance data and refine governance processes without exposing themselves to the full brunt of legal and reputational risk.
At the same time, leaders are building data infrastructure, talent pipelines and governance frameworks so they can move quickly once AI systems are reliable enough to match their strategic ambitions. Investments in centralized data platforms, standardized taxonomies and secure access controls are intended to give future models high‑quality inputs, while recruitment of machine learning engineers, prompt specialists and AI‑literate product managers aims to close the skills gap. Governance committees that include legal, compliance, security and business leaders are being set up to define acceptable use, audit model behavior and respond to incidents. For shareholders, these long‑term bets signal that management is preparing for a world in which AI is deeply embedded in every function. The technology that will power that world, however, is still evolving and, for now, often falls short of the expectations set in Davos boardrooms.