Transforming Business Through AI: Key Insights
- Raimund Laqua
The business world is changing fast as companies adopt AI technology. At a recent conference that I attended, experts shared valuable insights on making this transition successfully. Here's what stood out.

Finding the Balance
AI offers two main benefits for businesses: it can make your current work more efficient, and it can help you do things that weren't possible before. But there's a catch – as one speaker put it, "AI becomes an accelerant - whatever is weak will break." In other words, AI will make your strengths stronger but also expose your weaknesses faster.
This dynamic creates both opportunity and risk. Organizations with solid foundations in data management, security, and operational excellence will see AI amplify these strengths. Meanwhile, companies with existing weaknesses may find AI implementations expose these vulnerabilities.
The tension between innovation and exposure stood out as a consistent theme. Leaders face the challenge of encouraging creative AI applications while managing potential risks. As one presenter noted, "adopting AI is an opportunity to strengthen your foundations," suggesting that the implementation process itself can improve underlying systems and processes.
Getting Governance Right
Companies need clear rules for using AI safely. Mercedes-Benz showed how they've built AI risk management into their existing structures. Many experts suggested moving away from rigid checklists toward more flexible guidelines that can evolve with the technology.
What matters most? Trust. Customers don't just want AI – they want AI they can trust. This means being careful about where your data comes from, protecting privacy, and being open about how your AI systems work.
The establishment of ISO 42001 as an audit standard signals the maturing governance landscape. However, many speakers emphasized that truly effective governance requires moving "from compliance to confidence" – shifting focus from simply checking boxes to building genuinely trustworthy systems.
A key insight was that "you can do security without compliance, but you can't do compliance without security." This highlights how fundamental security practices must underpin any meaningful compliance effort. Well-designed guardrails, which are emerging as the new compliance measures, should be risk-based rather than prescriptive, allowing for innovation within appropriate boundaries.
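To make "risk-based rather than prescriptive" concrete, here is a minimal sketch of how a guardrail layer might dispatch heavier checks to higher-risk use cases. The risk tiers, check functions, and placeholder logic are all assumptions for illustration, not anything specified at the conference:

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., internal drafting assistance
    MEDIUM = "medium"  # e.g., customer-facing summaries
    HIGH = "high"      # e.g., automated decisions about people

# Hypothetical checks -- in practice these would call real
# moderation, PII-detection, and review-queue services.
def check_pii(output: str) -> bool:
    return "@" not in output  # placeholder for a real PII scanner

def check_human_review(output: str) -> bool:
    print(f"Queued for human review: {output[:40]}...")
    return True

# Risk-based rather than prescriptive: heavier tiers get more checks.
GUARDRAILS = {
    RiskTier.LOW: [],
    RiskTier.MEDIUM: [check_pii],
    RiskTier.HIGH: [check_pii, check_human_review],
}

def apply_guardrails(output: str, tier: RiskTier) -> bool:
    """Run every check registered for this risk tier."""
    return all(check(output) for check in GUARDRAILS[tier])

if __name__ == "__main__":
    print(apply_guardrails("Quarterly summary draft", RiskTier.MEDIUM))
```

The point of the pattern is that onboarding a new use case means assigning it a tier, not writing a new rulebook.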
Data provenance received particular attention, with speakers noting that "AI loves data and you will need to manage/govern your use of data." This becomes especially challenging when considering privacy regulations, as legal departments often restrict the use of existing customer data for AI applications. Speakers suggested more nuanced approaches are needed to balance innovation with appropriate data protection.
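One way to act on "manage/govern your use of data" is to attach an explicit provenance record to every dataset an AI system consumes, so the legal question "can we use this for training?" becomes checkable. A minimal sketch follows; the field names and use vocabulary are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetProvenance:
    """Minimal provenance record for data used by an AI system."""
    name: str
    source: str           # where the data came from
    collected_under: str  # the consent/legal basis it was gathered under
    allowed_uses: set[str] = field(default_factory=set)

    def permits(self, use: str) -> bool:
        return use in self.allowed_uses

# Example: customer data collected for support may not be cleared for
# model training -- the restriction legal teams often impose.
support_logs = DatasetProvenance(
    name="support_tickets_2024",
    source="CRM export",
    collected_under="customer support consent",
    allowed_uses={"analytics"},
)

print(support_logs.permits("model_training"))  # False -> escalate to legal
```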
Different Approaches Around the World
How companies use AI varies greatly by location. European businesses tend to focus heavily on compliance, with frameworks like the EU AI Act shaping implementation strategies. Regional differences significantly impact how organizations approach AI adoption and governance.
Some participants questioned whether the EU AI Act might be too restrictive, noting discussions about potentially toning down certain requirements – similar to adjustments made to GDPR after implementation. This reflects the ongoing challenge of balancing protection with innovation.
Compliance expertise varies by region as well. I observed that "compliance is a bigger deal in Europe and they are good at it," suggesting that European organizations may have advantages in navigating complex regulatory environments. This expertise could become increasingly valuable as AI regulations mature globally.
Workforce Changes
We can't ignore that some jobs will be replaced by automation. This creates a potential two-tier economy and raises important questions about training and developing people for new roles. Companies need to build AI literacy across all departments, from engineering to legal, HR, and marketing.
The conference highlighted that AI literacy isn't one-size-fits-all – training needs to be tailored to different functions. Engineers need technical understanding, while legal teams require compliance and risk perspectives. Marketing departments might focus on ethical use cases and customer perception.
A particularly interesting trend is taking shape around AI skills development. Many professionals are moving into AI governance roles, but fewer are pursuing AI engineering due to the longer lead time for developing technical expertise. This could create imbalances, with potentially too many governance specialists and too few engineers who can implement AI systems properly.
Beyond job replacement, AI promises to transform how knowledge workers engage with information. Rather than simply replacing analysts, AI can help them process "the mountain of existing data" – shifting focus from basic results to deeper insights. This suggests a future where AI augments human capabilities rather than simply substituting for them.
The "Shadow AI" Problem
Just as employees once started bringing their own devices (BYOD) to work, companies now face "shadow AI": people using AI tools without official approval. This growing challenge is more pervasive than BYOD ever was, as AI tools are easily accessible online and often leave fewer traces.
Implementing an AI acceptable-use policy is the most effective way to address this challenge. Such a policy clearly defines which AI tools are approved, how they may be used, and what data can be processed through them. Rather than simply banning unofficial tools, effective policies create reasonable pathways for employees to suggest and adopt new AI solutions through proper channels.
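As a sketch of what "clearly defines" can look like, the policy below is expressed as data that tooling (a proxy, a browser plugin, an audit script) could enforce consistently. The tool names, data classes, and request URL are invented for the example:

```python
# A hypothetical acceptable-use policy expressed as data, so tooling
# can enforce it consistently instead of relying on a PDF nobody reads.
AI_ACCEPTABLE_USE = {
    "approved_tools": {
        "internal-copilot": {"allowed_data": {"public", "internal"}},
        "vendor-chat":      {"allowed_data": {"public"}},
    },
    "prohibited_data": {"customer_pii", "source_code_secrets"},
    # Hypothetical intake channel for suggesting new tools:
    "request_new_tool": "https://intranet.example.com/ai-tool-request",
}

def may_process(tool: str, data_class: str) -> bool:
    """Check whether a tool is approved for a given data class."""
    entry = AI_ACCEPTABLE_USE["approved_tools"].get(tool)
    if entry is None or data_class in AI_ACCEPTABLE_USE["prohibited_data"]:
        return False
    return data_class in entry["allowed_data"]

print(may_process("vendor-chat", "internal"))       # False: not approved for internal data
print(may_process("internal-copilot", "internal"))  # True
```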
The policy should balance security concerns with practical needs – if official tools are too restrictive or cumbersome, employees will find workarounds. By acknowledging legitimate use cases and providing approved alternatives, companies can bring shadow AI into the light while maintaining appropriate oversight.
Regular training on the policy helps employees understand not just the rules but the reasoning behind them – particularly the security and privacy risks that shadow AI can introduce. When employees understand both the "what" and the "why," they're more likely to follow guidelines voluntarily.
The proliferation of shadow AI creates a fundamental governance challenge captured by the insight that "you can't protect what you can't see." Organizations first need visibility into AI usage before they can establish effective governance. This requires technical solutions to detect AI applications across the enterprise, combined with cultural approaches that encourage transparency.
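As a rough illustration of the visibility problem, the sketch below scans outbound proxy logs for traffic to well-known AI service endpoints. The log format is an assumption; adapt the column names to whatever your proxy actually emits:

```python
import csv

# Watchlist of AI-service domains to look for in egress logs.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(log_path: str) -> dict[str, set[str]]:
    """Map each AI domain seen in the proxy log to the users who hit it.

    Assumes a CSV log with 'user' and 'destination_host' columns.
    """
    sightings: dict[str, set[str]] = {}
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"]
            if host in KNOWN_AI_DOMAINS:
                sightings.setdefault(host, set()).add(row["user"])
    return sightings

# Usage (with a log file in the assumed format):
# for domain, users in find_shadow_ai("proxy.csv").items():
#     print(f"{domain}: {len(users)} users")
```

Detection like this is only a starting point: it surfaces the conversation, it does not replace the policy work above.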
Bringing Teams Together
One clear message from the conference: AI governance and engineering must work hand-in-hand. No single person or team has all the answers for creating responsible AI systems. This calls for collaboration across departments and sometimes specialized roles like AI Compliance Engineering.
A key challenge is that traditional organizational structures often separate these functions. In practice, it appears that AI governance cannot be effectively separated from AI engineering, yet many companies attempt to do just that. Successful organizations are creating new collaborative structures that bridge these domains.
The automotive industry provides useful parallels. As one presenter noted, "automotive has 180 regulations, now AI is being introduced from an IT perspective." This highlights how AI governance is emerging from IT but needs to learn from industries with long histories of safety-critical regulation.
However, important differences exist. One speaker emphasized that "IT works differently than the automotive industry," suggesting that governance approaches need adaptation rather than simple transplantation between sectors. The growing consensus suggests that use case-based approaches to AI risk management may be more effective than broad categorical rules.
Defining clear interfaces between governance and engineering emerged as a potential solution, with one suggestion to "define KPIs for AI that should be part of governance." This metrics-based approach to governance integration could help standardize how AI systems are measured and evaluated within governance frameworks.
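One way to read the KPI suggestion is to give every governance metric an explicit, machine-checkable target so that all AI systems are evaluated the same way. The metrics and thresholds below are illustrative, not ones proposed at the conference:

```python
from dataclasses import dataclass

@dataclass
class GovernanceKPI:
    """A governance metric with an explicit target, so every AI system
    is evaluated against the same yardstick."""
    name: str
    target: float
    higher_is_better: bool = True

    def met(self, measured: float) -> bool:
        if self.higher_is_better:
            return measured >= self.target
        return measured <= self.target

# Illustrative KPIs -- real ones would come out of the governance process.
KPIS = [
    GovernanceKPI("human_review_coverage", target=0.95),
    GovernanceKPI("incident_rate_per_1k_requests", target=0.5,
                  higher_is_better=False),
]

measurements = {
    "human_review_coverage": 0.97,
    "incident_rate_per_1k_requests": 0.8,
}
for kpi in KPIS:
    status = "OK" if kpi.met(measurements[kpi.name]) else "BREACH"
    print(kpi.name, status)
```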
Moving Forward
As your company builds AI capabilities, you'll need both effective safeguards and room for innovation. This is a chance to strengthen your organization's foundation through better data management and security practices.
The most successful companies will develop approaches tailored to specific uses rather than applying generic rules everywhere. And as AI systems become more independent, finding the right balance between automation and human oversight will be crucial.
The rise of autonomous AI agents introduces new challenges. The more autonomy an agent has, the more it may operate with limited human oversight and act in unexpected ways. These concerns highlight the need for governance approaches that can handle increasingly capable AI systems.
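One common pattern for balancing automation with oversight is to gate an agent's higher-impact actions behind human approval while letting low-impact actions run autonomously. The action names, impact scores, and threshold below are assumptions for illustration:

```python
# Gate higher-impact agent actions behind human approval; low-impact
# actions run autonomously. Impact scores here are illustrative.
ACTION_IMPACT = {
    "draft_email": 1,   # low impact: run autonomously
    "send_email": 3,
    "issue_refund": 5,  # high impact: require a human
}
APPROVAL_THRESHOLD = 3

def execute(action: str, approver=None) -> str:
    # Unknown actions default to requiring approval.
    impact = ACTION_IMPACT.get(action, APPROVAL_THRESHOLD)
    if impact >= APPROVAL_THRESHOLD:
        if approver is None or not approver(action):
            return f"{action}: blocked pending human approval"
    return f"{action}: executed"

# Usage: an approver callback stands in for a real review queue.
print(execute("draft_email"))                             # executed
print(execute("issue_refund"))                            # blocked
print(execute("issue_refund", approver=lambda a: True))   # executed after approval
```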
The conference acknowledged that "an evergreen process has not been developed yet" for AI governance, suggesting that organizations must remain adaptable as best practices continue to evolve. This dynamic environment creates space for innovation in governance itself – developing new methods and controls that can effectively manage AI risks while enabling beneficial applications.
In this changing landscape, the winners will be those who can blend good governance with practical engineering while staying focused on what matters most: creating value for customers and the business. By treating AI governance as an enabler rather than just a constraint, organizations can build the confidence needed for successful adoption while managing the inherent risks of these powerful technologies.