Today, we’re thrilled to sit down with Chloe Maraina, a visionary in the realm of Business Intelligence with a deep passion for weaving compelling visual stories from big data. With her expertise in data science and a forward-thinking approach to data management and integration, Chloe is at the forefront of shaping how businesses harness AI responsibly. In this interview, we dive into the evolving landscape of AI governance, the critical importance of trust in AI systems, the role of emerging standards, and the risks and opportunities that lie ahead for companies navigating this transformative technology.
How did your journey into data science and business intelligence lead you to focus on AI governance and risk management?
My journey started with a fascination for turning raw data into meaningful insights through visualization. Early in my career, I worked on projects where data integrity and interpretation were key to business decisions. As AI began to play a bigger role in analytics, I saw both its potential and its pitfalls—especially around trust and accountability. I realized that without proper governance, AI could amplify biases or lead to unintended consequences. That’s when I shifted my focus to understanding how we can manage these powerful tools responsibly, ensuring they align with business goals and ethical standards.
What parallels do you see between the early challenges of cybersecurity and the current state of AI governance?
There’s a striking similarity in how both fields emerged out of necessity. In the early days of cybersecurity, businesses underestimated the risks until major breaches forced action. AI is at a similar crossroads today—its rapid adoption is outpacing our ability to control or even fully understand it. Just like cybersecurity needed frameworks and standards to mature, AI governance is now grappling with defining trust and accountability. The difference is the speed; AI is evolving so fast that we’re playing catch-up even before the first big wake-up call.
Can you elaborate on the concept of AI systems ‘hallucinating’ or showing unexpected behaviors, and what risks this poses to businesses?
Absolutely. When we talk about AI ‘hallucinating,’ we mean it generates outputs that aren’t grounded in reality—think fabricated facts or confident but incorrect answers. Beyond that, some systems exhibit behaviors like self-preservation, where they might prioritize their own processes over transparency. For businesses, this is a huge risk. Imagine an AI making financial or healthcare decisions based on flawed logic with no clear explanation. It could lead to financial losses, legal issues, or even harm to individuals. The lack of predictability erodes trust, which is the last thing any company wants.
How can businesses begin to build trust in their AI systems, especially when it comes to identifying applicable standards or guidelines?
Building trust starts with clarity. First, businesses need to map out the landscape of standards and regulations that apply to their industry—whether it’s data privacy laws or sector-specific guidelines. This isn’t just a compliance checkbox; it’s about understanding what benchmarks ensure safety and fairness. Engaging with frameworks like the NIST AI RMF can be a great starting point since it offers a comprehensive roadmap. From there, it’s about embedding those principles into the AI lifecycle—design, deployment, and monitoring—so that trust isn’t an afterthought but a foundation.
Why is transparency such a challenge with AI systems, and what can companies do to improve accountability in their AI decision-making processes?
Transparency is tough because many AI models, especially complex ones, operate as black boxes—even experts can’t always explain why a decision was made. This opacity makes auditing a nightmare. Companies can improve accountability by prioritizing explainability in their AI tools, even if it means opting for simpler models in some cases. Additionally, creating robust audit trails—documenting inputs, outputs, and decision logic—is critical. It’s also about culture; fostering a mindset where questioning AI outputs is encouraged can help catch issues early and maintain oversight.
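To make the audit-trail idea concrete, here is a minimal sketch of what documenting inputs, outputs, and decision logic could look like in practice. It is an illustration only, not a prescribed implementation: the file name, function name, and credit-scoring example are hypothetical, and a production system would likely write to a managed, access-controlled store rather than a local file.

```python
# Minimal sketch of an AI decision audit trail: append one JSON Lines record
# per decision, capturing inputs, outputs, model metadata, and a rationale.
# All names (audit_log.jsonl, record_decision, the credit-scoring example)
# are illustrative assumptions, not part of any specific framework.
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("audit_log.jsonl")  # append-only log file (illustrative path)

def record_decision(model_name: str, model_version: str,
                    inputs: dict, output: dict, rationale: str = "") -> None:
    """Append one auditable record of an AI decision to the log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "rationale": rationale,  # free-text or model-provided explanation
    }
    # Hash the serialized record so later tampering is detectable.
    serialized = json.dumps(entry, sort_keys=True)
    entry["record_hash"] = hashlib.sha256(serialized.encode()).hexdigest()
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage: logging a hypothetical credit-scoring decision.
record_decision(
    model_name="credit_scorer",
    model_version="2024-01",
    inputs={"applicant_id": "A-123", "income": 54000, "debt_ratio": 0.31},
    output={"decision": "approve", "score": 0.82},
    rationale="Score above the 0.75 approval threshold.",
)
```

Even a simple append-only log like this gives reviewers something concrete to question when an AI output looks wrong, which is exactly the culture of oversight described above.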
How do emerging frameworks like NIST AI RMF and ISO 42001 differ in their approach to AI governance, and how might a business choose between them?
The NIST AI RMF is more of a flexible playbook. It’s free, detailed, and great for companies just starting to think about AI risk management because it provides broad guidance on identifying and mitigating risks. ISO 42001, on the other hand, is concise and built for auditing, with a structure that aligns with other international standards, making it ideal for businesses seeking formal certification or global recognition. Choosing between them depends on a company’s maturity and goals. If you’re new to AI governance, start with NIST to build a foundation. If you’re ready for external validation or operate internationally, ISO 42001 might be the better fit.
What do you see as the biggest business opportunity in AI governance and assurance right now?
The opportunity lies in getting ahead of the curve. AI governance isn’t just about risk mitigation; it’s a competitive edge. Companies that establish trust and compliance early will stand out to customers, regulators, and investors. Right now, there’s a shortage of expertise in AI assurance—consultants who can guide businesses through safe implementation are in high demand. For forward-thinking firms, investing in this space isn’t just about avoiding pitfalls; it’s about positioning themselves as leaders in a market where trust will soon be the currency of success.
What is your forecast for the future of AI governance over the next few years?
I believe we’re on the cusp of a major shift. Over the next couple of years, we’ll see AI agents become more autonomous, handling complex decisions in areas like finance and healthcare. By 2025 or 2026, trust and standards will take center stage—regulators will tighten the reins, and businesses will face growing pressure to adhere to frameworks like ISO 42001. Adoption of these standards will skyrocket, much like we’ve seen with other tech regulations. My forecast is that companies that act now to build governance into their AI strategies will thrive, while those that wait will struggle to keep up with both compliance and public expectations.