Domo Enhances Data Governance to Power Enterprise AI Strategy
Chloe Maraina is a powerhouse in the world of data science, known for her unique ability to transform massive datasets into clear, actionable visual narratives. As a Business Intelligence expert, she has spent years at the intersection of data management and integration, advocating for systems that are both powerful and accessible. Recently, the conversation around data governance has shifted significantly with the introduction of new tools designed to handle the complexities of AI-driven workflows. Chloe joins us to discuss how these advancements are reshaping the role of system administrators and the very architecture of organizational data.

The following discussion explores the evolution of data governance from simple access control to complex orchestration. We delve into the technical nuances of user impersonation for troubleshooting, the integration of security protocols directly into the ETL process, and the logistics of deploying branded internal applications across major mobile platforms. Furthermore, we examine the critical need for standardized communication protocols between autonomous AI agents to ensure transparency and prevent operational conflicts.

AI workflows are transforming administrators from simple gatekeepers into orchestrators of consistent data experiences. How does this shift impact daily system maintenance, and what specific steps should teams take to ensure data remains predictable as these automated workflows proliferate?

The transition from a gatekeeper to what we might call a “shopkeeper” or orchestrator is a fundamental change in how we perceive system maintenance. Historically, administrators were focused on who could enter the “store,” but now they are responsible for the quality and consistency of everything on the shelves. In a world where AI-driven workflows are constantly pulling from various data products, maintenance is no longer just about fixing broken links; it is about ensuring that the data experience remains predictable for both human users and automated agents. Teams need to move toward a model of orchestration where they are actively managing the data lifecycle, infusing governance controls at every stage to set the table for responsible AI. This means implementing tools that allow for broader oversight of the entire data estate, ensuring that as AI takes on more business tasks, the underlying data remains a reliable foundation.

Debugging permission issues often involves blind spots that can stall production for entire departments. How does the ability to view a platform as a specific user accelerate the diagnosis of access problems, and what safeguards are necessary to maintain privacy while utilizing this level of administrative power?

User impersonation is a feature that was highly requested by the community precisely because it addresses the “blind debugging” that haunts so many IT departments. When an administrator can securely view and interact with the platform as another user, they can instantly validate how security policies are actually behaving in a production environment. This eliminates the guesswork and the back-and-forth emails usually required to troubleshoot complex access issues, which can save dozens of hours during a deployment cycle. To maintain privacy and security, this power must be restricted to authorized administrators and should be used specifically for diagnosing identified problems. It is foundational for governance at scale because it provides a direct way to ensure that the rules we set in theory are actually working in practice without compromising the integrity of the user’s personal environment.
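The safeguards described above can be sketched in a few lines. This is an illustrative model only, not the platform's actual API: the `Session` class, `impersonate` helper, and audit fields are all assumptions, but they capture the three requirements named here: restriction to administrators, a recorded reason tied to an identified problem, and guaranteed reversion to the admin's own identity.

```python
from contextlib import contextmanager
from datetime import datetime, timezone

# Hypothetical in-memory audit trail; a real platform would persist this.
AUDIT_LOG = []

class Session:
    """Minimal session model: an admin may temporarily view as another user."""
    def __init__(self, user, roles):
        self.user = user
        self.roles = roles
        self.effective_user = user

@contextmanager
def impersonate(session, target_user, reason):
    """Switch the session's effective user, logging who, whom, why, and when."""
    if "admin" not in session.roles:
        raise PermissionError("impersonation restricted to administrators")
    AUDIT_LOG.append({
        "admin": session.user,
        "target": target_user,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    session.effective_user = target_user
    try:
        yield session
    finally:
        session.effective_user = session.user  # always revert, even on error

# Usage: diagnose an access problem the admin cannot reproduce directly.
admin = Session("alice_admin", roles={"admin"})
with impersonate(admin, "bob_sales", reason="sales report appears empty"):
    viewing_as = admin.effective_user
```

The `finally` block is the important design choice: impersonation ends deterministically, so the elevated view can never leak past the diagnostic task.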

Enforcing row-level security traditionally requires moving data or managing complex manual configurations during the transformation phase. What are the practical benefits of applying personal data permissions directly within the ETL process, and how does this integration help establish consistent guardrails for AI agents?

By integrating personal data permissions directly into tools like Magic ETL, we are essentially building the security into the data’s DNA as it is being queried and transformed. This approach is revolutionary because it allows us to enforce row-level security without the high risk and latency involved in moving data to secondary environments. From a practical standpoint, it simplifies the implementation of governance across the entire enterprise, making it much easier to manage global policies. For AI agents, these integrated permissions act as essential guardrails, ensuring that even as an agent moves between different business functions, it can only access the data it is specifically authorized to see. This consistency is critical for operating safely and efficiently, as it prevents agents from making decisions based on restricted or inappropriate information.
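A minimal sketch of the idea, assuming nothing about Magic ETL itself: the `row_policy` and `transform` names are invented for illustration. The point is that the security predicate runs inside the pipeline, before any aggregation, so restricted rows never leave the transform and never reach a downstream consumer or AI agent.

```python
def row_policy(user):
    """Return a predicate implementing this user's row-level entitlements.
    The user-to-region mapping here is hypothetical sample data."""
    permitted = {"bob_sales": {"EMEA"}, "dana_ops": {"EMEA", "APAC"}}
    regions = permitted.get(user, set())
    return lambda row: row["region"] in regions

def transform(rows, user):
    """Apply row-level security as the first step of the transformation,
    so no data needs to be copied to a secondary environment to filter it."""
    allowed = filter(row_policy(user), rows)
    return [{"region": r["region"], "revenue": float(r["revenue"])} for r in allowed]

sales = [
    {"region": "EMEA", "revenue": 120},
    {"region": "APAC", "revenue": 90},
    {"region": "AMER", "revenue": 200},
]
result = transform(sales, "bob_sales")  # only EMEA rows survive
```

Because the same predicate is evaluated for every caller, a human analyst and an AI agent querying on that analyst's behalf see exactly the same restricted slice of the data.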

Organizations are now deploying branded internal applications directly through the Apple App Store and Google Play. How should administrators approach configuring unique navigation experiences for different user groups, and what are the primary logistical hurdles in maintaining these custom apps across diverse mobile operating systems?

Administrators must take a highly personalized approach to navigation configurations, or Nav Configs, to ensure that different user groups are not overwhelmed by irrelevant data. The goal is to create a tailored experience where a sales executive sees a completely different interface than a floor manager, even though they are using the same underlying branded app. This level of customization requires a deep understanding of the specific workflows of each group to ensure that the most relevant insights are always front and center. Logistically, the hurdle lies in maintaining these branded applications across both the Apple App Store and Google Play, which involves navigating different update cycles and OS requirements. By using native app distribution tools, organizations can streamline this process, allowing them to deliver a polished, professional experience that feels like a bespoke corporate tool rather than a generic mobile dashboard.
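One way to picture a per-group navigation configuration is a simple role-to-pages mapping with a safe fallback. Everything here is hypothetical (the role keys, page names, and `nav_for` resolver are not the product's actual Nav Config schema), but it illustrates the tailoring described above: the same app, different interfaces per group.

```python
# Hypothetical navigation configuration; roles and page names are invented.
NAV_CONFIGS = {
    "sales_executive": ["Pipeline", "Forecast", "Account Health"],
    "floor_manager": ["Shift Schedule", "Throughput", "Safety Alerts"],
}
DEFAULT_NAV = ["Home"]

def nav_for(role):
    """Resolve a group's navigation, falling back to a minimal default so an
    unmapped role never inherits another group's pages by accident."""
    return NAV_CONFIGS.get(role, DEFAULT_NAV)
```

The fallback matters as much as the mapping: a deny-by-default navigation keeps a misconfigured role from being overwhelmed by, or exposed to, irrelevant data.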

Beyond connecting data sources, agentic AI requires clear frameworks for how various automated bots interact with one another. How can a coordination layer help surface conflicts between interacting agents, and what role do open-source protocols play in standardizing these complex, auditable communications?

As we move toward a multi-agent environment, we need more than just logs; we need a genuine coordination layer that provides observable and auditable behavior. This layer serves as a critical checkpoint that can identify and surface conflicts between agents before those conflicts turn into actual agentic decisions that might harm the business. Implementing open-source standards like the Model Context Protocol (MCP) or the Agent2Agent (A2A) protocol is vital because they provide a universal language for these interactions. These protocols standardize how agents connect to data sources and how they communicate with each other, ensuring that the entire ecosystem is transparent and controllable. Without these standard frameworks, organizations risk creating a “black box” of automated decisions that are impossible to audit or correct when things go wrong.
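The coordination layer can be sketched as a toy checkpoint. This is not MCP or A2A themselves (those protocols define far richer message schemas); it is a hand-rolled illustration of the core behavior described above: agents declare structured, auditable intents against shared resources, and overlapping writes are surfaced as conflicts before either action becomes an agentic decision.

```python
from collections import defaultdict

class Coordinator:
    """Toy coordination layer: every declared intent is retained (auditable),
    and conflicting writes to the same resource are flagged (observable)."""
    def __init__(self):
        self.intents = defaultdict(list)   # resource -> declared intents
        self.conflicts = []

    def declare(self, agent, resource, action):
        """Record an intent and surface any write conflict with prior intents."""
        entry = {"agent": agent, "resource": resource, "action": action}
        for prior in self.intents[resource]:
            if "write" in (prior["action"], action) and prior["agent"] != agent:
                self.conflicts.append((prior, entry))
        self.intents[resource].append(entry)
        return entry

# Two autonomous agents both try to rewrite the same record.
coord = Coordinator()
coord.declare("pricing_agent", "sku-42", "write")
coord.declare("promo_agent", "sku-42", "write")   # surfaced as a conflict
```

Because every intent is kept, the log doubles as an audit trail: when something goes wrong, there is a record of which agents touched which resources, in what order, rather than a black box.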

What is your forecast for data governance in the age of AI?

I believe we are entering an era where data governance will shift from being a reactive, restrictive function to a proactive, foundational element of business strategy. In the near future, governance will not be something that is “added on” to a project, but rather a set of automated, intelligent guardrails that are woven into every data lifecycle, from ingestion to agentic decision-making. We will see a much heavier reliance on open-source protocols to manage inter-agent communication, making transparency a default state rather than a manual effort. Ultimately, organizations that embrace this shift toward orchestration and automated oversight will be the ones that can truly operationalize AI at scale. My forecast is that the most successful companies will be those that treat their data estate as a living ecosystem, where governance tools empower users to act with confidence rather than slowing them down with red tape.
