Forum Discussion

snigdha21
Microsoft
Apr 27, 2026

Designing a Governed RTO Compliance Agent Using Copilot Studio and Databricks Genie

Enterprise AI adoption in HR scenarios comes with a unique challenge: how do you deliver actionable insights without compromising privacy, trust, or policy boundaries?

In this blog, I’ll share how we built an RTO (Return‑to‑Office) Compliance Agent using Microsoft Copilot Studio and Databricks Genie, focusing on governance‑first design, controlled data access, and real‑world enterprise constraints.

This solution was developed as part of an HRLT proof‑of‑value initiative and is designed to support people managers with clear, aggregated compliance insights, delivered conversationally inside Microsoft Teams.

The Problem We Were Solving

As hybrid work models mature, organizations need a reliable way to answer questions such as:

  • How compliant is my team with RTO expectations?
  • Are there trends across regions or time periods?

Traditional dashboards often fall short because they:

  • Require manual interpretation
  • Expose too much granular data
  • Are difficult to govern at scale

Our objective was to create an AI‑powered conversational interface that provides:

  • Only manager‑authorized, aggregated insights
  • Zero visibility into individual‑level behavior
  • Built‑in enforcement of HR and privacy policies

Architecture Overview

The solution integrates Copilot Studio with Databricks Genie, backed by curated data sources.

(Image: High-level Copilot Studio and Databricks Genie architecture)

Key Components

  • Copilot Studio – Conversational orchestration, policy enforcement, and Teams deployment
  • Databricks Genie – Governed natural-language interface to curated datasets
  • RokFusion Platform – Trusted HR and badge-swipe data

This layered approach ensures governance is applied before data is ever queried.

Controlled End-to-End Data Flow

The interaction pattern follows a strict, auditable flow:

  1. A manager asks a question in Copilot Studio
  2. Copilot forwards the request to Genie with instruction constraints
  3. Genie executes logic only on curated, approved tables
  4. Calculations are performed at team or manager level only
  5. Copilot formats and returns compliant responses (text, tables, or charts)

At no point are employee IDs, badge events, or individual metrics exposed.
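The flow above can be sketched in a few lines of Python. This is a minimal illustration, not the actual Copilot Studio or Genie API: the function names (`handle_manager_question`, `run_genie`), the table name, and the constraint text are all assumptions made for the sketch.

```python
# Illustrative sketch of the governed request flow (steps 1-5).
# All names and structures here are hypothetical, not real Copilot Studio
# or Databricks Genie APIs.

CURATED_TABLES = {"rto_compliance_team_weekly"}  # approved, pre-aggregated sources
INSTRUCTION_CONSTRAINTS = (
    "Answer only from curated tables. "
    "Aggregate at team or manager level. "
    "Never return employee IDs or badge events."
)

def run_genie(request: dict) -> dict:
    # Stand-in for the governed Genie query; in this sketch it returns
    # team-level aggregates only, never row-level data.
    return {"team_rate": 0.82}

def handle_manager_question(question: str, manager_id: str) -> dict:
    """Forward the question with instruction constraints, query curated
    data, and format a compliant team-level response."""
    genie_request = {
        "question": question,
        "constraints": INSTRUCTION_CONSTRAINTS,
        "scope": {"manager_id": manager_id},       # authorization boundary
        "allowed_tables": sorted(CURATED_TABLES),  # curated tables only
    }
    result = run_genie(genie_request)
    # Step 5: format the aggregated result; no individual metrics pass through.
    return {"type": "text", "body": f"Team compliance: {result['team_rate']:.0%}"}
```

The key property the sketch tries to show is that the authorization scope and the allowed-table list travel with every request, so governance is applied before any query runs.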

Using Genie as a Governance Layer, Not Just a Query Tool

One of the most critical decisions was to treat Databricks Genie as a policy‑enforcement layer, not merely a natural‑language SQL generator.

(Image: Genie instruction configuration enforcing compliance rules)

What We Configured in Genie

  • Synonyms and NL mappings for HR terminology
  • Strict filtering logic for employee categories
  • Population threshold enforcement (minimum count)
  • Explicit rejection of sensitive attributes such as gender, race, religion, or age
  • Prevention of formula or row‑level data exposure

This approach ensured that even malformed or risky prompts could not bypass policy constraints.
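The guardrails above can be expressed as a small pre-check. This is a simplified sketch of the idea, not Genie's actual enforcement mechanism; the threshold value and attribute names are assumptions for illustration.

```python
# Illustrative guardrail check mirroring the Genie instructions described
# above: reject sensitive attributes and suppress populations below a
# minimum count. MIN_POPULATION and the attribute set are assumed values.

MIN_POPULATION = 5
SENSITIVE_ATTRIBUTES = {"gender", "race", "religion", "age"}

def check_query(group_by: list[str], population: int) -> tuple[bool, str]:
    """Return (allowed, reason) for a requested aggregation."""
    blocked = SENSITIVE_ATTRIBUTES.intersection(a.lower() for a in group_by)
    if blocked:
        return False, f"Sensitive attribute(s) not permitted: {sorted(blocked)}"
    if population < MIN_POPULATION:
        return False, "Population below minimum reporting threshold"
    return True, "ok"
```

Because both checks run before any aggregation is returned, a risky prompt (for example, grouping by a protected attribute, or slicing down to a two-person team) fails closed rather than leaking data.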

Compliance Scenarios Supported

The agent supports multiple business‑aligned interpretations of RTO compliance:

  • Hybrid Compliance
    Hybrid employees counted only on eligible hybrid days
  • Onsite Compliance
    Onsite employees counted across standard working days
  • All Employees View
    Weighted aggregation combining hybrid and onsite logic

These scenarios are embedded into the agent’s instruction logic, not dynamically inferred at runtime—ensuring consistency and auditability.
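The "All Employees View" can be made concrete with a headcount-weighted blend of the two cohort rates. The function below is a sketch under that assumption; the parameter names are illustrative, not taken from the actual instruction logic.

```python
# Sketch of the weighted aggregation behind the "All Employees View":
# each cohort's compliance rate is weighted by its headcount.

def overall_compliance(hybrid_rate: float, hybrid_headcount: int,
                       onsite_rate: float, onsite_headcount: int) -> float:
    """Headcount-weighted blend of hybrid and onsite compliance rates."""
    total = hybrid_headcount + onsite_headcount
    if total == 0:
        return 0.0
    return (hybrid_rate * hybrid_headcount
            + onsite_rate * onsite_headcount) / total
```

For example, a team with 50 hybrid employees at 80% compliance and 50 onsite employees at 90% compliance blends to 85% overall. Fixing this formula in the instructions, rather than letting the model infer a blend at runtime, is what keeps the metric consistent and auditable.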

Why We Chose Conversational AI Over Dashboards

A key insight early on was that managers don’t want spreadsheets—they want answers.

Instead of navigating filters and charts, managers can ask:

  • “What was my team’s compliance last week?”
  • “Show me a comparison across regions.”

When required, the agent can also render simple visual outputs.

(Image: Sample Microsoft Teams output with compliance visualization)

Importantly, visuals follow the same governance rules as text responses.

Publishing and Validation in Microsoft Teams

Once configured, the agent was published directly from Copilot Studio to Microsoft Teams, making adoption frictionless.

(Image: Publishing Copilot Studio agent to Microsoft Teams)

End‑to‑end testing validated:

  • Authorization boundaries
  • Population rules
  • Safe handling of incomplete or ambiguous queries

Key Engineering Learnings

  1. Governance must be instruction‑driven
    Relying on frontend filtering alone is insufficient for HR data.
  2. Natural language needs strong guardrails
    Enterprise AI benefits from being constrained, not free‑form.
  3. Aggregation builds trust
    Managers are more comfortable with insights when they know individual visibility is impossible.
  4. Copilot Studio accelerates enterprise delivery
    Security, deployment, and integration stay within the Microsoft ecosystem.

Closing Thoughts

This RTO Compliance Agent demonstrates how Copilot Studio and Databricks Genie can be used to build governed, enterprise‑ready AI solutions—especially in sensitive domains like HR.

By embedding policy into architecture, instructions, and data access, we were able to deliver:

  • Useful insights
  • Strong privacy guarantees
  • High user trust

This pattern is extensible well beyond RTO—opening the door for future HR intelligence use cases built on the same foundation.