Strategic Design

Streamlining Process Mining Workflows with AI Agents

Context

AI-driven automation is transforming business operations, but greater autonomy demands greater oversight. When autonomous agents execute tasks unchecked, organizations face risks like unpredictable behavior, compliance breaches, and a lack of accountability.

 

This project aimed to bridge the gap between automation and governance, creating a solution that makes AI agents transparent, auditable, and aligned with business goals.

Opportunity

Companies want the efficiency of AI agents but fear the risks of “black box” automation. Without visibility and control, even small errors can lead to compliance breaches or financial loss.

 

The challenge was simple to state but hard to solve: How do we give users the tools to trust automation?

 

That meant creating an experience where they could see what agents do, set boundaries, and measure impact, all without adding complexity to their workflow.

Goal

The goal was to design a solution that:

  • Visualizes agent behavior in the context of business processes.
  • Provides governance tools to define rules and detect deviations.
  • Shows clear ROI through performance metrics and comparisons.

 

This was about creating a framework for trust that fits seamlessly into enterprise workflows.

Discovery

The team’s first step was to uncover what trust in automation looks like for real users. Through stakeholder interviews, we learned that Business Process Managers and Business Process Analysts worry about accountability, while Process Owners want proof of efficiency.

 

We also tested mid-fidelity wireframes early, running usability sessions to validate the flows and user expectations. This helped uncover what users valued most:

 

  • Clear visual cues for agent actions.
  • Side-by-side comparisons of manual vs automated performance.
  • Compliance insights that feel actionable, not technical.

Defining

Designing the right screens starts with understanding the bigger picture. At this stage, we focused on mapping how users would interact with the solution from start to finish, creating a flow that feels natural and supports decision-making at every point.

 

The experience needed to:

  • Highlight inefficiencies in current processes.
  • Recommend AI agents with clear value propositions.
  • Allow rule-based configuration for compliance.
  • Provide dashboards for performance and auditability.

 

These elements formed the backbone of the design, guided by principles of transparency, accountability, and simplicity so users see what matters without feeling overwhelmed.
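To make the rule-based configuration idea more concrete, below is a minimal sketch of how a governance rule and an execution record could be modeled. It is an illustrative assumption written in TypeScript, not the actual data model behind the product, and every field name and threshold is hypothetical.

  // Hypothetical shape of a governance rule an analyst could configure.
  // All names, fields, and thresholds are illustrative assumptions.
  interface ExecutionRecord {
    caseId: string;
    executor: "human" | "system" | "agent";
    activity: string;
    durationMinutes: number;
    costEuro: number;
  }

  interface GovernanceRule {
    id: string;
    description: string;
    appliesTo: "human" | "system" | "agent";
    isCompliant: (step: ExecutionRecord) => boolean; // true = within boundaries
    onViolation: "alert" | "pause-agent" | "escalate";
  }

  // Example: flag any agent step that exceeds a cost boundary.
  const costBoundary: GovernanceRule = {
    id: "agent-cost-limit",
    description: "Agent steps must stay under 50 EUR",
    appliesTo: "agent",
    isCompliant: (step) => step.executor !== "agent" || step.costEuro < 50,
    onViolation: "alert",
  };

In the design itself, users set these boundaries through UI controls rather than code; the sketch only shows the kind of structure the rule-setting screens would map onto.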

From manual-heavy workflows to AI-powered efficiency with governance built-in.

Ideation

Before moving into wireframes, the team explored multiple ways to visualize governance without overwhelming users. This phase was about thinking broadly and narrowing down — sketching layouts, testing flows, and considering how to balance complexity with clarity.

 

We started with quick diagrams comparing the current manual process to the future automated process, which helped identify where transparency mattered most. From there, we brainstormed concepts for dashboards, rule-setting screens, and compliance alerts.

 

The final direction prioritized simplicity and trust, focusing on clear visual cues, modular panels, and progressive disclosure to keep details accessible without clutter.

Exploring different layouts and flows before moving to wireframes.

Wireframing & Early Testing

Wireframes were the first step in bringing the concept to life. At this stage, the focus was on structure and clarity, not aesthetics. To that end, we designed mid-fidelity screens to test core interactions: process visualization of agent executions, agent conformance recommendations, and rule configuration.

Key Design Decisions:

 

  • Color coding for clarity: Human (yellow), System (blue), Agent (purple).
  • Side-by-side comparisons of performance (human/agent).
  • Progressive disclosure to avoid overwhelming users with data.
  • Customizable rules and boundaries for agent execution.

Testing these wireframes with users revealed what worked and what didn’t. Feedback helped refine navigation and terminology before moving to Hi-Fi design.
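As a hypothetical illustration of the color-coding decision, the executor-to-color mapping can live in a single set of design tokens so every view renders human, system, and agent steps consistently. The hex values below are placeholder assumptions; only yellow, blue, and purple come from the design.

  // Hypothetical design tokens for executor color coding.
  // Hex values are placeholders; only yellow/blue/purple come from the design.
  type ExecutorType = "human" | "system" | "agent";

  const executorColors: Record<ExecutorType, string> = {
    human: "#F5C542",  // yellow
    system: "#3B82F6", // blue
    agent: "#8B5CF6",  // purple
  };

  // Color a process-map node by who executed the step.
  const nodeColor = (executor: ExecutorType): string => executorColors[executor];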

Prototyping

With the structure validated, the next challenge was making governance feel approachable and trustworthy. Complex data like compliance logs and performance metrics needed to be presented in a way that was clear, actionable, and visually balanced.

 

We focused on creating modular dashboards and intuitive rule-setting screens, ensuring that users could quickly interpret data and make informed decisions without feeling overwhelmed.

Value Potential Dashboard: Clear metrics on cost savings, speed improvements, and ROI.

Execution Insights (Violations): Visual alerts for deviations and compliance breaches.

Agent Performance Comparison: Compare agents’ executions by time, cost, and steps. Different prompts lead to different results, helping users choose the most efficient option.

Performance Dashboard: Real-time metrics comparing agent vs human performance.
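As a rough sketch of the aggregation such a comparison could rest on, assuming a simplified step record with executor, duration, and cost (an illustrative shape, not the product's), averaging per executor type yields the time, cost, and step counts the dashboard visualizes side by side.

  // Hypothetical aggregation behind the agent-vs-human comparison.
  // The record shape and field names are assumptions for illustration.
  interface StepRecord {
    executor: "agent" | "human";
    durationMinutes: number;
    costEuro: number;
  }

  function summarize(log: StepRecord[], executor: "agent" | "human") {
    const steps = log.filter((s) => s.executor === executor);
    const total = (pick: (s: StepRecord) => number) =>
      steps.reduce((sum, s) => sum + pick(s), 0);
    return {
      stepCount: steps.length,
      avgDurationMinutes: steps.length ? total((s) => s.durationMinutes) / steps.length : 0,
      totalCostEuro: total((s) => s.costEuro),
    };
  }

  // Side-by-side comparison as shown on the performance dashboard:
  // const comparison = { agent: summarize(log, "agent"), human: summarize(log, "human") };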

Validation

Design is only successful if users trust it. To confirm this, I conducted usability testing with Hi-Fi prototypes and gathered feedback through surveys and interviews.

 

Key Findings:

  • 80% of users felt confident deploying agents after using the dashboard.
  • Compliance insights significantly increased trust in automation (which was their main concern).
  • Users valued the ability to compare cases and track ROI visually.
