Structuring Human-AI Interaction: Analysis of 5 Frameworks for UX Designers

Ubiratan Silva, M. Sc.

2025 has been a year of rapid evolution and growth, with significant progress in artificial intelligence and its integration into other fields of knowledge. It is impressive how quickly this technology has become embedded in our professional, academic, scientific, and personal lives.

For UX, UI, Product, and Service designers, curiosity and dedication are necessary when dealing with such fast-paced progress. Embracing AI requires rethinking processes, understanding the underlying technology, ensuring that human values and user needs remain at the center of what we create, and acknowledging that all of this is currently being debated amidst the whirlwind.

With AI becoming essential for digital innovation, our role as designers is being transformed and redefined. We are no longer just creating interfaces, but experiences that merge human-centered principles with new forms of interaction, devices, and technologies. This transition demands critical and technical thinking, fluency with data-driven systems, and a strong focus on human beings and on issues that are not always commercial, such as ethics and responsibility in projects involving AI.

To guide this new understanding, technology companies and universities are presenting diverse and interesting strategies for human-centered AI, highlighting it as a new theme to be explored. Here I list five of the main initiatives in this area, with a quick analysis of each one.

This article is inspired by and adapted from Rob Chappel's "Human-centered AI: 5 key frameworks for UX designers" (https://uxdesign.cc/human-centered-ai-5-key-frameworks-for-ux-designers-6b1ad9e53d23). It compiles frameworks from IBM, Google, Microsoft, and Carnegie Mellon University, offering an overview of each along with resources and references (links) for navigating the rapid evolution of AI technologies and tools.

1. IBM Human-AI Context Model
https://www.ibm.com/design/ai/fundamentals

The IBM Human-AI Context Model is at the core of its Design for AI practice, serving as a structured framework to ensure AI solutions interact fluidly with users and evolve with their feedback, while respecting and enhancing the context of use.

  • Understanding intent: AI systems should prioritize human goals, considering user intent, emotions, and context. Intent represents the fundamental purpose of the system, uniting the needs, desires, and values of users and businesses.
  • Data and policy: Refers to data collected from users and the world, as well as the policies governing its use. Collection and handling must comply with ethical and regulatory standards. Context is essential for personalizing recommendations, considering factors such as location, time, or urgency.
  • Machine understanding and expression: AI’s ability to interpret structured and unstructured data, apply logic, update knowledge with new insights, and communicate responses aligned with user expectations.
  • Human reactions and improvement cycle: Systems should work with humans, not just for humans, balancing automation with human agency. Continuous learning and improvement cycles are based on user interactions and feedback.
  • Outcome evaluation: Measuring the real impact of AI, ensuring it effectively and ethically meets user needs.

2. Google’s Explainability Rubric
https://explainability.withgoogle.com/rubric

Google’s Explainability Rubric aims to create AI systems that are transparent, fair, and user-centered, highlighting 22 key pieces of information that should be communicated to the user.

It is divided into three levels:

  • General: Explanation of how the product or service works, the role of AI, main benefits, business model, safety measures, and transparency.
  • Features: Details on AI-driven functions, clarifying when they are active, control options, limitations, customization possibilities, data usage, and human involvement.
  • Decision: Clarity on how automated decisions are made, the system’s confidence in its outputs, and mechanisms to correct errors or allow user contestation and feedback.
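The Decision level can be made concrete in the data a product hands to its interface. As a minimal sketch (the field and method names here are my own illustration, not part of Google's rubric), a decision surfaced to the user might bundle the outcome, a plain-language rationale, a confidence score, and a contestation channel:

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedDecision:
    """Illustrative payload for a user-facing automated decision.

    Maps the rubric's Decision level (how the decision was made, the
    system's confidence, error correction) onto data a UI could render.
    All names are hypothetical.
    """
    outcome: str                 # what the system decided
    rationale: str               # plain-language "why"
    confidence: float            # 0.0-1.0, communicated to the user
    can_contest: bool = True     # user may question or override
    feedback: list = field(default_factory=list)

    def contest(self, comment: str) -> None:
        """Record user pushback so the team can review the decision."""
        if not self.can_contest:
            raise ValueError("This decision is not contestable.")
        self.feedback.append(comment)

decision = ExplainedDecision(
    outcome="Loan application flagged for manual review",
    rationale="Income data was inconsistent across documents.",
    confidence=0.72,
)
decision.contest("My updated payslip was not considered.")
```

The point of the sketch is that explainability is a data requirement, not just copywriting: if the model pipeline never emits a rationale or a confidence value, the interface has nothing to explain.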

3. Microsoft’s HAX Toolkit
https://www.microsoft.com/en-us/haxtoolkit/

The Human-AI Experiences (HAX) Toolkit from Microsoft is a comprehensive framework to help teams develop user-oriented AI products.

Its components include:

  • Guidelines for Human-AI interaction: Best practices for ensuring intuitive experiences.
  • HAX Design Library: Applicable examples and patterns for each interaction guideline.
  • HAX Workbook: A collaborative tool for prioritizing guidelines, streamlining the design process.
  • HAX Playbook: Focused on natural language applications, pointing out common pitfalls and strategies for mitigation.

4. AI Brainstorming Kit from the HCI Institute (Carnegie Mellon)
https://aidesignkit.github.io/

The kit from Carnegie Mellon is designed to help teams explore AI capabilities and select relevant, user-centered projects, avoiding the development of irrelevant solutions.

  • It classifies AI functions into pattern detection, trend prediction, content generation, and task automation.
  • It provides examples of real AI products across multiple sectors (health, education, transportation).
  • It includes ideation prompts, impact-effort matrices, and performance grids to select high-impact, feasible ideas.
  • It is ideal for workshops and strategic innovation sessions, ensuring innovation is aligned with real user needs.
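The kit's impact-effort matrices can be run as a simple exercise in a workshop. A minimal sketch of that prioritization, assuming a 1-to-5 score on each axis (the scoring scheme and labels are my own, not the kit's):

```python
def quadrant(impact: int, effort: int, threshold: int = 3) -> str:
    """Place an idea (scored 1-5 on impact and effort) into a quadrant."""
    if impact >= threshold and effort < threshold:
        return "quick win"   # high impact, low effort: do first
    if impact >= threshold:
        return "big bet"     # high impact, high effort: plan carefully
    if effort < threshold:
        return "fill-in"     # low impact, low effort: maybe later
    return "avoid"           # low impact, high effort: drop

# Hypothetical workshop ideas scored as (impact, effort)
ideas = {
    "Auto-summarize patient notes": (5, 4),
    "Smart reply suggestions": (4, 2),
    "AI-generated app icons": (2, 4),
}
ranked = {name: quadrant(i, e) for name, (i, e) in ideas.items()}
```

Even this crude version forces the conversation the kit is after: whether an AI idea is worth its cost, rather than whether it is technically possible.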

5. People + AI Guidebook from Google
https://pair.withgoogle.com/guidebook/

Created by Google’s People + AI Research (PAIR) team, this guidebook compiles methods, use cases, and design patterns to help create impactful AI solutions.

Key themes include:

  • Starting with human-centered AI: Assessing whether AI adds value, setting expectations, and communicating benefits.
  • Using AI in products: Balancing automation and user control, managing trade-offs between accuracy and coverage.
  • User onboarding: Making exploration safe and explanatory, anchored in elements already familiar to the user.
  • Explaining AI: Demonstrating AI’s capabilities and limitations, communicating confidence levels, and providing contextual explanations.
  • Building responsible datasets: Involving specialists, careful design for data labelers, and responsible dataset maintenance.
  • Building and calibrating trust: Transparency about privacy, accountability for errors, and creating feedback channels.
  • Balancing control and automation: Gradual automation and handing back control to the user when necessary.
  • Failure support: Planning for error resolution and user recovery mechanisms.
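PAIR's guidance on balancing control with failure support often reduces to one rule: act automatically only when confidence is high, and hand the decision back to the user otherwise. A minimal sketch of that routing logic (the thresholds and labels are illustrative assumptions, not taken from the guidebook):

```python
def route_action(confidence: float,
                 auto_threshold: float = 0.9,
                 suggest_threshold: float = 0.6) -> str:
    """Decide how much agency the system keeps for one prediction.

    Above auto_threshold the system acts on its own (with an undo path);
    in the middle band it only suggests, and the user confirms or edits;
    below suggest_threshold it defers entirely, which is the failure-
    support case. Thresholds are illustrative and should be calibrated
    per product and per risk level.
    """
    if confidence >= auto_threshold:
        return "automate"   # act, but keep an undo path
    if confidence >= suggest_threshold:
        return "suggest"    # propose; user confirms or edits
    return "defer"          # hand control back to the user
```

In practice the thresholds would differ by stakes: a spam filter can automate aggressively, while a medical or financial feature should defer far earlier.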

These five frameworks provide a foundation for designing AI that integrates into everyday life, from interactive robots to organizational apps. Approaching AI with human-centered frameworks means balancing technical capabilities with responsibility, questioning the real need for AI in use cases, and building systems that learn from continuous user feedback.

User Centrality and Ethical Responsibility
All frameworks emphasize the importance of placing user needs at the center of the process, whether by identifying intent behind AI use (IBM) or enabling feedback and contestation (Google, Google PAIR). Essentially, it is not enough to “design for the user,” but rather to “design with the user,” fostering a continuous cycle of improvement and adaptation based on their real context.

Transparency and Explainability
The focus on explainability, especially in Google’s frameworks, reflects a growing demand for transparent systems, where users understand how AI reaches decisions and know when they can question or intervene. This is vital to build trust and promote responsible use, especially in sensitive contexts such as healthcare and finance.

Flexibility and Multidisciplinary Collaboration
The use of collaborative kits and playbooks (Microsoft HAX, CMU Brainstorming Kit) reveals the complexity of AI projects, which require multidisciplinary teams and iterative design processes involving developers, researchers, business, domain experts, and end users.

Data and Privacy
There is a recurring emphasis on ethical data governance, from responsible collection to the direct involvement of specialists during dataset management and maintenance (Google PAIR). This is vital due to potential biases, misuse, and privacy risks.

Balancing Automation and Human Agency
The frameworks argue that automation should be progressive and should always allow users to regain control in cases of error or the need for supervision (Microsoft, Google PAIR). This fosters safer and more adaptive experiences, reducing frustration and risks.

Iteration, Continuous Feedback, and Improvement
The cycle of learning and evolution based on user feedback closes the loop (IBM, Google, Microsoft, CMU), ensuring that AI systems are not rigid but instead learn and improve according to real-world use.

Final Considerations
The analyzed frameworks present solid paths for human-centered AI design, recommending practices of transparency, ethics, continuous adaptation, multidisciplinary collaboration, and a genuine focus on real user needs. The complexity of AI must not overshadow the simplicity, clarity, and empathy of good design. This seems to be the greatest challenge for designers today and in the near future.

Ubiratan Silva: Product and Service Design Lead, PhD candidate in Design at UFRGS, Master in Design at Unisinos, BA in Social Communication – Advertising at UFRGS, Design Leadership, CEO of Online UX Team. Specialist in Product Strategy and AI, with over 25 years in UX/UI, Educator, Mentor, and Researcher in Design and Artificial Intelligence.

https://www.linkedin.com/in/ubiratansilva

Keywords:

UI/UX
Branding
Visual identity (brand)