Today we are sharing publicly Microsoft’s Responsible AI Standard, a framework to guide how we build AI systems. This is an important step in our journey to develop better, more trustworthy AI. We are releasing our latest Responsible AI Standard to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI.
Guiding product development toward more responsible outcomes
AI systems are the product of many different decisions made by those who develop and deploy them. From the purpose of the system to how people interact with AI systems, we need to proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date.
The standard details concrete goals or outcomes that teams developing AI systems must strive to secure. These goals help break down a broad principle like ‘accountability’ into its key enablers, such as impact assessments, data governance, and human oversight. Each goal is then composed of a set of requirements, which are steps that teams must take to ensure that AI systems meet the goals throughout the system lifecycle. Finally, the standard maps available tools and practices to specific requirements so that Microsoft’s teams have the resources to help them succeed.
The need for this kind of practical guidance is growing. AI is becoming more and more a part of our lives, and yet our laws are lagging behind. They have not caught up with AI’s unique risks or society’s needs. While we see signs that government action on AI is expanding, we also recognize our responsibility to act. We believe that we need to work toward ensuring AI systems are responsible by design.
Refining our policy and learning from our product experiences
Over the course of a year, a multidisciplinary group of researchers, engineers, and policy experts crafted the second version of our Responsible AI Standard. It builds on our previous responsible AI efforts, including the first version of the standard that launched internally in the fall of 2019, as well as the latest research and some important lessons learned from our own product experiences.
Fairness in Speech-to-Text Technology
The potential of AI systems to exacerbate societal biases and inequities is one of the most widely recognized harms associated with these systems. In March 2020, an academic study revealed that speech-to-text technology across the tech sector produced error rates for some Black and African American communities that were nearly double those for white users. We stepped back, considered the study’s findings, and learned that our pre-release testing had not satisfactorily accounted for the rich diversity of speech across people from different backgrounds and from different regions. After the study was published, we engaged an expert sociolinguist to help us better understand this diversity and expanded our data collection efforts to narrow the performance gap in our speech-to-text technology. In the process, we found that we needed to grapple with challenging questions about how best to collect data from communities in a way that engages them appropriately and respectfully. We also learned the value of bringing experts into the process early, including to better understand factors that might account for variations in system performance.
The Responsible AI Standard records the pattern we followed to improve our speech-to-text technology. As we continue to roll out the standard across the company, we expect its Fairness Goals and Requirements to help us identify and get ahead of potential fairness harms.
Appropriate Use Controls for Custom Neural Voice and Facial Recognition
Azure AI’s Custom Neural Voice is another innovative Microsoft speech technology that enables the creation of a synthetic voice that sounds nearly identical to the original source. AT&T has brought this technology to life with an award-winning in-store Bugs Bunny experience, and Progressive has brought Flo’s voice to online customer interactions, among many other customers. This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used inappropriately to impersonate speakers and deceive listeners.
Our review of this technology through our Responsible AI program, including the Sensitive Uses review process required by the Responsible AI Standard, led us to adopt a layered control framework: we restricted customer access to the service, ensured acceptable use cases were proactively defined and communicated through a Transparency Note and Code of Conduct, and established technical guardrails to help ensure the active participation of the speaker when creating a synthetic voice. Through these and other controls, we helped protect against misuse, while maintaining beneficial uses of the technology.
Building on what we learned from Custom Neural Voice, we will apply similar controls to our facial recognition services. After a transition period for existing customers, we are limiting access to these services to managed customers and partners, narrowing the use cases to pre-defined acceptable ones, and leveraging technical controls built into the services.
Fit for Purpose and Azure Face Capabilities
Finally, we recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve. As part of our work to align our Azure Face service with the requirements of the Responsible AI Standard, we are also retiring capabilities that infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.
Taking emotional states as an example, we have decided we will not provide open-ended API access to technology that can scan people’s faces and purport to infer their emotional states based on their facial expressions or movements. Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of “emotions,” the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability. We also decided that we need to carefully analyze all AI systems that purport to infer people’s emotional states, whether the systems use facial analysis or any other AI technology. The Fit for Purpose Goal and Requirements in the Responsible AI Standard now help us to make system-specific validity assessments upfront, and our Sensitive Uses process helps us provide nuanced guidance for high-impact use cases, grounded in science.
These real-world challenges inform the development of Microsoft’s Responsible AI Standard and demonstrate its impact on the way we design, develop, and deploy AI systems.
For those who want to dig into our approach further, we have also made available some key resources that support the Responsible AI Standard: our Impact Assessment template and guide, and a collection of Transparency Notes. Impact Assessments have proven valuable at Microsoft for ensuring teams explore the impact of their AI system – including its stakeholders, intended benefits, and potential harms – at the earliest design stages. Transparency Notes are a new form of documentation in which we disclose to our customers the capabilities and limitations of our core building block technologies, so they have the knowledge necessary to make responsible deployment choices.
A multidisciplinary, iterative journey
Our updated Responsible AI Standard reflects hundreds of inputs across Microsoft technologies, professions, and geographies. This is a significant step forward in our practice of responsible AI because it is much more actionable and concrete: it sets out practical approaches for identifying, measuring, and mitigating harms ahead of time, and requires teams to adopt controls to secure beneficial uses and guard against misuse. You can learn more about the development of the standard in this
While our standard is an important step in Microsoft’s responsible AI journey, it is just one step. As we make progress with implementation, we expect to encounter challenges that require us to pause, reflect, and adjust. Our standard will be a living document, evolving to address new research, technologies, laws, and learnings from within and outside the company.
There is a rich and active global dialogue about how to create principled and actionable norms to develop and deploy AI responsibly. We have benefited from this discussion and will continue to contribute to it. We believe that industry, academia, civil society, and government need to collaborate to advance the state of the art and learn from one another. Together, we need to answer open research questions, close measurement gaps, and design new practices, patterns, resources, and tools.
Better, more equitable futures will require new guardrails for AI. Microsoft’s Responsible AI Standard is a contribution toward this goal, and we are engaging in the hard and necessary implementation work across the company. We’re committed to being open, honest, and transparent in our efforts to make meaningful progress.