Microsoft Highlights Responsible AI Efforts in New Report

Copilot creator showcases responsible AI efforts including expanding safety tools for Azure customers

Ben Wodecki, Jr. Editor

May 9, 2024

3 Min Read
A bald man in a dark blue jumper stands in front of the Microsoft logo
Ethan Miller/Getty Images

Microsoft has published a report detailing its responsible AI practices.

The 40-page Responsible AI Transparency Report lauds the company’s own efforts for building generative AI responsibly.

The report highlights several safety measures including the launch of 30 tools for developing responsible AI and more than 100 features for its AI customers to help them safely deploy solutions.

The report highlights a 17% increase in Microsoft’s responsible AI community which now spans more than 400 members.

Microsoft also revealed that all of its employees are mandated to undertake responsible AI training. Ninety-nine percent of employees have completed related modules as part of the company’s annual Standards of Business Conduct training.

“At Microsoft, we recognize our role in shaping this technology,” according to the report.. “We have released generative AI technology with appropriate safeguards at a scale and pace that few others have matched. This has enabled us to experiment, learn and hone cutting-edge best practices for developing generative AI technologies responsibly.”

After racing ahead with its generative AI efforts in early 2023, Microsoft sought to temper its speed with safety.

Among its safety-focused efforts, Microsoft launched a program to help train customers to deploy regulatory-compliant AI applications and pledged to pay the legal fees of companies facing intellectual property lawsuits from using its Copilot products.

Related:Microsoft Invests $1.7B to Improve AI, Cloud Infrastructure in Indonesia

Earlier this year, Microsoft published a series of AI principles, showcasing its commitment to fostering competition in the AI space. However, those principles were published in the wake of growing antitrust investigations over its ties to OpenAI and more recently French AI startup Mistral.

In its latest attempt to paint itself in a safety-focused light, Microsoft highlighted expanded tools given to Azure customers to evaluate their AI systems for issues like hate speech and security circumventions.

The report also highlights expansions to its red-teaming efforts. Red teaming is when developers and security professionals stress-test AI models by exploiting gaps within the models' security architecture.

To support external security checks, Microsoft’s report references PyRIT, its internal security testing tool that it made public earlier this year.

“Since its release on GitHub, PyRIT has received 1,100 stars and been copied more than 200 times by developers for use in their own repositories where it can be modified to fit their use cases,” according to the report.

Related:Microsoft Vows Fair AI Practices Amid Antitrust Scrutiny – MWC 2024

Microsoft’s commitment extends to collaboration through the Frontier Model Forum, an industry body for safe AI development formed last July with rivals Google and Anthropic.

Referencing the new nonprofit in its report, Microsoft said it will continue to share information on safety risks and work to create best practices for the responsible development of large-scale AI systems.

Microsoft’s report concludes with a pledge to continue investing in efforts to build AI responsibly which includes a commitment to create tools for customers so they can safely develop their own AI applications.

Brad Smith Microsoft’s president and Natasha Crampton, the company’s chief responsible AI officer, wrote in the report: “We believe we have an obligation to share our responsible AI practices with the public and this report enables us to record and share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable and earn the public’s trust.

“There is no finish line for responsible AI. And while this report doesn’t have all the answers, we are committed to sharing our learnings early and often and engaging in a robust

dialogue around responsible AI practices.”

Veera Siivonen, Saidot’s chief commercial officer, said the report emphasizes the importance of proactive governance in the growing AI market.

“As a primary player in this industry, [Microsoft] have a duty to be transparent, not least to the companies building solutions on top of the work they do,” Siivonen said. “It is important to recognize the symbiotic relationship between OpenAI and Microsoft, Microsoft essentially lends its trust to OpenAI by giving the company access to its customers through the cooperation. Given the influence that both companies hold, building a culture of trust, transparency and accountability absolutely should be the number one priority for both Microsoft and OpenAI.”

Read more about:

ChatGPT / Generative AI

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

Keep up with the ever-evolving AI landscape
Unlock exclusive AI content by subscribing to our newsletter!!

You May Also Like