Responsible AI is a thing

AI needs to be easier, explainable, accessible, and most of all, responsible. (Image: Gerd Altmann from Pixabay)

The road to more powerful AI-enabled data centers and supercomputers is going to be paved with more powerful silicon. And if Nvidia has its way, much of the silicon innovation will be the technology it has developed.

At the ongoing Computex computer hardware expo in Taipei, Nvidia announced a series of hardware milestones and new innovations to help advance the company’s goals. 

“AI is transforming every industry by infusing intelligence into every customer engagement,” Paresh Kharya, senior director of product management at Nvidia, said during a media briefing. “Data centers are transforming into AI factories.”

One of the key technologies enabling Nvidia’s vision is the company’s Grace superchip. At Computex, Nvidia announced that a number of hardware vendors, including ASUS, Foxconn Industrial Internet, GIGABYTE, QCT, Supermicro, and Wiwynn, will be manufacturing Grace-based systems that will begin shipping in the first half of 2023. 

Nvidia first announced the Grace CPU, an Arm-based architecture for AI and high-performance computing workloads, in 2021. It is the building block of the AI factories the company envisions.

“Grace is built to accelerate the largest AI, HPC, cloud, and hyperscale workloads,” Kharya added.

AI responsibly

This week at Build 2022, Microsoft’s annual conference for software engineers and web developers, responsible AI was at the heart of the discussion, John Montgomery, corporate vice president of Microsoft Azure AI, told VentureBeat. 

“The question was how to make AI more accessible and easier to understand. Users need to create something compliance officers can leverage,” said Montgomery.

Most notable is the preview of Azure Machine Learning’s responsible AI dashboard, which now also offers a responsible AI “scorecard” in preview. The scorecard summarizes model performance and insights so that all stakeholders can easily participate in compliance reviews; it makes these tools usable for people with less data science experience and can be exported as a PDF. 
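To make the idea concrete, here is a minimal sketch of what a scorecard-style summary distills: headline metrics computed from a model’s predictions, packaged so non-specialists can review them. This is not the Azure Machine Learning API; the labels and predictions below are toy data invented for illustration.

```python
# Toy ground-truth labels and model predictions (hypothetical data).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

# Count true positives, false positives, false negatives, and correct calls.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)

# A scorecard-style summary: the headline numbers a compliance reviewer
# would look at, without needing to inspect the model itself.
scorecard = {
    "accuracy": correct / len(y_true),
    "precision": tp / (tp + fp),
    "recall": tp / (tp + fn),
}
print(scorecard)
```

The real Azure scorecard adds fairness, error-analysis, and data-quality insights on top of metrics like these, but the principle is the same: condense model behavior into figures that stakeholders outside data science can audit.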

For instance, a team of medical professionals is exploring how AI could help reduce waiting times, support recommendations from healthcare teams, and provide patients with better information so they can make more informed decisions about their own care. 

The model is hosted in Microsoft’s Azure cloud and uses the responsible AI dashboard in Azure Machine Learning. Medical professionals can see how the model works and get a clearer understanding of why the AI has reached those conclusions. This boosts confidence that their advice to patients is based on accurate and reliable data.

“Machine learning models are kind of a dark art, so a data scientist can explain to another data scientist why they behave a certain way, but you need to be able to explain to business owners, CEOs, auditors, and compliance teams—that’s where the responsible AI tooling comes in, to help explain why models are making the decisions they’re making,” added Montgomery. 
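One simple way to make a model’s decision legible to a non-specialist, as Montgomery describes, is to report per-feature contributions. For a linear scoring model, each feature’s contribution is just its weight times its value, which yields an auditable breakdown of a single prediction. The sketch below uses hypothetical feature names and weights; it is an illustration of the technique, not the tooling Microsoft ships.

```python
# Hypothetical linear scoring model: weights and bias are invented.
weights = {"age": 0.02, "blood_pressure": 0.02, "prior_visits": 0.3}
bias = -1.5

# One hypothetical patient record to explain.
patient = {"age": 60, "blood_pressure": 80, "prior_visits": 2}

# For a linear model, weight * value is each feature's exact contribution.
contributions = {f: weights[f] * patient[f] for f in weights}
score = bias + sum(contributions.values())

# Rank features by how strongly each pushed the score, so a reviewer can
# see at a glance what drove this particular prediction.
ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
print(score, ranked)
```

Linear contributions are exact only for linear models; for the opaque models Montgomery calls a “dark art,” tools like the responsible AI dashboard rely on more general explanation methods, but the output is the same kind of ranked, per-feature story.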

For AI to become more useful for business applications, it needs to be easier, more explainable, more accessible and, most of all, responsible. 

For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable.

Monica Savellano

Monica’s first foray into the world of consumer tech began over 20 years ago with a 1st Generation iPod. She’s currently catching up on the world of technology at a much slower pace than the industry is growing.
