Between Microsoft’s Tay debacle, the controversies surrounding Northpointe’s Compas sentencing software, and Facebook’s own algorithms helping spread online hate, AI’s more egregious public failings over the past few years have shown off the technology’s skeevy underbelly — and just how much work we have to do before these systems can reliably and equitably interact with humanity. Of course, such incidents have done little to tamp down the hype around, and interest in, artificial intelligence and machine learning systems, and they certainly haven’t slowed the technology’s march towards ubiquity.
Turns out, one of the primary roadblocks to emerge against AI’s continued adoption has been the users themselves. We’re no longer the same dial-up rubes we were in the baud rate era. An entire generation has already grown to adulthood without ever knowing the horror of an offline world. And as such, we have seen a sea change in perspectives regarding the value of personal data and the business community’s responsibility to protect it. Just look at the overwhelmingly positive response to Apple’s recent iOS 14.5 update, which grants iPhone users an unprecedented level of control over how their app data is leveraged and by whom.
Now, the Responsible Artificial Intelligence Institute (RAI) — a non-profit developing governance tools to help usher in a new generation of trustworthy, safe, Responsible AIs — hopes to offer a more standardized means of certifying that our next HAL won’t murder the entire crew. In short, the organization wants to build “the world’s first independent, accredited certification program of its kind.” Think of the LEED green building certification system used in construction, but for AI instead.
“We’ve only seen the tip of the iceberg,” when it comes to potential bad behaviors perpetrated by AIs, Mark Rolston, founder and CCO of argodesign, told Engadget. “[AI is] now really insinuating itself into very ordinary aspects of how businesses conduct themselves and how people experience everyday life. When they start to understand more and more of how AI is behind that, they will want to know that they can trust it. That will be a fundamental issue, I think, for the foreseeable future.”
Work towards this certification program began nearly half a decade ago alongside the founding of RAI itself, at the hands of Dr. Manoj Saxena, University of Texas Professor on Ethical AI Design, RAI Chairman and a man widely considered to be the “father” of IBM Watson, though his initial inspiration came even further back.
“When I was asked by the IBM board to commercialize Watson, I started realizing all these issues — I’m talking 10 years ago now — about building trust in automated decisioning systems including AI,” he told Engadget. “The most important question that people used to ask me when we were trying to commercialize was, ‘How do I trust this system?’”
Answering that question is the essence of RAI’s work. As Saxena describes it, AI today guides our interactions with the myriad facets of the modern world much like how Google Maps helps us get from one place to another. Except instead of navigating streets, AI is helping us make financial and healthcare decisions, decide who to Netflix and chill with, and pick what to watch ahead of the aforementioned chillin’. “All of these are getting woven in by AI and AI is being used to help improve the engagement and decisions,” he explained. “We realized that there are two big problems.”
The first is the same issue that has plagued AI systems since their earliest iterations: we have no flippin’ clue as to what’s going on inside them. They’re black boxes running opaque decision processes to reach conclusions whose validity can’t accurately be explained by either the AI’s users or its programmers. This lack of transparency is not a good look when you’re trying to build trust with a skeptical public. “We figured that bringing transparency and trust to AI and automated decisioning models is going to be an incredibly important capability just like it was bringing security to the web [in the form of widespread HTTPS adoption],” Saxena said.
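To make the transparency problem a bit more concrete: one common, model-agnostic technique researchers use to peek inside a black box is permutation importance, which measures how much each input feature actually drives a model’s predictions. The sketch below is purely illustrative — it is not RAI’s method, and the synthetic data and feature names are stand-ins.

```python
# Minimal sketch: estimating which features a black-box model actually
# relies on, via permutation importance (shuffle one feature at a time
# and see how much accuracy drops). Illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for, say, a loan-approval dataset.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in model accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

A readout like this doesn’t fully explain a model’s reasoning, but it at least tells an auditor which inputs are doing the heavy lifting — the kind of visibility a certification scheme would look for.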
The second issue is how to solve the first in a fair and independent manner. We’ve already seen what happens when society leaves effective monopolies like Facebook and Google to regulate themselves. We saw the same shenanigans when Microsoft swore up and down that it would self-regulate and play fair during the Desktop Wars of the 1990s — hell, the Pacific Telegraph Act of 1860 came about specifically because telecoms of the era couldn’t be trusted not to screw over their customers without government oversight. This is not a new problem, but RAI thinks its certification program might be its modern solution.
Certifications are awarded in four levels — basic, silver, gold, and platinum (sorry, no bronze) — based on the AI’s scores across the five OECD principles of Responsible AI: interpretability/explainability, bias/fairness, accountability, robustness against unwanted hacking or manipulation, and data quality/privacy. The certification is administered via questionnaire and a scan of the AI system. Developers must score 60 points to reach the basic certification, 70 points for silver and so on, up to 90-plus points for platinum status.
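In other words, the tiers boil down to simple score thresholds. Here is a minimal sketch of how they might map onto an assessment score; the 60, 70, and 90-plus cutoffs come from the figures above, while the 80-point gold threshold and the function name are assumptions for illustration only.

```python
def certification_level(score: float) -> str:
    """Map an assessment score (0-100) to a certification tier.

    Basic (60), silver (70), and platinum (90+) follow the figures quoted
    above; the 80-point gold cutoff is assumed for illustration.
    """
    if score >= 90:
        return "platinum"
    if score >= 80:  # assumed gold threshold
        return "gold"
    if score >= 70:
        return "silver"
    if score >= 60:
        return "basic"
    return "not certified"


# Example: a system scoring 72 across the five principles lands in silver.
print(certification_level(72))  # -> "silver"
```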
Rolston notes that design analysis will play an outsized role in the certification process. “Any company that is trying to figure out whether their AI is going to be trustworthy needs to first understand how they’re constructing that AI within their overall business,” he said. “And that requires a level of design analysis, both on the technical front and in terms of how they’re interfacing with their users, which is the domain of design.”
RAI expects to find (and in some cases has already found) a number of willing clients in government, academia, enterprise corporations and technology vendors for its services, though Saxena and Rolston are remaining mum on specifics while the program is still in beta (until November 15th, at least). Saxena hopes that, like the LEED certification, RAI’s program will eventually evolve into a universal certification system for AI. It will, he argues, help accelerate the development of future systems by eliminating much of the uncertainty and liability exposure today’s developers — and their harried compliance officers — face, while building public trust in the brand.
“We’re using standards from IEEE, we are looking at things that ISO is coming out with, we are looking at leading indicators from the European Union like GDPR, and now this recently announced algorithmic law,” Saxena said. “We see ourselves as the ‘do tank’ that can operationalize those concepts and those think tanks’ work.”