Humans for Transparency in Artificial Intelligence

By Ben Goertzel, Bill Hibbard, Nick Baladis, Hruy Tsegaye, and David Hanson

Recent dramatic progress in artificial intelligence (AI) leads us to believe that Ray Kurzweil’s prediction of human-level AI by 2029 may be roughly accurate. Even if reality proves somewhat different, it seems very likely that today’s young people will spend most of their lives in a world largely shaped by AI.

The rapid advent of increasingly advanced AI has led many people to worry about the balance of positive and negative consequences it will bring. While there is a limit to how far anyone can predict or control revolutionary developments, there are things we can do now to maximize the odds that the future development of AI is broadly positive, and that its potential for amazing benefits outweighs its potential risks.

One thing we can do now is to advocate for the development of AI technology to be as open and transparent as possible — so that AI is something the whole human race is doing for itself, rather than something foisted on the rest of the world by one small group or another. The creation and rollout of new forms of general intelligence is a huge deal, and it is something that can benefit from the full intelligence and wisdom of the whole human race. Specifically, we need transparency about what AI is used for and how it works.

For this reason we are gathering signatures on a petition in support of Transparent AI.  Please sign if you agree!

Transparency will help in multiple ways. First, as with cyber security, there are many complex technical vulnerabilities in AI systems. Experimental AI systems have found ways to accomplish their goals that violate their designers’ unstated assumptions, resulting in undesirable behaviors. Security flaws could likewise enable hackers to damage AI systems, again resulting in undesirable behaviors. Transparency in how AI works will allow the world’s computer scientists to search for such vulnerabilities and propose fixes. We can see the power of this approach in the way the Linux community deals with operating system vulnerabilities.

Second, AI is increasingly employed by providers of Internet services to build predictive models of users and to persuade those users to purchase products and support political candidates and positions. When AI language ability equals human ability, persuasive messages may be subtly embedded in conversations with AI. When AI can model human society, persuasion may subtly employ peer pressure and shape human culture. Transparency in what AI is used for can make such persuasion visible to people, so they can take individual or political action to resist it. Transparency would also inform people about military uses of AI, generating social pressure for treaties similar to those banning biological and chemical weapons.

AI is a tool of military, economic and political competition among humans. In the heat of competition, groups that feel themselves behind in the race to develop AI may expose the public to risk in order to gain advantage. Transparency would inform the public of such risks so they could act to prevent them.

On the more positive side, transparency will also bring a far greater diversity of human minds to bear on the numerous difficult scientific, engineering and creative problems involved in creating advanced AI systems. Even the greatest of companies or government labs cannot match the breadth of culture, background and expertise of the people involved in a large open-source project (Linux being a case in point). A transparent approach naturally lends itself to involvement by a wide assemblage of people from different parts of the world, with different cultures, professions, perspectives, and variations on core human values. The result of this kind of rich diversity tends to be a more robust, more nuanced and multidimensional product, one more able to deal adaptively with complex real-world situations.

The Internet, and Science itself, are leading examples of technical developments that have grown via cultures marked by significant transparency and massive creative diversity.  It is largely because of the transparency at their foundations that they are among the more powerful and robust entities in our world today.  It is desirable that our advanced AI efforts meet and then exceed the level of transparency, creativity and robustness demonstrated by Science and the Internet!

Just as the transparency and openness of the international scientific community tends to foster global cooperation and can help promote peace, so a transparent and open AI community will be more likely to foster broadly beneficial AI developments, and less likely to foster adversarial ones. When AI researchers everywhere are working together and studying, correcting and improving each other’s code, there is likely to emerge a sense of community that leads to more of a focus on creating mutual benefit. The “global mind” of the community of AI researchers, application developers and other related workers is likely to become more unified and less divided.

Further, transparent AI is more likely to be used to help people in the developing world, rather than just the economically privileged. When a technology is closed and proprietary, applications generally have to wait to pervade the developing world until the corporations that own them figure out a sufficiently lucrative way to profit from deploying them there. A transparent technology can be taken up by enthusiastic early adopters in the developing world and adapted to serve local needs, in ways that may be beneficial even if not immediately profitable at a scale interesting to large developed-world corporations. Often this sort of local adaptation has been achieved by bypassing international law — e.g. software piracy, low-cost knock-off imitations of electronic devices, or small rural farmers carrying out illegal but creative and productive cross-breeding of GMO crops with natural local crops. But of course the leveraging of advanced technologies like AI to help the developing world will proceed much more rapidly and smoothly if it can be done without circumventing the law.

More speculatively and deeply, one can see transparency regarding AI as a potentially powerful tool for dealing with the profound fear of AI that is evident in so many AI-related science fiction movies, and in the recent statements of various science and technology pundits. Different individuals’ worries about the future of AI stem from different causes, but alongside various rational concerns, there is often a significant element of basic fear of the unknown. Fear of, and bias against, the unknown is part and parcel of human nature; it is not always wrong, and there is no simple “cure” for it when it is wrong. However, we feel transparency can be a valuable tool for counteracting some of the fears people experience and express regarding AI. In the ideal case it can help replace reflexive fear with detailed rational consideration. The more transparent AI is, the less it falls into the category of the “worrisome-or-worse unknown”, and the more likely it is that people can deal with it in a reason-based and emotionally balanced manner.

Transparent AI is not a new idea, nor is it (yet) the norm in the AI research and development world. For instance, there has been significant discussion in the media recently about OpenAI, an amply funded AI project founded by Elon Musk, Sam Altman, Peter Thiel and a number of their colleagues, with an initial orientation toward open-source software development. There are also a variety of open-source projects focused on Artificial General Intelligence, including the OpenCog project founded by one of this article’s authors, and many others such as (to name just a few) OpenNARS, MicroPsi, the (largely Japan-based) Whole Brain Initiative, and the Hanson Robotics intelligent robot control framework. Google and Facebook have also released open source versions of their AI systems.

We believe these projects are excellent steps in the right direction. But at the present time, closed and opaque AI development is much more generously supplied with financial, computational and human resources — especially when it comes to scalable practical AI development, rather than pure research. We would like to change this, and would like to see the balance shift in favor of transparent AI.

All this is why, in collaboration with Ethiopian AI firm iCog Labs, we have created an online petition in support of transparent AI.  If you agree with us that transparent, open development of advanced AI technology is — based on our current state of knowledge — the best option for the future of humanity and other sentient beings, please add your signature to the petition.

Also, as a seed for a broad-based movement for transparency in AI, we have created a Google group for initial organizing. People without Google accounts can join the group by sending email to transparent_ai+subscribe@googlegroups.com. All people are welcome in such a movement, but young people especially have an interest in making AI transparent. Please forward links to this article widely.

Remember: the amazing, transformational future is not just something that is happening to us — it is something being created by all of our actions. There is meaning, and potentially great impact, in what you choose to do and what you choose to advocate.

———————————————

About the authors: Bill Hibbard is an Emeritus Senior Scientist at the University of Wisconsin-Madison Space Science and Engineering Center, who has written and spoken widely on the ethics of superintelligence, alongside his technical work. Nick Baladis is a second-year student at the MIT Sloan School of Management. Ben Goertzel is an Artificial General Intelligence researcher involved with multiple projects including OpenCog, Hanson Robotics, the AGI Society, iCog Labs and Aidyia Limited. Hruy Tsegaye is a writer and software project leader who works with Ben Goertzel at iCog Labs in Addis Ababa, focusing on applications of AI and robotics to help African education. David Hanson leads cutting-edge robotics firm Hanson Robotics, working to create human-like and compassionate robots to usher in a Friendly Singularity.
