Undoubtedly, artificial intelligence (AI) has the prospect of becoming one of the most influential and profound inventions in human history. Although AI is not developing as one coordinated project, the many separate efforts to solve individual problems are propelling a revolution, changing the way human beings reason and make decisions. To fear this change is to fear facing our very own inventiveness.

The above summarizes the opening of an op-ed published in the August issue of The Atlantic, written by former Secretary of State Henry Kissinger; former Google Chief Executive Officer Eric Schmidt; and Daniel Huttenlocher, inaugural Dean of the Massachusetts Institute of Technology’s Schwarzman College of Computing.

The article invites readers to think about the impacts of AI by dissecting the “revolution” and highlighting components that require additional attention.

AI is not entirely novel 

The trio began with AlphaZero, which picked up chess by playing against itself and, within 24 hours of training, defeated the world’s strongest chess program. What puzzles many is that it does not follow traditional chess strategies; it plays moves a human chess player would call “counterintuitive”. Yet many chess players are now studying its moves and incorporating them into their own play, after an explanation of how the program mastered chess was published last December.

AI’s ability to perform well-specified tasks by uncovering associations between data and actions means it will not conform to human reasoning. Rather, AI will expand exponentially on what humans are familiar with by learning from its own experiences, and it is up to humans to interpret and decipher what AI is actually doing. As such, from a national security perspective, it will become increasingly challenging to control AI.

Deploying certain strategies requires a level of transparency, but AI cannot offer policymakers that kind of insight. There is a need for a new concept of AI security, one that can disconnect a system from the network in which it is operating. Nor is AI the only opaque party: the companies developing it are often not very transparent either.

A group of ethics researchers from the University of British Columbia analyzed 41 wearable devices and their marketing materials. The results showed that whilst most devices promise to alleviate stress, enhance sleep quality and increase concentration, these claims are not formally supported by peer-reviewed research. Users are never informed of the origins of these assertions.

Familiar but not that similar 

Besides, most companies will adapt their devices to be compatible with the local culture and society. For example, the aging population and Shinto beliefs (which regard inanimate objects as alive with spirits, though not human-like) have encouraged the development of digital companions in Japan. AI also has the capability to remove language barriers and, in the process, to inhibit real cultures. This gives it an unprecedented impact on the way important information is shaped and diffused.

At the same time, governments may use the technology to monitor or even regulate people’s behavior and movements. Defining what is aberrant becomes challenging because the line between real and unreal is now blurred: think of how a GAN (generative adversarial network) can create ultra-realistic faces of people who never existed, as the sketch below illustrates.
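To make the adversarial idea concrete, here is a minimal, illustrative sketch in Python with PyTorch. It is a toy example under stated assumptions, not any actual face-generating system: a simple 1-D distribution stands in for face images, and the network sizes and hyperparameters are arbitrary choices for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: a simple 1-D Gaussian the generator must learn to imitate.
def real_batch(n):
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator: label real samples 1 and generated samples 0.
    real, fake = real_batch(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator: try to make the discriminator label fakes as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples should cluster near the real mean (4.0).
print(G(torch.randn(1000, 8)).mean().item())

The same tug-of-war between the two networks, scaled up to millions of parameters and trained on photographs, is what produces photorealistic faces of people who never existed.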

Nevertheless, the three writers remain hopeful, as long as we begin to think about how AI should be regulated and monitored. The trio got the ball rolling by putting forward several ideas: establish a body for AI ethics to facilitate thinking about, and exercising, the responsible deployment of AI; program AI to refuse to answer philosophical questions, especially those that derail us from our understanding of reality; and involve humans in important decision-making processes.

All of which has to take place based on “what we already know, what we are sure to discover in the near future, and what we are likely to discover when AI becomes widespread,” the trio proposed.

Enjoy the article? Find out more on the AIMed website or subscribe to our weekly e-newsletter today!

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow her on Twitter.