It took years for regulators to impose restrictions limiting the use of mobile phones while driving. Are drivers making calls on handheld phones more distracted than those texting, playing games, or browsing the internet? Should leniency be given to experienced drivers, or to those driving in lower-risk areas? Is a ban effective in preventing related car accidents? These debates shaped where and how regulators set their parameters.

Some aspects of AI are easier to regulate than others

A similar scene is playing out in artificial intelligence (AI) at the moment. The technology is not yet mature enough for regulators to understand how it will be used or misused, which has justified the “wait and see” stance taken by many governments. Defining AI and its various aspects is the first step toward regulating it. It has also turned out to be the hardest. Cognilytica, a Washington-based analyst firm focused on AI and machine learning, recently released a Worldwide AI Laws and Regulations report highlighting nine different AI-relevant areas.

They include facial recognition and computer vision; operation and development of autonomous vehicles; data usage and privacy; conversational systems and chatbots; lethal autonomous weapons systems; AI ethics and bias; AI-supported decision making; malicious use of AI; and other uses of, creation of, and interaction with AI systems. The report said the European Commission is the most active in proposing new rules and regulations, with existing or proposed rules in seven of the nine areas, while the US still takes a relatively “light” regulatory approach.

Some areas, such as autonomous vehicles, are easier to define. The US and many European countries, including Belgium, Germany, and Finland, already have laws in place for testing driverless cars. In other areas, such as AI ethics, no country has yet passed regulations outlining AI responsibilities, because the topic can become rather abstract and a solution will never be as simple as ticking a box to conclude that this is ethical and that is not. Nevertheless, Europe, Singapore, Australia, and Germany are all actively considering such regulations.

It is challenging to demonstrate how AI benefits or harms the public

Not being able to define AI means regulators are unsure whether they should control AI that is static (i.e., algorithms that only perform the tasks they were designed to do) or fluid (i.e., algorithms that can improve over time with new data). Moreover, many sectors, including healthcare, have yet to strike a balance between requiring more data from people to make AI work properly and maintaining people’s trust that AI can bring real benefits and change.

Furthermore, lawmakers need to find a sweet spot: controlling AI while making sure it can thrive in the sector where it is deployed, without introducing new risks. For example, many companies have chatbots on their websites to assist clients. Related regulations should not stop the use of these virtual assistants but should protect clients when they reveal sensitive information such as bank details or medical history.

It is natural for industry to move ahead of government, since industry is best placed to define terms that reflect its needs and help legislators make more informed decisions. Can we draw lessons from the regulation of cell phone use while driving? Probably, since regulators need hands-on access to a technology before they can develop meaningful rules around it.

*

Author Bio

Hazel Tang: A science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.