The world is observing rapid innovations in the field of AI like never before. New AI technology is soon expected to become an integral part of our personal and professional lives. Today, we are already interacting with AI through smartphones, computers, and other connected devices which is making our lives more comfortable. Right from enhancing patient services in healthcare to optimizing logistics and detecting fraud intelligent machine systems are transforming our lives. However, with the rise of AI, both pioneers and critics are agreeing on the importance of ethics.
All the tech giants including Amazon, Alphabet, Facebook, Microsoft, and IBM are talking about the landscape of AI. This is a new frontier for emerging technology which makes it crucial for ethics and risk assessment. Some of the big questions AI experts are facing include:
In the future, AI is predicted to add immensely add to the economic system. Many companies even today follow an hourly wage system, but as AI will automate company task it will drastically cut down the human workforce. This will mean that the generated revenues will be distributed among fewer people widening the wealth gap. For example, in 2014, the three biggest companies in Detroit and Silicon Valley generated roughly the same revenue, however, the number of employees in Silicon Valley was 10 times less. So the question arises that how do we structure a fair post-labor economy?
Human and machine intelligence comes from learning. In AI, when a system is under training they learn to detect correct patterns according to their input and act on them. Then it goes to another test phase and is tested with more examples to see how it performs. As the training phase cannot cover all possibilities of the real world it can be fooled in many ways. This brings us another question, how can we protect the systems from making such mistakes?
AI is advancing in terms of modeling human conversation and relationships. This milestone will lead to more human and machine interaction be it sales or customer service. Humans have limited attention span and bots are able to channel virtually unlimited resources into relationships building. This led us to a simple question, how machines will impact human behavior and interaction?
Advancements in technology will make it powerful enough to be used for malicious activities. It has can be used to exploit the vulnerability of AI systems. Also, as these technologies get smarter they will impact they will change the nature of threats, making them more random in appearance, harder to detect, efficient at identifying and targeting vulnerabilities, and adaptive to systems and environments. This will make cybersecurity more important as the system will be more capable and fast by orders of magnitude. This brings us down to another question: How are we planning to keep AI safe from adversaries?
AI capable of speed and capacity, however, it is very hard to trust it to be fair and neutral. For example, Google is one of the leaders in AI but in Google’s Photos which relies on AI for the identification of people, scenes, and objects, it sometimes goes wrong. This happens when the camera missed the mark on racial sensitivity or makes other similar mistakes. Humans themselves are biased and judgmental and as AI systems are created by us they can be imperfect. The next question arises, how do we eliminate AI bias?
Today, humans dominate the food chain but not due to sharp teeth or muscles but intelligence and ingenuity. In the years to come, AI will become faster, smarter, and more capable, additionally, they will be able to freely move and defend themselves. This brings us to another serious question: if AI will have the same advantage over humans one day as we have over the food chain, how do we stay in control from the system?
The first threat to AI might not be the adversaries rather the system could possibly of it turned against humans. This doesn’t mean the AI disasters in Hollywood movies. It might fulfill a wish like a genie but with terrible unforeseen consequences. This might happen due to a lack of understanding of the full context the wish was made in.
For example, if an AI system was asked to eradicate cancer and after a lot of computing, it produces a formula that way of ending cancer if by killing everyone with cancer. Under such circumstances, the system has achieved its goal very efficiently, but not according to the human intend. Bring us to the next question: How are we planning to protect against unintended consequences?
AI systems today are fairly superficial; however, with new developments every day they are becoming more complex and alive. While conscious experience is still unknown to neuroscientists there is an understanding of its basic mechanisms of reward and aversion just like animals. So if its reward functions are giving negative input can we consider it to be suffering? Furthermore, a system’s genetic algorithms generated many instances at the same time. The unsuccessful instances are then erased and only the successful instance survives which then combines to form the next generation of instances. Can this be considered as mass murder. Leading to a complex question: Will they be treated like animals with intelligence or consider suffering as a feeling of a machine?
These the most asked question out many rising every day. AI has huge potential but humans should not forget it’s only for our betterment. They should remember the importance of technology and implement it for the welfare of humanity.