The Development of Artificial Intelligence and Intelligent Agents: Potential Ethical and Privacy Issues
[Image — caption: "...the technology scene could hardly be more like the sci-fi movies of ten years ago"]
“Hey Google, good morning!” That was me talking to my Google virtual assistant this morning. It replied, addressing me by my last name, before updating me on the weather, the traffic to work, and news from three major news networks! This intelligent assistant can turn on lights, control the TV and sound system, adjust the thermostat, and perform a whole range of other tasks just by being asked. It is the year 2020, and the technology scene could hardly be more like the sci-fi movies of ten years ago, thanks to rapid development in information technology. The discourse about artificial intelligence (AI) has shifted from marvelling at IBM’s Deep Blue computer defeating Garry Kasparov[1] at chess to self-driving cars, AI-enhanced rockets on Mars and Moon missions, intelligent robots, and more.
Earlier studies tended to describe AI by comparison with human qualities such as thought, rationality, and action. John McCarthy defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs.”[2] However, Matthew U. Scherer, writing in the Harvard Journal of Law and Technology, highlighted some of the challenges of defining AI as an “intelligent agent,” especially as it relates to addressing the ethical issues arising from the development of AI technology (Scherer, 2016). What could happen if the deployment of AI is left unchecked? Shouldn’t there be laws protecting human privacy and systems, ensuring the safe and fair use of intelligent agents? The last few years have seen governments and organizations around the world express growing concern about AI in warfare, AI-driven unemployment, racial bias in AI, AI errors and mistakes, and the many privacy and security risks that come with the growing number of IoT devices. This write-up discusses some of the ways individuals and organizations are affected, and the efforts being made to make AI technology safe for humanity.
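To make the “intelligent agent” framing concrete, the sketch below shows the perceive-decide-act loop the term usually implies: a program that senses its environment and picks the action expected to best satisfy its goal. The `ThermostatAgent` class and its threshold values are invented for illustration, not drawn from any cited source.

```python
# Minimal, hypothetical sketch of the perceive-decide-act loop behind
# the "intelligent agent" framing. Names and values are invented.

class ThermostatAgent:
    """Toy agent: perceives a temperature, acts to reach a target."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def perceive(self, sensor_reading: float) -> float:
        # A real system would have to cope with noisy or faulty sensors.
        return sensor_reading

    def decide(self, temp: float) -> str:
        # Choose the action expected to move the world toward the goal.
        if temp < self.target_temp - 0.5:
            return "heat_on"
        if temp > self.target_temp + 0.5:
            return "heat_off"
        return "hold"


agent = ThermostatAgent(target_temp=21.0)
print(agent.decide(agent.perceive(18.2)))  # -> "heat_on"
```

Even in this toy form, the definitional difficulty Scherer raises is visible: the "intelligence" lives entirely in the `decide` rule, and nothing in the code marks where a simple controller ends and an intelligent agent begins.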
Karen Hao, an artificial intelligence reporter for the MIT Technology Review, cited a survey by the Center for the Governance of AI finding that “eight in ten Americans believe that AI and robotics should be managed carefully.”[3] The same source showed that most respondents favoured tech companies and non-governmental organizations managing AI. Those findings were put to the test when the U.S. military contracted Microsoft to adapt its HoloLens 2 augmented reality headset for military use. According to NBC News, the military claimed this intelligent tech can “increase lethality” by “enhancing the ability to detect, decide and engage before the enemy.” One can conclude that any military with the most advanced AI-enhanced weapons systems holds a decisive advantage, and that a battle between two “AI powers” could have far more destructive consequences. Citing ethical objections, Microsoft workers around the world protested the contract. The race for AI-enhanced military and weapons systems is on, much like the nuclear arms race before it. What could go wrong if there is an error in an AI system’s algorithm, or if the system gets hacked?
The recent Boeing 737 Max plane crashes are a stark illustration of what can go wrong when AI is allowed to “act with considerable autonomy” (Scherer, 2016). The 737 Max’s automation took control of the aeroplane to correct a perceived stall, without notice and without an option to override. Much of the blame has been heaped on Boeing for making the MCAS “too opaque”[4] to the human pilots, and on the Federal Aviation Administration (FAA) for shoddily certifying the aircraft. Joseph P. Farrell concluded that the “automated safety systems designed to avoid a stall turned out to have given us a rogue plane, killing us to make us safe.” Similar defects have been reported in the auto industry, with a self-driving Volvo reportedly failing to recognize a cyclist and running them over. It is reasonable to hold tech companies and programmers liable when their AI products malfunction.
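The design lesson here can be made concrete with a toy control loop. The sketch below is emphatically not Boeing’s actual MCAS logic; it is a hypothetical illustration, with invented names and thresholds, of how a single design decision about whether pilot input can countermand the automation changes the outcome when a sensor fails.

```python
# Hypothetical illustration of the design argument, NOT Boeing's MCAS
# logic: automation acting on one faulty sensor, with and without a
# pilot override path. All names and values are invented.

def trim_command(angle_of_attack: float, pilot_override: bool,
                 allow_override: bool) -> str:
    STALL_THRESHOLD = 15.0  # degrees; an illustrative value only

    if allow_override and pilot_override:
        return "manual_control"   # the human stays in the loop
    if angle_of_attack > STALL_THRESHOLD:
        return "nose_down"        # automation "corrects" the stall
    return "neutral"


# A faulty sensor reports a stall that is not actually happening.
faulty_reading = 22.0
print(trim_command(faulty_reading, pilot_override=True, allow_override=False))
# -> "nose_down": the pilot's input is ignored by design.
print(trim_command(faulty_reading, pilot_override=True, allow_override=True))
# -> "manual_control": one design choice restores human authority.
```

The point is not the aerodynamics but the architecture: when autonomy is opaque and the override path is absent, a single bad input can steer the whole system, with no human able to intervene.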
[Image — caption: "...the government should work with creators to ensure that they comply with existing laws that cover discrimination and privacy."]
Humans write AI programs, and one wonders whether their biases are passed on to the systems they build. One need not look far to find scenarios where an AI acted with prejudice or made a serious error of judgement. Hannah Devlin reported in The Guardian that computer scientists have found that “machine learning algorithms are picking up deeply ingrained racial and gender prejudices concealed within the patterns of language use.” The bias was demonstrated with a machine-learning tool known as “word embedding,” which was found to associate European-American names with pleasant words while African-American names were more commonly associated with unpleasant words. This research is particularly important because it examined the language data on which machine-learning systems are trained, the foundation of these emerging systems. Such systems will be used to run algorithms on job sites, search engines, criminal justice applications, facial recognition platforms, and more. If anything, the findings reveal how serious the problem is and why it should be mended at the foundation.
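The kind of measurement behind these findings can be sketched in a few lines: in a word-embedding space, a name’s bias score is how much closer its vector sits to pleasant words than to unpleasant ones, which is the idea behind the embedding-association tests the researchers used. The three-dimensional vectors below are fabricated for illustration; real embeddings such as GloVe have hundreds of dimensions learned from web-scale text.

```python
# Sketch of an embedding-association test. The tiny 3-d vectors are
# fabricated for illustration; real embeddings are learned from text.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "happy":    np.array([0.9, 0.1, 0.0]),
    "terrible": np.array([-0.8, 0.2, 0.1]),
    "name_a":   np.array([0.8, 0.2, 0.1]),   # lands near pleasant words
    "name_b":   np.array([-0.7, 0.3, 0.2]),  # lands near unpleasant words
}

def association(name, pleasant=("happy",), unpleasant=("terrible",)):
    # Positive score: the name sits closer to pleasant than unpleasant words.
    p = np.mean([cosine(embeddings[name], embeddings[w]) for w in pleasant])
    u = np.mean([cosine(embeddings[name], embeddings[w]) for w in unpleasant])
    return p - u

print(association("name_a"))  # strongly positive
print(association("name_b"))  # strongly negative: the learned bias
```

No programmer wrote a prejudiced rule here; the skew comes entirely from where the training text placed the vectors, which is exactly why the problem has to be fixed in the data foundation rather than patched downstream.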
Virtual assistants are probably the intelligent agents most people can relate to. In the last few years, devices ranging from smartphones to smart TVs, Bluetooth speakers, and cars have gained intelligent personal assistants like Amazon’s Alexa, Apple’s Siri, and the Google Assistant. They are quite popular and fast becoming essentials in many homes: with a customized voice summons, they can provide information and control a whole range of IoT devices. TechCrunch reports that “47.3 million U.S. adults have access to a smart speaker.” Unlike the other AI systems discussed above, the major challenges facing virtual assistants and IoT devices are invasion of privacy, eavesdropping, and hacking (which turns vulnerable IoT devices into zombies, or tools for DDoS attacks). Tech companies should be required to disclose any feature of an AI device that might intrude on privacy, and to provide better security for IoT.
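The eavesdropping concern is structural: to hear its wake word at all, a smart speaker’s microphone must run continuously, and only the device’s software decides when audio leaves the home. The sketch below is a hypothetical wake-word loop with entirely invented function names, meant only to show where that decision lives.

```python
# Hypothetical sketch of a smart-speaker wake-word loop; every name is
# invented for illustration. The point: the microphone feeds a rolling
# buffer continuously, and software (not hardware) decides when audio
# leaves the device.
from collections import deque

SAMPLE_RATE = 16_000
audio_buffer = deque(maxlen=2 * SAMPLE_RATE)  # last ~2 seconds of audio

def detect_wake_word(buffer) -> bool:
    # Placeholder for an on-device model; always False in this sketch.
    return False

def send_to_cloud(samples) -> None:
    # Placeholder: a real device would upload audio for transcription.
    print(f"uploading {len(samples)} samples")

def handle_mic_frame(frame) -> None:
    audio_buffer.extend(frame)             # audio is captured unconditionally
    if detect_wake_word(audio_buffer):
        send_to_cloud(list(audio_buffer))  # only now does audio leave the home

# Simulate a stream of silent microphone frames: nothing is uploaded,
# yet every frame was still buffered locally.
for _ in range(10):
    handle_mic_frame([0] * 512)
print(f"{len(audio_buffer)} samples buffered locally, 0 uploaded")
```

A disclosure mandate would, in effect, require vendors to document exactly this boundary: what is buffered, what triggers an upload, and what happens to the audio afterwards.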
Conclusion
One can conclude that the rapid development of information technology over the last ten years ushered in the age of artificial intelligence as we know it today. If progress continues at the present rate, we can expect AI to be incorporated into virtually every aspect of human life; hence the need to address the ethical concerns at the foundation, as the word-embedding research suggests (Devlin, 2017). The arrival of AI technology was not sudden, yet the world seems ill-prepared for its attendant ethical concerns: AI-enhanced weapons systems that are essentially WMD[5], AI going rogue due to faulty design, human biases manifesting in AI systems, and attack-prone virtual assistants and IoT devices. Studies have established that AI is a smart system programmed to learn and complete tasks with “the best possible outcome” (Scherer, 2016), and the output of an AI system depends on human input. Therefore, the government should work with creators to ensure that they comply with existing laws covering discrimination and privacy, so that standards are established to regulate the development of artificial intelligence technology.
References
1. Balasubramaniam, S. (2018). Artificial Intelligence. DAWN: Journal for Contemporary Research in Management, 5(1), 12–18. Retrieved from http://search.ebscohost.com.ezproxy.umuc.edu/login.aspx?direct=true&db=bth&AN=131913440&site=eds-live&scope=site
2. Hao, K. (2019, January 11). Americans want to regulate AI but don't trust anyone to do it. MIT Technology Review. Retrieved April 12, 2019, from https://www.technologyreview.com/s/612734/americans-want-to-regulate-ai-but-dont-trust-anyone-to-do-it/
3. Scherer, M. U. (2016). Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology, 29(2), 353–400. Retrieved from http://search.ebscohost.com.ezproxy.umuc.edu/login.aspx?direct=true&db=asn&AN=118277780&site=eds-live&scope=site
4. Solon, O. (2019, February 22). 'We did not sign up to develop weapons': Microsoft workers protest $480m HoloLens military deal. Retrieved April 13, 2019, from https://www.nbcnews.com/tech/tech-news/we-did-not-sign-develop-weapons-microsoft-workers-protest-480m-n974761
5. Zhang, B., & Dafoe, A., Center for the Governance of AI. (2019). Artificial Intelligence: American Attitudes and Trends. Retrieved from https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/general-attitudes-toward-ai.html#harmful-consequences-of-ai-in-the-context-of-other-global-risks
6. Farrell, J. (2019). The Opacity of AI and Those Boeing Plane Crashes. Retrieved from https://gizadeathstar.com/2019/03/the-opacity-of-ai-and-those-boeing-plane-crashes/
7. Devlin, H. (2017, April 13). AI Programs Exhibit Racial and Gender Biases, research reveals. Retrieved April 14, 2019, from https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals
8. Thompson, J. (2019, March 15). Boeing 737 Max: An Artificial Intelligence Event? The Unz Review. Retrieved April 14, 2019, from https://www.lewrockwell.com/2019/03/no_author/boeing-737-max-an-artificial-intelligence-event/
[1] Chess playing is often regarded as one measure of human intelligence. Garry Kasparov, a chess World Champion who has been described as the greatest chess player of all time, lost to IBM’s “Deep Blue,” an early AI computer, in 1997.
[2] John McCarthy was an American computer scientist at Stanford University and Dartmouth College. He is credited with coining the term artificial intelligence and contributed immensely to its study. “He was interested in developing systems that exhibited human-level intelligence.” (http://jmc.stanford.edu/general/index.html)
[3] This information originally comes from the Center for the Governance of AI, where a survey asked respondents to agree or disagree that robots and artificial intelligence are technologies requiring careful management. See reference 5.
[4] The Maneuvering Characteristics Augmentation System (MCAS) is a sort of automated pilot assist. The MCAS engaged when it falsely perceived that the airplanes were stalling. Boeing allegedly “kept the changes to the MCAS out of the pilots’ training manuals,” which suggests that the pilots in the crashed planes had no way to regain control of a nosediving aircraft. https://gizadeathstar.com/2019/03/the-opacity-of-ai-and-those-boeing-plane-crashes/
[5] Weapons of mass destruction