Consequences of Creating Artificial General Intelligence

My essay for my Human Behaviours module on the consequences of creating artificial general intelligence

11 min read

What are the consequences of creating artificial general intelligence?

As artificial intelligence becomes a regular feature of modern society, it is important to step back and assess the possible consequences of its future development. Artificial general intelligence (AGI) can be defined as a machine or system capable of reaching or surpassing human cognitive ability, and with steady advances in the field it is conceivable that this milestone will be reached by the end of the century (CBS, 2023). Signorelli (2018) proposes new ideas about what a computer would need in order to attain human-level intelligence, touching on the paradoxes that arise from this idea and the ethical concerns surrounding the creation of AGI. This essay digs deeper into these ideas and discusses the moral and ethical consequences AGI poses for current and future society, considering the impact on jobs, how this amplifies societal inequality, and the cyber security and privacy concerns AI raises for the public.

One of the most prevalent issues with the advancement of AI is that further automation, especially in traditionally human-centric roles, can have a tremendous impact on the wider job market. This impact has been seen in every major technological revolution, reaching back to the initial industrial revolution in the 1700s, when large-scale technological unemployment was documented for the first time in the hand-spinning industry (Schneider, 2023). The trend has continued throughout modern history: Frey and Osborne (2017) studied the potential impact of computerisation on employment and found that nearly half of US jobs are at risk of being automated. The issue is exacerbated further once AI is included in these calculations; sectors and skills such as fine art, persuasion, and science and technology are now at greater risk than Frey and Osborne's study suggested (Bruun and Duka, 2018). There is wide disagreement over AGI job displacement, however. The president of the Information Technology and Innovation Foundation, a US think tank, argues that AI is unlikely to have the damaging effects on jobs that many researchers predict and will instead reshape and complement existing roles (Atkinson, 2018). Atkinson may be right, but previous waves of automation must be distinguished from an AI revolution that could culminate in the creation of general intelligence.

As mentioned before, the risk that AGI will take over jobs deemed more 'human', together with the deskilling of jobs thought safe from automation (Braverman, 1974), could create a sense of purposelessness within society and even provoke civil unrest (Voth and Caprettini, 2017). The fallout would be widespread socioeconomic damage: given the tendency of corporations to prioritise profit over the treatment of their employees (Chomsky, 1998), if AI proves as efficient as humans (and in future comes to think like humans, or even gains consciousness), the diminished reliance on human workers could widen inequality in society. This highlights some of the ethical consequences of creating AGI. A utilitarian view might hold that the longer-term benefits of AGI presented by Atkinson (2018) outweigh the initial deep harm to society, and so push for continued development of the technology. However, the damage may be too catastrophic to recover from, and a cautious approach is needed to minimise these risks.

The potential for AI and AGI development to expand societal inequality goes beyond the general population losing their jobs to automation. The growing concentration of power and wealth has become a driving factor in the widening disparity between billionaire owners and CEOs and the average working individual (Chomsky, 2017). This has been a prevalent issue throughout history, but the rise of technology companies has cemented it in modern society, with trillions hoarded and hidden away by individual billionaires and corporations (Collins, 2021). This contributes to wage stagnation and a regression in living standards for the average person: those on low incomes in the US saw an increase of only $200 between 2012 and 2019, compared with the roughly sevenfold gains of the top elites over the same period (Piper, 2023). The emergence of AI, and the eventual creation of artificial general intelligence, looks set to increase this inequality, not only through job erosion but also because whoever controls AGI will have their wealth, power and influence amplified while conditions worsen for the many (Brynjolfsson, 2022). The issue is difficult to remedy because of the black box problem: the closed-source nature of these systems makes it hard for the public to trust the technology, and the patent laws in place to protect trade secrets mean that open-source competitors are unlikely to be able to compete with the likes of OpenAI, Google and Microsoft (Eschenbach, 2021).

This vast lack of transparency around how AI models and algorithms work internally affects not just wealth inequality but societal inequality more broadly. The biases within today's AI models are a product of their training data and often reflect the judgements and social hierarchies that humans form, discriminating against protected characteristics such as race and gender (Buolamwini and Gebru, 2018; Zajko, 2022). The black box problem makes these systems difficult to audit; even though policies are being developed to ensure all AI tools are audited for bias (Domin et al., 2022), there are still cases today where AI perpetuates discriminatory bias (Yin et al., 2024). The issue comes into sharper focus through the 'garbage in, garbage out' (GIGO) phenomenon, first attributed to the IBM engineer George Fuechsel in 1960: if you put useless data into a system, you get useless data out, so if the data fed to an AI system is biased, its outputs will be too. The problem could worsen given the massive amount of AI-generated content now on the internet; if this data is used to train new AI models, it could amplify the existing bias. This is already reflected in the sentiment that generative AI is becoming less capable and that code quality is decreasing (Harding and Kloster, 2024; Chen et al., 2023). The issue extends beyond today's AI and into the realm of AGI: there is no guarantee that an AGI will conform to human ethical and moral standards (Goertzel and Pitt, 2014), and given recent events surrounding AI bias, it seems very unlikely.
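
To make this feedback loop concrete, here is a minimal, purely illustrative Python sketch (my own toy construction, not drawn from any study cited here). A trivial "model" that only learns the positive-outcome rate per group is repeatedly retrained on its own synthetic outputs; the `sharpen` parameter is an assumption standing in for the mode-seeking tendency of generative models to over-produce their most likely outputs, and it is what turns preserved bias into amplified bias.

```python
import random

random.seed(42)

GROUPS = ("A", "B")

def train(data):
    """Fit the simplest possible 'model': P(positive | group)."""
    rates = {}
    for group in GROUPS:
        labels = [label for g, label in data if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

def generate(rates, n, sharpen=0.2):
    """Sample a synthetic dataset from the model's own outputs.

    With probability `sharpen` the model emits its majority label for
    the group instead of sampling from the learned rate, mimicking
    mode-seeking generation. This step amplifies the seed bias.
    """
    data = []
    for _ in range(n):
        group = random.choice(GROUPS)
        if random.random() < sharpen:
            label = 1 if rates[group] >= 0.5 else 0
        else:
            label = 1 if random.random() < rates[group] else 0
        data.append((group, label))
    return data

# Historically biased seed data: group B receives positive outcomes
# half as often as group A.
data = [("A", 1 if random.random() < 0.6 else 0) for _ in range(2000)]
data += [("B", 1 if random.random() < 0.3 else 0) for _ in range(2000)]

for generation in range(8):
    rates = train(data)
    print(f"gen {generation}: P(pos|A) = {rates['A']:.2f}, "
          f"P(pos|B) = {rates['B']:.2f}")
    data = generate(rates, 4000)  # the next model trains on synthetic data
```

Running this shows P(pos|B) decaying towards zero while P(pos|A) climbs towards one: the original disparity is not merely preserved by retraining on model-generated data, it widens with each generation, which is the worry GIGO raises for an internet increasingly full of AI-generated training material.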

Finally, a significant concern with ever-advancing AI technology lies in the potential security issues that could be exploited not only by threat actors but also by governments against their citizens. Even with today's comparatively modest AI systems, threat actors are already leveraging the technology to craft more convincing phishing emails that bypass email security measures and spam filters, particularly spear-phishing attacks, and even to replicate a family member's voice to scam victims over the phone (Khan et al., 2021; Verma, 2023). The NCSC (2024) warns that generative AI will have a major impact on cyber attacks over the next two years, increasing the efficiency with which threat actors can strike their targets. The creation of AGI would bring these issues into sharper view: if an AI reaches or surpasses human intelligence and is trained to perform complex cyber attacks, removing the need for human involvement means attacks could happen more frequently and more efficiently (Haney, 2018; Gündoğar and Niauronis, 2023).

Along with general cyber attacks, AI and AGI also have a real impact on privacy. Solove (2006) outlines 16 major privacy risks affecting the population, a taxonomy that Lee et al. (2024) have extended by mapping Solove's original 16 risks onto a modern age dominated by AI. These can be categorised as greater opportunities for surveillance of the population, a greater ability for AI to identify and classify specific groups, and the risk of AI, generative AI in particular, leaking sensitive information. Atkinson (2018) argues that AI will not destroy privacy, on the grounds that governments and companies do not need AI to collect data on the population. But Atkinson neglects the fact that it is AI's capacity for new and novel data collection techniques, such as advanced face and speech recognition and more accurate prediction of people's behaviour (Feldstein, 2019; Saheb, 2022), that drives privacy concerns, not the fact that surveillance already exists. AGI's capacity for such data collection would be vastly superior to current technologies, highlighting a crucial consequence of its creation.

The new AI and AGI revolution could usher in a time of prosperity and hugely improve the way we live our lives. However, given the risks of mass job displacement, a widening gap in wealth and societal inequality, the potential for the technology to enhance cyber attacks, and the privacy concerns surrounding mass data collection and surveillance, AGI must be developed responsibly, with its creators prioritising the human experience and fundamental rights so as not to bring about the unintended consequences that could arise from its creation.

References

Atkinson, Robert D. (Oct. 2018). ““It Is Going to Kill Us!” and Other Myths About the Future of Artificial Intelligence”. In: IUP Journal of Computer Sciences 12.4, pp. 7–56.

Braverman, Harry (1974). Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. Monthly Review Press. ISBN: 9780853459408.

Bruun, Edvard P.G. and Alban Duka (Nov. 2018). “Artificial Intelligence, Jobs and the Future of Work: Racing with the Machines”. In: Basic Income Studies 13.2, p. 20180018. DOI: 10.1515/bis-2018-0018.

Brynjolfsson, Erik (2022). “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence”. In: Daedalus 151.2, pp. 272–287. DOI: 10.1162/daed_a_01915.

Buolamwini, Joy and Timnit Gebru (Feb. 2018). “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”. In: Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Ed. by Sorelle A. Friedler and Christo Wilson. Vol. 81. Proceedings of Machine Learning Research. PMLR, pp. 77–91.

CBS (Mar. 2023). ““Godfather of artificial intelligence” weighs in on the past and potential of AI”. In: CBS News. URL: https://www.cbsnews.com/news/godfather-of-artificial-intelligence-weighs-in-on-the-past-and-potential-of-artificial-intelligence/.

Chen, Lingjiao, Matei Zaharia, and James Zou (2023). How is ChatGPT’s behavior changing over time? arXiv: 2307.09009 [cs.CL].

Chomsky, Noam (Dec. 1998). Profit over people: Neoliberalism and the global order. New York: Seven Stories Press. ISBN: 9781888363821.
— (Mar. 2017). Requiem for the American Dream: The 10 Principles of Concentration of Wealth & Power. New York: Seven Stories Press. ISBN: 1609807367.

Collins, Chuck (2021). The Wealth Hoarders: How Billionaires Pay Millions to Hide Trillions. Polity.

Domin, Heather et al. (Nov. 2022). Standards for protecting at-risk groups in AI bias auditing. Tech. rep. IBM.

Eschenbach, Warren J. von (Sept. 2021). “Transparency and the Black Box Problem: Why We Do Not Trust AI”. In: Philosophy & Technology 34.4, pp. 1607–1622. DOI: 10.1007/s13347-021-00477-0.

Feldstein, Steven (Mar. 2019). “The Global Expansion of AI Surveillance”. In: Carnegie Endowment for International Peace, pp. 16–21. URL: http://www.jstor.org/stable/resrep20995.8.

Frey, Carl Benedikt and Michael A. Osborne (Jan. 2017). “The future of employment: How susceptible are jobs to computerisation?” In: Technological Forecasting and Social Change 114, pp. 254–280. ISSN: 0040-1625. DOI: 10.1016/j.techfore.2016.08.019. URL: https://www.sciencedirect.com/science/article/pii/S0040162516302244.

Goertzel, Ben and Joel Pitt (June 2014). “Nine ways to bias open-source artificial general intelligence toward friendliness”. In: Intelligence Unbound, pp. 61–89. DOI: 10.1002/9781118736302.ch4.

Gündoğar, Ali and Saulius Niauronis (Apr. 2023). “Overview of Potential Risks of Artificial General Intelligence Robots”. In: Applied Scientific Research 2.1, pp. 26–40. DOI: 10.56131/tmt.2023.2.1.93.

Haney, Brian Seamus (Oct. 2018). “The Perils & Promises of Artificial General Intelligence”. In: J. Legis. 151st ser. 45.2. DOI: 10.2139/ssrn.3261254.

Harding, William and Matthew Kloster (Jan. 2024). Coding on Copilot. Tech. rep. GitClear. URL: https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality.

Khan, Hajra et al. (2021). “Offensive AI: Unification of Email Generation through GPT-2 Model with a Game-Theoretic Approach for Spear-Phishing Attacks”. In: Competitive Advantage in the Digital Economy (CADE 2021). Vol. 2021, pp. 178–184. DOI: 10.1049/icp.2021.2422.

Lee, Hao-Ping et al. (2024). Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks. arXiv: 2310.07879 [cs.HC].

NCSC (Jan. 2024). The near-term impact of AI on the cyber threat. Tech. rep. Victoria, London, UK: National Cyber Security Centre. URL: https://www.ncsc.gov.uk/pdfs/report/impact-of-ai-on-cyber-threat.pdf.

Piper, Rob (2023). “Explaining the Proliferation of U.S. Billionaires During the Neoliberal Period”. In: Class, Race and Corporate Power 11.2. ISSN: 23306297.

Saheb, Tahereh (July 2022). “Ethically Contentious Aspects of Artificial Intelligence Surveillance: A Social Science Perspective”. In: AI and Ethics 3.2, pp. 369–379. DOI: 10.1007/s43681-022-00196-y.

Schneider, B. (May 2023). “Technological unemployment in the British industrial revolution: the destruction of hand spinning”. In: Oxford Economic and Social History Working Papers 207.

Signorelli, Camilo Miguel (Oct. 2018). “Can Computers Become Conscious and Overcome Humans?” In: Frontiers in Robotics and AI 5.121. ISSN: 2296-9144. DOI: 10.3389/frobt.2018.00121. URL: https://www.frontiersin.org/articles/10.3389/frobt.2018.00121.

Solove, Daniel J. (Jan. 2006). “A taxonomy of privacy”. In: University of Pennsylvania Law Review 154.3, pp. 477–564. DOI: 10.2307/40041279.

Verma, Pranshu (Mar. 2023). “They thought loved ones were calling for help. It was an AI scam.” In: The Washington Post. URL: https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/.

Voth, Hans-Joachim and Bruno Caprettini (May 2017). “Rage against the machines: New technology and violent unrest in industrialising Britain”. In: VoxEu. URL: https://cepr.org/voxeu/columns/rage-against-machines-new-technology-and-violent-unrest-industrialising-britain.

Yin, Leon, Davey Alba, and Leonardo Nicoletti (Mar. 2024). “OpenAI’s GPT Is a Recruiter’s Dream Tool. Tests Show There’s Racial Bias”. In: Bloomberg. URL: https://www.bloomberg.com/graphics/2024-openai-gpt-hiring-racial-discrimination/.

Zajko, Mike (Feb. 2022). “Artificial Intelligence, Algorithms, and Social Inequality: Sociological Contributions to Contemporary Debates”. In: Sociology Compass 16.3. DOI: 10.1111/soc4.12962.