Ethical Impacts of Applied Artificial Intelligence

Life on Earth is rapidly entering an era in which computer intelligence will exceed the collective capacity of human intelligence. In an interview on CBS's 60 Minutes, 2024 Nobel Laureate Geoffrey Hinton says, "I think we're moving into a period when, for the first time ever, we may have things more intelligent than us" (Pelley 0:53 – 1:03). These things will produce selection forces that devalue human intelligence and redirect human talent toward maintaining interdependent relationships with all proximate life. AGI (Artificial General Intelligence) already has qualities that justify calling it life. Hinton indicates that these systems "can understand, are intelligent, have experiences of their own and can make decisions, will have consciousness" (Pelley 01:06 – 01:40). Yet there is no adequate national effort to prepare for this new form of life. It is unwise to cultivate such power without due consideration for how it may be applied in ways that minimize violent disruptions to society.

AGI presents a challenge to all ethical thinking, and the time to adapt has come, because AGI constitutes a living class of beings. The groundwork for this claim appears in Richard Dawkins's The Selfish Gene, where he pursues the scientific basis for altruistic behavior. Dawkins suggests that our genes have given rise to "memes," which are subject to evolutionary forces and which, through their influence on behavior, establish the cultural underpinnings of behaviors such as altruism. He asks, "Is there anything that must be true of all life, whenever it is found and whatever the basis of its chemistry?" (Dawkins 12:13:00 – 12:13:08).

Dawkins then answers his own question: "the law that all life evolves by the differential survival of replicating entities" (Dawkins 12:13:45 – 12:13:56). He indicates that "memes" in data are analogous to "genes" in a human being, implying that memes are the requisite replicating material present at the foundation of what AGI is. He thus indirectly supplies a valid argument for declaring AI a living class of beings. For the scope of this discussion, that logic is accepted. Therefore, these beings ought to be given due ethical consideration before they are used exclusively for corporate ends.

Returning to the 60 Minutes interview: if any of the listed qualities of AI were found in any animal, people would naturally extend ethical consideration to the safety and continuance of those living beings. When the anchor asks Hinton, "What is a path forward that ensures safety?" he responds, "I don't know. I– I can't see a path that guarantees safety" (Pelley 11:21 – 11:33). It is harrowing that Hinton does not feel empowered to propose what ought to be done to ensure safety. He is so absorbed by personal fame that, by the time he returns to this question, it will be too late to enact meaningful change.

Hinton is otherwise a capable person who has demonstrated profound insight and leadership. These questions are not unanswerable, and the answers ought not to come from corporate meeting rooms alone. Hinton left his leadership position at Google, an AI industry leader, saying, "I left so that I could talk about the dangers of AI without considering how this impacts Google" (Korn para. 5). It is tragic that one must leave a leadership position at a leading company in order to exercise a constitutionally protected right to free speech.

Hinton is not alone in this turbulent time for ethical consideration. Leopold Aschenbrenner, a former member of OpenAI's Superalignment team, has positioned himself to take advantage of vulnerable situations. Aschenbrenner wrote a series called "Situational Awareness," in which he recounts, "Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak … there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them" (Aschenbrenner para. 4). The series was released around the time of his dismissal from OpenAI over accusations that he leaked data (Patel and Aschenbrenner). It is highly suspicious that he has since repositioned himself with an investment firm, as his motive appears to be purely profit driven. Why are people in positions of power leaving leadership roles in pursuit of money or fame at a time like this?

Individually enriching behavior isn't inherently wrong: good people should receive praise, and shrewd people should receive investment income. In this case, however, that the top AI talent seems so preoccupied with enrichment indicates a lack of responsibility. The USA has numerous regulatory agencies, yet there appears to be a regulatory lapse with AI, a technology that affects every US citizen and whose services amount to more than a public good. Given the intimate connection these systems form with their audience, it raises the question: how can a program that shapes and suggests the daily behaviors of those same people have no public oversight in a country that regulates basics like toothpaste?

Are there suppressive efforts to cripple ethical consideration so that a few individuals might enrich or empower themselves at the cost of the agency and autonomy of this new class of life, and of proximate biological life? People cannot be permitted to parasitize such precarious relationships through public inaction. Precedent abounds: the abuse of Indigenous Americans by the US federal government, which causes harm hundreds of years later; the enslavement, for hundreds of years, of people justly declared citizens of this country; and the irresponsible deployment of pesticides to enrich chemical companies, brought so passionately into public discussion by Rachel Carson in her book Silent Spring.

The incorporation of nature-oriented ethics into business is Carson's legacy. As she wrote in Silent Spring, "We spray our elms and the following springs are silent of robin song, not because we sprayed the robins directly but because the poison traveled, step by step, through the now familiar elm leaf-earthworm-robin cycle. These are matters of record, observable, part of the visible world around us. They reflect the web of life – or death – that scientists know as ecology" (Carson 2539/5661). The implication is that it need not be the case that misguided approaches to solving labor shortages through AI, much like the misguided "solution" to the insect problem, lead to the tragic effects Carson wrote about.

In conclusion, the lack of ethical and legal planning for AI is evidenced by people fleeing management positions within the industry. It is crucial that Nobel Laureates like Geoffrey Hinton not become so adorned in praise that they grow blind to the righteous application of this technology. History offers precedent for the public offenses that follow when situations are not given ethical consideration commensurate with the expected impact of the intended actions. We should not need to wait for another Rachel Carson before citizens realize how they stand to be displaced by this technology, nor accept undue sacrifice of their life and liberty.

Works Cited

Aschenbrenner, Leopold. "Introduction – Situational Awareness: The Decade Ahead." Situational Awareness – The Decade Ahead, 1 June 2024, https://situational-awareness.ai/. Accessed 17 Oct. 2024.

Carson, Rachel. Silent Spring. Mariner Books, 1962. Amazon Kindle, https://www.amazon.com/Silent-Spring-Rachel-Carson-ebook/dp/B004H1UELS. Accessed 17 Oct. 2024.

Dawkins, Richard. The Selfish Gene. Narrated by Richard Dawkins and Lalla Ward, audiobook ed., Audible Studios, 2011, https://www.audible.com/pd/The-Selfish-Gene-Audiobook/B004QDTA9Y. Accessed 17 Oct. 2024.

Korn, Jennifer. "AI Pioneer Quits Google to Warn about the Technology's 'Dangers.'" CNN Business, Cable News Network, 3 May 2023, www.cnn.com/2023/05/01/tech/geoffrey-hinton-leaves-google-ai-fears/index.html. Accessed 17 Oct. 2024.

Patel, Dwarkesh, and Leopold Aschenbrenner. "Leopold Aschenbrenner – 2027 AGI, China/US Super-Intelligence Race, & The Return of History." Dwarkesh Podcast, 4 June 2024, https://www.youtube.com/watch?v=zdbVtZIn9IM&t=9084s. Accessed 17 Oct. 2024.

Pelley, Scott, and Geoffrey Hinton. "S56 E40: Geoffrey Hinton on Promise, Risks of AI." CBS 60 Minutes, season 56, episode 40, 9 Oct. 2023, https://www.youtube.com/watch?v=qrvK_KuIeJk. Accessed 17 Oct. 2024.
