
Pay For Play

Sep 15, 2024

5 min read




I hate snakes. The hatred stems from fear. Whether that fear was created by C-rate movies like Anaconda or by my African religious superstitions, I have a relationship with serpents that puts me at a disadvantage. Because I fear them, I have not studied them, have not learned how to navigate a world in which they exist and will most likely always exist. Instead, I avoid them and everything they come with. I used to fear AI too, yet another fear birthed by sci-fi fantasy. From The Matrix to The Terminator, I have always believed that a robot uprising was in the cards and that we are all doomed. It appears I have a flair for drama and theatre, but my fears have not been without just cause: AI is risky, AI is dangerous and, like serpents, it is a reality we cannot escape, a reality full of irony, dichotomy and nuance.

This irony is symbolised by the Caduceus, a symbol of medicine with roots in Greek mythology. The Caduceus is a staff entwined by two serpents: a symbol of healing, yet intertwined with danger, with the risk of death. The symbol is confusing; one can even look at it and think back to Moses, his staff, and the Hebrew God turning the staff into a serpent and back again. The staff and the serpent are one and the same, a symbol for good. A snake's venom can itself be used to poison or to heal. If there was ever a foreshadowing of AI, it is this: a powerful force that is risky, that elicits fear and anger, but that also heals, also moves the world forward and makes lives better. We cannot talk about one without the other; we cannot have one without the other. We can, however, be better than I have been: instead of avoiding AI and all it comes with, we can confront, learn and embrace it and all its contradictions.




In developing and deploying AI systems, we must be aware of the danger that lurks behind our own curiosities. We often talk about our biases and internalized prejudices, and we will delve into that, but curiosity is a danger we often overlook. AI was an inevitable evolution for humanity: we are built with an innate desire for more, more adventure, more territory. We are conquerors by nature, and technology is the new frontier. This curiosity in AI is good. In fact, in this regard we should look at AI as Augmented Intelligence rather than Artificial Intelligence, an enhanced intelligence that has seen breakthroughs in communication, education and medicine. Can curiosity be bad? Have your 5-year-old place their hand on a burning stove to satisfy their curiosity and let that be the answer. A burnt hand and mild trauma are nothing compared to the potential harm we are teetering on in the world of AI. AI ethics leader and computer scientist Timnit Gebru highlights this in her critique of Google DeepMind's chief AGI (Artificial General Intelligence) scientist Shane Legg. Legg was named one of Time Magazine's 100 Most Influential People in AI for 2023; he also happens to be a scientist who co-authored a paper exploring his curiosities about intelligence, in which he asks, "Are men on average more intelligent than women? Are white people smarter than black people?". In her LinkedIn post, Gebru expresses her irritation, stating that she has "...been losing [her] mind seeing the equivalent of eugenicist cults running the field of AI at the moment, getting all the money, airtime, education ... and PR". To be fair, the paper was published in 2007; Legg may have changed his views and left those "curiosities" behind. Gebru does, however, have a valid take: eugenics and race in AI are things worthy of us all losing our collective minds over. One questions the necessity of these questions, and one also considers the implications of researchers using AI to "study" racial and gender-specific differences. Why? Why do we want to know who is more intelligent? What do we do with the findings? How are we defining intelligence? Are we willing to perpetuate racism and misogyny to satisfy our curiosities, and are we willing to build AI systems that "unknowingly" perpetuate misogyny and racism? If we are, curiosity will not kill the cat; AI will.




Before we develop and deploy AI systems, researchers and AI labs must confront their biases and internalized prejudices. AI mirrors back its input data and the developers' unintentional, concealed prejudices. The call for diversity in AI cannot be for optics; diversity must be at the conceptual level of everything we create in the world of AI. How do we limit exclusionary programming? By including people of all genders, ages, races, ethnicities, gender identities, disabilities and every other "category" we currently have. This will greatly slow down the rate at which we develop systems, and that slowing down is crucial in building safe and responsible AI. We cannot afford to race to be first, and we must not applaud AI labs for being the first to do anything. We need to applaud labs for being the most thorough, responsible, ethical and risk-averse. This sounds naive in the capitalistic world we live in, but we are naive: we are 5-year-olds with our hands nearing the stove top, the stove is on and we are about to get burned. The greatest intelligence we have as AI developers is understanding that we do not know much. We cannot guarantee how a system will interpret the data it processes. Building an AI algorithm to predict the likelihood of prisoners re-offending once they are released from detention is a breakthrough that can save lives and increase citizens' sense of peace and freedom. However, what do we do when the system perpetuates racism? Do we do away with the AI? It merely reflects a society whose police force stems from slavery and racism; in this regard the AI is "innocent". This example is very real: in the US, an algorithm called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) has been used to predict the recidivism of convicted criminals. Recidivism is the tendency of offenders to relapse and repeat the behaviour that initially landed them behind bars. The Pulitzer Prize-winning investigative journalism organization ProPublica published a report on the COMPAS algorithm, finding that "black defendants were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism, and white recidivists were misclassified as low risk 63.2 percent more often than black defendants". The report details other findings that highlight the racial biases of the COMPAS algorithm. The algorithm is risky, but we can reduce that risk by delaying deployment. Models such as these need refinement: we can develop and use controlled environments to thoroughly assess the potential consequences of an algorithm, but we also need to go further, into a terrain we often avoid; we need to introspect, assess our societal prejudices and build systems that can overcome them. We may lie to ourselves, but AI does not. Until we can confront and learn from "others", we will build systems that continue to help society and harm society all in one.
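To make the ProPublica finding concrete, here is a minimal sketch of the kind of audit that surfaces such disparities: computing the false positive rate (non-recidivists flagged as high risk) and false negative rate (recidivists scored as low risk) separately for each racial group. The file name, column names and score cutoff are all hypothetical assumptions, not COMPAS's actual data schema.

```python
# Hypothetical audit: per-group misclassification rates for a risk model.
import pandas as pd

# Assumed columns: race, risk_score (1-10), reoffended (0 or 1).
df = pd.read_csv("risk_scores.csv")  # hypothetical export of model outputs

# Binarise the score: treat anything at or above the cutoff as "high risk".
df["predicted_high_risk"] = df["risk_score"] >= 5  # assumed cutoff

for group, subset in df.groupby("race"):
    did_not_reoffend = subset[subset["reoffended"] == 0]
    reoffended = subset[subset["reoffended"] == 1]

    # False positive rate: non-recidivists wrongly labelled high risk.
    fpr = did_not_reoffend["predicted_high_risk"].mean()
    # False negative rate: recidivists wrongly labelled low risk.
    fnr = 1 - reoffended["predicted_high_risk"].mean()

    print(f"{group}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```

A model can look "accurate" overall while these two rates diverge sharply between groups, which is precisely the pattern ProPublica reported; auditing them in a controlled environment before deployment is one way to delay release responsibly.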




With great power comes great responsibility, and accountability, and consequence. Modern times may require us to edit quotes, even our favourite ones. To say that AI "poses a risk" to society is weak, timid language; to euphemise AI and its risk is to delay society's urgency to act. AI is a risk, it is a threat, but danger does not equal evil. Venom is dangerous, yet years of research have shown us that the thing we fear can heal us; that discovery required patience and a healthy dose of fear. AI and its development require the same clinical, cynical and concerned approach. We cannot play with AI without paying the price.

