Encyclopedia Britannica adds: “(It is) A question first posed by the contemporary British philosopher Philippa Foot as a qualified defence of the doctrine of double effect and as an argument for her thesis that negative duties carry significantly more weight in moral decision-making than positive duties.”
My modification: add two more complications to the problem. First, speed, or rather acceleration. Second, complexity. The trolley is hurtling along at breakneck speed, and changing tracks demands complex manoeuvres. Welcome to the age of AI.
AI usually does not figure in our daily national debates. Our media, to its significant disadvantage, seems content with delivering platitudes about developments like the 27th Amendment while the rules of the game are being written elsewhere. And this is not peculiar to this country. The mainstream media (MSM) everywhere has exposed a fatal flaw: its inability to understand the rise of something as basic as social media. As delivery methods, organising principles and attention spans shifted, the MSM fell a step behind. The fact is that while some of us may work inside gated and guarded silos, which is of course a distinct advantage, we, too, are just content creators. The word “journalist” may still exist as a job title, but it is redundant. So, to expect the MSM to pay AI the attention it deserves is a big and unrealistic ask.
But it is AI, not the Ravenous Bugblatter Beast of Traal, a creature of such mind-boggling stupidity that it assumes if you cannot see it, it cannot see you. AI sees you even if you refuse to see it. As for the 27th Amendment or other such developments you take too seriously, well, they amount to nothing given that AI will flatten every workspace, including states and governments, within five to seven years.
This week, however, AI made its presence felt in our discourse. A reporter from a leading publication forgot to remove an AI-generated message from their piece, and it was published. This has caused quite a furore in the industry. I understand the concern about the oversight failure. But I am surprised by the moral indignation. As a journalist, I have seen too many reporters add their byline to a wire story while forgetting to remove the service’s credit line at the end. Of course, large publishing houses are expected to have multiple layers of oversight, and when these fail, it is a legitimate cause for concern. That said, to think a journalist will not leverage AI to improve their work is suicidally counterintuitive.
I vividly remember a newsroom where the chief reporter would ask his subordinate, yours truly, to edit his weekly column. That his written piece was usually flawed at a deep structural level and fell short of the assigned word limit by half meant I had to restructure and fill in the rest without any attribution. Human nature doesn’t change much. Despite almost three decades of writing, I still take a couple of hours to produce a piece like this. A chatbot can generate up to a hundred words per second. There is no competition.
Now, let us turn to the global discussion. We fear what we do not understand. Therefore, considerable online fear-mongering is to be expected. But two major talking points have emerged recently, which threaten to overwhelm and crowd out the legitimate concerns. These exaggerated fears relate to the possibility of an AI investment bubble and an AI-caused doomsday. While misplaced, both stem from genuine human experience and survival instinct.
They say once bitten, twice shy. There have been moments in history when investors lost perspective to the zeitgeist. Neither historians nor the fallout will let us forget the dotcom bubble of the 1990s and the financial crash of 2008. So, when people see unimaginably huge sums of money being invested in AI, they assume it is another bubble. And frankly, given the nature of human greed and myopia, one bad step can land us all in the worst possible financial crisis. But such fears are largely overblown and misplaced, if not unfounded. AI, unlike the untested dotcoms or subprime property derivatives, has immense value. So much value that it is already reshaping the very foundations of human civilisation.
Similarly, the fear of a sudden rise of an AI overlord hell-bent on destroying humanity may seem compelling in Hollywood films or on the pages of science fiction, but for now, it is patently absurd. We are still far from achieving artificial superintelligence. Today’s Gulliver remains firmly, if clumsily, tied down.
However, this instinctive fear of AI is adding to the complications of civilisation. Irrational fear causes irrational overreactions. So far, the gatekeepers of industries like Hollywood, media and publishing have tried to block AI’s help in content generation. Since we have not had honest conversations about what goes and what stays, we risk making ourselves irrelevant. AI development is accelerating and growing more complex. If we cautiously and transparently allow AI collaboration in content creation, install well-conceived guardrails and develop processes for due diligence and accountability, the result might be a well-equipped, retooled workforce for the new age. You may have heard that AI may not take your job, but a person who knows how to use AI effectively will. Why not be that person yourself?
The quantum of investment in AI, however, creates an imperative. Investors want to see profits. If legitimate AI use cases are denied to users, what happens next? Here is a sobering quote from Sam Altman: “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
There is already an unhealthy quantity of porn on the internet. Additionally, you have agents provocateurs like Elon Musk, who continually push the envelope. Now imagine major AI firms adding to this content. If the gatekeepers do not adapt to the AI reality, they will condemn future generations to harmful addictions and make the future of civilisation truly bleak. Allow legitimate use cases in your industry, with filters and due diligence, now, before it is too late.
This is a textbook case of the trolley problem. You cannot save everything. However, timely attention and response can give us some agency in this increasingly complex and accelerating future. Who knows, it may ultimately benefit civilisation.