by V. Duane J. Lacey, Ph.D.


Generally speaking, I do not subscribe to the idea that fear can adequately account for everything. However, I do think it is worthwhile to recognize that fear sometimes works in 'mysterious ways', and does so without our realizing it. Desh Subba's Philosophy of Fearism helps us to recognize fear as central to our philosophical tradition, especially the Western tradition. As a strategy for approaching philosophical issues both past and present, fearism focuses our attention on how fear operates both when we are aware of it and when we are not. Here I want to highlight some of the ways in which fear seems to manifest in the ongoing development of our relationship specifically to Artificial Intelligence, or AI. In particular, I will focus on a kind of Hobbesian 'diffidence' and its relevance to certain aspects of the world around us today, as well as on a sense of 'nostalgia' for the world that has since become a thing of the past. In both cases the question arises: is it fear that might be at work, though not immediately recognizable as such, in our concerns about AI? Can diffidence actually be useful for us? Can nostalgia?

      AI is itself a kind of question mark. In recent public interviews, Yuval Noah Harari has said that every generation thinks it is facing a unique new problem never encountered before, but that this time it is actually true. The reason this might be the case, he argues, is that AI is a problem we have not seen before insofar as it represents a technology that can both create new things and make its own decisions, whereas previous new technologies, no matter how dramatic, impactful or advanced, have not presented this possibility. In other words, humans (even if only a small, entitled, irresponsible and powerful self-interested group of them) have always had a level of control over technological advancements. This is why much of the relevant philosophical literature addressing technology and culture, especially in the twentieth century, is focused on understanding how to 'use' technology in an ethical and
responsible manner. Thinkers like Benjamin, Heidegger and Jonas, and later Postman, help us, in part, to recognize the need to resist a 'technophile' love for further and further advancement; that is, each in his own way approaches and critiques the assumption that because we can do something, we should. This question of responsibility, however, does not apply to AI in the same way it does to earlier technologies; or, to put it another way, while it is certainly necessary to ask these same questions about AI, by themselves these questions are not sufficient to address the challenges that AI represents.


      In this regard, one new challenge that AI represents is a degree of autonomy from human control. As such, AI becomes, as it were, another group or category of social actor, which means that we may be able to apply a Hobbesian perspective not merely with respect to fellow humans, but with respect to AI. Here Subba's fearist critique of Hobbes' system as "Fearolotic" ('fear' and 'politics'), a politics grounded in fear, is a helpful perspective that allows us to recognize the risk of misunderstanding fear in the manner that Hobbes might be said to misunderstand it. This perspective also allows us to redirect the Hobbesian notion of fear that he characterizes as 'diffidence' (for example in Part I, Chapter XIII of the Leviathan) in our dealings with others in a state of nature. In other words, the diffidence that Hobbes identifies as one of the causes of our quarrelsome nature, i.e., our distrust of one another without a social contract, is now likewise
applicable to AI. We do not know, and we are not confident about, how this technology will interact with us. Thus we are distrustful, suspicious, diffident toward, and in this sense fearful of, AI.

      For Hobbes, the natural solution to this mutual mistrust, this diffidence, is attained through human reason and our desire for confidence through peace. Hence the social contract, when enforced, is our agreed-upon source of confidence against diffidence. Why should such a solution not work in the case of AI? Perhaps the most obvious factor is that such a social contract would require acknowledging AI as an equal player or participant in the contract. Here the weight of the Hobbesian notion presents itself quite clearly: what if we cannot trust an AI to uphold its side of the bargain? This is the experience of diffidence, and neither was this state of affairs sufficient, in the Hobbesian system, for a contract between humans. Instead, a Sovereign was required in order to enforce the contract. Who, or what, then, should act as Sovereign in a social contract between humans and AI? If AI becomes as advanced beyond human capacity as some fear it will, would it agree to a human Sovereign? Would humans agree to an AI Sovereign?


     When or how, moreover, would any of this agreement, or lack thereof, take place? Perhaps it is already happening, and this is one of the reasons why it is so difficult for us to know exactly what will happen or when; not so much because we are unable to predict the outcome, but because it is already taking place gradually, in disparate and seemingly unconnected ways. Step back for a moment and consider the extent to which AI is already a well-integrated part of our daily lives: our Google searches, our online identities, the algorithms to which we either knowingly or unknowingly expose ourselves and by which we are both directly and indirectly influenced, and so on. It is worth considering that our fear of AI is always hypothetical, and that it is precisely by means of a hypothetical fear that we fail to recognize the extent to which we have already entered into, or are entering into, a social contract with AI. In this regard, fear becomes a distraction, like the misdirection in a magic trick. In some ways, fear always operates in this manner. When we are actually experiencing something it is no longer a matter of fear, but a matter of fact, sometimes painful, sometimes joyful, and sometimes met with indifference or a lack of real awareness. If this dynamic is at work in the case of AI, then our fear of AI in the future is exactly what allows AI to infiltrate our present without our really being aware of it.

     If this dynamic is currently at work, then our challenge is to redirect our Hobbesian diffidence away from our future with AI and focus it upon our already present contract with AI. But how might we do so? Actually, it is not so difficult to redirect our diffidence in this way, though doing so may be impractical (which is itself part of the challenge). In order to redirect our diffidence, especially on an individual level, we may simply consider our daily routines. How much of our daily lives involves or requires AI? Which aspects and activities of our day-to-day lives would remain precisely as they are if they were completely devoid of AI? Perhaps, when each of us actually does step back to consider our lives in this manner and with this diffidence, we may even come to realize that it is not our lives with AI that we fear, but rather our lives without it. I do not say that this would be or that it is the case, but it is possible.


     It is with this possibility, then, that nostalgia might actually be a useful experience. One of the challenges we face when redirecting our diffidence in the manner described above is that our lives today seem unimaginable without at least some form of AI. Yet there are those still alive today who can, from memory, better imagine such a life. The pang or pain of lament that accompanies nostalgia may seem like a matter of loss. The feeling of nostalgia, that is, may seem like a useless longing for that which is no longer the case. Is it not pointless, then, to lament and long for that which is gone and can never be attained again? Was that life, which was less inundated with the pervasive amounts of AI we encounter today, however we might remember it, not itself part of the very conditions that led to AI and to our lives as they now are in the first place? Even if we could 'go back', would we not just end up in the same present state of affairs? That latter question, of course, is quite difficult to answer, and requires a good scientific and science-fiction mind with a grasp of the actual possibility of time travel even to begin to answer it. However, the pain of nostalgia could be useful, not because we can go back in time, but because it may be a guidepost, a warning or an indication of something that we can, in fact, address and fix, at least to some extent and most of all with respect to ourselves individually. Here again, similar to what Subba wants us to recognize about fear, nostalgia is not simply something that we must avoid or overcome. Fear, and I think nostalgia as well, are both real experiences that have value when they are properly integrated into our larger human experience. Instead of indulging a vague longing for the past, we may allow the sadness or pain of nostalgia to be an open doorway through which to pass and explore, whereby one may ask oneself: what, exactly, do I long for? Certainly we do not want a return to everything from the past down to the smallest minutiae. So what is it, exactly? With this question, once again, we may apply our diffidence. What role, and in what form, if any, did AI play in that past, and specifically in that part of the past to which our nostalgia has led us? If it is something that we can rescue from the past and reintegrate into our lives today, then our relationship to AI becomes insightful, informed, more intentional and under our control, at least to some extent and more so than before.


I acknowledge that there is a fear of AI today, and that it is mainly concerned with an uncertain future. I have called this a form of Hobbesian diffidence specifically toward our future with AI. When, however, we consider a perspective such as Subba's fearism, we are reminded that we can redirect that diffidence from the future toward our present lives. When we use the fear and distrust that characterize diffidence as a tool through which to analyze our already present social contract with AI, then we are taking control of that fear and putting it to work on what is the case, rather than on what might be the case. In so doing, our attention is not misdirected toward an unknown future and away from our present, and we may begin to discern the details of our fear and diffidence. Next, now that we are using diffidence as a tool, what aspects of our lives today cause us to lament what is no longer the case or what is missing? What leads to a feeling of nostalgia? We may then put this feeling to use as well, and combine our diffidence toward the present (not the future) with our nostalgia for the past. We may turn that diffidence further toward the past, toward that for which we are nostalgic, and ask: what is viable (and desirable) to bring back from what is gone into our present lives today? How much of what we would like to rescue from the past is already imbued with AI, and in what form? These are some suggestions, then, for how we can make use of our capacities for diffidence and nostalgia in addressing our fear of Artificial Intelligence, past, present and future.


Comments

  • A thoughtful paper. The author's suggestion (action) to utilize, to move into and forward with, diffidence and nostalgia within a context of fear(ism) seems well advised. I agree that machines (machinic intelligence) and A.I. generally are here to stay; the issue is more how we humans manage and balance the agendas, ideologies, and means of production hitherto (?), and that is where the rub lies. I am not overly optimistic or pessimistic about A.I. Like most technology, it is over-marketed with hype.
