
On Foxes, Hedgehogs, and A.I.

Archilochus, the 7th-century B.C.E. Greek poet, is credited with the observation, "[t]he fox knows many things, but the hedgehog knows one big thing."  At first glance, an ancient aphorism would seem to have little bearing on modern technology.  But as artificial intelligence (A.I.) becomes increasingly ubiquitous, the poet's adage offers a surprisingly relevant lens through which to examine our current moment - both culturally and geopolitically.

It was the historian of ideas Isaiah Berlin, however, who re-presented this textual fragment to the modern mind in his 1953 essay, "The Hedgehog and the Fox."  Sensing the complexity of post-war security, economic, and cultural shifts, Berlin employed the trope to frame a question aimed at what he considered the core problems confronting modernity, particularly western liberal democracies:  "Who knows, and what, precisely, do they know?"



Berlin's interpretation of the saying has as much to do with a presumed likeness between animal traits - the sneakiness of the fox and the defensiveness of the hedgehog - as it does with a parallel to human temperaments and outlooks.  And because A.I. provides its users near-instantaneous "knowledge" about so many diverse areas of life, wiser souls counsel a return to Berlin's question, though with the emphasis shifted from the who to the what: "Who knows, and what, precisely, do they know?"


Knowing or Judgment?


The Internet helped usher in the "Digital Age," which has exalted the value of information.  Indeed, information increasingly serves as the currency of power in the 21st century, both for individuals and for nation-states.  Those who "know" - and the speed with which they know - possess an advantage over others.  We see examples of this in the public sector (e.g., defense) as well as the private sector (e.g., wealth acquisition).

But does the technology of A.I. help us - foxes and hedgehogs alike - to know?

The better question is: does A.I. aid our knowing in order to improve our judgment?  Here, I think, the answers are mixed.  Long before the arrival of A.I., pernicious societal trends were already at work, particularly in the United States: a decline in critical thinking and problem-solving skills, and a generally diminished role for analysis and the acknowledgment of plain facts.  The speed and brute calculative power of A.I. only accentuate those continuing trends.  A.I. supplies the instant gratification the modern mind craves, along with a sufficient mass of information that seemingly provides evidence in support of (or against) the user's prompt.

Yet this is precisely the issue:  the user tends to assume that an A.I. response is dispositive rather than weighing it against, say, a counterfactual prompt.  How many of us have actively sought out disconfirming evidence for an A.I.-generated response?  Better yet:  how many of us have formulated the logical contrary of our original prompt so as to evaluate A.I.'s responses to both?  The subtle danger of A.I. is not that it "knows," but that we uncritically clothe it as "knowing."

This growing, credulous disposition toward A.I. has repercussions, not only for the sectors increasingly reliant on it, like financial markets and national security, but also for the foxes and hedgehogs among us who work within those sectors.  Under greater tension now are the fox's characteristic flexibility and attentiveness to diversity, and the hedgehog's ability to synthesize and "see the whole" of things.  That is to say, A.I. has the capacity both to dull and to sharpen what makes the human "fox" and "hedgehog" unique:  the ability to think and to act on the basis of that thought.  A passive integration of this new technology will further erode our critical faculties and, cumulatively, diminish our agency.

A.I., however, is not the first technology whose adoption counsels moderation.  Human history is full of instances in which the arrival of new know-how becomes a source of societal change and, often, conflict.  We can point to examples like:  the printing press (15th century), the steam engine (18th century), wire-harnessed electricity (20th century), and the technology that bridged the 20th and 21st centuries and with which we are now all too familiar, the Internet.  Each of these technological advances afforded man the capacity to harness its power for the improvement of himself and society.

But what makes A.I. different is that it potentially weakens our very ability to reason and judge.  Precisely because it promises to help, its uncritical adoption atrophies the faculties the tool was intended to improve.  More dangerous still is the likelihood that our judgment withers incrementally - over time and not all at once - making the decline more difficult for us to recognize and correct.

 

Broad Margins


Complex, man-made systems are often designed with redundancies to mitigate a physical failure of the system itself and to counterbalance human error.  Redundancies like these offer the user broad margins within which error does not necessarily result in harm.

In finance, where A.I. has already been integrated, we see an increasingly reduced margin for error.  The human investor's natural inclination to seek an advantage - however small - in free markets has prompted a reconsideration of A.I.'s ultimate value in light of a clearer understanding of its inherent risks.  Institutions reliant on A.I. would seem exposed to greater risk tradeoffs than individuals, particularly nation-states with sovereign wealth funds (SWFs).

And in the field of national security, the failsafes originally afforded to human decision chains have all but been eliminated with the integration of A.I.  Increasingly, the dynamic in national security decisions is reactive rather than deliberative.  This, coupled with the growing dependence of (western) industrial systems on a poorly governed cyberspace, will only accentuate A.I.'s effect in the digital playground of the 21st century, gradually removing each sector's broad margins.


Conclusion


A.I. is not the problem; we are.  Berlin's Aesopian characters underscore the need for each temperament in order to offset the weaknesses and deficiencies of the other:  the fox's fragmentary vision and intellectual diffusion; the hedgehog's inflexibility and indifference to complexity.

Agency resides not only in individuals, but also in the nation-states and institutions they compose: each is an extension of the individuals within it.  Yet unlike prior technological advances, the nature of A.I. - a simulacrum of natural intellection - only heightens the differences between foxes and hedgehogs to the extent that we wrongly ascribe to A.I. what each knows.


