Autistification and Its Discontents: From Self-Driving Cars to Self-Thinking Brains | by Cameron Bandari


10th century Persian polymath Zakariya al-Razi tinkering in the lab

Abstract: I begin this essay by summarizing David Krakauer’s theory of complementary and competitive cognitive artifacts (a fuller explanation of which can be heard here). I then utilize Timur Kuran’s theory of soft knowledge and Eric Weinstein’s four types of fake news to argue that Google is perhaps the most troubling competitive cognitive artifact around today, due to its increasingly monopolistic status as the arbiter of reliable knowledge. This is followed by an outline of how Elon Musk’s proposed cyborg technology could unintentionally result in socio-politically and epistemologically catastrophic consequences, and how his foundational assumptions might obscure the significance of these risks from his analysis. I refer to this problem as autistification, and use it to introduce my theory of culture as extended cognition, in order to begin an exploration of how we can be more thoughtful in our approach to adopting novel technologies.

Why Sam Harris Should Memorize the Quran

David Krakauer of the Santa Fe Institute has coined the term complementary cognitive artifact to refer to objects, ideas, belief-systems, and prescribed behaviors that have been refined and transmitted within a culture, often over the course of centuries or more. These cultural artifacts evolved in a bottom-up fashion, with individuals tinkering and iterating in ways that converged towards optimality over the course of generations. One of the key characteristics of these cognitive artifacts is that they didn’t evolve to be optimal in a universal sense — samosas are not necessarily optimal for the Swedish; rather, they were optimal for the needs and capacities of the culture in which they emerged. Another critical component is that even when these cognitive artifacts are taken away from you, “you have in your mind a trace of their attributes that you can deploy.”

Think of the abacus. In the same way that learning arithmetic allows you to perform calculations without a pen and paper, expert users of the abacus — a complementary cognitive artifact that has fallen out of fashion lately — no longer have to use the physical device. According to Krakauer, “as you get better and better, the place in the brain in which the abacus is represented shifts from language-like areas, like Broca’s areas, to visual-spatial areas in the brain. And so it really is a beautiful example of an object in the world restructuring the brain in order to perform a task efficiently — in other words — intelligently.” Moreover, “if you become competent at the abacus, you’re not just competent at arithmetic”; your “linguistic competence and geometric reasoning” improve as well. “So it doesn’t have a firewall around it such that its functional advantages are confined to arithmetic…in fact, I think that’s generally true for all interesting complementary cognitive artifacts.”

Take writing, for example: research has shown that writing in cursive is “intimately connected with the acquisition of literacy itself.” And I would argue that the specific culturally transmitted form of cursive or calligraphic writing — the complementary cognitive artifact — not only restructures the physical brain and its ability to perceive and understand language, but restructures its user’s entire vision of the universe. Thus, Japanese, Arabic, and Latin calligraphy each produce and contain their own respective alternate universes. As research by Erica Michael and Marcel Just of Carnegie Mellon University has shown, “different brain areas are involved in hearing speech and reading it, and different comprehension centers in hearing words and reading them.” Perhaps it should come as no surprise, then, that the cultures with the richest calligraphic traditions often also placed a huge emphasis on the memorization of overwhelmingly long poems and texts. Ibn Sina had memorized the Qur’an by the age of ten, and for centuries Yemenite Jews taught their children to memorize the entire Torah.


Japanese calligraphy by 19th century sword master Yamaoka Tesshu

As Norman Doidge has pointed out, “up through the 19th and early 20th centuries a classical education often included rote memorization of long poems in foreign languages, which strengthened the auditory memory (hence thinking in language) and an almost fanatical attention to handwriting, which probably helped strengthen motor capacities and thus not only helped handwriting but added speed and fluency to reading and speaking.” These pedagogical techniques were gradually discontinued in the 1960s, as they were considered “too rigid, boring, and ‘not relevant.’” But, according to Doidge, “the loss of these drills has been costly; they may have been the only opportunity that many students had to systematically exercise the brain function that gives us fluency and grace with symbols…” and “their disappearance may have contributed to the general decline in eloquence, which requires memory and a level of auditory brain-power unfamiliar to us now.”

In his book “The Brain That Changes Itself,” Doidge recalls that during “the Lincoln-Douglas debates of 1858, the debaters would comfortably speak for an hour or more without notes, in extended memorized paragraphs; today many of the most learned among us, raised in our most elite schools since the 1960’s, prefer the omnipresent PowerPoint presentation — the ultimate compensation for a weak premotor cortex.”

This brings us to the second class of cognitive artifacts, which Krakauer has termed competitive cognitive artifacts. Whereas the use of a complementary cognitive artifact improves both your ability to perform a discrete skill (navigate physical space using a map) and your cognitive capacity in a number of related areas (navigate relationship spaces or idea spaces), the use of a competitive cognitive artifact leads to a general weakening of your ability to perform both the central task and a host of peripheral skills. Intuitively we know this, yet we continue to use these competitive cognitive artifacts out of both laziness (convenience) and dependence.

The calculator is a classic example of a competitive cognitive artifact. Initially we use calculators out of convenience; we don’t feel like doing long division in our heads, and the teacher said we’re allowed to use calculators on the test anyway. Over time, however, we actually lose the ability to perform these calculations on our own, even with a pencil and paper. And so our relationship to the cognitive artifact changes from one of convenience to one of dependence.

As in the case of the complementary cognitive artifacts, there isn’t a firewall around the impact of competitive cognitive artifacts such that they only affect a localized discrete skill or capacity. Consequently, dependence upon a mechanical calculator doesn’t simply weaken your ability to perform arithmetic calculations. A wide array of cognitive capacities related to mathematical reasoning and abstract logic are almost certainly affected, in ways which are presumably opaque to us.

The automobile is another very representative case of a competitive cognitive artifact. Cars clearly provide us with ‘optimality’ and ‘efficiency’ in that they allow us to travel large distances with very little time and effort. Perhaps more importantly, cars provide us with an opportunity to insulate ourselves from the outside world and, in doing so, relieve us of the unendurable burden that is human social interaction. This liberation, however, is not without its costs.

As with the calculator, our relationship to cars has quickly become one of dependence. Were you to take my car away from me, I would be no better at getting around than I was before. In fact, I’d probably be worse, as I would be less fit from sitting around rather than walking or biking, and would presumably have difficulty finding my way around town without the ubiquitous Waze or Google Maps. Now factor in the number of people killed in automobile-related accidents each year and you begin to get an illustration of the troubling costs we come to accept from cognitive artifacts simply because of the attractiveness of their utility value.

self-driving concept car

Automobiles are particularly illustrative of the impact and evolution of cognitive artifacts, as they’re currently poised to escalate their level of competitiveness with the advent of self-driving cars. Like shoes, these driverless cars would further isolate our legs from the task of spatial navigation, as well as contribute to the aforementioned weakening of our mapmaking abilities. In addition to these peripheral effects, reliance upon driverless cars would weaken and eventually eliminate our ability to actually drive manually. In the case of a technical malfunction, you couldn’t be expected to take over and drive with any level of competence. Consequently, the surface considerations of ‘safety’ (would you rather have a robot or an ape behind the wheel?) and ‘efficiency’ (wouldn’t it be nice to take a nap or get some work done during your commute?) simply distract from the sharp increase in fragility incurred by a reliance upon this ‘technological advance.’

This brings us to Krakauer’s central question regarding competitive cognitive artifacts: can we depend upon these objects always being around?

If not, then we should be worried.

I would modify Krakauer’s question slightly by asking not only if we can depend upon these objects always being around, but also if we can depend upon them always being on our side. With that added consideration, it becomes clear that Google is perhaps the most concerning competitive cognitive artifact around today, due to its increasingly monopolistic status as the arbiter of reliable knowledge.

The key to our fragility here is that we don’t have another Google to verify whether or not Google is indeed providing reliable knowledge. So the more we look to Google (as opposed to libraries, oral traditions, common sense, direct observation) to determine what is and isn’t reality, the more fragile we become as a society to its potential distortions.

Cognitive Outsourcing and the Centralization of ‘Verified Knowledge’

Timur Kuran points out that as a given society’s knowledge of the world becomes increasingly complex, the cognitive overhead of developing a penetrating understanding of every relevant detail of every possible field (the pros and cons of oral surgery and of quantitative easing and of transgenic modification of organisms and of Hanbali jurisprudence, etc.) becomes essentially impossible for any single individual to manage. People therefore seek out ‘reliable’ authorities to provide the answers to whatever their questions may be.

As there are often multiple competing groups with mutually exclusive answers to any given question, and as inquirers have neither the time nor expertise to determine which group has the true answer, the inquirers will generally defer to the conclusion of whichever group can provide the most convenient ride to their position. Crucially, if one group were to give a somewhat tedious but comprehensively supported explanation for their position and another group were to simply appeal to ‘expert consensus,’ the consensus position would win more minds, as it would be the most convenient ride to take — regardless of its veracity.

Troublingly, people that end up adopting positions due to the perception that they are ‘the consensus’ become extremely unlikely to change their minds upon seeing contradictory evidence, because they didn’t think their way into those positions in the first place. These people would likely only change their minds if their perceptions of the consensus were to change.[1]

Viewed in this light, Google’s decision to modify its algorithm in order to filter out ‘fake news’[2] is extremely concerning. One reason for concern is that when we look for ‘answers’ to our questions, we aren’t exclusively interested in determining which position is true; we are also very much interested in determining which position bears the least social cost, or confers the most social benefit, upon its holder. For example, say we are interested in determining whether or not mass Muslim immigration to France is ‘a good idea.’ We may overhear conversations among friends and family describing Islam as ‘a religion of peace,’ and may later notice that the perpetrators of several recent terrorist attacks in France were Muslim extremists — a fact that appears to contradict our friends’ beliefs. And so we continue to seek out more information before coming to a conclusion.

Throughout this evaluation process, we are constantly checking our privately held beliefs against our perception of public opinion and revising our position along the way. If, after a period of time, we still have some uncertainty over which position to hold on the issue, we may type in a Google search for “Muslim Immigration in France.” If all of the top results indicate that only troglodytes are averse to mass Muslim migration, and that only racists would assume a link between sincere belief in, say, Salafi Islamic theology and the likelihood of committing violence, then we will revise our position to be in line with the socially vetted position.

The other reason for concern here is that the algorithmic elimination of competing narratives is itself a form of fake news, albeit not the form that Google or CNN would like you to pay attention to.

The consensus narrative is that the term ‘fake news’ only applies to stories that are factually false, but Eric Weinstein has highlighted an additional three types of fake news: narrative, institutional, and algorithmic. Weinstein’s key insight is that the big players don’t mess around with factually false news, not because they are unwilling to, but because they don’t actually have to.[3] This is because people don’t look to mainstream media outlets such as CNN, the Washington Post, and the New York Times to learn what is happening in the world, but rather to learn how they are supposed to feel about what is happening. By relying on techniques such as Russell Conjugation, these outlets are able to overlay emotive content upon the otherwise ‘value-neutral facts’ that they broadcast such that a certain narrative is absorbed, often unknowingly, by the consumer. And though this narrative overlay is not factually false, it is still something extra.

So when The Washington Post refers to groups such as ‘al Qaeda,’ ‘al Nusra,’ and ‘ISIS’ as ‘moderate Syrian rebels,’ they aren’t necessarily making things up. There are indeed ‘rebel’ groups fighting against Bashar al Assad, and the degree to which they are deemed ‘moderate’ is ultimately a subjective consideration. This emotive conjugation, however, leads the reader to empathize with groups whose ideologies they would presumably otherwise be horrified by, had they not outsourced the cognitive overhead of understanding the Syrian conflict.

When the narrative meets the factually false

These sources of narrative fake news rely upon authoritative institutions such as Ivy League universities and prominent think tanks to lend credibility to their narratives. So when an Islamism apologist like Shadi Hamid appears on CNN, the mere fact that he works for the Brookings Institution allows him to disseminate his bullshit with little to no challenge[4].

The synergy between this institutional form of fake news and the narrative form of fake news has, until recently, monopolized society’s perception of both what is happening in the world and how we are supposed to feel about it. The internet, however, has posed a potential challenge to this historic monopoly. The democratization of knowledge dissemination has freed up limitless alternative narratives, facts, and perspectives for the curious browser to sift through, should she so choose. The limitlessness of these alternatives, however, is as liberating as it is encumbering: how are we to filter this knowledge? To where should we direct our attention?

Google’s search algorithm performs the much-needed task of curating the internet’s information. Without such an algorithm, we would be utterly lost. The trouble is, the degree to which Google’s interests and criteria for determining credibility correspond to ours is unclear. If Google decides to present us with only information that is vetted by “institutional” or “narrative-driven” media sources, then our view of the world will be warped and constrained into a form that may or may not actually map to reality. This selective ideological curation is itself a form of fake news — albeit algorithmic, as opposed to factually false, fake news.

Given Kuran’s insight that we are constantly checking our private opinions against our perception of public opinion in order to determine the degree to which they are reputationally sound and empirically verified, and given that our information-gathering process is increasingly transitioning from analog forms (ask grandma, read old books, stop looking at your phone and observe physical reality) to digital ones (who needs human memory when you can Google the answer?), Google’s search monopoly could easily lead to a near homogenization of public knowledge via the convergence of algorithmic, narrative, and institutional fake news. The recent scandal around James Damore’s internal Google memo highlights the danger of allowing this to happen. As Damore has pointed out, “Google has several biases and honest discussion about these biases is being silenced by the dominant ideology…[this] has created a politically correct monoculture that maintains its hold by shaming dissenters into silence.” Were this censorship to be built into their algorithm — and there are indications that it already has been — we should be seriously concerned.

From Self-Driving Cars to Self-Thinking Brains

This trend towards outsourcing our cognitive capacities and sense-making toolkits to the whims of impersonal algorithms is about to reach a categorically different level with the introduction of cyborg technology. If Elon Musk’s new company Neuralink delivers on its promise (he hasn’t ‘failed’ us thus far), then we will soon fuse our brains with AI. Musk’s logic is that “we’re already a cyborg…you have a digital or partial version of yourself in the form of your emails and your social media…[but] your output level is so low, particularly on a phone, your two thumbs sort of tapping away. This is ridiculously slow. Our input is much better because we have a high-bandwidth visual interface to the brain — our eyes take in a lot of data. So there’s many orders of magnitude difference between input and output. Effectively merging in a symbiotic way with digital intelligence revolves around eliminating the I/O constraint, which would be some sort of direct cortical interface…a neural lace.”[5]
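To get a feel for the “orders of magnitude” gap Musk is describing, here is a rough back-of-envelope sketch in Python. The typing speed, characters-per-word, English-entropy, and visual-throughput figures are ballpark assumptions on my part, not numbers from the essay or from Neuralink.

```python
import math

# Rough back-of-envelope for the input/output bandwidth gap described above.
# All figures are commonly cited ballpark assumptions, not measured values.

thumb_typing_wpm = 40        # assumed smartphone typing speed, words per minute
chars_per_word = 5           # conventional average word length
bits_per_char = 1.5          # rough entropy of English text per character

output_bps = thumb_typing_wpm * chars_per_word * bits_per_char / 60   # ~5 bits/s
visual_input_bps = 1e7       # order-of-magnitude estimate of retinal throughput

print(f"output ~ {output_bps:.0f} bits/s, input ~ {visual_input_bps:.0e} bits/s")
print(f"gap    ~ 10^{math.log10(visual_input_bps / output_bps):.0f}")
```

Even with generous assumptions about typing speed, the gap comes out to roughly six orders of magnitude, which is the asymmetry the ‘neural lace’ is meant to close.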


If adopted, this competitive cognitive artifact would lead both to epistemological fragility, via an acceleration of the troubling trends already explored regarding Google’s increasing monopoly over our sense-making functions, and to socio-political fragility, by creating a top-down pathway for governments or elites to monitor the content of our minds in real time.

In the case of epistemology, we must ask ourselves: what are the odds that our computer-brain interface doesn’t eliminate ‘disagreeable’ sources (and their corresponding alternative worlds) from our cognitive possibility space? What are the odds that there is a reliable correspondence between the possible worlds that may be real and the world or worlds we’re programmed to consider valid? And to what extent would we be able to ‘jump out of the circle’ and view our sense-making algorithms from an exterior vantage point?

As in the case of Google, we must thoughtfully consider whether the Neuralink algorithm would curate only the versions of reality that are aligned with the end goals of the company, and crucially, to what degree those goals and incentives are aligned with ours.[6] If the tendency to homogenize knowledge via the convergence of narrative, institutional, and algorithmic fake news is applied to Neuralink’s sense-making algorithm — and there is no reason to believe it won’t be — then it’s safe to assume that the adoption of cyborg technology would actually result in a net loss to the adaptive fitness of our sense-making toolkits.

This is because more information isn’t nearly as important as more cognitive flexibility, more imagination, and more analytical freedom. By asking both “can we expect these artifacts to always be around” and “can we expect these artifacts to always be on our side,” we can quickly recognize that this technology is actually a trap, a pathway to mental slavery disguised as a pathway to limitless mental freedom.

As for the social and political costs: if they can make cyborgs, they can make (for example) docile cyborgs (that will never rebel against their dystopian overlords), or cyborgs that decide against reproduction (limiting world population), or any other form of cyborg that is agreeable to whomever is in control of the technology.[7]

And what if you misbehave?

Do they turn your brain off?

What if you have an inappropriate thought?

Do you suppose your thoughts won’t be monitored?

Picture a future where your neighbor starts to suspect that the government-sanctioned news wasn’t real, and that the 2040 terror attack in Moscow never happened. Algorithmic tripwires in the AI automatically begin to monitor his brain, while sending moment-by-moment updates to a government database stored in the cloud. The neighbor begins to suspect that something is off, so he considers talking to his wife about it.

Suddenly, he blacks out.

He regains consciousness moments later, with his memory erased and his sense-making algorithm modified.

How sure are you of the impossibility of such a scenario arising from the convergence of cyborg technology and Google-like ideological censorship?

Autistification and Culture

I must clarify here that I don’t actually suspect Elon of bad intentions. I’m quite confident that he sincerely believes in the promise of disseminating this cognitive artifact throughout the population; it’s just that he doesn’t appear to have priced the competitive cost of cyborg technology into his analysis. Moreover, this inability to accurately price in the cost of a cognitive artifact isn’t actually his fault. Rather, it is the result of the broader, more basement-level cognitive-artifact structure that is guiding his decisions. I call that structure autistification.[8]

Autistification is both (a) the act of linearizing a complex system such that it conforms to a top-down engineering approach to problem solving and (b) the set of faulty first-principle assumptions that obfuscate one’s ability to perceive the harms in doing so.

Autistification tends to emerge when the presence of scientism meets the absence of culture, resulting in the promotion of technological-salvationist answers to complex global issues.

Robot bees, aka Harvard-level autistification

Someone operating from this paradigm considers innovation to be a replacement for culturally transmitted cognitive artifacts, whereas someone deeply connected to culture sees innovation as something to embed into their culture. Before adopting a novel cognitive artifact, the latter type of individual has the foresight to ask questions like, “To what extent does this fit into the superstructure I’ve inherited from my grandma?” and “Am I sure that this cognitive artifact is superior over the long term to the ones that I’ve inherited?”

These questions are not particularly easy to answer, as the extent to which we can know in advance whether a given cognitive artifact is complementary or competitive is unclear. Given the incompleteness of our knowledge, how can we most thoughtfully consider whether to adopt a given novel cognitive artifact?

Running Probabilities at Grandma’s House

I’d like to propose a heuristic based upon the Lindy Effect, which is a statistical property stating that ideas and technologies (that is, cognitive artifacts) age in reverse. So whereas a 25-year-old human being is much more likely to live to be 26 than a 90-year-old is to be 91, a cognitive artifact that has been around for 200 years is probabilistically much more likely to survive another 200 than a 5-year-old cognitive artifact is to survive another 5. A key point here is that these Lindy-proven cognitive artifacts have survived for a reason. That is to say, they have positively impacted their users such that it has been adaptive over a long period of time to culturally transmit their use. Furthermore, the precise reasons and mechanisms behind the adaptive fitness of these cognitive artifacts are always opaque to us.[9]
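To make the “aging in reverse” claim concrete, here is a minimal sketch in Python under a common modeling assumption (mine, not the essay’s): that artifact lifetimes follow a heavy-tailed Pareto, or power-law, distribution. Under that assumption, the expected additional lifetime of an artifact grows in proportion to the age it has already survived.

```python
# Minimal sketch of the Lindy heuristic, assuming Pareto-distributed lifetimes
# with tail exponent alpha. For a Pareto survival curve, the expected *further*
# survival of something that has already lasted to `age` is age / (alpha - 1),
# so older artifacts are expected to persist longer -- "aging in reverse."

def expected_remaining_life(age: float, alpha: float = 2.0) -> float:
    """E[further survival | survived to `age`] for a Pareto(alpha) lifetime."""
    if alpha <= 1:
        raise ValueError("mean lifetime is infinite for alpha <= 1")
    return age / (alpha - 1)

# With alpha = 2, a 5-year-old artifact is expected to last ~5 more years,
# while a 200-year-old artifact is expected to last ~200 more:
for age in (5, 25, 200):
    print(f"survived {age:>3} years -> expect roughly {expected_remaining_life(age):.0f} more")
```

The exponent alpha is illustrative; the qualitative point, that expected survival scales with demonstrated survival, holds for any heavy-tailed choice.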

Consequently, your grandma’s cognitive artifacts are probabilistically more likely to be complementary than Elon Musk’s.

Stated somewhat differently:

Intransigence, or rather some degree of culturally transmitted intransigence, is a robust evolutionary strategy for coping with the complexity and uncertainty of human life.

Culture as Extended Cognition

In the same way that the spider’s web extends out into the world and creates a structure to increase both its sensitivity to relevant information and its ability to process and respond to that information, culture is a kind of scaffolding that we layer onto the world, helping us identify and appropriately respond to the meaningful (signal) while discarding the noise.

This scaffolding consists of a multi-layered hierarchy of cognitive artifacts, and the depth of a culture can be roughly determined by the degree to which it provides Lindy-proven complementary cognitive artifacts to deal with the various novel situations we may come across during the course of our lives. Viewed in this framework, there is a morphology to culture, and these morphological forms can be compared. And I suspect that there is an inverse correlation between the depth of a culture and its willingness to adopt autistificated cognitive artifacts.

As globalization continues to raze and homogenize the world’s cultural landscape, Peter Thiel has suggested that the developing world should simply engage in a wholesale abandonment of its cognitive artifacts and copy the technologies and systems of the developed world. But if the developed world’s cognitive artifacts are rooted in an autistificated paradigm, should the developing world not take a more thoughtful approach to their development? If so, what would that approach look like? Is there a wisdom to reactionary traditionalism? And is there a way to benefit from the technological progress of the developed world without having to discard one’s culture in the process?

I’ll explore some possible approaches to tackling these and other related questions over the course of the next several essays.

If you have any feedback or would just like to get in touch generally, feel free to reach out to me at Cameronbandari1@gmail.com

[1] In Kuran’s terminology, this would constitute a form of “soft knowledge,” as opposed to hard knowledge, which is arrived at through personal experience, experiment, testing, and reasoning.

[2] Russell Conjugation for non-consensus positions.

[3] Though perhaps this has changed in the era of Bana Alabed.

[4] As Maajid Nawaz has pointed out, no major Islamic scholar finds Hamid’s theory of ‘Islamic Exceptionalism’ to be at all theologically valid.

[6] In the case of AI, the decision-making structure of the technology is actually opaque to even the creators themselves; consequently, neither Musk’s team nor the user base would have any precise understanding of the selection criteria.

[7] Assuming that a human being can actually control AI, which is a separate but clearly intimately related issue.

[8] The issue of autistification is so broad and pervasive that it warrants an entirely separate essay (or series of essays) of its own. For now, I ask the reader to please not be too inflamed by the coarseness of the term; there is a logic to my creation of this neologism.

[9] For more on Lindy and the opacity of complex systems I refer the reader to Nassim Taleb’s “Antifragile.”