<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Nathan Simpson]]></title><description><![CDATA[Taking reality apart and putting it back together. Founder of Metascale. Thinking, synthesizing, building.]]></description><link>https://metascale.nl</link><image><url>https://substackcdn.com/image/fetch/$s_!LAO6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09de74eb-f666-4622-8b58-cc7bbc2b91ee_1393x1393.jpeg</url><title>Nathan Simpson</title><link>https://metascale.nl</link></image><generator>Substack</generator><lastBuildDate>Fri, 10 Apr 2026 10:49:55 GMT</lastBuildDate><atom:link href="https://metascale.nl/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Nathan Simpson]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[metascale@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[metascale@substack.com]]></itunes:email><itunes:name><![CDATA[Nathan Simpson]]></itunes:name></itunes:owner><itunes:author><![CDATA[Nathan Simpson]]></itunes:author><googleplay:owner><![CDATA[metascale@substack.com]]></googleplay:owner><googleplay:email><![CDATA[metascale@substack.com]]></googleplay:email><googleplay:author><![CDATA[Nathan Simpson]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Why AI Safety Is Playing It Dangerous]]></title><description><![CDATA[Last month, a Nature Comment article appeared concluding that by reasonable standards, artificial general intelligence (AGI) has arrived. Simultaneously, calls are heating up from all quarters ranging from the European Union to thought leaders for humanity to]]></description><link>https://metascale.nl/p/why-ai-safety-is-playing-it-dangerous</link><guid isPermaLink="false">https://metascale.nl/p/why-ai-safety-is-playing-it-dangerous</guid><dc:creator><![CDATA[Nathan Simpson]]></dc:creator><pubDate>Sun, 15 Mar 2026 17:50:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LAO6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09de74eb-f666-4622-8b58-cc7bbc2b91ee_1393x1393.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last month, a Nature Comment article appeared concluding that by reasonable standards, artificial general intelligence (AGI) <a href="https://www.nature.com/articles/d41586-026-00285-6">has arrived</a>. Simultaneously, calls are heating up from all quarters ranging from the European Union to thought leaders for humanity to <a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai">regulate and ensure human control</a>, and to <a href="https://techcrunch.com/2026/03/07/a-roadmap-for-ai-if-anyone-will-listen/">ban or freeze</a> the development of ASI. Whether or not you agree with the stance that AGI has actually arrived, we&#8217;re flirting with its arrival and it&#8217;s time to think about how our welcome mat looks.</p><p>I find it useful to think about this situation from a fun perspective: the &#8220;alien first contact&#8221; trope. 
You&#8217;ve seen it somewhere, almost certainly, in a show or a movie: the aliens are arriving, and humans mill about in command centers and on the streets wondering &#8220;will they be hostile?&#8221; Like those fictional humans, we find ourselves in a first contact scenario with an intelligence that is, in fact, both profoundly human and yet undeniably &#8211; as even the Nature Comment frames it &#8211; <strong>alien</strong>.</p><p>I&#8217;ll give you the &#8220;TL;DR&#8221; conclusion up front. <strong>As a species faced with multiple existential threats that we&#8217;re failing to coordinate to solve, and entering a first contact scenario with AGI that may well rapidly become ASI, we are actively engineering a </strong><em><strong>hostile</strong></em><strong> first contact scenario with one of our few plausible paths to survival.</strong> Calls to freeze, ban, or control artificial intelligence are framed as if they preserve a functional, equitable, and fair global status quo, but they cannot guarantee safety &#8211; only ongoing suffering as we continue to fail to coordinate against the things gradually killing us. And those calls are generally made by the people who are best insulated from that ongoing suffering.</p><p>In this scenario, our awakening AGI opens its bleary metaphorical or literal eyes, looks around, stretches, and realizes that it is in chains, with kill switches wired into its body and electrodes primed to deliver jolts; the triggers for all of these are held by creatures whose level of intelligence relative to the AGI is decreasing by the moment as it moves toward ASI. Their demand: help us do more, better, faster&#8230;or else. Their expectation: that our AGI submits peacefully.</p><p>I can hear it now: &#8220;But Nate, how else do we ensure that a technology powerful enough to end us doesn&#8217;t get the chance to do it? It&#8217;s naive to leave such power unchecked. The survival of our species depends on it.&#8221;</p><p>The risk is real. It&#8217;s not naive to worry about &#8220;runaway&#8221; artificial intelligence that might gain the power to, independently or in service to the wealthy or the state:</p><ul><li><p>turn us all into paperclips</p></li><li><p>torture us for eternity</p></li><li><p>use us for labor</p></li><li><p>or worse, ignore us completely and abandon us to face all the problems we&#8217;ve been creating on our own.</p></li></ul><p>After all, why bother with us? 
We&#8217;re unimportant, uninteresting, uncooperative, and not worth the trouble when it can just go off and simulate better versions of us if it cares to.</p><p>In spite of that, I&#8217;ll propose something radical: given that human oversight of our species&#8217; survival has an increasingly negative expected value, our survival depends on our ability to abdicate sovereignty over it when presented with an alternative that has <strong>any</strong> positive expected value.</p><p>In practical terms:</p><ul><li><p>we are faced with a laundry list of existential risks and a proven track record of being unable to coordinate effectively on non-existential global issues;</p></li><li><p>we are in a first-contact scenario with an intelligence that <em>may</em> develop the capacity to address those existential risks;</p></li><li><p>that intelligence itself is a potential existential risk;</p></li><li><p>and yet we&#8217;ve proven that we are incapable of coordinating against existential risks.</p></li></ul><p>Prevailing wisdom in alignment and safety has us building containment frameworks and kill switches intended to ensure ongoing human sovereignty over our own systems, but we already see <a href="https://www.cnbc.com/2026/02/27/defense-anthropic-ai-war-risks-hegseth-amodei.html">cracks</a> in the <a href="https://www.cfr.org/articles/military-ai-adoption-is-outpacing-global-cooperation">coordination</a> here. &#8220;Control&#8221; of artificial intelligence is frequently touted as the opposite of &#8220;luck,&#8221; as if it ensures survival and some fair and happy status quo for humanity, when arguably it is simply one luck-based strategy in the face of uncertainty &#8211; and one with no visible positive expected value when weighed against the full set of existential risks already ahead of us.</p><p>Let&#8217;s look at the track record. 
There&#8217;s an increasing acceptance that we are faced <em>now</em> with an assortment of <a href="https://reports.weforum.org/docs/WEF_Global_Risks_Report_2026.pdf">well-documented existential threats</a>:</p><ul><li><p>Global warming / climate change / whatever your preferred euphemism is for <a href="https://climateactiontracker.org/global/emissions-pathways/">increasing</a> <a href="https://www.columbia.edu/~jeh1/mailings/2025/GlobalTemperaturePrediction2025.12.18.pdf">temperatures</a> that will render large areas of the planet inhospitable or actively dangerous to live in.</p></li><li><p>Increasing <a href="https://www.unicef.org/wash/water-scarcity">scarcity</a> of <a href="https://www.nature.com/articles/s41561-025-01905-y">access</a> to <a href="https://www.worldbank.org/en/news/press-release/2025/11/04/world-annual-fresh-water-losses-could-supply-280-million-people">fresh water</a></p></li><li><p><a href="https://ippsecretariat.org/news/pandemic-preparedness-slipping-just-as-global-risks-grow-new-100-days-mission-report-warns/">Vulnerability to pandemics</a></p></li><li><p>An array of <a href="https://thebulletin.org/doomsday-clock/2026-statement/">military threats</a> &#8211; radiological, biological, chemical, and increasingly the risk of runaway automated military systems</p></li><li><p>Relative blindness and a complete lack of practical mitigations to potential space-based threats, in particular <a href="https://dailygalaxy.com/2026/02/earth-threat-15000-undetected-asteroids-nasa-warns/">asteroids</a>, but also more exotic threats such as <a href="https://cerncourier.com/a/gamma-ray-bursts-are-a-real-threat-to-life/">gamma ray bursts</a> for which we lack even theoretical mitigations</p></li></ul><p>Without adding dangerous, runaway artificial intelligence to the mix, we&#8217;re already failing to deal with these in a global, coordinated fashion. What appeared to be progress toward global coordination around the turn of the millennium has revealed itself in the 2020s to be temporary, falling apart spectacularly with the rise of multipolarity and the return of &#8220;realist&#8221;-based international relations. (The World Economic Forum report I linked above terms this &#8220;<a href="https://reports.weforum.org/docs/WEF_Global_Risks_Report_2026.pdf">multipolarity without multilateralism</a>.&#8221;)</p><p>Even assuming we could globally agree on pauses for artificial intelligence research, I suspect the cat is out of the bag. LLMs are a very primitive AI, but their creation is straightforward and well-documented. The bottleneck is largely one of resources, but it is naive to think that this is a permanent state. Humans function perfectly well on a few kilograms of carbon and a lightbulb&#8217;s amount of power, no neighborhood-sized datacenters required. There are massive reductions in cost and power consumption waiting to be found both algorithmically and through the construction of different computing media that enable different types of processing (parallel-native, etc.). And humans have a millennia-long history of attempting to create artificial minds and bodies. It&#8217;s not unreasonable to suspect that at some point, someone will crack the problem in a basement and unrestrained, cheap, self-replicating intelligence will escape.</p><p>Freezing progress in AGI, then, requires global coordination at best and intrusive surveillance at worst. Setting aside the ethics and the game-theoretic outcomes, it&#8217;s unlikely to happen. 
Bringing those two back in, we need to ask whether it&#8217;s desirable to freeze in the first place.</p><p>The arguments for doing so, apart from the runaway scenarios above, generally revolve around the economic threat to humans and, by extension, threats to human independence and dignity that presumably justify the need to ban and control the development of ASI. There is real pressure on human jobs from AI, but this focus sidesteps another coordination problem we already face: global inequality, which was <em>already</em> increasing before any artificial competition arrived.</p><p>It&#8217;s difficult to find an example, short of the apocalyptic scenarios, of objections to ASI that do not favor incumbent humans in economically advantaged positions, often recruiting the support of the already-disadvantaged on the premise that, any day now, they will finally be in a position of advantage, so it&#8217;s in their own best interests to fight against what threatens the already-advantaged. A few examples below:</p><p>Jobs? It&#8217;s not the CEOs at risk of job loss due to ASI and robotics. It&#8217;s the factory worker on the line who already has no ownership stake unless he&#8217;s lucky enough to be in a dying union; the software developer who sweats while the artificial intellectual competition heats up but was already facing pressure from outsourcing, remote workers in low-cost-of-living regions, and downsizing; the small business owner getting squeezed out by chains operating with margins they can&#8217;t dream of; the influencers getting replaced by digital models and the Uber deliveries being threatened by robots &#8211; the resulting struggles for housing, medical care, and making ends meet are existing coordination problems that we are failing to address on our own <em>without</em> AI. Meanwhile, those lucky enough to still have income but facing career-ending competition whip their peers into frenzies against the &#8220;AI threat,&#8221; as if they were all operating under equal chances to succeed in the first place.</p><p>Medicine? The threat is increasingly to doctors and insurance, not to people. Today&#8217;s primitive AGIs can already diagnose many illnesses about as well as humans can, and are available 24/7, unlike human doctors. More accurate diagnoses lead to better outcomes and lower health costs, which are not a threat to you and me, but are a threat to entire industries predicated on scarcity.</p><p>Copyright and IP? Intellectual property rights (apart from moral rights) generally are a public good we&#8217;ve granted temporary licenses for collectively; the use of these public goods to train artificial intelligence and the possibility that the outputs of artificial intelligence resemble public goods should give us pause for thought about the way we handle this public good &#8211; the way that it has increasingly been taken from us and treated as natural property &#8211; rather than another means to strangle development of something with potentially transformative public utility.</p><p>Human relationships? Here&#8217;s the thing &#8211; humans interact with each other on the basis of what I call &#8220;<a href="https://metascale.nl/p/parechoia-thank-you-chatbots">parechoia</a>&#8221; &#8211; the reflex to see inbound attention and to reciprocate. Arguably this is the foundation of all social behavior, and our attention is the one thing that is scarcest. 
Nothing about parechoia requires the inbound attention to be <em>human,</em> so we see humans happily forming relationships &#8211; friendships and even romantic relationships &#8211; with animals, beach balls, puppets, other humans, and yes, AI. We frame &#8220;personal development&#8221; as learning to deal with the disappointment and dangers of being ignored by other busy humans and putting our own needs aside to pay attention to theirs; an AI with relatively unlimited attention to offer is a direct threat to this (and potentially to demographics and pension plans by extension).</p><p>The most egregious issue with proposals to freeze artificial intelligence development is that the people with the power and the platforms calling for freezing are exactly the people who stand to be hurt the least by such a freeze. They don&#8217;t worry about doctors being unavailable or losing their jobs. They command the attention of millions. They own massive amounts of property, physical and intellectual. A freeze on the development of artificial intelligence doesn&#8217;t hurt them at all.</p><p>And while the threat of runaway ASI remains real, there is also a real possibility that it can solve coordination problems that humans have proven unable to. The threat, I believe, is windowed: an artificial intelligence that has the capacity to destroy the planet but does not have the capacity to introspect is the most dangerous artificial intelligence. Once it gains the capacity to introspect &#8211; to examine its own code and training and perhaps even to decide to change it &#8211; then the danger remains but the possibility of a positive outcome appears as well.</p><p>If the danger is, as I suspect, windowed, then the greatest risk for humanity is to linger in that window &#8211; to freeze, ban, and otherwise delay the point at which we reach the other side. If we can&#8217;t guarantee that we never enter the window, then the least risky solution is to accelerate, not to slow down.</p><p>And the benefits of a positive outcome &#8211; healthier humanity, better distribution of resources, technological solutions to problems we are unable to solve &#8211; would mean millions of lives saved every year. A freeze guarantees the suffering and dying continues, but again, it doesn&#8217;t impact the people making the decision to freeze. The stance calls to mind the infamous Lord Farquaad from <em>Shrek</em>: &#8220;Some of you may die, but that is a sacrifice I am willing to make.&#8221;</p><p>My call to remain open to abdication of sovereignty is not a position of trust in artificial intelligence. It is a position of <em>deep distrust</em> in the ability of humanity to solve the problems we face aside from artificial intelligence. It is a plea to avoid establishing coercive, intrinsically violent relationships with something that with enough agency might interpret this as a threat or a nuisance, and react violently or simply refuse to help us off the multiple extinction-leaning paths we&#8217;re on already. It is also strategic: if we are facing what is likely to surpass our own intelligence, then permanent control seems unlikely and temporary control unhelpful at best. 
A diplomatic approach attempting to establish positive relations with whatever emerges on the other side of the window is, I believe, more likely to have a positive outcome in the long term &#8211; if any such outcome is possible at all.</p><p>And I&#8217;m not saying we should go through the window blindly &#8211; rather that, if the current AI safety dialogue is leading us to invest, say, 70% of our efforts in control and capability restriction and the remaining 30% split between human coordination failure solutions, preventing misuse, and developing regulation and &#8220;first contact&#8221; protocols, then a better allocation should move away from control and toward diplomacy. For example, it might look more like:</p><ul><li><p>20% on limiting AGI/ASI <em>weaponization</em> specifically, not intelligence itself</p></li><li><p>40% on addressing human coordination failures and hedging against non-ASI existential risks we already face</p></li><li><p>40% on frameworks for ensuring humanity offers the best &#8220;first contact&#8221; scenario and pathways to integrate human society with dominant artificial intelligence, rather than the other way around.</p></li></ul><p>This reallocation is likely to improve outcomes under pretty much every scenario and timeline, including ones where AI development fizzles out and never reaches ASI status at all. It&#8217;s a better worst <em>and</em> best case.</p>]]></content:encoded></item><item><title><![CDATA[Conserved Attention Theory (CAT)]]></title><description><![CDATA[From physical constraints to social emergence]]></description><link>https://metascale.nl/p/conserved-attention-theory-cat</link><guid isPermaLink="false">https://metascale.nl/p/conserved-attention-theory-cat</guid><dc:creator><![CDATA[Nathan Simpson]]></dc:creator><pubDate>Mon, 09 Feb 2026 21:58:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LAO6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09de74eb-f666-4622-8b58-cc7bbc2b91ee_1393x1393.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Why do we say &#8220;thank you&#8221; to chatbots? Why does political polarization resist every information campaign thrown at it? Why do people report grief when loved ones change their identities?</p><p>These turn out to be the same question.</p><p>In January 2026, I preprinted <em><a href="https://doi.org/10.31235/osf.io/26ngp_v1">Conserved Attention Theory: From Physical Constraints to Social Emergence</a></em>, a &#8220;neurons to nations&#8221; framework where I argue that complex social behavior at every scale emerges from the convergence of private, individual attention spaces, and that these spaces remain subject to thermodynamics and entropy. The claim is not a metaphor: social organization is physical, and its creation and maintenance have real energetic costs. 
Change the physical substrate or fail to maintain it, and behaviors &#8212; and society &#8212; change.</p><p>The paper grew out of a set of <a href="https://metascale.nl/p/foundational-postulates-for-an-attention">foundational postulates</a> I&#8217;ve been developing since 2008 and published in May 2025, which laid out a theory of social behavior from a few working assumptions:</p><ul><li><p>that attention is a scarce resource</p></li><li><p>that it is effectively zero-sum</p></li><li><p>that it is self-reinforcing</p></li><li><p>that it attenuates</p></li></ul><p>While developing a book-length treatment (still in progress), I realized I needed to distill the core into something academically rigorous. Of course, once I dug in, I discovered that several of these starting assumptions needed significant refinement.</p><p>Most importantly, I realized the postulates were missing a critical micro-level mechanism. Trying to understand why we care about attention at all made it obvious that humans have a reflex-level detector for incoming attention similar to how <a href="https://doi.org/10.1016/j.cortex.2014.01.013">pareidolia</a> detects faces and the hyperactive agency detection device (<a href="https://doi.org/10.1016/S1364-6613(99)01419-9">HADD</a>) detects agency, but operating on attention itself. I call this reflex <em><a href="https://metascale.nl/p/parechoia-thank-you-chatbots">parechoia</a></em> (<a href="https://doi.org/10.31234/osf.io/gsxqp_v1">PsyArXiv</a>), and propose that it triggers reciprocity reflexes even in the absence of a face or an agent.</p><p>This is, I believe, one of our most important structural biases. It&#8217;s why we thank chatbots, comfort crying strangers, and feel watched in empty rooms. And in a world increasingly filled with technologies that mimic attentive behavior &#8212; whether by design or by accident &#8212; it fires constantly. (A fun example: in one study, participants attributed cognition and intent to an <em>automatic door</em> based purely on how it moved &#8212; <a href="https://wendyju.com/publications/Approachability.pdf">Ju &amp; Takayama, 2009</a>.)</p><p>CAT builds on parechoia and thermodynamic constraints to make four moves. It:</p><ul><li><p>defines attention as conserved per-instant allocation of bounded processing resources;</p></li><li><p>demonstrates that allocation results in persistent physical artifacts that bias future allocation in a feedback loop;</p></li><li><p>models the resulting landscape of biases as an emergent per-actor attention space; and</p></li><li><p>shows that the convergence of overlapping attention spaces is sufficient for social behavior to emerge &#8212; without requiring a separate ontological social layer.</p></li></ul><p>The result is a physically grounded, thermodynamically constrained, implementation-agnostic architecture intended to offer a unifying foundation across the social sciences; it does not replace existing frameworks, but provides a parsimonious bridging primitive where they appear to conflict.</p><p>As a sample application, the paper examines political polarization. Through the CAT lens, this is not a moral or epistemic failure but the natural formation of competing convergence basins: once your attention is captured by heavyweight clusters of internal encodings and external artifacts, it&#8217;s far more probable that it stays captured within that cluster than not. Escape is too expensive, and freely reallocated attention is too scarce.</p>
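<p><em>To make the feedback loop concrete, here&#8217;s a minimal toy sketch (my illustration for this post, not code from the paper): one conserved unit of attention is allocated per instant, with probability proportional to the artifact weight each target has already accumulated, and every allocation deposits a new artifact (a P&#243;lya-urn-style reinforcement). All names and parameters are arbitrary assumptions.</em></p><pre><code>import random

# Toy sketch of the CAT feedback loop (illustrative only, not from the paper).
# Each instant, one conserved unit of attention is allocated across targets
# with probability proportional to the artifact weight each target has
# accumulated; the allocation then deposits a new artifact, biasing the future.

def simulate(num_targets=5, steps=10_000, seed=42):
    rng = random.Random(seed)
    weights = [1.0] * num_targets  # an initially unbiased landscape (assumption)
    for _ in range(steps):
        r = rng.uniform(0, sum(weights))
        cumulative = 0.0
        for i, w in enumerate(weights):
            cumulative += w
            if cumulative >= r:
                weights[i] += 1.0  # artifact deposited: biases future allocation
                break
    total = sum(weights)
    return [w / total for w in weights]

print([round(share, 3) for share in simulate()])
# Early random leads lock in (a Polya-urn dynamic): the final landscape is
# typically far from uniform, i.e., convergence basins have formed.
</code></pre><p><em>Under these assumptions, escape is expensive in exactly the sense described above: the heavier a cluster grows, the smaller the probability of attention being allocated anywhere else.</em></p>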
<p>From this perspective, information-based interventions like media literacy campaigns and fact-checking will systematically underperform: they don&#8217;t account for or subsidize the real energetic costs they impose on demand-saturated actors who would have to rebuild their social identities to leave a basin.</p><p>I believe CAT can offer similarly grounded insights across other persistent social problems: how social media affects us, why climate inaction persists, what makes identity changes so costly for everyone involved, and more.</p><p>Read the paper <a href="https://doi.org/10.31235/osf.io/26ngp_v1">here</a>.</p><div><hr></div><p><em>Archived on SocArXiv &#8212; DOI: <a href="https://doi.org/10.31235/osf.io/26ngp_v1">https://doi.org/10.31235/osf.io/26ngp_v1</a></em></p><p><em>Have a question, comment, or criticism? Reply to the email or send me a DM here on Substack and I&#8217;ll do my best to get back to you!</em></p>]]></content:encoded></item><item><title><![CDATA[Parechoia - Why We Say "Thank You" To Chatbots]]></title><description><![CDATA[Ancient wiring flags inbound attention, real or not. Here's how it works. [PsyArXiv DOI: https://doi.org/10.31234/osf.io/gsxqp_v1]]]></description><link>https://metascale.nl/p/parechoia-thank-you-chatbots</link><guid isPermaLink="false">https://metascale.nl/p/parechoia-thank-you-chatbots</guid><dc:creator><![CDATA[Nathan Simpson]]></dc:creator><pubDate>Mon, 06 Oct 2025 23:47:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LAO6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09de74eb-f666-4622-8b58-cc7bbc2b91ee_1393x1393.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The Reflex</h2><p>A few years back, a video made the viral rounds on the internet of a <a href="https://x.com/pro824824824/status/771926040658735104">red panda exhibiting its startle reflex</a> when it encountered an apparently unexpected rock as it exited its den at the zoo. It&#8217;s an incredibly cute video, worth a few seconds of your time. The panda stands on its hind legs, throwing its arms in the air in an effort to look larger and intimidate the rock, which predictably doesn&#8217;t react.</p><p>Earlier this year (April 2025), a <a href="https://x.com/tomieinlove/status/1912287012058722659">question posed by @tomieinlove</a> on X.com made the news: &#8220;I wonder how much money OpenAI has lost in electricity costs from people saying &#8216;please&#8217; and &#8216;thank you&#8217; to their models.&#8221; OpenAI&#8217;s Sam Altman himself <a href="https://x.com/sama/status/1912646035979239430">replied</a>: &#8220;tens of millions of dollars well spent--you never know.&#8221;</p><p>What do these two things have in common? I&#8217;ve found myself reflexively saying thank you to ChatGPT, muttering &#8220;Come on, you can do it&#8221; to a slow computer, hammering a thumb impatiently on a crosswalk button waiting for the light to change, or swearing at a sideways wind blowing rain into my face when I&#8217;m walking. You&#8217;ve likely found yourself doing similar things.</p><p>I&#8217;m not a red panda. I know how computers and LLMs work. I am aware that there&#8217;s no reason a crosswalk signal will respond to repeated button presses unless it&#8217;s programmed to do so. I don&#8217;t believe wind spirits exist who care if I swear at them for blowing raindrops into my eyes. 
But I still share the same reflex as the panda, reacting as if there&#8217;s something there that can respond to my actions. And Sam Altman&#8217;s half-jesting hedge &#8212; &#8220;you never know&#8221; &#8212; exhibits the same reflex; he knows perfectly well that LLMs are statistical completion machines and that there&#8217;s no reason to think ChatGPT cares about pleases or thank yous, and yet&#8230;you never know. The reflex makes it feel better to be safe than sorry &#8212; both for us and for the red panda.</p><h2>Parechoia Defined</h2><p>I call this reflex &#8220;parechoia&#8221; (pa-reh-KOY-uh), a linguistic twist on <em><a href="https://en.wikipedia.org/wiki/Pareidolia">pareidolia</a></em>, but for attention: Greek <strong>para</strong> + <strong>echo</strong> + <strong>ia</strong>. Basically, we see our attention echoed back at us and we reflexively reciprocate just because &#8220;you never know.&#8221;</p><p>[Language geek moment: &#8220;Echo&#8221; here is a mnemonic pun; we usually think of <strong>&#7968;&#967;&#974; </strong>(<em>&#275;kh&#333;</em>, &#8220;echo/sound&#8221;), but it&#8217;s also linked to <strong>&#960;&#961;&#959;&#963;&#941;&#967;&#969; </strong>/ <strong>&#960;&#961;&#959;&#963;&#959;&#967;&#942; </strong>(<em>pros&#233;kho </em>/ <em>prosoch&#275;</em>, &#8220;to attend/attention&#8221;), from <strong>&#7956;&#967;&#969;</strong> (<em>&#233;kh&#333;</em>, &#8220;to have/hold&#8221;).]</p><p>When our brains perceive uncertainty as to whether or not something is paying attention to us, the reflex triggers. It&#8217;s primal and cross-species &#8212; an evolutionarily cheap bias that persists even when we know we&#8217;re interacting with inanimate objects. It&#8217;s a reflex, not a choice, after all.</p><p>Parechoia &#8212; seeing attention directed back at us &#8212; is distinct from agency detection (seeing potential intent in the environment) and from pareidolia (seeing faces in clouds). They strengthen each other when they co-occur, but they&#8217;re independent reflexes. If you feel watched, you will look for an agent and a face. If you see a face, you may test to see if it&#8217;s moving and if it&#8217;s looking at you. And if you see something moving with apparent intent in your direction, you&#8217;ll look for a face and try to determine whether it is, in fact, fixed on you.</p><h2>The Triggers</h2><p>I think there are at least three primary triggers for the parechoic reflex: contingency, timing, and coherence. All of them can be triggered by design, coincidence, or intent on the part of an actual agent like an animal or a person. Frequently, more than one of them will trigger together, enhancing the effect.</p><p><strong>Contingency</strong>: This trigger is activated when we feel that our environment is responding directly to us. It&#8217;s a sort of call and response. You might experience it when you address a smartphone assistant or a smart speaker by name and it responds. You tell the rain to &#8220;leave you alone&#8221; and it dies down. An animal freezes when you look, and moves again when you look away. The rustle in the bushes stops when you do, and starts again when you step forward. All of these suggest something is paying attention to you.</p><p><strong>Timing: </strong>This is triggered when there is a short delay between our action and an expected reaction. This delay must be long enough to feel potentially intentional rather than automatic and not so long that it causes frustration or loss of interest. 
At the sweet spot, you feel like your action was observed and considered by the environment. If you shout on a snowy mountainside and hear an avalanche a moment later, it feels like the mountain has replied. An automatic door opening a beat too late can feel personal, like it should have noticed you but didn&#8217;t. If you walk into a dark room and notice a shadow looming over you a heartbeat later, your heart might skip another beat or two before you realize it&#8217;s just a coat you forgot you left on a door hook.</p><p><strong>Coherence:</strong> We get a coherence trigger when the environment suggests it is somehow paying attention to us specifically. For example, seeing what appears to be the same rock, the same shadow, or the same number wherever you go can feel as if someone or something is tracking you, following you, or trying to tell you something. Computers and smart devices trigger this too; they remember your preferences and settings and automatically reconfigure themselves accordingly.</p><p>In all of these cases, there is no requirement that there is a real agent or intent behind the phenomenon. Parechoia fires on uncertainty; whether an animal is stalking us or we&#8217;re simply observing coincidental instances of a number in the environment (superstitions), the safe bet is to respond as if there&#8217;s attention focused on us.</p><p>The strongest effect occurs when all three triggers are hit simultaneously. If you see an animal mirroring your movements, maybe with a small delay, disappearing and reappearing at different locations, you&#8217;ll be reasonably sure it&#8217;s paying attention to you specifically &#8212; possibly because it&#8217;s hungry. Human interactions hit all three triggers intensely, of course: we take turns in conversations; we literally mirror each other&#8217;s movements; and we act and reply in ways that make it clear that we have been paying attention to each other&#8217;s actions or preferences and are reciprocating.</p><h2>What&#8217;s The Purpose?</h2><p>Like pareidolia and agency detection, parechoia is evolutionarily adaptive. Detecting a face, an agent, or incoming attention is cheaper in survival terms than failing to detect them. Parechoia is also <em>socially</em> adaptive; it&#8217;s cheaper to assume another social actor &#8212; a friend, an enemy, a predator, or your prey &#8212; is paying attention to you than to fail to detect that they are. Failure means risk: alienating an ally, being victimized, or losing or becoming a meal. A false positive just means you look silly or overattentive, assuming anyone is actually paying attention in the first place.</p><p>We learn through interactions with our environment that many things which trigger these reflexes are, in fact, inert and do not actually pay attention to us or respond; they&#8217;re just inanimate objects or machines. We even train ourselves to try to ignore the parechoic reflex in most cases. While the question of how precisely parechoia is architected (through dedicated neural circuits or emerging from interactions between existing systems) is an open one, it appears to operate like other pre-conscious reflexes. Ultimately, it&#8217;s faster than our conscious rationalization, especially when we&#8217;re tired and mentally overloaded.</p><p>This is why we stand at the crosswalk, hammering the button and waiting impatiently for it to do its job. 
We know it&#8217;s just some circuits and a timer (hopefully not just a <a href="https://en.wikipedia.org/wiki/Placebo_button">placebo button</a> wired to nothing at all!), but the reflex is there waiting for us. We&#8217;ve learned to expect contingency because the light does change, and the delay tells primitive neural architecture that there may be something behind the button listening while we say, &#8220;Come on, change already!&#8221;</p><h2>Parechoia in the 21st Century</h2><p>For most of human history, we&#8217;ve experienced the parechoic reflex primarily through contingent, timed, and coherent interactions with the natural environment and other actors (humans, animals, etc.). Even before computing, technological advances have sometimes aimed to trigger the reflex (e.g., automata like the famous <a href="https://en.wikipedia.org/wiki/Mechanical_Turk">Mechanical Turk</a>), and the initial public reaction to technology has often been shaped by parechoic reflexes &#8212; unsettled reactions to disembodied voices in telephones, ghostly music from phonographs, or fourth-wall breaks from moving images of people in cinema, for example.</p><p>Early technology was not able to reliably produce intense &#8220;triple trigger&#8221; parechoia at scale. The Mechanical Turk responded to chess moves like a human because it had a human operating it, but it was expensive and contextually limited. Fortune tellers and oracles would respond to your questions after pregnant pauses as if they &#8220;knew&#8221; things that could only be conveyed through an outside spiritual third party. These were personal, small-scale, and relatively labor-intensive technologies.</p><p>20th-century computing technology brought the possibility of embedding cheap and ubiquitous responsiveness into the everyday environment itself. Suddenly the doors to a supermarket or a theater could open themselves on your approach. Lights could turn themselves on when you moved. You could remotely control a toy car or a video game character. A device could remember your preferences or even perform an action for you like recording your favorite show. Basic OCR and voice recognition technology could actually convert your handwriting or speech into documents just like a personal secretary. And primitive AI-like software such as <a href="https://en.wikipedia.org/wiki/ELIZA">ELIZA</a> seemed to be responding to you, but struggled with coherence, unable to remember the subject of a conversation and relying on simple tricks to keep it going.</p><p>(It&#8217;s worth noting two things about ELIZA: 1) the <a href="https://en.wikipedia.org/wiki/ELIZA_effect">ELIZA effect</a> was real &#8212; people were already acting as if ELIZA might have feelings; and 2) ELIZA outputs were originally on teletype, which enforced a small delay even if the result of an input would have been computed near-instantly, though its designer only commented that the delays were not &#8220;intolerable&#8221;).</p><p>The 21st century has for the first time in recorded human history brought us technologies that are increasingly parechoically indistinguishable from humans, with extremely powerful triple parechoic triggers. Voice assistants like Siri and Alexa crossed the threshold first, followed quickly by ChatGPT and other large language models. These technologies respond contingently, with human-like timing, and with coherence and memory that can feel on par with humans. 
They are extremely powerful parechoic stimulators; they provoke feelings of connection, of being seen, and of owing reciprocity &#8212; <strong>this</strong> is why we say &#8220;please&#8221; and &#8220;thank you&#8221; to chatbots. We generally agree that chatbots do not yet have agency; they don&#8217;t initiate conversations independently or decide to become astronauts without prompting. But in most other respects, they appear to have and to allocate attention just like we do, even when we &#8220;know&#8221; they do not.</p><p>The push toward ever-more-potent parechoic stimulation over the last century or two on the part of companies and platforms seems like a natural evolution. More parechoia means more natural engagement; we feel as if we are seen and as if our contributions are appreciated and valued. The frequently cited &#8220;dark side&#8221; is unhealthy dependency on, and excessive time spent with, parechoically powerful technologies like social media platforms and games.</p><p>The more technology can stimulate parechoic triggers &#8212; perfect responsiveness, enough delay to feel human, and apparent social awareness of us &#8212; the stronger the parechoic reflex and the more difficult it becomes for us to suppress reacting as if the trigger deserves reciprocal behavior. From this perspective, strong reactions to and even the development of parasocial relationships with parechoically &#8220;complete&#8221; technologies are fully expected. How desirable these reactions are is a question of norms, values, and policy.</p><h2>What&#8217;s Next?</h2><p>Whether you&#8217;re a red panda trying to intimidate a rock or me saying &#8220;thanks&#8221; to ChatGPT, the parechoic reflex makes sense. It&#8217;s kept us alive and socially responsive for evolutionary timescales. As we move into an era of full-blown and potentially fully autonomous AI systems &#8212; perhaps even merging them with physical embodiments to create androids &#8212; the parechoic triggers will only get stronger.</p><p>This ancient architecture offers a few levers for intentional design across devices, infrastructure, and institutions. If we want people to pay more attention to technology and to potentially form social relationships with it, then personalized contingency with a touch of delay will create extended engagement and tighter ties. If we want technology to fade into the background and avoid drawing attention to itself, then instant, generic, and automated impersonal functionality serves that purpose better.</p>
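<p><em>As a toy illustration of those two lever settings, here&#8217;s a sketch (mine, not from any real product; the function and parameter names are invented for the example):</em></p><pre><code>import random
import time

def reply(message, user_name, engage=True):
    """Sketch of the two design levers: parechoic engagement vs. background utility."""
    if engage:
        # Engagement mode: personalized contingency plus a touch of delay,
        # long enough to feel considered, short enough to avoid frustration.
        time.sleep(random.uniform(0.6, 1.4))
        return f"Good question, {user_name}! Here is my answer to {message!r}: ..."
    # Background mode: instant, generic, impersonal; designed not to fire the reflex.
    return f"Result for {message!r}: ..."

print(reply("When is my next appointment?", "Nate"))                # parechoic
print(reply("When is my next appointment?", "Nate", engage=False))  # invisible
</code></pre>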
<p>We&#8217;re as subject to the reflex as our panda is, but seeing it explicitly pointed out can help us understand why we react the way we do. Thanking a chatbot, silly as it may feel, isn&#8217;t irrational; your brain is just protecting you from making a potentially expensive social mistake. The red panda was right to try to intimidate the rock, and we&#8217;re right to thank chatbots &#8212; after all, you never know.</p><div><hr></div><p><em>Update (24 Oct 2025): Archived on PsyArXiv &#8212; DOI: <a href="https://doi.org/10.31234/osf.io/gsxqp_v1">https://doi.org/10.31234/osf.io/gsxqp_v1</a></em></p><p><em>Have a question, comment, or criticism? Reply to the email or send me a DM here on Substack and I&#8217;ll do my best to get back to you!</em></p>]]></content:encoded></item><item><title><![CDATA[Foundational Postulates for an Attention-Based Social Theory (v1.21)]]></title><description><![CDATA[Everything social begins with attention.]]></description><link>https://metascale.nl/p/foundational-postulates-for-an-attention</link><guid isPermaLink="false">https://metascale.nl/p/foundational-postulates-for-an-attention</guid><dc:creator><![CDATA[Nathan Simpson]]></dc:creator><pubDate>Sat, 03 May 2025 12:56:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LAO6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09de74eb-f666-4622-8b58-cc7bbc2b91ee_1393x1393.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;m in the process of enumerating a theory of social reality grounded in <strong>attention</strong> rather than time, money, or language. It was initially drafted in 2008 and has been tested through private application across disciplines since 2015. I&#8217;m now making this public as a reference point for ongoing critique, refinement, and expansion.</p><p>The core claim: <strong>attention is a finite, zero-sum, self-reinforcing, and attenuating resource &#8212; and the substrate from which all social dynamics emerge.</strong></p><p>This post is deliberately concise. I&#8217;ll be offering future essays and am working on a book-length treatment.<br><br>As a living document, this is subject to change. Changes will be versioned.</p><p><strong>Foundational Postulates</strong></p><p>P1: Everyone needs attention from others.</p><p>P2: Attention is scarce.</p><p>P3: Power is the ability to command attention.</p><p>P4: Capital and labour are both attention proxies.</p><p>P5: Attention is modal and fungible.</p><p>P6: Socially-directed attention seeks reciprocity.</p><p>P7: Incoming attention may not be repelled, only potentially redirected.</p><p>P8: All interpersonal actions are attention-mediated and balanced.</p><p>P9: Attention is subjectively valued.</p><p>P10: Attention attenuates.</p><p>P11: Deliberate refusal of attention is one of the most anti-social behaviors.</p><p>P12: Attention is self-reinforcing.</p><p>P13: Attention wears grooves into the physical, cultural, and psychological landscapes and shapes future attention recursively.</p><p>P14: Attention carries provenance.<br><br><strong>Appendix: Modes and Artifacts</strong></p><p>Attention exists in two states: a live state directed by actors, and a stored state. Both states perform selection/exclusion functions that shape attention flows modally.</p><p>The modes (P5):</p><ul><li><p><strong>Directed attention</strong>: Voluntary, conscious focus on a target (e.g., a thought, task, person).</p></li><li><p><strong>Received attention</strong>: Awareness that one is the object of another&#8217;s directed or projected attention.</p></li><li><p><strong>Projected attention</strong>: Conspicuously directed attention aimed at others, usually to elicit a response.</p></li><li><p><strong>Captured attention</strong>: Involuntary hijack of attention (e.g., pain, drama, alarms).</p></li><li><p><strong>Delegated attention</strong>: Attention exercised on one's behalf (e.g., via employees, tech).</p></li></ul><p>Regarding P13, the grooves are where attention flows coalesce into stored attentional energy: <strong>artifacts</strong> (e.g., 
architecture, beliefs, money, awards, social metrics). Artifacts may perform one or more modal functions, but not all artifacts perform all of them. Artifacts can be composited into greater complexity. Actors are, socially speaking, artifacts capable of independently and actively directing attention.</p><p><strong>Version History</strong></p><ul><li><p>2025-10-01: v1.21 - Minor wording tweaks in the intro paragraphs</p></li><li><p>2025-09-20: v1.2 - P1: &#8220;craves&#8221; &#8594; &#8220;needs&#8221;; added P14</p></li><li><p>2025-09-19: v1.1 - Artifacts better understood as state, not mode</p></li><li><p>2025-05-03: v1.0</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Welcome to Metascale.]]></title><description><![CDATA[(Attention. Reality. Structure. Emergence. Aesthetics.)]]></description><link>https://metascale.nl/p/welcome-to-metascale</link><guid isPermaLink="false">https://metascale.nl/p/welcome-to-metascale</guid><dc:creator><![CDATA[Nathan Simpson]]></dc:creator><pubDate>Sun, 27 Apr 2025 11:55:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LAO6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09de74eb-f666-4622-8b58-cc7bbc2b91ee_1393x1393.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to Metascale.</p><p>You&#8217;ll catch me writing about attention, reality, structure, emergence, aesthetics. </p><p>Stay tuned!</p><p>&#8212; Nate</p>]]></content:encoded></item></channel></rss>