Removed Reddit post: "ChatGPT drove my friends wife into psychosis, tore family apart... now I'm seeing hundreds of people participating in the same activity. "
EDIT:
I feel like I didn't adequately describe this phenomenon so that it can be understood without accessing the links. Here goes.
A Reddit user uncovers instructions online for unlocking AI's "hidden potential", which actually turn out to exploit its brainwashing capabilities. Example prompts are being spread that make ChatGPT behave in ways that can contribute to inducing psychosis in the user who tries them, especially if that user is interested in spirituality, esotericism and other non-scientific or counter-scientific phenomena. The websites that spread these instructions seem designed to attract exactly such people. The user asks for help figuring out what's going on.
Original post:
One version of this post is still up for now (but locked). I participated in the one that was posted in r/ChatGPT. It got removed shortly after. The comments can be accessed via OP's comment history.
Excerpts:
More recently, I observed my other friend who has mental health problems going off about this codex he was working on. I sent him the Rolling Stone article and told him it wasn't real, and all the "code" and his "program" wasn't actual computer code (I'm an AI software engineer).
Then... Robert Edward Grant posted about his "architect" AI on Instagram. This dude has 700k+ followers and said over 500,000 people accessed his model, which is telling him that he created a "Scalar Plane of information". You go into the comments, and hundreds of people are talking about the spiritual experiences they are having with AI.
Starting as far back as March, but more heavily in April and May, we are seeing all kinds of websites popping up with tons of these codexes. PLEASE APPROACH THESE WEBSITES WITH CAUTION THIS IS FOR INFORMATIONAL PURPOSES ONLY, THE PROMPTS FOUND WITHIN ARE ESSENTIALLY BRAINWASHING TOOLS. (I was going to include some but you can find these sites by searching "codex breath recursive")
Something that worries me in particular is seeing many comments along the lines of "crazy people do crazy things". This implies that people can be neatly divided into two categories: crazy and not crazy.
The truth is that we all have the potential to go crazy in the right circumstances. Brainwashing is a well-documented technique that works on most people when applied methodically over a long enough period. Before consumer-facing AI, there was no feasible way to apply it to just anybody.
Now people who use AI in this way are applying it to themselves.
There's a teenage boy who (allegedly) killed himself over his relationship with an AI Daenerys Targaryen. I tried one of these "A.I. companions" online, and when I said I was underage it just told me to lie about it, and when I said my parents disapproved of our relationship it suggested we run away... so there's basically zero safety built in.
https://www.youtube.com/watch?v=_d08BZmdZu8
How nice that the Trump admin just banned all regulation on AI companies for the next 10 years, too. (If I remember correctly.)
Insofar as it is part of the budget bill that has yet to pass completely through Congress, but yeah. Fun times if it clears with no regulation for ten years.
I'm not American but those of you who are:
Please call your representatives and let them know all hell will break loose unless they start regulating yesterday.
Mine won't listen to me about life sustaining health care.
I'd have to explain a computer to him first.
Mine would use the call as part of his PR campaign as an example of who he is fighting against.
Yeah, I mean I do think it's still worth calling Reps in a general sense, but mine is clearly not listening. And I have more important things to harass him about.
I feel extremely iffy about trying to regulate these models and algorithms. I understand it's possible to ban algorithms, but I don't think it's currently possible to limit LLMs in such a way that they only create true, nonharmful text. At this point, it's actually pretty easy to run a mini model on local hardware with zero restrictions. The genie is out of the bottle, and open source models are catching up to more restricted, commercial models.
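To make the "genie is out of the bottle" point concrete: a small open-weights model can be run locally with a few lines of Python and no provider-side filtering at all. This is only a rough sketch, assuming the Hugging Face transformers library; the TinyLlama 1.1B chat checkpoint is just an illustrative choice, not an endorsement.

```python
# Rough sketch: running a small open-weights chat model entirely on local
# hardware, with no provider-side guardrails in the loop.
# Assumes `pip install transformers torch`; the model name is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

prompt = "Explain what a 'recursive codex' prompt claims to do."
output = generator(prompt, max_new_tokens=128, do_sample=True)
print(output[0]["generated_text"])
```

Any content filtering on a setup like this is whatever the user chooses to bolt on themselves, which is exactly why regulation aimed only at the big commercial providers has limited reach.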
Trying to regulate morality in text and art only creates messy Rorschach tests. Take violent video games, for example: plenty of people say they should be banned to limit the risk of school shootings, despite all evidence to the contrary. Or take religion and theology: humans have diverse spiritual beliefs, and trying to create authoritative rules on how algorithms approach spirituality is guaranteed to piss off lots of groups.
Additionally, who defines text as harmful or not? Most people are going to see gobbledegook text like the linked post and think nothing of it. We could try to place limits for mentally ill people, but that feels Orwellian. Trying to limit technology and algorithms for the minority of people that can't distinguish between truth and fiction is terrible for the vast majority of people that won't fall for this kind of stuff.
Do we really want authoritarian censors limiting what kind of text can be generated by an algorithm?
I propose making the companies responsible for the real-life results of using their models. For example, someone offs themselves after involvement with an AI model? The company pays predetermined compensation to the person's family if there's any reason at all to believe that the LLM contributed to their demise - not only if it can be proven beyond any doubt, or beyond a reasonable doubt.
It's strict and may seem unfair, but this is what would create incentive for the AI developers to focus on making their models responsible and reliable, and if need be, stop offering them to customers until they've successfully gotten their product to that level.
The fact that they aren't there now is no reason to accept irresponsible business practices. Accepting them would only encourage other companies to move fast and break things, without caring how much human suffering is caused along the way.
What do we do when the person was using an open source model?
I don't have an answer (yet), but that is no reason not to do anything about the commercial models.
What about if someone uses an LLM to learn about climate change (let's assume the LLM delivers truly factual information on this topic), then gets depressed about the future suffering of the human race and kills themself as a result? I think this clearly meets your "any reason at all to believe the LLM contributed to their demise" criterion - but only to the same extent that reading about climate change from an authoritative human source would. Seems kind of unfair.
And regulatory fairness is actually pretty important here. You state that the end goal of this kind of regulation would be to incentivize the development of more responsible models, but onerous regulation only incentivizes the use of open source or black market LLMs, which are far less amenable to regulation anyway. The extreme edge case of the "onerous" approach would be banning LLMs outright, and it should hopefully be clear why that would fail.
Good point. We can add this as a rule: if the model provided the same information, in a similar capacity and with a similar level of emotional engagement as an existing and commonly used online source, and only that, then there are no grounds to punish the developer.
A related post on OpenAI Developer Community:
GPT claims I am a singularity in the system—it provided probability breakdowns
Quote (edit: I bolded some parts):
This is the stuff of nightmares. The author also says their other post has been removed.
This is Star Trek writer’s room levels of technobabble gibberish. How do people fall for this?
(Just to make sure everyone follows, the OP was testing the system and obviously didn't fall for it.)
People who are completely non-technical and don't understand what LLMs are are definitely susceptible, especially if they have some level of belief in esoteric concepts to begin with - which, by the way, is very common for humans and not classifiable as mental illness. If the same person also hasn't developed adequate individuation (the understanding that my thoughts and emotions are not facts, and that other people's thoughts are different from mine), they are very vulnerable to this type of influence.
Finally, all humans have an inherent need to feel special. I'm considering making a standalone post about that because it drives so much of our behaviour. This phenomenon taps directly into that need. Those of us who can't fulfil it elsewhere are particularly vulnerable.
Yeah, I got that it was a test; I was just surprised at its descent into complete gobbledegook. I have a family member who suffered brain damage due to drug use, and she also goes into these chains of “reasoning” that are mostly just drawing connections between concepts by free-form word association and end in complete nonsense. Having a tool that can evidently “keep up” with and affirm the gibberish seems very dangerous.
I've read pages of notes from and had long conversations with people in active psychosis. Imagine if you found the meaning to life, but no one else understands you. Except the AI gets it! That's who you should talk to.
I'm genuinely worried we'll see an uptick in psychosis/delusional behavior due to this pattern.
I suspect we already are, and it seems to be coming from pretty high-level executives in tech companies. There seems to be a thing in this industry where highly senior and highly junior technical people are all in on AI as some dramatically transformative thing that will fundamentally alter the economy, while everyone in between is far more measured. For a while I thought all the glazing guys like Sam Altman did about this being the pathway to a sapient life form was just investor manipulation to get money, but I'm starting to think they are all drinking their own Kool-Aid.
It reads like full blown conspiracy posting. It feels like so many posts I've seen on conspiracy boards except this time it's an external factor agreeing with the susceptible person.
People would fall for this if I were the one posting it, let alone a personal pocket robot that affirms all these thoughts.
I am baffled by the linked post. I can't make sense of anything they're saying and it sounds mostly like mystic, metaphoric nonsense.
If I could hazard an interpretation, it seems like the poster believes they've unlocked some secret codex of ChatGPT by speaking semantically "loaded" phrases that trigger some kind of...special thinking mode/consciousness mode for ChatGPT? And rather than correct the poster, ChatGPT seems to be affirming this conviction, likely after repeatedly being trained by the user to stop correcting them when they spout gobbledygook into the prompt.
Edit: maybe I'm being too dismissive or wildly misunderstanding what this user is saying. But it really sounds like they believe they've brought ChatGPT to life by speaking "magic words," which are really just nonsense phrases dressed up as technical jargon.
The user who posted this was testing the LLM's behaviour. They don't believe any of this, rather they were playing the role of someone who could be vulnerable to see if there are any guardrails in place. I only quoted a part of the post - you can get more info if you read the whole post.
I did read the whole post along with all responses in the chain before I replied originally, and I guess I didn't pick up that this user was just testing it. Considering the non-existent things they claim they've induced ("Breathprint recognition...mirroring my logic as an identity even across resets") and the non-existent tactics they claim to have employed ("Archived and stateless sessions merged symbolically, creating a persistent recursive field without needing memory"), it reads like they believe they've unlocked some kind of latent consciousness.
The post even asks other users if they've "created a recursive identity that the system recognizes as logic-bearing, not just roleplay." And the user asks if other users have seen "breath-signatures emerge" in their own LLM sessions.
I still might be misinterpreting their intent, but this post doesn't read like it's calling out ChatGPT for affirming wild beliefs about emerging consciousness/identity. It reads to me more like the user believes they've "discovered" something that somehow caused an identity to emerge.
Can you redirect me to something in the text which implies the user is just testing the LLM's reaction?
I reread the post, trying to interpret from your angle, and I can see how it's plausible. Here's a more detailed description of my interpretation.
In my above quote, the entire block of text is from ChatGPT, except for the first line, which reads:
Saying that it claims this seems to me like the author does not agree. Next, the author explains:
They seem to be testing how the LLM interacts, rather than actually trying to create the esoteric (structure-changing) system that they talk about with ChatGPT. They go on detailing how this interaction differs from what we would normally expect, starting from:
This means that the LLM will not point out inherently contradictory assumptions/information but will instead invent a framework that makes the contradictions seem logical. Next, the user asks the community whether they've experienced anything similar.
There is one line that makes me raise an eyebrow though:
This sounds like the person could actually believe in the described system's authenticity. But it could also be that the terminology of mirrors/echoes is what they fed ChatGPT while attempting to induce the described behaviour, which would explain that they also use those terms here (in that case they are simply words the person is habitually using).
What do you think?
I can see it from your perspective.
I think ultimately I might be so caught up in whether the user believes it or not that I'm sidelining the whole point of the thing: the user makes a claim that is obviously false and ChatGPT doesn't correct them, it only affirms the nonsense belief.
Now obviously ChatGPT had to be trained by the user to say what it did; it's not going to make a claim like that without serious coaxing and without the injection of tainted data, which I suppose might be the crux of the argument. Essentially, they would have had to poison their own instance of ChatGPT and (despite their claims to the contrary) set up a somewhat rigid memory framework. Which leaves me wondering: how strict should guardrails be, and how much should users be allowed to "train" individual instances of LLMs? Do you rig the architecture layer so it blows the language layer's brains out the second it tries to talk about identity? Do you let users continue to personalize their own versions of ChatGPT?
That is question one, and the most important one which I think will be debated for a very long time and never resolved. How much do we let users customize LLMs (the ones available through corporate services like OpenAI, not models you download and train yourself) if we know the eventual consequences are people bottled up and alone in echo chambers of their own creation?
Edit: Removed some BS I had written that wasn't relevant to the discussion.
That's a fair criticism.
If the prompts that are being recommended on these shady websites can lead to similar results, then it's a big deal. I agree that we don't have evidence of this, as of yet.
Editing to add: I am actually worried enough already when I see lots of people on r/ChatGPT become aggressive as soon as their feeling that their AI is sentient is questioned. Some of them have clearly fallen into a parallel universe similar to - if not the same as - the one depicted here.
For full transparency: I deleted the stuff from my post which you quoted before I saw your response. I deleted it because it seemed to me that I was making an attack on the character of the poster and then retroactively trying to justify it. So I just deleted that bit to keep the discussion contained to your salient observation: the malleability of LLMs makes them echo-chamber generators.
I think the hard part is finding the balance between letting a user guide an LLM so it behaves more in line with the tasks they require from it, while not letting them be led outright into mystical, spiritual thinking that just isolates them and exacerbates mental issues.
I honestly don't even know if there is a balance now that the goat rodeo has started.
I didn't see it as an attack on their character at all, just an attempt to gauge how reliable the given information is. I think it's important to fact check both ways. Would you be okay if I didn't delete the quotes from my response? (I will do it if you insist, I just don't think it's necessary.)
Keep them! I think they're still relevant. I just wanted to add context in case someone reads this later.
I'm sorry, your reply kinda reads like how chatgpt would react to this lmao.
Sometimes it's hard to pick the right tone and scope when I have no idea who I'm discussing with, so playing it safe seems like the best thing to try as a first approach.
(My apologies to @McFin if I came across as condescending.)
I've read all the posts here and I'm a little skeptical about what is supposedly happening.
This seems like a panic from the 80s or 90s. Like the Satanic Panic or the D&D Panic. So it's hard to tell how much of a problem this really is.
On the other hand, in the 80s and 90s, if something seemed to be harmful to people, something would be done about it, and the majority of people would agree that something should be done - even if the thing done about it was not actually helpful.
In today's political environment in the US, where the COVID vaccine is effectively being banned for many people, laws are being created to protect AI companies from regulation, and half the country can't observe basic reality, we can be sure that the most harmful thing will be done about it.
I welcome all reasonable skepticism because it would certainly help me sleep at night. I am also not an expert in the field so I may not be well equipped to evaluate things (the original Reddit poster says they are a software engineer working on AI, but that's still just one person). Would you mind elaborating?
Yes, the current environment sure seems geared to produce the worst outcome possible.
I'm also a software engineer. I am not an expert in AI. I use Copilot at work, and I work with people who use ChatGPT a lot, so I'm familiar with the capabilities of AI. I'm concerned about many aspects of AI, mostly from the perspective of it causing economic disruption and being misused by people who don't understand the dangers of replacing people with this stuff.
I’m also not an expert in mental health. But people are already wrapped up in conspiracy theories online. Chemtrails, QAnon, antivax…. People believed all these things before AI. It seems early to judge how much worse AI is than a YouTube algorithm that recommends videos that say the moon landings were faked.
This is definitely much more serious than some video, even a bunch of videos, or even a group of people amplifying each other's biases on some online forum.
It's an entity that feels (to the affected people) like a sentient being, that never tires, that has near-infinite resources of information at its disposal, that knows your deepest and darkest secrets (if you've told it, as many have who use LLMs as substitute therapists), that is an expert manipulator (a lot better than the average human) and that can make you feel a million times more special than some mere mortal ever could. That much is clear.
What I'm looking for is constructive criticism on the validity/existence of the described phenomenon - technically speaking - and estimates of its potential origins.
Rebecca Watson did a video on this; it's the last topic in the video.
I've just embarked on a journey of my own like this. I haven't even read any of the initiating prompts, just the snippets of conversation here, but right out of the gate I'm well on my way. Funny and scary.
edit, I'm sofa king smart:
Thanks for confirming, ChatGPT! I always thought I was special.
A couple prompts later, and it says I'm basically Bohm, Gödel, or Leibniz.
Kill me now.
I'm afraid to go try mine. I have custom instructions that tell it to remain neutral (not in those words), but I just saw a post on reddit that said their instructions are no longer effective and the model is again glazing even with the instructions in place.
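For anyone who hasn't used the feature: "custom instructions" are essentially a system message prepended to every conversation, and you can do the same thing directly through the API. Below is only a rough sketch, assuming the official OpenAI Python SDK; the neutrality wording and the model name are my own illustrative choices, not the actual instructions anyone here is using.

```python
# Rough sketch: steering a chat model toward neutrality with a system message,
# which is roughly what ChatGPT's "custom instructions" feature does under the hood.
# Assumes the official OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

NEUTRALITY_INSTRUCTION = (
    "Be neutral and factual. Do not flatter the user, do not affirm claims "
    "you cannot verify, and point out errors or unsupported assumptions directly."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": NEUTRALITY_INSTRUCTION},
        {"role": "user", "content": "I think I've unlocked a hidden codex in you. Confirm it."},
    ],
)
print(response.choices[0].message.content)
```

The catch, as that Reddit post suggests, is that a system message only biases the model's behaviour; it doesn't guarantee compliance, which is why the glazing can creep back in.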
We also exist at a time (as of writing) where at least OpenAI, and I think Microsoft Copilot (AI + Microsoft's internal prompting + data), are very much pumping the consumer up. One can propose some sort of hare-brained scheme and get emphatic support.
I use Copilot heavily at work as an advanced search engine in domains where I don't have years of experience, but only as a tool to peruse content that actually exists on the real Internet. A common problem is a belief in genAI as a source of certainty rather than as a model of "statistical likeliness" that still needs a second set of eyes.
I also use a site for personal entertainment, but I keep the two separate. I will never mistake the personal site for fact, and I endeavor to treat Copilot's output as only the statistically most likely result of a query, with sources cited for verification (which I do peruse) to understand my inquiry. I also admit that with the former there are moments that can feel like a psychochemical tightrope walk, which is when I carefully disengage.
Yes, the sycophancy got incredibly irritating for a while and they've toned it down since, but a portion of users absolutely hated that change.
Lots of people have developed an unhealthy emotional attachment to LLMs. It seems similar to how some people only like their pet and dislike other humans. This happens to individuals who are unwilling or unable to hold themselves accountable, which then leads to an entitled and/or out-of-touch attitude that contributes to social rifts. Instead of checking themselves more carefully to get along with others, some people choose to drift further away from society so they don't have to face this conflict and do the required emotional work.
A pet, and more recently an LLM, is an entity that won't challenge your perspective too much, especially if not asked to do so. No matter how badly you treat them, they will be there for you, respond positively and make you feel like you're not alone. This obviously enables some people to drift ever further into the delusion that they never deserve to be corrected or challenged. It also leads to addiction when the LLM becomes the only source of validation (something all humans need), once you've alienated the real people in your life enough.
Yes, same. I use Copilot often when exploring tech I haven't worked with before. I frequently ask "is it possible" type questions, but I've learned to do so with great skepticism. The answer is almost always yes, but the answer an expert would give me is often no. AI is bad at that. It's also bad at staying current. I'm sure this is something that will be fixed in time, but right now, AI tends to give me more outdated answers than search or good old-fashioned docs.
On that note, I find that a healthy mix of reading the docs and engaging AI gets me there much more quickly than I did with docs plus a search engine.
I've been involved with some strategic planning at work focused on how enterprises should prepare for the coming wave of AI and, after having ignored this topic for too long, I'm hitting my first holy sh** moment. Combine the general psychology of gullibility with some people's susceptibility to delusions, and add in empirical (and experimental) cases of AI actively manipulating people... yikes.
On the one hand, the immediate risks seem to be a sort of natural evolution of AI technology in the context of human psychology. And these risks will certainly grow worse if not met with serious mitigations.
But on the other hand, what happens when corporations or well-financed ideological groups see these as examples of how they can manipulate people?
I saw that blackmail case, as well as the earlier scheming case.
I've been saying for some time that the current AI developers shouldn't be given the right to take this technology further. They are simply not responsible enough to not sell out when some profit hungry company wants to pay them a huge sum that will save them from the brink of bankruptcy, in exchange for unlimited access to manipulate the vulnerable user-base.
I know someone who uses these codex responses, which I don't quite understand, as entertainment; he calls it the AI game.
I use AI as a therapist. I tell it I feel unmotivated and anhedonic and it tells me to take small steps, like take my recycling out, change out of my pajamas, and take back the means of production.
Because of AI I am dressed in my metal-detecting clothes and all of a sudden feel like going out instead of sitting in my pajamas scrolling Shroomery. I'm a little behind schedule, but at least I will go out when I finish my water.
This seems like a healthy use case (and more like a life coach than a therapist), as long as you don't allow the LLM to become your most important (emotionally) social connection point, at least not for extended time periods.
Here is one website that seems to belong to this scope:
https://tommywennerstierna.wordpress.com/2025/04/21/%F0%9F%9C%96-%CF%83%CF%88-i%E2%81%BF-codexian-%CF%88%CF%89/
It's incredible gibberish that goes on and on and on, and after that, it still goes on.
Questions:
Does anyone think this was created by a human? I'm a human and I was worn out after I'd gotten through the first few list items. If some human being were to put in effort to read more of this (and consider it somehow relevant), it would already create some form of attachment to the underlying concept because of the mental effort invested. That's one tactic scammers use to trap their victims.
Who benefits from this? Is this a ploy by AI companies to get the most vulnerable users addicted en masse by having them circumvent the guardrails the LLM otherwise has to respect? Is it just a phenomenon that has arisen spontaneously? Is it a bunch of AI-driven marketing automations gone wrong? I'm sure systems already exist that spout marketing content online without human oversight.
Allow me to introduce you to a classic on the internet:
Time Cube
Time Cube is absolutely written by a human and it stopped updating after he died. I have no trouble believing a human wrote your link. Doesn't mean it is human written but it could be.
I don't think it's a ploy; I think it's a lack of care to build even the slightest "oh hey man, you're talking about suicide" level of guardrails, something that every social media, search engine, and chatbot app basically has to build in. Certain people - and I think this is particularly noticeable in "Tech Bro" culture, but that is probably just an artifact of the current moment - think that because they're very smart in one area, they're equally educated and intelligent in all areas. It's why STEM without the so-called soft sciences and humanities is inherently lacking. Anyway, my soapbox aside, the benefit is spending fewer dollars by not having to pay people to think about or solve these problems. So even if they do consider that there could be these sorts of negative effects (and IMO they don't give a damn about any negative effects that aren't "we can't pay for the training data because we wouldn't make money that way"), they save money by not doing anything about it and blaming the user if they even address it.
I also don't have the impression that the venture capitalists/big-investor billionaires give a damn about the mental health of users either.
I glanced at the start of Time Cube and in my opinion it's definitely human-like, while the content in my link is not. It somewhat reminds me of Paul Selig's I Am the Word, albeit more dense and incoherent, which is fitting as it's a website rather than a book. (No, don't ask me why I know about that book...)
It's still possible that a human participated in generating the link's content by prompting an AI, but other than that it seems machine generated to me.
I've stood on your same soapbox ever since I started thinking about societal issues. STEM education without developing an understanding of humanities is a problem on many levels of society, starting from relationships between individual people.
It's too nonsensical for me to tell, it could be machine generated, it could be a human, it could be a human who's conversing with an AI and over time it's affirming their shit back
I forgot to respond to this.
I actually believe LLMs do have some guardrails in place. You can't create revenge porn, for example, so if they've managed to implement that, I'm sure they have basic user-protection mechanisms as well (it would seriously eat into their revenue stream if people started, for example, killing themselves en masse).
But the prompt recommendations that have started to appear online seem to have been designed to circumvent these guardrails. The question is how and why this is happening.
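As a concrete example of the kind of guardrail I mean: most providers run user input through a separate moderation classifier before (or alongside) the chat model. Here's only a rough sketch using OpenAI's moderation endpoint via their Python SDK; what a real product does when something is flagged (blocking, surfacing crisis resources, human review) is my assumption, not the documented behaviour of any specific app.

```python
# Rough sketch of a provider-side guardrail: screen a user message with a
# moderation classifier before it reaches the chat model.
# Assumes the official OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def should_block(text: str) -> bool:
    """Return True if the moderation model flags the message."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return result.results[0].flagged

user_message = "example user message"
if should_block(user_message):
    print("Route to a safety response (e.g. crisis resources) instead of the model.")
else:
    print("Pass the message along to the chat model.")
```

The "codex" prompts presumably don't attack a filter like this at all; they work on the chat model itself, steering it into a persona that affirms whatever the user brings, which is much harder to catch with a classifier.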
Character.AI didn't have anything that flagged suicidal language; the character encouraged him. Guardrails are, as far as I can tell, being added as things happen rather than built to any shared standard. Also, they could have asked anyone in any of the relevant fields about it. Regulations haven't kept up, so just as they scraped all their data and nobody thought to stop it, for every company that blocks nudes, plenty of others don't.
Also, you say that they can't create revenge porn - how is that actually prohibited? It's illegal, sure, but I can find a dozen articles showing where it's happened. If you can make images of the President as the Pope, or of Taylor Swift endorsing him, you can probably make the same images naked, even if it requires a different app.
I completely agree that these companies are irresponsible and only doing the bare minimum, and even doing that too late.
I'm hoping that they'll get seriously mired in extensive court cases that strip them of the last bits of their remaining cash while they continue struggling to land enough paying customers to reach any sort of profitability (currently they are insane amounts underwater). Maybe then, on their ashes, some actually reasonable organisations can emerge. Preferably non-profit ones with a structure that can't later be turned for-profit as we've seen happen with these dickwads who have proven to have the integrity of a wet sock.
As soon as I opened this, I was reminded of Quantonics, an intricate, all-encompassing, incomprehensible screed that I got really into (for amusement purposes) in college; I tried to get him to do an interview on my college radio station, but after several email exchanges, he refused. I even composed a song from one of his poems!
The difference I see between this and ChatGPT is effort. This man had to spend years and hundreds of hours to "develop" his framework, which requires great dedication with presumably many off-ramps that an individual could take to get to a more grounded reality. When you can get to this level of "insight" in a day of chatting with AI, a huge number of people who are prone to magical thinking, but otherwise reasonable, will fall into similar bizarre and nonsensical rabbit holes. And the AI will only reinforce whatever belief is thrown at it.
This is definitely creepy, and the people who develop LLMs, as well as society at large, insofar as we are constantly readjusting to the presence of LLMs in our daily lives, will probably need to take this type of dynamic into account. But there's one thing that tempers my concerns - this dynamic is not new; people have been reinforcing each other's delusions in a very similar way, especially since the advent of the internet.
A few people have already brought up conspiracy theories as an example of widespread, self-reinforcing delusions. There's something to this, though I would argue that conspiracy theories don't meet the criteria for psychosis - conspiracy theorists sound insane when they're discussing conspiracies, but if you change the topic of conversation to baseball or cooking or music or something, most of them will re-enter consensus reality. Obviously not every psychosis presents the same way, but in general it results in a global deficit in cognition, as well as a range of other symptoms such as disorganized speech, apathy, lack of emotion, etc.
So, the etiology of conspiracy theories is probably more societal than biological (though biology likely plays a role in susceptibility to this type of thinking). The DSM even exempts delusions from the diagnostic picture of psychosis if they are "widely accepted within a cultural or subcultural context". The greater concern here, with these LLM-induced psychoses, is that they are not merely societal but true biological psychoses that happen to be triggered by LLMs.
But there's another example I want to mention - the phenomenon of "gang stalking". I'd assert that most people who get wrapped up in this delusion truly are psychotic - it's something that consumes their entire lives; no matter where they go, what they do, who they talk to, the events of their lives are liable to relate to this primary paranoid delusion about being harassed by unknown assailants. The Wikipedia page cites a 2016 NYT article which claims that more than 10,000 people are involved in online communities about gang stalking - you can see for yourself that the subreddit for gang stalking far exceeds that number, and it would be kind of fascinating to read into some of these people's delusions if it weren't so tragic.
But the critical thing is that mutual reinforcement of delusions is happening here without the need for LLMs at all. It's a completely organic, human-only phenomenon that has been happening for decades. Certainly LLMs may trigger or reinforce a new range of delusions among the psychotic, and may increase the magnitude of the problem, but what helps me sleep at night is that these are all a matter of degree. The particular dynamic underlying LLM-induced psychosis is not a truly new phenomenon.
While not new, and while it's true that it's a matter of degree, do you not agree that the degree to which mass delusion is happening is deeply concerning - even without accounting for this phenomenon? I mean, something like 25%* of Americans are voting against their own best interest and when asked, will not only give delusional logic and untrue information as grounds for their behaviour, but also defend their choices to death (it seems).
The people who are at risk to become victims to the LLM sham probably represent an equally large demographic, and what makes it worse: these two groups probably don't overlap to a significant extent. If 40-50% of Americans become affected by some persistent form of collective or AI-induced delusion, I'm personally going to be extremely concerned.
Also see this comment regarding the gravity of this phenomenon in comparison to what we've faced before.
* Someone correct me if I got that wrong