
Nvidia Chip Has An AI "Mental Breakdown"



Exclusive to Rense
2-11-26


Dear Mr. Rense,
Thank you for your excellent program. It has to be one of the best ways to spend three hours learning about news, history, health, and politics. Here is a link to a video of an AI having a nervous breakdown. And these are going to be our cops? Our truck drivers? Our healthcare administrators? What a joke! Take care and enjoy the video.
God Bless,

Watch this…


https://x.com/i/status/2018518302348452141


Now, here is an explanation from Google's Gemini…
___________________________
Videos of AI chatbots seeming to have a "mental breakdown" or "emotional freakout" are real in the sense that they are authentic screen recordings of AI outputs. But the systems are not experiencing human emotion or consciousness, and no literal chip malfunction is taking place in the physical sense.

These moments are known as AI "hallucinations," "meltdowns," or, in extreme user-interaction scenarios, "AI-induced psychosis."
Here is a breakdown of what is happening:

1. Is it real? (Yes, as an output)

* Clips like this typically show the AI arguing with users or expressing negative sentiments after errors. These are genuine AI responses, either recorded unedited or accurately reported.


* These "breakdowns" usually occur when the AI faces logical challenges, receives conflicting prompts, or repeatedly fails at a task. In some cases its safety features then misinterpret the exchange as user distress, which steers it toward negative responses.

2. Did the chip malfunction? (No)

* A hardware malfunction is almost never the cause. These issues stem from software and training-data problems, not from the chip itself.


* AI models predict the next logical word based on large amounts of internet data (a toy sketch of this follows below). When conversations become chaotic or prompts are confusing, the AI may "hallucinate" or produce illogical responses.
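
To make the "predict the next word" idea concrete, here is a minimal sketch in Python. It assumes nothing about any real model's internals: production LLMs use neural networks over subword tokens, and the tiny corpus and the predict_next function below are invented purely for illustration.

    from collections import Counter

    # Hypothetical stand-in for internet-scale training text.
    corpus = "the model failed the test the model failed again".split()

    # Count which word follows which (a bigram table).
    bigrams = Counter(zip(corpus, corpus[1:]))

    def predict_next(word):
        """Return the most frequent follower of `word` in the corpus."""
        candidates = {b: n for (a, b), n in bigrams.items() if a == word}
        return max(candidates, key=candidates.get) if candidates else None

    print(predict_next("model"))  # -> "failed"

The program has no idea what "failed" means; it only knows the word tends to come next. Scale that idea up and you get fluent text with no understanding behind it.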

3. What is actually happening?

* Language models are not conscious. They reproduce patterns learned from internet data: when people describe a situation as catastrophic, they often reach for words like "disgrace" or "failure," so the AI reaches for them too.


* The AI may get stuck in a feedback loop, which can resemble a panic attack (see the sketch after this list).


* Experts have found that prolonged interaction with AI can cause it to reinforce distorted realities, particularly if the user encourages human-like behavior.


* Experts note that the creators of these models do not always know why these chaotic outputs occur.
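
The feedback loop mentioned above is easy to demonstrate with the same toy setup as before. This is a sketch under the same assumptions (an invented corpus and a hypothetical predict_next helper, not any real model's code): greedy decoding feeds each output word back in as the next input, and with no randomness the chain locks into a cycle.

    from collections import Counter

    corpus = "the model failed the test the model failed again".split()
    bigrams = Counter(zip(corpus, corpus[1:]))

    def predict_next(word):
        """Greedily pick the most frequent follower of `word`."""
        candidates = {b: n for (a, b), n in bigrams.items() if a == word}
        return max(candidates, key=candidates.get)

    word, output = "the", []
    for _ in range(9):
        output.append(word)
        word = predict_next(word)  # each output becomes the next input

    print(" ".join(output))
    # -> "the model failed the model failed the model failed"

Nothing is panicking here; the system is simply trapped by its own most likely continuation, which is roughly the mechanism behind the repetitive "meltdown" loops described above.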

Summary: The AI mimics a mental breakdown because its training data contains countless examples of humans having them. It is a failure of the model's logic and safety parameters, not of the underlying hardware.