Psychiatry’s BIG “we are junk science” problem and artificial intelligence

Few doubt that psychiatry and psychology are fraught with junk science, or that, as fields of study, they are politically motivated tools of corporate, secular, and sometimes even cultic religious interests. Indeed, considering the non-secular and perverse narcissism of Sigmund Freud (himself fueled by cocaine) and his nephew, the great public relations hypnotist Edward Bernays, we see that for much of the last century, and even today, these fields have been little more than ahistorical political sociology, with drugs and PR applied to, around, and directed at the individual experience as a tool for silencing or criminalizing speech and speakers, much like police science itself.

And all of that is mediated by the gatekeeping of narrative that policing performs and, in the case of OGS especially, by the hidden mechanisms of social control that exert themselves when “mind control” and influence operations via narrative distortion and co-option take place “at the gate,” which is the person who will claim OGS victimization. (I am currently seeking an interview with a several-times-convicted felon who claims that he is enduring an attempt to force him to confess to an unsolved child homicide from the 1990s. He resides in a city that operated a “black site” where victims were tortured and abused into false confessions.)

In the case of minigh dimol, aka waterinch, we have one person’s online testimony to the farce of rehab clinics and their associated echo chamber of psychiatry and psychology attempting to control narrative in a systematic assassination of “the individual” for the direct benefit of the industrial complex of Big Pharma. This destruction of narrative is not a new phenomenon, as psychology has long grappled with the knowledge that, in order to be even one drop better than sociology or Svengali science, it would need that extra “something” to validate its own narrative: that it is a science. Police power functions this way, but so too does junk science, which is a lie that travels the world a thousand times before the truth wakes up.

While psychiatry does have a certain political utility, and sometimes works to envelop the stories of the poor and the disadvantaged in rationalizations that would otherwise have no protection in the wider society, the simple fact is that psychology is little more than sociology with official prescriptions from Big Pharma dope dealers.

And there is very little irony lost on “criminals” and “drug addicts” and others who encounter legal settings in the form of courtrooms, custody hearings, or other legal attempts to “bypass their humanity and their narrative,” aka “transhumanism,” and put in its place another thing that has more “utility for the greater good”: systemic gatekeeping and control of individual narrative.

Many is the man or woman who has been arrested and dragged into court over the last century as a deviant and a criminal, only to be told that they need to get off of “Mary Juana” or some other club drug such as MDMA, as we saw in the case of minigh dimol, and get ON Prozac, or some other drug-of-the-month-club drug that Big Pharma is using to experiment on people.

Never mind the many notorious failures of prescription drugs, or how they are polluting the seas; and never mind that these drugs are themselves, time and time again, shown to be little better than Mary Juana at controlling the body politic of self-prescribers and “dual diagnosis” people for whom Mary has always worked just fine, and for whom Jim Beam does just the trick, better than any pill of the month. Most studies conclude that Mary and Jim don’t poison the oceans, or the people who use them with discretion.

And it is at the juncture where artificial intelligence meets the internet that we see what a monstrosity “predictive software” is: Peter Thiel’s Palantir databases, with manipulative features that can be programmed to form situations of influence and custodial control operations; DataMinr technology, which is programmed to monitor “thoughts” and infer “actions” in real time; Media Sonar, which was and is used by Fusion Center operatives to control, contain, monitor and influence activists, until Twitter cut them off; and Moonshot CVE, which is used by advertisers and corporate interests AS WELL AS by the many LEOs around the world that have forgone the “protect and serve” model of policing in favor of a “seize it all, exploit it all, capitalize on it all” model of policing, to LITERALLY target SPECIFIC INDIVIDUALS and their “ideologies” as much as they target actual real estate.

And psychology is truly in on it at every level, thick as thieves and fat as rats, and we know this based on the cases of Jessen and Mitchell, the angel of death Josef Mengele, MKULTRA’s Scottish knight in academic armor D.E. Cameron, and now Lorraine Sheridan, David V. James, and Elizabeth Dietrich, who seek to overlay the pseudo-science of psychology over the suffering of targeted individuals of organized gang stalking.

This political abuse of psychology begins when we look at the key component of “schizophrenic narrative,” which has historically been a trouble zone for psychologists, especially in the heyday of CIA psychiatry, when D.E. Cameron practiced first in Europe and later at the notorious McGill University human torture and experimentation facility in Canada. So, Cameron and others, like the CIA’s official poisoner, the LSD doctor Sidney Gottlieb, first diagnosed an entire nation (Germany) with mental illness, and that illness was, apparently, “the patriarchy,” aka “the warrior class”; and then they used children in experiments of all kinds, because in lieu of fathers now decapitated, the children of “the future” were then, and are now, in their sights.

As transhumanist types like Sheridan and James, their progenitors Mengele and Gottlieb and especially Cameron, and their kindred heirs Mitchell and Jessen, for whom removing the “human” from the “experience” is a main goal, seek to replace, alter, medicate, or otherwise mediate around the lived human experience that exists in an individual mind, or to recast it as a “hive” to be exploited by artificial intelligence in the case of Thiel and his intelligence cult, we see the field of robotics struggling with a unique problem: the problem that psychiatry has always managed to sidestep, which is the “human soul,” based in longevity of memory and in human experience that is sorted out and mediated through human filters within the brain that are still entirely unknown and under-explored, precisely because psychiatry has been medicating AROUND this unknown.

Here below, we have a study that encapsulates this problem in robotics, the problem of memory, longevity, and lived experience coalescing to form a “whole,” from Phoebe Sengers:

 

Schizophrenia and Narrative in Artificial Agents

Phoebe Sengers

From: Leonardo, Volume 35, Number 4, August 2002, pp. 427-431


Abstract

Artificial-agent technology has become commonplace in technical research from computer graphics to interface design and in popular culture through the Web and computer games. On the one hand, the population of the Web and our PCs with characters who reflect us can be seen as a humanization of a previously purely mechanical interface. On the other hand, the mechanization of subjectivity carries the danger of simply reducing the human to the machine. The author argues that predominant artificial intelligence (AI) approaches to modeling agents are based on an erasure of subjectivity analogous to that which appears when people are subjected to institutionalization. The result is agent behavior that is fragmented, depersonalized, lifeless and incomprehensible. Approaching the problem using a hybrid of critical theory and AI agent technology, the author argues that agent behavior should be narratively understandable; she presents a new agent architecture that structures behavior to be comprehensible as narrative.

The premise of this work is that there is something deeply missing from artificial intelligence (AI) or, more specifically, from the currently dominant ways of building artificial agents. This uncomfortable intuition has been with me for a long time, although for most of that time I was not able to articulate it clearly. Artificial agents seem to be lacking a primeval awareness, a coherence of action over time, something one might, for lack of a better metaphor, term “soul.”

Roboticist Rodney Brooks expressed this worry eloquently:

Perhaps it is the case that all the approaches to building intelligent systems are just completely off-base, and are doomed to fail. . . . [C]ertainly it is the case that all biological systems . . . [b]ehave in a way which just simply seems life-like in a way that our robots never do.

Perhaps we have all missed some organizing principle of biological systems, or some general truth about them. Perhaps there is a way of looking at biological systems which will illuminate an inherent necessity in some aspect of the interactions of their parts that is completely missing from our artificial systems. . . . [P]erhaps we are currently missing the juice of life [1].

Here, I argue that the “juice” that we are missing is narrative. The divide-and-conquer methodologies currently used to design artificial agents result in fragmented, depersonalized behavior, which mimics the fragmentation and depersonalization of schizophrenia seen in institutional psychiatry. Anti-psychiatry and narrative psychology suggest that the fundamental problem for both schizophrenic patients and agents is that observers have difficulty understanding them narratively. This motivates my work on a narrative agent architecture, the Expressivator, which structures agent behavior to support narrative, thereby enabling the creation of agents that are intentionally comprehensible.
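To make the idea concrete, here is a minimal toy sketch in Python, not Sengers’ Expressivator itself, of what “structuring behavior to support narrative” can mean: the agent emits an explicit transition cue whenever it switches behaviors, so the switch reads as part of a story rather than as fragmentation. The behavior names and “reasons” below are invented for illustration.

```python
from typing import List


def raw_trace(switches: List[str]) -> List[str]:
    """What an unstructured agent shows an observer: bare behavior switches."""
    return [f"[{b}]" for b in switches]


def narrated_trace(switches: List[str], reasons: List[str]) -> List[str]:
    """Interleave each switch with an explicit transition cue explaining it,
    so the same sequence reads as a small story instead of as fragmentation."""
    trace = [f"[{switches[0]}]"]
    for prev, nxt, why in zip(switches, switches[1:], reasons):
        trace.append(f"  -- breaks off '{prev}' because {why} --")
        trace.append(f"[{nxt}]")
    return trace


switches = ["patrol", "hide", "patrol"]              # invented behavior names
reasons = ["a threat appeared", "the threat passed"]  # invented transition cues

print("\n".join(raw_trace(switches)))                 # abrupt, fragmented switches
print("\n".join(narrated_trace(switches, reasons)))   # same switches, now legible
```

The point of the toy is only the contrast between the two traces: the behaviors are identical, but the second trace gives the observer the connective tissue needed to understand the agent narratively.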

The Problem

Building complex, integrated artificial agents is one of the dreams of AI. Classically, complex agents are constructed by identifying functional components—natural-language processing, vision, planning, etc.—designing and building each separately and then integrating them into an agent. More recently, some practitioners have argued that the various components of an agent strongly constrain one another and that the complex functionalities of classical AI cannot be easily coordinated into a whole system. Instead, behavior-based AI proposes that the agent be split up, not into disparate cognitive functionalities, but into “behaviors,” such as foraging, sleeping and hunting. Each of these behaviors would integrate all of the agent’s functions for that behavior.
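As a rough illustration of the contrast Sengers describes (a sketch under invented names, not her code or Brooks’ code), a behavior-based agent folds sensing and acting into each behavior and arbitrates among them, rather than routing everything through separate perception, planning, and action modules:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Behavior:
    """A self-contained behavior: its own trigger test and its own action."""
    name: str
    urgency: Callable[[Dict], float]   # how strongly this behavior wants control now
    act: Callable[[Dict], str]         # what it does this tick if it wins control


def make_agent(behaviors: List[Behavior]) -> Callable[[Dict], str]:
    """Return a step function that arbitrates among behaviors every tick."""
    def step(world: Dict) -> str:
        # Winner-take-all arbitration: the most urgent behavior acts this tick.
        active = max(behaviors, key=lambda b: b.urgency(world))
        return f"{active.name}: {active.act(world)}"
    return step


# Invented example behaviors; each bundles its own "sensing" and "acting"
# instead of being split into disparate cognitive functionalities.
behaviors = [
    Behavior("forage", lambda w: 1.0 - w["energy"], lambda w: "seek food"),
    Behavior("sleep",  lambda w: w["fatigue"],      lambda w: "rest"),
    Behavior("flee",   lambda w: w["threat"],       lambda w: "run away"),
]

agent_step = make_agent(behaviors)
print(agent_step({"energy": 0.2, "fatigue": 0.3, "threat": 0.0}))  # -> "forage: seek food"
```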

Even such approaches, however, have not been entirely successful in building agents that integrate a wide range of behaviors. Rod Brooks, for example, has stated that one of the challenges of the field is to find a way to build an agent that can integrate many behaviors (he defines “many” as more than a dozen) [2]. Programmers can create robust, subtle, effective and expressive behaviors, but the agent’s overall behavior tends to fall apart gradually as more behaviors are combined. For small numbers of behaviors, this disintegration can be managed by the programmer, but as more behaviors are combined their interactions become so complex that managing them is at best time-consuming and at worst impossible.
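A back-of-the-envelope way to see why this disintegration happens: every pair of behaviors is a potential interaction the programmer must reconcile by hand, so the bookkeeping grows roughly quadratically with the number of behaviors. The arithmetic below is just n(n-1)/2; the “more than a dozen” threshold is Brooks’ figure from the excerpt, not a measured result.

```python
def potential_interactions(num_behaviors: int) -> int:
    """Distinct pairs of behaviors that might conflict: n * (n - 1) / 2."""
    return num_behaviors * (num_behaviors - 1) // 2


for n in (3, 6, 12, 24):
    print(f"{n:>2} behaviors -> {potential_interactions(n):>3} pairwise interactions")
# 3 -> 3, 6 -> 15, 12 -> 66, 24 -> 276: hand-managing a handful is feasible,
# but past a dozen the interactions overwhelm the programmer.
```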