| Artificial intelligence |
(Computation technology, Software, enemy image)
|Interest of||• Ajay Agrawal|
• Sam Altman
• Yoshua Bengio
• Hans-Christian Boos
• Nick Bostrom
• Matthew Daniels
• Marvin Minsky
• Elon Musk
• National Security Commission on Artificial Intelligence
• Andrew Ng
• Peter Norvig
• Omar Al Olama
• Benjamin Pring
• Stuart J. Russell
• Lila Tretikov
|A branch of computer science intended to enable computers to carry out tasks previously performed by human beings.|
Artificial intelligence (AI) is the branch of computer science which seeks to re-create the "intelligence" of human beings in software. It gained popularity in the 2010s with deepfake technology, and even more in the 2020s with near-perfected automated chatbots, in which Google and Microsoft began investing heavily.
- 1 Official narrative
- 2 Problems
- 3 Natural language parsing
- 4 Semantic web
- 5 Concerns
- 6 Psychological aspect
- 7 Warfare
- 8 In popular culture
- 9 An example
- 10 Related Quotations
- 11 Related Document
- 12 References
Official narrative
In 2021, Stuart J. Russell delivered a series of four lectures on the theme "Living with Artificial Intelligence". Lecture 1 was entitled "The Biggest Event in Human History". A "golden age for humanity" could come along, although "machines don't have an IQ". Russell defines intelligence as follows: "Humans are intelligent to the extent that our actions can be expected to achieve our objectives." Moreover, this poor definition of intelligence means that the end justifies the means if it is "successful" in achieving the goals of the programmer: "All those other characteristics of intelligence; perceiving, thinking, learning, inventing, listening to lectures, and so on, can be understood through their contributions to our ability to act successfully." He does not explore who gets to define what constitutes success, and for whom.
Problems
AI is a misnomer. Machines do not exercise free will but execute algorithms invented by humans. These may be complex, but are in any case deterministic, at least in a stochastic sense. AI therefore serves as a rationalisation to deny responsibility for the choices the programmer makes. The discussion avoids the topic of democratic control and largely denies any conflict of interests in human societies. The problem is not that the outcome of computer algorithms cannot be foreseen, but which factions of society get to control the (very foreseeable) outcome.
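The determinism point can be illustrated with any pseudo-random number generator: a process that looks "random" replays identically when given the same seed. A minimal sketch (the function name and seed values are illustrative, not from any particular system):

```python
import random

def noisy_decision(seed):
    """A 'stochastic' process: given the same seed, its output is fully determined."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

# Two runs with the same seed replay identically, the apparent randomness notwithstanding.
run_a = noisy_decision(42)
run_b = noisy_decision(42)
assert run_a == run_b
```

The "randomness" in such software is thus deterministic in exactly the stochastic sense described above: unpredictable to an observer who does not know the seed, but fully reproducible by whoever controls it.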
Managing large data sets and computation speed are not equivalent to "intelligence"
On closer inspection, most tasks attributed to AI are not intelligent at all. In relatively simple games with complete information, such as chess, simple brute force is the major driving force: the decision tree is computed and the win follows. In more complex games such as go, probabilistic models and neural nets are used to reduce the complexity to manageable levels. On more loosely defined tasks, such as facial recognition or inferring intention from motion patterns, the success rate is often overstated. Pattern recognition is part of human brain activity; but intelligence is much more related to the conclusions a human draws from this information, including irrational conclusions. In a complex system operating at the borderline of chaos, so-called irrationality might turn out to be advantageous in the long(er) run.
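The brute-force point can be made concrete in a few lines. The sketch below (purely illustrative, using the trivial game of Nim: take 1 or 2 stones, whoever takes the last stone wins) exhaustively searches the complete game tree; such games are solved by mechanical enumeration, with no "intelligence" involved:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(stones):
    """True if the player to move can force a win in simple Nim
    (take 1 or 2 stones; taking the last stone wins).
    Pure brute-force search of the complete game tree."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # A position is winning if any legal move leaves the opponent losing.
    return any(not winning(stones - take) for take in (1, 2) if take <= stones)

# Known result: positions divisible by 3 are lost for the player to move.
losing_positions = [n for n in range(1, 10) if not winning(n)]
```

Chess engines apply the same exhaustive principle, merely pruned and cached more aggressively; the difference from this toy is one of scale, not of kind.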
Learning is not equivalent to "intelligence"
Deep learning algorithms may take on specific non-rational styles of a specific human actor, but fail to translate these to other scenarios or playing styles. Human neural networks are adaptive to input streams of millions of channels, thanks to the growth, pruning, neuro-regeneration and neuro-plasticity characteristic of living beings. (Porges)
AI touches on the question of whether real randomness (and therefore real choice) exists or whether, in the end, everything is determined. Quantum mechanics answered this non-trivial question: real unpredictability in fact exists as a foundation of physics at the particle level. AI propaganda tries to convince the public otherwise and fosters a deterministic world view, which may push "believers" into a state of learned helplessness or dumbing down (the belief that one is too dumb to make decisions by oneself).
Natural language parsing
- Full article: Natural language parsing
Natural language parsing, i.e. understanding ordinary human language, has long been the holy grail of artificial intelligence, as it offers the chance to communicate with computers as easily as with other people. However, its feasibility (or even possibility), together with what actually constitutes "intelligence", remains an open question.
- Full article: Reddit
Patents registered by Microsoft in 2021 seem to indicate that natural language parsing technology has matured to the point that it can imitate persons. Initial reports about the development of the technology appeared in late 2016.
Auto-generated biographies and website articles
Semantic web
- Full article: Semantic web
After creating the World Wide Web, Tim Berners-Lee announced that he was interested in a semantic web, that is, a global web of documents that are not merely human-readable, but machine-readable. To this end, the W3C developed the Resource Description Framework (RDF), a data format intended to express meaning, which could in theory be auto-translated into human languages by software. This underpins the Semantic MediaWiki software which is used on this website. Each page on this site has a small blue RDF icon in the top right hand corner, which presents the page in RDF.
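For illustration, RDF expresses meaning as subject-predicate-object triples. A hypothetical sketch in Turtle syntax (the `ex:` names and URIs below are invented for the example, not taken from this site's actual RDF output):

```turtle
@prefix ex:   <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Machine-readable statement: "Tim Berners-Lee created the World Wide Web."
ex:WorldWideWeb  ex:createdBy  ex:TimBernersLee .
ex:TimBernersLee foaf:name     "Tim Berners-Lee" .
```

Each line is one triple; software can follow the predicates mechanically without any understanding of the English labels.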
The "quality" of an AI depends on the datasets used; the selection of, and training on, these datasets depend on the intentions of the humans involved.
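A toy sketch of this point, assuming nothing beyond the standard library: a trivial majority-vote "model" trained on two differently curated samples of the same population gives opposite answers, purely because of who selected the training data (all names and numbers here are invented for illustration):

```python
from collections import Counter

def train_majority(labels):
    """A minimal 'model': predict whatever label dominated the training set."""
    return Counter(labels).most_common(1)[0][0]

# The same underlying population ...
population = ["approve"] * 50 + ["deny"] * 50

# ... but two humans curate different training subsets.
curated_by_a = population[:30]   # first 30 records: all "approve"
curated_by_b = population[70:]   # last 30 records: all "deny"

model_a = train_majority(curated_by_a)
model_b = train_majority(curated_by_b)
```

Real training pipelines are vastly more complex, but the dependence on human selection decisions is of the same kind.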
Global control grid
- Full article: Global control grid
The idea of a global control grid has long been a fantasy of megalomaniac technocrats. As the 21st century progresses, more and more tasks formerly done by human beings are being turned over to software.
Except for systems expressly designed for openness, like SMW, AI may be used to create what in Nazi Germany was called Gleichschaltung, i.e. conformity (in thinking). Examples include auto-suggesting search terms, "helping" people by removing "clutter" or other content, and, on the other side, proposing "liked" content. Automation often results in limited choices for the computer layman, be it automobile driving or web surfing.
The internet user is bombarded today with "what others think/like/search now", etc. (note the present tense). Solomon Asch's experiments have shown that such group pressures create a stunning dynamic: 30% (or more) of subjects under similar conditions gave up their personal opinion and followed the (in Asch's experiments manufactured, i.e. falsely portrayed) consensus. Robert Epstein and others have reproduced similar results concerning Google's (and other search engines') power to influence elections and make businesses a failure or success; see for example The new mind control.
Early efforts were made by guessing what action a user is likely to take next (i.e. proposing the last action again) or guessing what "problem" he is trying to solve. These mostly annoying efforts resulted in "professors" popping up while one tried to compose letters (Microsoft Office) and similar help robots; more sophisticated troubles include search and filter bubbles, targeted ads, censorship, automatic user blocking and heuristic (browser) fingerprinting based on AI.
False positives are of concern, as the algorithms are proprietary: loans, for example, were denied to people living in poor neighborhoods based on secret evaluations of a host of private data unknown to the customer of a bank. AI may be used to decide about social control techniques, as the Chinese Social Credit System does, or to deny access to public resources, log-ins, or account creation (Google et al.) as conditioning (aka punishment), in effect bypassing the legal system. AI easily creates an uncontrollable jurisdiction.
The complexity of social interactions creates feedback loops in such information management systems (something a programmer would call recursion), which lead to self-similar patterns in thinking (insofar as thinking is based on past memory and experience, which it largely is), i.e. repeating the same patterns over and over, hopefully on a smaller scale. The power of this effect can readily be observed in the commercially-controlled media, which mostly cites from and inside its own network of information channels. Other examples include hysteria, panics, delusions and stock market bubbles/crashes.
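The self-reinforcing dynamic can be simulated in a few lines: a "rich-get-richer" loop in which channels are cited in proportion to how often they were cited before. Starting from a nearly equal state, attention tends to concentrate on a few channels (a sketch; the channel count, round count and seed are arbitrary choices for the illustration):

```python
import random

def simulate_citations(n_channels=10, rounds=5000, seed=0):
    """Each new citation picks a channel with probability proportional
    to its existing citation count: a simple positive feedback loop."""
    rng = random.Random(seed)
    counts = [1] * n_channels  # start (almost) equal
    for _ in range(rounds):
        winner = rng.choices(range(n_channels), weights=counts)[0]
        counts[winner] += 1
    return counts

counts = simulate_citations()
# The feedback loop concentrates attention: the top channel typically
# ends up with far more than its "fair" 1/10 share.
top_share = max(counts) / sum(counts)
```

The early, essentially random advantages get amplified rather than averaged out, which is the mechanism behind the self-citing media networks and bubble dynamics described above.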
To avoid oscillation from uncontrolled and possibly dangerous feedback loops, diversity, i.e. random variation in these channels, is necessary. Overuse of AI poses the risk that these variations are reduced in number and effect, or are viciously abused for purposes of power and social control.
Psychological aspect
While the concept of "Artificial Intelligence" is clear, the process whereby it happens is opaque to a lot of people, giving the technology a mystique that can be exploited for psychological purposes.
All software is made by human programmers who control it at every stage. Whether writing it using computer code or selecting data sets for it, human input is essential, and can therefore be used for political purposes. Nevertheless, AI software may appear to be somehow impartial, fooling people into accepting its decisions as easily as they do life's vicissitudes beyond human control.
"World computer simulations", predictive scenarios (e.g. Event 201) and large-scale disaster preparation exercises may, unwittingly or not, fall into this category. They may help to unfold, and at the same time control, a self-fulfilling prophecy.
For example, BlackRock's trading robot (called "Aladdin") can move enough money to create (and exploit) monetary bubbles. Researched Google bubbles include the terms "terrorism", "covid" and "miserable failure"; the last returned "G. W. Bush" as the first hit while the bubble was in full swing (the system had to be halted to stop it skyrocketing). While in rare cases funny, the destructive effects on war, the world economy, personal freedom, public opinion and totalitarianism (resulting in "Gleichschaltung") are underestimated.
Warfare
It is no secret that the enemy is using AI and neural networks. The computer processes information and "highlights" troop concentrations via mobile signals, messengers and phones: where the vulnerable positions are. The AI identifies weak and strong positions, where to attack, and where it is better not to move. When all the moves have been calculated, all that remains is to make a decision.
An example: the Ukrainian generals did not accept the plan for a counter-offensive at Kharkov; they were against it. The handlers insisted on the plan; the Americans pushed it through. Contrary to the opinion of the AFU command, an order was given to attack Balakleya-Izyum, etc. The Ukrainian army was unsure of this operation and resisted; it was a forced order. But the result worked; it went like a tank through the mud.
Ukraine uses western interactive maps on which everything is completely marked: tank columns, concentrations of equipment, etc. Any chief of staff (brigade, regiment, etc.), to exclude "friendly fire", makes online corrections, and they do not have the difficulties with passing information that we do (but it does not always work)...
In popular culture
The second season of Black Mirror has an episode in which people can chat with imitations of deceased friends through an AI trained on their social media history, to ease the period of mourning.
|Algorithm manipulation||Where algorithms on Social media are used in order to promote the Official narrative.|
|Sam Altman||“Both publicly and internally, leaders at Microsoft are cheering OpenAI's apparent return to normalcy following days of chaos.
The ChatGPT creator, in which Microsoft has reportedly invested some $13 billion, has been on a roller-coaster ride that began Friday when its board abruptly fired Sam Altman as CEO and ended with his return and the appointment of a new board early Wednesday.
Following Altman's ouster, Microsoft swooped in to hire him along with OpenAI co-founder and president Greg Brockman — who quit OpenAI in protest over Altman's termination — to lead a new advanced AI research team at Microsoft, and also offered to hire any other OpenAI employees who wanted to leave. Sam Altman is returning to OpenAI as CEO after his ousting last week, and three board members that participated in his termination have been removed. At that point, Microsoft, already majority owner in OpenAI, was positioned to essentially "acquire" OpenAI by absorbing its talent, after nearly all the startup's 770 or so workers signed a letter saying they would take Microsoft up on the offer unless Altman was reinstated. However, a deal was ultimately reached for Altman to return to OpenAI rather than allowing the $90 billion company to collapse, in what Fortune tech reporter David Meyer wrote is an outcome that "is pretty ideal for Microsoft."”
|Greg Coppola||“I’ve just been coding since I was ten, I have a Ph.D., I have five years of experience at Google, and I just know how algorithms are. They don’t write themselves. We write them to make them do what we want them to do.”||Greg Coppola||July 2019|
|Microsoft||“Following Altman's ouster, Microsoft swooped in to hire him along with OpenAI co-founder and president Greg Brockman — who quit OpenAI in protest over Altman's termination — to lead a new advanced AI research team at Microsoft, and also offered to hire any other OpenAI employees who wanted to leave.
Sam Altman is returning to OpenAI as CEO after his ousting last week, and three board members that participated in his termination have been removed. At that point, Microsoft, already majority owner in OpenAI, was positioned to essentially "acquire" OpenAI by absorbing its talent, after nearly all the startup's 770 or so workers signed a letter saying they would take Microsoft up on the offer unless Altman was reinstated. However, a deal was ultimately reached for Altman to return to OpenAI rather than allowing the $90 billion company to collapse, in what Fortune tech reporter David Meyer wrote is an outcome that "is pretty ideal for Microsoft."”
|Document:Off the Leash: How the UK is developing the technology to build armed autonomous drones||Article||10 November 2018||Peter Burt||The United Kingdom should make an unequivocal statement that it is unacceptable for machines to control, determine, or decide upon the application of force in armed conflict and give a binding political commitment that the UK would never use fully autonomous weapon systems|
- "The Biggest Event in Human History"
- Anton Zeilinger, "Albert Einstein und die Natur des Lichts" ("Albert Einstein and the Nature of Light"), lecture (in German) at Salzburg University, Austria, 21 September 2009, YouTube ID: SqlMvQ-g6Yo (local copy)
- Zach Vorhies | Google's plans to dominate, then DECIMATE humanity, Brighteon conversations with Mike Adams
- [Whois Record for WikiAge.org - Created on 2019-05-07] http://archive.today/2021.04.09-223919/https://www.b.wikiage.org/wife-and-family-details-to-know/
- [Whois Record for XYZ.ng - Created on 2019-06-11] http://archive.today/2021.04.09-224328/https://www.xyz.ng/en/wiki/who-is-dr-reiner-fuellmich-insight-on-his-wikipedia-wife-and-family-1046990
- [Whois Record for TheArtsOfEn...ainment.com - ASN: United States Of America AS13335 CLOUDFLARENET, US (registered Jul 14, 2010)] http://archive.today/2021.04.09-224403/https://www.theartsofentertainment.com/who-is-dr-reiner-fuellmich-all-you-need-to-know-about-dr-reiner-fuellmich-family/
- [Whois Record for CelebPie.com - Created on 2019-08-15] http://archive.today/2021.04.09-224448/https://celebpie.com/dr-reiner-fuellmich-wife-family/
- A project at MIT focused on training an AI that describes pictures to always see something bad. - https://web.archive.org/web/20180702084703/http://norman-ai.mit.edu/
- Meloy uses the term unconscious simulations to describe a very similar process as a core element of the psychopathic mind: the psychopath avoids going insane by imposing his inner conflicts onto the world. - Meloy, J. Reid, Ph.D., The Psychopathic Mind: Origins, Dynamics, and Treatment, Jason Aronson 2002