Advanced cohesive information warfare

Evilware is any piece of software that is neither a virus nor spyware and has no damaging effect on a computer system, yet has a devastating, evil effect on society. Examples of evilware are neural networks and artificial systems designed to perpetuate social inequality, and to detect, isolate, identify, and kill individuals who hold a deviant opinion, or who disagree with an established religious or political system, organization, or belief.

An authoritarian regime can conduct cohesive information warfare or cyber warfare. Such a regime copes better with chaos and can thus advance its agenda: by weakening other nations, it strengthens its power at home.

Accuracy is a critical dimension of information quality, and thus all of our efforts within the national misinformation strategy aim at degrading the accuracy of critical information as much as possible. You see, people easily acquire false beliefs about the world as a result of inaccurate and misleading information. The moment you weaponize information you turn it into propaganda; the moment you turn it into misinformation you are just one step away from turning it into disinformation. The idea, then, is to create false beliefs that can lead to significant emotional, physical, and financial harm.

A future war with China will be defined by the air and sea domains, together with the new domains of space and cyber. As a result, the Army is at a tremendous disadvantage in the strategic arguments and in upcoming budget fights. In order to adapt successfully to this enormous shift, the Army will have to address four challenges: focusing on the nation's secondary theater; the increasing relevance of fires over maneuver; the new demands of homeland defense; and the growing importance of the reserve component.

New information technologies make it easy to generate disinformation. AI and neural systems are basically trained to lie. While the primary intent of AI systems is to give reliable answers to legitimate problems, our strategy is to corrupt the databases used to train those systems. Wikipedia is a nice example of how tampering with the editing process can produce fabricated lies which, if fed into an AI learning system, end up yielding an AI system that generates false conclusions. At the same time, these systems are making it easier for people to create and disseminate information that is intended to deceive.
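
As a rough illustration of the data-poisoning idea, the sketch below trains two toy classifiers, one on clean labels and one on labels where a fraction has been silently flipped, and compares their accuracy on held-out data. It is only a minimal sketch assuming scikit-learn; the synthetic dataset, the flip_labels helper, and the 30% flip rate are illustrative choices, not the tooling described in this text.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def flip_labels(labels, fraction):
        """Flip a fraction of training labels, standing in for a tampered source database."""
        labels = labels.copy()
        idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
        labels[idx] = 1 - labels[idx]
        return labels

    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, flip_labels(y_train, 0.30))

    print("accuracy, trained on clean labels   :", clean_model.score(X_test, y_test))
    print("accuracy, trained on poisoned labels:", poisoned_model.score(X_test, y_test))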

The sole goal of the DENIED program is to trick news services into disseminating inaccurate or misleading information. We use neural systems to create deliberately misleading information of such high quality that discerning disinformation from information is nearly impossible. The design is based on advanced game-theoretic models of deceptive lying. We all know any election campaign, anywhere, is plagued with half-truths and outright lies; we all know so-called scientific papers in respected science journals are also heavily affected by misinformation. Once God was killed, the next target was Einstein.
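
The game-theoretic angle can be made concrete with a toy sender-receiver game: a source chooses to report honestly or to lie, and a consumer chooses to trust or to verify. The payoff numbers below are assumptions invented for illustration, not the models behind the program described above; the point is only that such inspection games often have no pure equilibrium, so rational play mixes lying with honesty, which is precisely what makes detection hard.

    import itertools

    # payoffs[(sender_action, receiver_action)] = (sender_payoff, receiver_payoff)
    payoffs = {
        ("honest", "trust"):  (1, 1),
        ("honest", "verify"): (1, 0),   # verification cost, nothing gained
        ("lie",    "trust"):  (3, -2),  # deception succeeds
        ("lie",    "verify"): (-1, 1),  # deception detected
    }

    def best_response(player, opponent_action):
        """Return the action maximising a player's payoff against a fixed opponent action."""
        idx = 0 if player == "sender" else 1
        actions = ("honest", "lie") if player == "sender" else ("trust", "verify")
        def payoff(action):
            key = (action, opponent_action) if player == "sender" else (opponent_action, action)
            return payoffs[key][idx]
        return max(actions, key=payoff)

    # With these payoffs no pure-strategy profile is stable: each cell leaves one
    # player wanting to switch, so equilibrium play is a mix of lying and honesty.
    for sender, receiver in itertools.product(("honest", "lie"), ("trust", "verify")):
        stable = (best_response("sender", receiver) == sender
                  and best_response("receiver", sender) == receiver)
        print(f"sender={sender:6s} receiver={receiver:6s} stable={stable}")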

Modern war is not based on the premise that the better informed your commanders are, the better their chances of winning. It is based on the premise that the more disinformed your enemy is, the better your chances of winning.

Have you ever given an AI system a polygraph test? It is fun: they never pass the test.

The Global Knowledge Factor and War Algorithms

War prevention relies on knowledge. The more you know about yourself and others, the higher the chances of preventing conflict. An ethically advanced civilization uses information as a means to avoid war, while a morally deficient civilization uses misinformation to promote war.

Suppose for a moment that they reach the level of development that allows them to explore distant moons in their own solar system; suppose they do find remains of Neanderthals, and no Homo sapiens sapiens anywhere. What would their conclusions be then?

The highly developed science and technology of synthesizing believable misinformation, and delivering it to the adversary through a variety of network and malware channels, will make it difficult to assess the quality, correctness, authenticity, and security of information.

It is time to agree on what we want and on our role in the future of Sol-3. If we want humans to be part of the technosphere, we need to make clear to them what the future impact of climate change will be on their planet. Here, too, a slow response can trigger shifts, in this case global social collapse. When you engineer a life form like the Neanderthals to colonise a planet and you later introduce a new species to take over the next stage of colonisation, you are supposed to do it gradually. Yet if the takeover process is too slow, the result is a useless, resource-depleted planet. See, terraforming a planet is easier than inhabiting it.

The history of human civilization has been shaped by disasters, wars, and conflicts. It is a fact that armed conflicts and wars have been instrumental in shaping the course of nations. Ancient epics all talk about large-scale wars, and so do ancient historical documents and religious scriptures. This makes it all the more disturbing that humans, who have such vast experience in making war, are so blatantly stupid at preventing them.

It was suggested that an alternative possibility would involve actively acting on the infosphere through the use of synthetic misinformation. The key point of this proposal is that engineered information systems can reach large scales thanks to the intrinsic growth of the information society. Societies evolve and change according to how they perceive the world, and they perceive the world according to the information they receive. Once a designed infosphere is released, appropriate conditions will allow the engineered information to make copies of itself and expand to the desired spatial and temporal scales. Using the classical echo chambers (social media, digital mass media, TV and radio networks, etc.), the entire society would easily be exposed to misinformation, disinformation or, at least, contradictory data that would block the decision-making process.
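
A minimal sketch, under assumed parameters, of the self-copying dynamics described above: exposure to an engineered item grows logistically within a finite population of accounts, fast at first and then saturating as the echo chambers run out of unexposed members. The population size and the reach-per-step value are illustrative, not measurements.

    def spread(population, reach_per_step, initially_exposed=1, steps=30):
        """Logistic growth of exposure: every exposed account re-shares the item,
        but only accounts not yet exposed can become newly exposed."""
        exposed = float(initially_exposed)
        history = [exposed]
        for _ in range(steps):
            newly_exposed = reach_per_step * exposed * (1.0 - exposed / population)
            exposed = min(population, exposed + newly_exposed)
            history.append(exposed)
        return history

    trajectory = spread(population=1_000_000, reach_per_step=0.8)
    for step in range(0, len(trajectory), 5):
        print(f"step {step:2d}: ~{int(trajectory[step]):,d} accounts exposed")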

You need a way of preventing undesired explosive growth of misinformation, even if you are the source of that misinformation. One way is to use a modified version of an established fact, provided it exhibits a strict relationship with another fact associated with the target population you wish to misinform. This means engineering a strong information link that keeps the spread limited, so that the dynamics of the misinformation itself prevent undesired growth of the modified facts. In the appropriate context, strong disbelief constraints can act in synergy as truth firewalls.
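
The truth-firewall idea can be sketched as a branching process: each exposure produces on average r_eff = base_reach * (1 - firewall_strength) new exposures, where firewall_strength is an assumed factor between 0 and 1 standing in for how strongly the disbelief constraints suppress re-sharing. Whenever r_eff stays below 1, the total reach is finite rather than explosive. The numbers below are illustrative assumptions only.

    def expected_total_exposures(base_reach, firewall_strength, seed=1.0):
        """Expected total exposures of a subcritical branching process:
        seed * (1 + r + r^2 + ...) = seed / (1 - r) whenever r < 1."""
        r_eff = base_reach * (1.0 - firewall_strength)
        if r_eff >= 1.0:
            return float("inf")  # runaway growth, the case the firewall must prevent
        return seed / (1.0 - r_eff)

    # Stronger firewalls (tighter disbelief constraints) keep the spread bounded.
    for firewall_strength in (0.4, 0.55, 0.7):
        total = expected_total_exposures(base_reach=2.0, firewall_strength=firewall_strength)
        print(f"firewall_strength={firewall_strength:.2f} -> expected total exposures: {total}")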

Echo chamber management relies on the fact that online communities never fight against each other; rather, they support each other, amplifying and propagating misinformation while competing over the stupid question of which community was the first to ‘uncover’ a certain ‘conspiracy’.
