Information Barrier?


Original source: Telepolis archive (article from 1997). Translated by Robert Boettcher, April 2014.


03.01.1997

Everywhere there are complaints about the information deluge, and there is no way back anymore. While some dream of smart agents, intelligent servants that take over the pain of having to make a choice, others shut down or let themselves be swamped by senseless amounts of information. Stanislaw Lem considers whether the detonated "megabit bomb" hits not only the barriers of human capacity but technical barriers as well. The continuing replacement of human cognition by intelligent systems has, in any case, one result for him: the possible early retirement of most humans.

In the book Summa Technologiae, which is by now a good thirty years old, I introduced the metaphorical terms "megabit bomb" and "information barrier". The key to the capability for cognition, I wrote back then, is information. The rapid increase in the number of scientists since the Industrial Revolution has brought about the commonly known phenomenon that the amount of information which can be sent through the one channel of science is limited. Science represents a channel which connects civilization with the non-human and the human plane. The increase in the number of scientists represents an increase in the capacity of this channel. This process, however, like any exponential growth, cannot be sustained over an arbitrarily long period of time. When candidates for the sciences run short, the explosion of the megabit bomb will hit the information barrier.
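
The argument is quantitative at its core: a quantity that doubles at a fixed rate outruns any finite pool of candidates within a handful of doubling periods. A minimal sketch in Python, with invented, purely illustrative numbers (none of them come from the article):

 # Illustration with invented numbers: exponential growth of the number
 # of scientists against a finite pool of candidates.
 pool = 8e9            # hypothetical ceiling: roughly everyone on Earth
 scientists = 1e6      # hypothetical starting population
 doubling_years = 15   # hypothetical doubling period
 years = 0
 while scientists < pool:
     scientists *= 2
     years += doubling_years
 print(f"ceiling reached after ~{years} years")  # prints ~195 years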

Limits of information processing

Has anything changed in this picture over the last thirty years? First I would like to mention that several attempts have been undertaken to calculate the absolute productive efficiency of computers of the "final" generation from the known absolute speed limits of physics, meaning the speed of light. The results of these estimates, however, diverge widely. Assuming values specific to physics, using the speed of light and the uncertainty principle with the Planck constant, it was calculated that the most powerful computer, able to process data at the maximum attainable speed, would be a cube with an edge length of three centimeters. A premise left unmentioned in these assumptions, however, was a purely iterative type of calculation, which in its simplest form characterizes the Turing machine, whose cells can take only one of two states: zero or one. One can execute every calculation of a linearly processing computer on the simplest Turing machine, only that a Turing machine will take eternities for what a Cray can do in a fraction of a second.
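
Why a size limit follows from the speed of light can be sketched in two lines: a strictly serial machine cannot take its next step before a signal has crossed it. A hedged back-of-the-envelope version (my numbers, not the article's):

 # Back-of-the-envelope version of the estimate sketched above.
 # Assumed values; the article does not spell out the calculation.
 c = 3.0e8     # speed of light in m/s
 edge = 0.03   # edge length of the hypothetical cube, in meters
 crossing_time = edge / c             # time for a signal to cross the cube
 max_serial_rate = 1 / crossing_time  # bound on strictly serial steps per second
 print(f"crossing time: {crossing_time:.0e} s")        # 1e-10 s
 print(f"serial step bound: {max_serial_rate:.0e}/s")  # 1e+10 steps per second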

Parallel computers

But it was quickly recognized that parallel processing computers could be built, though their programming and mode of operation bring a series of hard-to-solve problems with them. One piece of evidence that such computers are constructable we carry around in our own skull: the brain is in fact, mainly through its construction though not solely, the counterpart of a parallel computer. It consists of two big hemispheres, in which a strategy of allocation among subordinate areas dominates that is very strange indeed, even to man as a constructor.

For neurophysiologists this was true chaos, consisting solely of riddles. Researchers were able to identify symptoms of the failure of single functions, for example in the cases of aphasia, amnesia, alexia, etc., but were not able to explain their mode of operation functionally or causally. Incidentally, as usually happens, we still do not comprehend very much of these and other phenomena in our brain. The brain can take up information at a speed of 0.1 to 1 bit per second, whereas nowadays a stream of information flows over us that lies between three and twenty bits per second.
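
Taking the paragraph's figures at face value, the incoming stream exceeds what the brain can absorb by a factor of anywhere between 3 and 200; a one-line check:

 # Overload factor implied by the figures above.
 uptake = (0.1, 1.0)   # bits/s the brain can absorb (low, high)
 stream = (3.0, 20.0)  # bits/s flowing at us (low, high)
 print(stream[0] / uptake[1], stream[1] / uptake[0])  # 3.0 200.0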

Information flood

The total of human knowledge doubles approximately every five years, and the time required for a doubling keeps shrinking. At the transition from the 19th to the 20th century this period was still approximately fifty years. Every day in this world 7,000 articles are published and about 300 million journals plus 250,000 books are printed. On the other hand there are already over 640 million radio and television sets. Since these figures are already four years old, they are surely aimed too low, especially because of the gigantic gains in the area of satellite television.
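
A five-year doubling period compounds quickly; a short extrapolation of the article's figure (the extrapolated factors are mine):

 # Growth implied by a fixed doubling period (the five years are the article's).
 doubling_years = 5
 for elapsed in (10, 20, 30):
     factor = 2 ** (elapsed / doubling_years)
     print(f"after {elapsed} years: x{factor:.0f}")  # x4, x16, x64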

The amount of presently stored information is supposed to be 10^14 bits and will have doubled by the year 2000. Surely the information absorption capacity of the brain is already exhausted. Outside of science one can notice the symptoms of information asthenia much more easily than within it, especially if we restrict its area to the exact sciences. These are surrounded by a "halo" of pseudo- and quasi-sciences which enjoy considerable popularity everywhere. As a rule it is useless and wrong "knowledge" (astrology, charlatanism, sectarianism of the "Christian Science" type, and all "psychotronics" like telepathy, telekinesis, "secret knowledge", news about "flying saucers" or the "secrets of the pyramids", etc.), which, however, is comfortably simple and fascinates by promising to clarify human destiny, the meaning of life, and so on.

I am not limiting myself here, however, to the realm of the exact sciences, which in fact has long been inundated by bleary falsifications that are not only harmful but also call the value of science in society as a whole into question. Fraud happens ever more often in science, and is even encouraged by the rule of publish or perish. Many factors therefore act together in reinforcing information overload. The unscientific spheres mentioned above, by contrast, submit to an informational self-limitation which any viewer possessing a satellite dish can easily notice: in their content, almost all channels worldwide vary only marginally from each other, which simply means that the programs of far-away nations, countries and language regions are almost identical in content.

The overfed public is not interested in a conscious "downsizing of the television program selection" or in the creation of something genuinely new; it simply wants to see a variant of the familiar sujet, which is why there will always be new versions of some Tarzan or the Three Musketeers, or, in the USA, of fights with Native Americans or the War of Secession, or, in Europe, of the last world war. The information allergy in the visual area is especially noticeable. In all channels innovation is feared more than fire, though the appearance of innovation is welcomed. But of course I am not playing the critic here, for it is not my intention to pass judgment and thereby diminish the programs, which would only be superficial; instead I try to expose the deeper, purely perceptual source of this fact, which the television viewer knows from experience. The gestalt of television consists in a strange similarity among a huge range of supposedly totally different, independently produced programs. As a rule, if one switches off the sound, it is hard to recognize whether we are seeing pictures sent from Turkey, the UK, the Netherlands, Sweden, Denmark or Spain, because our antenna receives from everywhere the almost identical semolina pudding with pepper.

The dilemma of the information flood

From the information flood the average human rescues himself by reducing the bit stream and by eliminating whatever is in some way not 'necessary' for mental absorption. In everyday life this leads to an increased ethnocentrism of the media and to a "growing boldness" concerning content which may shock or hurt feelings. In science, however, reservations of that kind do not hold. That is the reason for the increasing weight of the meals which informatics, with its legions of computers, delivers to science. Like every new development, even though it is not that new anymore, computerization represents an essential domain of life which at the same time brings new worries with it. In the countries in which computerization has only just started (and de facto Poland belongs to those), these worries and this dilemma are as yet unknown.

The first example that comes to hand shows where the crux lies. In the science-fiction novel Return from the Stars (1960) I introduced into the narrative the Kalster, a small machine which replaces the transit and transfer of money. Of course there is no space in a novel to describe the infrastructure of such an 'invention'! But today one reads in the magazines (for example in the United States) about so-called smart cards, which work on basically the same principle. There need be no more cash in circulation. Even payment by cheque can be a thing of the past, because now everybody has an account and a smart card in the wallet. Upon payment one hands this card to the cashier, who puts it into a register that is connected to the bank. The register transmits to the bank computer how many units have to be deducted from the account. The same happens on the way from the payer (his computer) through the bank to the payee (on his "Kalster").
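
The flow described here (card, register, bank computer, two accounts) can be made concrete in a few lines. A minimal sketch in which every name is hypothetical, mirroring the steps of the paragraph rather than any real payment system:

 # Toy sketch of the card-payment flow described above. All names are
 # hypothetical illustrations, not any real payment system's API.
 class BankComputer:
     def __init__(self):
         self.accounts = {}  # card id -> balance in units
     def transfer(self, payer, payee, units):
         # The register reports how many units to move between accounts.
         if self.accounts.get(payer, 0) < units:
             raise ValueError("insufficient funds")
         self.accounts[payer] -= units
         self.accounts[payee] = self.accounts.get(payee, 0) + units
 bank = BankComputer()
 bank.accounts = {"customer-card": 100, "shop-card": 0}
 bank.transfer("customer-card", "shop-card", 25)  # paying at the register
 print(bank.accounts)  # {'customer-card': 75, 'shop-card': 25}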

That is all fine and well, on the condition that no one can get at our account with the help of an electronic lockpick. It is commonly known how long computer crime and hackers have existed, hackers who can penetrate even the best-protected computers of various general staffs. Cash one can bury or hide in a treasure chamber, but bank computers are certainly exposed to all kinds of attacks, online or wireless. The phenomenon of viruses is already so well known that it is not productive to occupy ourselves with this "dark" side of informatics here.

Breaking the information barrier

To break through the "information barrier" in science, one option would be computer networks connected with each other in the same way that neurons are connected in the brain (and every neuron is, as we know, directly or indirectly connected with a couple of ten thousand others; therefore the statement, put briefly, that the brain contains between 12 and 14 billion neurons is a kind of misconception, since it is the number of connections that is relevant and not the units, which only operate according to the flip-flop principle). On the other hand, computer giants would contribute to it, which for the time being may be represented by my GOLEM XIV from the novel of the same title.
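
The point that connections, not units, carry the weight is easy to see numerically; taking the 14 billion neurons and the roughly 30,000 links per neuron quoted later in the text:

 # Connection count implied by the paragraph's figures.
 neurons = 14e9             # upper figure: ~14 billion neurons
 links_per_neuron = 30_000  # ~30,000 connections per neuron (quoted below)
 total_links = neurons * links_per_neuron
 print(f"{total_links:.1e} connections")  # ~4.2e+14, dwarfing the neuron count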

At this point I should probably explain how I came up with this GOLEM and in what the conception of an "ubergolemian" growth of terabit power, a growth interrupted by periods of idleness, can consist. That is by no means "pure fiction", because I have always chosen the natural evolution of life on Earth as my lodestar, or rather as my lodeconstellation. Its perhaps most idiosyncratic inherent phenomenon is the development of a series of plant and animal species which are characterized by a discontinuous increase of complexity. Of the first living beings it is known only that within three billion years (at the least, not at most) they brought forth the genetic code in its astounding universality. Out of it came algae capable of photosynthesis, followed by bacteria, protozoans, molluscs, then fishes, amphibians, reptiles and then mammals, which found their coronation in the emergence of the hominids, with man at the peak. Between the species huge abysses gape. Though transitional forms did exist, for example between reptile and bird or between fish and amphibian, nothing is left of them. This differentiation of species, these zones of species void, I found so important that in one book I "transferred" it to the successive accumulations of intelligence. These free themselves from the primary tasks, given by physiology and anatomy, which the central nervous system of any living being (provided it is an animal creature) has to fulfill.

Paradoxes of information processing

Surely the projection that the biggest constructable computer would be a "cube" with an edge length of three centimeters is to be rejected. Whether, however, the construction of ever bigger computers will be a better way than the creation of a network in the image of the neuronal networks of the brain, only the future will reveal. The comparison of the brain with a computer of the latest generation currently looks like this: the brain is clearly a parallel and multiprocessor system, comprised of approximately 14 billion neurons, which form a three-dimensional structure in which every neuron possesses up to 30,000 connections with other neurons.

If each neuron executed only one operation per second, the brain would theoretically be able to execute ten billion operations within that second. The flip-flop of a neuron lasts only a few milliseconds. Complex tasks like the recognition and understanding of a language the brain executes in approximately one second, because this requires several processing operations. A computer, however, requires a million elementary steps for an analogous task.

Since a neuron cannot transmit complex symbols to another neuron, because it is a "simple" machine of the flip-flop type, the total performance capacity of the brain depends on the great number of alternating neuronal connections. Thanks to these we can easily use a language, or several languages. The task of multiplying two multi-digit numbers, however, poses a problem which by no means everyone can solve. The phenomenon of unbelievably skilled calculating prodigies, who on top of that may even be mentally impaired, is another riddle to me, because it bears witness to the existence of distinct sub-areas which function even more productively, or especially then, when other areas normally participating in the task are damaged! (The brain, generally speaking, tolerates damage much more easily than the computer.)

Today's supercomputers function, as one expert claimed, on the developmental level of a five-year-old child (this does not concern affective capability, however). It appears paradoxical that simulating the brain in real time would require thousands of computers of the highest computing power, while matching a computer's arithmetic operations would require billions of humans.

An elementary operation of a neuron takes, as said, approximately one millisecond. A computer, however, can execute such an operation in the range of a nanosecond; it therefore works about a million times, six orders of magnitude, faster. Still, upon entering a café, a person recognizes the face of the associate he is looking for within a fraction of a second, where the computer would take a few minutes...
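
The corrected factor follows directly from the two time scales; one line of arithmetic:

 # Ratio between a neuron's ~1 ms operation and a ~1 ns logic operation.
 neuron_op = 1e-3  # seconds per elementary neural operation
 gate_op = 1e-9    # seconds per elementary computer operation
 print(f"x{neuron_op / gate_op:.0e}")  # x1e+06: a million times faster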

Automation of creation

The most important question to be answered for us, meaning for the human future, seems to be whether and how a creative information capacity, in the sense of authentic creation, can evolve. The solution of an arbitrarily difficult mathematical task has little in common with the task I am thinking of, since the answer, in the shape of a solution, is "inherently hidden" in the mathematical structure of the task. I take the liberty of going back to the book I quoted at the beginning. In it I wrote that the transition from old energy sources to new ones, from muscle power, the power of beasts, of wind and water, to coal or oil, to nuclear energy sources, requires a preceding gathering of information. Only when the amount of information surpasses a "critical point", thanks to the method of trial and error, will the technology based on it open new areas of energy and action for us. If the fuel resources (coal, oil, gas), as I wrote, had already been used up by, say, the middle of the 20th century, it would be doubtful whether we could have initiated nuclear energy then, because its liberation required great forces which were realized first in the laboratories and then on an industrial scale. Despite that, humanity, as I wrote then, is absolutely not ready (nor is it today) to switch to the exclusive extraction and utilization of nuclear energy...

The cold fusion of deuterium into helium announced by Fleischmann and Pons was rapidly labeled a failure, though recently the Japanese in particular have restarted experiments in this area, so that "nothing definite is known." I mention this in the context of informatics because, with a given setting of starting parameters, which of course spring from our current cosmological knowledge, we can model, that is simulate, the state of the cosmos 100 billion years ahead (as has already been done), but we are not able to extract surprising revelations from it, because in the starting parameters every trace of them is missing. Here is an example that may surprise one or another of the readers.

Boleslaw Prus [1] lets Professor Geist, one of the characters in his novel The Doll, proclaim:

"We have three dice with the same size and made of the same material, which however are of different weight. And why? Because in a full cube there are the most steel-particles, in the empty one less, and in this one made of wire the least. Imagine now you have succeeded to construct cage-like particles instead of full particles and then you will understand the Secret of Invention ..."

And now a quotation from an article in the science section of SPIEGEL (I do not quote the scientific literature here, because my priority is brevity):

"The Scientists expect a 'completely new Chemistry', which they hope will come from cage-like coal-spheres, so called fullerenes. For the first time they successfully developed in German Laboratories and the scope of their application - is immense. ... The fullerenes, names after the architectural self-supporting dome-structures from Richard Fuller , are empty spheres, which consist of sixty atoms carbon, which are connected with one another as are the pentagons that form a football. They can be used as construction-elements in space-ships because they bounce off unscathed from steel plate, when one fires on them with a speed of 27.000 km/h ..."

The Japanese are already synthesizing cylindrical fullerenes filled with lead atoms, and have pulled a wire with a thickness of several atoms out of them. The American press writes about buckminsterfullerenes into which neon or helium atoms could be pressed... That is only the beginning. And what can one say about the fantasy of Boleslaw Prus? Within a hundred years it has become reality...

It is a trivial matter that no one has noticed this convergence, because, I think, Prus himself did not believe much in its realization (I am not sure in this respect, however; I do not know why he wanted to write a novel with the title "The Fame" about the discovery achieved by Professor Geist, and later abandoned that idea). Basically, cage-like constructions (for example from carbon atoms) were theoretically imaginable; one only did not know how the synthesis had to be executed: high temperatures and pressures are required. But that is already information which necessarily had to be given with the parameters of the atoms in the initial assumption...

Still, the computer possesses no inventive abilities. I think, however, that even though it has not yet made the inventor and the scientist unemployed, in another field, industrial production, it has already begun its invasion, which brings with it the risk of mass unemployment.

The Future

Norbert Wiener, the founder of cybernetics, predicted exactly this unemployment half a century ago in his book The Human Use of Human Beings; it is caused by the growing automation of ever more production processes. Who has not seen on television one of those many Japanese car factories in which mostly yellow-enameled robot arms whirl around like mad and, in the absence of humans, screw, weld and fit together the elements of car bodies, engines and clutches? Such human-deserted factories are already coming into existence, and through them unemployment will become, as some economists and engineers claim, an irreversible, increasing and socially destabilizing phenomenon, because the labour of robotic automatons is cheaper than human labour and often more accurate. These robots, whose world army has already passed the number of three hundred thousand, need no food, no wage, no rest and no leave, no social welfare and also no retirement insurance ("in old age" the robots end up as wrecks on the scrap heap).

We are not threatened by a revolt of the robots, nor by their rule, with which we have been scared so often. What threatens us is simply a bloodless conflict, because the yields of our intelligence and our civilization will have made working humans simply superfluous. Even if that does not happen overnight, it is not sufficiently soothing, because the cost of investment in robots will keep decreasing, and they will, as one may suspect, own the 21st century. It seems that in the beginning of life there was indeed the word: the word of the GENETIC CODE. And on the long way into the future this word will carry the information which creates mindless and perfect workers out of dead matter...

But what will then be our destiny? That is a question which I cannot answer today.

Translated from the Polish by Ryszard Krolicki

Annex

Footnotes

1) Polish author (1847-1912), main representative of literary positivism.