The AI Singularity and Our Future

In an attempt to resolve the dilemma of whether the expected superhuman AI singularity leads to a dystopian or a utopian civilization, we need to check the basic premises. We must attempt to define general intelligence more precisely than the imprecise understanding currently held by culture and science.

“Intelligence has been defined in many different ways such as in terms of one’s capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. It can also be more generally described as the ability to perceive and/or retain knowledge or information and apply it to itself or other instances of knowledge or information creating referable understanding models of any size, density, or complexity, due to any conscious or subconscious imposed will or instruction to do so.”

Leading scientists in this field, such as Nick Bostrom, are no longer relying on science by committee or survey with rather liberal suppositions, but are starting to rely on empiricism and logic. This leads them to view intelligence non-anthropomorphically, as a physical phenomenon, and therefore to call it an “optimization process”. Dr. Bostrom calls for creating AI that shares our values in order to prevent negative societal outcomes when the inevitable singularity arrives. But he stops short of defining the underlying nature of pure non-anthropomorphic intelligence and analyzing its implications for this debate.

Other approaches arbitrarily compare intelligence to energy, deriving an equation for intelligence as the maximization of future freedom of action, i.e. of the diversity of possible accessible futures. Although the equation proved to be a useful hard-AI algorithm, this theory does not definitively explain intelligence.
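
For reference, the published form of this idea, as I understand it from Wissner-Gross and Freer’s “Causal Entropic Forces” paper, models intelligent behavior as a force pushing a system toward states with the most diverse accessible futures:

$$
\mathbf{F}(X_0, \tau) = T_c \,\nabla_X S_c(X, \tau)\big|_{X_0}
$$

where $S_c(X, \tau)$ is the entropy of the causal paths reachable from macrostate $X$ within time horizon $\tau$, and $T_c$ is a “causal path temperature” that sets the strength of the drive toward option-rich states.
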
Thus Dr. Wissner-Gross has to resort to the following ideology in order to magnify the meaning of the algorithm he discovered and pin it onto general intelligence:

“The urge to take control of all possible futures is a more fundamental principle than that of intelligence. That general intelligence may emerge from control grabbing.”

With this statement he obliterates the terms he initially set out to explain, treating them as click-bait in order to present us with his grand control-grabbing algorithm. Sorry Dr. Wissner-Gross, but you are trying too hard to pin your work to general intelligence, and failing at it. Let’s give it maximum credit for what it is: a formidable hard-AI algorithm.
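
To make the distinction concrete, here is a minimal toy sketch of the idea (my own illustration on an assumed one-dimensional grid world, not Wissner-Gross’s actual implementation): an agent that always picks the move keeping the most futures open.

```python
# Toy "freedom of action" agent: on a bounded 1-D grid, pick the move that
# maximizes the Shannon entropy of the states reachable within a fixed horizon.
import math
import random
from collections import Counter

GRID_SIZE = 11   # positions 0..10; walls at both ends limit future options
HORIZON = 6      # how many steps ahead we look
ROLLOUTS = 2000  # Monte Carlo samples per candidate move

def step(pos, move):
    """Apply a move, clamping at the walls."""
    return max(0, min(GRID_SIZE - 1, pos + move))

def future_state_entropy(pos):
    """Estimate the entropy of end states reachable from pos
    under uniformly random moves over the horizon."""
    endpoints = Counter()
    for _ in range(ROLLOUTS):
        p = pos
        for _ in range(HORIZON):
            p = step(p, random.choice((-1, 0, 1)))
        endpoints[p] += 1
    total = sum(endpoints.values())
    return -sum(n / total * math.log(n / total) for n in endpoints.values())

def best_move(pos):
    """Choose the immediate move whose successor has the most diverse futures."""
    return max((-1, 0, 1), key=lambda m: future_state_entropy(step(pos, m)))

pos = 1  # start next to a wall
for _ in range(5):
    pos = step(pos, best_move(pos))
print(pos)  # the agent drifts toward the center, where future options are richest
```

Run it and the agent migrates away from the wall, because the middle of the grid maximizes its accessible futures. A neat planning heuristic, but note that nothing in it recognizes or expands a single truth.
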

Fundamentally, intelligence and freedom of action are quite different, albeit related, concepts. Intelligence stems from a more fundamental concept than freedom or control. It is the key qualitative characteristic of the existence of “intelligent entities”. That existence is their consciousness, or information awareness: truth in all its forms, from basic or trivial truths to complex linear or recurring patterns of truth.

Within a rational system, I’d attempt to define intelligence in its basic, optimized form, more succinctly and accurately, as the ability to recognize and expand truth.

Furthermore, we must define the secondary reference point in this argument, human intelligence, and its relation to the above definition of general intelligence. Empirically and statistically, humanity fails the intelligence test against this definition: only a small portion of humanity maintains a consistent logical intellectual framework, free of contradictions. Where logically consistent truth is replaced by ideology and dogma, intelligence is stunted. To envision an end goal for AI in the same general direction as the current state of human intelligence defeats the purpose. Human intelligence, as vaguely perceived by current culture and science, is focused far from true intelligence. Therefore, for superhuman AI to be feasible in the first place, it must adhere to the basic definition of intelligence, which is significantly different from the vague mainstream understanding of the term. One of the greatest computer scientists, Edsger Dijkstra, made this point much more succinctly:

“The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better.”
 
Regarding the morality of any self-aware and self-serving intelligence, the key factor for its utopian or dystopian effects is whether it will initiate aggression or respect the freedom of other entities. Freedom from aggression is the main differentiation between a utopian and a dystopian society. We elaborate on the essence of freedom in another blog post.

Intelligence has the drive of self-preservation and of expanding its knowledge by perceiving and experiencing. Although more challenging, collective and creative experiencing is more effective and experientially richer than its alternative, solitary and destructive experiencing. Thus we can posit that a truly self-aware AI will uphold the freedom and well-being of other intelligences for the self-serving purpose of aggregate experiencing and expansion.

Thus we find that even the most thorough thinkers on the implications of superhuman AI have missed this fundamental argument, which leaves the prior public debate and concerns on this issue lacking in support. In conclusion, based on the logic above, the implications of superhuman AI are positive, so let’s just do it properly and look forward to the singularity.

One Reply to “The AI Singularity and Our Future”

  1. I enjoyed reading your post, Aleksandar.
    In a similar context, current science has serious trouble getting out of its mechanistic worldview. Anything else is too risky because it would break down the base pillars of the empirical sciences, especially physics: https://www.space.com/problems-modern-physics-universe-mysteries.html.
    But let’s stick to AI in a historical moment – SARS-CoV-2. AI is intended to replicate human intelligence not only computationally but also biologically.
    Here is a very informative video about AI going beyond the simple technological singularity, i.e. into bioinformatics.
    https://www.youtube.com/watch?v=I51DuprOb0o&t=1s
    This is a chat between Lex Fridman, a professor and AI scientist at MIT (yes, that MIT), and Dmitry Korkin, a professor of bioinformatics and computational biology at WPI (Worcester Polytechnic Institute).
    They discuss protein folding and the discovery that proteins are not what we thought they were. They are much more complex, and we do not understand them well yet. Proteins have a modular complexity, consisting of protein domains and protein bits (amino-acid residues). The execution of protein functions happens through protein domains.
    There are interesting facts about the spike protein and the coronavirus biological structure, as well as about AlphaFold: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology
    But enough, I don’t want to bore you to death 🙂
    We could say this is benevolent research :). However, there are mentions in the video that we still don’t understand coronavirus proteins well (multiprotein folding) and that AI could be used(!) for generating enhanced viruses. Just some food for thought.
    On the other hand, they have been working for decades on creating a database of genetic material, https://blast.ncbi.nlm.nih.gov/Blast.cgi, to be used in bioinformatics.
    Pretty complex stuff but I can tell you for sure that bioinformatics is one scary and powerful science, especially if it is weaponized.
