Algorithmic Stupidity
It is fairly safe to say that intelligence is best analyzed as a measure we use to quantify unpredictable but recurrent phenomena of cognitive initiative: solving at least partially new tasks or problems with different tools, methods or resources. Phenomena of intelligence can be classified into groups: visual and auditory perception, language understanding, common-sense reasoning, abstract reasoning, probing general knowledge about the world, learning, problem solving, imagination, creativity and probably some more. However, it is highly unclear how to standardize these experiments, or how to explain more or less successful solutions and the ways mistakes were corrected.
- Recall that human intelligence can adapt to completely new contexts, where it can, for example, perform pattern recognition from only a few examples, and it can be creative. In that sense human intelligence is called "strong intelligence".
- understanding a dynamic environment as it presents itself in perception: treating understanding as a concept of AI goes back to Oliver Selfridge at the seminal Dartmouth workshop of 1956; it plays an unjustly minor role in current research, but is receiving increasing attention through the use of algorithmic cognitive architectures in the development of e.g. chatbots or cognitive language agents.
- reasoning: as symbolic operation, this was first done in 1956 by the program Logic Theorist. Developed by Allen Newell, Herbert Simon and Cliff Shaw, the program could prove 38 of the first 52 theorems of the well-known Principia Mathematica by the English mathematician Bertrand Russell (written together with Alfred North Whitehead). Today this can also be done on a subsymbolic level, e.g. by MAC-net Deep Learning architectures.
- controlled pursuit of goals: first emphasized by McCarthy and Minsky, and later promoted e.g. by Stuart Russell, this mainstream topic can be observed e.g. in autonomous driving or in games like chess or Go, mostly realized by Deep Reinforcement Learning. In the history of AI this approach has always dominated.
Today we expect at least some kind of augmentation of human intelligence as a result of the use of non-human-like algorithmic AI, insofar as algorithms not only solve certain problems faster than humans, but are neither guided nor constrained by human interests in their search for solutions. Unfortunately the results in AI research are not that promising - a fact that the mass media, which are partly financed and informed by the Big Tech companies, have omitted for reasons of profit expectations. Let us briefly consider a selection of the most important open problems. They affect every currently known variant of AI.
1. Algorithms are not able to fully understand human concepts.
The German logician and mathematician Kurt Gödel argued for this decades ago in the case of the human understanding of the whole numbers, which are pretty basic in all fields of mathematics. Furthermore he argued that, due to his incompleteness theorems - undefeated until today - the working of the human mind cannot be reduced to the working of the human brain, which to all appearances is a finite machine. On the basis of Gödel's results, intelligent behaviour therefore cannot be predicted. This has consequences: conjectures do not fall from the sky, after all. Presented with a body of information, humans - particularly mathematicians - regularly come up with interesting conjectures and then often set out to prove those conjectures, usually with success. This discovery process (along with new concept formation) is one of the most creative activities of the human intellect. Human intelligence learns from thought experiments, is event-driven and corrects its own mistakes and decisions, as it recognises itself to a certain extent. Intelligence therefore has to do with autonomous reasoning.
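For orientation, the first incompleteness theorem can be stated compactly in the usual modern phrasing (not Gödel's original notation): for every consistent, effectively axiomatized formal system $F$ that is strong enough to express elementary arithmetic there is a sentence $G_F$ such that
\[
F \nvdash G_F \qquad \text{and} \qquad F \nvdash \neg G_F ,
\]
i.e. $G_F$ is true in the standard model of the natural numbers but can neither be proved nor refuted within $F$. The anti-mechanist reading sketched above is that a human mathematician can recognise the truth of $G_F$, while the formal system - and, if the mind were such a system, the mind itself - could not.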
2. There is no algorithmic non-monotonic reasoning.
Monotonicity states that logical consequences are robust under the addition of information. In everyday reasoning, however, newly added information can undermine conclusions that seemed warranted before. A non-monotonic logic is a formal logic whose consequence relation is not monotonic. In other words, non-monotonic logics are devised to capture and represent defeasible inferences (cf. defeasible reasoning), i.e. the kind of inference in which reasoners draw tentative conclusions and retract them in the light of further evidence. Most studied formal logics have a monotonic consequence relation, meaning that adding a formula to a theory never reduces its set of consequences; intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. A monotonic logic cannot handle various reasoning tasks such as reasoning by default (consequences are derived only because of lack of evidence to the contrary), abductive reasoning (consequences are deduced only as most likely explanations), some important approaches to reasoning about knowledge (the ignorance of a consequence must be retracted when the consequence becomes known) and, similarly, belief revision (new knowledge may contradict old beliefs). To the best of our knowledge, algorithms as we understand them right now do not have the ability for non-monotonic reasoning, which is always a form of dynamically revising which conclusions are right and relevant.
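As a minimal sketch of what defeasible inference amounts to in practice, consider the following toy example in Python (the facts, rule names and the bird scenario are purely illustrative and not tied to any particular non-monotonic formalism):

# Minimal sketch of default reasoning: "birds normally fly" is a default
# that is withdrawn once contradicting evidence ("Tweety is a penguin")
# arrives. All predicate names are invented for this illustration.

def conclusions(facts: set[str]) -> set[str]:
    derived = set(facts)
    # strict rule: penguins do not fly
    if "penguin(tweety)" in derived:
        derived.add("not_flies(tweety)")
    # default rule: birds fly, unless the contrary is already known
    if "bird(tweety)" in derived and "not_flies(tweety)" not in derived:
        derived.add("flies(tweety)")
    return derived

kb = {"bird(tweety)"}
print(conclusions(kb))        # includes 'flies(tweety)'

kb.add("penguin(tweety)")     # new information arrives ...
print(conclusions(kb))        # ... and 'flies(tweety)' is no longer derived

Adding the fact that Tweety is a penguin removes a conclusion that was derivable before - exactly the behaviour a monotonic consequence relation rules out.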
3. Algorithms do not show cognition by semantic transfer.
Obviously not every instance of human thinking aims to solve a well-defined problem. This is obvious in the case of building epistemic metaphors. Creativity, for example, typically has a different nature, and even the conceptual reduction of one question or problem to another does not follow repeating rules or predictable patterns. This point is well illustrated by George Lakoff's theory of metaphors, which goes back to the idea that we form new metaphors as epistemic vehicles by imagining them figuratively, or by imagining living through the corresponding process, as in "I am dejected". This does not mean that beings living outside a gravitational field, for example, would be unable to express emotions in words, but it does mean that they would express emotions differently from humans. Human notions of happiness and sadness would be different because humans would have different bodies. Spatial concepts such as "front", "back", "up" and "down" provide perhaps the clearest examples of such embodied experience. If human experience is intricately bound up with large-scale metaphors, and both experience and metaphor are shaped by the kinds of bodies we have that mediate between agent and world, argued Lakoff and Johnson, then cognition is embodied in a way not anticipated within traditional cognitive science. The body of an organism then directly affects how it can think, because it uses metaphors related to its body as a basis for concepts. One also speaks of an epistemic theory of metaphors. At the moment we have no idea how we could help algorithms achieve internal epistemic representations that would allow them to achieve comparable cognitive performance.
4. The Frame Problem
The frame problem describes an issue with using first-order logic (FOL) to express facts about a robot in the world. Representing the state of a robot with traditional first-order logic requires the use of many axioms that simply imply that things in the environment do not change arbitrarily. The frame problem is the problem of finding adequate collections of axioms for a viable description of a robot environment. It also denotes the problem of limiting updates to only those beliefs that are relevant in response to an action. In the logical context, actions are typically specified by what they change, with the implicit assumption that everything else (the frame) remains unchanged.
So given some event or action, and rules or laws specifying the resulting changes in some situation, how can an AI system deal with the non-changes in a tractable way? The non-changes are not logical consequences of the initial conditions together with a description of what has changed. On the one hand, it is an unsolved problem to continuously reformulate as axioms, in every situation or context, what has not changed - in contrast, for example, to changes that have taken place from the outside. On the other hand, some additional semantic rules are needed to ensure that the new axioms are consistent, and these semantic rules will also depend on the situation or a changing context. Obviously humans do not have that problem, and a general algorithmic solution of the frame problem is still missing.
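A toy sketch in Python may make the bookkeeping visible (the blocks-world fluents and the single action are invented for this illustration; real formalizations use the situation calculus or related logics):

# Toy illustration of the frame problem: after one action we must account,
# for every fluent, for whether it changed or stayed the same.

fluents = {
    "on(A, table)": True,
    "on(B, table)": True,
    "colour(A, red)": True,
    "light(kitchen, on)": True,
}

def move_A_onto_B(state: dict[str, bool]) -> dict[str, bool]:
    new_state = dict(state)            # "frame axioms": copy everything over unchanged
    new_state["on(A, table)"] = False  # effect axioms: what the action actually changes
    new_state["on(A, B)"] = True
    return new_state

print(move_A_onto_B(fluents))

The dictionary copy hides what a logical axiomatization has to spell out: for n fluents and m actions a naive encoding needs on the order of n times m explicit frame axioms, and deciding which fluents are even relevant to an action is itself part of the problem.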
5. The Ramification Problem
In philosophy and artificial intelligence (especially in knowledge-based systems), the ramification problem is concerned with the indirect consequences of an action. It might also be posed as the question of how to represent what happens implicitly due to an action, or how to control the secondary and tertiary effects of an action. It is strongly connected to, and opposite to, the qualification side of the frame problem.
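A small sketch of such an indirect effect (objects, places and the containment relation are invented for the example):

# Toy illustration of the ramification problem: the stated effect of moving
# the briefcase is only that the briefcase changes location, yet the laptop
# inside it has implicitly moved as well.

at = {"briefcase": "home", "laptop": "home"}
inside = {"laptop": "briefcase"}

def move(obj: str, target: str) -> None:
    at[obj] = target                     # direct, explicitly stated effect
    for thing, container in inside.items():
        if container == obj:
            at[thing] = target           # indirect effect (ramification)

move("briefcase", "office")
print(at)    # {'briefcase': 'office', 'laptop': 'office'}

The indirect update is not part of the action description itself; it has to be derived from a domain constraint ("whatever is inside an object is located where that object is"), and deriving all such consequences in general is the ramification problem.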
6. The Qualification Problem
In philosophy and AI (especially in knowledge-based systems), the qualification problem is concerned with the impossibility of listing all the preconditions required for a real-world action to have its intended effect. It might be posed as the question of how to deal with the things that prevent me from achieving my intended result. It is strongly connected to, and opposite to, the ramification side of the frame problem.
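The same point in code, using McCarthy's classic car-starting example (the predicate names are invented for the illustration):

# Toy illustration of the qualification problem: the precondition list for
# "start the car" can always be extended by yet another exotic circumstance
# (a potato in the tailpipe is McCarthy's classic example).

preconditions = [
    "key_is_turned",
    "battery_is_charged",
    "tank_is_not_empty",
    "engine_is_intact",
    "no_potato_in_tailpipe",
    # ... there is no principled point at which this list is complete
]

def car_starts(situation: set[str]) -> bool:
    return all(p in situation for p in preconditions)

print(car_starts({"key_is_turned", "battery_is_charged",
                  "tank_is_not_empty", "engine_is_intact"}))   # False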
7. No Skills for Novelty or Uncertainty
AI development so far operates at the level of a species' cognitive skills, removing any kind of novelty or uncertainty for highly specialized algorithms as far as possible. Hitherto AI has had nothing to do with a series of inventions that realize a single, highly adaptive, on-the-fly behaviour-generation machine for varied, non-repetitive tasks under changing conditions at the level of the individual. But this is exactly what we observe in humans. And we have only just started to figure out how we might compare these two sorts of skills.
8. The Decision Problem
In his paper "On Computable Numbers, with an Application to the Entscheidungsproblem", Alan Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that such a machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there is no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt. First of all, the Gödel-Church-Turing debate, which began in 1931, shows that algorithms are means to prove that a given mathematical problem has a solution that can be found in a finite number of steps while following certain criteria of deductive correctness. Therefore, from the outset, it is highly unclear how rule-based algorithms can ever mimic creative human intelligence. The reason why we talk about "human-like intelligence" is the fact that algorithms do something completely different from a human brain when they play chess, for example: human brains think in thought experiments that presuppose semantic arguments given in a natural language, whereas chess programs learn iteratively from numerical rewards obtained while searching huge game trees.
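The core of Turing's negative answer is the diagonal argument, which can be rendered schematically in Python as follows (the sketch presupposes a halting oracle that, as the argument itself shows, cannot exist, so the interesting call can never actually be executed):

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical halting oracle: True iff the program halts on the input.
    The argument below shows that no total, always-correct implementation
    of this function can exist."""
    raise NotImplementedError

def diagonal(program_source: str) -> None:
    # If the oracle predicts halting, loop forever; otherwise halt at once.
    if halts(program_source, program_source):
        while True:
            pass

# Feeding diagonal its own source code yields the contradiction:
# diagonal(src) halts  <=>  halts(src, src) is False  <=>  diagonal(src) loops.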
9. No Commonsense Reasoning
Commonsense reasoning is one of the branches of artificial intelligence (AI) that is concerned with simulating the human ability to make presumptions about the type and essence of the ordinary situations encountered every day. These assumptions include judgements about the physical characteristics, purpose, intentions and behaviour of people and objects, as well as the possible outcomes of their actions and interactions. Despite decades of effort, no algorithm so far exhibits this ability in a robust, general way.
10. The Binding Problem
The term has multiple meanings.
- the segregation problem: a practical computational problem of how brains segregate elements in complex patterns of sensory input so that they are allocated to discrete "objects". In other words, when looking at a blue square and a yellow circle, what neural mechanisms ensure that the square is perceived as blue and the circle as yellow, and not vice versa?
- the combination problem: the problem of how objects, background and abstract or emotional features are combined into a single experience.
Of course this list is incomplete. Sometimes issues like these are also called the stupidity of AI. Recall that, in contrast, the term "artificial stupidity", which is also in use, can have two opposite meanings:
- Machine learning algorithms make stupid mistakes while learning from the data.
- Artificial intelligence is dumbed down to make mistakes and thus look more human.