Artificial intelligence to infinity
Introduction
Towards infinity, a notion that implies unlimited propagation, is an idea that pushes our ability to perceive and understand past, present and future events to its limit. It is said that in order to foresee the result of present actions we must know well the history on which current decisions rest, but most of the time this is not enough.
A pseudo-mathematical approach to analysing whether a prediction function exists in the field of artificial intelligence would argue that the necessary condition is supported by knowledge of how the technological support and the theoretical and applied implementation solutions have evolved; to satisfy the sufficiency condition, however, every evolutionary step must be anchored in the factors that triggered it, and the connections between those factors must be substantiated in the same way.
The functional elements of the artificial intelligent network outlined in this way allow the creation of an environment in which possible trends and evolutions towards infinity can be constructed for advanced learning based on artificial implementation supports. The next logical step in analysing the formulated hypotheses, in order to sketch possible directions of development, is to analyse the interaction with the other environments of technological and cultural evolution. The result is a refined solution closely related to the possible impact in the propagation environment.
Therefore, in this chapter we review the main development variants of artificial intelligence and their real-world applications and, by following their propagation from origin to the present, we identify possible evolutions towards infinity of stochastic and deterministic algorithmic learning technologies based on an artificial environment of existence.
Reaching the peak level of artificial intelligence is defined by the formation of a singularity, that is, the emergence of an independent intelligence with all its implications for human nature, so the possible interactions that follow such an event are also investigated.
The emerging development of artificial intelligence
Artificial intelligence was developed by studying human learning patterns and looking for mathematical structures that can express these natural abilities. Starting from this idea, a first phase produced neural models that approximated the biological neuron, the first being the McCulloch-Pitts model in 1943 [1], followed by Rosenblatt's perceptron in 1958 [2].
This form of the neuron is defined by a weighted adder of impulses or adapted signals, which are then propagated through a trigger threshold, introduced by an activation function, to form the neuron's response. The activation function is an important element in the learning process of a neuron, and the ability to adapt to the information used in training depends on the mathematical function chosen [4]. The name of this function is used to create a typology of neurons but, in turn, it also reveals the connection between a "keystone" of artificial intelligence and the answer to infinity: an element found in many neural models can receive an unboundedly growing input and still provide a finite response. It is a mathematical form of reaching the limitless that is concretized into a definite answer.
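To make this "unbounded input, finite response" property concrete, the following minimal sketch (illustrative weights and values only, assuming NumPy is available) implements a single neuron as a weighted adder followed by a sigmoid activation, one common choice among many: no matter how large the incoming signal grows, the response remains bounded.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted adder followed by an activation."""
    z = np.dot(weights, inputs) + bias   # the pre-activation can grow without bound
    return 1.0 / (1.0 + np.exp(-z))      # the sigmoid squashes any z into (0, 1)

# Even an arbitrarily large input produces a finite, bounded response.
w, b = np.array([0.8, 0.3]), 0.1
for scale in (1, 10, 1_000, 1_000_000):
    x = np.array([scale, -0.5 * scale], dtype=float)
    print(f"input scale {scale:>9}: response {neuron(x, w, b):.6f}")
```

Other activation functions (threshold, tanh, ReLU) change the shape of the response, but the bounded ones share the same limiting behaviour illustrated here.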
Directions for the effective implementation of these models were slow to emerge because implementation environments were initially non-existent and later inadequate. In 1950 Alan Turing published "Computing Machinery and Intelligence", introducing the concept of the Turing test [3]. The test was designed to measure a machine's ability to exhibit human-like intelligence and remains a reference method in artificial intelligence analysis and research. At the time of their appearance, artificial intelligence solutions had little effect on the technological environment, hampered by the lack of mathematical support for processing the information bases needed for learning and for creating the weights that result from learning. Only more than 40 years after the first mathematical form of the neuron was a solid mathematical basis established for the advanced learning and weighting of artificial neurons [5].
The important factors in analysing this emerging development of artificial intelligence are, in principle, three significant aspects: the mathematical algorithms used for learning and the modeling capacity; the processing power and related IT support; and the existence or non-existence of scientific and/or artistic propagation niches served by the corresponding mathematical models or information.
Mathematical algorithms used in learning and modeling ability
The term learning is used for artificial intelligence models even though it is not strictly appropriate: an artificial neural model does not learn as humans do, from personal experimentation, but integrates input-output correlations through mathematical relationships, namely by assigning weights to the inputs of the neuron or of the neurons in the network. This procedure of allocating weights in the network, which minimizes the error between the response produced by inference and the expected one and shapes the response to conform to what we wanted, is called a training algorithm. Learning performance is affected by the algorithms used, and the ability to shape the response depends on the type of activation functions used, the way the input is processed, and the way the neural units are interconnected in the network. Although there are comparative studies and meta-analyses of algorithm performance, which are in principle effective ways of evaluating the merits of competing techniques, they have so far not been able to provide definitive answers. One reason may be that such studies are themselves often subject to criticism and accusations of bias: for any concrete empirical comparison, proponents of a particular technique can always find fault with the details of the setup. Standardizing the algorithms according to how they are used and what they are intended for could be a solution, but the multitude of existing variants still represents a challenge [6].
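As a minimal illustration of such a training algorithm (a NumPy sketch with invented toy data, not the procedure of any particular reference), the loop below repeatedly compares the response of a single sigmoid neuron with the expected one and shifts the weights in the direction that reduces the error:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training set: the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)          # weights assigned to the neuron's inputs
b = 0.0
lr = 1.0                        # learning rate (an illustrative value)

for epoch in range(5000):
    out = sigmoid(X @ w + b)              # inference: the current response
    error = out - y                       # deviation from the expected response
    delta = error * out * (1.0 - out)     # gradient of the squared error at the output
    w -= lr * (X.T @ delta) / len(X)      # shift the weights to reduce the error
    b -= lr * delta.mean()

print(np.round(sigmoid(X @ w + b), 2))    # the response moves toward [0, 0, 0, 1]
```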
The way weights are allocated can be more or less directly tied to the dependence between input and output; we thus identify two approaches used in practice, the deterministic and the stochastic [8]. Weights are adjusted by monitoring and correcting deviations in the internal structure of the artificial network so that the overall response corresponds to the imposed one. The data used as a reference for training the network has no clear representation in the internal structure of a stochastic AI network. Current approaches assume the knowledge and existence of at least three essential elements: an appropriate neural network, an appropriate learning algorithm, and appropriate techniques for formatting the information used [7].
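The distinction between the two approaches can be sketched on a toy linear model (illustrative data and step sizes, assuming NumPy): a deterministic update is computed from the entire data set and therefore retraces the same trajectory on every run, while a stochastic update is computed from randomly drawn mini-batches, so the internal trajectory differs from run to run even though both end up near the same solution.

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of the mean squared error of a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(X)

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w_det = np.zeros(3)
w_sto = np.zeros(3)
for step in range(500):
    # Deterministic update: the whole data set, the same trajectory on every run.
    w_det -= 0.05 * mse_grad(w_det, X, y)
    # Stochastic update: a random mini-batch, a different trajectory each run.
    idx = rng.choice(len(X), size=10, replace=False)
    w_sto -= 0.05 * mse_grad(w_sto, X[idx], y[idx])

print("deterministic:", np.round(w_det, 2))
print("stochastic:   ", np.round(w_sto, 2))
```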
A significant challenge indicated in the specialized literature is the limitation of learning capacity through loss of control over the deviation from the real model, a limitation present in very large networks. Propagated towards its limit, or infinity, this limitation shows that the existing structure of artificial learning contains a limiting step that ties the performance of artificial intelligence to the current level of knowledge and technology.
Processing power and IT support
Artificial intelligence appeared and exists in its current form thanks to the explosive development of electronic computing support, and there is a very close relationship between hardware processing capacity and the performance obtained. The evolution of processing power has contributed substantially to the implementation of artificial intelligence algorithms and models. The first artificial intelligence models were integrated directly at the hardware level in analog and digital electronics, limited to the configuration obtained and prone to many operational errors [9]. Further technological steps for artificial intelligence models were the use of vacuum-tube computers, then successive generations of computers based on 8, 16, 32 and 64 bits respectively. The significant evolutionary leap was produced by the use of graphics processing architectures, GPUs for short. By using graphics processors to train artificial intelligence models, the capacity to process information databases increased exponentially, which yielded the first advanced deep-learning models in the field of artificial intelligence.
Propagation directions of Artificial Intelligence (AI)
The current directions of propagation cover a large part of human concerns, looking for both scientific and artistic development niches.
Deterministic AI models are based on the idea that the behavior of a system can be predicted with certainty, given the initial conditions and the rules that govern the system. This approach relies on well-defined algorithms and logical reasoning to make decisions and solve problems. Examples of deterministic AI models include rule-based systems, expert systems, and classical planning algorithms. As a result, deterministic implementations offer: transparency and interpretability (the decision-making process can be traced back to rules and algorithms), reliability and repeatability (they always produce the same result for the same input data), and computational and energy efficiency (they do not require the extensive processing and training required by some stochastic models). Propagated towards the limit, they can also introduce risks, namely: lack of flexibility (rigidity in adapting to complex, dynamic or uncertain environments, because they are limited by predefined rules and algorithms), fragility (vulnerability to unexpected inputs or border cases, because they may not be able to generalize or handle situations that were not explicitly programmed), and a reduced ability to model complex phenomena (real-world tasks such as natural language processing or image recognition are too complex to be accurately modeled by deterministic approaches alone) [10].
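A toy, purely illustrative rule-based sketch (the rules and field names are invented for this example) shows both the transparency and the fragility described above: every decision can be traced to an explicit rule and is identical for identical inputs, but situations outside the anticipated rules are handled only by a blunt fallback.

```python
# A minimal rule-based (deterministic) decision sketch: the same facts always
# produce the same conclusion, and every conclusion can be traced to a rule.
RULES = [
    (lambda f: f["temperature"] > 38.0 and f["cough"], "suspect flu"),
    (lambda f: f["temperature"] > 38.0, "fever of unknown origin"),
    (lambda f: True, "no action"),                       # fallback rule
]

def decide(facts):
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion                            # first matching rule wins

print(decide({"temperature": 38.5, "cough": True}))      # -> "suspect flu"
print(decide({"temperature": 36.8, "cough": False}))     # -> "no action"
```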
Stochastic modeling in AI currently receives the most attention and focuses on using probabilistic models and statistical methods to handle uncertainty and complexity. This approach often involves machine learning techniques, such as neural networks, decision trees, and Bayesian networks, which can learn from data and make decisions based on probabilities. As a result, stochastic models can offer: adaptability and flexibility (they can adapt to complex, dynamic and uncertain environments by learning from data and updating their internal representations accordingly), generalizability (they can often generalize beyond the training data, allowing them to handle a wider range of inputs and situations), and the ability to model complex phenomena (successful natural language processing, audio generative models, image recognition and generation, and decision making under conditions of uncertainty) [10].
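By contrast, a stochastic-style decision can be sketched as follows (simulated data and a naive Gaussian likelihood model, purely for illustration): instead of applying fixed rules, the classifier estimates probability distributions from data and decides by comparing how likely each class makes the observed value.

```python
import numpy as np

# A probabilistic decision sketch: estimate distributions from data,
# then choose the class under which the observation is most likely.
rng = np.random.default_rng(1)
healthy = rng.normal(36.8, 0.4, size=200)    # simulated temperatures, class "healthy"
febrile = rng.normal(38.6, 0.6, size=200)    # simulated temperatures, class "febrile"

def gaussian_logpdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

params = {c: (data.mean(), data.std())
          for c, data in {"healthy": healthy, "febrile": febrile}.items()}

def classify(temperature):
    scores = {c: gaussian_logpdf(temperature, m, s) for c, (m, s) in params.items()}
    return max(scores, key=scores.get), scores

label, scores = classify(37.9)
print(label, {c: round(v, 2) for c, v in scores.items()})
```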
Implementations of deterministic and stochastic modeling bring many challenges in the real world; in practice, the solutions used often combine the two types in an attempt to obtain better performance. Possible emerging directions are:
1. Autonomous vehicles: Self-driving cars and other autonomous vehicles powered by artificial intelligence may become more widespread, improving transportation efficiency, reducing accidents, and lessening environmental impact.
2. Personalized medicine: AI-based medical applications may become more advanced, enabling personalized treatment plans based on a person's genetic profile and medical history.
3. Smart Assistants: AI-powered virtual assistants can become more sophisticated, offering a wider range of services and capabilities, from scheduling and task management to emotional support and lifestyle recommendations.
4. The quantum computer: The integration of artificial intelligence and quantum computing can lead to breakthroughs in fields such as cryptography, materials science and drug discovery.
5. Sustainable energy: Artificial intelligence may play a crucial role in developing more efficient and sustainable energy systems, such as smart grids and renewable energy management.
Robot or Frankenstein in the Future World
Reaching Infinity or the Finite Singularity
When an artificial entity was asked whether it could achieve 100% accuracy in the real world (the ability to handle any event), it generated, based on the set of information it holds, the following response:
Achieving 100% accuracy in artificial intelligence (AI) for real-world applications is highly unlikely, and there are several reasons for this:
– Complexity of real-world data: Real-world data is often messy, incomplete, and noisy. Factors such as different formats, missing values, and outliers make it difficult for AI systems to achieve perfect accuracy. For example, in image recognition tasks, variations in illumination, angles, and occlusions can lead to misclassifications.
– Dynamic environments: Many real-world applications operate in dynamic environments where conditions change over time. For example, in autonomous driving, road conditions, weather and traffic patterns can vary significantly. An AI trained on historical data can have difficulty adapting to new scenarios, leading to potential inaccuracies.
– Human errors and biases: AI systems learn from human-generated data, which may contain biases and errors. If the training data reflects these biases, AI can perpetuate them, resulting in distorted or inaccurate predictions. This is particularly problematic in sensitive areas such as employment, criminal justice and healthcare.
– Ambiguity and subjectivity: Many real-world tasks involve ambiguity and subjective interpretation. For example, sentiment analysis in natural language processing can be challenging because the meaning of words can change depending on context, tone, and cultural nuances. Achieving consensus on what constitutes "correct" in such cases is difficult.
– Algorithm limitations: Current AI algorithms have inherent limitations. Different algorithms excel in different domains, and no single method can perfectly model every aspect of a complex problem. Furthermore, even the most advanced models can overfit the training data, performing poorly on unseen data.
– Ethical and practical constraints: In some applications, achieving 100% accuracy may not be ethical or feasible. For example, in medical diagnostics, a perfect model could lead to overconfidence in artificial intelligence, which could overshadow human judgment and critical thinking.
– Costs and resources: Seeking 100% accuracy can lead to diminishing returns in terms of time, computing resources, and financial costs. In many cases, achieving a high level of accuracy that meets practical requirements is more beneficial than striving for perfection.
As we have seen, the emulating abilities of AI expand as the technology advances, and the forms they take become more and more varied. Achieving the singularity can involve two main aspects: on the one hand, the predominant use of deterministic models can lead to the formation of entities that are versatile but limited in expressiveness (a finite singularity), such as the androids of science fiction (for example, Data from the Star Trek franchise); on the other hand, the use of stochastic models introduces a highly unpredictable, even potentially dangerous, variable and can lead to the formation of cyborg-like entities (for example, the Borg from the same franchise).
The real world, the virtual world or the "reverse world"
The world generated by entities that use artificial intelligence currently has a strong impact on the virtual world and, for now, a less pronounced one on the real world, the strong impact being exerted only through the electronic environment. Given the way these entities develop, their capacity for self-definition is limited to the set of information used for learning. The artificial entity is thus formed from the sets of information produced by human experience and translates this training into a form of manifestation. The human being, through forms of manifestation, gains experience and thus creates information. In other words, an artificial intelligence is a form of reverse manifestation of human nature. By propagating such an entity to infinity one can therefore create an "inverted world", a subject of great complexity by its very source of formation, even without considering its interaction with the natural world.
Bibliography
[1] McCulloch W., Pitts W., A Logical Calculus of the Ideas Immanent in Nervous Activity, Bulletin of Mathematical Biology, Vol. 52, No. 1/2, pp. 99-115, 1990 (reprint of the 1943 original)
[2] Rosenblatt F., The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain, Psychological Review, Vol. 65, pp. 386–408, 1958
[3] Turing A.M., Computing Machinery and Intelligence, Mind, Vol. 59, pp. 433-460, 1950
[4] Lederer J., Activation Functions in Artificial Neural Networks: A Systematic Overview, Arxiv Online Library, arXiv:2101.09957, 2021
[5] Winter R., Widrow B., MADALINE RULE II: Training Algorithm for Neural Networks, IEEE 1988 International Conference on Neural Networks, 1988
[6] Dahl G., Schneider F., Nado Z., Agarwal N., et al., Benchmarking Neural Network Training Algorithms, Arxiv Online Library, arXiv:2306.07179v1, 2023
[7] Sun R., Optimization for Deep Learning: Theory and Algorithms, Arxiv Online Library, arXiv:1912.08957v1, 2019
[8] Sands T., Ed., Deterministic Artificial Intelligence, IntechOpen, doi: 10.5772/intechopen.81309, 2020
[9] Mead C., Ismail M. (Eds.), Analog VLSI Implementation of Neural Systems, The Kluwer International Series in Engineering and Computer Science, Vol. 80, 1989
[10] Taye M.M., Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions, Computers, 2023
[11] Alexeev Y., Farag M., Patti T., Wolf M., et al., Artificial Intelligence for Quantum Computing, Arxiv Online Library, arXiv:2411.09131v1, 2024
[12] Sudabathula B., The convergence of quantum computing and artificial intelligence: Reshaping the Future of Machine Learning, International Journal of Information Technology & Management Information System, Volume 16, Issue 1, 2025

