COMPOSITE GEOMETRY AS A MATHEMATICAL METHOD FOR NEURAL NETWORKS
Abstract
Traditional (Newton-Leibniz) differentiation is compared with composite differentiation, which is an alternative, geometrically exact approach to determining how some quantities change in dependence on others. It is emphasized that, for the same initial data, traditional (algebraic) and composite (geometric) differentiation produce the same computational result.
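To illustrate that both routes agree numerically, the following minimal Python sketch compares an algebraically derived derivative with a geometric slope estimate taken from two nearby points of the curve; it illustrates the claim only and is not the composite apparatus itself.

def f(x):
    return x ** 3

def f_prime_algebraic(x):
    # Algebraic (Newton-Leibniz) rule: d/dx x**3 = 3*x**2
    return 3 * x ** 2

def slope_geometric(func, x, h=1e-6):
    # Geometric estimate: slope of the chord through two nearby points
    return (func(x + h) - func(x - h)) / (2 * h)

x0 = 2.0
print(f_prime_algebraic(x0))    # 12.0
print(slope_geometric(f, x0))   # approximately 12.0: same result for the same data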
It is shown that both the Python programming language and composite geometry rely on decomposing any complex problem into a number of smaller problems of lower complexity.
The Python programming language implements operations on algebraic matrices, whereas point polynomials are formed using composite matrices, which are intended for the formalization of geometric algorithms. At the same time, it is noted that operations on algebraic matrices consume considerably more resources than operations on composite matrices, so the use of the latter in Python would be more efficient.
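A hedged sketch of the point-polynomial idea, assuming Lagrange-type scalar weight functions for illustration (the paper's composite matrices themselves are not reproduced here): a curve is written as a sum of points, each multiplied by a scalar function of the parameter t, so no general matrix algebra is needed to evaluate it.

import numpy as np

# Three plane points that the curve must pass through.
A = np.array([0.0, 0.0])
B = np.array([1.0, 2.0])
C = np.array([2.0, 0.0])

def point_polynomial(t):
    # A point polynomial: a sum of points, each weighted by a scalar
    # function of the parameter t. These quadratic weights (Lagrange
    # type, with nodes 0, 1/2, 1) are an illustrative assumption, not
    # the composite matrices of the paper.
    p0 = 2.0 * (t - 0.5) * (t - 1.0)
    p1 = -4.0 * t * (t - 1.0)
    p2 = 2.0 * t * (t - 0.5)
    return p0 * A + p1 * B + p2 * C

print(point_polynomial(0.0))  # [0. 0.] -> returns point A
print(point_polynomial(0.5))  # [1. 2.] -> returns point B
print(point_polynomial(1.0))  # [2. 0.] -> returns point C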
It is shown that the possibilities for unifying operations in Python coincide with the unification methods developed in composite geometry. Existing general-form notations for point polynomials make it possible to create multilayer neural networks as classes and as objects of those classes. Because of this, the methods of composite geometry will implement the forward propagation of signals and the backward propagation of errors through a neural network more efficiently. This, in turn, will reduce resource consumption and speed up the machine learning of artificial intelligence.
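For reference, a minimal sketch of such a class in Python, using ordinary NumPy matrices rather than the point-polynomial form proposed in the paper; the layer sizes, tanh activation, and squared-error loss are assumptions chosen for brevity.

import numpy as np

class MLP:
    # Minimal multilayer network as a class: forward signal propagation
    # and backward error propagation with ordinary NumPy matrices.
    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        # Forward propagation of the signal through both layers.
        self.x = x
        self.h = np.tanh(x @ self.W1)
        self.y = self.h @ self.W2
        return self.y

    def backward(self, target):
        # Backward propagation of the error (squared-loss gradient),
        # followed by a gradient step on both weight matrices.
        e = self.y - target
        dW2 = np.outer(self.h, e)
        dh = (self.W2 @ e) * (1.0 - self.h ** 2)
        dW1 = np.outer(self.x, dh)
        self.W2 -= self.lr * dW2
        self.W1 -= self.lr * dW1
        return float(e @ e)

net = MLP(2, 4, 1)  # one object of the class
for _ in range(200):
    net.forward(np.array([0.5, -0.3]))
    loss = net.backward(np.array([1.0]))
print(loss)  # the squared error shrinks toward 0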
It is concluded that the functioning of neural networks is currently supported by general-purpose mathematical methods that are not adapted to the operations performed in them. Conversely, the possibilities of analytical formalization offered by composite geometry correspond best to the operations of neural networks.
The optimization method used in neural networks is gradient descent. The necessity and the possibility of developing a dedicated composite gradient descent method are indicated. Whereas the gradient descent method existing in mathematics is based on finding a tangent line to the error surface, the composite gradient descent method will immediately construct a tangent plane at each point of the error surface of the weighting coefficients. This will reduce resource consumption many times over and will therefore speed up the machine learning of artificial intelligence many times over; as a result, the time artificial intelligence needs to make decisions during operation will decrease. This becomes possible because traditional (Newton-Leibniz) derivatives are obtained by algebraic methods of differentiation, while composite derivatives are formed by geometric (composite) methods of differentiation.
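Since the composite gradient descent method is only announced here, the Python sketch below merely contrasts the two geometric pictures on an assumed two-weight error surface: a per-coordinate update that uses one tangent-line slope at a time, and a full-gradient update in which the gradient vector spans the tangent plane at each point.

import numpy as np

def E(w):
    # An assumed two-weight error surface, chosen for illustration.
    return (w[0] - 1.0) ** 2 + 2.0 * (w[1] + 0.5) ** 2

def grad_E(w):
    return np.array([2.0 * (w[0] - 1.0), 4.0 * (w[1] + 0.5)])

# Tangent-line picture: one coordinate slope per step (coordinate descent).
w = np.array([3.0, 2.0])
for step in range(20):
    i = step % 2
    w[i] -= 0.1 * grad_E(w)[i]

# Tangent-plane picture: the full gradient at each point spans the tangent
# plane of the error surface, so both weights are corrected in every step.
v = np.array([3.0, 2.0])
for _ in range(20):
    v -= 0.1 * grad_E(v)

print(E(w), E(v))  # the full-gradient run ends much closer to the minimum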
Keywords: neural networks, artificial intelligence, composite derivative, composite geometry, composite gradient descent method.