Integrating one-dimensional functions between given limits using a feedforward neural network
The notebook contains a raw implementation and a proof-of-principle test of a method for the numerical integration of one-dimensional functions using feedforward neural networks. First, we construct a network in PyTorch and train it to approximate the antiderivative of a given function over a given range of its variable. Then we construct a neural network in NumPy that takes the weights and biases obtained from training and evaluates the integral of the function faster than the PyTorch model (for limits lying within the range in which it was trained). For certain integrand functions, integration limits, and precision requirements, this method can even compete with standard quadrature routines in terms of CPU time. A minimal sketch of the idea is shown below.
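The sketch below illustrates the two steps under stated assumptions; the architecture, loss, integrand, and training range are illustrative choices, not the notebook's exact settings. A small MLP is trained so that its derivative (obtained via autograd) matches the integrand f(x) on points sampled in the training range, making the network an approximate antiderivative F(x); its weights are then exported and used in a plain-NumPy forward pass, so a definite integral is just F(x1) - F(x0).

```python
import numpy as np
import torch

# Example integrand and training range (illustrative, not fixed by the notebook).
f = lambda x: torch.sin(x)
a, b = 0.0, 3.0

# Small MLP whose output is trained to behave like an antiderivative F(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Sample points in [a, b] and require dF/dx = f(x) via autograd.
    x = (a + (b - a) * torch.rand(256, 1)).requires_grad_(True)
    F = net(x)
    dF_dx = torch.autograd.grad(F, x, grad_outputs=torch.ones_like(F),
                                create_graph=True)[0]
    loss = torch.mean((dF_dx - f(x)) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Export weights and biases so the integral can be evaluated with NumPy only.
params = [(layer.weight.detach().numpy(), layer.bias.detach().numpy())
          for layer in net if isinstance(layer, torch.nn.Linear)]

def antiderivative(x):
    """Forward pass of the trained network using plain NumPy."""
    h = np.asarray([[x]], dtype=np.float64)
    for i, (W, bvec) in enumerate(params):
        h = h @ W.T + bvec
        if i < len(params) - 1:   # tanh on all hidden layers, linear output
            h = np.tanh(h)
    return float(h)

# Definite integral inside the training range: F(x1) - F(x0).
x0, x1 = 0.5, 2.5
print(antiderivative(x1) - antiderivative(x0))   # compare to cos(0.5) - cos(2.5)
```

Once trained, each integral evaluation reduces to two small matrix-vector forward passes in NumPy, which is what makes the comparison with standard quadrature routines interesting for favourable integrands and tolerances.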