A project for building a basic console application that solves various numerical problems: linear equations, non-linear equations, differential equations, and matrix inversion.
Description:
The Jacobi method is an iterative technique for approximating solutions to a system of linear equations AX = B.
Algorithm of the code:
Steps involved:
- Input Setup:
- Starting with a matrix mat of size n x (n+1), where the last column represents the constant vector B. Also providing an initial guess vector X, the maximum number of iterations Max_Ite, and a tolerance level tol.
- Matrix and Vector Extraction:
- Separating the input matrix into two parts:
- Matrix A containing the coefficients of the equations.
- Vector B containing the constants.
- Diagonal Dominance Check:
- Checking if the matrix A is diagonally dominant. A matrix is diagonally dominant if, for each row, the absolute value of the diagonal element is greater than or equal to the sum of the absolute values of the other elements in that row. If the matrix is not diagonally dominant, printing a warning message.
- Initialization:
- Initializing a previous solution vector preX with the initial guess vector X.
- Iteration Process:
- For each iteration up to the maximum allowed iterations:
- For each row in the matrix:
- Setting the initial sum to the corresponding value in vector B.
- Subtracting the contributions of all other variables using the previous solution vector preX.
- Updating the current variable in vector X by dividing the sum by the diagonal element of the current row.
- Convergence Check:
- After updating all variables in X, calculating the maximum difference between the current values in X and the previous values in preX.
- If the maximum difference is less than the specified tolerance tol, printing a convergence message and returning the current solution vector X.
- Updating Previous Values:
- Before the next iteration, updating preX to hold the current values of X for comparison in the next iteration.
- Handling Non-Convergence:
- If the algorithm reaches the maximum number of iterations without achieving convergence, printing a message indicating non-convergence and returning the current approximation of X.
Processes Inside:
- bool diagonally_dominant (A) function:
This function checks whether a given matrix is diagonally dominant. Diagonal dominance is an important property for convergence in iterative methods. A matrix A is diagonally dominant if, for each row, the absolute value of A[i][i] is greater than or equal to the sum of the absolute values of the other elements in that row.
Process: The function checks matrix A for diagonal dominance. If it is diagonally dominant, the function returns true; otherwise, it returns false.
- Jacobi_iterative(mat, X, Max_Ite, tol) function: This function performs the Jacobi iterative method to approximate a solution to the system of linear equations AX = B.
Process:
- Extracting Matrix A and vector B from the augmented matrix mat :
- A[i][j] = mat[i][j] -> Copies coefficient matrix A.
- B[i] = mat[i][n] -> Stores constants vector B.
- Diagonal Dominance check :
- Calls diagonally_dominant(A). If A is not diagonally dominant, it warns that the Jacobi iteration may not converge reliably.
- Initializing previous guess :
- preX = X -> sets preX as the previous guess (initially the same as X).
- Iteration loop:
- For each iteration from 1 to Max_Ite, and for each row i in A, calculating the new approximation for X[i]:
- sum = B[i] -> initializing sum to the constant term B[i].
- For each j ≠ i, subtracting the product of A[i][j] and preX[j] from sum: sum -= A[i][j] * preX[j].
- Dividing sum by the diagonal element A[i][i] to update X[i]: X[i] = sum / A[i][i].
- Convergence check:
- Calculating the difference between X and preX as the maximum absolute difference between corresponding elements.
- If diff < tol, the algorithm has converged, and it returns X with the number of iterations taken.
- Updating preX = X for the next iteration.
- Non-Convergence Handling: If the method fails to converge within Max_Ite iterations, it outputs a message and returns X as the best attempt.
Key Points:
- Convergence: The algorithm stops either when convergence is reached within the tolerance tol or upon reaching the maximum number of iterations (Max_Ite).
- Limitations: The Jacobi method may not converge if the matrix is not diagonally dominant.
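As a concrete reference, here is a minimal C++ sketch of the two routines described above. The names (diagonally_dominant, Jacobi_iterative, Max_Ite, tol) mirror the text, but the container types and I/O details are assumptions and may differ from the project's actual source.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;

// True if, in every row, |A[i][i]| >= sum of |A[i][j]| over j != i.
bool diagonally_dominant(const vector<vector<double>>& A) {
    for (size_t i = 0; i < A.size(); i++) {
        double off = 0.0;
        for (size_t j = 0; j < A.size(); j++)
            if (j != i) off += fabs(A[i][j]);
        if (fabs(A[i][i]) < off) return false;
    }
    return true;
}

// Jacobi sweep over an n x (n+1) augmented matrix; every update within a
// sweep reads only the previous iterate preX.
vector<double> Jacobi_iterative(const vector<vector<double>>& mat,
                                vector<double> X, int Max_Ite, double tol) {
    int n = mat.size();
    vector<vector<double>> A(n, vector<double>(n));
    vector<double> B(n);
    for (int i = 0; i < n; i++) {                  // split mat into A and B
        for (int j = 0; j < n; j++) A[i][j] = mat[i][j];
        B[i] = mat[i][n];
    }
    if (!diagonally_dominant(A))
        cout << "Warning: matrix is not diagonally dominant; "
                "Jacobi may not converge.\n";
    vector<double> preX = X;
    for (int it = 1; it <= Max_Ite; it++) {
        for (int i = 0; i < n; i++) {
            double sum = B[i];
            for (int j = 0; j < n; j++)
                if (j != i) sum -= A[i][j] * preX[j];
            X[i] = sum / A[i][i];
        }
        double diff = 0.0;
        for (int i = 0; i < n; i++)
            diff = max(diff, fabs(X[i] - preX[i]));
        if (diff < tol) {
            cout << "Converged after " << it << " iterations\n";
            return X;
        }
        preX = X;                                  // roll current into previous
    }
    cout << "No convergence within Max_Ite iterations\n";
    return X;
}
```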
The Gauss-Seidel method is another iterative technique for approximating solutions to a system of linear equations AX = B. Like the Jacobi method, it relies on diagonal dominance of the matrix for convergence. However, Gauss-Seidel can converge faster because it uses updated values immediately within the same iteration.
Algorithms of the code:
Steps Involved:
- Input Setup:
- Starting with a matrix mat of size n×(n+1), where the last column represents the constant vector B. Also providing an initial guess vector X, the maximum number of iterations Max_Ite, and a tolerance level tol.
- Matrix and Vector Extraction:
- Separating the input matrix into two parts:
- Matrix A containing the coefficients of the equations.
- Vector B containing the constants.
- Diagonal Dominance Check:
- Checking if the matrix A is diagonally dominant. This means that for each row, the absolute value of the diagonal element must be greater than or equal to the sum of the absolute values of the other elements in that row. If the matrix is not diagonally dominant, printing a warning message indicating that the Gauss-Seidel method may not converge and could produce incorrect results.
- Initialization:
- Initializing a previous solution vector preX with the initial guess vector X.
- Iteration Process:
- For each iteration up to the maximum allowed iterations:
- For each row in the matrix:
- Starting by setting the initial sum to the corresponding value in vector B.
- Subtracting the contributions of all other variables using the current values in vector X.
- Updating the current variable in vector X by dividing the adjusted sum by the diagonal element of the current row.
- Convergence Checking:
- After updating all variables in X, calculating the maximum difference between the current values in X and the previous values in preX.
- If the maximum difference is less than the specified tolerance tol, printing a convergence message and returning the current solution vector X.
- Updating Previous Values:
- Before the next iteration, updating preX to hold the current values of X for comparison in the next iteration.
- Handling Non-Convergence:
- If the algorithm reaches the maximum number of iterations without achieving convergence, printing a message indicating non-convergence and returning the current approximation of X.
Processes Inside:
- GaussSeidel_iterative(mat, X, Max_Ite, tol) function: This function performs the Gauss-Seidel iterative method to approximate a solution to the system AX = B.
- Parameters:
- mat: The augmented matrix containing coefficients and constants of the system.
- X: The initial guess vector for the solution.
- Max_Ite: The maximum number of iterations allowed for convergence.
- tol: The tolerance level for convergence.
- Process:
- Extracting Matrix A and Vector B from mat:
- A[i][j] = mat[i][j] — copies coefficient matrix A.
- B[i] = mat[i][n] — stores constant terms in vector B.
- Checking Diagonal Dominance:
- Calls diagonally_dominant(A). If A is not diagonally dominant, prints a warning message.
- Initializing preX = X as the previous guess.
- Iteration Loop:
- For each iteration from 1 to Max_Ite:
- For each row i, calculating sum = B[i].
- For each j ≠ i, subtracting the product of A[i][j] and X[j] from sum.
- Updating X[i] = sum / A[i][i].
- Calculating diff, the maximum absolute difference between X and preX.
- If diff < tol, returning X with a convergence message.
- Updating preX = X.
- Non-Convergence Handling:
- If no convergence within Max_Ite, printing a message and returning X.
Key Points:
- Convergence: The algorithm stops when it reaches the tolerance tol or the maximum number of iterations (Max_Ite).
- Limitations: Gauss-Seidel may not converge if the matrix is not diagonally dominant.
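The following sketch shows how the Gauss-Seidel loop could look in C++. It is structurally the Jacobi sketch with one substantive change: the row sum reads from X itself, so values updated earlier in the same sweep take effect immediately. Names mirror the text; the real implementation may differ.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;

// Gauss-Seidel sweep over an n x (n+1) augmented matrix.
vector<double> GaussSeidel_iterative(const vector<vector<double>>& mat,
                                     vector<double> X, int Max_Ite, double tol) {
    int n = mat.size();
    vector<double> preX = X;
    for (int it = 1; it <= Max_Ite; it++) {
        for (int i = 0; i < n; i++) {
            double sum = mat[i][n];                  // constant term B[i]
            for (int j = 0; j < n; j++)
                if (j != i) sum -= mat[i][j] * X[j]; // current values, not preX
            X[i] = sum / mat[i][i];
        }
        double diff = 0.0;
        for (int i = 0; i < n; i++)
            diff = max(diff, fabs(X[i] - preX[i]));
        if (diff < tol) {
            cout << "Converged after " << it << " iterations\n";
            return X;
        }
        preX = X;
    }
    cout << "No convergence within Max_Ite iterations\n";
    return X;
}
```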
Algorithm of the Code:
This code performs Gaussian Elimination on a given matrix mat to convert it into an upper triangular form. The goal of Gaussian Elimination is to simplify the matrix, allowing easier solving for unknowns in a system of linear equations.
Steps involved:
- Partial Pivoting: Swap rows if necessary to bring the largest element (in absolute terms) in each column to the diagonal position, enhancing numerical stability.
- Row Scaling: Normalize each row by dividing by its pivot element (the diagonal element).
- Row Elimination: For each row, eliminate elements below the pivot by subtracting an appropriate multiple of the current row from the rows below.
Processes Inside:
- Initialization:
- `n` is set to the matrix's size (number of rows).
- A vector `piv` is created to store the pivot elements for each row after they are normalized.
- Loop through Rows (Outer Loop `i`):
- Partial Pivoting: For each row `i`, a nested loop (`k = i + 1` to `n`) compares each element in the current column with the diagonal element. If any element below the diagonal is larger, it swaps the rows.
- Normalize Pivot Row:
- The diagonal element (pivot) for row `i` is stored in `pivot`.
- If the pivot is non-zero, it is saved in `piv`, and the entire row `i` is divided by `pivot` to make the pivot equal to 1.
- Eliminate Entries Below Pivot (Inner Loop `k`):
- For each row `k` below the current row `i`, a `factor` is calculated as the value in the `k`th row divided by the pivot element.
- The `k`th row is then updated by subtracting `factor * (current row i)` from it, effectively making the element below the pivot zero.
- Return Result:
- The modified matrix `mat` (now in upper triangular form) is returned.
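A compact C++ sketch of the elimination pass described above, assuming `mat` is an n x (n+1) augmented matrix of doubles; the project's actual function name and signature may differ.

```cpp
#include <cmath>
#include <utility>
#include <vector>
using namespace std;

// Forward pass only: partial pivoting, pivot-row normalization, and
// elimination below each pivot, yielding an upper triangular form.
vector<vector<double>> gaussian_elimination(vector<vector<double>> mat) {
    int n = mat.size();
    int m = mat[0].size();
    vector<double> piv(n);                       // pivot values before scaling
    for (int i = 0; i < n; i++) {
        for (int k = i + 1; k < n; k++)          // partial pivoting
            if (fabs(mat[k][i]) > fabs(mat[i][i]))
                swap(mat[i], mat[k]);
        double pivot = mat[i][i];
        if (pivot != 0.0) {                      // normalize the pivot row
            piv[i] = pivot;
            for (int j = 0; j < m; j++) mat[i][j] /= pivot;
        }
        for (int k = i + 1; k < n; k++) {        // eliminate below the pivot
            double factor = mat[k][i];           // pivot is 1 after scaling
            for (int j = 0; j < m; j++) mat[k][j] -= factor * mat[i][j];
        }
    }
    return mat;                                  // upper triangular form
}
```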
Algorithm of the Code:
This code implements the Gauss-Jordan elimination method, an extension of Gaussian Elimination that transforms the matrix into a reduced row echelon form (RREF). In this form, each pivot is 1, and all entries in each pivot's column (both above and below) are zero. This results in a fully simplified matrix, making it straightforward to solve systems of linear equations.
Steps involved:
- Partial Pivoting: Similar to Gaussian Elimination, rows are swapped to bring the largest element (in absolute terms) in each column to the pivot position.
- Row Normalization: Normalize each pivot row so that the pivot element becomes 1.
- Row Elimination Below the Pivot: For each pivot, eliminate all elements below the pivot.
- Row Elimination Above the Pivot: Starting from the last row upwards, make all elements above each pivot zero.
Processes Inside:
- Initialization:
- `n` is set to the number of rows in `mat`.
- Loop through Rows (Outer Loop `i`):
- Partial Pivoting: For each row `i`, a nested loop (`k = i + 1` to `n`) compares each element in the current column with the diagonal element. If a larger element is found below the current pivot, rows are swapped to enhance stability.
- Normalize Pivot Row:
- The diagonal element (pivot) for row `i` is stored in `pivot`.
- If the pivot is non-zero, each element in row `i` is divided by `pivot` to make the pivot equal to 1.
- Eliminate Entries Below Pivot (Forward Elimination):
- For each row `k` below the current row `i`, calculate the `factor` as the value in the `k`th row divided by the pivot element in row `i`.
- Update row `k` by subtracting `factor * (current row i)` from it, which makes the element in column `i` zero.
- Eliminate Entries Above Pivot (Backward Elimination):
- A backward loop (`i = n-1` down to 0) ensures that each column containing a pivot also has zeroes in all rows above the pivot.
- For each row `k` above the current row `i`, a `factor` is calculated to zero out the entries above the pivot.
- Return Result:
- The fully reduced matrix `mat` in RREF form is returned, with each pivot equal to 1 and zeros both above and below each pivot.
Explanation Notes: Gauss-Jordan elimination provides a unique solution to a system of linear equations (if it exists) by reducing the matrix to its simplest form, allowing direct identification of variable values.
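A possible C++ rendering of the full pass, again assuming an augmented matrix of doubles: the forward half matches the Gaussian-elimination sketch, followed by the backward pass that produces RREF.

```cpp
#include <cmath>
#include <utility>
#include <vector>
using namespace std;

// Gauss-Jordan: forward elimination with partial pivoting, then a backward
// pass that clears the entries above each pivot.
vector<vector<double>> gauss_jordan(vector<vector<double>> mat) {
    int n = mat.size();
    int m = mat[0].size();
    for (int i = 0; i < n; i++) {
        for (int k = i + 1; k < n; k++)              // partial pivoting
            if (fabs(mat[k][i]) > fabs(mat[i][i]))
                swap(mat[i], mat[k]);
        double pivot = mat[i][i];
        if (pivot != 0.0)                            // pivot -> 1
            for (int j = 0; j < m; j++) mat[i][j] /= pivot;
        for (int k = i + 1; k < n; k++) {            // forward elimination
            double factor = mat[k][i];
            for (int j = 0; j < m; j++) mat[k][j] -= factor * mat[i][j];
        }
    }
    for (int i = n - 1; i >= 0; i--)                 // backward elimination
        for (int k = i - 1; k >= 0; k--) {
            double factor = mat[k][i];
            for (int j = 0; j < m; j++) mat[k][j] -= factor * mat[i][j];
        }
    return mat;                                      // reduced row echelon form
}
```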
Algorithm of the Code:
- Initialize Variables:
- Read the number of variables (n).
- Set up the extended matrix A_ext to store both the coefficients and constants.
- Extract Coefficients and Constants:
- Separate the coefficient matrix A and the constants vector b from A_ext.
- Initialize L and U Matrices:
- Set up L (lower triangular matrix) and U (upper triangular matrix) with initial values: L has ones on the diagonal, while U is initialized to zero.
- LU Decomposition:
- Calculate the L and U matrices:
- Upper Triangle Calculation: Calculate the elements in U.
- Lower Triangle Calculation: Calculate the elements in L.
- Forward Substitution:
- Solve for y in the equation Ly = b.
- Backward Substitution:
- Solve for x in the equation Ux = y.
- Display Solution:
- Output the solution vector x.
Processes Inside:
- Input Collection:
- The function prompts the user to enter the number of variables and then takes input for coefficients and constants into an extended matrix A_ext.
- Matrix Separation:
- A_ext is split into two components: A for coefficients and b for constants.
- Matrix Initialization:
- L is initialized as a lower triangular matrix with 1s on the diagonal, while U is initialized as an upper triangular matrix filled with zeros.
- LU Factorization Process:
- The code iterates through A to compute the values for L and U:
- Upper Triangular Matrix U: For each row i, it calculates the elements of U in columns j ≥ i.
- Lower Triangular Matrix L: It calculates the elements of L in rows j > i, using values already computed in U.
- Forward Substitution for Ly = b:
- The code solves for y by iterating from the first row to the last, ensuring the values of y satisfy Ly = b.
- Backward Substitution for Ux = y:
- The code solves for x by iterating from the last row to the first, ensuring the values of x satisfy Ux = y.
- Solution Output:
- The function outputs the computed values of x, representing the solution to the system.
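A self-contained Doolittle-style sketch of the decomposition and both substitution passes. It takes A and b directly rather than reading user input, and it omits pivoting; both are simplifying assumptions, so a zero pivot in U would break this version.

```cpp
#include <vector>
using namespace std;

// LU solve: factor A = LU (L with unit diagonal), then Ly = b and Ux = y.
vector<double> solveLU(const vector<vector<double>>& A, const vector<double>& b) {
    int n = A.size();
    vector<vector<double>> L(n, vector<double>(n, 0.0)), U(n, vector<double>(n, 0.0));
    for (int i = 0; i < n; i++) L[i][i] = 1.0;       // unit diagonal of L
    for (int i = 0; i < n; i++) {
        for (int j = i; j < n; j++) {                // row i of U
            double s = 0.0;
            for (int k = 0; k < i; k++) s += L[i][k] * U[k][j];
            U[i][j] = A[i][j] - s;
        }
        for (int j = i + 1; j < n; j++) {            // column i of L
            double s = 0.0;
            for (int k = 0; k < i; k++) s += L[j][k] * U[k][i];
            L[j][i] = (A[j][i] - s) / U[i][i];
        }
    }
    vector<double> y(n), x(n);
    for (int i = 0; i < n; i++) {                    // forward: Ly = b
        double s = 0.0;
        for (int k = 0; k < i; k++) s += L[i][k] * y[k];
        y[i] = b[i] - s;                             // L[i][i] == 1
    }
    for (int i = n - 1; i >= 0; i--) {               // backward: Ux = y
        double s = 0.0;
        for (int k = i + 1; k < n; k++) s += U[i][k] * x[k];
        x[i] = (y[i] - s) / U[i][i];
    }
    return x;
}
```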
The Bisection Method is an iterative technique for finding a root within a specified interval [a, b]. It relies on the principle that if a continuous function changes sign within an interval, then there must be a root in that interval.
Algorithms of the code:
Steps involved:
- Input Parameters:
The algorithm requires:
- A vector of coefficients cof representing a polynomial.
- A start value start and an end value end to define the interval for root finding.
- A step size that determines the granularity of the intervals checked for roots.
- A tolerance level tol to determine the acceptable error margin for root approximation.
- Finding Intervals:
- The function intervals is defined to find all intervals in which the polynomial changes sign, indicating the presence of a root.
- It initializes two variables, a (starting point) and b (end point for the current interval), and checks pairs of values from start to end using the given step.
- For each pair (a, b), it evaluates the polynomial at these points using the function f(cof, x). If the product of f(cof, a) and f(cof, b) is less than or equal to zero, it means there is at least one root in the interval [a, b]. This interval is then added to a list of intervals.
- Bisection Algorithm:
- The bisection function takes an interval [a, b] and checks if it is valid for finding a root (i.e., if the function values at a and b have opposite signs).
- It calculates the midpoint c of the interval and evaluates the function at this midpoint.
- If the absolute value of the function at c is less than the specified tolerance tol, it indicates convergence, and the root c is returned.
- If the function changes sign between a and c, the search interval is updated to [a, c]. Otherwise, it updates to [c, b].
- This process repeats until the function value at the midpoint is within the tolerance or the interval width shrinks below tol.
- Finding All Roots:
- The bisection_method_roots function utilizes the previously defined functions to find all roots of the polynomial within the given interval [a, b].
- It first calls the intervals function to obtain all intervals where roots may exist.
- For each interval, it calls the bisection function to find the root and adds it to the list of roots if a valid root is found.
- Output:
- The final output of the bisection_method_roots function is a vector of roots found within the specified interval.
Processes Inside:
- Initial Function Evaluations:
- Calculating fa = f(cof, a) and fb = f(cof, b) to evaluate the function at both ends of the interval.
- If fa * fb >= 0, printing an error message indicating that the interval is invalid (no sign change) and returning nan.
- Iteration Loop:
- Initializing c = a to hold the midpoint of the interval.
- While the interval width (b - a) >= tol:
- Calculating the midpoint c = (a + b) / 2 and evaluating fc = f(cof, c).
- If |fc| < tol, breaking the loop as the root is sufficiently approximated.
- If fa * fc < 0, updating b = c and fb = fc (the root lies in the left half).
- Otherwise, updating a = c and fa = fc (the root lies in the right half).
- Return Value:
- Returning the estimated root c.
Limitations: The bisection method requires that the function be continuous and that there is a sign change over the interval [a,b].
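A minimal sketch of the bisection routine described above. The coefficient ordering assumed for f (highest power first, evaluated by Horner's rule) is an assumption; the project may store coefficients differently.

```cpp
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;

// Polynomial evaluation; cof runs from the highest power to the constant term.
double f(const vector<double>& cof, double x) {
    double val = 0.0;
    for (double c : cof) val = val * x + c;   // Horner's rule
    return val;
}

// Bisection on [a, b]; requires a sign change, returns nan otherwise.
double bisection(const vector<double>& cof, double a, double b, double tol) {
    double fa = f(cof, a), fb = f(cof, b);
    if (fa * fb >= 0) {
        cout << "Invalid interval: no sign change on [a, b]\n";
        return nan("");
    }
    double c = a;
    while (b - a >= tol) {
        c = (a + b) / 2.0;
        double fc = f(cof, c);
        if (fabs(fc) < tol) break;            // close enough to a root
        if (fa * fc < 0) { b = c; fb = fc; }  // root in left half
        else             { a = c; fa = fc; }  // root in right half
    }
    return c;
}
```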
Algorithms of the code:
Steps Involved:
- Input Parameters:
- The function false_position_method takes the following parameters:
- A vector cof containing the coefficients of the polynomial.
- Two doubles, a and b, representing the endpoints of the interval in which to search for roots.
- A tolerance level tol that specifies how close to zero the function value must be for a root to be considered found.
- A step size used to identify potential intervals where roots may exist.
- Finding Intervals:
- The function begins by finding potential intervals where roots may exist by calling the intervals function, passing the polynomial coefficients and the range defined by a and b, along with the step size.
- This returns a vector of pairs representing the intervals.
- Iterating Over Intervals:
- For each interval (a, b) in the vector of intervals:
- The function evaluates the polynomial at the endpoints a and b, storing the results in f_a and f_b, respectively.
- It checks if the function values at the endpoints have opposite signs (f_a * f_b < 0). If they do not, it skips to the next interval since no root exists in this interval.
- False Position Method Loop:
- If a valid interval is found, the method enters a while loop that continues while the width of the interval (b - a) is greater than or equal to the specified tolerance tol.
- Inside the loop, it calculates the root using the False Position formula.
- It then evaluates the function at this new root (f_root).
- If the absolute value of f_root is less than the tolerance (tol), the root is considered found, and it is added to the roots vector, breaking the loop.
- If the function value at the root has the same sign as f_a, it updates the lower bound a to root and recalculates f_a. Otherwise, it updates the upper bound b to root and recalculates f_b.
- Returning Roots:
- After processing all intervals, the function returns the vector roots, which contains all the roots found using the False Position Method.
Processes Inside:
- Finding Intervals:
- Calling the intervals function to identify intervals where the function may have roots, passing in the coefficients and the interval limits (a, b) along with the step.
- Iterating Over Intervals:
- For each identified interval (a, b):
- Calculating fa = f(cof, a) and fb = f(cof, b).
- If fa * fb >= 0, skipping to the next interval since no root exists in it.
- Iteration Loop:
- While the width of the interval (b - a) >= tol:
- Calculating the root using the false position formula: root = b - fb * (b - a) / (fb - fa).
- Evaluating froot = f(cof, root).
- If |froot| < tol, considering the root found and breaking the loop.
- If fa * froot < 0, updating b = root and fb = froot.
- Otherwise, updating a = root and fa = froot.
- Return Value:
- Returning the list of found roots.
Limitations: The false position method relies on a sign change and can be sensitive to the choice of initial interval, which may affect convergence behavior.
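A sketch combining the interval scan and the false-position loop. The helpers f and intervals are written out here so the block stands alone; their exact shapes in the project are assumptions.

```cpp
#include <cmath>
#include <utility>
#include <vector>
using namespace std;

// Horner evaluation; coefficients from the highest power to the constant term.
double f(const vector<double>& cof, double x) {
    double val = 0.0;
    for (double c : cof) val = val * x + c;
    return val;
}

// Scans [start, end] in increments of step and records every sub-interval
// whose endpoints bracket a sign change.
vector<pair<double,double>> intervals(const vector<double>& cof,
                                      double start, double end, double step) {
    vector<pair<double,double>> out;
    for (double a = start; a + step <= end; a += step)
        if (f(cof, a) * f(cof, a + step) <= 0)
            out.push_back({a, a + step});
    return out;
}

// False position over every bracketed interval, following the steps above.
vector<double> false_position_method(const vector<double>& cof, double a,
                                     double b, double tol, double step) {
    vector<double> roots;
    for (auto [lo, hi] : intervals(cof, a, b, step)) {
        double fa = f(cof, lo), fb = f(cof, hi);
        if (fa * fb >= 0) continue;                         // no sign change
        while (hi - lo >= tol) {
            double root = hi - fb * (hi - lo) / (fb - fa);  // false position cut
            double froot = f(cof, root);
            if (fabs(froot) < tol) { roots.push_back(root); break; }
            if (fa * froot < 0) { hi = root; fb = froot; }  // root in [lo, root]
            else               { lo = root; fa = froot; }  // root in [root, hi]
        }
    }
    return roots;
}
```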
Algorithm of the Code:
This code applies the Newton-Raphson method to find all unique roots of a polynomial function defined by the coefficients in a vector. The function f(x) represents a polynomial of degree 4, and f'(x) is its derivative; both are used to iteratively approximate roots. Starting from a range of initial guesses, the code converges to the unique roots of the polynomial and stores them.
Steps involved:
- Define Polynomial and Derivative: Functions `f(x)` and `f'(x)` are created to evaluate the polynomial and its derivative for any `x` value.
- Unique Root Check: A helper function verifies that a computed root is distinct and hasn't been found yet.
- Newton-Raphson Iteration: For each initial guess, the algorithm refines a root approximation using Newton-Raphson iterations until convergence within a specified tolerance.
- Collect and Sort Results: Stores unique roots in ascending order and returns them.
Processes Inside:
- Function Definitions:
- `f(vf, x)` calculates the polynomial value for a given `x` using coefficients `a`, `b`, `c`, `d`, and `e` from vector `vf`.
- `fp(vf, x)` computes the polynomial's derivative at `x`.
- Unique Root Verification (`isUniqueRoot`):
- Takes a list of roots (`vc`) and checks if the calculated root (`root`) is unique by comparing it with each existing root in `vc`.
- If any stored root differs from the new one by less than `0.0001`, the root is considered non-unique.
- Newton-Raphson Iteration (`newtonraphson`):
- Initializes an empty vector `vc` to store unique roots and sets a tolerance `tolr`.
- For each integer `i` between `-maxAt` and `maxAt`, an initial guess `x0` is calculated as `i - maxAt / 2`.
- Inner Iteration (for each guess):
- For each `x0`, computes up to `maxIt` iterations to refine the root approximation.
- Steps per iteration:
- Calculates `f(x0)` and `f'(x0)`.
- If `f'(x0)` is too close to zero, exits to avoid division by zero.
- Computes the new approximation `x1 = x0 - f(x0)/f'(x0)`.
- If `|x1 - x0|` is within `tolr`, checks whether `x1` is unique using `isUniqueRoot`. If unique, adds `x1` to `vc`.
- Updates `x0 = x1` for further refinement if not converged.
- Result Collection:
- After collecting all unique roots, sorts `vc` in ascending order and returns it.
Explanation Notes: The Newton-Raphson method iterates over a range of initial guesses to approximate each root of the polynomial, ensuring high accuracy by checking convergence and uniqueness.
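A sketch of the sweep described above for a degree-4 polynomial with coefficients vf = {a, b, c, d, e}. The bounds maxAt and maxIt and the tolerance are assumed defaults, not necessarily the project's values.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
using namespace std;

// Degree-4 polynomial and its derivative; vf = {a, b, c, d, e}.
double f(const vector<double>& vf, double x) {
    return vf[0]*x*x*x*x + vf[1]*x*x*x + vf[2]*x*x + vf[3]*x + vf[4];
}
double fp(const vector<double>& vf, double x) {
    return 4*vf[0]*x*x*x + 3*vf[1]*x*x + 2*vf[2]*x + vf[3];
}

// A root is "new" only if it differs from every stored root by >= 0.0001.
bool isUniqueRoot(const vector<double>& vc, double root) {
    for (double r : vc)
        if (fabs(r - root) < 0.0001) return false;
    return true;
}

// Newton-Raphson over a sweep of initial guesses.
vector<double> newtonraphson(const vector<double>& vf,
                             int maxAt = 100, int maxIt = 100,
                             double tolr = 1e-6) {
    vector<double> vc;
    for (int i = -maxAt; i <= maxAt; i++) {
        double x0 = i - maxAt / 2.0;              // initial guess, as described
        for (int it = 0; it < maxIt; it++) {
            double d = fp(vf, x0);
            if (fabs(d) < 1e-12) break;           // avoid division by near-zero
            double x1 = x0 - f(vf, x0) / d;       // Newton step
            if (fabs(x1 - x0) < tolr) {           // converged for this guess
                if (isUniqueRoot(vc, x1)) vc.push_back(x1);
                break;
            }
            x0 = x1;
        }
    }
    sort(vc.begin(), vc.end());
    return vc;
}
```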
Algorithm of the Code:
This code uses the Secant Method to approximate roots of a polynomial. It iteratively finds the points where the polynomial value approaches zero by using two initial guesses and updating them until convergence to a root. It checks each result for uniqueness before storing it.
Steps involved:
- Initialize: For each integer `i` in a specified range, two starting points (`x1` and `x2`) are chosen.
- Secant Iteration: Each pair is refined through Secant iterations to approximate a root.
- Root Verification: The approximated root is checked for uniqueness before storing.
- Result Collection: All unique roots are collected, sorted, and returned.
Processes Inside:
- Initialize Variables:
- Takes the polynomial coefficients in vector `vf`.
- Defines `vc` to store unique roots and sets tolerance `tol` for convergence.
- Outer Loop (Starting Guesses):
- Iterates through initial guesses from `-maxAt` to `maxAt`.
- For each integer `i`, sets `x1 = i` and `x2 = i + 1` as the starting points.
- Calculates initial polynomial values `fx1 = f(vf, x1)` and `fx2 = f(vf, x2)`.
- Secant Iteration (Inner Loop):
- Up to `maxIt` iterations are performed to find a root approximation.
- Per Iteration:
- Checks if `|fx2 - fx1|` is too small (near-zero slope); breaks to avoid division by zero.
- Calculates the next approximation `x3` using the Secant formula: `x3 = (x1 * fx2 - x2 * fx1) / (fx2 - fx1)`.
- If `|f(vf, x3)| < tol`, the method considers `x3` a root. It checks whether `x3` is unique using `isUniqueRoot` and, if so, adds it to `vc`.
- Updates `x1 = x2`, `fx1 = fx2`, `x2 = x3`, and `fx2 = f(vf, x3)` for the next iteration if not converged.
- Result Collection:
- After identifying all unique roots, the vector `vc` is sorted in ascending order and returned.
Explanation Notes: The Secant method requires two initial guesses but doesn’t need the derivative, making it a good alternative to Newton-Raphson. By iteratively updating the guesses, it approximates the roots, ensuring convergence and uniqueness through checks.
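A matching sketch of the secant sweep. The polynomial and uniqueness helpers are repeated from the Newton-Raphson sketch so this block also stands alone; bounds and tolerance are again assumed defaults.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
using namespace std;

// Same degree-4 polynomial and uniqueness check as the Newton-Raphson sketch.
double f(const vector<double>& vf, double x) {
    return vf[0]*x*x*x*x + vf[1]*x*x*x + vf[2]*x*x + vf[3]*x + vf[4];
}
bool isUniqueRoot(const vector<double>& vc, double root) {
    for (double r : vc)
        if (fabs(r - root) < 0.0001) return false;
    return true;
}

// Secant sweep over integer starting pairs (i, i+1).
vector<double> secant(const vector<double>& vf,
                      int maxAt = 100, int maxIt = 100, double tol = 1e-6) {
    vector<double> vc;
    for (int i = -maxAt; i <= maxAt; i++) {
        double x1 = i, x2 = i + 1;
        double fx1 = f(vf, x1), fx2 = f(vf, x2);
        for (int it = 0; it < maxIt; it++) {
            if (fabs(fx2 - fx1) < 1e-12) break;               // near-zero slope
            double x3 = (x1 * fx2 - x2 * fx1) / (fx2 - fx1);  // secant step
            if (fabs(f(vf, x3)) < tol) {
                if (isUniqueRoot(vc, x3)) vc.push_back(x3);
                break;
            }
            x1 = x2; fx1 = fx2;                               // slide the window
            x2 = x3; fx2 = f(vf, x3);
        }
    }
    sort(vc.begin(), vc.end());
    return vc;
}
```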
Algorithm of the Code:
This code uses the 4th-order Runge-Kutta method to approximate solutions to a differential equation over a specified range. Given initial conditions and step size `h`, it iteratively calculates the values of `x` and `y` by applying the Runge-Kutta formulas.
Steps involved:
- Initialization: The initial values of `x` and `y` are set.
- Runge-Kutta Step Calculation: Four intermediate slopes (`k1`, `k2`, `k3`, and `k4`) are computed using the differential equation.
- Update Solution: Using the weighted average of the slopes, the method updates `x` and `y` to approximate the solution at each step.
- Display Results: The results are stored in a vector and printed as `(x, y)` pairs.
Processes Inside:
- Function `rungekutta`:
- Inputs: `h` (step size), `x` and `y` (current values of the variables), and parameters `a`, `b`, and `t` for the function `f`.
- Intermediate Slopes:
- Calculates `k1`, `k2`, `k3`, and `k4` based on `f(x, y, a, b, t)`.
- Update `y` and `x`:
- Calculates `y` from the weighted average of the four slopes.
- Updates `x` by adding `h`.
- Returns: A pair `{x, y}` representing the next approximated values of `x` and `y`.
- Function `show_rk`:
- Purpose: Displays the Runge-Kutta solution over a range from 0 to approximately 4π.
- Steps:
- Initializes `vxy`, a vector of `(x, y)` pairs, and starts with `xy` at `{0.0, 0.0}`.
- Loop: Continuously calls `rungekutta` to get the next `(x, y)` values and updates `xy`.
- Output: Prints `x` and `y` values after each step and stores the result in `vxy`.
- Termination: Ends when `x` exceeds `4π`.
This approach is well suited to simulating solutions to ordinary differential equations where accuracy is crucial. The 4th-order Runge-Kutta method provides high accuracy by averaging multiple slopes, which smooths the approximation.
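A sketch of rungekutta and show_rk as described. The right-hand side f(x, y, a, b, t) is not specified in the text, so a placeholder ODE is used here purely for illustration.

```cpp
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>
using namespace std;

const double PI = acos(-1.0);

// Hypothetical right-hand side dy/dx = f(x, y, a, b, t); stand-in only.
double f(double x, double y, double a, double b, double t) {
    return a * sin(x) + b * y + t;
}

// One classical RK4 step: four slopes, weighted-average update, advance x by h.
pair<double,double> rungekutta(double h, double x, double y,
                               double a, double b, double t) {
    double k1 = f(x, y, a, b, t);
    double k2 = f(x + h / 2, y + h * k1 / 2, a, b, t);
    double k3 = f(x + h / 2, y + h * k2 / 2, a, b, t);
    double k4 = f(x + h, y + h * k3, a, b, t);
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0;   // weighted average of slopes
    x += h;
    return {x, y};
}

// Steps from (0, 0) until x passes 4*pi, printing and storing each pair.
void show_rk(double h, double a, double b, double t) {
    vector<pair<double,double>> vxy;
    pair<double,double> xy = {0.0, 0.0};
    while (xy.first <= 4 * PI) {
        xy = rungekutta(h, xy.first, xy.second, a, b, t);
        cout << xy.first << " " << xy.second << "\n";
        vxy.push_back(xy);
    }
}
```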
Algorithm of the Code:
This code finds the inverse of a given square matrix `mat` using an augmented-matrix approach with Gaussian elimination. The algorithm creates an augmented matrix by appending the identity matrix to `mat`, then applies row operations to transform the left half into the identity matrix. The right half of the result is then the inverse of `mat`.
Steps involved:
- Initialize Identity Matrix: Append the identity matrix to the given matrix, creating an augmented matrix.
- Forward Elimination: Perform Gaussian elimination to create zeros below the pivot element in each column.
- Backward Elimination: Create zeros above each pivot, transforming the left half into the identity matrix.
- Extract Inverse: Copy the right half of the augmented matrix, which now holds the inverse, into a separate matrix.
Processes Inside:
- Identity Matrix Creation:
- `idt` is initialized as an identity matrix of the same dimension `n` as `mat`.
- Each row `i` of `mat` is augmented with the corresponding row of `idt`, creating a combined matrix of size n x 2n.
- Forward Elimination (Upper Triangular Form):
- For each row `i`, the code performs a partial pivot by finding the row with the largest absolute value in the current column (to avoid numerical instability) and swaps it with the current row.
- The pivot row is normalized by dividing each element by the pivot value, ensuring that the diagonal element becomes 1.
- For each row below `i`, row operations create zeros in the current column by subtracting a multiple of the pivot row from each row below it.
- Backward Elimination (Identity Formation):
- Starting from the last row, the algorithm creates zeros above each pivot to fully transform the left half of the augmented matrix into an identity matrix.
- Each operation on row `i` clears the entries above the pivot position in its column.
- Extracting the Inverse Matrix:
- Once the left half of `mat` is the identity matrix, the right half represents the inverse of the original matrix.
- A new matrix `inv` is created to store the values of the inverse, which is then returned.
This function can be used for solving systems of linear equations where matrix inversion is required, as well as for applications in linear algebra where inverse matrices are needed.
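A sketch of the augmented-matrix inversion described above. It folds the identity append directly into the input rather than building a separate idt, and assumes an invertible matrix of doubles; a singular input would divide by a zero pivot.

```cpp
#include <cmath>
#include <utility>
#include <vector>
using namespace std;

// Inverts a square matrix via [mat | I] -> [I | inv].
vector<vector<double>> inverse(vector<vector<double>> mat) {
    int n = mat.size();
    for (int i = 0; i < n; i++) {                 // append identity: n x 2n
        mat[i].resize(2 * n, 0.0);
        mat[i][n + i] = 1.0;
    }
    for (int i = 0; i < n; i++) {
        int best = i;                             // partial pivoting
        for (int k = i + 1; k < n; k++)
            if (fabs(mat[k][i]) > fabs(mat[best][i])) best = k;
        swap(mat[i], mat[best]);
        double pivot = mat[i][i];
        for (int j = 0; j < 2 * n; j++) mat[i][j] /= pivot;  // pivot -> 1
        for (int k = i + 1; k < n; k++) {         // zero entries below pivot
            double factor = mat[k][i];
            for (int j = 0; j < 2 * n; j++) mat[k][j] -= factor * mat[i][j];
        }
    }
    for (int i = n - 1; i >= 0; i--)              // zero entries above pivots
        for (int k = i - 1; k >= 0; k--) {
            double factor = mat[k][i];
            for (int j = 0; j < 2 * n; j++) mat[k][j] -= factor * mat[i][j];
        }
    vector<vector<double>> inv(n, vector<double>(n));
    for (int i = 0; i < n; i++)                   // extract the right half
        for (int j = 0; j < n; j++) inv[i][j] = mat[i][n + j];
    return inv;
}
```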
show_matrix():
- This function displays a matrix in a readable format.
- It takes a 2D vector mat as input, which represents the matrix.
- It iterates through each element, printing each row and column.
- The output shows "RESULTING MATRIX" followed by the formatted matrix.
- Useful for visualizing matrices, particularly after transformations.
returnToMainMenu():
- This function pauses the program and waits for user input to return to the main menu.
- Displays a prompt to press "Enter" to return to the menu.
- Uses cin.ignore() and cin.get() to wait for input.
- Clears the console using system("CLS").
- Provides a smooth transition back to the main menu.
showMainMenu():
- Displays the main menu for selecting a numerical operation.
- Lists options for different types of numerical equations and matrix inversion.
- Outputs instructions for the user to enter their choice.
- Simple, user-friendly interface for navigating the program.
- Helps guide the user in making selections.
prompt():
- Acts as the main control for the program's operations based on user input.
- Shows the main menu and retrieves user choices.
- Based on the choice, it calls functions for solving linear/non-linear equations, differential equations, and matrix inversion.
- Handles all input and outputs results in a clear format.
- Uses returnToMainMenu to navigate back to the menu after each operation.