The purpose of this programming assignment is to test your ability to manipulate arrays in mathematical ways.
First make sure you've read all of Chapter 6. Also be sure to recall matrices and systems of equations from your [linear] algebra days.
Recall that many real-world problems can be solved through a system of equations describing the problem's parameters and how they relate to one another. An example of such a system would be the following:
 x - 2y +  z = 12
4x +  y - 3z =  2
3x + 4y - 2z =  6
Recall further that we can represent such a system of equations as a matrix equation by extracting the appropriate parts into matrix form:
    --        --
    |  1 -2  1 |
A = |  4  1 -3 |
    |  3  4 -2 |
    --        --

    -- --
_   | x |
X = | y |
    | z |
    -- --

    --    --
_   | 12 |
B = |  2 |
    |  6 |
    --    --

 _   _
AX = B
(Here the overbar indicates a column matrix. Another notation uses lowercase letters for column matrices, but I didn't want you to confuse the original algebra variable x with the name of the column matrix for the variables.)
If we were to follow the rules of matrix multiplication and matrix equality to their logical conclusions, we could reconstitute the original system of equations from this matrix equation form.
Of course, we can store column and normal matrices in 1D and 2D arrays in a C program. Therefore, we might want to explore solving such systems programmatically.
The first thought from most minds is to implement Gaussian elimination. While this technique is simple, it is performed on a matrix formed by augmenting the coefficient matrix (A) with the constant column matrix (B). Although we could do this, it binds the coefficients to the constants in a tight way. It is often the case that the coefficient matrix can be re-used multiple times with different sets of constants for the right-hand sides. In such situations, use of Gaussian elimination would require redundant effort to solve for each set of constants.
A second thought that removes the tight binding of coefficients and constants is using a matrix [multiplicative] inverse. Then, just like when we solve a simple equation like ax=c by multiplying both sides by the multiplicative inverse of a, we could solve our matrix equation by multiplying by the matrix inverse of A:
 -1 _   _    -1_
A  AX = X = A  B
Unfortunately, studying the techniques of inverting a matrix leads to the conclusion that they are complex and/or inefficient. Perhaps another way?
It turns out that we can factor a matrix A into the product of a pair of triangular matrices: LU. L here is lower triangular:
--            --
| a 0 0 0 0 0 |
| b c 0 0 0 0 |
| d e f 0 0 0 |
| g h i j 0 0 |
| k l m n o 0 |
| p q r s t u |
--            --
Whereas U is upper triangular:
--            --
| a b c d e f |
| 0 g h i j k |
| 0 0 l m n o |
| 0 0 0 p q r |
| 0 0 0 0 s t |
| 0 0 0 0 0 u |
--            --
(Not to imply that they have the same elements arranged differently; only the locations of the zeros are significant to the triangularity of the matrices.)
If we perform such a factorization, then our system of equations can be solved thusly:
 _     _   _
AX = LUX = B
If we substitute a new column matrix — say, Y — for the product UX, we can further simplify:
 _     _   _
AX = LUX = B

 _   _
LY = B

 _   _
UX = Y
But how is this better? Aren't L and U still matrices and don't they still need inverting to solve the system? Yes, but NO!!! Since they are triangular matrices, their equations can be solved by simple substitution methods!
And the binding problem? The LU decomposition is dependent only on A — not anything about the constants' column matrix!
Then sign me up! How do we perform this decomposition process?
There are multiple algorithms to implement LU decomposition. I believe the simplest and plainest is the Doolittle algorithm. It quite clearly shows that the LU decomposition stores the steps of a Gaussian elimination process into a pair of matrices... Wait! Gaussian elimination?! I thought we gave that up!?
Yes...but it turns out that the LU decomposition process is a disentangling of the coefficients and constants with respect to the elimination process. The U matrix you get is pretty much the same as the row echelon form of the coefficients at the end of Gaussian elimination. The L matrix stores the essence of the calculations you used to eliminate the lower triangle from U. Thus you still have all the information necessary to rebuild A.
Also, I kinda lied when I said the Wikipedia entry clearly showed ...er... anything. It is mostly complete, and with an excellent knowledge of matrix processing and notation, you could piece it together. But, to help you through, here are some finer points:
You are building a sequence of A(n) and Ln matrices as n ranges through values from 1 to N-1 — yes little n and capital n are different; just like in C!
On each n value, you'll let i range from n+1 to N and fill in one column in the lower triangle of Ln. All diagonal entries of Ln should be 1 and all off-diagonal elements other than those explicitly calculated should be 0.
You'll need to keep all of the Ln matrices around to build L.
As to the final formation of L from the Lns, note that each Ln is an atomic triangular matrix which has an easy inversion pattern.
But along the way you'll be multiplying Ln by A(n-1) to form A(n). So no cheating and taking out just the column elements you want to keep... This will require 3D storage: an array of 2D arrays!
So, for example, your program might behave like this:
$ ./syseq.out
Welcome to the System Solver!!!

Please enter the number of variables/equations: 200
I'm sorry, that is too large of a system for me to handle.
Please enter a size smaller than 21.

Please enter the number of variables/equations: -21
I'm sorry, that is too small of a system to make sense.
Please enter a size larger than 0.

Please enter the number of variables/equations: 3

Now enter the coefficients of each equation -- one equation per line:
1 -2 1
4 1 -3
3 4 -2

Processing...done!

I've factored your coefficients' matrix into the product:

--                          --    --                          --
|  1.00000  0.00000  0.00000 |    |  1.00000 -2.00000  1.00000 |
|  4.00000  1.00000  0.00000 |    |  0.00000  9.00000 -7.00000 |
|  3.00000  1.11111  1.00000 |    |  0.00000  0.00000  2.77778 |
--                          --    --                          --

Now enter the constants for the right-hand side:
12
2
6

That would lead to a solution set of 6, 0.8, and 7.6.

Do you have another right-hand side? yes

Now enter the constants for the right-hand side:
-4
8
1

That would lead to a solution set of -1.4, -1.16, and -4.92.

Do you have another right-hand side? no

Thanks for solving systems with me today!

Bye...*wave*
$
Note that you need to support a system of up to 20 equations in 20 unknowns! (If you want to support more, see the options below.)
You'll hear talk about a permutation matrix in many sources. This is optional although it can improve the precision of calculations for some matrices. (There is an option to do this below.)
This assignment is (Level 5).
Add (Level 2) to make your program handle systems larger than 20x20 — as large as the user wants, in fact! This will require you to delve into dynamic 2D arrays as seen in Chapter 12 of your C book. (Don't worry, making the 3D array for the Lns is really similar. If you need help, let me know!)
Add (Level 1) to calculate the determinant of the user's coefficients' matrix. Rather than the complicated procedure normally required, you can merely multiply the diagonal elements of each of your triangular matrices all together. (The determinant of the example matrix above is 25, for instance.)
Add (Level 2.5) to invert the user's coefficients' matrix — multiplicative inverse, that is.
But didn't you say we were avoiding the complicated procedures of inverting a matrix with this LU decomposition approach to system solution?!
Yes. I did. And we did. But, it turns out that some people still want — for whatever reason — to have the coefficients' matrix's inverse. And, it turns out it isn't too hard to do using the LU system solving technique. It is weird, but easier than other means. Although it does mean implementing a full matrix multiplication function since we can't just forward/backward substitute here. *shrug*
You can check your inverse calculation by solving the system with the inverse technique and checking that you get the same answer that way as with the LU technique. *smile*
Add (Level 4) to implement the permutation matrix thing everyone keeps talking about on all those web sites and in all those books. Be sure to print out your P matrix along with the triangular ones.
Remember that this will also change your determinant (if you did that option) by a factor of (-1)^s, where s is the number of row swaps you performed during the LU decomposition process.