Topical Information

The purpose of this programming assignment is to test your ability to manipulate arrays in mathematical ways.

Program Background

First make sure you've read all of Chapter 6. Also be sure to recall matrices and systems of equations from your [linear] algebra days.

Program Information

Recall that many real-world problems can be solved through a system of equations describing the problem's parameters and how they relate to one another. An example of such a system would be the following:

    x - 2y +  z = 12
   4x +  y - 3z =  2
   3x + 4y - 2z =  6

Recall further that we can represent such a system of equations as a matrix equation by extracting the appropriate parts into matrix form:

       --         --
       | 1  -2   1 |
   A = | 4   1  -3 |
       | 3   4  -2 |
       --         --

       -- --
   _   | x |
   X = | y |
       | z |
       -- --

       --  --
   _   | 12 |
   B = |  2 |
       |  6 |
       --  --

     _   _
    AX = B

(Here the overbar indicates a column matrix. Another notation uses lowercase letters for column matrices, but I didn't want you to confuse the original algebra variable x with the name of the column matrix for the variables.)

If we were to follow the rules of matrix multiplication and matrix equality to their logical conclusions, we could reconstitute the original system of equations from this matrix equation form.

Of course, we can store column matrices and square matrices in 1D and 2D arrays in a C program. Therefore, we might want to explore solving such systems programmatically.
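
For instance, the storage for the example above might look something like this rough sketch (the MAX_VARS constant and all of the variable names are my own choices, not requirements):

   #include <stdio.h>

   #define MAX_VARS 20   /* assumed cap, matching the 20-equation limit noted later */

   int main(void)
   {
      double A[MAX_VARS][MAX_VARS];   /* coefficient matrix (2D array)       */
      double B[MAX_VARS];             /* constants' column matrix (1D array) */
      int n = 3;                      /* size of the system actually in use  */

      /* the example system from above, stored row by row */
      double coeffs[3][3] = { { 1.0, -2.0,  1.0 },
                              { 4.0,  1.0, -3.0 },
                              { 3.0,  4.0, -2.0 } };
      double rhs[3] = { 12.0, 2.0, 6.0 };

      for (int r = 0; r < n; r++)
      {
         B[r] = rhs[r];
         for (int c = 0; c < n; c++)
         {
            A[r][c] = coeffs[r][c];
         }
      }

      printf("Stored a %dx%d system.\n", n, n);
      return 0;
   }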

The first thought from most minds is to implement Gaussian elimination. While this technique is simple, it is performed on a matrix formed by augmenting the coefficient matrix (A) with the constants' column matrix (B). Although we could do this, it binds the coefficients to the constants in a tight way. It is often the case that the coefficient matrix can be re-used multiple times with different sets of constants for the right-hand sides. In such situations, Gaussian elimination would mean redoing the entire elimination for each new set of constants -- redundant effort.

A second thought that removes the tight binding of coefficients and constants is using a matrix [multiplicative] inverse. Then, just like when we solve a simple equation like ax=c by multiplying both sides by the multiplicative inverse of a, we could solve our matrix equation by multiplying by the matrix inverse of A:

     -1 _   _    -1_
    A  AX = X = A  B

Unfortunately, studying the techniques of inverting a matrix leads to the conclusion that they are complex and/or inefficient. Perhaps another way?

It turns out that we can factor a matrix A into the product of a pair of triangular matrices: LU. L here is lower triangular:

   --                --
   | a  0  0  0  0  0 |
   | b  c  0  0  0  0 |
   | d  e  f  0  0  0 |
   | g  h  i  j  0  0 |
   | k  l  m  n  o  0 |
   | p  q  r  s  t  u |
   --                --

Whereas U is upper triangular:

   --                --
   | a  b  c  d  e  f |
   | 0  g  h  i  j  k |
   | 0  0  l  m  n  o |
   | 0  0  0  p  q  r |
   | 0  0  0  0  s  t |
   | 0  0  0  0  0  u |
   --                --

(This isn't meant to imply that L and U contain the same elements arranged differently; only the location of the zeros matters for the triangularity of the matrices.)

If we perform such a factorization, then our system of equations can be solved thusly:

    _     _   _
   AX = LUX = B

If we substitute a new column matrix, say Y, for the product UX, we can further simplify:

    _     _   _
   AX = LUX = B
          _   _
         LY = B
          _   _
         UX = Y

But how is this better? Aren't L and U still matrices, and don't they still need inverting to solve the system? Yes, they're still matrices, but NO, no inverting is needed!!! Since they are triangular, their systems can be solved by simple substitution methods!
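
To make "simple substitution methods" concrete, here is a minimal sketch, assuming L carries 1s on its diagonal (as it does in the sample session below) and using function names and a MAX_VARS cap of my own invention. Forward substitution solves LY = B from the top row down; back substitution solves UX = Y from the bottom row up:

   #include <stdio.h>

   #define MAX_VARS 20

   /* forward substitution: solve L*Y = B from the top row down.
      L is lower triangular with 1s on its diagonal, so no division is needed. */
   void forward_sub(int n, double L[][MAX_VARS], double b[], double y[])
   {
      for (int r = 0; r < n; r++)
      {
         y[r] = b[r];
         for (int c = 0; c < r; c++)
         {
            y[r] -= L[r][c] * y[c];
         }
      }
   }

   /* back substitution: solve U*X = Y from the bottom row up */
   void back_sub(int n, double U[][MAX_VARS], double y[], double x[])
   {
      for (int r = n - 1; r >= 0; r--)
      {
         x[r] = y[r];
         for (int c = r + 1; c < n; c++)
         {
            x[r] -= U[r][c] * x[c];
         }
         x[r] /= U[r][r];
      }
   }

   int main(void)
   {
      /* the L, U, and first right-hand side from the sample session below */
      double L[MAX_VARS][MAX_VARS] = { { 1.0, 0.0,      0.0 },
                                       { 4.0, 1.0,      0.0 },
                                       { 3.0, 1.111111, 1.0 } };
      double U[MAX_VARS][MAX_VARS] = { { 1.0, -2.0,  1.0      },
                                       { 0.0,  9.0, -7.0      },
                                       { 0.0,  0.0,  2.777778 } };
      double b[3] = { 12.0, 2.0, 6.0 };
      double y[MAX_VARS], x[MAX_VARS];

      forward_sub(3, L, b, y);
      back_sub(3, U, y, x);
      printf("%g %g %g\n", x[0], x[1], x[2]);   /* roughly 6, 0.8, and 7.6 */
      return 0;
   }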

And the binding problem? The LU decomposition is dependent only on A — not anything about the constants' column matrix!

Then sign me up! How do we perform this decomposition process?

There are multiple algorithms to implement LU decomposition. I believe the simplest and plainest is the Doolittle algorithm. It quite clearly shows that the LU decomposition stores the steps of a Gaussian elimination process into a pair of matrices... Wait! Gaussian elimination?! I thought we gave that up!?

Yes...but it turns out that the LU decomposition process is a disentangling of the coefficients and constants with respect to the elimination process. The U matrix you get is pretty much the row echelon form the coefficients reach at the end of Gaussian elimination. The L matrix stores the essence of the calculations used to get there -- the multipliers that eliminated the lower triangle. Thus you still have all the information necessary to rebuild A.
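
As a sketch only -- no pivoting, so it gives up if a zero ever lands on the diagonal, and the function name and MAX_VARS constant are mine -- a Doolittle-style factorization can be written roughly like this. Row i of U is row i of A minus the effect of the earlier eliminations, and column i of L holds the multipliers used to clear the entries below that pivot:

   #include <stdio.h>

   #define MAX_VARS 20

   /* Doolittle LU factorization of the n x n matrix A (no pivoting).
      L gets 1s on its diagonal; U ends up upper triangular.
      Returns 0 on success, -1 if a zero pivot is hit.               */
   int doolittle_lu(int n, double A[][MAX_VARS],
                    double L[][MAX_VARS], double U[][MAX_VARS])
   {
      for (int i = 0; i < n; i++)
      {
         /* row i of U: A's row minus the effect of earlier eliminations */
         for (int j = i; j < n; j++)
         {
            double sum = 0.0;
            for (int k = 0; k < i; k++)
            {
               sum += L[i][k] * U[k][j];
            }
            U[i][j] = A[i][j] - sum;
            L[i][j] = (i == j) ? 1.0 : 0.0;   /* unit diagonal, zeros above */
         }

         if (U[i][i] == 0.0)
         {
            return -1;                        /* zero pivot -- can't divide */
         }

         /* column i of L: the multipliers that clear column i below the pivot */
         for (int j = i + 1; j < n; j++)
         {
            double sum = 0.0;
            for (int k = 0; k < i; k++)
            {
               sum += L[j][k] * U[k][i];
            }
            L[j][i] = (A[j][i] - sum) / U[i][i];
            U[j][i] = 0.0;                    /* zeros below U's diagonal */
         }
      }

      return 0;
   }

   int main(void)
   {
      /* the coefficient matrix from the example above */
      double A[MAX_VARS][MAX_VARS] = { { 1.0, -2.0,  1.0 },
                                       { 4.0,  1.0, -3.0 },
                                       { 3.0,  4.0, -2.0 } };
      double L[MAX_VARS][MAX_VARS], U[MAX_VARS][MAX_VARS];

      if (doolittle_lu(3, A, L, U) == 0)
      {
         for (int r = 0; r < 3; r++)
         {
            for (int c = 0; c < 3; c++)
            {
               printf("%9.5f", L[r][c]);
            }
            printf("    ");
            for (int c = 0; c < 3; c++)
            {
               printf("%9.5f", U[r][c]);
            }
            printf("\n");
         }
      }

      return 0;
   }

Run on the example coefficients, this reproduces (up to rounding) the L and U shown in the sample session below.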

Also, I kinda lied when I said the Wikipedia entry clearly showed ...er...anything. It is mostly complete, and with a good knowledge of matrix processing and notation, you could piece it together. But, to help you through, here are some finer points:


So, for example, your program might behave like this:

$ ./syseq.out

                       Welcome to the System Solver!!!

Please enter the number of variables/equations:  200

I'm sorry, that is too large of a system for me to handle.

Please enter a size smaller than 21.

Please enter the number of variables/equations:  -21

I'm sorry, that is too small of a system to make sense.

Please enter a size larger than 0.

Please enter the number of variables/equations:  3

Now enter the coefficients of each equation -- one equation per line:
1  -2   1
4   1  -3
3   4  -2

Processing...done!

I've factored your coefficients' matrix into the product:

   --                         --  --                         --
   | 1.00000  0.00000  0.00000 |  | 1.00000 -2.00000  1.00000 |
   | 4.00000  1.00000  0.00000 |  | 0.00000  9.00000 -7.00000 |
   | 3.00000  1.11111  1.00000 |  | 0.00000  0.00000  2.77778 |
   --                         --  --                         --

Now enter the constants for the right-hand side:  12 2 6

That would lead to a solution set of 6, 0.8, and 7.6.

Do you have another right-hand side?  yes

Now enter the constants for the right-hand side:  -4 8 1

That would lead to a solution set of -1.4, -1.16, and -4.92.

Do you have another right-hand side?  no

Thanks for solving systems with me today!

Bye...*wave*
$ _

Note that you need to support a system of up to 20 equations in 20 unknowns! (If you want to support more, see the options below.)
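
In case it helps to picture the flow, here is a minimal sketch of just the size prompt and its validation loop from the sample session above; the MAX_VARS name, the exact messages, and the use of scanf are my assumptions, not requirements:

   #include <stdio.h>

   #define MAX_VARS 20   /* the 20-equation limit */

   int main(void)
   {
      int n = 0;

      do
      {
         printf("Please enter the number of variables/equations:  ");
         if (scanf("%d", &n) != 1)
         {
            return 1;     /* bail out on non-numeric input */
         }

         if (n > MAX_VARS)
         {
            printf("\nI'm sorry, that is too large of a system for me to handle.\n\n");
            printf("Please enter a size smaller than %d.\n\n", MAX_VARS + 1);
         }
         else if (n < 1)
         {
            printf("\nI'm sorry, that is too small of a system to make sense.\n\n");
            printf("Please enter a size larger than 0.\n\n");
         }
      } while (n < 1 || n > MAX_VARS);

      printf("Read a system size of %d.\n", n);
      return 0;
   }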


You'll hear talk about a permutation matrix in many sources. Using one is optional here, although it can improve the accuracy of the calculations for some matrices and keeps the factorization from breaking down when a zero would otherwise land on the diagonal. (There is an option to do this below.)
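
If you do pursue that, note that a permutation matrix is rarely stored as an actual matrix in code. One common representation -- sketched below with names of my own choosing -- is an index array that records the row swaps made while picking, for each column, the remaining row whose entry in that column has the largest magnitude:

   #include <math.h>
   #include <stdio.h>

   #define MAX_VARS 20

   /* One sketch of partial pivoting bookkeeping: perm[i] remembers which
      original row currently sits in position i.  Before eliminating
      column i, swap up the remaining row whose column-i entry has the
      largest magnitude.  (If this is folded into the factorization, the
      already-built part of L -- columns 0 through i-1 -- must be swapped too.) */
   void pivot_column(int n, int i, double A[][MAX_VARS], int perm[])
   {
      int best = i;

      for (int r = i + 1; r < n; r++)
      {
         if (fabs(A[r][i]) > fabs(A[best][i]))
         {
            best = r;
         }
      }

      if (best != i)
      {
         for (int c = 0; c < n; c++)        /* swap rows i and best of A */
         {
            double tmp = A[i][c];
            A[i][c] = A[best][c];
            A[best][c] = tmp;
         }
         int tp = perm[i];                  /* and record the swap       */
         perm[i] = perm[best];
         perm[best] = tp;
      }
   }

   int main(void)
   {
      double A[MAX_VARS][MAX_VARS] = { { 0.0,  2.0, 1.0 },
                                       { 4.0,  1.0, 3.0 },
                                       { 2.0, -1.0, 5.0 } };
      int perm[MAX_VARS] = { 0, 1, 2 };

      pivot_column(3, 0, A, perm);          /* moves the 4 up to be the pivot */
      printf("rows now come from original rows %d %d %d\n",
             perm[0], perm[1], perm[2]);
      return 0;
   }

Whatever swaps get recorded also have to be applied to each right-hand side before the forward substitution step.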

This assignment is (Level 5).

Options