EGR 103/Spring 2021/Lab 8
Latest revision as of 19:00, 18 March 2021
Introduction
Some of the problems ask about condition numbers. You will want to read Chapra 11.2 (pp. 292-297) to learn more about norms and condition numbers. There is also more information about norms and condition numbers at Python:Linear Algebra. The summary version is:
- A norm gives you a measure of the size of the entries in a vector or matrix.
  - For 1-D arrays (vectors in the text), you need to know how the 1, 2 (or e, for Euclidean), and \(\infty\) norms are calculated and be able to calculate them by hand. np.linalg.norm() understands these.
  - For 2-D arrays (matrices in the text), you need to know how the 1, f (Frobenius), and \(\infty\) norms are calculated and be able to calculate them by hand. The calculation of a matrix 2-norm by hand is beyond the scope of this class, but Python can do it. np.linalg.norm() understands these too.
- A condition number of a matrix gives an idea of the sensitivity of the solution relative to the sensitivity of the measurements. The larger the condition number, the more difficult the geometry, and thus the more prone the results are to measurement errors. The rule of thumb is that your final answer will have as many digits of precision as the number of digits in your measurements minus the base-10 logarithm of the condition number.
  - For instance, if you take measurements to 5 significant figures and your system has a condition number of 100, your expectation is that your solution is accurate to \(5-\log_{10}(100)=5-2=3\) digits.
  - You generally report ranges of digits if the condition number is not an integer power of 10. For instance, if you know your measurements to 9 figures and your condition number is 485, \(\log_{10}(485)=2.69\), so you will lose between 2 and 3 digits of precision, meaning you know your final answers to within 6-7 digits (9 minus 2 or 3).
  - np.linalg.cond() can calculate condition numbers and np.log10() can calculate the base-10 logarithm of a number.
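As a quick sanity check on these ideas, here is a minimal NumPy sketch (the vector and matrix are made up for illustration):

```python
import numpy as np

# Vector norms: 1-norm (sum of |entries|), 2-norm (Euclidean), inf-norm (largest |entry|)
v = np.array([3.0, -4.0])
n1 = np.linalg.norm(v, 1)         # 3 + 4 = 7
n2 = np.linalg.norm(v, 2)         # sqrt(9 + 16) = 5
ninf = np.linalg.norm(v, np.inf)  # max(3, 4) = 4

# Condition number and the digits-of-precision rule of thumb
A = np.array([[1.0, 2.0], [3.0, 4.0]])
kappa = np.linalg.cond(A)      # 2-norm condition number by default
digits_lost = np.log10(kappa)  # roughly how many digits of precision you lose
print(n1, n2, ninf, kappa, digits_lost)
```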
Connect
- 8.2:
  - Matrix multiplication in Python 3 is done with the @ symbol.
  - You can create an $$N \times N$$ identity matrix with np.eye(N).
  - You can get the transpose of an array by appending .T to the array name.
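The three operations above can be sketched together (the matrix here is arbitrary):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
I = np.eye(2)  # 2x2 identity matrix
B = A @ I      # matrix multiplication uses the @ symbol
At = A.T       # transpose of A
print(B)
print(At)
```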
- 8.3:
- Rearrange the equations so $$p$$ and $$r$$ are positive and $$q$$ is negative. Do not normalize. There are 8 different ways to arrange these equations based on which side you move terms to alone, and infinitely many if you also allow normalization!
- 11.6:
- Pay special attention to the note saying that you need to scale each row of the matrix by dividing by the largest number. If that number is negative, you will divide by the negative value. In the end, each row will have at least one "1" in it.
- If you try to normalize an array of integers by dividing a row by its maximum value and then replacing the old row with the new row, Python will automatically convert the new row back to integers, which is not what you want. Here is an example of normalizing done incorrectly for integers:

  A = np.array([[1, -3, 2], [4, -5, 6], [9, 8, -7]])
  A[0] = A[0] / (-3)
  A[1] = A[1] / (6)
  A[2] = A[2] / (9)

  This results in:

  In [1]: print(A)
  [[0 1 0]
   [0 0 1]
   [1 0 0]]

  Instead, simply cast the matrix as floats by multiplying by 1.0 when you create the matrix. The code

  A = np.array([[1, -3, 2], [4, -5, 6], [9, 8, -7]]) * 1.0
  A[0] = A[0] / (-3)
  A[1] = A[1] / (6)
  A[2] = A[2] / (9)

  produces

  In [2]: print(A)
  [[-0.33333333  1.         -0.66666667]
   [ 0.66666667 -0.83333333  1.        ]
   [ 1.          0.88888889 -0.77777778]]
- 11.7
- No normalizing here.
- 11.12
  - For $$a$$, you do not need the inverse; just use np.linalg.solve().
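To see what np.linalg.solve() does without forming an inverse, here is a minimal sketch with a made-up 2x2 system (not the values from Chapra 11.12):

```python
import numpy as np

# Hypothetical system (for illustration only):
#   3x + 2y = 12
#    x + 4y = 14
A = np.array([[3.0, 2.0], [1.0, 4.0]])
b = np.array([12.0, 14.0])
x = np.linalg.solve(A, b)  # solves A @ x = b directly; no inverse needed
print(x)
```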
Gradescope
Collaborative 1: linsolve
- The returned solution array will have the same dimensions as the array of right-side constants.
- The valid norm types for computing the condition number of a matrix are 1, 2, np.inf, and "fro", where the latter has to be a string.
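For example, each valid norm type can be passed to np.linalg.cond() as its second argument (the matrix here is arbitrary):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
c1 = np.linalg.cond(A, 1)         # based on the 1-norm
c2 = np.linalg.cond(A, 2)         # based on the 2-norm
cinf = np.linalg.cond(A, np.inf)  # based on the infinity-norm
cfro = np.linalg.cond(A, "fro")   # Frobenius norm -- note the string
print(c1, c2, cinf, cfro)
```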
Collaborative 2: Chapra 8.10 (truss)
In general, there are many (correct) ways to re-write the equations in matrix form. For this specific problem, to make the autograder happy, the equations need to be in the matrix in the same order presented in the problem and the unknowns need to be in the same order as presented in the lab handout. Also, given the signs of the constants on the right side of the equation (all positive), you can figure out where to move the unknowns in the equations. Note that for this problem the coefficient matrix will be the same every time since the geometry of the system does not change.
Individual 1: Chapra 8.6 (mat_test)
- The autograder will not allow numpy (or anything else!) to be imported
- Your solution will likely have a triple loop. Do some matrix multiplications by hand for small and then larger matrices and figure out what processes you are using. Code those.
- Given that your solution will likely have a triple loop, be careful with your indentation!
- As noted, trying to create a list of lists of 0 by replicating a single inner list can be problematic since the interior lists all point to the same object. To visualize the wrong and right ways to make a list of lists of 0, see the examples at Python Tutor.
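A minimal sketch of the wrong and right ways (pure Python, no imports, consistent with the no-numpy rule for this problem):

```python
N = 3

# Wrong: [[0] * N] * N replicates *references* to one inner list,
# so every "row" is actually the same list object.
bad = [[0] * N] * N
bad[0][0] = 5   # this changes entry 0 of *every* row

# Right: build a new inner list for each row.
good = [[0] * N for _ in range(N)]
good[0][0] = 5  # this changes only the first row
print(bad)
print(good)
```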
Individual 2: Chapra 8.9 (tanks)
- The code for this will be similar to that from another problem (except for this one you do not have to return the coefficient matrix).
- Once again, I have made some decisions for you in terms of the order of the equations (do the tanks in order), the order of the unknowns (tank order), and the signs of the equations.
Individual Lab Report
1: Chapra 8.16
Note that the rotate_2d() function should not do any plotting - in fact, there is no reason to have matplotlib.pyplot in your chapra_08_16.py file! All the plotting will go in other scripts that import your function.
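For reference, a 2-D rotation is just a matrix multiplication. Here is one possible sketch of rotate_2d(); the signature (points as a 2-row array, angle in radians) is an assumption, so match whatever the lab handout actually specifies:

```python
import numpy as np

def rotate_2d(points, theta):
    """Rotate a 2xN array of (x; y) points counterclockwise by theta radians.
    (Hypothetical signature -- check the handout.)
    No plotting here: matplotlib does not belong in this file."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ points

# Example: rotating the point (1, 0) by 90 degrees gives (0, 1)
p = np.array([[1.0], [0.0]])
q = rotate_2d(p, np.pi / 2)
```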
2 and 3: Sweeps
Make sure you understand the first two examples at Python:Linear_Algebra#Sweeping_a_Parameter before starting these. You can ignore Python:Linear_Algebra#Multiple_solution_vectors_simultaneously.