\mainmatter
\chapter{Introductory Topics}\label{ch:intro}
\section{An Introduction to Numerical Analysis}
\begin{quote}
{\it ``In the 1950s and 1960s, the founding fathers of [Numerical Analysis] discovered that
inexact arithmetic can be a source of danger, causing errors in results that `ought'
to be right.''} \\
\\ --\href{https://en.wikipedia.org/wiki/Nick_Trefethen}{Oxford Professor Lloyd (Nick)
Trefethen (2006)}
\end{quote}
The field of Numerical Analysis is really the study of how to take
mathematical problems and solve them efficiently and accurately on a computer. There
are some problems where numerical analysis doesn't make much sense (e.g. finding an
algebraic derivative of a function) but for many problems a numerical method that gives an
approximate answer is both more efficient and more versatile than any analytic technique.
For example, if we needed to solve the differential equation $\frac{dy}{dt} = \sin(y^2) +
t$, the nonlinear nature of the problem makes it hard to work with analytically, but
computational methods that produce a plot of an approximate solution can be run very
quickly and likely give enough of a solution to be usable. Similarly, efficient
algorithms to do the basic operations of linear algebra (e.g. Gaussian elimination, matrix
factorization, or eigenvalue decomposition) are highly sought after and useful when
the matrices used for a model are very large.
In this chapter we will discuss some of the basic underlying ideas in
Numerical Analysis, and the essence of the above quote from Nick Trefethen will be a
recurring theme. In particular, we need to know how a computer stores numbers and when
that storage can get us into trouble. On the more mathematical side, we offer a brief
review of the Taylor Series from Calculus, which underpins many of our approximation
methods in this class. Finally, we close the chapter with several coding exercises that
will help you to develop your programming skills. It is expected that you know some of
the basics of \ProgLang programming before beginning this class. If you need to review the
basics then see Appendix \ref{app:coding}. You'll have more
than just the basics by the end.
\begin{center}
Let's begin.
\end{center}
\section{Base 2 and Binary Arithmetic}
\begin{problem}\label{prob:base_10_faila}
By hand (no computers!) compute the first 50 terms of this sequence with the initial condition $x_0 = 1/10$.
\[ x_{n+1} = \left\{ \begin{array}{ll} 2x_n, & x_n \in [0,\frac{1}{2}] \\ 2x_n - 1, & x_n \in (\frac{1}{2},1] \end{array} \right. \]
\end{problem}
\solution{
\[ x_n = \{1/10, 2/10, 4/10, 8/10, 6/10, 2/10, 4/10, 8/10, 6/10, \ldots \} \]
}
\begin{problem}\label{prob:base_10_failb}
Now use Excel and \ProgLang to do the computations. Do you get the same answers?
\end{problem}
\solution{
In both Excel and \ProgLang you lose accuracy at about the 40th iteration.
}
\begin{problem}\label{prob:base_10_failc}
It seems like the computer has failed you! What do you think happened on the computer
and why did it give you a different answer? What, do you suppose, is the cautionary tale hiding behind the scenes with this problem?
\end{problem}
\solution{
The simplest answer is that $1/10$ is not computer representable.
}
\begin{problem}
Now what happens with this problem when you start with $x_0 = 1/8$? Why does this new
initial condition work better?
\end{problem}
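To see the failure from the earlier problems programmatically, here is a minimal sketch (assuming \ProgLang is Python; the helper name \texttt{step} is ours) that runs the doubling map in exact rational arithmetic alongside double-precision floats:

```python
from fractions import Fraction

def step(x, half):
    # One iteration of the map: 2x on [0, 1/2], and 2x - 1 on (1/2, 1].
    return 2 * x if x <= half else 2 * x - 1

exact = Fraction(1, 10)   # exact rational arithmetic
approx = 0.1              # double-precision floating point
for n in range(60):
    exact = step(exact, Fraction(1, 2))
    approx = step(approx, 0.5)

# The exact orbit stays on the cycle 1/5, 2/5, 4/5, 3/5, ...
# while the float orbit has long since left it.
print(float(exact), approx)
```

The exact orbit cycles forever, just as the hand computation predicts; the float orbit does not, because each doubling shifts one more bit of the finite (rounded) binary expansion of $0.1$ out of the significand.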
A computer circuit knows two states: on and off. As such, anything saved in computer
memory is stored using base-2 numbers. This is called a binary number system. To fully
understand the binary number system it is worthwhile to pause and reflect on our base-10
number system for a few moments.
What do the digits in the number ``$735$'' really mean? The position of each digit tells
us something particular about the magnitude of the overall number. The number $735$ can
be represented as a sum of powers of $10$ as
\[ 735 = 700 + 30 + 5 = 7 \times 10^2 + 3 \times 10^1 + 5 \times 10^0 \]
and we can read this number as $7$ hundreds, $3$ tens, and $5$ ones.
As you can see, in a ``positional number system'' such as our base-10 system, the
position of each digit indicates the power of the base, and the value of the digit itself
tells you the multiplier of that power. This is in contrast to number systems like Roman
Numerals, where the symbols themselves give us the number and the meaning of the position
is somewhat flexible.
% In other words, the location of the digit (as read from the right-hand side of the number
% starting at 0) is the power on the base, 10. Similarly
The number ``48,329'' can therefore be interpreted as
\[ 48,329 = 40,000 + 8,000 + 300 + 20 + 9 = 4 \times 10^4 + 8 \times 10^3 + 3 \times 10^2 + 2
\times 10^1 + 9 \times 10^0: \]
four ten thousands, eight thousands, three hundreds, two tens, and nine ones.
Now let's switch to the number system used by computers: the binary number system. In a
binary number system the base is 2 so the only allowable digits are 0 and 1 (just like in
base-10 the allowable digits were 0 through 9). In binary (base-2), the number
``$101,101$'' can be interpreted as
\[ 101,101_2 = 1 \times 2^5 + 0 \times 2^4 + 1 \times 2^3 + 1 \times 2^2 + 0 \times 2^1 + 1
\times 2^0 \]
(where the subscript ``2'' indicates the base to the reader).
If we put this back into base 10, so that we can read it more comfortably, we get
\[ 101,101_2 = 32 + 0 + 8 + 4 + 0 + 1 = 45_{10}. \]
The reader should take note that the commas in the numbers are only to allow for greater
readability -- we can easily see groups of three digits and mentally keep track of what
we're reading.
\begin{problem}
Express the following binary numbers in base-10.
\begin{enumerate}
\item[(a)] $111_2$
\item[(b)] $10,101_2$
\item[(c)] $1,111,111,111_2$
\end{enumerate}
\end{problem}
\begin{problem}
Explain the joke:
\begin{center}
{\it There are 10 types of people. Those who understand binary and those who
don't.}
\end{center}
\end{problem}
\begin{problem}
Discussion: With your group discuss how you would convert a base-10 number into its
binary representation. Once you have a proposed method put it into action on the
number $237_{10}$ whose base-2 representation is $11,101,101_2$.
\end{problem}
\begin{problem}
Convert the following numbers from base 10 to base 2 or vice versa.
\begin{itemize}
\item Write $12_{10}$ in binary \solution{
\[ 12_{10} = 8+4 = 1\times 2^3 + 1 \times 2^2 + 0 \times 2^1 + 0 \times 2^0 = 1100_2\]
}
\item Write $11_{10}$ in binary
\solution{The number ``11'' in base 10 is
\[ 11_{10} = 8+2+1 = 1 \times 2^3 + 0\times 2^2 + 1 \times 2^1 + 1 \times
2^0 = 1011_{2} \]
}
\item Write $23_{10}$ in binary
\solution{
\[ 23_{10} = 16 + 4 + 2 + 1 = 2^4 + 2^2 + 2^1 + 2^0 = 10111_2. \]
}
\item Write $11_2$ in base 10
\solution{
The number ``$11_2$'' is
\[ 11_2 = 1 \times 2^1 + 1 \times 2^0 = 2+1 = 3_{10}. \]
}
\item What is $100101_2$ in base $10$? \solution{
\[ 100101_2 = 1 \times 2^5 + 0 \times 2^4 + 0 \times 2^3 + 1 \times 2^2 + 0 \times 2^1 + 1 \times 2^0 = 32 + 4 + 1 = 37_{10} \]
}
\end{itemize}
\end{problem}
\begin{problem}
Now that you have converted several base-10 numbers to base-2, summarize an efficient
technique to do the conversion.
\end{problem}
% \begin{problem}
% Converting base 2 numbers to base 10 is relatively easy -- expand in powers of 2 and add
% up the results. Converting base 10 numbers to base 2 is a little bit more
% interesting. Describe the technique that you used to convert the numbers $12$, $11$,
% and $23$ into base 2 in the previous problem.
% \end{problem}
%
\begin{example}
Convert the number 137 from base 10 to base 2. \\{\bf Solution:} One way to do the
conversion is to first look for the largest power of 2 less than or equal to your
number. In this case, $128=2^7$ is the largest power of 2 that is less than 137.
Then looking at the remainder, 9, look for the largest power of 2 that is less than
this remainder. Repeat until you have the number.
\begin{flalign*}
137_{10} &= 128 + 8 + 1 \\
&= 2^7 + 2^3 + 2^0 \\
&= 1 \times 2^7 + 0 \times 2^6 + 0 \times 2^5 + 0 \times 2^4 + 1 \times 2^3 + 0
\times 2^2 + 0 \times 2^1 + 1 \times 2^0 \\
&= 10001001_2
\end{flalign*}
\end{example}
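The greedy powers-of-2 method in the example can also be organized as repeated division by 2, reading off the remainders from last to first. A minimal Python sketch (the function name \texttt{to\_binary} is ours):

```python
def to_binary(n):
    # Convert a nonnegative integer to its binary digit string
    # by repeatedly dividing by 2 and collecting remainders.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # the remainder is the next bit
        n //= 2
    return "".join(reversed(digits))

print(to_binary(137))  # matches the worked example
```

Each remainder is the coefficient of the next power of 2, which is why the digits come out in reverse order and must be flipped at the end.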
Next we'll work with fractions and decimals. For example, let's take the base 10 number
$5.341_{10}$ and expand it out to get
\[ 5.341_{10} = 5 + \frac{3}{10} + \frac{4}{100} + \frac{1}{1000} = 5 \times 10^0 + 3 \times
10^{-1} + 4 \times 10^{-2} + 1 \times 10^{-3}. \]
The position to the right of the decimal point is the negative power of 10 for the given
position.
We can do a similar thing with binary decimals.
\begin{problem}
The base-2 number $1,101.01_2$ can be expanded in powers of 2. Fill in the question
marks below and observe the pattern in the powers.
\[ 1,101.01_2 = ? \times 2^3 + 1 \times 2^2 + 0 \times 2^1 +
? \times 2^0 + 0 \times 2^{?} + 1 \times 2^{-2}. \]
\end{problem}
\begin{problem}
Repeating digits in binary numbers are rather intriguing. The number
$0.\overline{0111}_2 = 0.01110111011101110111\ldots_2$ surely also has a decimal
representation. I'll get you started:
\begin{flalign*}
0.0_2 &= 0 \times 2^0 + 0 \times 2^{-1} = 0.0_{10} \\
0.01_2 &= 0.0_{10} + 1 \times 2^{-2} = 0.25_{10} \\
0.011_2 &= 0.25_{10} + 1 \times 2^{-3} = 0.25_{10} + 0.125_{10} = 0.375_{10} \\
0.0111_2 &= 0.375_{10} + 1 \times 2^{-4} = 0.4375_{10} \\
0.01110_2 &= 0.4375_{10} + 0 \times 2^{-5} = 0.4375_{10} \\
0.011101_2 &= 0.4375_{10} + 1 \times 2^{-6} = 0.453125_{10} \\
\vdots & \qquad \qquad \vdots \qquad \qquad \qquad \vdots
\end{flalign*}
We want to know what this series converges to in base 10. Work with your partners
to approximate the base-10 number.
\end{problem}
\begin{example}
Convert $11.01011_2$ to base 10. \\ {\bf Solution:}
\begin{flalign*}
11.01011_2 &= 2 + 1 + \frac{0}{2} + \frac{1}{4} + \frac{0}{8} + \frac{1}{16} +
\frac{1}{32} \\ &= 1 \times 2^1 + 1 \times 2^0 + 0 \times 2^{-1} + 1 \times 2^{-2} + 0
\times 2^{-3} + 1 \times 2^{-4} + 1 \times 2^{-5}\\ &= 3.34375_{10}.
\end{flalign*}
\end{example}
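The arithmetic in the example can be automated. Here is a hedged Python sketch (the function name \texttt{binary\_to\_decimal} is ours) that evaluates a binary string with an optional fractional part:

```python
def binary_to_decimal(s):
    # Evaluate a binary string such as "11.01011" as a base-10 number.
    whole, _, frac = s.partition(".")
    value = int(whole, 2) if whole else 0
    for i, bit in enumerate(frac, start=1):
        value += int(bit) * 2.0 ** -i  # digit i past the point contributes bit * 2^(-i)
    return value

print(binary_to_decimal("11.01011"))  # matches the worked example
```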
\begin{problem}
Convert the following numbers from base 10 to binary.
\begin{enumerate}
\item[(a)] What is $1/2$ in binary? \solution{$0.1$}
\item[(b)] What is $1/8$ in binary? \solution{$0.001$}
\item[(c)] What is $4.125$ in binary? \solution{$100.001$}
\item[(d)] What is $0.15625$ in binary? \solution{$0.00101$}
\end{enumerate}
\end{problem}
\begin{problem}
Convert the base 10 decimal $0.635$ to binary using the following steps. Explain why
each step gives the binary digit that it does.
\begin{enumerate}
\item[(a)] Multiply $0.635$ by 2. The whole number part of the result is the
first binary digit to the right of the decimal point. \solution{$0.635 \times
2 = 1.27$ so the binary number is $0.1?????$.}
\item[(b)] Take the result of the previous multiplication and ignore the digit to the
left of the decimal point. Multiply the remaining decimal by 2. The whole
number part is the second binary decimal digit. \solution{In this case, $0.27
\times 2$ gives a 0 in the whole numbers spot so the binary number is
$0.10?????$.}
\item[(c)] Repeat the previous step until you have nothing left, until a
repeating pattern has revealed itself, or until your precision is {\it close
enough}. \solution{$0.635_{10} \approx
0.10100010100011110101110000\dots_2$}
\end{enumerate}
\end{problem}
\begin{problem}
Based on the previous problem, write an algorithm that will convert base-10 decimals
(less than 1) to binary decimal expansions.
\end{problem}
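After you have written your own algorithm, you may want to compare it against this sketch, which is one possible Python translation of the multiply-by-2 steps above (the name \texttt{frac\_to\_binary} and the cutoff \texttt{max\_bits} are our own choices):

```python
def frac_to_binary(x, max_bits=24):
    # Binary expansion of a decimal in [0, 1) via repeated doubling:
    # the whole-number part of each product is the next binary digit.
    bits = []
    for _ in range(max_bits):
        x *= 2
        bit = int(x)           # whole-number part -> next digit
        bits.append(str(bit))
        x -= bit               # keep only the fractional part
        if x == 0:
            break              # terminating expansion
    return "0." + "".join(bits)

print(frac_to_binary(0.15625))  # a terminating expansion
```

The cutoff is needed because, as the next problem shows, many base-10 decimals have non-terminating binary expansions.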
\begin{problem}
Convert the base 10 fraction $1/10$ into binary. Use your solution to fully describe
what went wrong in Problems \ref{prob:base_10_faila} - \ref{prob:base_10_failc}.
\end{problem}
\solution{
\[ \frac{1}{10} = 0.0001100110011001100110011\dots \]
The fraction $\frac{1}{10}$ is a repeating decimal in binary and hence is not machine
representable!
}
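You can ask Python directly for the exact value that the floating-point literal \texttt{0.1} actually stores, which confirms that $1/10$ is not machine representable (a short sketch using only the standard library):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal/Fraction of a float recover the EXACT stored binary value.
print(Decimal(0.1))                        # the exact decimal value of the stored double
print(Fraction(0.1))                       # the same value as an exact fraction
print(Fraction(0.1) == Fraction(1, 10))    # the stored value is not 1/10
print(Fraction(0.125) == Fraction(1, 8))   # but 1/8 is stored exactly
```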
\newpage\section{Floating Point Arithmetic}
Everything stored in the memory of a computer is a number, but how does a computer
actually store a number? More specifically, since computers only have finite memory we
would really like to know the full range of numbers that it is possible to store in a
computer.
\begin{problem}
Consider the number $x = -123.15625$ (in base 10). As we've seen this number can be
converted into binary. Indeed
\[ x = -123.15625_{10} = -1111011.00101_2 \]
(you should check this).
\begin{enumerate}
\item[(a)] If a computer needs to store this number then first they put in the
binary version of scientific notation. In this case we write
\[ x = -1. \underline{\hspace{1in}} \times 2^{\underline{\hspace{0.25in}}} \]
\solution{
\[ -1.11101100101 \times 2^{6} \]
}
\item[(b)] Based on the fact that every binary number, other than 0, can be
written in this way, what three things do you suppose a computer needs to
store for any given number? \solution{The sign, the digits after the decimal
place, and the exponent}
\item[(c)] Using your answer to part (b), what would a computer need to store for
the binary number $x=10001001.1100110011_2$?
\end{enumerate}
\end{problem}
For any base-2 number $x$ we can write
\[ x = (-1)^{s} \times (1+ m) \times 2^E \]
where $s \in \{0,1\}$ is called the {\it sign bit} and $m$ is a binary number such that $0
\le m < 1$.
\begin{definition}
For a number $x = (-1)^{s} \times (1+m) \times 2^E$ stored in a computer, the number $m$
is called the {\bf mantissa} or the {\bf significand}, $s$ is known as the sign bit,
and $E$ is known as the exponent.
\end{definition}
\begin{example}
What are the mantissa, sign bit, and exponent for the numbers $7_{10}$, $-7_{10}$,
and $(0.1)_{10}$? \\{\bf Solution:}
\begin{itemize}
\item For the number $7_{10}=111_2 = 1.11 \times 2^2$ we have $s=0$, $m=0.11$ and $E=2$.
\item For the number $-7_{10}=-111_2 = -1.11 \times 2^2$ we have $s=1$, $m=0.11$ and $E=2$.
\item For the number $\frac{1}{10} = 0.000110011001100\cdots = 1.100110011\cdots \times 2^{-4}$
we have $s=0$, $m=0.100110011\cdots$, and $E = -4$.
\end{itemize}
\end{example}
In the last part of the previous example we saw that the number $(0.1)_{10}$ is actually a
repeating decimal in base-2. This means that in order to completely represent the number
$(0.1)_{10}$ in base-2 we need infinitely many decimal places. Obviously that can't
happen since we are dealing with computers with finite memory. Over the course of the
past several decades there have been many systems developed to properly store numbers.
The IEEE standard that we now use is the accumulated effort of many computer scientists,
much trial and error, and deep scientific research. We now have three standard precisions
for storing numbers on a computer: single, double, and extended precision. The double
precision standard is what most of our modern computers use.
\begin{definition}[Computer Precision Standards]
There are three standard precisions for storing numbers in a computer.
\begin{itemize}
\item A {\bf single-precision} number consists of 32 bits, with 1 bit for the
sign, 8 for the exponent, and 23 for the significand.
\item A {\bf double-precision} number consists of 64 bits with 1 bit for the sign,
11 for the exponent, and 52 for the significand.
\item An {\bf extended-precision} number consists of 80 bits, with 1 bit for the
sign, 15 for the exponent, and 64 for the significand.
\end{itemize}
\end{definition}
\begin{definition}
{\bf Machine precision} is the gap between the number 1 and the next larger floating
point number. Often it is represented by the symbol $\epsilon$. To clarify, the number 1 can
always be stored in a computer system exactly and if $\epsilon$ is machine
precision for that computer then $1+\epsilon$ is the next largest number that can
be stored with that machine.
\end{definition}
For all practical purposes the computer cannot tell the difference between two numbers if
the difference is smaller than machine precision. This is of the utmost importance when you
want to check that something is ``zero'' since a computer just cannot know the difference
between $0$ and $\epsilon$.
\ifnum\Python=0 As a side note: You can determine the working precision in MATLAB by
typing ``eps'' in the command line.\fi
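In Python the analogous value is available from the standard library. A minimal sketch (assuming \ProgLang is Python):

```python
import sys

# Machine epsilon: the gap between 1.0 and the next larger double.
eps = sys.float_info.epsilon
print(eps)                    # 2**-52 for IEEE double precision
print(1.0 + eps > 1.0)        # the gap is detectable...
print(1.0 + eps / 2 == 1.0)   # ...but anything smaller rounds back to 1.0
```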
\begin{problem}
To make all of these ideas concrete let's play with a small computer system where each
number is stored in the following format:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$s$ & E & $b_1b_2b_3$ \\ \hline
\end{tabular}
\end{center}
The first entry is a bit for the sign ($0=+$ and $1=-$). The second entry, $E$, is for the
exponent, and we'll assume in this example that the exponent can be $0$, $1$, or $-1$. The
three bits on the right represent the significand of the number. Hence, every number in
this number system takes the form
\[ (-1)^s \times (1+ 0.b_1b_2b_3) \times 2^{E} \]
\begin{itemize}
\item What is the smallest positive number that can be represented in this
form?\\\solution{$1.000\times 2^{-1}= 0.1_2 = 1/2$}
\item What is the largest positive number that can be represented in this
form?\\\solution{$1.111\times 2^1 = 11.11_2 = 2+1+(1/2) + (1/4) = 3.75$}
\item What is the machine precision in this number system? \\\solution{Machine
precision is the gap between 1 and the next largest number. In this number
system, the next largest number is $1.001 \times 2^0$ so the gap is $0.001_2$
which means that $\epsilon = 2^{-3} = 1/8$}
\item What would change if we allowed $E \in \{-2,-1,0,1,2\}$?\\\solution{The
smallest number would be $1.000\times 2^{-2} = 0.01_2 = 1/4$ and the largest
would be $1.111 \times 2^2 = 111.1_2 = 4+2+1+(1/2) = 7.5$. Machine precision
would remain the same.}
\end{itemize}
\end{problem}
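As a check on your answers, the toy system is small enough to enumerate by brute force. A Python sketch (the variable names are ours):

```python
from itertools import product

# All positive numbers (1 + 0.b1b2b3) * 2^E in the toy system, E in {-1, 0, 1}.
values = sorted(
    (1 + b1 / 2 + b2 / 4 + b3 / 8) * 2.0 ** E
    for (b1, b2, b3) in product((0, 1), repeat=3)
    for E in (-1, 0, 1)
)
print(min(values), max(values))  # smallest and largest representable numbers
ones = [v for v in values if v > 1.0]
print(min(ones) - 1.0)           # the gap just above 1: machine precision
```

Every value in this system is a dyadic rational, so the floating-point arithmetic above is exact and the enumeration is trustworthy.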
\begin{problem}
What are the largest and smallest numbers that can be stored in single and double
precision?
\end{problem}
\begin{problem}
What is machine epsilon for single and double precision?
\end{problem}
\begin{problem}
A typical computer number:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
0 & E=4 & 10011001100000000000000 \\ \hline
\end{tabular}
\end{center}
What is this number? Is it stored in single or double precision?
\end{problem}
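If you want to inspect the bits a real machine stores for a double, the standard-library \texttt{struct} module can unpack them. A sketch (the helper name \texttt{double\_bits} is ours):

```python
import struct

def double_bits(x):
    # Pack x as an IEEE 754 double, then read back its 64 raw bits:
    # 1 sign bit, 11 exponent bits (biased by 1023), 52 significand bits.
    (n,) = struct.unpack(">Q", struct.pack(">d", x))
    b = f"{n:064b}"
    return b[0], b[1:12], b[12:]

sign, exponent, mantissa = double_bits(-7.0)
print(sign)                     # 1, since the number is negative
print(int(exponent, 2) - 1023)  # unbiased exponent: 2, since -7 = -1.11_2 x 2^2
print(mantissa[:6])             # stored significand begins 110000...
```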
% \solution{
% Remember that the ``$1.$'' is actually in front of the binary {\it significand}. Hence,
% this number is
% \[ x = +1.100110011 \times 2^4 = + 11001.10011 = 1+8+16+(1/2)+(1/16)+(1/32) = 25.59375\]
% }
\begin{problem}
Explain the behavior of the sequence from the first problem in these notes using what
you know about how computers store numbers in double precision.
\[ x_{n+1} = \left\{ \begin{array}{ll} 2x_n, & x_n \in [0,\frac{1}{2}] \\ 2x_n - 1, & x_n \in
(\frac{1}{2},1] \end{array} \right. \quad \text{with} \quad x_0 = \frac{1}{10} \]
In particular, now that you know about how numbers are stored in a computer, how long
do you expect it to take until the truncation error creeps into the computation?
\end{problem}
Much more can be said about floating point numbers such as how we store infinity, how we store
NaN, and how we store 0. The
\href{https://en.wikipedia.org/wiki/Floating-point_arithmetic}{Wikipedia page for floating
point arithmetic} might be of interest for the curious reader. It is beyond the scope of
this class and this book to go into all of those details here.
\newpage\section{Polynomial Approximation and The Taylor Series}\label{sec:Taylor}
Now we turn our attention in this introductory chapter to something more mathematical.
The Taylor Series is a mathematical tool that is widely used in approximation theory, and
since a computer can only make approximations we need ways to approximate functions so
that a computer can understand them. The Taylor Series will be just the right tool for our
purposes.
Keep the following basic idea in your mind as you work through this section:
\begin{quote}
{\it The
Taylor Series is a tool for expressing complicated functions as infinite polynomials.
Polynomials are great for computers since they only involve addition, subtraction,
multiplication, and powers -- all things that computers are good at.}
\end{quote}
Consider the function $f(x) = e^x$. Euler's number
\[ e = 2.718281828459045\cdots \]
is irrational and
impossible for a computer to represent directly. How, do you suppose, does
a computer actually {\it understand} a function like $e^x$ (or any other
transcendental function for that matter)? \\
Answer: Polynomials! \\
Polynomials are some of the simplest types of functions since they involve very basic
mathematical operations: really just addition and multiplication (since subtraction
and division are just {\it special} addition and multiplication).
% \subsection{Polynomial Approximation}
Let's get a feel for how we approximate functions like $f(x) = e^x$ with a simple
exercise. In the following exercise we will build a polynomial function with certain
properties to {\it match} the exponential function.
\begin{problem}\label{prob:taylor_intro}
Let $f(x) = e^x$.
\begin{enumerate}
\item[(a)] Find a linear function of the form $g(x) = a_0 + a_1 x$ such that $g(0)
= f(0)$ and $g'(0) = f'(0)$. Plot both $f(x)$ and $g(x)$ on the same axes. \solution{
We know that $f(x) = e^x = f'(x)$ so $f(0) = f'(0) = 1$. Hence $a_0 = 1$
and $a_1 = 1$. Therefore the function $g(x) = 1+x$ matches the function
$f(x) = e^x$ in both function value and in first derivative at $x=0$.
}
\item[(b)] Find a quadratic function of the form $g(x) = a_0 + a_1 x + a_2 x^2$
such that $g(0) = f(0)$, $g'(0) = f'(0)$, and $g''(0) = f''(0)$. Plot both $f(x)$ and $g(x)$ on the same axes. \solution{
Again we see that $a_0 = a_1 = 1$ from the previous part. For the second
derivative we see that $g''(0) = 2a_2$ and $f''(0) = 1$ so $a_2 = 1/2$.
Therefore, $g(x) = 1 + x + \frac{1}{2} x^2$.
}
\item[(c)] Find a polynomial of order $n$ that matches the function $f(x) = e^x$
such that $g^{(k)}(0) = f^{(k)}(0)$ for all $k \le n$. Plot both $f(x)$ and
$g(x)$ on the same axes for several values of $n$. What do you observe about
the function $f(x)$ and the function $g(x)$? \solution{
\[ g(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots
+ \frac{x^n}{n!}. \]
}
\end{enumerate}
\end{problem}
\begin{problem}\label{prob:taylor_intro2}
Repeat Problem \ref{prob:taylor_intro} with the function $f(x) = \sin(x)$.
\end{problem}
\solution{
\[ g(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots . \]
}
\begin{problem}\label{prob:taylor_intro3}
Repeat Problem \ref{prob:taylor_intro} with the function $f(x) = \cos(x)$.
\end{problem}
\solution{
\[ g(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots . \]
}
\begin{problem}\label{prob:taylor_graphically}
Now let's graphically explore the results that you got from Problems \ref{prob:taylor_intro},
\ref{prob:taylor_intro2}, and \ref{prob:taylor_intro3}. You should have found the
following results (check your work from the previous problems):
\begin{itemize}
\item The exponential function:
\begin{flalign*}
f(x) &= e^x \approx 1+x \qquad \text{(First order approximation)} \\
f(x) &= e^x \approx 1+x+\frac{x^2}{2} \qquad \text{(Second order approximation)} \\
f(x) &= e^x \approx 1+x+\frac{x^2}{2} + \frac{x^3}{6} \qquad \text{(Third order
approximation)} \\
f(x) &= e^x \approx 1+x +\frac{x^2}{2} + \frac{x^3}{6} + \cdots +
\frac{x^n}{n!} \qquad \text{($n^{th}$ order approximation)} \\
\end{flalign*}
\item The sine function
\begin{flalign*}
f(x) &= \sin(x) \approx x \qquad \text{(First order approximation)} \\
f(x) &= \sin(x) \approx x - \frac{x^3}{6} \qquad \text{(Third order
approximation)} \\
f(x) &= \sin(x) \approx x - \frac{x^3}{6} + \cdots +
\frac{(-1)^{n} x^{2n+1}}{(2n+1)!} \qquad \text{($(2n+1)^{th}$ order approximation)} \\
\end{flalign*}
\item The cosine function
\begin{flalign*}
f(x) &= \cos(x) \approx 1 - \frac{x^2}{2} \qquad \text{(Second order approximation)} \\
f(x) &= \cos(x) \approx 1 - \frac{x^2}{2} + \frac{x^4}{24} \qquad \text{(Fourth order
approximation)} \\
f(x) &= \cos(x) \approx 1 - \frac{x^2}{2} + \cdots +
\frac{(-1)^{n} x^{2n}}{(2n)!} \qquad \text{($(2n)^{th}$ order approximation)} \\
\end{flalign*}
\end{itemize}
\begin{enumerate}
\item[(a)] Using any convenient graphing tool (e.g. Desmos, a graphing calculator,
\ProgLang, etc) make a plot of the exponential function along with the first,
second, and third order approximations. What do you notice and what do you
wonder?
\item[(b)] Repeat part (a) with the sine function.
\item[(c)] Repeat part (a) with the cosine function.
\item[(d)] Graphically test the following conjecture:
\begin{quote}
{\bf Conjecture: } {\it If I add more and more terms to the exponential, sine, or cosine
polynomial approximation then the resulting polynomial will match the
actual function better and better over a larger domain.}
\end{quote}
Based on the graphs that you made do you think that this conjecture is true or
false?
\end{enumerate}
\end{problem}
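The graphical comparison above can also be done numerically. This hedged sketch (the function name \texttt{taylor\_exp} is ours) evaluates partial sums of the exponential series and compares them with \texttt{math.exp}:

```python
import math

def taylor_exp(x, n):
    # Partial sum of the Taylor series for e^x centered at a = 0, through degree n.
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Higher-degree polynomials track math.exp more and more closely near the center.
for n in (1, 2, 5, 10):
    print(n, taylor_exp(1.0, n), math.exp(1.0))
```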
\begin{problem}\label{prob:taylor_graphically2}
In Problem \ref{prob:taylor_graphically} you should have confirmed that the Taylor
series for the exponential, sine, and cosine functions appear to get better and better
on larger and larger domains as the degree of the polynomial increases. Now we'll
test the conjecture in general:
\begin{quote}
{\bf Conjecture: } {\it The Taylor polynomial will always be a better approximation over a larger
domain when we add more terms.}
\end{quote}
Below you will find several functions along with their polynomial approximations.
Test the above conjecture graphically on these approximations.
\begin{enumerate}
\item[(a)] $\displaystyle f(x) = \frac{1}{1-x} \approx 1 + x + x^2 + x^3 + x^4 +
    \cdots + x^n.$
\item[(b)] $\displaystyle f(x) = \ln(1+x) \approx x - \frac{x^2}{2} +
    \frac{x^3}{3} - \frac{x^4}{4} + \cdots + \frac{(-1)^{n+1} x^n}{n}.$
\item[(c)] $\displaystyle f(x) = \arctan(x) \approx x - \frac{x^3}{3} +
    \frac{x^5}{5} - \frac{x^7}{7} + \cdots + \frac{(-1)^n x^{2n+1} }{2n+1}.$
\end{enumerate}
\end{problem}
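A numerical experiment makes part (a) vivid. In this sketch (the function name \texttt{geom\_partial} is ours), the partial sums of $1 + x + x^2 + \cdots$ converge to $\frac{1}{1-x}$ inside $|x| < 1$ but run away from it outside:

```python
# Partial sums of 1 + x + x^2 + ... + x^n, the approximation in part (a).
def geom_partial(x, n):
    return sum(x ** k for k in range(n + 1))

print(geom_partial(0.5, 50), 1 / (1 - 0.5))   # inside |x| < 1: agrees with f
print(geom_partial(2.0, 10), 1 / (1 - 2.0))   # outside: partial sums blow up
```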
Now we'll formally state the actual definition of a Taylor Series.
\begin{definition}[Taylor Series]\label{def:taylor}
If $f$ is
infinitely differentiable (has infinitely many derivatives) at $x=a$ then, wherever the
series converges to $f$, we can express $f$ as an infinite sum of power functions
\[ f(x) = \sum_{k=0}^\infty \frac{f^{(k)}(a)}{k!}(x-a)^k \]
in some neighborhood of $x=a$.
\end{definition}
It is sometimes easier to look at the definition of the Taylor series without the
summation notation.
\[ f(x) = f(a) + \frac{f'(a)}{1!}(x-a)^1 + \frac{f''(a)}{2!}(x-a)^2 +
\frac{f'''(a)}{3!}(x-a)^3 + \frac{f^{(4)}(a)}{4!}(x-a)^4 + \cdots. \]
\begin{example}\label{ex:taylor_exp}
Verify that the Taylor series for $f(x) = e^x$ is exactly what we found in Problems
\ref{prob:taylor_intro} and \ref{prob:taylor_graphically}. \\
{\bf Solution:}
First note that $f(x) = e^x$ has the beautiful property that $f^{(n)}(x) = e^x$. That
is, every derivative of the exponential function is just the exponential function
again. If we take $a = 0$ in the definition of the Taylor Series then we see that
\begin{flalign*}
e^x &= f(0) + \frac{f'(0)}{1!}(x-0)^1 + \frac{f''(0)}{2!}(x-0)^2 +
\frac{f'''(0)}{3!}(x-0)^3 + \frac{f^{(4)}(0)}{4!}(x-0)^4 + \cdots \\
&= e^0 + e^0 x + \frac{e^0}{2} x^2 + \frac{e^0}{3!}x^3 + \frac{e^0}{4!}x^4 +
\cdots \\
&= 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots.
\end{flalign*}
\end{example}
\begin{problem}
We have encountered 6 different Taylor series thus far:
\[ e^x, \quad \sin(x), \quad \cos(x), \quad \frac{1}{1-x}, \quad \ln(1+x), \quad
\text{and} \quad \arctan(x). \]
Using the definition of the Taylor series, verify each of the formulas that we have
used. You can see a verification for $e^x$ in Example \ref{ex:taylor_exp}.
\end{problem}
One should keep in mind that the Taylor Series for a function may not make sense on the
whole real line even if the function's domain is the entire real line. You should have
taken note of this in problem \ref{prob:taylor_graphically2} (if you didn't then go back
to problem \ref{prob:taylor_graphically2}). The domain on which the
Taylor series makes sense is called the {\bf domain of convergence}. We can also talk
about a Taylor series having a {\bf radius of convergence} where the radius is the
distance from the center (the ``$a$'' in Definition \ref{def:taylor}) to the first point where
the Taylor Series does not converge.
To determine the
radius of convergence for a Taylor Series we recall the Ratio Test from Calculus.
\begin{thm}[The Ratio Test from Calculus]\label{thm:ratio_test1}
Let $\sum_{n=0}^\infty a_n$ be an infinite series. Suppose that
\[ \lim_{n \to \infty} \frac{|a_{n+1}|}{|a_n|} = r. \]
\begin{enumerate}
\item[(a)] If $0 < r < 1$ then the infinite series converges.
\item[(b)] If $r > 1$ then the infinite series diverges.
\item[(c)] If $r=1$ then the ratio test is inconclusive.
\end{enumerate}
\end{thm}
For the purposes of Taylor series we need a more robust statement of the Ratio Test.
Notice that in Theorem \ref{thm:ratio_test1} we are only considering an infinite series of
numbers, not an infinite series of functions. The following corollary gives a more
thorough statement of the Ratio Test as it applies to Taylor Series.
\begin{cor}[The Ratio Test for Taylor Series]
Let $f(x)$ be given as a Taylor Series
\[ f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n. \]
The series converges for all $x$ such that
\[ \lim_{n \to \infty} \left| \frac{f^{(n+1)}(a) (x-a)^{n+1}}{(n+1)!} \right| \Big/
\left| \frac{f^{(n)}(a) (x-a)^n}{n!} \right| < 1. \]
Simplifying the fractions gives
\[ \lim_{n \to \infty} \frac{|f^{(n+1)}(a) (x-a)|}{|f^{(n)}(a) (n+1)| } <1. \]
The values of $x$ that satisfy this inequality are called the {\it domain of convergence} for
the Taylor Series.
\end{cor}
\begin{example}\label{ex:Taylor1}
Find the domain of convergence of the Taylor Series for the exponential function $f(x)
= e^x$. \\{\bf Solution:} \\
For simplicity we will take the Taylor series to be centered at $a=0$. We know that
\[ f(x) = e^x = \sum_{k=0}^\infty \frac{x^k}{k!} \]
so we need to determine the following limit.
\[ \lim_{k \to \infty} \left| \frac{x^{k+1}}{(k+1)!} \frac{k!}{x^k} \right|. \]
Simplifying the fractions inside the absolute values gives
\[ \lim_{k \to \infty} \left| \frac{x}{k+1} \right|, \]
and for every $x \in \mathbb{R}$ we have
\[ \lim_{k \to \infty} \left| \frac{x}{k+1} \right| = 0 < 1. \]
Therefore, for every value of $x \in \mathbb{R}$ we know that the Taylor Series for
$e^x$ converges to the actual function. Hence the ``$=$'' sign that we used in the
beginning of this solution is actually valid for every $x$. Be careful since this is
not true for all functions.
\end{example}
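We can also watch the ratio test play out numerically. In this Python sketch (the function name is our own) the ratio of successive terms of the series for $e^x$ shrinks toward zero even for a large value of $x$, which is exactly why the series converges everywhere:

```python
import math

def term(x, k):
    """k-th term of the Taylor series for e^x centered at a = 0."""
    return x**k / math.factorial(k)

# |a_{k+1} / a_k| = |x| / (k+1), which tends to 0 for any fixed x,
# so the ratio test guarantees convergence for every real x.
x = 10.0
for k in (5, 20, 100):
    print(k, abs(term(x, k + 1) / term(x, k)))
```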
\begin{example}\label{ex:Taylor2}
Find the Taylor series centered at $a=0$ for the function $f(x) = \frac{1}{1-x}$ and determine the
domain of convergence. \\{\bf Solution:} \\
Recall from the definition of the Taylor series that
\[ f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n. \]
Hence we need to find a pattern in the sequence of derivatives $f^{(n)}(0)$ first.
\begin{flalign*}
f(x) &= (1-x)^{-1} \quad \implies \quad f(0) = 1 \\
f'(x) &= (1-x)^{-2} \quad \implies \quad f'(0) = 1 \\
f''(x) &= 2(1-x)^{-3} \quad \implies \quad f''(0) = 2 \\
f'''(x) &= 6(1-x)^{-4} \quad \implies \quad f'''(0) = 6 = 3! \\
f^{(4)}(x) &= 24(1-x)^{-5} \quad \implies \quad f^{(4)}(0) = 24 = 4! \\
f^{(5)}(x) &= 120(1-x)^{-6} \quad \implies \quad f^{(5)}(0) = 120 = 5! \\
& \vdots \\
f^{(n)}(x) &= n! \, (1-x)^{-(n+1)} \quad \implies \quad f^{(n)}(0) = n!
\end{flalign*}
Therefore the Taylor series for $f(x) = \frac{1}{1-x}$ is
\[ \frac{1}{1-x} = \sum_{n=0}^\infty \frac{n!}{n!} x^n \]
which can clearly simplify to
\[ \frac{1}{1-x} = \sum_{n=0}^\infty x^n = 1 + x + x^2 + x^3 + x^4 + \cdots \]
(a beautifully simple Taylor series for kind of a complicated function).
To find the domain of convergence we note that the absolute value of the ratio of
successive terms in the series is
\[ \left| \frac{x^{n+1}}{x^n} \right| = \left| x \right|. \]
Hence
\[ \lim_{n\to \infty} \left| \frac{x^{n+1}}{x^n} \right| = |x| \]
and we see from the ratio test that $|x|<1$ is the domain of convergence. In other
words, the Taylor series only makes sense for $x$ values on the interval $-1 < x < 1$.
A closer look at this function reveals a deeper insight: the radius of convergence is
the distance from the center of the series, $a=0$, to the nearest vertical asymptote
at $x=1$.
\end{example}
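The convergence behavior in this example is easy to see numerically. This Python sketch (a stand-in for \ProgLang; the function name is our own) sums the series $1 + x + x^2 + \cdots$ at a point inside the interval $(-1,1)$ and at a point outside it:

```python
def geometric_partial_sum(x, n_terms):
    """Partial sum 1 + x + x^2 + ... + x^(n_terms-1) of the Taylor
    series for 1/(1-x) centered at a = 0."""
    return sum(x**k for k in range(n_terms))

# Inside the domain of convergence the partial sums settle at 1/(1-x).
print(geometric_partial_sum(0.5, 50))   # close to 1/(1 - 0.5) = 2
# Outside the domain of convergence the partial sums blow up.
print(geometric_partial_sum(1.5, 50))   # enormous, and still growing
```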
\begin{problem}
Write \ProgLang code to show successive approximations of the function $f(x) = e^x$ on
the domain $-1 < x < 1$ using a Taylor series centered at $a=0$. Write your code so
that it animates through the approximations. Once your code is working, modify it to
do the same for $f(x) = \sin(x)$ centered at $a=0$ and for $f(x) = \cos(x)$ centered
at $a=0$.
\end{problem}
\begin{problem}
Consider the function $f(x) = \frac{1}{1+x^2}$.
\begin{enumerate}
\item[(a)] Build the Taylor Series for this function centered at $a=0$.
\item[(b)] Using the previous example as a guide find the domain of convergence
for the Taylor Series.
\item[(c)] Modify your \ProgLang code from the previous problem to demonstrate the
fact that $f(x) = \frac{1}{1+x^2}$ does not have an infinite domain of
convergence.
\end{enumerate}
\end{problem}
\section{Truncation Error with Taylor Series}
The great thing about the Taylor Series is that it allows for the
approximation of smooth functions as polynomials -- and polynomials are easily dealt with on
a computer since they involve only addition, subtraction, multiplication, and
integer powers. The downside is that the sum is infinite. Hence, every time we use a Taylor
series on a computer we are actually using a truncated Taylor Series where we
only take a finite number of terms.
\begin{definition}[Truncation Error]
If $f(x) = \sum_{n=0}^\infty c_n (x-a)^n$ is the Taylor Series expansion for $f(x)$ at
the point $x=a$ then $f_N(x) = \sum_{n=0}^N c_n (x-a)^n$ is an approximation of $f(x)$
using only the terms of the Taylor Series through order $N$. The function $f_N(x)$
agrees with $f(x)$ only in the limit as $N \to \infty$. The {\bf truncation error} at the
point $x=x_0$ made when truncating the infinite series is the absolute difference
between $f(x_0)$ and $f_N(x_0)$
\[ T_{x_0} = | f(x_0) - f_N(x_0)|. \]
\end{definition}
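As an illustration of this definition, the following Python sketch (the function name is our own) measures the truncation error of the exponential series at the point $x_0 = 0.5$ for increasing values of $N$:

```python
import math

def truncation_error(x0, N):
    """T_{x0} = |f(x0) - f_N(x0)| where f(x) = e^x and f_N is the
    Taylor series truncated after the x^N / N! term."""
    f_N = sum(x0**n / math.factorial(n) for n in range(N + 1))
    return abs(math.exp(x0) - f_N)

# The truncation error shrinks rapidly as more terms are kept.
for N in range(6):
    print(N, truncation_error(0.5, N))
```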
\begin{problem}
What is the absolute truncation error when calculating $e^1$ with the Taylor Series
\[ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots \]
using the following values for $n$? Recall that $e^1 \approx 2.718281828459045$.
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$n$ & Value from the Taylor Series & Absolute Error \\ \hline \hline
0 & 1 & $|e^1 - 1| \approx 1.718281828459045\ldots$ \\ \hline
1 & 1+1=2 & $|e^1 - 2| \approx 0.718281828459045\ldots$ \\ \hline
2 & $1+1+1^2/2=2.5$ & $|e^1 - 2.5| \approx 0.218281828459045\ldots$ \\ \hline
3 & $1+1+1^2/2+1^3/6=2.\bar{6}$ & $|e^1 - 2.\bar{6}| \approx$ \\ \hline
4 & & \\ \hline
5 & & \\ \hline
6 & & \\ \hline
\end{tabular}
\end{center}
\end{problem}
\begin{problem}
Based on your answers to the previous problem, how many terms do you need in the
Taylor Series for $e^x$ to approximate $e$ to two decimal places?
\end{problem}
Remember that when using a computer we cannot ever use an
infinite series. Thus, every time we use a Taylor Series approximation we will naturally
be using a truncated version of the infinite series. This means that we need a good tool
to measure the error that we make when doing so.
To set the stage, let's say that we truncate a Taylor series at the $n^{th}$ term.
The infinitely many terms from the $(n+1)^{st}$ on constitute a remainder, and we can
write the series as
\[ f(x) = P_n(x) + R_n(x) \]
where $P_n(x)$ is the $n^{th}$ order polynomial built from the $0^{th}$ through the
$n^{th}$ terms of the Taylor Series. The remainder is the subject of the next theorem.
\begin{thm}[Taylor's Theorem]\label{thm:taylors_theorem}
Let $f$, $f'$, $f''$, \dots, $f^{(n)}$ be continuous {\it near} $a$ and let $f^{(n+1)}(x)$
exist for all $x$ {\it near} $x=a$. Then there is a number $\xi$ between $x$ and $a$
such that
\begin{flalign}
f(x) = f(a) + f'(a) (x-a) + \frac{f''(a)}{2}(x-a)^2 +
\frac{f'''(a)}{3!}(x-a)^3 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + R_n(x)
\label{eqn:taylor}
\end{flalign}
where the remainder function $R_n(x)$ is given as
\begin{flalign}
R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x-a)^{n+1}.
\label{eqn:taylor_remainder}
\end{flalign}
\label{thm:taylor}
\end{thm}
Often we use Taylor series that are centered at $a=0$, so for simplicity we
restate Taylor's Theorem here with $a=0$.
\begin{cor}[Taylor's Theorem at $a=0$]
Let $f$, $f'$, $f''$, \dots, $f^{(n)}$ be continuous {\it near} $0$ and let $f^{(n+1)}(x)$
exist for all $x$ {\it near} $x=0$. Then there is a number $\xi$ between $x$ and $0$
such that
\begin{flalign}
f(x) = f(0) + f'(0) x + \frac{f''(0)}{2!}x^2 +
\frac{f'''(0)}{3!}x^3 + \cdots + \frac{f^{(n)}(0)}{n!}x^n + R_n(x)
\end{flalign}
where the remainder function $R_n(x)$ is given as
\begin{flalign}
R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} x^{n+1}
\label{cor:taylor_remainder}
\end{flalign}
\label{cor:taylor}
\end{cor}
\begin{example}
The first three terms of the Taylor series for $f(x) = e^x$ centered at $x=0$ are \[
f(x) = e^x \approx 1 + x + \frac{x^2}{2}. \] Use Taylor's Theorem to bound the
error in this approximation at $x = 1$. \\ {\bf Solution:}
This is the second order ($n=2$) Taylor polynomial, so Taylor's Theorem gives us a
number $\xi$ with $0 < \xi < 1$ such that the remainder in the Taylor series
is \[ R_2(x) = \frac{f^{(3)}(\xi)}{3!}(x-0)^3 = \frac{e^\xi}{3!}x^3.
\] Therefore $R_2(1) \le \frac{e^1}{3!} \cdot 1^3 = \frac{e}{6}
\approx 0.45$. In Figure \ref{fig:taylor_thm_exp} we see that the
error is indeed less than this. Indeed, $f(1) =
2.718281828459045\cdots$ while the approximation gives $2.5$, so the actual error is about
$0.218 < 0.45$.
If we extend this example a bit we can see that both the absolute error and the
error bound from Taylor's Theorem decay rapidly, with the bound overestimating the
true error by only a modest factor.
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$n$ & Value from the Taylor Series & Absolute Error & Approximate Error \\ \hline \hline
0 & 1 & $1.71828$ & $2.71828$ \\ \hline
1 & 2 & $0.71828$ & $1.35914$ \\ \hline
2 & $2.5$ & $0.21828$ & $0.45305$\\ \hline
3 & $2.66666$ & $0.05162$ & $0.11326$ \\ \hline
4 & $2.70833$ & $0.00995$ & $0.02265$ \\ \hline
5 & $2.71667$ & $0.00162$ & $0.00378$ \\ \hline
6 & $2.71806$ & $0.00023$ & $0.00054$ \\ \hline
\end{tabular}
\end{center}
\end{example}
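The table above can be reproduced with a few lines of code. In this Python sketch (function names are our own), the ``approximate error'' column is the bound $e/(n+1)!$ from Taylor's Theorem, which always sits above the actual error:

```python
import math

def taylor_value(n):
    """n-th order Taylor polynomial for e^x evaluated at x = 1."""
    return sum(1.0 / math.factorial(k) for k in range(n + 1))

def error_bound(n):
    """Taylor's Theorem bound: R_n(1) = e^xi/(n+1)! <= e/(n+1)! since xi < 1."""
    return math.e / math.factorial(n + 1)

for n in range(7):
    actual = abs(math.e - taylor_value(n))
    print(n, taylor_value(n), actual, error_bound(n))
```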
\begin{example}
The Taylor series for $g(x) = \frac{1}{1-x}$ is given in Example \ref{ex:Taylor2}.
What is the maximum error that can occur when approximating $g(c)$ for $c \in (-1,0]$
with a fifth-order polynomial? \\{\bf Solution:}
We know that $g(x) = \frac{1}{1-x} \approx 1+x+x^2 + x^3 + x^4
+ x^5$ for $x \in (-1,1)$, and from Taylor's remainder theorem we know
that $R_5(x) = g^{(6)}(\xi) x^{5+1} / (5+1)!$ for some $\xi$ between $0$ and $x$.
The sixth derivative of $g$ is $g^{(6)}(x) = 6! (1-x)^{-7}$, and the largest
that $g^{(6)}(\xi)$ can be for $\xi \in (-1,0]$ is $6!$ (attained at $\xi = 0$).
Therefore the remainder satisfies $|R_5(x)| \le x^6$. Finally, the largest that
$x^6$ can be on $(-1,0]$ is $1$, so the maximum error is at most $1$. (Notice that
for $c \in (0,1)$ the number $\xi$ can be close to $1$, where $g^{(6)}$ is unbounded,
so this argument gives no useful bound there -- a reflection of the vertical
asymptote at $x=1$.)
\end{example}
Long story short: Taylor's theorem gives a way to bound the amount of error that you can
make when using a truncated Taylor series.
\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}
\begin{axis}[axis lines=center, domain=-1:1.2, xmin=-1, xmax=1.2, ymin=-1, ymax=3,
grid, legend pos=outer north east]
\addplot[smooth, black, very thick] {exp(x)};
% \addlegendentry{$f(x) = e^x$};
\addplot[smooth, red, dashed, very thick] {1+x+x^2/2};
% \addlegendentry{$g(x) = 1+x+x^2/2$};
\draw[fill=black] (axis cs:1,2.718282) circle(0.05cm);
\draw[color=red, fill=red] (axis cs:1,2.5) circle(0.05cm);
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[axis lines=center, domain=0.8:1.2, xmin=0.8, xmax=1.2, ymin=2.2, ymax=3,
grid]
\addplot[smooth, black, very thick] {exp(x)};
% \addlegendentry{$f(x) = e^x$};
\addplot[smooth, red, dashed, very thick] {1+x+x^2/2};
% \addlegendentry{$g(x) = 1+x+x^2/2$};
\draw[fill=black] (axis cs:1,2.718282) circle(0.05cm);
\draw[color=red, fill=red] (axis cs:1,2.5) circle(0.05cm);
% \draw[blue, thick, dashed] (axis cs:1.1,2.6) -- (axis cs:1,2.71828);
% \draw[blue, thick, dashed] (axis cs:1.1,2.6)
% node[anchor=west]{$\approx0.218$} -- (axis cs:1,2.5);
\draw[red] (axis cs:1,2.5) node[anchor=north west]{$(1,2.5)$};
\draw[black] (axis cs:1,2.71828) node[anchor=south east]{$(1,e)$};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{The function $f(x) = e^x$ and a second order Taylor approximation. The solid
black curve is $f(x) = e^x$ and the dashed red curve is the Taylor approximation. The
right-hand plot shows a zoomed in view near the point $x=1$.}
\label{fig:taylor_thm_exp}
\end{figure}
\begin{problem}
The {\it engineer's approximation} to the sine function is:
\begin{center}
For $x$ close to $0$, $\sin(x) \approx x$.
\end{center}
Obviously the word {\it close} is relative. Use Taylor's Theorem to determine how
much error is made with the {\it engineer's approximation} when calculating $\sin(0.5)$.
See Figure \ref{fig:taylor_thm_sine}.
\end{problem}
\solution{
This is simply the first term in the Taylor series, but since the quadratic term of
the Taylor series for sine is zero we can also think of this as a second order Taylor
polynomial ($\sin(x) \approx 0 + x + 0x^2$). Hence, the remainder function is
\[ R_2(x) = \frac{f^{(3)}(\xi)}{3!}x^3 \]
where $\xi$ is some number such that $0 < \xi < 0.5$. Therefore, $R_2(0.5) =
\frac{-\cos(\xi)}{6} \cdot (0.125) \approx -0.0208 \cos(\xi)$ and since $|\cos(x)|\le
1$ we know that the error will be less than about $0.021$ when using the
engineer's approximation of sine at $0.5$.
}
\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}
\begin{axis}[axis lines=center, domain=-2:2, xmin=-2, xmax=2, ymin=-1, ymax=1,
grid, legend pos=outer north east]
\addplot[smooth, black, very thick] {sin(deg(x))};
% \addlegendentry{$f(x) = e^x$};
\addplot[smooth, red, dashed, very thick] {x};
% \addlegendentry{$g(x) = 1+x+x^2/2$};
\draw[fill=black] (axis cs:0.5,0.4794) circle(0.05cm);
\draw[color=red, fill=red] (axis cs:0.5,0.5) circle(0.05cm);
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[axis lines=center, domain=0.25:0.75, xmin=0.25, xmax=0.75,
ymin=0.25, ymax=0.75, grid]
\addplot[smooth, black, very thick] {sin(deg(x))};
% \addlegendentry{$f(x) = e^x$};
\addplot[smooth, red, dashed, very thick] {x};
% \addlegendentry{$g(x) = 1+x+x^2/2$};
\draw[fill=black] (axis cs:0.5,0.4794) circle(0.05cm);
\draw[color=red, fill=red] (axis cs:0.5,0.5) circle(0.05cm);
\end{axis}
\end{tikzpicture}
\end{center}
\caption{The function $f(x) = \sin(x)$ and a second order Taylor approximation. The solid
black curve is $f(x) = \sin(x)$ and the dashed red curve is the Taylor approximation. The
right-hand plot shows a zoomed in view near the point $x=0.5$.}
\label{fig:taylor_thm_sine}
\end{figure}
\begin{problem}
No computational software actually {\it knows} functions like the exponential function
or the sine function. Instead, they have a way to calculate values for these
functions based on Taylor series.
If we want to calculate a value for $e^{0.5}$ on a computer, how many terms in the
Taylor series do we need so that the truncation error is less than machine precision?
\end{problem}
\solution{
The truncation error for the exponential function is
\[ R_n(x) = \frac{e^\xi}{(n+1)!} x^{n+1} \]
where $\xi \in (0,0.5)$. Knowing that the exponential function is
monotonically increasing we see that $R_n(0.5) \le \frac{\sqrt{e}}{(n+1)!}
(0.5)^{n+1}$. Hence we can find the number of terms by finding $n$
such that $\frac{\sqrt{e}\,(0.5)^{n+1}}{(n+1)!} < 1 \times 10^{-16}$. Making a table
of values of this sequence, it is reasonably quick to see that we need the terms
through about $n=14$ (roughly 15 terms) in the Taylor series for the truncation
error of this computation to fall below machine precision.
}
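The hunt for $n$ in the solution above can be automated. This Python sketch (our own code, not a library routine) increases $n$ until the remainder bound $\sqrt{e}\,(0.5)^{n+1}/(n+1)!$ drops below $10^{-16}$:

```python
import math

# Find the smallest n with sqrt(e) * 0.5^(n+1) / (n+1)! < 1e-16, i.e. the
# point at which the truncation error bound falls below machine precision.
n = 0
while math.sqrt(math.e) * 0.5**(n + 1) / math.factorial(n + 1) >= 1e-16:
    n += 1
print(n)  # terms through n = 14, i.e. about 15 terms
```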
\begin{example}
How many terms of the Taylor Series for $e^x$ (centered at $a=0$) are needed to
calculate $e$ to within 10 decimal places? \\{\bf Solution:} \\
Recall that if $f(x) = e^x$ then
\[ f(x) = \sum_{k=0}^\infty \frac{x^k}{k!}, \]
and this Taylor Series representation converges for all $x \in \mathbb{R}$. From
Taylor's Theorem we know that the remainder term takes the form
\[ R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} x^{n+1} \]
where $\xi \in (0,1)$. For the exponential function we know that $f^{(n+1)}(\xi) =
f(\xi) = e^\xi$. Furthermore, we are trying to approximate the error at $x=1$ so the
remainder term is
\[ R_n(1) = \frac{e^\xi}{(n+1)!}. \]
We want to find $n$ such that $R_n(1) < 10^{-10}$. Observe that since $\xi \in (0,1)$
we know that $e^\xi < e$. Hence we need $n$ such that $e \cdot 10^{10} < (n+1)!$.
Examining the orders of magnitude for the factorial sequence we see that $n \ge
13$ since $(13+1)! \approx 9 \times 10^{10} > e \cdot 10^{10}$.
Indeed, if we use the terms through $n=13$ in the Taylor Series for $f(x) = e^x$ we get
\[ f(1) \approx 2.71828182844676, \]
and this approximation is correct to the $10^{th}$ decimal digit.
\end{example}
\begin{example}
Calculate $e$ to within machine precision. \\{\bf Solution:}\\
Modifying the previous example we see that we need $n$ such that
\[ R_n(1) = \frac{e^\xi}{(n+1)!} < 10^{-16}. \]
Examining the orders of magnitude of the factorial sequence we see that we need $n \ge
18$ terms in the Taylor Series to have a value of $e$ that is accurate to within