UIUCDCS-R-75-713                                              COO-2383-0022

AUTOMATIC CURVE FITTING FOR INTERACTIVE DISPLAY*

BY

WON LYANG CHUNG

May 1975

DEPARTMENT OF COMPUTER SCIENCE
UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
URBANA, ILLINOIS 61801

* This work was supported in part by the Atomic Energy Commission under grant US AEC AT(11-1)2383 and submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science, 1975.

AUTOMATIC CURVE FITTING FOR INTERACTIVE DISPLAY

Won Lyang Chung, Ph.D.
Department of Computer Science
University of Illinois at Urbana-Champaign, 1975

This thesis presents a simple, economical, and reliable automatic graphical data compression algorithm for interactively displaying curves on the screen of CRT-like graphics terminals. An adaptive local scheme based on interpolating piecewise cubic polynomials with continuous slopes is designed and implemented. The scheme accepts a large number of data points as they are generated, one point at a time, by an ordinary differential equations package and compresses the data into a compact picture representation. The important features of this algorithm are its adaptive use of two quantitative measures of approximation (a local least-squares error norm and a pseudo distance norm) and its ability to regenerate aesthetically pleasing curves, using piecewise cubic polynomial interpolants with relatively few knots, by one-sided, local computational procedures. The advantages of one-sided, local procedures are economical computation based on local data and a fast search for knots. The problem of numerical stability imposed by one-sided, local procedures is solved by the idea of extended intervals, which were developed intuitively and proved to have interesting mathematical and computational effects on the stability and error behavior of the algorithm. The scheme can be used for compression, transmission, and display of graphical data in both on-line and off-line environments.

ACKNOWLEDGMENTS

I wish to express my sincere gratitude to my principal advisor, Professor C. William Gear, whose supervision, criticism, and support made this work possible. I am also grateful to Professor Daniel S. Watanabe for his assistance and constructive reading of this thesis. I am thankful to the Department of Computer Science and the Atomic Energy Commission, who have provided me with financial support during the evolution of this thesis.

Finally, I would like to thank my wife Jiho for her encouragement and patience. This thesis is dedicated to my parents.
IV TABLE OF CONTENTS Page 1 INTRODUCTION 1 1.1 Statement of Problem 1 1.2 Characterization of Problem 8 1.3 Survey of Methods 10 1.4 Conventions and Notations 16 2 ANALYSIS OF THE GRAPHICAL DATA COMPRESSION ALGORITHM 19 2.1 Introduction: Adaptive Interpolation and Knot Selection 19 2.2 Analysis of the Continuous Linear Interpolation Scheme 28 2.3 Analysis of the Discrete Linear Interpolation Scheme 43 3 COMPUTATIONAL RESULTS 55 3.1 Implementation of the Algorithm 55 3.2 Examples and Test Results 64 4 CONCLUSION 76 4.1 Summary 76 4.2 Future Work 77 LIST OF REFERENCES 79 Appendix FORTRAN PROGRAMS 81 VITA 96 1 INTRODUCTION 1.1 Statement of Problem This thesis is addressed to the problem of designing and im- plementing a simple, economical, accurate and reliable automatic graphical data compression scheme for interactively displaying one-dimensional curves on the screen of CRT-like graphics terminals. The primary goals are the reduction of storage and the efficient processing and display of graphical data. However, our method can be used also for the direct input of graphical data and can be applied to a wide class of data fitting problems with proper modifications. Hence it may ultimately facilitate natural man-machine interaction through graphics. Three important and readily distinguishable aspects of the problem should be pointed out. First, we are dealing with the problem of computer graphics, in other words, the problem of representing and regenerating graphical data. According to the classification by Rosenfeld [27], the problem belongs to the area of transforming picture descriptions into visual images. Second, we are interested in the automatic compression of graphical data which consist of a large number of points produced one point at a time by an ordinary differential equation integrator. We wish to compress the input data points as they are generated without the need for storing all the input data points, thereby processing the input data on a local basis. We also wish to reduce a large set of numerical data to a small set of output data representing a picture. We are interested in an automatic scheme, not an interactive one as in the graphics systems for interactively designing free-form curves and surfaces developed by Coons [8], Forrest [14], and Riesenfeld [26]. Finally, we are interested in designing a practical computer algorithm applicable to a wide range of problems, not a mathematical model for a general description of curves. Mathematical analysis is used as a tool and is of secondary importance. In our analysis, we strive for consistency rather than completeness. The description of the problem environment will help us to have a deeper understanding of the problem through exposure of the nature of the input/output data and the function of the total environment. It will also serve as a typical example of the applications for which the method is designed. The environment of our problem is a generalized modeling and simulation system with interactive graphics capabilities. The system is designed for computer-aided design of general network systems and graphics are used as the medium of man-machine communication. The system configuration takes the familiar form of time-shared graphics with intelligent satellites [22] and its detailed description can be found in references [17, 20]. Figures 1.1 and 1.2 show block diagrams describing the hardware and software configurations of the system. 
Figure 1.1  Hardware Configuration of the System

Figure 1.2  Software Configuration of the System

A sample display generated by the plot package, which is responsible for graphical interactions, is shown in Figure 1.3. We can see not only the curves but also the menu containing commands and a message in the display output that is generated by the plot package in which the automatic graphical data compression scheme is embedded. The implementation of the plot package is described in detail by the author in reference [25]. An excellent account of general principles in the design of such packages is given by Newman and Sproull [22] and by Foley and Wallace [12]. Therefore, the following discussion of the plot package is rather sketchy.

Three important characteristics of the plot package will be mentioned before we focus our attention on the main problem. First, the graphical nature of the package: it is concerned with the display of graphical data, and therefore requires efficient representation and processing of pictorial data. Second, the interactive nature of the package: the need for real-time response and for a language of discourse, i.e., a graphical command language, is important. Third, the applications nature of the package: the requirements to choose a programming language, to maintain device independence, and to have a picture representation appropriate for problem descriptions are important.

The plot package is written in FORTRAN with special attention given to its size and speed. All graphics functions are realized through subroutine calls to low-level graphics software. The command language is designed using an approach similar to the GULP system [23], in which the syntax is specified at a user language definition stage and the semantics, which signify the translation to be carried out once a particular statement is recognized (a "statement" is the coordinates of a ...)

Figure 1.3  A Sample Display

1.4 Conventions and Notations

The input data points will be denoted by

    T(n) = {t_0, t_1, ..., t_n},  t_i < t_k if i < k;
    F(n) = {f_0, f_1, ..., f_n},  f_i = f(t_i),  i = 0, 1, ..., n.

The output data are a set of ordered triples denoted by (x_j, y_j, m_j), where x_j is the value of the j-th knot, y_j is the value of the ordinate at x_j, and m_j is the value of the slope. Define

    X(N) = {x_0, x_1, ..., x_N},  x_0 < x_1 < ... < x_N;
    Y(N) = {y_0, y_1, ..., y_N},  y_j = f(x_j),  j = 0, 1, ..., N;
    M(N) = {m_0, m_1, ..., m_N}.

The interval of interest will be denoted by I = [x_0, x_N]. The j-th subinterval will be denoted by I_j = [x_{j-1}, x_j] for j = 1, ..., N. The input data points which are deleted for the subinterval I_{j+1} will be denoted by

    X_j = {t_{i(j)}, t_{i(j)+1}, t_{i(j)+2}, ..., t_{i(j)+s_j}},  t_{i(j+1)} = x_{j+1};
    Y_j = {f_{i(j)}, f_{i(j)+1}, f_{i(j)+2}, ..., f_{i(j)+s_j}},  f_{i(j+1)} = y_{j+1};

where i(.) is a mapping from {0, 1, ..., N} to {0, 1, ..., n} and s_j is the number of data points in I_{j+1}, excluding endpoints. We also define the set of indices of data points for each I_j by

    S_j = {i(j-1) + 1, i(j-1) + 2, ..., i(j-1) + s_j}.
The piecewise cubic polynomials interpolating through X(N), Y(N) with slopes M(N) are denoted by Py(x), and the segment of Py(x) for the subinterval I_j is denoted by P_j(x). Thus, the continuity and interpolation conditions of Py(x) are represented as follows:

    P_j(x) = m_{j-1} α_j(x) + m_j β_j(x) + y_{j-1} γ_j(x) + y_j δ_j(x),   j = 1, ..., N,

where

    α_j(x) = (x_j - x)^2 (x - x_{j-1}) / h_j^2,
    β_j(x) = (x - x_{j-1})^2 (x - x_j) / h_j^2,
    γ_j(x) = (x_j - x)^2 [2(x - x_{j-1}) + h_j] / h_j^3,
    δ_j(x) = (x - x_{j-1})^2 [2(x_j - x) + h_j] / h_j^3,

and h_j = x_j - x_{j-1};

    Py(x) = {P_j(x)},  j = 1, ..., N;
    y_{j-1} = f(x_{j-1}) = P_j(x_{j-1}),   y_j = f(x_j) = P_j(x_j);
    m_{j-1} = (d/dx) P_j(x) at x = x_{j-1},   m_j = (d/dx) P_j(x) at x = x_j,   j = 1, ..., N;
    x_0 = t_0,  y_0 = f_0,  x_N = t_n,  y_N = f_n.

2 ANALYSIS OF THE GRAPHICAL DATA COMPRESSION ALGORITHM

In this chapter we discuss and analyze the adaptive algorithm in detail. In Section 2.1 we discuss the motivations behind our algorithm and present a detailed description of the scheme. In Section 2.2 we present a mathematical analysis of the continuous analogue of our scheme. The continuous case is easy to analyze, is of theoretical interest, and serves to illustrate the main features of the scheme. Finally, in Section 2.3 we briefly sketch the analysis of the discrete algorithm following the pattern used in Section 2.2. The mathematical details are not presented because the formulae involved are rather messy. Instead we content ourselves with indicating how the analysis of Section 2.2 can be generalized to the discrete case.

2.1 Introduction: Adaptive Interpolation and Knot Selection

In this section, we give a precise description of the automatic data compression algorithm by discussing the ideas and computational procedures for computing the interpolating piecewise cubic polynomials and selecting knots. The following particular method of computing the slope m_j was chosen because it turned out to be superior to other schemes after we experimented with various local schemes [25]. The linear scheme is based on the minimization of the weighted L_2-norm of the difference between the original data and the cubic interpolant,

    ||E_j||^2 = Σ_{k∈S_j} w_k [f(t_k) - P_j(t_k)]^2.                                     (2.1)

The following weights w_k are used in the computer program:

    w_k = t_k - t_{k-1},         k = i(j-1) + 1,
    w_k = t_{k+1} - t_{k-1},     i(j-1) + 2 ≤ k ≤ i(j-1) + s_j - 1,                      (2.2)
    w_k = t_{k+1} - t_k,         k = i(j-1) + s_j.

With these weights, the summation is the trapezoidal rule for the integration of the square error over I_j. Other weights can be used as long as they satisfy the requirements for stability of the algorithm. This is discussed in Section 2.3. The free parameter of the cubic polynomial P_j is the slope m_j at the right endpoint of I_j. This slope is chosen to minimize ||E_j||. Setting ∂||E_j||^2 / ∂m_j = 0 and solving for m_j yields

    m_j = Σ_{k∈S_j} w_k [f_k β_j(t_k) - m_{j-1} α_j(t_k) β_j(t_k) - y_{j-1} γ_j(t_k) β_j(t_k) - y_j δ_j(t_k) β_j(t_k)]
          / Σ_{k∈S_j} w_k [β_j(t_k)]^2.                                                  (2.3)

This scheme is vulnerable to the problem of exponential error growth. This was observed experimentally and is evident in the mathematical analysis of the scheme (see Section 2.3). The reason for this instability is the possible amplification of the errors in the slopes m_j. This process can occur as follows. Suppose the data points are concentrated near the left endpoints of the intervals. This can lead to a bad choice of a slope, say m_j. Given this slope m_j, the next cubic segment may have to twist itself badly to approximate the input data, leading to an even worse slope m_{j+1}. If this process is continued, the resulting interpolant may not be smooth and may require so many knots that it is useless (see Figure 2.1).
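The linear computation just described is compact enough to show in full before we turn to the remedy. The following short program is only an illustrative sketch in modern free-form Fortran, not one of the FORTRAN routines listed in the Appendix: it evaluates the basis functions of Section 1.4 and the slope formula (2.3) with the trapezoidal weights (2.2) for a single subinterval of f(t) = sin t, and all names and test data in it are chosen for exposition.

program slope_demo
  ! Illustrative sketch of the linear least-squares slope formula (2.3).
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: n = 9                    ! interior data points in I_j
  real(dp) :: t(n), f(n), w(n)
  real(dp) :: xl, xr, yl, yr, ml, mr, num, den, a, b, g, d
  integer  :: k
  ! test data: f(t) = sin t on the subinterval [x_{j-1}, x_j] = [0, 1]
  xl = 0.0_dp
  xr = 1.0_dp
  yl = sin(xl)
  yr = sin(xr)
  ml = cos(xl)                                   ! slope at the left knot, assumed known
  do k = 1, n
     t(k) = xl + k*(xr - xl)/real(n + 1, dp)     ! equidistant interior points
     f(k) = sin(t(k))
  end do
  ! trapezoidal-rule weights of (2.2)
  w(1) = t(1) - xl
  do k = 2, n - 1
     w(k) = t(k+1) - t(k-1)
  end do
  w(n) = xr - t(n)
  ! formula (2.3): the slope m_j minimizing the weighted squared error (2.1)
  num = 0.0_dp
  den = 0.0_dp
  do k = 1, n
     call basis(t(k), a, b, g, d)
     num = num + w(k)*b*(f(k) - ml*a - yl*g - yr*d)
     den = den + w(k)*b*b
  end do
  mr = num/den
  print '(a,f10.6,a,f10.6)', ' computed m_j =', mr, '   exact slope cos(x_j) =', cos(xr)
contains
  subroutine basis(x, a, b, g, d)
    ! cubic Hermite basis of Section 1.4 on [xl, xr]:
    ! P_j(x) = m_{j-1}*a + m_j*b + y_{j-1}*g + y_j*d
    real(dp), intent(in)  :: x
    real(dp), intent(out) :: a, b, g, d
    real(dp) :: h
    h = xr - xl
    a = (xr - x)**2*(x - xl)/h**2
    b = (x - xl)**2*(x - xr)/h**2
    g = (xr - x)**2*(2.0_dp*(x - xl) + h)/h**3
    d = (x - xl)**2*(2.0_dp*(xr - x) + h)/h**3
  end subroutine basis
end program slope_demo

With exact end values and left slope, the computed m_j comes out close to cos(x_j); the instability discussed above enters only when an erroneous m_{j-1} is fed into successive subintervals.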
It was found, however, that we can avoid this problem by using additional "future" data to modify the slopes so that the interpolant can better accommodate the future behavior of the curve. This can be done by introducing an extended interval, defined as follows.

Definition: The extended interval for I_j, denoted by ΔI_j, is defined by

    ΔI_j = (x_j, t_{i(j)+e_j}]   with e_j ≥ 1.

It is convenient to define the set of indices of data points contained in ΔI_j:

    ΔS_j = {i(j) + 1, ..., i(j) + e_j}.

Hence we choose m_j to minimize the weighted L_2-norm of the errors for the subinterval including the extended interval, I_j + ΔI_j,

    ||E_j|| = Σ_{k∈S_j+ΔS_j} w_k [f(t_k) - P_j(t_k)]^2.

This leads to the following formula for m_j:

    m_j = Σ_{k∈S_j+ΔS_j} w_k [f_k β_j(t_k) - m_{j-1} α_j(t_k) β_j(t_k) - y_{j-1} γ_j(t_k) β_j(t_k) - y_j δ_j(t_k) β_j(t_k)]
          / Σ_{k∈S_j+ΔS_j} w_k [β_j(t_k)]^2.                                             (2.4)

Figure 2.1  Stability of the Linear Interpolation Scheme

In practice, a few additional points outside I_j are sufficient to make the algorithm stable. Thus we can improve the performance of the algorithm while maintaining its local property. This is illustrated in Figure 2.1. Here the dotted curve is the interpolant obtained using one additional point beyond the knots, and the solid curve is the interpolant obtained using the same set of knots but no additional points.

We now discuss the second method for computing the m_j, the nonlinear scheme. The development of the nonlinear algorithm was motivated by the observation that the linear algorithm with extended intervals performed very well for relatively flat segments of curves but not very satisfactorily for nearly vertical segments. The linear scheme requires too many knots for the faithful approximation of nearly vertical curve segments. A close look at the linear scheme shows an interesting fact: for nearly vertical segments each term in ||E_j||^2 can be very large although the interpolating polynomial is visually indistinguishable from the original curve. This implies that the direct error between f(t) and P_j(t) is not a good measure of closeness for nearly vertical curves. This is illustrated in Figure 2.2. This leads us to define another measure of closeness, a pseudo distance denoted by d_k, by the following formula:

    d_k = [f(t_k) - P_j(t_k)] / [1 + (P_j'(t_k))^2]^{1/2}.                               (2.5)

This "pseudo distance" approximates the shortest distance between f(t_k) and P_j and is used because the true distance is difficult to compute.

Figure 2.2  Comparison of Two Error Criteria

In the nonlinear scheme, the sum of squares of the d_k for the subinterval I_j, together with a correction term denoted by w(m_j - s)^2, is minimized with respect to m_j to compute the value of m_j. The correction term, where

    s = [y_j - f_{i(j-1)+s_j}] / [x_j - t_{i(j-1)+s_j}]

is the first-order difference approximation to m_j and w is a positive weight with values between 0 and 1, is included for the purpose of preventing us from having a bad value of m_j which might cause trouble in the later stages. Thus we select m_j such that

    (∂/∂m_j) [ Σ_{k∈S_j} |d_k|^2 + w(m_j - s)^2 ] = 0                                    (2.6)

to obtain

    m_j = s - Σ_{k∈S_j} {f_k - P_j(t_k)} [-2 β_j(t_k) {1 + (P_j'(t_k))^2} - 2 {f_k - P_j(t_k)} P_j'(t_k) β_j'(t_k)]
              / [2w {1 + (P_j'(t_k))^2}^2].                                              (2.7)

This formula is clearly a nonlinear equation for m_j and therefore must be solved for m_j by an iterative technique. It is solved by Steffensen's iteration method with the initial estimate of m_j obtained from the linear interpolation scheme.
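Equation (2.7) has the fixed-point form m_j = F(m_j), and Steffensen's method solves such an equation by combining two plain fixed-point evaluations with an Aitken-type correction. The sketch below shows the bare iteration for a stand-in map (here F(m) = cos m, whose fixed point is known); it is not the thesis routine, and the tolerance and iteration limit are arbitrary illustrative choices.

program steffensen_demo
  ! Steffensen's iteration for a fixed-point equation m = F(m), the method
  ! used to solve the nonlinear slope equation (2.7).  The map fmap below is
  ! a stand-in; in the compression program the map is the right-hand side of (2.7).
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  real(dp) :: m, fm, ffm, denom
  integer  :: it
  m = 0.0_dp                          ! initial estimate (from the linear scheme)
  do it = 1, 25
     fm    = fmap(m)
     ffm   = fmap(fm)
     denom = ffm - 2.0_dp*fm + m
     if (denom == 0.0_dp) exit        ! already (numerically) at the fixed point
     m = m - (fm - m)**2/denom        ! Steffensen (Aitken-accelerated) update
     if (abs(fmap(m) - m) < 1.0e-12_dp) exit
  end do
  print '(a,i3,a,f16.12)', ' iterations used =', it, '   fixed point m =', m
contains
  function fmap(x) result(y)
    ! stand-in fixed-point map; its fixed point m = cos(m) is about 0.739085
    real(dp), intent(in) :: x
    real(dp) :: y
    y = cos(x)
  end function fmap
end program steffensen_demo

In the compression program the same update is applied with the map taken as the right-hand side of (2.7) and with the linear-scheme slope as the starting value.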
Although it is not feasible to give a mathematical analysis of the nonlinear scheme, observations from the experiments give evidence of satisfactory performance of this scheme. We did not encounter any difficulty even when we used only this scheme for all curve segments, achieving reasonable compression and smooth approximations. The difference between the vertical distance |f(t_k) - P_j(t_k)| and the pseudo distance d_k is significant for nearly vertical segments, but we can easily see that d_k ≈ |f(t_k) - P_j(t_k)| when the segment is nearly horizontal (see Figure 2.2).

As hoped, the nonlinear method was found to be superior to the linear one for nearly vertical curve segments. This result, together with the previous one, indicates that one method is complementary to the other. Hence it is natural to use both methods in an adaptive manner in order to obtain the best possible overall performance, i.e., to obtain smooth approximation and high compression. The linear method (2.4) is used for relatively horizontal segments, and the nonlinear method (2.7) only for nearly vertical segments. The extra computational effort the nonlinear scheme requires to calculate the value of the slope m_j is offset by its ability to produce a compact curve description for interactive display.

The knots are chosen to maximize the subinterval lengths subject to the condition that ||y - Py|| ≤ EPS, where ||·|| is a certain norm and EPS is a specified maximum permissible error chosen to ensure a faithful approximation. Two norms for ||y - Py|| were chosen since they were superior to other norms we have tested: the maximum error norm for the subinterval scaled by the maximum value of |y| when the linear scheme is used, and the square of the maximum pseudo distance when the nonlinear scheme is used. Of course, the same result could be achieved by using two tolerances for the maximum error, one for the linear scheme and its square root for the nonlinear scheme.

As a summary, we give a concise presentation of the automatic data compression scheme. Let x_{j-1} be the previously determined knot and t_{i(j-1)+r} be the present input data point to be processed. We wish to find x_j and P_j(x) which maximize h_j = x_j - x_{j-1} while ||y - Py|| ≤ EPS for [x_{j-1}, x_j].

[A] Determine the maximum value of the first-order approximation to the slope of the input data for the subinterval, scaled to display coordinates, by

    s = ( max_k |f(t_k) - f(t_{k-1})| / (t_k - t_{k-1}) ) TMAX / YMAX,   t_k ∈ [x_{j-1}, t_{i(j-1)+r}],

where

    YMAX = max_i |f(t_i)|   and   TMAX = t_n.

If s is smaller than the threshold, go to [B]. Otherwise, go to [C].

[B] Linear scheme: Compute the slope m_j at t_{i(j-1)+r} using (2.4). Determine

    EMAX = max_k |f(t_k) - P_j(t_k)| / YMAX,   t_k ∈ [x_{j-1}, t_{i(j-1)+r}].

Go to [D].

[C] Nonlinear scheme: Compute the slope m_j at t_{i(j-1)+r} using (2.7). Determine

    EMAX = max_{k∈S_j} |d_k|^2.

Go to [D].

[D] If EMAX < EPS, save the present point; get the next input point, r = r + 1; go to [A]. If EMAX = EPS, t_{i(j-1)+r} becomes the knot x_j; go to [E]. If EMAX > EPS, the previously saved point is used as the knot; go to [E].

[E] Clear the buffers; get the next input points and repeat the above steps until x_N = t_n.

2.2 Analysis of the Continuous Linear Interpolation Scheme

The results of the tests of the adaptive algorithm described in Section 2.1, using a carefully chosen set of examples, convinced us that it is powerful and useful for automatic curve fitting. However, a good computer algorithm should be analyzed and clearly understood so that its users may use it with confidence.
Only the interpolation based on the linear method is analyzed because the nonlinear method has resisted analysis. Instead of analysis we rely on empirical evidence that the nonlinear method works well most of the time. The linear algorithm based on the continuous error norm will be defined and analyzed in a rigorous manner, particularly because it is more interesting mathematically and simpler than the actual computer algorithm described in Section 2.1. Moreover, analysis of the continuous algorithm illustrates all the main characteristics of the algorithm, the only major difference being the dependence of the discrete algorithm on the input data point distribution, as we will see in Section 2.3.

We will show in this section that the linear piecewise cubic polynomial interpolation scheme is numerically stable and uniformly convergent as the size of the subintervals approaches zero. We first define the continuous L_2-norm of the error as

    ||E_j||^2 = ∫_{x_{j-1}}^{x_j} [f(t) - P_j(t)]^2 dt.                                  (2.8)

We select m_j such that

    ∂||E_j||^2 / ∂m_j = 0                                                                (2.9)

and we obtain

    m_j = L(m_{j-1}, y_{j-1}, y_j)
        = [ ∫_{x_{j-1}}^{x_j} f(t) β_j(t) dt - m_{j-1} ∫_{x_{j-1}}^{x_j} α_j(t) β_j(t) dt
            - y_{j-1} ∫_{x_{j-1}}^{x_j} γ_j(t) β_j(t) dt - y_j ∫_{x_{j-1}}^{x_j} δ_j(t) β_j(t) dt ]
          / ∫_{x_{j-1}}^{x_j} {β_j(t)}^2 dt,                                             (2.10)

where α_j(t), β_j(t), γ_j(t), and δ_j(t) are the functions previously defined in Section 1.4.

Let m̂_j be the value of the slope at the knot x_j which is computed by the same type of formula as (2.10) with m_{j-1} replaced by y'_{j-1}, the true value of the first derivative at the previous knot. That is,

    m̂_j = L(y'_{j-1}, y_{j-1}, y_j).                                                     (2.11)

Then, the amplification of the error caused by using the incorrect value m_{j-1} is represented by

    m_j - m̂_j = λ_j (y'_{j-1} - m_{j-1}),                                                (2.12)

where

    λ_j = ∫_{x_{j-1}}^{x_j} α_j(x) β_j(x) dx / ∫_{x_{j-1}}^{x_j} {β_j(x)}^2 dx = -0.75,   (2.13)

or, for the method with extended interval,

    λ_j = ∫_{x_{j-1}}^{x_j+Δh_j} α_j(x) β_j(x) dx / ∫_{x_{j-1}}^{x_j+Δh_j} {β_j(x)}^2 dx
        = -0.75 { 1 - 35 a_j^3 / (1 - 4 a_j + 10 a_j^2 + 15 a_j^3) },                    (2.14)

with a_j = Δh_j / (x_j - x_{j-1}), where Δh_j is the length of the extended interval. We will call λ_j the stability parameter. We will drop the subscript j from λ_j for the continuous case since the λ_j are equal for all j.

Let e_j = y'_j - m_j and ē_j = y'_j - m̂_j. Using (2.12), we obtain

    m_j = m̂_j + λ e_{j-1}.

Subtracting y'_j from both sides and using the definitions of e_j and ē_j, we obtain

    e_j = -λ e_{j-1} + ē_j.                                                              (2.15)

This error difference equation describes the error propagation process. Now suppose e_0 = 0 and |ē_j| ≤ E. In fact, we will show in Lemma 2.1 that ē_j = O(h_max^3), where h_max is the maximum interval length. It is easy to show that |e_j| is bounded by

    |e_j| ≤ | [1 - (-λ)^j] / (1 + λ) | E.                                                (2.16)

Hence |e_j| is bounded if and only if |λ| < 1. The algorithm is defined to be stable if |λ| < 1. It is simple to see from (2.14) that the linear method with extended interval is stable if 0 < a_j < ∞ for all j. The behavior of λ is illustrated in Figure 2.3 as a function of a. The following observations can be made about the relationship between the extended interval and stability.

1. λ = -0.75 for a = 0 (without extended interval).

2. lim_{a→∞} λ = 1 and 0 ≤ |λ| < 1 for a ∈ [0, +∞).

3. |λ| = 0 for a ≈ 0.3. The stability is maximized by choosing the extended interval such that Δh_j = 0.3 h_j at each step.

Once the question about the stability of the method is answered, we then have to attack the problem of error behavior. In other words, as the size of the subintervals goes to zero, does the piecewise cubic polynomial interpolant Py converge uniformly to the original function y?
If so, how fast is the convergence? Or, what is the order of approxima- tion? We will answer these important questions by developing a priori error bounds in the following theorems. The following main theorems prove not only the uniform con- vergence of the approximation, but also that the piecewise cubic poly- nomial interpolant Py is a fourth-order approximation to y with respect to the L -norm for sufficiently smooth y. 32 Figure 2.3 Stability Parameter of the Continuous Linear Scheme 33 We now start our analysis with a classical linear approxima- tion theorem by Peano [9], which is the basis of our derivation. Peano's Theorem n-4-T Let L(f) be a linear functional defined on space C [a,b] of the form L(f) = b n :(i) n J k (k) [ I a.(x)f^(x)] dx+ I I b f ^ v, i=0 n k=0 i=l 1K K *ik } where the a.(x) are piecewise continuous over [a,b] and the x.^ e [a,b], Let L(p) = for all p e p . Then, for all f e C n+1 [a,b], L(f) ■ f (n+1) (t)K(t)dt where K(t) = ^f L x [(x-t)!J], (x-t)J = f (x-t) n , X > t 0, x < t, and the subscript x on L means L operates on the argument as function of x. (r) The first main theorem on the pointwise error for y x ', r = 0, 1, 2, 3 is for linear interpolation without extended interval. 34 Theorem 2.1 Let y e C 4 [I]. Then, l|y (r W r, L'T I ||yW|Lhir.t.-o.i.2 > i where T o 7 384' T 2v^3 + 27 'l 532 T 2 2 + 9h r 24 ' 2 + 3h^ T - r '3 4 ' h r max min' h max = max l h iU max Uj^N J and h m . = min Ih.l . In order to prove this theorem, we will prove the following lemmas. Our proof is similar to that by Hall [19] for cubic spline interpolations. The values of all definite integrals appearing in the remainder of this chapter have been checked by FORMAC programs. The following lemma establishes an a priori bound for m-. Lemma 2.1 Let y e C 4 [I]. Then, for each x. e X(N), J lijl Myj-ijl * kll"y (4, 'lLhJL 35 Proof We will use Py and P.(x) to denote the interpolants obtained using in. instead of m.. By Peano's Theorem, j j fX. (4) where R(y) = y - P,(x) = J K(x,x)y^ ; (x)dx J Jx. , J-l K(x,x) = •|tR x [(x-t)^] = ^t{(x-t)^P x (x-t^}. Hence, fX. y - Pj(x) = J ( |r7K(x,T)}y (4) (T) dx. 3x \H We can set j = 1 without loss of generality. It is simple to show that for (x-x)+/3! 3 m" = ( h "^> {-(h-x) 2 (10h 2 +8hx+3x 2 ) + 22h 4 } 1 24h D Since (0-x)+/3! = and (0-x) 2 /2! = 0, it follows that V3T (x ~ t) + } = Vi^ + ^"(h-T)^ (x) Differentiating, we find fx" P x { IF (x ' T) + } = m l 3 l (x) + JT {h - T) l & \ M Th us, since 3-.(h) = 1 and 6-j(h) = 0, we find that 2 i _. fh (h-x) _ m y, - P^h) = i-yr^- m,} y W (x) dx ii JQ 36 It finally follows that 2 ly^l slly^'lLJj^^-^idt^Hy^iiy. q.e.d. The following lemma establishes our a priori error bound for m.. Lemma 2.2 Let y e C 4 [I]. Then, for each x, e X(N), le.l = ly'-m.l * ]^ ||y (4) || h 3 . 1 3 ' ' J J ' 16 " J M °° max Proof Using (2.16) and X = - 0.75, we find that i e .| < | 1-(-A) J i E <_l,. y (4),| h 3 |e j' " ' 1+X ' L " 16 l|y "-"max* The next lemma, due to Birkhoff and Priver [6], gives the error bounds for piecewise cubic Hermite interpolation. Lemma 2.3 For y e C 4 [I], |y (r) V r) IL ^J|y (4) ll hi-, r = o, 1,2, 3 r ".* "°° max' where 1/311 u 384' y l 216' y 2 12' y 3 2 s 37 and H y = y^^U) + yj6(x) + y^.^M + y.j<5(x) Lemma 2.4 For y e C 4 [I], ||Hy (r) -Py (r) |l ^J|y (4) |l hi":, r - 0. 1, 2, 3 max where 1 1 3h r 3h r a " 64* a l ' 16' a 2 " 8 ' and a 3 " 4 * Proof From the definitions of Hy and Py, Hy (r) - Py (r) = e. ,a (r) (x) + e.6 (r) (x). 
It is simple to show that l|Hy (r) -Py (r) |L s|||a (r) (x)| + |S (r) (x)||L | ■ e j sb rTI^ (4) "- h L where max . _ i . 4-, b, 1, b 2 » - mm Thus, V -T i ' b l- 1 ' b 2-lTZ- andb 3 = 12/h min a r = b r /16, r = 0, 1, 2, 3. Q.E.D. 38 Finally, it is easy to prove Theorem 2.1 by the triangular inequality, l|y (r) V r) IL ' l|y (r, V r) JL + llHy (r, -Py (r, IL. That is, T r = y r + a p , r = 0, 1, 2, 3. In the following paragraphs, we will show that the a priori error bounds for the linear interpolation with extended interval is an interesting function of the size of the extended interval. By showing the graph of the error bound constants which are numerically evaluated, we can readily observe the remarkable effect of the extended interval on the error behavior. In practice, the proper choice of the extended interval at each step not only improves the compression capability of the algorithm by reducing the error for the given subinterval, but also provides us with a useful algorithm which is both stable and insensitive to the input data-point distribution. We state the following theorem about the continuous linear interpolation with extended interval which summarizes the above dis- cussion. Theorem 2.2 Let y e C 4 [I]. Then, l|y (r) -Py (r) ll„ ^ E,J|y( 4 »||„h^, r- 0.1.2.3 39 where E Q = M e (a)/4 + 1/384, E ] = M e (a) + v^/216, E 2 = [1 + 6h r M e (a)]/12, E 3 = [1 + 12h^l e (a)]/2, and M (a) is given in Lemma 2.6 as a function of a. For the purpose of illustrating the magnitudes of the error bound constants the values of M e (a) are plotted as a function of a in Figure 2.4. This graph shows that M(a) and therefore the size of error fluctuates as the size of extended interval is changed. The minimum of M (a) occurs at a - 0.3, which coincides with the value of a which minimizes the stability parameter X. Therefore, we can control the size of the extended interval to obtain the best performance of the algorithm. For comparison, an arrow is drawn around 0.043 on the vertical axis in Figure 2.4 to show the error bound constant for cubic spline interpolation which was obtained by the same procedures as used in this section [19]. Even though this cannot be used as a sole measure of goodness of the interpolation scheme, it is mentioned just as one il- lustration of how good our scheme is. Since the proof of Theorem 2.2 is similar to that for Theorem 2.1, we will just state the following two lemmas about the error bounds for m. and m.. We assume that the extended intervals are a constant function of the intervals I.. J 40 0.00 0.37 0.74 1.11 1.48 1.85 Figure 2.4 Error Bound Constant M e (a) 41 Lemma 2.5 where Let y e C 4 [I]. Then, for each x- e X(N), \e.\- l^-mjl ^ e (*)\\y {A) \\y mx f l+a (1-t)| _ K e (a) = J 1—2! m l' dT > m^ = {(l+a-T)J[105a(l+a) 2 -21(l+a-T) + (3a 2 +4a+l)+ 7(l+a-x) 2 (3a+2) - 3(l+a-x) 3 ] + 2(l-T)^(l+a) 5 (60a 2 -55a+ll)} /[24(l+a) 5 (15a 2 -5a+l)]. and Due to the complexity of the integral, we cannot get the error bound constant K (a) in a neat mathematical form. However, the values of K (a) were numerically computed as a function of a. The graph of K (a) is plotted in Figure 2.5. Lemma 2.6 Let y e C 4 [I]. Then, for each x. e X(N), vi lejl ■ |yj-"jl * m 6 («) Hy (4> IL»4 x 42 0.00 0.37 0.74 1.11 1.48 1.85 Figure 2.5 Error Bound Constant K (a) 43 where M e (a) = K e (a)/|l+X|. 
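The closed form (2.14) for the stability parameter and the error recursion (2.15) are easy to check numerically, and doing so makes the role of the extended interval concrete. The following sketch tabulates λ(a) for a few relative extension sizes and then runs the worst-case recursion e_j = -λ e_{j-1} + ē_j with every local error equal to E; the value of E, the number of steps, and the chosen a are arbitrary illustrative inputs, not quantities taken from the thesis.

program stability_demo
  ! Numerical illustration of the stability parameter (2.14) and of the
  ! error propagation equation (2.15) for the continuous linear scheme.
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  real(dp), parameter :: avals(6) = [0.00_dp, 0.10_dp, 0.30_dp, 0.35_dp, 1.00_dp, 5.00_dp]
  real(dp) :: a, lam, e, ebar
  integer  :: i, j
  print '(a)', '       a      lambda(a)'
  do i = 1, size(avals)
     a   = avals(i)
     lam = lambda(a)
     print '(f8.2,f14.6)', a, lam
  end do
  ! error recursion (2.15), e_j = -lambda*e_{j-1} + ebar_j, in the worst case
  ! where every local error ebar_j equals E; here E = 1 and a = 0, so that
  ! lambda = -0.75 and the bound (2.16) reads |e_j| <= 4.
  lam  = lambda(0.0_dp)
  ebar = 1.0_dp
  e    = 0.0_dp
  do j = 1, 20
     e = -lam*e + ebar
  end do
  print '(a,f10.6,a)', ' |e_20| =', abs(e), '  (bounded, consistent with (2.16))'
contains
  function lambda(a) result(lam)
    ! closed form (2.14): lambda(0) = -0.75, lambda -> 1 as a -> infinity,
    ! and lambda changes sign between a = 0.3 and a = 0.35
    real(dp), intent(in) :: a
    real(dp) :: lam
    lam = -0.75_dp*(1.0_dp - 35.0_dp*a**3 &
          /(1.0_dp - 4.0_dp*a + 10.0_dp*a**2 + 15.0_dp*a**3))
  end function lambda
end program stability_demo

The tabulated values show |λ| < 1 for every a > 0 and a sign change just beyond a = 0.3, in agreement with the observations above, and the recursion stays within the bound 4E given by (2.16).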
2.3 Analysis of the Discrete Linear Interpolation Scheme

We will discuss the stability and error problems of the discrete linear interpolation scheme in a rather informal manner, with emphasis on the practical aspects. The linear method based on the discrete error norm can be analyzed in the same way as the linear method based on the continuous error norm. The stability parameter is defined in the same manner except that the integrals are replaced by summations:

    λ_j = Σ_{k∈S_j+ΔS_j} w_k α_j(t_k) β_j(t_k) / Σ_{k∈S_j+ΔS_j} w_k {β_j(t_k)}^2.        (2.17)

This formula cannot be reduced to a neat form like (2.14). However, we can easily observe that λ_j is a function of not only the extended interval size but also the distribution of the data points. At first thought, this sounds like a rather severe limitation on the actual algorithm, because it implies that the algorithm, when implemented as a computer program, cannot handle a variety of data points. If this were true, the algorithm would not be useful, since the input data is not tailored to the algorithm.

For the discrete linear interpolation scheme, the value of the stability parameter varies for each subinterval according to the data point distribution within the subinterval. Therefore, the definition of stability for the continuous scheme cannot be used here. We will define the stability of the discrete linear scheme as follows: if |λ_j| < 1 for all j, then the scheme is said to be strongly stable; if |λ_j| < 1 for most j, then it is said to be (weakly) stable. This means that the discrete linear interpolation scheme is stable if |λ_j| ≥ 1 for only a few subintervals. As we will see later, only the discrete linear interpolation scheme with the extended interval can be strongly stable. Therefore, the extended interval is critical for the discrete scheme, whereas it is used only to improve the performance of the continuous scheme.

The stability problem for the discrete linear interpolation scheme is discussed first without extended intervals, and then with extended intervals. For the first case, we will discuss the stability in terms of data point distributions. We will examine the behavior of the stability parameter λ_j and the various data point distributions which make the algorithm stable. For the second case, we will explain how we can obtain stability through control of the extended interval. For both cases, we will discuss the case of equidistant data points as a specific example.

Rewriting the expression for the stability parameter from (2.17), without the extended interval, and setting t_k = x_{j-1} + ξ_k h_j, where h_j = x_j - x_{j-1} and 0 ≤ ξ_k ≤ 1, we get

    λ_j = - Σ_{k∈S_j} w_k ξ_k^3 (1 - ξ_k)^3 / Σ_{k∈S_j} w_k ξ_k^4 (1 - ξ_k)^2.

Although the w_k are given by (2.2) for practical purposes, we assume that they are just arbitrary positive numbers in the following discussion. From this expression for λ_j and the graphs of the factors A_k = ξ_k^3 (1 - ξ_k)^3 and B_k = ξ_k^4 (1 - ξ_k)^2 appearing in the numerator and denominator, which are given in Figure 2.6 as functions of ξ, the following observations can be made about the stability parameter.

1. The λ_j are functions of the data point distribution.

2. If all the ξ_k > (<) 1/2, then |λ_j| ≤ (>) 1. Furthermore, lim_{ξ→0} λ_j = -∞ and lim_{ξ→1} λ_j = 0. This is obvious from the expression for λ_j or from Figure 2.6.

3. If Σ_{k∈S_j} w_k [α_j(t_k)]^2 ≤ Σ_{k∈S_j} w_k [β_j(t_k)]^2, then |λ_j| ≤ 1. This can be easily proved by Cauchy's inequality. It should be noted that observation 2 is a special case of this.

4. Let the w_k be equal and let g(ξ) = [α(ξ)]^2 - [β(ξ)]^2. If for each ξ_i < 1/2 there is a ξ_l > 1/2 in the interval I_j with g(ξ_l) ≤ -g(ξ_i), then |λ_j| ≤ 1.
This can easily be seen K 1 J from the graph of g(£). It should be noted that this makes plausible the assertion that X. < 1 for equidistributed data points. If the £ k and w k are such that A, ^ A 2 > . . . > A s , and j 46 • o- C\J * \ \ co oj- \ o •« — 1 >< • - CD O •- O — I h- 1 1 -^t-^ ^.00 0.19 0.38 0.57 0.7B 0.95 Figure 2.6 Data Point Distribution and Stability 47 B-j ^ A-j, B.jB 2 * A ] A 2 , ..., 6^2... B s * A 1 A 2 ...A g , then X. < 1. This is the trivial consequence of the theorem in Section 3.9.22 in Mitrinovic [21]. To be specific, let's look at equidistant data points with the w^ given by (2.6). The X. become \ = I k 3 (s.+l-k) 3 / I k 4 (s.+l-k) 2 J keS, J keS, J It is easy to show, using Bernoulli polynomials and the integer sums, that s^(l/4-3/5+3/6-l/7)+e(s^) X. = ^-7 7^- J sj(l/5-2/6+l/7)+6(sV) Thus, we get lim X. = - 0.75. The discrete linear scheme with the extended interval is of particular interest because it is the actual algorithm we use in the computer program. The actual computer algorithm uses only one extra point for the extended interval because it was observed from the test that one extra point simplifies computation without affecting the per- formance of the algorithm. Thus, A. is written as k I V k 3 OW< 1+ *) 3 4 n+ ,3 n keS i a (1+a) - D-, ^ = J = !_ J I w k ^(l-? k ) 2 +w e a 2 (l+a) 4 a 3 (l+a) 4 + D 2 48 where D i ~- i *£u-h? and D 2 ■ i \4" 0. da ' j \c. 2. lim X. = 1. a-*» J 3. lim X. = lim X. = tt- for a > 0. ^° J V lJ 1a 4. - °° < X. < 1 for < a < + °°. J 5. X. = for only one value of a £ 0. This is an obvious consequence of the observations. We have drawn the curves of X. as a function of a for a few data point distributions in Figure 2.7. The above properties of X. can be easily seen from these curves. Therefore, the use of one extra point can be justified by the fact that an extended interval of small size for each subinterval is enough to make the algorithm stable. In practice, choosing a proper size of the extended interval for each step requires the time-consuming process of evaluating the value of X. for each sub- interval and extra point and therefore of doubtful cost benefit. We can -2 easily show that |X,| = for < a < 0.3 because |D^| < 0.8 x 10 where D-, is defined above. This upper bound for |D, | is obtained from the area of the minimal triangle surrounding the curve A ^ in Figure 2.6. 49 Figure 2.7 Stability Parameters for the Discrete Linear Scheme 50 Figure 2.8 Stability Parameters for Equidistant Data Points 51 Finally, we will look at equidistant data points and make the following observations on the X.: a 4 (Ha) 3 - 1 1. lim X. = -^ 2 ^r- E G U)- 2. G(0) = - 0.75. 3. lim G(a) = 1. 4. G(a) = has a single root at a - 0.3. 5. For a > > 1 , G(a) ■* yfr. The graphs of the X. fors. = 2 and s. = 100 are shown in Figure 2.8, for the equidistant data points. Our discussion of the a priori error bounds for the discrete linear interpolation scheme will be very informal and therefore limited to the statement of observations and experience with test runs. It is easy to show that the a priori error bounds are expressed in terms of the X. whose values vary for each subinterval depending on the data point distribution. Solving the difference equation e. = - A.e. , + e. J J J-l J yields e* ■ l n (-A.)e. + n (-x.)e n J i=l k=i+l K n i=l n u — (r) (r) If |e-| is bounded, then |e-| is bounded and the bound for ||y v -Py v 'H, can be expressed in terms of the bound for |e.| as was done in Section •J 2.2. However, the X. 
are also dependent upon the size of the extended 52 intervals. This not only makes the error bound expressions very messy but also makes it worthless to obtain worst case error bounds since (r) (r)n || y -Py H^ < + °° is meaningless. However, it is not difficult to (r) (r) observe that the a priori bounds of ||y v -Py v H^for the discrete scheme with the extended interval will look very much like the graph in Figure 2.4. The only difference would be the magnitude of the error bound constants. We could observe this when we actually calculated the error bound constants for a given sample data point distribution. The most important fact here is that the role of the extended interval is critical for the stability and nice error behavior of the discrete linear interpolation scheme. Finally, we will show that |e.| is bounded in the following vJ lemma. Lemma 2.7 Let y e C 4 [I]. Then, for each x, e X(N), |e,l = |y;-I,l <^l|y (4) ILhi 'j -ji 24 l|jr M « M max Proof Set j = 1 without loss of generality. From Peano's Theorem, we find as in Lemma 2.1 that y-, - m 1 where 2 h (h-x) _ U) i-jT^- m 1 ]y (4) (T)dx ! 53 / Iw k [ 6l (t k )] 2 keS, Since (h-T)J — 21 - - m-j ^ (see Lemma 2.8) and, fh (h-x)J _ { - m^dx = 24, it follows that |yi-i|l ^lly (4) ll.h 3 Q.E.D, Lemma 2.8 (h-x)2 m-, ^ for ^ t ^ h, where m,is given in Lemma 2.7, Proof Set x = uh and t. = £. h to obtain 2! ^ r 4 2 3-25. (1-u) 3 2 (1-u); keS. k k k ' 5 k 3! = h fc {- 5k»-«k-) 3! 2! X w k?k (1 ^k )2 keS. K K } Examine the term in brackets. Let g(C.u) - r^ -35- - J— -J]- 54 We wish to show that (l-u)J g(5,u) < 2 , for < £, u < 1 It is simple to show that 35 g(e.u) * o, i.e., g(£,u) is a montonotically increasing function of £, and that (l-u)J g(l,u) = —yj — Then, the desired result follows. Q.E.D. 55 3 COMPUTATIONAL RESULTS In general, there do not exist clearly defined criteria by which we can measure the usefulness of a computer program. It is reasonable to talk about how good it is or how useful it is in terms of two things. The first is the behavior of the computer program as it is implemented and executed for a particular application or for a particular computer. The second is the performance of the algorithm itself which is rather independent of the particular implementation or the run time environment of the program. For a certain class of algor- ithms, especially combinatorial algorithms, the first problem tends to be treated as an artistic problem which is determined mainly by a programmer's skill or style, and the second is the concern of an area called the analysis of algorithms. For most numerical algorithms, it is a common practice to discuss them together. However, we will discuss them in two separate sections for the following two reasons. The first reason is for readability. The second is that there are several impor- tant problems that receive further scrutiny for the implementation of a good or useful program. 3. 1 Implementation of the Algorithm Since the existing version of the computer programs was written and used for testing the data compression algorithm described in the previous chapter and is far from being ready to be integrated into the plot package, our discussion will concentrate on the various problems 56 that have been considered to be important for the implementation of the algorithm and for the future improvement of the present programs. 
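Because the stability of the implemented discrete scheme depends on the data-point distribution (Section 2.3), the stability parameter (2.17) is cheap to evaluate for any buffer of points, and monitoring it is a convenient diagnostic when experimenting with the programs. The sketch below computes λ_j for interior points clustered near the left end of one subinterval, first without and then with a single point beyond the knot; the equal weights and the placement of the extra point are illustrative simplifications, not the choices made in the thesis programs.

program discrete_lambda
  ! Evaluates the discrete stability parameter (2.17),
  !   lambda_j = sum w_k alpha_j(t_k) beta_j(t_k) / sum w_k beta_j(t_k)**2,
  ! for interior points clustered in the left half of a subinterval.
  ! Case 1 uses the interior points only (|lambda_j| > 1, unstable);
  ! case 2 adds one point beyond the knot (|lambda_j| < 1, stable).
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: s = 7                 ! interior points of I_j
  real(dp) :: t(s+1), xl, xr, h, a, b, num, den
  integer  :: k, npts, icase
  xl = 0.0_dp
  xr = 1.0_dp
  h  = xr - xl
  do k = 1, s
     t(k) = xl + k*h/16.0_dp                  ! all interior points lie left of the midpoint
  end do
  t(s+1) = xr + 0.25_dp*h                     ! one extended-interval point beyond x_j
  do icase = 1, 2
     npts = s + (icase - 1)
     num  = 0.0_dp
     den  = 0.0_dp
     do k = 1, npts
        a = (xr - t(k))**2*(t(k) - xl)/h**2   ! alpha_j(t_k)
        b = (t(k) - xl)**2*(t(k) - xr)/h**2   ! beta_j(t_k)
        num = num + a*b                       ! equal weights, so w_k cancels
        den = den + b*b
     end do
     print '(a,i2,a,f10.5)', ' points used =', npts, '    lambda_j =', num/den
  end do
end program discrete_lambda

With the points confined to the left half of the subinterval the parameter falls below -1, and a single point beyond the knot brings it well inside the stability region, which is exactly the behavior the extended interval is meant to provide.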
We cannot overemphasize the importance of the factors that affect the size and execution speed of the implemented programs in view of the fact that the algorithm is to be incorporated into a package of interactive programs. We will discuss three implementation problems: the adaptive error criteria, the data structures and storage organization, and the knot prediction problem. The data compression algorithm essentially consists of two processes: computing the piecewise cubic polynomial interpolant for a given subinterval and determining the location of the knots. The choice of an interpolation scheme and an error norm for each segment is made adaptively and the knot is chosen to maximize the length of the subinterval while maintaining faithful approximation. Two dif- ferent error norms are used as criteria for faithful approximation, i.e., the maximum error norm scaled by the display size when the linear scheme is used and the square of the maximum pseudo distance when the non- linear scheme is used. A simple criterion for switching from one scheme to the other is used in the program: a curve segment is defined to be nearly vertical if the maximum value of the first-order approximation to the slope for the subinterval scaled by the display size exceeds a threshold value THRESH. Several values of THRESH were tested and THRESH = 100 proved to be most effective. As we will see, different values of the maximum permissible error EPS are used for different test 57 _3 examples to obtain high compression. EPS = 10 seems to be a reasonable choice in most cases. As mentioned before, we use one extra point for the extended interval instead of choosing a. = 0.3 because it simplifies the computation without significantly affecting the performance of the algorithm. When there is no data point within a subinterval, we compute the value of m. using a cubic polynomial interpolating through four neighboring points, x j-r x j' S-UM' t i(j)+2* The second implementation problem of importance is the data structures and the storage organization for the computer algorithm. The final data structure containing X(N), Y(N), and M(N) generated by the data compression algorithm is a simple two-dimensional array. When there is more than one curve or function (this is the case for our application), we need to have more than one such array. Then, there exist two alternatives for the organization of the data which will also affect the coding of the program in a minor way. We can have either the same set of knots for all functions or separate sets of knots for each function. The former scheme is considered to be superior to the latter due to the following two reasons. The first reason is that the second scheme is more time-consuming computationally and requires larger arrays than the first one even though it can reduce the number of knots because the operation of keeping track of different knots for each func- tion becomes very messy for the one-sided step-by-step computation. The 58 second reason is that the first scheme is better suited for interactive graphical processing since the use of one set of knots would result in faster display of curves and more efficient searching through the arrays. In practice, we maintain separate arrays for X(N), Y(N), and M(N), a one-dimensional vector for X(N), a multidimensional array for the set of Y(N)'s and a multidimensional array for the set of M(N)'s. The question of the size of these arrays is discussed together with the problem of temporary storage. 
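One possible realization of the storage layout just described is a single knot vector shared by all curves together with two-dimensional arrays of ordinates and slopes, from which a displayed point is regenerated by a knot search and one cubic Hermite evaluation. The sketch below is an illustration under stated assumptions: the array sizes, the made-up data, and the simple linear search are not taken from the thesis programs.

program display_sketch
  ! One possible layout of the compressed picture: a single knot vector X
  ! shared by all curves, with two-dimensional arrays Y and M holding the
  ! ordinates and slopes of each curve at those knots.  A displayed point is
  ! regenerated by locating the subinterval and evaluating the cubic Hermite
  ! segment P_j.  All data below are made up for illustration.
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: nknots = 4, ncurves = 2
  real(dp) :: x(nknots), y(nknots,ncurves), m(nknots,ncurves)
  real(dp) :: xq, h, a, b, g, d, val
  integer  :: j, c
  x      = [0.0_dp, 1.0_dp, 2.5_dp, 4.0_dp]       ! shared knots
  y(:,1) = [0.0_dp, 0.8_dp, 0.6_dp, 0.1_dp]       ! ordinates, curve 1
  m(:,1) = [1.0_dp, 0.2_dp, -0.3_dp, -0.1_dp]     ! slopes, curve 1
  y(:,2) = [1.0_dp, 0.9_dp, 0.4_dp, 0.4_dp]       ! ordinates, curve 2
  m(:,2) = [0.0_dp, -0.2_dp, -0.2_dp, 0.0_dp]     ! slopes, curve 2
  xq = 1.7_dp                                     ! abscissa to be displayed
  c  = 1                                          ! curve number
  ! locate the subinterval I_j = [x_{j-1}, x_j] containing xq
  j = 2
  do while (j < nknots .and. xq > x(j))
     j = j + 1
  end do
  ! evaluate P_j(xq) = m_{j-1}*alpha + m_j*beta + y_{j-1}*gamma + y_j*delta
  h   = x(j) - x(j-1)
  a   = (x(j) - xq)**2*(xq - x(j-1))/h**2
  b   = (xq - x(j-1))**2*(xq - x(j))/h**2
  g   = (x(j) - xq)**2*(2.0_dp*(xq - x(j-1)) + h)/h**3
  d   = (xq - x(j-1))**2*(2.0_dp*(x(j) - xq) + h)/h**3
  val = m(j-1,c)*a + m(j,c)*b + y(j-1,c)*g + y(j,c)*d
  print '(a,f5.2,a,i2,a,f10.6)', ' Py(', xq, ') for curve', c, ' =', val
end program display_sketch

Keeping one shared knot vector makes this search identical for every curve, which is the reason given above for preferring a common knot set over separate knot sets per function.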
We will discuss the problem of how to manage the buffers which provide the work space for the algorithm. This problem is important because extra storage, in addition to the final data structure, should be set aside to temporarily save the small set of input data points relevant to each computation step. To be specific, it is necessary to maintain buffers for T(n) and F(n), i.e., {t_{i(j-1)+1}, ..., t_{i(j-1)+s_j}, t_{i(j)}, t_{i(j)+1}} and {f_{i(j-1)+1}, ..., f_{i(j-1)+s_j}, f_{i(j)}, f_{i(j)+1}}, for the computation step involving the subinterval I_j. The length of each buffer, when one extra point for the extended interval is counted, is (s_j + 2), where s_j is different for each subinterval. Although the observed average of s_j is about 15, we could observe from experiments with various buffer schemes of fixed or variable size that buffers with lengths of 30 to 40 words are needed to prevent serious degradation in the performance of the compression algorithm. In one scheme with buffers of fixed length 30, we used a set of heuristics to delete appropriate data points from the buffer whenever it was full until we obtained the desired knot.

The knot prediction scheme works with successive trial locations of the knot x_j. Let h_j^(K) be the K-th trial length of the subinterval and a_K the error measure EMAX obtained with it. If a_{K+1} - a_K ≠ 0, then

    h_j^(K+2) = [h_j^(K) (a_{K+1} - EPS) - h_j^(K+1) (a_K - EPS)] / (a_{K+1} - a_K).

Otherwise, choose a point whose index is determined from i^(K+1) and i^(K), the indices of the points which were selected previously. This is repeated until we find the desired knot. The results of the experiments with this knot prediction scheme are summarized in Table 3.1. Examples 1 to 5 are described in Section 3.2.

Table 3.1  Efficiency of the Knot Prediction Scheme

    Example No.    No. Input Data Points    No. Knots    Efficiency (%)
         1                  209                 10            74.7
         2                  367                 22            65.6
         3                  398                  6            84.8
         4                  358                 33            50.5
         5                  120                 14            60.2

It should be noted that different knot prediction schemes lead to different sets of knots and interpolants. This is due to the fact that the algorithm does not exhaustively search through the solution space to find the optimal set of knots, i.e., the minimum number of knots which preserves the smoothness of the approximation. It has been observed from experiments with various knot prediction schemes, including "no" prediction, that the choice of any particular knot prediction scheme does not significantly affect the overall performance of the algorithm in terms of its compression power and the smoothness of approximation. This is a very nice property, since it is consistent with our original objective for the knot prediction scheme, i.e., simplification of the computation without bad side effects.

A source listing of the FORTRAN programs, which do not place a limit on buffer size and which use the above knot prediction scheme, is given in the Appendix.

3.2 Examples and Test Results

We carefully selected a set of five examples which range from a highly smooth function to a square-wave function. Although the tests were carried out in a batch mode (not on-line), the input data, which were generated by an ODE integrator (we used a subroutine package called DIFSUB by Gear [16]), were directly fed into the compression package, and the output curves were drawn using a CALCOMP plotter. We would be able to obtain curves of the same quality as these if high-resolution display terminals were used. The test results are obtained from the version of the computer programs with buffers of large size and the knot prediction scheme described in Section 3.1.
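The knot prediction of Section 3.1 is a secant step on the relation between the trial subinterval length and the observed error: given two trials, it extrapolates to the length at which the error should equal EPS. The sketch below applies that step to a stand-in error model with fourth-order growth in the length, in the spirit of Theorem 2.1; the model function, tolerance, and iteration limit are illustrative assumptions rather than quantities from the thesis programs.

program knot_prediction
  ! Secant-type knot prediction of Section 3.1: given two trial subinterval
  ! lengths h1, h2 with observed errors a1, a2, predict the length at which
  ! the error equals EPS.  The error model err(h) is a stand-in for the
  ! EMAX actually measured by the compression program.
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  real(dp), parameter :: eps = 1.0e-3_dp
  real(dp) :: h1, h2, hnew, a1, a2
  integer  :: it
  h1 = 0.05_dp
  a1 = err(h1)
  h2 = 0.10_dp
  a2 = err(h2)
  do it = 1, 8
     if (a2 == a1) exit                          ! secant step undefined
     hnew = (h1*(a2 - eps) - h2*(a1 - eps))/(a2 - a1)
     h1 = h2
     a1 = a2
     h2 = hnew
     a2 = err(h2)
     if (abs(a2 - eps) < 0.05_dp*eps) exit       ! close enough to the tolerance
  end do
  print '(a,f10.6,a,es12.4)', ' predicted subinterval length h =', h2, '   err(h) =', a2
contains
  function err(h) result(e)
    ! stand-in error measure growing like h**4, in the spirit of Theorem 2.1
    real(dp), intent(in) :: h
    real(dp) :: e
    e = 2.0_dp*h**4
  end function err
end program knot_prediction

In the program the predicted length only selects the next candidate point; the acceptance test against EPS in step [D] of Section 2.1 is still applied before a knot is accepted.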
Computation time statistics confirmed our belief that the data compression process is fast enough to give us real- time response. The test results are summarized in Table 3.2 where the entries except for EPS are the number of knots and the percentage figures within the parenthesis are the compression ratios defined as follows: r^mr^co-;™ v.^+-:« - Numb er of knots Compression ratio = n— ■— — . ,._,,.,. nn i n+c - r Number of input points In the following pages, the differential equations and/or circuit diagrams and the printed output for X(N), Y(N), and M(N) together with the graph of Py(x) are given for each example. These results illus- trate the power of the adaptive compression algorithm. 65 Example 1. y' =-10 4 {y - sin x - tanh 10 4 (x-tt/2)} + cos x + 10 4 seen 10 4 (x-tt/2) y(0) = X(J) Yf.ll NMJ) ( 1 ) 0.0 0.0 -0. 100C0E 05 ( 2) 0.15R55D- -"2 -0.99841 F 00 0.30346F 01 ( 3) 0.1 6 86 80 or -0.83212E 00 0. 9723! E 00 ( 4) 0.72?15D CO -0.33900 F io 0.74039E on ( 5) 0.118130 01 -0.74905 c - -or 0.37851F 00 ( 6) 0.157040 01 0.17480E- ■0? 0. 23917E- -CI ( 7) 0.157620 01 0,200001= 01 -0.60439E- -02 ( 8) 0.25361D 01 0.15273F 01 -0. fc5951F 00 < 9) 0.41 26 3D 01 0.16688P 00 -0.55114E 00 (1 0) 0.51 9P8D n l O.U?25E 00 0.48089F 00 66 Figure 3.2 67 Example 2. (Pulse transformer): y' = 7.48(l-y) - 22.3(0.325 cos 22. 3t - sin 22.3t)e y' =-14.96(0.0728+y) + 44.6(0.325 cos 44.6(t-2) • aa en* 9U -14.96(t-2) iin 44.6(t-2))e , -7.48t , t < 2 - s - t > 2 y(0) = o i + I t V i — t Figure 3.3 X ( J ) Y ( J ) M ( J) ( 1) 0.0 o.o 0. 23250F 00 ( 2 ) 0.76911D- -01 r.89993F 00 0. 13372F 02 ( 3) 0.124900 cc 0.1 3237F 01 0.27665F 01 ( 4) 0.15R96D 00 0.1.31 891= 01 -0.32479E 01 ( 5) 0.2*299D rr -•.9 3473? 10 -~.28170F 01 < 6) 0.314310 00 0.90819= 00 0.18764F 01 ( 7) 0.44189D 00 0.10 38 3F 01 -0.675815 00 ( 8) ".514290 00 1.996^5= t)*\ -0.57649P "0 ( 9) 0.68864D 00 O. 1 0048E 01 -0.49651F- ■07. (10) 0.1RS22O 01 0.968Q6F 00 -0. 15975F 00 (11) 0.20012D 01 0.95Q93 r 00 -0.241?5F 00 (1?) 0.2104^0 01 -O. 1 0? -0.7' 799<=- ■01 0.47347E- -01 (18) 0.270720 01 -0.7?774F- ■01 0.48082F- -01 (19) 0.31 758D CI -C.72B00F- -"I 0.33710E- -CI ( 20) 0.3 722 4H 0! -0. 7*>flC0F- -01 0.26133F- -01 (21 ) 0.44433D 01 -0. 72800 E- ■01 0.29413E- -01 (22) 0.51+^60 0! -0 . 72800F- -01 0.29413F- -01 68 o ru o 01 □ ID o CO a o 0,00 0.9M 1.8 2.82 3.76 4.70 a . i Figure 3.4 Example 3. (Fast-rise current switch): y' = -5.0 x 10 5 y + V/6. x 10' 5 69 V = 0.0, t < 0.1 + 6.0, t > 0.1 - 6.0, t > 0.2 y(o) = +6V u 2N2410I 200 pf -It- 500 V 200pf r -wv- 500 h = R i = L 2 = Ro = 2N2696 40 yh 20 20 yh 10 Figure 3.5 X( J ) Y( J) "( J) 1 ) 0.0 0.0 ?) o.iooooo 00 O.o 0.^8968F-2? 3) n . ino?9n 00 n.?o~~ n p t -0 .34617F-C6 4) 0. ?ooooo oo o.?oooo^ 00 -0.25975F-O6 5) 0.2007.30 oc -0.70000F 00 0. A00A5F-0B 6J n # 3prr | .in /»r> -0.20001 F O r ,358?3F-0ft 70 Figure 3.6 71 Example 4. (Transformer): = 0.89{-161y - 158.8cos lOOt + 103.542 sin lOOt 123(0. 16e* 38t + 1.16e" 284t )} y(0) = Figure 3.7 X(J) Y(J) R l = 5 h = 3 L l = 0. 05h 4 = 0. 06h M = 0. 04h M( J) 1) 0,0 2) 0.142050-01 3) 0.303650-01 4) 0.438610-01 5) 0. 559960-01 6) 0.745660-01 7) 0.975160-01 8) 0.109580 00 ( 9) C.13157D 00 (10) 0.143080 CO (11 ) 0.163880 00 (12) 0.1763 60 00 (13) 0.1 96700 00 (14) 0. 2073m 00 (15) 0.228600 00 (16) 0.239290 00 (17) n . 26^690 00 (18) 0.?69750 00 (19) 0.277000 00 (20) 0.295ft5D 00 ( 21) 0.307370 00 (22) 0.327180 oo (23) 0.33897D 00 (24) 0.35891D 00 (25) n. 
371470 ro (26) 0.391.900 00 (27) 0.403610 00 (2«) ^.423650 CC ( 29) 0.4451 50 00 (30) 0.458360 00 (31) 0.47824D 00 (32) 0.492360 00 ( 33) 0.500760 00 0. o -0.32116F 03 0. 22562P 00 0. 88327F 02 0, .90724F 00 ^.91082F 01 0. 3C718F 00 -0.96569F 02 0. . 748 76E 00 -0. 66714F 02 0, .4U31F ^0 0.93859E 02 0. 9196R C 00 -0.34839F 02 0. .63991F- ■01 -0. 10298F 03 0, 819 79F 00 0.58873F 02 r. 1 3358 c CO 0.1Q230E 03 0. , 76931E 00 -0.64462E 02 0. 30925F 00 -0.95876F 02 0. .68074P 00 0. 76059F 02 0, •26553F or> 0.99375 c 02 0. 64637^ 00 -0.79137E 02 0. .31776F 00 -0. 97668P 02 0, .59678F 00 ">.8 40 8 7F 02 0. ?2963E 00 0.1 C397E 03 0. 79387P 00 0.56232F 02 f) t 296?9F 00 -0.99150F 02 0. 731 68F 00 -0.66680F 02 0, ,?857' , « : n g 0.995 56F 02 n. 743 74 c 00 0.65262F 02 0, ,25622^ 00 -0. 10041E 03 0, 8^598F rtQ -D.56611F C2 0. 1C754C 00 0.103 71E 03 0. .84195P 00 0.50920F 02 0. 74913P- -ox -0.10294E 03 c. 84690 p 00 0.52474E 02 0, 2 4^39F 00 0.994C8F 02 0. 75787= 00 -0.6531 7E 02 0, -47127F 00 -0. 88478E 02 0, -94214F 00 -0.236C9E ^2 72 Figure 3.8 Example 5. (Double-differentiator): y' = -lOy + 10 3 (l-10t)e" 10t y(o) = o 73 v. = V(l - e-t/T A = 10 V = -10 T = 0.1 R l ■ R " Z in K-iLi — KoLo = T Figure 3.9 X( J) Y( J) "(J) ( 1) 0.0 0.0 o.irocoE 04 ( 2) 0.401410- -01 0.21477P 02 0. 19261E 03 ( 3) 0.677140- -01 0.22755E 02 -0.56950E 02 ( 4) 0.841600- -ri 0.21010E 02 -0. 1A266E 03 ( 5) 0.142530 00 0.98468E 01 -0.19740E 03 ( 6) 0.193910 00 0.84666E 00 -0.13984E 03 ( 7) 0. 223280 00 -0.27871E 01 -0. 106C1E 03 ( 8) 0.354810 en -^.79039E 01 0.42453E 01 ( LG-,II) AD(5,10) ttpxT pvr(2r*4) AD (5,70) NSV,EPS «m^t| T?,F10.31 & D ( 5 , 1 2 ) Fmt PMAT( 18*V4) T" r F(6,15) ITFXT,FP C . RVAT( 1HQ,2QA4/ 5X , 4HFP?= , F? 0.3/ ) *p(5, rM ' r ) rjvi(i) IV INPUT POINTS. AT* pn^NTS. = 1. AD(5,FMT,FND=^0) X(IN),Y(IN) = IN+1 c,r yp 30 UN. LE. 4501 = T N'-l. ( 1 ) = X( 1 ) ( 1 )=Y ( 1 ) NO=X(TM) A!JT n M) PETURN H&RDCCPY PLOT OUTPUT. 160 170 1 2" 50 C 00 = X0( YP( K=l DC HX IF OX 1 = OX XJ IF T F K = XP XI X2 YP GC IF K = XP YP cv CAL PFT ENC XS( NK) /3 00. 1 )=XS( 1) 1 )=YS(1) 150 = X* (HX = hX DX + = HX = XP ( XT (K. K*l = XS = XI (K) T r (K. K*l 1.) )/(HX*HX) 160 GE.350) GC to 200 = XM J) = YM J) NIJF R*FC (xP,YP,l,K f LBLX, 1,LBLY»1,XLEN, YLEN) 84 SUBROUTINE ftKNOTUNtNCNT ,NCN,NCP) IMPLICIT REAL*8(A-H,P-Z) REAL*4 Y(450),YS (50), DM< 50) DIMFNSIPN X(450) ,XS(50) COMMON TEND,YMAX,THRESH,EPS,X,XS,Y,YS,DM,NL,NR,NK D.4TA IWR/O/ C r r r f r r 10 20 r r r 30 40 50 P IECEWISE TM THE KNOTS AR AND FIXED AN NCM#CrMpiJTA Nrp = «C0MPUT,« mcnt= #compijt NT N=o NCP=0 MC NT =0 TNITI ALTZF IF (IWR.NE +WPITP{6,3 NL = 1 DIOLD=(Y( 0ST=C!OLD NR=2 DI1=(Y(NR D!2 = (0U- ADU=DABS ADI2=DABS IFUOIl.L IF(ADI2.L I F ( MP . GT . IF(DIt*DI AOST=DABS IFIDMAXK NP =NR + i IF(N'R.GE. DI0LD=0I1 GO tp i o BK POLYNOMIALS ARE CALCULATED ANO THE LOCATION OF E DETERMINED. EACH KNOT IS SUCCESSIVELY PREDICTED D THE SLOPE AT EACH KNCT IS CALCULATED. TIONS WITHOUT PREDICTION. """IONS WITH PREDICTION. ATION-S FOR IDEAL CASE. AND PREDICT T H E 2-ND KNOT. .0) 10) NK f XS(l) 2)-Y(l) )/(X(2)-X( 1) ) M )-Y DIOLD (DID (012) (NR))/(X(NR+l)-X IF< IWP. NE.O) ♦WR ITF(6,60) NL,NP,X(NP),Y(NR ) ,H,EMAX IF(E*AX-EPS) 160,200,170 160 IF(TND) 161,161,200 161 IF (NR.GE.TNI GO TO 2C0 NR=NP+1 NE = NR«-1 IF(NF.GT.IN) MF=IN IND=-1. GO T P 150 17^ IF(TMO) 250,172,172 172 IF(NR-NL.LE.l) GC TO 200 NF=NR *F =NP-1 TND = 1 GO tc 150 c r PRESENT PHIMT IS S* VFD * S * KNOT. r 200 NK=NK-H YSCMK )=Y(NR ) XS (NK)=X(N»,3X, »X(I ) = • ,E 12. 5, 3( 2X, E 12. 
5) // 1 13X,»NL« ,3X,» NR' ,6X, » X ( NP ) • ,9X, «Y(NP P,11X, • H' , 12 X, • EM AX • ) IF(NR.GE.IN) RETURN IF('NK.GF.50) RETURN GO tp 30 END 89 FUBPCUTINE PCUBF(NEtEMAX) IMPLICI T PFAL*8( A-H, P-Z) PEAL*4 Y(450) f YS (50),DM(50) DIMENSION X<450) ,XS(50) COMMON TND,YMAXfTHRFSH f EPS,X,XS f Y,YStDM,NL,NR,NK r C PIECPWISE CUBIC POLYNOMIALS ARE DETERMINED FOR THE INTERVALS C - 1 ) I) ) = ( 1 X(NR1) X(NL) X(N»1 ) 1) f,n TC 70 XCN'U. ) -X(NL) -X(M»1) Y0*B4*B5-A1*A3*Y{NL1) ) /( A4*B4*B5) ♦{ Y0*B4*Bl-»-Al*A4* Y(NLM/ (A3*B4*B1)+(YC*B5*B1-A3*A4*Y(NR1I )/( A1*B5*B1) DM(MKX ) = Al*Y(NL)/(-A3*Bl)+*Y=YV^X IF(YB.EQ.0.) YB=1. H W X=CVX*TFMD/YR IFCDVX.LT.THRFSH) RETURN C NONLINEAR IN TEB&0LATI0N. S=(Y(NP )-Y(W-l) )/(X(NR|-XfNP.-l) I 30 SLP r LO-=SL0PE FX=F(XJ] ,XJ2,S,SLP0LD) FFX = F{ XJ1 ,XJ2»S, FX) PNX=PFX-? .*FX*SLPCLD TF(FNX.EO.0.» GC TO 50 SLCPF=5LPCLD-(FX-SLPCLC)**2/ENX IP(rABS(l*-NP2 ) /2 op T 60 1 TF{ c I.EO.p? ) GP tp 30 TF(( \oi-NP2)* ) NL,NP, X ( NR ) , Y ( NR ) , H, EMAX 100 FORMfiT(i0X,2(2X,I3) ,4 ( 2X ,E12. 5 ) ,3X, 1H*» IE(EPS-EMAX) 30,20,40 ?n EI=PPS PE T UPN 30 EJ=EMAX NPJ=NR 00 TP 50 40 EI = FM*x NPT=NR C CUCUL*TE THE NEXT KNO T . 50 IFCNPJ-NPI .LT. 6) GO rp 60 GO Tf 10 r IF nHERE ARE LESS THAN 6 PTS. IN BETWEEN, RETURN. 60 EI=^m*x RETURN END 96 VITA Won Lyang Chung was born in Seoul, Korea, on May 27, 1947. He studied under Samil Scholarship at Seoul National University and graduated with a B.S. in Electronics Engineering in 1969. He was a Trainee and graduate student at The Johns Hopkins University, Baltimore, Maryland, during the academic year of 1969-1970. He entered the Uni- versity of Illinois at Urbana-Champaign in the fall of 1970, and since then, has been a graduate research assistant in the Department of Com- puter Science. He is a student member of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. He presented an abstract of thesis research at the ACM Computer Science Conference 1975, in Washington, D. C. Form AEC-427 (6/68) AECM 3201 U.S. ATOMIC ENERGY COMMISSION UNIVERSITY-TYPE CONTRACTOR'S RECOMMENDATION FOR DISPOSITION OF SCIENTIFIC AND TECHNICAL DOCUMENT I See Instructions on Reverse Side ) 1. AEC REPORT NO. COO-2383-0022 2. TITLE Automatic Curve Fitting for Interactive Display 3. TYPE OF DOCUMENT (Check one): Q"3- Scientific and technical report Pi b. Conference paper not to be published in a journal: Title of conference Date of conference Exact location of conference. Sponsoring organization □ c. Other (Specify) 4. RECOMMENDED ANNOUNCEMENT AND DISTRIBUTION (Check one): fr^ a. AEC's normal announcement and distribution procedures may be followed. ~2 b. Make available only within AEC and to AEC contractors and other U.S. Government agencies and their contractors. 3 c. Make no announcement or distrubution. 5. REASON FOR RECOMMENDED RESTRICTIONS: 6. SUBMITTED BY: NAME AND POSITION (Please print or type) C. W. Gear Professor & Principal Investigator Organization Department of Computer Science University jgfty Illinois at Urbana-Champaign 7. ABt CONTRACT ADMINISTRATOR'S CO RECOMMENDATION: FOR AEC USE ONLY :NTS, IF ANY, ON ABOVE ANNOUNCEMENT AND DISTRIBUTION PATENT CLEARANCE: LJ a. AEC patent clearance has been granted by responsible AEC patent group. LJ b. Report has been sent to responsible AEC patent group for clearance. LJ c. Patent clearance not required. BIBLIOGRAPHIC DATA SHEET 1. Report No. UIUCDCS-R-75-713 J. Title and Subtitle Automatic Curve Fitting for Interactive Display 3. Recipient's Accession No. 5. 