Report No. 305

An Implicit Enumeration Program for Zero-One Integer Programming

by
T. Ibaraki
T. K. Liu
C. R. Baugh
S. Muroga

January 6, 1969

Department of Computer Science
University of Illinois
Urbana, Illinois 61801

ABSTRACT

This paper describes the theoretical background and computational experience of a recently implemented computer program for the solution of zero-one integer programming problems, called ILLIP (ILLinois Integer Programming). The program is based on the implicit enumeration method. In order to speed up the original implicit enumeration method of Balas, Glover, Geoffrion, and Fleischmann, a few gimmicks are proposed. Among these, the biggest speed-up is attained by introducing a new column which works as a tag for each inequality, indicating whether or not it should be checked for the current partial solution generated during the implicit enumeration procedure. In addition to the well-known conditions under which a variable can be underlined, a new condition was found.
A new concept of pseudo-underlining is also proposed. ILLIP was applied to many problems to test its computational efficiency. The logical design of optimum digital networks, for which our program was initially designed, was solved very efficiently. Other problems known in the literature were also solved. The method worked better than those proposed by other authors for some problems but not for others.

I. INTRODUCTION

The implicit enumeration algorithm for zero-one integer programs has been formulated by various contributors, including Balas, Glover, Geoffrion, Fleischmann, and Lemke-Spielberg, among others. This wide-spread attention is mainly due to the fact that the implicit enumeration method appears to be quite efficient for a number of problems. In addition, a "reasonably good" feasible solution is often available if the algorithm is halted prior to normal termination, and addition and subtraction are the only arithmetic operations required, thus eliminating round-off problems due to multiplication and division.

Our computer program, called ILLIP (ILLinois Integer Programming Code), was initially designed for a particular class of problems encountered in the logical design of optimal digital networks [19], [20], [3]. These problems may be characterized by: (a) more inequalities than variables, creating a relatively "small" solution region; (b) a very sparse coefficient matrix whose non-zero coefficients are mostly 1 and -1; and (c) an objective function with coefficients 0 and 1. Considering these properties, we adopted and modified the implicit enumeration algorithm. Our main effort was to speed up the enumeration procedure, which is roughly based on the backtracking scheme due to Glover and the checking procedure of inequalities publicized by Fleischmann. Several new techniques were incorporated for the speed-up, and they will be explained in detail.
In addition, a new concept called the "pseudo-underline scheme" was introduced. One of the typical features of any integer programming algorithm, however, is its erratic behavior: an algorithm which can solve one problem very efficiently may not be very efficient for other problems. Although our approach was initially designed for the problems mentioned above, it also seems to be reasonably efficient for other problems taken from the literature, as will be shown in Section IV. However, a further speed-up could usually be obtained by making use of intrinsic properties of a given problem. Some gimmicks were tested for the traveling salesman problems and the covering problems. The results for these gimmicks will also be given in Section IV.

One of the largest problems successfully solved had 258 variables and 674 inequalities. The first feasible solution was obtained in less than 20 seconds, an optimum feasible solution in less than 30 seconds, with a total computation time of less than 15 minutes on the IBM 360 Model 75. The largest problem had 395 variables and 1012 inequalities and took 5 minutes and 15 seconds on the same machine. For this problem, the algorithm was modified to make use of the structure of the given problem.

II. BASIC IMPLICIT ENUMERATION

The integer linear programming problem is generally stated as:

    minimize c . x                                         (2.1)
    subject to b + A x >= 0

where c is a vector of n non-negative coefficients, b is a column vector of m constants, A is an m x n matrix, and x is a column vector of the n zero-one variables. If the original integer linear programming problem is not given in the above form, we can easily transform it into this form. The implicit enumeration algorithm for integer programming is an enumeration scheme which exhausts all possible variable assignments without explicitly examining all of the 2^n combinations of an n-variable problem.
It can be interpreted as a special type of "branch and bound" method. An excellent explanation of the basic implicit enumeration scheme is given by Geoffrion. Before we give the details of our algorithm we need some definitions and notations.

Any x whose coordinates are 0 or 1 is called a solution. A solution that satisfies the constraints b + A x >= 0 is called a feasible solution, and a feasible solution that minimizes c . x is called an optimal (feasible) solution. A partial solution S is defined as an assignment of the values one and zero to a subset of the n variables. Any variable not assigned a value by S is called free. We adopt the notational convention that the symbol j denotes x_j = 1 and the symbol -j denotes x_j = 0. Hence, if n = 5 and S = {3, 5, -2}, then x_3 = 1, x_5 = 1, x_2 = 0, and x_1 and x_4 are free. The order in which the elements of S are written will be used to represent the order in which the corresponding variables are set to 1 or 0 by the algorithm. A completion of a partial solution S is defined as a solution that is derived from S by specifying all free variables. Implicit enumeration involves generating a sequence of partial solutions and simultaneously considering completions of each partial solution. As the calculation proceeds, feasible solutions are discovered and the best (minimum c . x) one yet found is stored as the incumbent. The entire enumeration scheme is illustrated in Figure 1, where our modification of the algorithm is incorporated.
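To make the definitions concrete, the following small sketch (Python here for readability; ILLIP itself was written in FORTRAN IV, and these helper names are ours) tests whether a 0-1 vector is a feasible solution of b + Ax >= 0 and evaluates its objective value:

```python
def is_feasible(b, A, x):
    # A solution x is feasible when every inequality b_i + sum_j a_ij x_j >= 0 holds.
    return all(b_i + sum(a * x_j for a, x_j in zip(row, x)) >= 0
               for b_i, row in zip(b, A))

def objective(c, x):
    # Objective value c . x of a solution x.
    return sum(c_j * x_j for c_j, x_j in zip(c, x))
```

A partial solution then corresponds to fixing a subset of the coordinates of x, and a completion to any 0-1 choice for the remaining coordinates.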
Given a partial solution S, we can often detect some of the following three "S-conditions" (these conditions are not mutually exclusive):

(a) S forces free variables to be specified in order to obtain feasible completions of the present partial solution S;

(b) there is no feasible completion of S, or even if there are some feasible completions, there is no better (smaller c . x than the incumbent) feasible completion of S; or

(c) the best feasible completion of S is found and it is better than the incumbent.

If S forces some variables to be specified, we augment S by the specified variables with underline. For example, if S = {3, 5, -2} forces x_1 = 0, the next partial solution is S = {3, 5, -2, -1}. In other words, an underline is attached when the other partial solution, which consists of the present partial solution and the opposite value of the specified variable (in the above example, x_1 = 1), need not be checked for some reason, or has already been examined.

Fig. 1 Implicit Enumeration Algorithm

    AGMT-VAR: Augment S on the right by j or -j, where x_j is any free variable.
    CHK-IEQ:
      - Are any variables forced to 0 or 1? If yes, augment S on the right by
        the forced variables with underline.
      - Is it found that a better feasible solution than the incumbent does
        not exist? If yes, go to Backtrack.
      - Is there a better feasible solution than the incumbent? If yes,
        replace the incumbent by the new better feasible solution and go to
        Backtrack.
      - If none of these can be decided, go to AGMT-VAR.
    Backtrack: Locate the rightmost element of S which is not underlined. If
      none exists, terminate; otherwise, change the sign of the element,
      underline it, and delete all elements to its right.

When it is discovered that all feasible completions of S, if any, are worse than the incumbent, we simply backtrack. However, if the best completion of S is found and it is better than the incumbent, we replace the incumbent with this solution and then go to backtracking.
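The Backtrack box of Figure 1 can be rendered as a short routine (a Python sketch of ours, not ILLIP's actual code; S is held as a list of (element, underlined) pairs, a positive element j meaning x_j = 1 and a negative one meaning x_j = 0):

```python
def backtrack(S):
    """One Backtrack step: locate the rightmost non-underlined element,
    flip its sign, underline it, and delete all elements to its right.
    Returns None when no such element exists, i.e. enumeration terminates."""
    for k in range(len(S) - 1, -1, -1):
        j, underlined = S[k]
        if not underlined:
            return S[:k] + [(-j, True)]
    return None
```

For example, backtracking from S = {3, 5, -2} with only -2 underlined flips 5, giving {3, -5} with -5 underlined.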
The backtracking scheme mentioned above is the same as Glover's and is used because of its low storage requirement. (A detailed description of backtracking is found in references [10] and [8]; a rough idea may be seen in Figure 1.) If with a given S we cannot detect any of the three "S-conditions", then we must pick one of the free variables and assign 1 or 0 to this variable, augmenting S on the right without underline (AGMT-VAR). Then go to CHK-IEQ. As is easily seen, the rate of convergence of implicit enumeration is greatly affected by the method of detecting the three "S-conditions" for a given partial solution S, and by the way of picking a free variable when no S-condition is detected. These will be discussed in some detail in the next section.

III. DETECTION OF "S-CONDITIONS"

We will discuss our modified detection method of "S-conditions".

III-1 (a) and (b) of the S-conditions (i.e., variable setting and no better feasible completion).

In checking each constraint we may be forced to set some variables to 0 or 1 in order to avoid infeasible completions of the current S. This same check may also reveal the infeasibility of a constraint, or that there is no better feasible solution than the incumbent, and therefore lead to backtracking. ILLIP includes many of the techniques described in the literature [1], [4], [10], [15], [21]. However, some additional techniques which greatly increase the computational speed have been added to our algorithm.

To facilitate our computation, the objective function is converted to an inequality

    z - 1 - c . x >= 0                                     (3.1)*

where z = c . x' and x' is the incumbent solution. This inequality precludes all solutions whose objective value is larger than or equal to z. The initial value of z is set to a known upper bound on the objective function. Each time we encounter a better feasible solution and replace the incumbent, we also replace the value of z in (3.1) by the objective function value of the new best solution.
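The bookkeeping around (3.1) can be sketched as follows (Python, with names of our own choosing): the objective is kept as an extra inequality whose constant term is z - 1, and that term is tightened whenever a better incumbent is found.

```python
def objective_row(c, z):
    """Build row 0 of (3.3): z - 1 - c.x >= 0, i.e. constant term b_0 = z - 1
    and coefficients a_0j = -c_j (all c_j assumed to be integers)."""
    return z - 1, [-c_j for c_j in c]

def tighten(row0, new_value):
    """After a better incumbent with objective value new_value is found,
    replace z by new_value: the new constant term is new_value - 1."""
    b0, coeffs = row0
    return new_value - 1, coeffs
```

With the objective x_1 + x_2 of the running example below and z initially 3, this produces the constant term 2 and coefficients -1, -1, 0, 0.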
If all optimal solutions are desired, replace (3.1) by

    z - c . x >= 0                                         (3.2)

and print all optimal solutions. Inequality (3.2) permits solutions with the same c . x value as the incumbent, thereby permitting multiple solutions.

* The constant -1 in (3.1) is obtained by assuming all c_j to be integers.

Let (3.1) (or (3.2)) be rewritten as

    b_0 + a_01 x_1 + a_02 x_2 + ... + a_0n x_n >= 0.       (3.3)

(2.1) together with (3.3) is denoted as

    b* + A* x >= 0,                                        (3.4)

where b* is an (m+1)-dimensional column vector and A* is the (m+1) x n coefficient matrix. The objective of the implicit enumeration method is merely to find a feasible solution of (3.4) with the dynamic value of z (i.e., b_0 of (3.3)) properly assigned. This change allows us to limit the following discussion to only the feasibility of each inequality. To facilitate this explanation, the following example problem will be carried along:

    minimize x_1 + x_2
    subject to  1 -  x_1 -  x_2                  >= 0
               -3 +  x_1 -  x_2 + 3x_3           >= 0      (3.5)
                1 - 2x_1 +  x_2 -  x_3 - 2x_4    >= 0
                   -  x_1 -  x_2 +  x_3 -  x_4   >= 0

Therefore,

        row number
            0      (  2 )          ( -1  -1   0   0 )
            1      (  1 )          ( -1  -1   0   0 )
    b* =    2      ( -3 )    A* =  (  1  -1   3   0 )      (3.6)
            3      (  1 )          ( -2   1  -1  -2 )
            4      (  0 )          ( -1  -1   1  -1 )

where z is initially set to 3. If

    S = {1, -2},                                           (3.7)

then a new set W, which is a list of the variables occurring in S, is

    W = {1, 2}.                                            (3.8)

Now for any partial solution S we define three (m+1)-dimensional column vectors y(S), l(S), u(S) whose coordinates are

    y_i(S) = b_i + sum_{j in W} a_ij x_j                   (3.9)

    l_i(S) = y_i(S) + sum_{j not in W, a_ij < 0} a_ij      (3.10)

    u_i(S) = y_i(S) + sum_{j not in W, a_ij > 0} a_ij      (3.11)

where W denotes the set of indices of the assigned variables of S. In other words, y_i(S) is the current value of the i-th inequality summed over only the assigned variables of S. The value of l_i(S) gives the minimum value of the i-th inequality, since it is the current value plus all the negative coefficients of the free variables. Similarly, u_i(S) is the maximum value of the i-th inequality for the partial solution S.
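The vectors of (3.9)-(3.11) can be computed as in the following sketch (Python, with 0-based variable indices; representing S as a dict from assigned index to value is our notation, not the paper's):

```python
def y_l_u(b, A, S):
    """Return the vectors y(S), l(S), u(S) of (3.9)-(3.11).
    S maps an assigned variable index to its 0/1 value."""
    y, l, u = [], [], []
    for b_i, row in zip(b, A):
        y_i = b_i + sum(a * S[j] for j, a in enumerate(row) if j in S)
        neg = sum(a for j, a in enumerate(row) if j not in S and a < 0)
        pos = sum(a for j, a in enumerate(row) if j not in S and a > 0)
        y.append(y_i)        # current value over the assigned variables
        l.append(y_i + neg)  # minimum attainable value of inequality i
        u.append(y_i + pos)  # maximum attainable value of inequality i
    return y, l, u
```

With b* and A* of (3.6) and S = {1, -2}, this gives, for example, u_3 = -1 and l_1 = 0, the values used in the checks that follow.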
The following conditions justify the need for y_i, l_i, u_i*:

(1) If u_i < 0 for some i, then the i-th inequality cannot be satisfied by any completion of the partial solution S. The partial solution S is infeasible and we backtrack to consider some other partial solution. This condition is seen in the third inequality of the example (3.5) for S of (3.7), since u_3 = -1.

* The S in y_i(S), l_i(S), u_i(S) will be dropped for notational clarity.

(2) If |a_ij| > u_i and x_j is free, then

    x_j = 0 if a_ij < 0,
                                                           (3.12)
    x_j = 1 if a_ij > 0

must hold. It is easily seen that these conditions hold, since setting x_j to the opposite value would cause u_i to become negative and therefore infeasible. The special case u_i = 0 is particularly powerful, since all free variables with non-zero coefficients will be set according to (3.12). Again in the example (3.5), u_2 = 1 and a_23 = 3; thus x_3 can only be set to 1, with underline. Also, since u_4 = 0, x_3 and x_4 must be set to 1 and 0, respectively, in order to satisfy the 4th inequality.

(3) If l_i >= 0 for some i, the i-th inequality can never assume a negative value for any completion of S and the inequality is not restrictive. In other words, the i-th inequality can neither cause backtracking nor force any variables to be specified, since it is always satisfied. Therefore we can ignore the i-th inequality until backtracking eventually discards the present partial solution S (i.e., some variables of S are eliminated or changed after backtracking). In the example (3.5), l_1 = 0, and regardless of the values assigned to x_3 and x_4 the first inequality is always satisfied for S of (3.7).

In other words, the detection of the S-conditions (a) and (b) can be done only through the above properties (1), (2) and (3). If other effective checking procedures were available, they could also be incorporated without changing the basic scheme of enumeration.
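Properties (1)-(3) amount to the following per-inequality test (a sketch under our own naming; `free_coeffs` holds the non-zero coefficients of the free variables):

```python
def check_inequality(l_i, u_i, free_coeffs):
    """Apply properties (1)-(3) to one inequality.
    free_coeffs: dict {j: a_ij} for the free variables with a_ij != 0."""
    if u_i < 0:
        return ("backtrack", None)   # (1): no completion can satisfy it
    if l_i >= 0:
        return ("ignore", None)      # (3): non-restrictive, skip it
    # (2): any free variable with |a_ij| > u_i is forced.
    forced = {j: 1 if a > 0 else 0
              for j, a in free_coeffs.items() if abs(a) > u_i}
    return ("forced", forced) if forced else ("nothing", None)
```

For S = {1, -2} in the example, inequality 2 forces x_3 = 1, inequality 3 signals backtracking, and inequality 4 (where u_4 = 0) forces x_3 = 1 and x_4 = 0.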
However, we adopted properties (1) and (2) to check S-conditions (a) and (b) because they seem simple and efficient enough for our problems. Several gimmicks were then added to facilitate the computation further, as will be explained hereafter.

The values of the column vectors l, u, y are adjusted whenever a variable is attached to or deleted from the partial solution S, rather than calculating each entry every time they are needed. In this way the amount of computation is greatly reduced.

If |a_ij| > u_i > 0 is to hold, we need at least |a_ij| >= 2 for the condition to work.* Hence it is helpful to the computation to have a column vector d which indicates the existence of coefficients whose absolute value is larger than 1 in each inequality. This is especially effective when the majority of the non-zero coefficients are 1 or -1. In our problem of optimal logic network synthesis, all non-zero coefficients in most inequalities had absolute value 1, except one coefficient. (Some inequalities may have more than one coefficient whose value is smaller than -1 or larger than +1.) In this problem d was created according to the following scheme:

    d_i = 0    if |a_ij| <= 1 for all j = 1, 2, ..., n,
    d_i = j_0  if |a_ij_0| > 1 for j = j_0 and |a_ij| <= 1 for j != j_0,
    d_i = *    otherwise.

With the aid of these m + 1 values d_i we can save much searching and testing to find |a_ij| > u_i > 0.

Another new scheme adopted in our program is the "cyclic checking" of constraints. By this we mean that the inequality check proceeds from row 0 to row m and repeats from 0 to m over and over again until no more changes occur, i.e., none of the three S-conditions occurs.

* This assumes all a_ij are integers.

This scheme leads to more variables being set and consequently the preclusion of more solutions in our enumeration.
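The construction of the marker d_i can be sketched per inequality as follows (Python, our naming; indices are 1-based here to match the text's convention):

```python
def make_d(row):
    """d_i for one inequality: 0 if every |a_ij| <= 1; the (1-based) index j0
    if exactly one coefficient has |a_ij0| > 1; '*' otherwise."""
    big = [j + 1 for j, a in enumerate(row) if abs(a) > 1]
    if not big:
        return 0
    if len(big) == 1:
        return big[0]
    return '*'
```

For the second constraint of (3.5), whose only large coefficient is the 3 on x_3, this yields d_2 = 3; rows with only unit coefficients yield 0 and can be skipped when searching for |a_ij| > u_i > 0.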
In addition to the columns l, u, y, d discussed above, we introduce a new (m+1)-dimensional column t which works as a tag for each inequality, indicating whether or not it should be checked, in order to facilitate the above cyclic checking. If t_i = 1, the i-th inequality should be checked to determine whether some S-conditions hold. If t_i = 0, the check of the i-th inequality is skipped. We set the initial value of t and update its value as follows:

(a) The initial value is

    t_i = 1 if l_i < 0,
    t_i = 0 if l_i >= 0,    for i = 0, 1, ..., m,

because l_i >= 0 demonstrates that the i-th inequality is non-restrictive and has absolutely no influence on any completion of S.

(b) After checking the i-th inequality, set t_i = 0, since there is no need to check this inequality again unless t_i is set back to 1 by the next condition.

(c) Whenever a free variable x_j is specified, the vector t must be updated.

(c.1) If x_j = 0, then

    t_i(S') = 1 if a_ij > 0 and l_i(S') < 0,
    t_i(S') = 0 if a_ij < 0 and l_i(S') >= 0,              (3.13)
    no change otherwise,

where the new partial solution S' is S with x_j = 0 added.

To prove this, first recall that the check of (a) and (b) of the S-conditions is done only by testing whether l_i >= 0, u_i < 0, or |a_ij| > u_i holds (see properties (1), (2) and (3)). Thus the value of t_i should be reconsidered only when those values have been changed by setting a certain variable. Next note that if a_ij < 0 and x_j is set to 0, then l_i(S') > l_i(S) but u_i(S') = u_i(S); and if a_ij > 0 and x_j = 0, then u_i(S') < u_i(S) but l_i(S') = l_i(S). The first case of (3.13) is true since, by the new smaller value of u_i, some free variables may now be forced to 0 or 1. The second case of (3.13) must be added because l_i(S') >= 0 results from setting x_j to 0, thereby eliminating the necessity of checking the i-th inequality.
The other two cases,

    a_ij > 0 and l_i(S') >= 0,    and    a_ij < 0 and l_i(S') < 0,

cause no change as far as the S-conditions are concerned, and consequently the tag t_i is not changed. (In the first case, t_i need not be changed since if l_i(S') >= 0 then l_i(S) >= 0. In the second case, t_i need not be changed since the value of u_i does not change.)

(c.2) If x_j = 1, then

    t_i(S") = 1 if a_ij < 0 and l_i(S") < 0,
    t_i(S") = 0 if a_ij > 0 and l_i(S") >= 0,              (3.14)
    no change otherwise,

where S" is S with x_j = 1 added. The proof is similar to that just described for the x_j = 0 case.

Initially many of the t_i's are 1. Starting with t_0, we check in a round-robin fashion each i-th inequality for which t_i = 1. We continue this procedure, each time specifying a variable if it is forced, until:

(A) some u_i < 0, which shows that the present partial solution S is infeasible, and we go to the backtracking procedure;

(B) a feasible solution is generated; or

(C) all t_i = 0, which shows that no further information can be obtained from u, l and y at this moment.

If case (C) occurs, the partial solution will be augmented by some variable without underline (see Fig. 1). In this case, which variable is augmented can significantly affect the convergence speed. A few different methods for picking this variable are discussed in Section IV.

When we backtrack, some variables are deleted from the partial solution S and t must be altered again. Let us consider the updating procedure of t by an example. Assume that we start the backtracking procedure with the partial solution

    S = {-1, -3, 2, 5, 7, -4},                             (3.15)

where 5, 7 and -4 are underlined. After underlining x_2, changing its sign and discarding the succeeding variables in the backtracking procedure, S becomes

    S' = {-1, -3, -2}.                                     (3.16)

t(S') could be calculated from t(S) according to the above procedure. However, the following procedure was actually taken because of its easier calculation.
Because x_2 = 1 was included in S of (3.15) without underline, and augmentation of a new variable without underline results only from case (C) above, we must have had S" = {-1, -3} before S, and t(S") = 0 must hold for this S". Therefore we can calculate t(S') directly from t(S"), since S' is S" with x_2 = 0 added.

As seen from the above example, the general rule is as follows. After locating the rightmost variable x_j of the partial solution S which is not underlined, the new t(S') is calculated from the t(S") with the following properties:

(i) t(S") = 0;
(ii) S' is the new partial solution, which is S" augmented with the opposite value of x_j.

Consequently the computation of t is fairly simple and can easily be implemented.

Another gimmick to be mentioned is that, in order to utilize the low density of non-zero coefficients, only the non-zero coefficients are packed into computer memory, in two ways: one is columnwise, i.e., the non-zero coefficients in a column are stored consecutively so that the information for each column may be read out efficiently, and the other is rowwise. The more convenient of these two data structures is used in each of the actual procedures.

III-2 (c) of the S-conditions (i.e., detection of feasible solutions).

During the course of the algorithm, the best feasible completion of S, in the sense that it attains the minimum objective function value among all feasible completions of S hitherto obtained, can often be calculated. It can be calculated by any one of the following three approaches:

(a) Detection of y(S) >= 0. From the definition of y in equation (3.9), this condition indicates that the completion of the present partial solution S obtained by specifying all free variables to 0 is feasible, and moreover it is the best completion of S, since all coefficients c_j of the objective function are non-negative. This case was first mentioned by Balas [1].

(b) Detection of l(S) >= 0. This condition shows that any completion of S is feasible.
As a result, the best completion is obtained by specifying all free variables to 0.

(c) A partial solution with no free variable left. In this case, since all variables are specified, there is only one completion of S, and that is S itself, which is obviously the best.

Whenever the best feasible completion of the current partial solution S is found by any of the above methods, it replaces the incumbent. (Because of (3.1), a feasible solution always gives a better objective value.) The backtracking procedure is then initiated.

Obviously, the decision of which of the above tests (a), (b) and (c) is used should be made according to the nature of the given problem. (a) can detect the best completion at the earliest chance, (b) is the next, and (c) is the last; but the incorporation of (a) or (b) is more complex than (c). In the execution reported hereafter, however, (c) is used throughout because of its computational simplicity.

When all optimal feasible solutions are desired, the enumeration procedure is not as simple as in the case of a single optimal feasible solution. The case when all the coefficients c_j of the objective function are positive can also be dealt with by the same discussion as the previous one (inequality (3.2)). If some of the c_j are 0, however, we need a more careful consideration. Each of the above tests (a), (b) and (c) needs the following modification. If the l >= 0 test is used, the set of completions of S obtained by specifying each free variable by

    x_j = 0        if c_j >= 1,
                                                           (3.17)
    x_j = 1 and 0  if c_j = 0,

exhausts all the best feasible completions of S. By using (3.17), it is easy to generate all optimal solutions. On the other hand, if the y >= 0 test is used, a more involved argument is needed. First, consider the set of completions of S discussed in (3.17). In this case, however, some of the completions may not be feasible.
Therefore, by checking the feasibility of every completion of S obtained by assigning free variables according to (3.17), we can get the set of all the best feasible completions of S. When all the free variables of a partial solution are forced to 1 or 0, the feasible completion of S is uniquely obtained, regardless of the values of the coefficients in the objective function.

The procedure for checking inequalities, including the "cyclic check", and the procedure for detecting feasible solutions are summarized in the flow diagram of Figure 2. It is named CHK-IEQ. The flow diagram should be self-explanatory.

Figure 2 Flow chart of procedures in CHK-IEQ

IV. Augmentation of a Variable to a Partial Solution

When no further information can be obtained from checking S-conditions, the current partial solution is augmented by a variable without underline. The finite convergence of the algorithm is guaranteed whichever free variable is chosen, but the choice heavily affects the convergence speed and therefore should be made carefully. So far, two approaches to this selection have been favored and proved to be efficient by computational results. One is to attempt to find a feasible solution as soon as possible without considering the objective value. The other is to attempt to proceed to a better feasible solution, possibly at the sacrifice of prompt discovery of a feasible solution. (In the latter approach, once a good feasible solution is stored, however, it will significantly facilitate the rest of the enumeration.) This part of the algorithm is denoted AGMT-VAR, and four methods implemented by us are now presented. Computational results for these methods of AGMT-VAR show very irregular computation times; no method is better than the others for all problems.

(1) Balas' method*: Let the current partial solution be S. Calculate

    sum_{i=0}^{m} min { y_i(S) + a_ij, 0 }                 (4.1)

for each free variable x_j and pick the subscript j_0 which maximizes (4.1).
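Selection by (4.1) can be sketched as follows (Python, our naming; `y` is the vector y(S) including the objective row, and `free` the list of free variable indices):

```python
def balas_pick(y, A, free):
    """Pick the free variable x_j maximizing sum_i min{y_i(S) + a_ij, 0},
    rule (4.1)."""
    def score(j):
        return sum(min(y_i + row[j], 0) for y_i, row in zip(y, A))
    return max(free, key=score)
```

The score penalizes a candidate by the total amount by which the inequalities would remain violated, in the y sense, after setting x_j = 1, so the variable with the least total shortfall is chosen.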
Then the next partial solution is

    S' = {S, j_0}.                                         (4.2)

* This is slightly different from the original.

(2) For each free variable x_j, let L_j be the number of inequalities which satisfy both

    l_i(S) < 0                                             (4.3)

and

    l_i(S) + a_ij >= 0.

Pick the j_0 which maximizes L_j. The next partial solution is

    S' = {S, j_0}.                                         (4.4)

In other words, this method is aimed at maximizing the number of inequalities with l_i(S') >= 0 when 1 is assigned to x_j, so that the new partial solution may possess as many non-restrictive inequalities as possible. If several variables have the same number of inequalities with l_i(S') >= 0, a secondary criterion may be applied to break the tie. A criterion which we used was as follows: among the j's which attain the maximum L_j, calculate the value

    sum_i a_ij                                             (4.5)

over all i's such that

    a_ij > 0 and l_i(S) + a_ij < 0.                        (4.6)

Pick the j_0 which maximizes (4.5); the next solution is (4.4) above.

(3) While (2) was derived under the assumption that x_j would be specified to 1 only (j_0 has no minus sign in (4.4)), the third method is a generalization of (2) which allows each variable to be specified to either 0 or 1. Let the number of inequalities which satisfy (4.3) and l_i(S) + a_ij >= 0 be L_j^1 for a free variable x_j. L_j^0 is defined as the number of inequalities which satisfy

    l_i(S) < 0                                             (4.7)

and

    l_i(S) - a_ij >= 0,

for each free variable x_j. Suppose that L_{j_0}^e is the maximum among the L_j^1 and L_j^0 (e = 0 or 1); then the next partial solution is

    S' = {S, (-1)^{1-e} j_0}.                              (4.8)

A secondary criterion similar to that of (2) may also be incorporated to break a tie.

(4) The fourth method includes a consideration of the objective value. Although various sophisticated methods based on the objective value are now available, our fourth method uses the number L_j defined as above. Calculate

    L_j / (alpha c_j + beta)                               (4.9)

for each free variable, where alpha and beta are constants.* Let j_0 attain the maximum of (4.9). The next partial solution is

    S' = {S, j_0}.
* In our computation alpha and beta were set to 1.

This is aimed at leading to a partial solution closer to feasible solutions, hopefully with the minimum increase of the objective value.

A number of other methods are available, and a variety of modifications are also conceivable. However, a rigorous theoretical comparison of these appears to be difficult, since the efficiency of each approach depends heavily on the individual problem. As shown in Section VI, a specially tailored algorithm for each given problem might be necessary to attain the maximum efficiency. However, the above approaches are believed to be reasonably efficient for general problems in spite of their simplicity.

V. Underline and Pseudo-Underline

This section discusses a new condition for underlining, and then the concept of pseudo-underlining in a partial solution; these were probably not discussed in the literature in the past. In our implicit enumeration, if many inequalities come to satisfy l_i >= 0, showing that they are not restrictive, the remaining inequalities (i.e., inequalities with l_i < 0) may satisfy one of the following conditions for some free variable x_j:

    (1) a_ij <= 0 for all i for which l_i(S) < 0;          (5.1)

    (2) a_ij >= 0 for all i for which l_i(S) < 0,          (5.2)

where S is the current partial solution. If case (1) occurs, x_j = 0 can be immediately added with underline to the current partial solution. Namely, the next partial solution is

    S' = {S, -j}.                                          (5.3)

This is because, under condition (5.1), if a feasible completion of S is obtained by specifying x_j to 1 and the other free variables to appropriate values, then the completion with only x_j changed to 0 (the other free variables keeping the same values) is also feasible. Moreover, since c_j >= 0, the objective value of the latter completion is not worse than that of the former. Therefore we can preclude all the completions with x_j = 1 from further enumeration.*
This is equivalent to considering the next partial solution to be (5.3). On the other hand, if case (2) occurs, x_j = 1 can be augmented to the partial solution S with a pseudo-underline, which is denoted by "~". The next solution is thus denoted by

    S' = {S, ~j},                                          (5.4)

where "~" marks the pseudo-underline.

* It is assumed that a single optimum solution is sought. The procedure can easily be modified for the case of all optimum solutions, based on the modification discussed in Section III-2.

A pseudo-underline is defined as follows: if no feasible completion follows from S', the pseudo-underline works as an underline, but if a feasible completion is detected, the pseudo-underline is deleted and x_j is considered a variable without underline. A proof is given below. If the partial solution {S, j} induces no feasible completion, neither does {S, -j} under condition (5.2). Hence, in this case we can exclude the enumeration of {S, -j}, which implies that x_j can be added with underline. However, if some completions of {S, j} are feasible, some completions of {S, -j} might be feasible and might attain better objective values, thus prohibiting the addition of x_j with underline. Of course, if c_j = 0, the pseudo-underline of x_j can be treated as an underline, since {S, -j} cannot attain a better feasible solution.

The backtracking procedure with the above two kinds of underlines and pseudo-underlines is the same as that in Section II, except that pseudo-underlines are deleted from the partial solution whenever a feasible solution is detected. The termination of enumeration would be speeded up by this gimmick, especially when the given problem is infeasible.

The above two gimmicks for conditions (5.1) and (5.2) were incorporated in our program to test their effect on the computation speed. For many problems a reduction in the number of iterations was observed. Because of this improvement, the two gimmicks are incorporated in the computer programs with which many problems are solved in a later section.
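Conditions (5.1) and (5.2) reduce to a scan of the still-restrictive rows, as in this sketch (Python, our naming; `l` is the vector l(S)):

```python
def underline_type(j, A, l):
    """Classify free variable x_j under conditions (5.1)/(5.2), looking only
    at inequalities that are still restrictive, i.e. those with l_i(S) < 0."""
    coeffs = [row[j] for row, l_i in zip(A, l) if l_i < 0]
    if all(a <= 0 for a in coeffs):
        return "underline"         # (5.1): add x_j = 0 with underline
    if all(a >= 0 for a in coeffs):
        return "pseudo-underline"  # (5.2): add x_j = 1 with pseudo-underline
    return None
```

A variable whose coefficients have mixed signs in the restrictive rows qualifies for neither condition and is left to AGMT-VAR.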
In the actual implementation, condition (5.1) is checked for all free variables after finishing the cyclic check, whereas condition (5.2) is checked only for the variable which is selected by AGMT-VAR. Although the idea behind the gimmick for condition (5.1) is the same as that of the set E_s defined by Balas, it does not seem to have been explicitly discussed in the literature that we can underline a variable subject to condition (5.1). The second gimmick, for condition (5.2), probably has not been discussed in the past.

VI Computational Results and Specialization of the Algorithm

This section includes computational results for such problems as the optimum network synthesis problems for which our program was initially designed, the covering problems, the traveling salesman problems and several other problems found in the literature. The program was written in FORTRAN IV and run on an IBM 360/75. As will be observed from the results, our program seems powerful for such problems as the optimum network synthesis problems, which have the features mentioned in Section I. Probably this is because the mutual relationship of the variables is very tight in such problems, and specifying one variable causes many other free variables to be specified accordingly. In other words, the S-conditions discussed in Section III work effectively, even though the check is quite simple. However, for other problems our program is not always the fastest. For these problems, the S-conditions do not give information sufficient to preclude many partial solutions. In this paper we try to make use of intrinsic properties of given problems such as the optimum network design problems, the covering problems and the traveling salesman problems, rather than adding new procedures which may be effective for all types of problems. As will be explained individually, these simple modifications result in considerable improvement.
This direction, however, is not thoroughly explored here, and it should be explored further elsewhere. The results shown hereafter mainly demonstrate the adaptability of the implicit enumeration algorithm. The modifications added for each problem are of the following types:

(1) New checking methods of the S-conditions other than those discussed in Section III.

(2) New AGMT-VAR algorithms based on the structure of the given problems.

(3) The estimation of a bound of the objective function obtainable from the current partial solution. If we can obtain a lower bound of the objective function by some means during computation, and if the lower bound exceeds the objective value of the incumbent, the current partial solution is discarded and the backtracking procedure immediately follows.

The modifications actually added will be explained later together with the given problems.

VI-1 Optimum Network Synthesis Problems

Table 6.1 shows the computational results of our algorithm applied to optimum network synthesis problems, for which it was initially designed. The optimum network synthesis is to find a logic network consisting of NOR gates which realizes a given switching function, which has the minimum number of gates and, as the secondary condition, minimizes the number of interconnections. The integer programming approach to this problem has various advantages over others, such as the easy incorporation of various restrictions on the network to be synthesized, the applicability of the formulation to other types of gates, and the choice of a wide variety of objective functions to be optimized. The detailed formulation and further considerations will be found in [19], [20], [3], together with syntheses of other types of networks. Table 6.1 is the result of exhausting all optimum networks for every three-variable switching function.
It is not only interesting as integer programming problems, since considerably large problems were solved in reasonable computation time, but it is also a new approach to such optimum network synthesis problems. The percentage of non-zero elements is low (e.g., about 2% for R = 6) and decreases as R grows. In the case of NOR gate network synthesis, the integer programming approach seems more efficient than any other existing methods [13], if optimum networks are synthesized under general restrictions such as fan-in and fan-out restrictions, or without restrictions on the number of levels. R in Table 6.1 is the number of gates in the network to be synthesized. Therefore, if we solve an integer program for an R gate network and the switching function to be realized actually needs more than R gates, the problem turns out to be infeasible. This infeasible case is also shown in Table 6.1. It may be worth mentioning that an infeasible problem usually needs less computation time than a feasible problem of the same size.

[Table 6.1 (computational results of the optimum network synthesis problems) is not legible in this copy.]

The algorithm was further improved by modifying AGMT-VAR in order to take into consideration the inherent properties of NOR gate networks. Also, a strict lower bound estimation of the objective function based on the network structure is incorporated. The result is shown in the column "Specialized approach" in Table 6.1 for R = 6. It is considerably faster than the result for R = 6 in the column "General approach". The optimum networks for the switching function that needs 8 NOR gates under fan-in and fan-out restrictions were solved for the first time by this approach. For these problems, Gomory's algorithm [11] was also applied, but was not efficient [2].

VI-2 Traveling Salesman Problems

Traveling salesman problems were also solved by the implicit enumeration algorithm.
Although the integer programming approach seems inferior to other algorithms well tailored to the problem, such as the "branch and bound" method, according to computational experience, it is attempted because the traveling salesman problem is a typical integer programming problem. As will be seen, a simple modification of the algorithm to utilize inherent properties of the problem works remarkably well.

The integer linear formulation used is that by Miller, Tucker and Zemlin [18], in which a variable x_ij is associated with the path from the i-th city to the j-th city, i, j = 1, 2, ..., N, i ≠ j, and a variable* u_i with the i-th city, i = 1, 2, ..., N. N is the number of cities to be considered. Let d_ij be the distance from the i-th city to the j-th city; then the N city traveling salesman problem is formulated by:

    minimize    Σ_i Σ_j d_ij x_ij                                  (6.1)

    subject to  Σ_{i=1, i≠j}^N x_ij = 1,   j = 1, 2, ..., N,       (6.2)

                Σ_{j=1, j≠i}^N x_ij = 1,   i = 1, 2, ..., N,

                u_i - u_j + (N-1) x_ij ≤ N-2,   2 ≤ i ≠ j ≤ N,     (6.3)

* See [18] for what u_i represents.

where x_ij is a zero-one variable but u_i may assume an integral value up to N-2. Each u_i is expanded with zero-one variables u_ik as

    u_i = Σ_{k=1}^{N-2} u_ik,                                      (6.4)

rather than using the binary number expansion of u_i, for the reason which will be explained later.

First, the general approach with method (2) of Section IV (this will be abbreviated as AGMT-VAR 2) was attempted, and the result is shown in Table 6.2. To improve this result, the following three modifications are made. The first modification is to exclude the variables u_ik from the enumeration procedure, because these variables are only for eliminating subtours which do not pass through the first city. In other words, the variables u_ik are added to a partial solution always with underline, and therefore none of them will be chosen for underlining in the backtracking procedure. The procedure will be given in the following.
The justification of the procedure is not given, but it should be noticed that the expansion (6.4) is necessary for this purpose. Let y_i be the coordinates of y defined by (3.9) which correspond to the set of inequalities (6.3). Then, if y_i ≥ 0 does not hold, free variables which have coefficient 1 in the inequalities with negative values of y_i are successively specified to 1 with underline until the value of y_i becomes non-negative. If this is not possible, it means the existence of a subtour, and the backtrack is initiated.

The second modification is a new AGMT-VAR procedure which picks up the variable among the free variables x_ij with the minimum value of d_ij. This variable is of course augmented to the partial solution without underline.

[Table 6.2 (computational results of the traveling salesman problems) is not legible in this copy.]

The third one is the new bound estimation. First, let B be the set of cities for which no incoming path is specified yet. For each j ∈ B, find

    d_j = min_i d_ij,                                              (6.5)

where i runs over all i corresponding to free variables x_ij. Then set

    D_1 = Σ_{j ∈ B} d_j.                                           (6.6)

Similarly, let C be the set of cities for which no outgoing path is specified yet. For each i ∈ C, find
    d_i = min_j d_ij,                                              (6.7)

where j runs over all j corresponding to free variables x_ij. Set

    D_2 = Σ_{i ∈ C} d_i.                                           (6.8)

Let D be the larger of D_1 and D_2. Obviously, D plus the objective value of the current partial solution can be used as a lower bound of the objective value obtainable from the current partial solution. With these modifications added, the computation time was significantly improved, as shown in the column "Specialized algorithm" in Table 6.2, though the result is still much inferior to other reported algorithms such as branch and bound methods. The result is demonstrated only for the purpose of showing that an augmentation of the algorithm derived from inherent properties of the problem leads to a significant improvement in spite of its simplicity.

VI-3 Set Covering Problems

A set covering problem is:

    minimize    Σ_{i=1}^n x_i
                                                                   (6.9)
    subject to  A x ≥ 1,

where x is an n-dimensional vector of zero-one variables, A is an m × n matrix consisting of only 0 and 1, and 1 = (1, ..., 1). Matrix A was randomly generated with the density 0.07 of coefficients 1, and m = 30 in all problems. Five samples for each different n were solved, and the average solution time is shown in Table 6.3. The specialized algorithm is the algorithm with AGMT-VAR 2 modified in the following two respects:

(1) A new checking method of the partial solution S. Let a_i be the i-th column of A, and let a_i(S) be the a_i from which the entries corresponding to all rows k such that y_k(S) ≥ 0 are deleted. Namely, a_i(S) is composed of the inequalities which are restrictive under the partial solution S. Then if

    a_i(S) ≤ a_j(S)                                                (6.10)

holds for at least one j such that j ≠ i and j ∈ W, then x_i is specified to 0 with underline.

(2) An objective bound estimation. Let E(S) be the number of restrictive inequalities under the partial solution S, i.e., the number of inequalities with y_i(S) < 0.
Corresponding to a free variable x_j, let E_j(S) be the number of restrictive inequalities in which x_j has coefficient 1, and renumber the free variables so that

    E_1(S) ≥ E_2(S) ≥ ... ,                                        (6.11)

and define k to be the smallest number which satisfies

    Σ_{ν=1}^k E_ν(S) ≥ E(S).                                       (6.12)*

Then the objective value corresponding to the variables in S, plus k, can be used as a lower bound of the objective function obtainable from S. With these two modifications added, the result is shown also in Table 6.3.

[Table 6.3 (computational results of the set covering problems) is not legible in this copy.]

The set covering problem has been extensively studied in conjunction with the minimization of the prime implicant table discussed in switching theory [17], [5]. Various techniques known in that area could also be applied to improve the computation further, though we have not tried this yet. Table 6.3 also shows the result of the algorithm applied by Geoffrion [9] to similar problems, in which he incorporated linear programming as an aid. His result is impressively good, and the increase of computation time seems linearly proportional to the number of variables.

* Any partial solution S for which (6.12) does not hold for any k can always be found to make the inequalities infeasible, by checking the signs of y_i.

VI-4 Problems Taken From Literature

Table 6.4 shows computational results for various problems* taken from the literature. The AGMT-VAR which, among those in Section IV, gave the best performance for each problem is also shown. Compared with other algorithms, our algorithm does not always give the best performance. However, as expected, it works very well for such problems as IBM 9, which has more inequalities than variables (this characteristic is similar to the optimum network synthesis problems).
For general use, our approach may need modifications such as the incorporation of linear programming as suggested by Geoffrion [9], since his computational results seem consistently efficient for various problems, as seen from Table 6.4. However, we believe that the gimmicks discussed in the earlier sections considerably speeded up the original implicit enumeration procedure of Balas, Glover, Geoffrion and Fleischmann [6].

* Some of the problems actually solved were obtained from H. Salkin of IBM. The authors are grateful for his generous help.

[Table 6.4 (computational results for the problems taken from the literature, with the best AGMT-VAR for each problem) is not legible in this copy.]
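To make the bound estimation used for the set covering problems in Section VI-3 (equations (6.11) and (6.12)) concrete, here is a minimal sketch in Python; the data layout is our own assumption, and this is an illustration rather than the FORTRAN code of ILLIP:

```python
def covering_lower_bound(A, fixed_ones, free_vars, y):
    """Lower bound (6.11)-(6.12) for the set covering problem (6.9).

    A          -- 0-1 constraint matrix of A x >= 1, as a list of rows
    fixed_ones -- number of variables already set to 1 in S
    free_vars  -- indices of the free variables
    y          -- y[i] = slack of inequality i under S; y[i] < 0 means
                  the inequality is still restrictive
    Returns fixed_ones + k, where k is the smallest number of free
    variables whose columns can cover the E(S) restrictive rows, or
    None if no k satisfies (6.12) (so S cannot lead to feasibility).
    """
    restrictive = [i for i in range(len(A)) if y[i] < 0]
    E_S = len(restrictive)
    # E_j(S): number of restrictive inequalities in which x_j appears,
    # sorted in the decreasing order of (6.11).
    E = sorted((sum(A[i][j] for i in restrictive) for j in free_vars),
               reverse=True)
    covered, k = 0, 0
    for e in E:                     # accumulate until (6.12) holds
        if covered >= E_S:
            break
        covered += e
        k += 1
    if covered < E_S:
        return None                 # (6.12) fails for every k: backtrack
    return fixed_ones + k
```

If the returned bound exceeds the incumbent's objective value, the current partial solution is discarded, exactly as in modification type (3) of Section VI.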