UNIVERSITY OF ILLINOIS
GRADUATE COLLEGE
DIGITAL COMPUTER LABORATORY

INTERNAL REPORT NO. 66

THEORY OF ASYNCHRONOUS CIRCUITS

BY

David E. Muller

December 6, 1955

THEORY OF ASYNCHRONOUS CIRCUITS

I. Introduction

Asynchronous circuits have been loosely defined as switching circuits which do not require a "clock" or source of fundamental frequency for their operation. A considerable portion of the logical circuitry in Illiac requires no clock signals and may therefore be considered asynchronous. One advantage of asynchronous circuitry is the additional speed one obtains by allowing each operation to follow the one preceding it without having to wait for the clock signal to occur. Another advantage results from the fact that if asynchronous circuits are constructed according to certain rules they will not proceed incorrectly if one of the elements fails to act but will merely stop. The faulty element may then be located by observing the state of the circuit. This latter advantage will be explained in more detail in the later sections of this discussion.

One might think that in order to design asynchronous circuits one would have to keep close account of the times taken by the various elements in acting so that the entire circuit would behave in the way intended. This is true in some designs, but it is possible to design circuits in such a way that the relative speeds of the elements do not affect the over-all behavior of the circuit. Such designs are of particular interest to us and are the ones which we shall analyze.

In dealing with asynchronous circuits one finds that most of the definitions and concepts which have been developed for synchronous, or clocked, circuits no longer apply, and it is necessary to begin at the beginning and redefine everything.

II. Preliminary Considerations

Definition 1. Decision Element

A decision element has one output line f and a specified number k of input lines x_1, x_2, ..., x_k. It may be represented by a circular symbol.

At any given time a signal having a value of either 0 or 1 must appear on each of the k+1 lines. The element is said to be in equilibrium for certain combinations of values appearing at the points f and x_1, x_2, ..., x_k, and is said to be excited for all other combinations. The list of the former combinations is given by the specifications of the element. Any change occurring in f must be such as to place the element in equilibrium from an excited condition. If the inputs are held unchanged from some time t on and the element is excited, then a change in f will occur at some time t + T, where T is bounded by 0 < T < M. M is an upper bound on all T, but T itself is otherwise indefinite and may vary in any way from one element to another and even, for a given element, from one change to the next.

We notice that the above definition may be self-contradictory if, for a set of values at the inputs, both 0 and 1 at line f represent excited states. In this case the first part of the definition prohibits any change from occurring, while the second part tells us that it must occur. We shall avoid this difficulty by never applying input combinations to elements which admit no equilibrium condition.
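The behavior described in Definition 1 can be made concrete with a small sketch. The following Python model is illustrative only and is not part of the report; the class name, the representation of the specification as a set of equilibrium combinations, and the bound M are assumptions made for the example. The element is in equilibrium when the present combination of inputs and output appears in its specification; otherwise it is excited and will act after some indefinite but bounded delay.

```python
import random

class DecisionElement:
    """Illustrative model of a decision element (class and method names assumed).

    equilibrium_combinations is the set of (x_1, ..., x_k, f) tuples listed
    in the element's specification; every other combination is excited.
    """
    def __init__(self, equilibrium_combinations, max_delay=10.0):
        self.equilibrium = set(equilibrium_combinations)
        self.max_delay = max_delay   # the bound M on the action time T

    def is_excited(self, inputs, f):
        return tuple(inputs) + (f,) not in self.equilibrium

    def action_time(self):
        # T is otherwise indefinite: any value with 0 < T < M may occur.
        return random.uniform(1e-9, self.max_delay)

# A "not" element: equilibrium exactly when f is the complement of its input x.
not_element = DecisionElement({(0, 1), (1, 0)})
print(not_element.is_excited((0,), f=0))   # True: the element must eventually act
print(not_element.is_excited((0,), f=1))   # False: it is in equilibrium
```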
It is interesting to contrast the above definition of a decision element with that which is used for synchronous circuits. In synchronous circuits one usually assumes a fixed time of operation for each element. This assumption permits the use of delay operations and other aids in the analysis of circuits which cannot be applied to asynchronous circuits.

Definition 2. Asynchronous Circuit

An asynchronous circuit is an interconnection of decision elements. This interconnection is done by attaching the lines together at points called nodes. Each node has at least one line connected to it and no more than one decision element output connected to it. No delay is assumed to take place along lines in the circuit.

Definition 3. Complete Circuit

A complete circuit is an asynchronous circuit in which each node has a decision element output connected to it. Each node is then said to belong to the decision element which feeds it.

III. The Notion of Speed Independence

We mentioned earlier that we wish to treat circuits whose over-all behavior is independent of the action times of the elements which make them up. This loose definition must be formalized so that it is applicable to circuits of the types defined earlier.

Definition 4. Immediate State of a Circuit

The immediate state of an asynchronous circuit (abbreviated I-state) is given by specifying the value of the signal at each node in the circuit.

Definition 5. Equilibrium State of a Circuit

An asynchronous circuit is said to be in equilibrium if every element in the circuit is in equilibrium.

Condition 1: A complete circuit satisfies condition 1 with respect to an I-state S if, when placed in state S, it will, after sufficient time, arrive at a unique equilibrium state E regardless of the action times taken by the elements.

This definition seems to capture the intuitive notion to which we referred before and is essentially the same as the condition given by Huffman. It has two main weaknesses, however. (1) It cannot be applied to circuits which cycle indefinitely, and we should like to apply our notions to such circuits. (2) It is a very difficult condition to apply to an actual circuit, since one would have to treat every possible I-state which might occur during the action of the circuit.

Condition 2: A complete circuit satisfies condition 2 (we shall say it is totally sequential) with respect to an I-state S if, when placed in state S, it will go through a unique sequence of I-states S_1, S_2, ..., either indefinitely or until it arrives at an equilibrium state E.

This condition also has been treated by Huffman and others, and it certainly avoids the two objections listed above. We shall see, however, that it implies that only one element can be excited at a time in the circuit. This restriction is far too severe to be tolerated, since it removes from consideration all circuits in which parallel action occurs, such as in gates, registers, and adders. If speed independence requires the penalty of serial action and the resulting low duty cycle, then let us turn to other types of circuits.

Definition 6. Broken Complete Circuit

A complete circuit is said to be broken at a given node if that node is disconnected from the decision element which feeds it. If a circuit is broken when it is in a given state, an artificial signal is placed at the node where it is broken which holds that node at the value specified by the state. A circuit may be broken at more than one node.
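Here is a small illustrative sketch of Definitions 2 through 6 (not part of the report; the dictionary layout and names are assumptions). A complete circuit assigns to every node the decision element which feeds it, an I-state assigns 0 or 1 to every node, and breaking a node holds it at its present value no matter what its element does.

```python
# A complete circuit as a dictionary: node -> (equilibrium function, input nodes).
# The equilibrium function gives the output value that puts the feeding
# element in equilibrium for the present node values.
circuit = {
    "a": (lambda v: 1 - v["c"], ("c",)),   # node a is fed by a "not" of c
    "b": (lambda v: 1 - v["a"], ("a",)),   # node b is fed by a "not" of a
    "c": (lambda v: 1 - v["b"], ("b",)),   # node c is fed by a "not" of b
}

def excited_nodes(circuit, i_state, broken=frozenset()):
    """Nodes whose feeding element is excited and which are free to change.

    A broken node is held by an artificial signal, so even if its element
    acts the value at the node cannot change.
    """
    result = []
    for node, (eq, _) in circuit.items():
        if node not in broken and i_state[node] != eq(i_state):
            result.append(node)
    return result

i_state = {"a": 0, "b": 0, "c": 0}
print(excited_nodes(circuit, i_state))                 # ['a', 'b', 'c']
print(excited_nodes(circuit, i_state, broken={"a"}))   # ['b', 'c']
```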
Condition 3: A complete circuit satisfies condition 3 with respect to an I-state S if all possible I-states U which the circuit may go through after being placed in S have the following property: If the circuit is broken at any set of nodes (the circuit may be broken at no node, all nodes, or any combination of the nodes) when in state U, then, depending on U and the set of nodes at which the circuit is broken, it will either cycle indefinitely or it will proceed to an equilibrium state E which does not depend on the relative speeds of the elements.

This condition should allow one to treat circuits which cycle indefinitely, and yet it is not as restrictive as condition 2. It still suffers from the disadvantage that it is a difficult condition to apply.

Theorem 1: If a circuit starting in I-state S does not cycle indefinitely then condition 3 implies condition 1.

Proof: Since the circuit does not cycle indefinitely it must reach an equilibrium state E. Let us choose U = S and break the circuit at no nodes. Then by condition 3 E is unique and condition 1 is satisfied.

Theorem 2: Condition 2 is equivalent to the statement that only one decision element can be excited at a time.

Proof: Assume that in some I-state U more than one decision element is excited. Then, depending upon which decision element goes to equilibrium first, we may have any one of several nodes changing after state U. This means that any one of several states may follow U. Assume now that only one decision element is excited in state U. Then only one node can change (the excited one) and U must be followed by a unique state.

Theorem 3: Condition 2 implies condition 3.

Proof: Let us first consider the effect of breaking a node whose decision element is to become excited. When the signal at the output of the decision element changes, the node will remain unaffected, since the break has disconnected the decision element from its node. We see therefore that if condition 2 applies and we break the circuit at a set of nodes when it is in state U, it will continue to pass through a unique sequence of states (the same as it would if the nodes were not broken) until the decision element feeding a broken node becomes excited. If this happens the circuit will pass to equilibrium, since by theorem 2 this element is the only excited one, and when it goes to equilibrium no nodes will be affected and the entire circuit will be in equilibrium. Since the sequence of states is unique, an equilibrium state will be unique if it is reached.

The notion of a broken circuit and the statement of condition 3 give us a better idea of what is needed for speed independence in a circuit. It turns out, however, that condition 3 is difficult to apply to actual circuits and also very difficult to treat theoretically. For this reason we shall strengthen condition 3 very slightly and set down a condition which also applies to most practical cases and which is considerably more convenient.

Condition 4: Speed Independence. A complete circuit is said to satisfy condition 4 (is speed independent) with respect to an I-state S if during all possible transitions which follow S no decision element passes from an excited state to an equilibrium state unless in doing so its output changes; that is to say, unless the decision element itself acts.

In the theory which follows we shall use condition 4 (speed independence) as our basic condition and derive its properties. We shall also show toward the end of this development that it implies condition 3.
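Before using condition 4 it helps to see what a violation looks like. The following sketch is illustrative and not from the report; the two-element circuit and the names are assumptions. Two single-input elements are connected in a loop, each in equilibrium when its output equals its input. In the I-state (0, 1) both elements are excited, and whichever acts first puts the other back into equilibrium without its having acted, which is exactly what condition 4 forbids.

```python
# Two decision elements in a loop, each in equilibrium when its output equals
# its single input: node a is fed from node b, node b is fed from node a.
def excited(state):
    a, b = state
    return {name for name, out, eq in (("a", a, b), ("b", b, a)) if out != eq}

state = (0, 1)                      # I-state: a = 0, b = 1
print(excited(state))               # {'a', 'b'}: both elements are excited

after_a_acts = (1, 1)               # element a happens to act first
print(excited(after_a_acts))        # set(): element b is now in equilibrium

# Element b passed from an excited state to an equilibrium state without its
# output changing, so this circuit violates condition 4 for the I-state (0, 1).
```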
IV. Properties of Speed Independent Circuits

Definition 7. Cumulative State (C-state)

A cumulative state or C-state is defined with respect to some initial I-state S by listing the number of signal changes which have occurred at each node since the circuit was placed in I-state S. The C-state is thus an n-dimensional vector (if there are n nodes) whose components are all non-negative integers.

It should be noted that the C-state provides information concerning the history of the circuit, while the I-state is a vector of 0's and 1's which merely describes the momentary condition of the circuit. The I-state U equals the sum of the C-state and S modulo 2, where the sum is taken componentwise. Thus S and the C-state uniquely determine the I-state U, whereas several C-states may correspond to the same I-state. This would mean that U may be obtained in several ways.

To illustrate the difference between the two types of states, consider the one-node circuit consisting of a single "not" element whose output node is connected back to its own input. The I-states follow the sequence 0, 1, 0, 1, ... while the C-states follow the sequence 0, 1, 2, 3, .... In this case the state vectors have but one component.

Definition 8. An I-state or C-state B is said to directly follow a state A if B results from A by a change in a single node and if the decision element for this node is excited in state A. We note that if more than one element is excited in state A, then B will follow A only if the excited element whose change leads to B acts first. Thus we see that "B directly follows A" means that B may actually follow A in time but will only follow it with certainty if but one element is excited in state A.

Definition 9. Given two C-states A and B we shall write A ≥ B if each component of A is at least as great as the corresponding component of B, and A > B if in addition A ≠ B. If B results from A by an increase of one in a single component we shall say that B covers A.

Definition 10. Given two C-states A and B, the C-state A ∪ B is the vector each of whose components is the greater of the corresponding components of A and B.

Theorem 4: In a speed independent circuit if a C-state A may follow a C-state B then A ≥ B.

Theorem 6: In a speed independent circuit if a C-state B may follow a C-state A then there is a sequence of states A, A_1, A_2, ..., B in which each member directly follows the one preceding it.

Theorem 7: In a speed independent circuit if two different C-states X and Y both directly follow A, then X ∪ Y exists as a possible state and it directly follows both X and Y.

Proof: Let us assume that X results from A by a change in the i'th node and that Y results from A by a change in the j'th node. We see that i ≠ j since X and Y are not the same state. This means that in state A both i and j are fed by excited decision elements. In state X the j'th element is still excited by condition 4, and hence the j'th node may change and give state X ∪ Y. Thus X ∪ Y directly follows X. Similarly it directly follows Y and the theorem is proved.

Lemma: In a speed independent circuit if C-state X directly follows A and if B may follow A, then X ∪ B exists and may follow X and B.

Proof: Since B may follow A we may construct, by theorem 6, a sequence of states A, A_1, A_2, ..., B such that each member directly follows the one preceding it. Two cases may now occur. (1) A_1 = X. In this case X ∪ A_1 = A_1 = X by definition 10, so B may follow X, giving X ∪ B with X ∪ B = B, thus proving the lemma. (2) A_1 ≠ X. In this case A_1 and X both directly follow A and theorem 7 applies. Thus X ∪ A_1 exists and directly follows both A_1 and X. In the above argument we may now replace A by A_1 and X by X ∪ A_1. The only difference is that the sequence A_1, A_2, ..., B is shorter than A, A_1, A_2, ..., B by one state.

Let us assume that we repeat the application of (2) until the i'th step. We shall now have formed ((X ∪ A_1) ∪ A_2) ∪ ... ∪ A_i = X ∪ A_i. This state may follow both X and A_i. If X ∪ A_i = A_{i+1} then B may follow X and the lemma is proved as in (1). If (1) never occurs then (((X ∪ A_1) ∪ A_2) ∪ ...) ∪ B = X ∪ B will be formed and it will follow both X and B (by theorem 7), thus also proving the lemma.
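The vector operations used above are easy to state concretely. The following sketch is illustrative and not from the report; the function names are assumptions. It computes the I-state determined by an initial state S and a C-state, the componentwise ordering of Definition 9, and the union of Definition 10.

```python
def i_state_from(S, C):
    """I-state determined by initial I-state S and C-state C (componentwise sum mod 2)."""
    return [(s + c) % 2 for s, c in zip(S, C)]

def geq(A, B):
    """A >= B in the sense of Definition 9: componentwise comparison."""
    return all(a >= b for a, b in zip(A, B))

def union(A, B):
    """A ∪ B in the sense of Definition 10: componentwise maximum."""
    return [max(a, b) for a, b in zip(A, B)]

S = [0, 1, 0]           # initial I-state of a three-node circuit
X = [2, 1, 0]           # two C-states reached along different runs
Y = [1, 1, 1]
print(i_state_from(S, X))                        # [0, 0, 0]
print(geq(union(X, Y), X), geq(union(X, Y), Y))  # True True
print(union(X, Y))                               # [2, 1, 1]
```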
Theorem 8: In a speed independent circuit, for any C-states X and Y the C-state X ∪ Y exists and may follow both X and Y.

Proof: Since the initial state S may be followed by X and also may be followed by Y, we may construct two sequences as in theorem 6: S, X_1, X_2, ..., X and S, Y_1, Y_2, ..., Y. By the lemma we form X ∪ Y_1. This may follow Y_1, so we can also construct (X ∪ Y_1) ∪ Y_2. Similarly we form ((X ∪ Y_1) ∪ Y_2) ∪ ... ∪ Y = X ∪ Y. This may follow both X and Y and the theorem is proved.

Corollary: In a speed independent circuit if A and B are C-states and A ≥ B then A may follow B. This is the converse of theorem 4.

Proof: Since A ≥ B we have A ∪ B = A, and A ∪ B may follow B.

This last result together with theorem 4 means that A ≥ B is equivalent to "A may follow B" in a speed independent circuit.

Theorem 9: In a speed independent circuit if X ≤ A and Y ≤ A then X ∪ Y ≤ A.

Proof: This result follows from the definitions in terms of numerical vectors and shows that X ∪ Y is a least upper bound of X and Y.

Theorem 10: In a speed independent circuit, for any two C-states X and Y there exists a unique state A such that A ≤ X and A ≤ Y, and if for any B we have B ≤ X and B ≤ Y then B ≤ A. We shall use the notation X ∩ Y for A.

Proof: There is at least one state R such that R ≤ X and R ≤ Y, since the initial state S is such a state. The number of such states is finite, however, since X and Y are finite vectors. Let us write these R_1, R_2, ..., R_k. Now we form R_1 ∪ R_2 ∪ ... ∪ R_k = A. Clearly R_i ≤ A for any R_i, but from the numerical definitions we also have A ≤ X and A ≤ Y, so that A has the required properties.

Theorem 11: The C-states of a speed independent circuit form a semi-modular lattice.

Proof: Theorems 8, 9, and 10 prove that the states form a lattice. Theorem 7 and the corollary to theorem 6 prove that it is semi-modular. We may note here also that this lattice must contain a zero element (the state S) since the state S may be followed by any other state.

In most switching circuit applications one is only concerned with the signals appearing at some of the nodes of the circuit. The other nodes may be needed to allow the construction of the circuit from a given set of elements, but the signals on them will not otherwise be of interest. Let the interesting set of nodes be N_1, N_2, ..., N_m. If in a given C-state the numbers corresponding to these nodes are a_1, a_2, ..., a_m, we may regard this set of numbers in much the same way as we regarded the complete set of numbers corresponding to the C-state. This leads us to

Definition 11. A break set of states consists of all the C-states for which the numbers on nodes N_1, N_2, ..., N_m (a subset of all n nodes) are no greater than a_1, a_2, ..., a_m respectively.

A break set is the set of states which would be obtained if each node N_i were broken when its number became a_i. This would prevent its number from becoming greater than a_i, and yet each state in the break set may occur, since it follows from the initial state S without signals having to pass broken nodes.

Theorem 12: A break set in a speed independent circuit is an ideal of the lattice of C-states.

Proof: If X and Y are in the break set then X ∪ Y is in the break set. For any T we have T ∩ X ≤ X, so T ∩ X is in the break set.

Let J denote the union of all the states in the break set. If the ideal associated with the break set is not a principal ideal, then J is not a member of the break set, so that J cannot be an equilibrium state. We see furthermore that if J does represent an equilibrium state then the ideal associated with the break set must be a principal ideal.
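Definition 11 and Theorem 12 can be illustrated concretely. The sketch below is an assumption of how one might do so and is not part of the report; the set of reachable C-states is taken as given (here the truncated states of the one-node "not" circuit). It forms the break set for a single capped node and checks the defining properties of an ideal: downward closure and closure under union.

```python
from itertools import combinations

def break_set(states, caps):
    """All C-states whose counts on the capped nodes stay within the caps.

    states: iterable of C-states (tuples of non-negative ints), the reachable set.
    caps:   dict node_index -> a_i, the count at which node N_i is broken.
    """
    return {s for s in states if all(s[i] <= a for i, a in caps.items())}

def is_ideal(subset, states):
    """Check the ideal properties: downward closure and closure under union."""
    leq = lambda a, b: all(x <= y for x, y in zip(a, b))
    union = lambda a, b: tuple(max(x, y) for x, y in zip(a, b))
    down = all(t in subset for t in states for x in subset if leq(t, x))
    joins = all(union(x, y) in subset for x, y in combinations(subset, 2)
                if union(x, y) in states)
    return down and joins

# Reachable C-states of the one-node "not" circuit, truncated for illustration.
states = {(n,) for n in range(6)}
bset = break_set(states, {0: 3})      # break the single node after 3 changes
print(sorted(bset))                   # [(0,), (1,), (2,), (3,)]
print(is_ideal(bset, states))         # True
```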
V. Composition of Speed Independent Circuits

It is of particular importance to the circuit designer to be able to form speed independent circuits in some systematic way. This composition of circuits is done in the synchronous case by putting together several partial circuits which are designed individually. In the asynchronous case the problem is more difficult, since an arbitrary combination of speed independent circuits will not, in general, be speed independent. For this reason we must develop rules for the combination of speed independent circuits which preserve this property. The rules so far developed are all based on the following theorems.

Theorem 14: If a speed independent circuit is broken at a node N_i, the output of the decision element feeding this node can change at most once after the break has been made.

Proof: Let us assume that the output of the decision element feeding the broken node can change twice (or more times) after the break has occurred. Since the break prevents the change in output from affecting the nodes in the circuit in any way, we see that if the decision element had been slower and the first change had not occurred, then the changes in the inputs to this element which produced the second change would have occurred anyhow, and the element would have gone from an excited state to an equilibrium state without having acted. This would constitute a violation of condition 4.

Theorem 15: Let a "not" element in a speed independent circuit be replaced by a second speed independent circuit which is broken at one node N_i. Let the broken node be fed by the input of the "not" element, and let the output of the element which fed the broken node replace the output of the "not" element. If this replacement is made when the states in the two circuits are such that the signals match at the points which are joined, then the resulting combined circuit is speed independent.

Proof: Two cases must be considered:

(1) Let us assume that the "not" element is in equilibrium when it is replaced. This means that in order to match signals the broken circuit must have had the element i act once since the break was made. This means that it will not act a second time provided the signal at N_i is not changed.

(2) Let us assume that the "not" element is excited when it is replaced. This means that the element i has not acted yet. If it does act, this will affect the first circuit in the same way that an action of the "not" element would have affected it. If it does not act, then the first circuit will behave as if the output of the "not" element had been broken. Thus we see that the first circuit will not lose the property of speed independence. The second circuit also retains speed independence, for the signal feeding N_i cannot change more than once after the element i acts; if it were to change twice or more, this would mean that in the original circuit the input signal of the "not" element could have changed twice with no change in its output, thus bringing the "not" element back into equilibrium without its acting and violating condition 4.

We now give several examples to show how theorem 15 may be used to build up complex circuits from simpler ones.

Example 1: The circuit consisting of three "not" elements connected in a ring is speed independent provided the signals on all three nodes are not the same. Therefore we can replace two of the "not" elements with arbitrary speed independent circuits, each broken at one node.

(Figure: the ring of Example 1 with two of the "not" elements replaced by circuit 1 and circuit 2, broken at nodes N_i and N_j.)
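The proviso in Example 1 can be confirmed mechanically. The sketch below is illustrative and not part of the report; it enumerates, for each of the eight possible initial I-states of the three-"not" ring, every state reachable by letting excited elements act in any order, and tests condition 4 along every transition. Only the two initial states in which all three signals agree fail.

```python
from itertools import product

def excited(state):
    """Excited elements of the three-"not" ring a -> b -> c -> a."""
    eq = {"a": 1 - state["c"], "b": 1 - state["a"], "c": 1 - state["b"]}
    return {n for n in state if state[n] != eq[n]}

def speed_independent(start):
    """Brute-force check of condition 4 from the given initial I-state."""
    seen, stack = set(), [tuple(sorted(start.items()))]
    while stack:
        key = stack.pop()
        if key in seen:
            continue
        seen.add(key)
        state = dict(key)
        exc = excited(state)
        for node in exc:                       # let this excited element act
            nxt = dict(state)
            nxt[node] ^= 1
            if (exc - {node}) - excited(nxt):  # another element lost its excitation
                return False
            stack.append(tuple(sorted(nxt.items())))
    return True

for a, b, c in product((0, 1), repeat=3):
    print((a, b, c), speed_independent({"a": a, "b": b, "c": c}))
# Only the two initial states with all three signals equal fail condition 4.
```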
In this combination the two circuits will alternate in having the signals at the broken nodes change, as long as these changes continue to occur. In this arrangement we may therefore regard the two circuits as acting alternately.

Now let us define the C-element, with inputs a and b and output c, in the following way. Its equilibrium states are:

a b c
0 0 0
0 1 0
1 0 0
0 1 1
1 0 1
1 1 1

In other words the output will tend toward the common value of the two inputs whenever they agree. Otherwise the output will remain at its previous value.

Using this element we form Example 2: the circuit in which the output node N_3 of a C-element is fed back through two "not" elements to the nodes N_1 and N_2 which form its inputs. This circuit is speed independent regardless of its initial state. Changes occurring at N_1 and N_2 may bear any time relation to each other, but the change at N_3 will not occur until both N_1 and N_2 have changed and agree. We may now replace the two "not" elements by arbitrary broken circuits.
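A minimal sketch of the C-element just defined, in the spirit of the earlier decision element sketch (illustrative only; the names are assumptions): the element is excited exactly when both inputs agree and the output differs from them, so the output tends toward the common input value and otherwise holds.

```python
# Equilibrium combinations (a, b, c) of the C-element: every combination
# except those in which a and b agree while c differs from them.
c_element_equilibrium = {
    (a, b, c)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
    if not (a == b and c != a)
}

def c_element_excited(a, b, c):
    return (a, b, c) not in c_element_equilibrium

print(c_element_excited(1, 1, 0))   # True: both inputs are 1, c must rise to 1
print(c_element_excited(1, 0, 0))   # False: inputs disagree, output holds at 0
print(c_element_excited(0, 0, 0))   # False: equilibrium
```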