
Published in: Bol. Soc. Braz. Mat. 20 (1989), 87–99.

Singular spaces of matrices and their application in combinatorics
Abstract. We study linear spaces of n × n matrices in which every matrix is
singular. Examples are given to illustrate that a characterization of such subspaces
would solve various open problems in combinatorics and in computational algebra.

Several important special cases of the problem are solved, although often in disguise.

Let A be a linear subspace of the space IRn×n of real n × n matrices. We
say that A is singular if every matrix in A is singular. We are interested in the problem of characterizing singular spaces of matrices, and in obtaining an efficient algorithm to determine if a space of matrices (given by a linear basis) is singular. Unfortunately, we cannot give a complete solution to these problems, but special cases with combinatorial applications will be solved.

Geometrically, det X = 0 defines a surface in IRn×n and we are interested
in the linear subspaces contained in this surface. Clearly, we may restrict our attention to the maximal singular subspaces.

The problem arose in differential geometry (see Room (1938)), but my interest in this problem stems from its connection to matching problems and other fundamental problems in combinatorics. In this context, the problem was formulated by Edmonds (1967), who pointed out its relevance to combinatorial algorithms and to the theory of computational complexity.

We may slightly generalize the problem by considering a linear subspace A of
real n × m matrices. We define the generic rank gr(A) of such a subspace as the maximum rank of matrices in it, and want to find an efficient way to compute this generic rank, given (say) a basis for A. (Unfortunately, no really efficient algorithm is known for this problem, at least if we do not allow randomization.) However, it is easy to reduce this seemingly more general problem to the problem of characterizing singular matrix spaces.

A trivial upper bound on the generic rank is the column range rank lrr(A) of
the matrix space, defined as the dimension of the subspace spanned by all the columns of all the matrices in A. This number is easy to compute if we have a basis of A. Similarly, we can use the row range rank rrr(A) as an upper bound on the generic rank.

We mention two further ways to formulate the problem, which are both useful in some way. First, let A1, . . . , Ak generate our space A, and let x1, . . . , xk be variables. Then A(x) = x1A1 + . . . + xkAk is a matrix, each entry of which is a linear form in these variables. So det A(x) is a polynomial in these variables, and we want to determine those matrices of linear forms for which this determinant is identically 0. Let us remark immediately that if the determinant is not identically 0, then it is non-zero for almost all choices of the variables. In particular, if we choose transcendental values for the xi which are algebraically independent over the field generated by the entries of the Ai, then for this particular substitution det A(x) will be 0 if and only if it is identically 0. Unfortunately, this condition is of little use, since there is no way to evaluate a determinant with transcendental entries.

On the other hand, the observation that if det A(x) is not identically 0
then almost all substitutions give a non-zero value is of algorithmic interest.

If we are given the matrices A1, . . . , Ak, we can generate random values for x1, . . . , xk (say, from a uniform distribution on [0, 1]) and evaluate the resulting numerical determinant. If the result is non-zero, we conclude that det A(x) is not identically zero. If the result is zero, we conclude that det A(x) is identically zero.

This conclusion is of course not quite legitimate: we may err. If det A(x) is identically 0, then we of course come to this conclusion. But if it is not identically 0, we may be unlucky enough to hit a root, and come to the wrong conclusion. However, the probability of this happening is 0.

Of course, in practice we cannot generate random real numbers and compute with them. If we generate, say, a random integer between 0 and N for each xi then, no matter how large N is, there will be a positive probability of error.

However, this probability will be very small (see Schwartz 1980), so we may still regard the problem as practically solved. (Whether one can generate a random integer in a given interval, or even a random bit, is a difficult question with physical or even philosophical overtones, and we shall not discuss it here.)
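The randomized test just described can be sketched as follows (a minimal illustration; the function names, the exact-arithmetic rank routine, and the range of the random integers are our choices, not from the text):

```python
import random
from fractions import Fraction

def rank_exact(M):
    """Rank of a rational matrix, by Gaussian elimination over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    rows = len(M)
    cols = len(M[0]) if rows else 0
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def generic_rank(generators, trials=5, N=10**6):
    """Estimate gr(A): substitute random integers for x1, ..., xk in
    x1*A1 + ... + xk*Ak and take the largest rank seen.  The estimate
    can only err by being too small, and that with tiny probability."""
    n = len(generators[0])
    m = len(generators[0][0])
    best = 0
    for _ in range(trials):
        xs = [random.randint(1, N) for _ in generators]
        A = [[sum(x * G[i][j] for x, G in zip(xs, generators))
              for j in range(m)] for i in range(n)]
        best = max(best, rank_exact(A))
    return best

# The space of all 3x3 skew symmetric matrices: n = 3 is odd, so the
# space is singular, yet its generic rank is 2 rather than 3.
skew3 = [
    [[0, 1, 0], [-1, 0, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, -1, 0]],
]
print(generic_rank(skew3))  # 2
```

Here exact rational arithmetic sidesteps the numerical difficulties of floating-point determinants, at the cost of speed.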
Unfortunately, the above — randomized — algorithm does not give any
insight into the structure of such subspaces. In what follows we shall try to attack this more theoretical question.

Finally, we can also formulate the problem like this: let A = (aijk) be a
k × n × m "3-dimensional matrix", i.e., a tensor with 3 indices. This defines a trilinear form α(x, y, z). We want to find the maximum rank of a bilinear form obtained by fixing (say) the first variable x. This formulation will lead us to an interesting open question (see section 5).

Let us discuss some classes of singular subspaces. The first is the most
natural one and is discussed, e.g., in Room (1938).

Example 1. Let U and V be two subspaces of IRn such that dim U =
dim V + 1. Let A consist of all n × n matrices A for which AU ⊆ V. Clearly, A is a linear subspace of matrices and every member of it is singular: each A ∈ A maps U into the lower-dimensional subspace V, so its kernel intersects U non-trivially.

This example contains as special cases several other constructions that occur naturally:
— let A be a singular matrix and consider all matrices of the form AX;
— consider all matrices with 0's in their first row; more generally, all n × n matrices with a fixed k × (n − k + 1) block of 0's.

A more general way to put this example is the following: Let A be any space
of n × n matrices. For n × n matrices B and C, let BAC denote the space of all matrices BAC, A ∈ A. Then
gr(A) ≤ gr(BAC) + 2n − rk(B) − rk(C).

In Example 1, we can choose B and C so that BAC = 0 and rk(B) + rk(C) > n. Then the inequality gives gr(A) < n.

This idea can be used, more generally, to construct new matrix spaces with
low generic rank from a given matrix space with low generic rank. Let A be a space of n × m matrices of generic rank r, and let A′ be the set of all n × (m + k) matrices that arise from some matrix in A by adding k new columns arbitrarily. Then A′ is a matrix space of generic rank r + k.

There is of course a dual form of this construction, corresponding to adding new rows.

Example 2. If n is odd then the space Sn of all skew symmetric n × n matrices is singular. More generally, if a space A of skew symmetric matrices has odd column range rank then gr(A) ≤ lrr(A) − 1, since the rank of a skew symmetric matrix is always even.
Our next example gives a construction to combine matrix subspaces of low generic rank into a new subspace of the same kind.

Example 3. Let A1 and A2 be matrix spaces of the same dimensions. Define A1 + A2 = {A1 + A2 : Ai ∈ Ai}. Then gr(A1 + A2) ≤ gr(A1) + gr(A2).
The last example we give is of a somewhat different, more special kind. We will see, however, that it also has important applications.

Example 4. Let A1, . . . , An be skew symmetric n × n matrices. Let A consist of all n × n matrices that arise in the form [A1x, . . . , Anx], where x ranges through IRn. Then A is a singular space of matrices. To see this, observe that each matrix A = [A1x, . . . , Anx] in A satisfies xT A = 0, since xT Aix = 0 for every skew symmetric Ai. More generally, for skew symmetric A1, . . . , Ap, the n × p matrices of the form [A1x, . . . , Apx] constitute a matrix space in which every matrix has rank less than n.
We could generalize this construction further: let A1, . . . , Ap be skew symmetric n × n matrices and b1, . . . , bp ∈ IRm arbitrary vectors. Consider all (n + m) × p matrices whose jth column is (Ajx, bj), where x ∈ IRn. Clearly, these form a matrix space in which every matrix is rank deficient: the matrix M with parameter x satisfies (x, 0)T M = 0, and so rk(M) < n + m.
A complete characterization of singular matrix spaces seems to be unknown.

There are, however, some results that show that under certain restrictions, singular (or low generic rank) matrix spaces arise by the constructions described in the previous section. As we shall see, these results were first obtained in a combinatorial framework, and correspondingly, the assumptions one has to make on the matrices involved concern some properties of the matrices generating the subspace. It is possible that there are conditions of an entirely different nature under which analogous results can be obtained.

Theorem 1. Assume that the singular matrix space A is generated by rank
1 matrices. Then it arises by the construction in Example 1 above.

There is a slightly more general way to formulate this theorem:
Theorem 1*. Let A ⊆ IRn×m be a matrix space generated by rank 1 matrices. Then

gr(A) = min{ m − dim U + dim(AU) : U ⊆ IRm },

where AU denotes the subspace spanned by all vectors Au, A ∈ A, u ∈ U.

This theorem follows quite easily from the Matroid Intersection Theorem of
Edmonds (1970). In fact, we only need the special case when the matroids in question are linear, which can be formulated without any reference to matroids:
Theorem 2 (Edmonds 1970). Let a1, . . . , ap ∈ IRk and b1, . . . , bp ∈ IRl. The maximum number s of indices 1 ≤ i1 < . . . < is ≤ p such that both sets of vectors {ai1, . . . , ais} and {bi1, . . . , bis} are linearly independent is given by

s = min { dim lin{aj : j ∈ J} + dim lin{bj : j ∈ {1, . . . , p}\J} : J ⊆ {1, . . . , p} }.

To derive Theorem 1* from Theorem 2, consider the rank 1 matrices generating the space: these can be written as a1b1T, . . . , apbpT. Let s be the largest integer such that there are s linearly independent vectors among the ai such that the corresponding bi are also linearly independent. Without loss of generality we may assume that these are a1, . . . , as and b1, . . . , bs, respectively.

Now the matrix a1b1T + . . . + asbsT has rank s by elementary linear algebra, and so gr A ≥ s. On the other hand, Theorem 2 implies that there exists a subset J ⊆ {1, . . . , p} such that

s = dim lin{aj : j ∈ J} + dim lin{bj : j ∈ {1, . . . , p}\J}.

Now let U = {x ∈ IRm : bjT x = 0 for all j ∉ J}. Then AU ⊆ lin{aj : j ∈ J} and

m − dim U + dim AU ≤ dim lin{bj : j ∉ J} + dim lin{aj : j ∈ J} = s.

Hence gr A ≥ m − dim U + dim AU for this U. Since the opposite inequality is obvious for every subspace U, this proves the theorem.

Our next theorem shows that if the matrix space is generated by rank 2 skew symmetric matrices then its generic rank is still determined by the constructions given in the previous section:

Theorem 3. Let A ⊆ IRn×n be a matrix space generated by skew symmetric matrices of rank 2. Then:
(a) If B is any n × n matrix and A1, . . . , Ak are spaces of skew symmetric n × n matrices with odd column range rank such that BT AB ∈ A1 + . . . + Ak for every A ∈ A, then

gr(A) ≤ 2(n − rk(B)) + (lrr(A1) − 1) + . . . + (lrr(Ak) − 1).

(b) There exist A1, . . . , Ak and B for which equality holds in (a).

This theorem again follows from a combinatorial result (Lovász 1980b), which is sometimes called the "matroid parity theorem".

Theorem 4. Let a1, b1, a2, b2, . . . , ap, bp ∈ IRn. Then the maximum number s of indices 1 ≤ i1 < . . . < is ≤ p such that the vectors ai1, bi1, . . . , ais, bis are linearly independent is given by

s = min ( dim W + ⌊(dim(H1 ∪ W) − dim W)/2⌋ + . . . + ⌊(dim(Hk ∪ W) − dim W)/2⌋ ),

where W ranges over all subspaces of IRn, {H1, . . . , Hk} ranges over all partitions of {1, . . . , p}, and Hi ∪ W denotes the subspace generated by {ai : i ∈ Hi}, {bi : i ∈ Hi} and W.

This theorem remains valid if we replace the real field by any other field.

On the other hand, it is not valid for general matroids. In fact, to compute the maximum in the theorem takes exponential time in general (Lovász 1980c, Jensen and Korte 1982). But there are some classes of matroids to which it extends. The following version can be formulated without using matroid theoretical notions: if we replace the vector space by an algebraically closed field, linear independence by algebraic independence, dimension by transcendence degree, and subspace by algebraically closed subfield, then the result corresponding to Theorem 4 holds (Dress and Lovász 1987). For a discussion of those matroids for which this min-max relation holds, see Dress and Lovász (1987).
To derive Theorem 3 from Theorem 4, consider the rank 2 skew symmetric matrices generating the space. These can be written as a1b1T − b1a1T, . . . , apbpT − bpapT, where each pair ai, bi is linearly independent. If ai1, bi1, . . . , ais, bis are linearly independent, then the matrix space spanned by the corresponding generators ai1bi1T − bi1ai1T, . . . , aisbisT − bisaisT has generic rank 2s, and so gr(A) ≥ 2s for the maximum s of Theorem 4. On the other hand, consider the subspace W and the partition {Hi} for which the minimum in Theorem 4 is attained. If we choose as B any n × n matrix with null space W, and as Ai the space generated by the matrices {BT(ajbjT − bjajT)B : j ∈ Hi}, then these satisfy the hypothesis of Theorem 3(a) and give the matching upper bound on gr(A).
We state one more class of matrix spaces for which the generic rank can be determined. These are the subspaces considered in Example 4, again with the restriction that the matrices in the definition have rank 2:

Theorem 5. Let A1, . . . , Ap be skew symmetric n × n matrices of rank 2. Let A consist of all n × p matrices that arise in the form [A1x, . . . , Apx], where x ranges through IRn. For every partition P = J1 ∪ . . . ∪ Jk of the index set {1, . . . , p}, let Ai denote the matrix space formed by the columns with indices in Ji. Then

gr(A) = min ( (lrr(A1) − 1) + . . . + (lrr(Ak) − 1) ),

where the minimum is taken over all partitions of the index set.
Again, there is a corresponding result in matroid theory that implies this
theorem. Variants of this were discovered by Edmonds (1970), Mason (1977) and Lovász (1977); see also Mason (1981). Let us say that a hyperplane H is in general position with respect to a family F of subspaces if there exists a subfield K of IR such that each of the given subspaces has a basis with coordinates in K and H is defined by an equation α1x1 + . . . + αnxn = 0 whose coefficients α1, . . . , αn are algebraically independent over K.
Theorem 6. Let F be a family of subspaces of IRn and H a hyperplane in general position with respect to F. Let Q be the subspace spanned by all subspaces of the form A ∩ H, A ∈ F. Then

dim Q = min ( (dim lin F1 − 1) + . . . + (dim lin Fk − 1) ),

where {F1, . . . , Fk} ranges over all partitions of F and lin Fi denotes the subspace spanned by the members of Fi.

a) Matchings. Perhaps the nicest combinatorial applications of the methods and results from the previous sections are in matching theory. The fundamental problem in this field is the following: given a graph G, decide whether or not it has a perfect matching, i.e., a set of disjoint edges covering every node.

This problem, by no means easy, has both mathematically beautiful and practically efficient solutions. For these, we refer e.g. to the monograph Lovász and Plummer (1986).
However, the connection between linear algebra and the matching problem
has gone through a reverse route. The first necessary and sufficient condition for the existence of a perfect matching in a bipartite graph was formulated in terms of linear algebra, and proved using methods from linear algebra, by Frobenius (1917). The problem itself grew out of the study of determinants. It was König (1916) who (in connection with an earlier, related result of Frobenius) observed that determinant problems can be formulated and handled in terms of graphs.

(For more on the history of this basic theorem, see Lovász and Plummer 1986.) Besides the necessary and sufficient condition which now occurs in every graph theory book (and is called the Marriage Theorem or the König Theorem), the work of König also implied the following (easy) result. Let G be a bipartite graph with bipartition {A, B}. Since we are interested in perfect matchings, we assume that |A| = |B|. Consider a variable xe for each edge e. Let FG = (fij) denote the matrix whose rows are indexed by the elements of A, whose columns are indexed by the elements of B, and in which fij = xe if the ith element of A and the jth element of B are joined by the edge e, and fij = 0 otherwise.
Theorem 7. A bipartite graph G has a perfect matching if and only if det FG is not identically 0.
Note that the matrices obtained by substituting for the xe in all possible ways form a matrix space generated by the rank 1 matrices having a single 1 in a position corresponding to an edge. Theorem 7 says that the graph has a perfect matching iff this matrix space is non-singular.

As Edmonds (1967) points out, this theorem can be used to obtain an easy randomized algorithm for bipartite matching: substitute random numbers for the variables and evaluate the determinant. This algorithm does not directly supply us with a perfect matching; but if we have a test for the existence of a perfect matching, it is easy to actually find one: for a given node v, we look for an adjacent node u such that G − u − v has a perfect matching (such a node u must exist if G itself has a perfect matching). Then we put the edge uv into the perfect matching and, recursively, find a perfect matching in G − u − v.

(This last trivial procedure breaks down in a parallel environment, as we shall see soon.)
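The test and the recursive extraction can be sketched as follows (our own encoding: the graph is a set of pairs between two node sets, the substitution is done modulo a large prime, and the routine may err with negligible probability):

```python
import random

P = 2**31 - 1  # a large prime; all arithmetic is done modulo P

def det_mod(M, p=P):
    """Determinant of an integer matrix modulo the prime p."""
    M = [[x % p for x in row] for row in M]
    n = len(M)
    det = 1
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = p - det  # a row swap flips the sign
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)
        for i in range(c + 1, n):
            f = M[i][c] * inv % p
            M[i] = [(a - f * b) % p for a, b in zip(M[i], M[c])]
    return det

def pm_test(edges, A, B):
    """Theorem 7 with random substitution: build F_G for the bipartite
    graph induced on node sets A and B and test its determinant."""
    A, B = sorted(A), sorted(B)
    if len(A) != len(B):
        return False
    if not A:
        return True
    ia = {a: i for i, a in enumerate(A)}
    ib = {b: j for j, b in enumerate(B)}
    F = [[0] * len(B) for _ in A]
    for (a, b) in edges:
        if a in ia and b in ib:
            F[ia[a]][ib[b]] = random.randrange(1, P)
    return det_mod(F) != 0

def find_pm(edges, A, B):
    """The self-reduction from the text: fix an edge ab such that the
    rest still passes the test, then recurse."""
    if not A:
        return []
    a = min(A)
    for b in sorted(B):
        if (a, b) in edges and pm_test(edges, A - {a}, B - {b}):
            rest = find_pm(edges, A - {a}, B - {b})
            if rest is not None:
                return [(a, b)] + rest
    return None

edges = {(0, 0), (1, 0), (1, 1)}
print(find_pm(edges, {0, 1}, {0, 1}))  # [(0, 0), (1, 1)]
```

As the text explains next, this edge-by-edge extraction is inherently sequential, which is exactly what fails in a parallel environment.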
It was shown by Lovász (1979) that the preceding algorithm can be extended to non-bipartite graphs, using a result in the groundbreaking paper of Tutte (1947). The main result of that paper is a celebrated combinatorial condition on the existence of a perfect matching, but along the way Tutte also proved the following algebraic condition. Given a graph G, associate again a variable xe with each edge e. Let TG = (tij) denote the skew symmetric matrix whose rows and columns are indexed by the node set V(G) = {1, . . . , n}, and in which tij = xe if i < j and e = ij ∈ E(G), tij = −xe if i > j and e = ij ∈ E(G), and tij = 0 otherwise.
Theorem 8. The graph G has a perfect matching if and only if det(TG) is not identically 0.
Again, the matrices obtained from TG by substitution form a matrix space;
in this case, this matrix space is generated by rank 2 skew symmetric matrices.
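A quick sketch of the Tutte matrix test (the modulus and function names are ours; note how, for an odd number of nodes, the determinant of the skew symmetric matrix vanishes identically, in accordance with Example 2):

```python
import random

P = 2**31 - 1  # a large prime; all arithmetic is done modulo P

def det_mod(M, p=P):
    """Determinant of an integer matrix modulo the prime p."""
    M = [[x % p for x in row] for row in M]
    n = len(M)
    det = 1
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = p - det  # a row swap flips the sign
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)
        for i in range(c + 1, n):
            f = M[i][c] * inv % p
            M[i] = [(a - f * b) % p for a, b in zip(M[i], M[c])]
    return det

def tutte_test(n, edges):
    """Theorem 8 with random substitution: build the Tutte matrix T_G
    with a random value for each variable x_e and test det(T_G)."""
    T = [[0] * n for _ in range(n)]
    for (i, j) in edges:
        r = random.randrange(1, P)
        T[i][j], T[j][i] = r, (-r) % P
    return det_mod(T) != 0

# A path on 4 nodes has a perfect matching; a path on 3 nodes does not.
print(tutte_test(4, [(0, 1), (1, 2), (2, 3)]))  # True
print(tutte_test(3, [(0, 1), (1, 2)]))          # False
```

For the 4-node path the determinant is the square of the Pfaffian x01·x23, so a nonzero substitution can never fool the test here; in general, false negatives occur only with negligible probability.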

One can derive the Marriage Theorem and Tutte’s Theorem from Theorems
1 and 3, respectively; but the proofs obtained are lengthy and not too attractive.

Also, one can use the remarks in section 1 to design a rather simple algorithm to test a graph for the existence of a perfect matching. The worst case running time of this algorithm (if we use advanced determinant evaluation techniques) is comparable with that of the best implementations of Edmonds' blossom algorithm and other combinatorial algorithms (about n2.5). But because these algorithms are always this slow, use randomization, and also run into numerical difficulties, they remained curiosities until the study of parallel algorithms became fashionable. It turns out that determinant evaluation can be parallelized essentially optimally: a polynomial number of processors can solve the problem in polylog time (Csanky 1976). Every algorithm known to date for testing the existence of a perfect matching that is this well parallelizable is based on this linear algebraic formulation of the problem.

Let us remark that even if this test tells us that the graph has a perfect matching, it does not give us one. There is a trivial recursive procedure to use this test in an algorithm to find a perfect matching, but this procedure is not parallelizable. Karp, Upfal and Wigderson (1986) showed in a very general setting that (with further randomization) such a matching test can be used to actually find a perfect matching (if one exists) in parallel. Mulmuley, Vazirani and Vazirani (1986) illustrated the power of linear algebraic techniques by designing a very nice and simple, more direct algorithm to find a perfect matching in a graph in polylog time, using a polynomial number of processors.

b) Structural rigidity. Let G be a graph. We want to realize G by a
bar-and-joint structure: the nodes of G will be represented by flexible joints, and adjacent nodes must be connected by rigid bars. Will the resulting structure be rigid?
This question is not yet well defined, since we can place the points in many
positions. Let us assume that we place them in general position, say, their coordinates are algebraically independent real numbers (in a sense this is the most "rigid" situation, but I do not want to go into the details of this theory).

It is easy to see that the rigidity of this "general position" structure does not depend on the choice of these coordinates.

Let us give a heuristic argument for the translation of this problem into linear algebra; the arguments can be made precise by using elements of the theory of differential equations. Suppose that we are working in the plane. Also suppose that our structure is not rigid, and consider a motion of it. Let xv(t) denote the position of node v at time t. Then the fact that the bars are rigid says that |xu(t) − xv(t)| is constant for every edge uv ∈ E(G); differentiating, we get

(xu(t) − xv(t)) · (ẋu(t) − ẋv(t)) = 0.

This may be viewed as a system of homogeneous linear equations on the velocity vectors ẋv of the nodes. This system always has non-trivial solutions: e.g., we may choose ẋv the same for all nodes, which corresponds to translating the structure. We also obtain a solution from rotating the structure. If the structure is non-rigid, we obtain further solutions, i.e., the solution space of the system is more than 3-dimensional. Conversely, one can show that if the solution space has dimension larger than 3, then the structure is non-rigid.

If we write out the matrix of the above system, we obtain an m × 2n matrix,
where m is the number of edges and n is the number of nodes of the graph.

There is one row corresponding to each edge and two columns corresponding to each node, one for the x-coordinate and one for the y-coordinate. The row corresponding to an edge uv has xv − xu in the position corresponding to xv, xu − xv in the position corresponding to xu, yv − yu in the position corresponding to yv, yu − yv in the position corresponding to yu, and 0 elsewhere. If we substitute all possible values for the xu and yu, we obtain a matrix space AG, and the structure is rigid if and only if gr(AG) = 2n − 3. (Exercise: show by algebra that ≤ always holds.)
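The rank computation behind this criterion can be sketched as follows (all names are ours; random integer coordinates stand in for generic, algebraically independent ones, and the rank is taken modulo a large prime):

```python
import random

P = 2**31 - 1  # a large prime

def rank_mod(M, p=P):
    """Rank of an integer matrix modulo the prime p."""
    M = [[x % p for x in row] for row in M]
    rows = len(M)
    cols = len(M[0]) if rows else 0
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        for i in range(r + 1, rows):
            f = M[i][c] * inv % p
            M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def rigidity_rank(n, edges, trials=3):
    """Rank of the m x 2n rigidity matrix at random integer positions;
    the best of a few trials guards against a degenerate draw."""
    best = 0
    for _ in range(trials):
        x = [random.randrange(1, 10**6) for _ in range(n)]
        y = [random.randrange(1, 10**6) for _ in range(n)]
        R = []
        for (u, v) in edges:
            row = [0] * (2 * n)
            row[2 * u], row[2 * u + 1] = x[u] - x[v], y[u] - y[v]
            row[2 * v], row[2 * v + 1] = x[v] - x[u], y[v] - y[u]
            R.append(row)
        best = max(best, rank_mod(R))
    return best

# K4 is generically rigid in the plane: the rank is 2n - 3 = 5.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(rigidity_rank(4, k4))  # 5 (with overwhelming probability)
```

Since translations and a rotation always lie in the kernel, the rank can never exceed 2n − 3; the random draw attains this bound except with vanishing probability.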

This matrix space has a very special structure. Consider its transpose, a 2n × m matrix, and the column corresponding to the edge e = uv. We introduce an orientation of the graph just for reference purposes. Let be denote the oriented incidence vector of this edge, i.e., the vector indexed by the nodes in which the uth coordinate is 1, the vth is −1, and the rest 0. Let x and y denote the column vectors formed by the first and second coordinates of the nodes, respectively.

Then the column corresponding to e is ((beT x)be, (beT y)be). Clearly, matrices arising this way form a linear space. Moreover, after an invertible change of coordinates, which changes no ranks, we can write the eth column in the form Aez, where z = (x, y) ∈ IR2n and Ae = [ 0, bebeT ; −bebeT, 0 ] is a skew symmetric matrix of rank 2.
So this matrix space has the structure of Example 4, with the Ai having rank 2. Hence Theorem 5 can be applied. We do not go through the details (see Lovász and Yemini (1982)), but formulate only the condition derived from this theorem (essentially equivalent to the results of Laman 1970):
Theorem 9. A graph G is generically rigid in the plane if and only if it has the following property: adding any new edge, the resulting graph has two edge-disjoint spanning trees.

c. Polynomial identities. Let f(x1, . . . , xn) be a polynomial with integral coefficients. Is f identically 0? This question has a trivial answer, taught in high school: the polynomial is identically 0 iff, eliminating all parentheses, all terms cancel. While for simple identities this is a very useful decision procedure, it may take time exponential in the length of the formula for f. For example, if we want to verify a trivial identity such as the expansion of the product (x1 + y1)(x2 + y2) . . . (xn + yn) into the sum of all products of one term chosen from each factor, then by eliminating the parentheses we obtain exponentially many terms (before they all cancel).
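Random substitution verifies an identity of exactly this kind with a handful of multiplications, while the expanded form has 2^n terms (a sketch; the helper names and the choice of modulus are ours):

```python
import random
from itertools import combinations

P = 2**31 - 1  # a large prime; we evaluate modulo P

def product_form(xs, ys, p=P):
    """The product (x1 + y1)(x2 + y2) ... (xn + yn): n multiplications."""
    out = 1
    for x, y in zip(xs, ys):
        out = out * (x + y) % p
    return out

def expanded_form(xs, ys, p=P):
    """The same polynomial expanded into its 2^n monomials: for each
    subset S of the factors, take x_i from the factors in S and y_i
    from the rest."""
    n = len(xs)
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            S = set(S)
            term = 1
            for i in range(n):
                term = term * (xs[i] if i in S else ys[i]) % p
            total = (total + term) % p
    return total

n = 10
xs = [random.randrange(P) for _ in range(n)]
ys = [random.randrange(P) for _ in range(n)]
print(product_form(xs, ys) == expanded_form(xs, ys))  # True
```

Here both sides agree on every substitution because the expression really is an identity; for a non-identity, a random substitution would expose the difference with high probability, which is the content of the Schwartz bound cited in section 1.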

It is not known whether there exists a polynomial time procedure to verify
polynomial identities! (If we allow randomization, then there is one: just substitute random values for the xi.) One can show, however, that the problem reduces to the singular subspace problem. In fact, Valiant (1979) describes a construction that represents every polynomial as the determinant of a matrix in which each entry is either a variable or a constant. Moreover, the construction can be carried out in polynomial time. After a trivial homogenization, this gives a matrix space, and the polynomial is identically 0 iff this space is singular.

Note that the determinant of every matrix space is of course a polynomial in
the variables x1, . . . , xn if we represent the typical matrix in the space as x1A1 + . . . + xnAn, where the Ai are the generators of the space. This polynomial, however, cannot in general be written as a formula of polynomial length (at least no such formula is known, and it is conjectured that no such formula exists). So matrix spaces arising in this construction are of a special kind.

a. The central problem in this area is, of course, to characterize singular
matrix spaces (or, more generally, to determine the generic rank of a matrix space). If the matrix space is given by its generators, and these generators are, say, integral matrices, then the problem of deciding whether the space is non-singular is, trivially, in NP from the point of view of complexity theory: if the space is non-singular, then this is easily exhibited by specifying a linear combination of the generators that is non-singular. Theorems 1, 3 and 5 describe special cases when the problem is also in co-NP. There is no hope to prove that the problem is NP-complete, since, as remarked at the beginning, there is a simple polynomial time randomized algorithm to solve it, i.e., the problem is in the class RP (and it is believed that RP ≠ NP).

b. A more modest program is to find further special cases for which the
problem is in co-NP or in P. Perhaps the most important from the point of view of applications would be to extend the application to rigidity theory to dimension 3. Which graphs have the property that the generic joint-and-bar structure realizing them in 3-space is rigid? (Of course, one could extend the problem to any dimension.)
The discussion generalizes from the plane and yields that the task is to
determine the generic rank of the space of matrices with 3|V(G)| rows and |E(G)| columns whose column corresponding to the edge e is ((beT x)be, (beT y)be, (beT z)be), where x, y and z are the coordinate vectors of the nodes. Unfortunately, no way is known to transform this matrix space into one of the types that are known to be solvable.

c. It is natural to ask if any of the solvable classes described in section 3
can be generalized. For example, can one characterize singular matrix spaces generated by rank 2 matrices? Or can one give a formula for the generic rank of a matrix space in the second (generalized) construction in Example 4, where the Ai are rank 2 skew symmetric matrices?
d. A very important special case is the problem of polynomial identities,
discussed in section 4. Is there a good way to recognize which polynomials are identically 0? Do matrix spaces arising from polynomials have a more natural description?
e. Note that if we formulate Theorems 3 and 5 in terms of 3-tensors, we obtain that they concern essentially the same class of tensors. Namely, they concern those 3-dimensional arrays (aijk) in which fixing i we obtain a rank 2 skew symmetric matrix. In one case, however, we consider the matrix space spanned by these slices, while in the other we consider the space of matrices with entries Σj aijk xj, obtained by contracting the second index. There does not seem to be any direct connection between the two results. Is there a connection between the matrix spaces obtained by fixing, or contracting, different indices of a general 3-dimensional array? If we can compute the generic rank of one, does it help in computing the generic rank of the other?
A. W. M. Dress and L. Lovász (1987), Pseudomodular lattices and continuous matroids.
L. Csanky (1976), Fast parallel matrix inversion algorithms, SIAM J. Comput. 5, 618-623.
A. W. M. Dress and L. Lovász (1987), On some combinatorial properties of algebraic matroids, Combinatorica 7, 39-48.
J. Edmonds (1967), Systems of distinct representatives and linear algebra, J. Res. Nat. Bur. Standards Sect. B 71B, 241-247.
J. Edmonds (1970), Submodular functions, matroids, and certain polyhedra, in: Combinatorial Structures and their Appl. (ed. R. K. Guy, H. Hanani, N. Sauer and J. Schönheim), Gordon and Breach, New York, 69-87.
G. Frobenius (1917), Über zerlegbare Determinanten, Sitzungsber. Königl. Preuss. Akad. Wiss.
P. M. Jensen and B. Korte (1982), Complexity of matroid property algorithms, SIAM J. Comput. 11, 184-190.
R. M. Karp, E. Upfal and A. Wigderson (1986), Constructing a perfect matching is in random NC, Combinatorica 6, 35-48.
D. König (1916), Über Graphen und ihre Anwendung auf Determinantentheorie und Mengenlehre, Math. Annalen 77, 453-465.
G. Laman (1970), On graphs and rigidity of plane skeletal structures, J. Engrg. Math. 4, 331-340.
N. Linial, L. Lovász and A. Wigderson (1988), Rubber bands, convex embeddings, and graph connectivity, Combinatorica 8, 91-102.
L. Lovász (1977), Flats in matroids and geometric graphs, in: Combinatorial Surveys, Proc. 6th British Comb. Conf., Academic Press, 45-86.
L. Lovász (1979), Determinants, matchings, and random algorithms, in: Fundamentals of Computation Theory, FCT'79 (ed. L. Budach), Akademie-Verlag, Berlin, 565-574.
L. Lovász (1980a), Selecting independent lines from a family of lines in a space, Acta Sci. Math. (Szeged).
L. Lovász (1980b), Matroid matching and some applications, J. Comb. Theory Ser. B 28, 208-236.
L. Lovász (1980c), The matroid matching problem, in: Algebraic Methods in Graph Theory (ed. L. Lovász and V. T. Sós), Coll. Math. Soc. J. Bolyai 25, North-Holland.
L. Lovász and M. D. Plummer (1986), Matching Theory, Akadémiai Kiadó, Budapest, and North-Holland, Amsterdam.
L. Lovász and Y. Yemini (1982), On generic rigidity in the plane, SIAM J. Alg. Disc. Meth. 3, 91-98.
J. H. Mason (1977), Matroids as the study of geometric configurations, in: Higher Combinatorics (ed. M. Aigner), Reidel, 133-176.
J. H. Mason (1981), Glueing matroids together: a study of Dilworth truncations and matroid analogues of exterior and symmetric powers, in: Algebraic Methods in Graph Theory (ed. L. Lovász and V. T. Sós), Coll. Math. Soc. J. Bolyai 25, North-Holland.
K. Mulmuley, U. Vazirani and V. Vazirani (1986), Matching is as easy as matrix inversion, Combinatorica 7, 105-113.
T. G. Room (1938), The Geometry of Determinantal Loci, Cambridge Univ. Press.
J. T. Schwartz (1980), Fast probabilistic algorithms for verification of polynomial identities, J. ACM 27, 701-717.
W. T. Tutte (1947), The factorization of linear graphs, J. London Math. Soc. 22, 107-111.
L. G. Valiant (1979), The complexity of computing the permanent, Theor. Comp. Sci. 8, 189-201.

Source: http://www.cs.elte.hu/~lovasz/morepapers/singular.pdf
