Parallel computation with threshold functions

Preliminary version

  • Conference paper
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 223)

Abstract

We study two classes of unbounded fan-in parallel computation: the standard one, based on unbounded fan-in ANDs and ORs, and a new class based on unbounded fan-in threshold functions. The latter is motivated by a connectionist model of the brain used in Artificial Intelligence. We are interested in the resources of time and address complexity. Intuitively, the address complexity of a parallel machine is the number of bits needed to describe an individual piece of hardware. We demonstrate that (for WRAMs and uniform unbounded fan-in circuits) parallel time and address complexity are simultaneously equivalent to alternations and time on an alternating Turing machine (the former to within a constant multiple, and the latter a polynomial). In particular, for constant parallel time, the latter equivalence holds to within a constant multiple. Thus, for example, polynomial-processor, constant-time WRAMs recognize exactly the languages in the logarithmic time hierarchy, and polynomial-word-size, constant-time WRAMs recognize exactly the languages in the polynomial time hierarchy. As a corollary, we provide improved simulations of deterministic Turing machines by constant-time shared-memory machines. Furthermore, in the threshold model, the same results hold if we replace the alternating Turing machine with the analogous threshold Turing machine, and replace the resource of alternations with the corresponding resource of thresholds. Threshold parallel computers are much more powerful than the standard models (for example, with only polynomially many processors, they can compute the parity function and sort in constant time, and multiply two integers in O(log* n) time), and appear less amenable to known lower-bound proof techniques.
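To illustrate the gap the abstract points to: parity requires unbounded depth in polynomial-size constant-depth AND/OR circuits, yet a depth-2 threshold circuit computes it easily, since EXACTLY-k is the conjunction of a threshold gate and a negated threshold gate. The following sketch (not from the paper; gate and function names are illustrative) simulates such a circuit:

```python
def threshold(xs, k):
    """Unbounded fan-in threshold gate: fires iff at least k inputs are 1."""
    return int(sum(xs) >= k)

def parity_via_thresholds(xs):
    """Parity of a bit vector via a depth-2 threshold circuit.

    EXACTLY-k(x) = T_k(x) AND NOT T_{k+1}(x); at most one such term
    fires, so summing over odd k acts as a disjoint OR.
    """
    n = len(xs)
    return sum(threshold(xs, k) * (1 - threshold(xs, k + 1))
               for k in range(1, n + 1, 2))
```

The circuit has O(n) gates and constant depth regardless of n, which is exactly the kind of constant-time, polynomial-processor computation the abstract attributes to the threshold model.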

Research supported by NSF grant DCR-84-07256.


Editor information

Alan L. Selman

Copyright information

© 1986 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Parberry, I., Schnitger, G. (1986). Parallel computation with threshold functions. In: Selman, A.L. (eds) Structure in Complexity Theory. Lecture Notes in Computer Science, vol 223. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-16486-3_105

  • DOI: https://doi.org/10.1007/3-540-16486-3_105

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-16486-9

  • Online ISBN: 978-3-540-39825-7
