When the Lambda Calculus and the Turing Machine were introduced around 80 years ago, the concept of a high-level programming language did not exist, and there was no thought of developing compilers. It is noteworthy that since then, no compiler for a high-level language has been developed for the standard models, in order to provide them with a simple semantics. It is argued that the standard models are not suited to running complex software in workstation simulations in reasonable amounts of space and time, due to their adverse complexity characteristics. Neither can efficiently support, in terms of its primitive steps, random access memory or the parallel composition of modules, both of which are needed for the execution of complex, high-level computation involving parallelism.


The Turing Machine.


Delta-complexity, the number of tuples in a Turing Machine (TM) instruction table, becomes a relevant quantity for simulation when one considers the implementation of the parallel composition of programs, and of random access memory. The only way to make a parallel composition of n identical single-tape TMs with q states each, is to create a new TM with n tapes and q^n states, resulting in an exponential increase in the number of states wrt n. If we consider the Space program in Fig 1 of the paper linked at the bottom of this page, which parallel composes 65536 adders, then a parallel composition of TM adders would result in an instruction table with q^65536 states -- rather more than the number of particles in the universe, and clearly impossible to write out in any practical context.
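A minimal sketch of the state blow-up, assuming an invented toy transition table with q = 3 control states (none of this is drawn from the paper): the synchronous product of n copies of the machine has a state set equal to the n-fold Cartesian product of the component state sets, so its size grows as q^n.

```python
import math
from itertools import product

# Toy single-tape machine with q = 3 control states over a binary alphabet.
# The transition entries are arbitrary placeholders; only the table's shape matters here.
q_states = ["s0", "s1", "s2"]
symbols = ["0", "1"]
delta = {(s, a): (s, a, "R") for s in q_states for a in symbols}

def product_state_count(n):
    """Number of states in the synchronous product of n copies of the machine."""
    return len(list(product(q_states, repeat=n)))

for n in (1, 2, 3, 8):
    # The combined state count is q**n: exponential in the number of composed machines.
    assert product_state_count(n) == len(q_states) ** n
    print(n, len(q_states) ** n)

# For the 65536-adder composition discussed above, even q = 3 would give a
# product machine with 3**65536 states -- a number with over 31,000 decimal digits.
print(math.floor(65536 * math.log10(3)) + 1)
```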


It is natural to suppose that multi-tape TMs provide some means of implementing RAM efficiently. Unfortunately, access to storage in multi-tape TMs is constrained by tape cursors only being able to move one tape cell per machine step. Suppose some scheme is devised for a finite, addressable register array, with r registers each composed of a constant number of bits, to occupy r tapes. How might a TM be organised that accepts as input the register array, together with an integer i on a separate tape, and generates on an output tape the contents of the ith register? I invite you to find a TM with sub-linear running time that does not have an exponential number of instruction tuples wrt r. TM compilers for high-level languages are not practical. For more detail, see 9.3 of the arxiv link.
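The access-cost asymmetry being claimed can be sketched as follows; the register contents and the step counting below are invented purely for illustration. A cursor that moves one cell per step needs on the order of i steps to reach register i, whereas a random access memory reads any register in a constant number of primitive steps.

```python
# Hypothetical register file, used only to illustrate the cost comparison.
registers = [7, 1, 4, 9, 3, 8, 2, 6]

def cursor_read(i):
    """Read register i by moving a cursor one cell at a time from position 0."""
    steps, position = 0, 0
    while position < i:
        position += 1   # one tape-cell move per machine step
        steps += 1
    return registers[position], steps

def ram_read(i):
    """Random access: cost is a single primitive step, independent of i."""
    return registers[i], 1

for i in (0, 3, 7):
    print(i, cursor_read(i), ram_read(i))
# The cursor read costs i steps per access; the RAM read always costs 1.
```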


The Lambda Calculus.


In the case of the Lambda Calculus, the problem derives from the lack of a denotational or abstract machine environment, and from the adoption, as the syntax for the formalism, of a context-free grammar with high structural variability, in which any branch of the syntax tree may be indefinitely long. Various abstract or real machines with explicit memory could be proposed for lambda evaluations, but they add another layer of complexity. How would they deal with the further difficulty that attempts to implement non-trivial lambda reductions to normal form as a computer simulation have resulted in an explosion of storage requirements and execution time, as lamented by Wadsworth in his PhD thesis on page 136? For more detail, see 9.4 of the arxiv link.
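The storage explosion can be illustrated with Church numerals; this Python encoding is only a sketch of the general phenomenon, not a reconstruction of Wadsworth's experiments. The expression exp 2 k occupies O(k) symbols, but its normal form is the Church numeral for 2^k, a term containing 2^k applications, so the reduced term is exponentially larger than the input.

```python
# Church numerals as Python callables, to illustrate how normal forms blow up.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def church(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

# Church exponentiation: exp m n reduces to the numeral for m**n.
exp = lambda m: lambda n: n(m)

def normal_form_size(numeral):
    """Count applications of f in the fully reduced numeral: a lower bound on
    the size of its normal form as a lambda term."""
    count = 0
    def f(x):
        nonlocal count
        count += 1
        return x
    numeral(f)(None)
    return count

two = church(2)
for k in (4, 8, 16):
    result = exp(two)(church(k))           # input expression of size roughly O(k)
    print(k, normal_form_size(result))     # normal form contains 2**k applications
```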


Unfortunately, graph rewriting of lambda graphs is not a feasible solution, because a single application of a rewriting rule to a graph involves finding a subgraph that matches the input graph associated with the rule. Searching for such a subgraph is the subgraph isomorphism problem, which is known to be NP-complete. The most basic computational step in graph rewriting therefore needs an exponential amount of time wrt graph size.


The Y-combinator is a lambda term that enables a form of recursion to be introduced into lambda programs, which is essential for full programmability, given that iteration is impossible in the calculus -- a feature that is reflected in functional programming, in which all iteration is absorbed into recursion. From one perspective, this might appeal, and improve programmability.
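A minimal sketch of the idea in Python (an illustration, not from the source): because Python is strictly evaluated, the eta-expanded Z variant of the fixed-point combinator is used, and the factorial step function is an invented example. The combinator supplies recursion to a function that never refers to itself by name.

```python
# Z combinator: a fixed-point combinator usable under strict evaluation
# (the standard Y combinator would diverge here).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step" function: given a handle to the recursive call,
# it describes one layer of the factorial computation.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(fact_step)
print(factorial(5))   # 120, with no named self-reference anywhere
```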


Functional Programming, put forward as a solution for parallel programming.


From the point of view of the runtime environment, however, recursive procedures are costly to implement. Each recursive module call requires a significant amount of stack-based housekeeping, affecting space requirements and running times. Implementing a recursive call in a parallel context is more costly still, because it involves forking more modules, allocating them to machine resources, and scheduling and co-ordinating their activity. Although it is possible for a compiler to convert some recursion into iteration, there is not, as far as I am aware, a generalised procedure available that does not impose a space overhead -- not desirable if high-performance parallelism is an objective.
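A small Python sketch of the space contrast, using an invented summation example: the naive recursive form consumes one stack frame per element, while the iterative form a compiler would ideally derive runs in constant stack space (Python itself performs no tail-call elimination, so deep recursion simply overflows).

```python
import sys

def sum_recursive(xs):
    """Naive recursion: one stack frame per element, so space grows with input size."""
    if not xs:
        return 0
    return xs[0] + sum_recursive(xs[1:])

def sum_iterative(xs):
    """The iterative equivalent: constant stack space, same result."""
    total = 0
    for x in xs:
        total += x
    return total

print(sum_recursive(list(range(100))))       # 4950
print(sum_iterative(list(range(100_000))))   # fine: no stack growth
print(sys.getrecursionlimit())               # typically 1000; sum_recursive on
                                             # 100_000 elements would exceed it
```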


The features that have been suggested as aiding parallelism (referential transparency, strict typing, etc.) are really more pertinent to verifiability, and can be replicated in alternative approaches that are not chained to recursion. In addition, parallel functional programming suggests no new machine model, and is fully abstracted from the underlying hardware. Over four decades it has struggled to become popular, because of what appear to be inherent problems -- a phenomenon that has contributed to a growing awareness that parallel programming cannot be completely decoupled from the machine model. Even though functional programming is suited to formal methods, its inability to support parallelism well suggests, in my view, that there is no long term future for the paradigm. I believe SC has the potential to displace FP from its ecological niche. Not only is Space encapsulated, applicative, capable of supporting polymorphous modules, and oriented to formal methods; it is also much better suited to parallel programming.



The Curry-Howard isomorphism.


The Curry-Howard isomorphism is a means of relating proofs in various conventional tree-based logics, to terms in various extensions of the conventional tree-based Lambda Calculus. Can one assume that the difficulty of expressing high-level programs, and consequently high-level proofs, in the Lambda Calculus does not place a limit on the practical utility of the Curry-Howard isomorphism?
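For readers unfamiliar with the correspondence, a small Python sketch (an illustration with invented names, not part of the argument): under Curry-Howard, implication is read as the function type, and a total function inhabiting a type is a proof of the corresponding proposition.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    """A proof of (A -> B) -> ((B -> C) -> (A -> C)): given the two hypotheses,
    composing them yields the conclusion."""
    return lambda g: lambda a: g(f(a))
```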


I would like to argue that the overall agenda of type theory is constrained by its focus on standard tree-based logics and calculi. A future paper will describe how simulation-efficient compute models in the Alpha Ram family can provide a platform for spatialising logic and mathematics, and thereby provide an alternative means of realising mathematical argument as a form of computation. Complex structures, procedures, and proofs could be brought into computational life as abstract data structures, virtual algorithms, and interstring-based logic programs defined in terms of a primitive formal model. The intention is to allow the use of only enough standard set theory and informal logic for the definition of a low-level, generic, synchronic model of computation. The Synchronic B-Ram defined in 3.2.2 of the linked report fulfills this requirement. It is proposed that the following steps are then taken:


(i) The construction of a native, high-level interlanguage programming environment, able to support virtual programs and abstract data types.

(ii) The definition of a prenexed and interstring-based form of predicate logic called interlogic, together with a deterministic parallel deduction algorithm that is executable on the model.

(iii) Finally, the recasting of the rest of mathematical discourse concerning structures derived from computable algebras, as interstring-based reasoning about virtual programs and abstract data types.


An 8-page pdf introduction to a finite, simulation-efficient model and programming interlanguage may be found at SSC.pdf.


Picture Credit: -Untitled- Analogue Photo 1993 Alex Berka.