Algorithm Theory & Computation Theory
Lambda-Calculus and Combinatory Logic
A rather large piece of diamond, as history goes. Not the history of a war or a peace, but a history of "TYPES". As Ludwig Wittgenstein rightly wrote in the preface of his Tractatus Logico-Philosophicus, roughly: this book is for someone who has already contemplated similar thoughts and ideas; only then do they suddenly become the most beautiful and brilliant ideas that can come to mind. This piece of history is just like that. If you have thought about the realm of abstract entities, then the jewellery is yours.
(page 504) Here, in the definition of the generic Stack (whether realized as a list or as an array): if these realizations had properties that made them entirely unrelated to the items they hold, they could fail to obey laws such as pop(push(a, s)) = a, which are the very operations we want a stack for. Let us engage our imagination: what properties could list-stacks or array-stacks have that would prevent the operations we desire of stacks? For example, I wonder about the stack-size function: how could it be impossible to count the elements (items) contained in a list or in an array? Coming back, as always, to my interest in the intuitionistic logic of L. E. J. Brouwer: counting is the most fundamental property of mathematical understanding (and of operating on such entities), and one can argue that it is the most fundamental property of "understanding" in general. For an (organic) agent to be credited with anything resembling an act of intuition, that is, recognising or understanding something, and to differentiate two things that are completely alike except in one property (one recognisable, in principle, by the kind of agent under consideration), we must acknowledge Brouwer's first act of intuition: that, relative to the conditions of an agent, there exists a property enabling such an agent, say a species evolved on planet Earth over roughly four billion years, to make that distinction. A small code sketch of the point follows below.
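As a concrete anchor for the point above, here is a minimal sketch in Haskell (my own illustration, not taken from the paper; the names Stack, push, pop and size are mine) of a generic stack on which the law pop (push a s) = a holds by construction, and on which counting the items is always possible, whatever their type:

-- A generic stack, independent of the item type a.
data Stack a = Empty | Push a (Stack a) deriving (Show, Eq)

push :: a -> Stack a -> Stack a
push = Push

-- pop returns the most recently pushed item, so pop (push a s) = a
-- holds definitionally; it is partial on the empty stack, as in the
-- usual algebraic specification.
pop :: Stack a -> a
pop (Push a _) = a
pop Empty      = error "pop: empty stack"

-- The stack-size function: counting the items never depends on what
-- the items are.
size :: Stack a -> Int
size Empty      = 0
size (Push _ s) = 1 + size s

For example, size (push 1 (push 2 Empty)) is 2 and pop (push 'x' Empty) is 'x', regardless of the element type.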
Using this point (coincidence) in Roger Hindley's history of type theories: precisely the mapping of the components of Game-Theoretic Semantics (a type theory) onto the components of a basic type theory, in which "Games" are interpreted as (the semantic counterpart of) "Types", and "Strategies" are interpreted as (the semantic counterpart of) "Typed Terms" (typed by Games instead of Types).
In this setting of a type theory as a game semantics, which is deeply rooted in the correspondence between Propositions and Types, I want to make a philosophical connection in which organic agents in particular (though it applies to computer systems as well) serve as the "agent" that plays a "Strategy" (a Typed Term, typed by a Game). So here, in the formal language of Game Semantics (a type theory), we have an abstract category of all "Games" (Types) that will label (construct) Typed Terms (Strategies) to be used in the (ordinary) games of the "agents".
My main point here is a concept I call "An Intentional Semantic Jump", made with some accepted practical sacrifices (or better, a rational choice between sacrificing further detail (information) and sacrificing an ideal, namely a preciseness that is never achieved).
⟨ A⊤ , A⊥ , e : A⊤ × A⊥ → K ⟩
In my opinion, this definition captures the impossibility of 'infinite preciseness', and also the 'inevitability' that the realizer, or the intuition, of an 'agent' arrives at a fixed point or level at which it realizes (or agrees) that a certain type has the value K. The value K is the dropped coin of a 'realizer' fluctuating between A (top) and A (bottom), the boundaries of a relativistic 'realisability'. This concept, I think, can also be used in the definition of 'equality' in general.
Consider Wadsworth's result that "a term M has no head normal form if and only if it is unsolvable", together with the closely related statement about the Lambda(I)-calculus proved by Barendregt, that "a term is unsolvable if and only if it has no normal form."
From these two statements I want to draw an informal, intuitive conclusion (see the sketch after it):
If we are not able to reduce a sequence of terms and symbols (including operational symbols) of a language under consideration to some normalised sequence of symbols (of that language), then the original terms (and symbols) are unsolvable.
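To make the two quoted results concrete, here is a small Haskell sketch (my own, not from the paper; Term, subst, headStep and omega are my names) of untyped lambda terms with one step of head reduction. The classic unsolvable term Omega = (\x. x x)(\x. x x) head-reduces to itself forever, so it has no head normal form and no normal form:

-- Untyped lambda terms.
data Term = Var String | Lam String Term | App Term Term deriving (Show, Eq)

-- Substitution, simplified: it assumes no variable capture can occur,
-- which holds for the Omega example below.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   | x == y    = s
                    | otherwise = Var y
subst x s (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One step of head reduction, if a head redex exists.
headStep :: Term -> Maybe Term
headStep (App (Lam x b) a) = Just (subst x a b)
headStep (App f a)         = fmap (`App` a) (headStep f)
headStep (Lam x b)         = fmap (Lam x) (headStep b)
headStep _                 = Nothing

-- Omega = (\x. x x)(\x. x x)
omega :: Term
omega = App w w where w = Lam "x" (App (Var "x") (Var "x"))

Indeed headStep omega evaluates to Just omega, so iterating head reduction never reaches a term without a head redex: the normalised "face" of the statement is never found, and the term is unsolvable.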
I would rather say: if we cannot disambiguate the face (the head normal form, or simply the symbolic representation) of some statement, then we are equally unable to solve it. I think this is the main purpose of formal, computationally enabled languages. In L. E. J. Brouwer's terms, the very operation and strength of mathematical formalism is to empty it of emotional content, and that can also mean (as a possibility or potentiality) letting the monster out! Again, according to Brouwer. The great Brouwer! who (in my opinion) invented and grounded 'Algorithm Theory' and computing by theorising the very basis of 'computation theory', through his foundational work in intuitionistic mathematics and logic, which made a constructive mathematics possible.

It is equally astonishing that, at the same time as he laid the foundations of algorithm and computation, of 'Computational Category Theory' and of intuitionistic (and later computational) 'Type Theories', and while the analyses of his followers made possible the modifications of classical 'Set Theory' that render it constructive and computationally sound, we also come to have Brouwer's shocking statements, indeed predictions, about "our mathematically-enabled world" or, for example, "our software-defined-everything world". In the introduction to his PhD thesis (unpublished, and cut by his supervisor), and also in his short book of 1905, titled "Life, Art and Mysticism", he clearly states (I do not remember the exact wording) that it rests on the shoulders of the community of mathematicians to avoid applied mathematics and to practise it as a pure intellectual passion. He foresaw, as clearly as the midsummer noon sun, that slipping into that trap would amount to fuelling man's desire for power and the abuse of nature; in his thinking, exploitation by the military-industrial complex is one example. Under this scenario, Brouwer sees nothing less than a total corruption of man and of his world (nature).
(page 10) In Church-style type theory, a term f^(a→b) (a term f annotated with the type a → b) has a → b as its unique type. That is, the semantics of having a unique type is precisely having exactly one transformation rule, such as: given (a), output (b). In general it is also true that the meaning of a type (whether in Curry or in Church style) is its transformation rule. However, Curry's approach to types differs from that of Alonzo Church and Bertrand Russell, in that he regards types as annotations or labels that we assign to the terms of a language in order to describe their behaviour.
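A tiny Haskell sketch of that contrast (my own illustration; the names idCurry, idAtInt and idAtBool are mine, and Haskell's type annotations only approximate Church-style typing):

-- Curry style: the bare term \x -> x is written once and can be assigned
-- many types; the most general one, a -> a, is inferred.
idCurry :: a -> a
idCurry = \x -> x

-- Church style (approximated by explicit annotations): each term carries
-- its type, so the "same" lambda written at Int and at Bool is, strictly
-- speaking, a different typed term with exactly one type.
idAtInt :: Int -> Int
idAtInt = \x -> x

idAtBool :: Bool -> Bool
idAtBool = \x -> x

In the Curry reading, both idCurry (3 :: Int) and idCurry True are well typed uses of one and the same term; in the Church reading, idAtInt and idAtBool are distinct terms, each with its unique transformation rule.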
(page 11) Re-reading Howard's correspondence, also called formulae-as-types, I have noticed that it also matches my definition of the meaning of a term (or its semantics), which is "the description of its transformations, or of its steps to/between a series of abstraction layers, from the point of view of a certain agent". I then think formulae-as-types can be translated into something very close to a "dependently typed" formalism.
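A short Haskell sketch of formulae-as-types at the simple propositional level (my own illustration, not the dependently typed setting the note points towards; the names refl, mp, weaken and trans are mine): reading -> as implication, each closed, total term below is a proof of the formula its type expresses.

-- A -> A : proved by the identity term.
refl :: a -> a
refl x = x

-- (A -> B) -> A -> B : modus ponens is just function application.
mp :: (a -> b) -> a -> b
mp f x = f x

-- A -> (B -> A) : the K combinator, i.e. weakening.
weaken :: a -> b -> a
weaken x _ = x

-- (A -> B) -> (B -> C) -> (A -> C) : transitivity of implication,
-- i.e. function composition.
trans :: (a -> b) -> (b -> c) -> (a -> c)
trans f g = g . f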
(page 41) In the work of Böhm's students, namely Dezani and Coppo, each normal form was assigned a finite sequence (m, j, h1, ..., hn) or (omega, h1, ..., hn) as its type, which essentially encodes a description of its computational behaviour.
Hossein M. Bojnordi
Primary Reference:
'History of Lambda-calculus and Combinatory Logic', Felice Cardone and J. Roger Hindley, 2006
Created and authored by Hossein Mousavi Bojnordi
e-mail: bojnourdi@gmail.com