P55671 Lambda-Calculus and Combinators: An Introduction link reply
Recently I've started reading into λ-calculus, mostly with the pdf attached, along with some smaller reading into Kurt Gödel's essay on recursive functions and Alan Turing's on Turing machines. And I have a few questions.

1, So as the book mentions and proves in appendix A3, every term of the simply typed lambda calculus is SC (strongly computable) and hence SN (strongly normalizable). And since there are clear induction rules under [tex: \lambda\beta\rightarrow], we can determine within a finite amount of time whether a given λ term can or cannot be typed. The logical conclusion from this is that there are λ terms that are not typeable under simply typed λ-calculus but are still SN (or at least WN).
One such example that comes to mind is (λx.xx) (λx.x), which has one path of normalization ending in (λx.x), but I'm not familiar enough with the subject to make this call.
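To spell out the reduction I have in mind (ordinary [tex: \beta]-steps):
(λx.xx) (λx.x) [tex: \rightarrow_\beta] (λx.x) (λx.x) [tex: \rightarrow_\beta] (λx.x)
So the term normalizes even though it contains the self-application xx.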


2, The Church-Rosser theorem says that
>[tex: \beta]-nf of a given term is unique modulo [tex: \equiv_\alpha]
So let Ω be the minimal non-normalizing term (λx.xx) (λx.xx), which reduces only to itself, and let L be (λx.xxy)(λx.xxy). How does the Church-Rosser theorem apply to:
(λx.y) Ω
or
(λx.y) L
For both, reducing the outer redex takes you straight to the 'y' atom, but the argument has an infinite chain of reductions, and at any point in that chain the whole term can still collapse back into the constant 'y'. What are we supposed to make of this? Or I suppose for any given reduction you can make, there will in fact, always exist another reduction you can make.
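To make the first one concrete, at every stage there are exactly two kinds of [tex: \beta]-step available:
(λx.y) Ω [tex: \rightarrow_\beta] y (contract the outer redex; x doesn't occur in the body, so Ω is simply discarded)
(λx.y) Ω [tex: \rightarrow_\beta] (λx.y) Ω (contract the redex inside Ω, which reproduces Ω unchanged)
The second choice can be repeated forever; the first terminates immediately.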


3, Turing machines and λ-calculus are supposed to be equivalent, at least to a naive outsider. However, valid questions came up when I looked into an algorithm for converting between the two. The gist is that a Turing machine cannot take anything but input encoded as natural numbers, so it cannot accept a non-encoded higher-order function, yet it can run analysis on the inside of an encoded function, which is something λ-calculus is unable to do. That seems to make some set of programs non-translatable between the two, which wasn't supposed to happen.
Also on this note, if there aren't clear conversion algorithms between Turing machines and λ-calculus, are there conversion algorithms between actual code and Turing machines? A quick search turns up nothing, but I remember that Brainfuck is an extension of P'', which is explicitly modeled on a Turing machine. But other than Brainfuck, how would one go about, for example, translating a given scope within a Lua program?

4, This is a bit overused and meme-y, but why are there so few uses of λ-calculus? It's used inside the field of mathematics, in obscure linguistic research, and in programming languages that nobody in the wider population uses (no offense to anyone, being a Haskell user myself). Or is this just the engineers taking over the mathematicians' job and not caring about century-old weird models of computation that a few nerds and university professors in the 60s started using?
P55672 link reply
>Or I suppose for any given reduction you can make, there will in fact, always exist another reduction you can make.
That came out bad.
>Or I suppose for any given reduction you can make to continue the infinite chain of reductions, there will in fact, always exist another reduction you can make to bring it back to the 'y' atom.
P55690 link reply
P55671
I haven't studied this stuff any deeper than you (in some parts I think you've gone deeper than me), but I'll see if I can help where I can.

>One such example that comes to mind is (λx.xx) (λx.x), which has one path of normalization ending in (λx.x), but I'm not familiar enough with the subject to make this call.
I think you're right. Anything with (λx.xx) in it can't be typed because it's applying a function to itself. If that function has type A -> B where A and B are type expressions, then in order to apply it to itself A must be the same thing as A -> B, which isn't possible.
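A minimal sketch of that failing unification in Haskell, if it helps; the SimpleType representation and the occurs check here are my own toy illustration, not anything from the book:

-- Simple types: type variables and arrow types.
data SimpleType = TVar String | TArr SimpleType SimpleType
  deriving (Eq, Show)

-- occurs a t is True when the type variable a appears anywhere inside t.
occurs :: String -> SimpleType -> Bool
occurs a (TVar b)   = a == b
occurs a (TArr s t) = occurs a s || occurs a t

-- Try to solve the equation a = t for a type variable a.
unifyVar :: String -> SimpleType -> Maybe SimpleType
unifyVar a t
  | t == TVar a = Just t    -- trivially a = a
  | occurs a t  = Nothing   -- occurs check: a = ... a ... has no finite solution
  | otherwise   = Just t

-- Typing (λx.xx) needs A = A -> B, and that is exactly the case that fails:
-- unifyVar "A" (TArr (TVar "A") (TVar "B"))  ==>  Nothing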

>2, The Church-Rosser theorem says that
>>[tex: \beta]-nf of a given term is unique modulo [tex: \equiv_\alpha]

As I understand it the uniqueness of the normal form is an obvious consequence of the Church-Rosser theorem, and the Church-Rosser theorem itself is the diamond property: If I can turn A into B by some sequence of [tex: \beta]-reductions, and I can also turn A into C by some other sequence of [tex: \beta]-reductions, then I can turn B and C into the same thing D (up to [tex: \equiv_\alpha]) by applying some sequence of [tex: \beta]-reductions to each. (This makes more sense with a picture so I attached a crude picture.)

The reason this implies the normal form is unique goes like this: Assume to the contrary A has two distinct normal forms B and C. Then I can reduce both to D. Since the sequences of [tex: \beta]-reductions that make B into D or C into D are allowed to be empty, D could be the same as B or C, but by our assumption it can't be the same as both. Whichever of B and C is distinct from D cannot be a normal form because it can be reduced to D. This contradicts our assumption, so the normal form must be unique.
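A tiny concrete instance of the diamond (my own example): take A = (λx.x)((λy.y) z). Contracting the outer redex first gives B = (λy.y) z, contracting the inner one first gives C = (λx.x) z, and one more step turns either of them into D = z.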

>How does the Church-Rosser theorem apply to:
>(λx.y) Ω
>or
>(λx.y) L

This calls for more diagrams.

>Or I suppose for any given reduction you can make, there will in fact, always exist another reduction you can make.
I think you've got the idea here; this is exactly the Church-Rosser theorem applied to the case where B is a normal form (and therefore we must have D = B since no further [tex: \beta]-reductions can be applied to B).
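Instantiated for the first of your terms: A = (λx.y) Ω, B = y (contract the outer redex), and C = (λx.y) Ω itself after however many steps inside Ω. The theorem promises a common reduct D, and since B = y has no redexes, D must be y; so no matter how long you keep reducing inside Ω, a reduction back down to y always remains available.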

>The gist is that a Turing machine cannot take anything but input encoded as natural numbers, so it cannot accept a non-encoded higher-order function, yet it can run analysis on the inside of an encoded function, which is something λ-calculus is unable to do. That seems to make some set of programs non-translatable between the two, which wasn't supposed to happen.
You may not be able to write a λ-calculus expression that looks inside a higher-order function, but you can encode that higher-order function as a syntax tree which λ-calculus expressions can look inside.
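A rough sketch of what I mean in Haskell; the Term type and countLams are just my illustration (in the pure calculus you'd do the same thing with a Church- or Scott-encoded tree):

-- A λ-term represented as plain data, so programs can inspect its structure.
data Term
  = Var String      -- a variable, e.g. x
  | Lam String Term -- an abstraction, e.g. λx. body
  | App Term Term   -- an application, e.g. f a
  deriving Show

-- An "analysis on the inside" of an encoded function:
-- count how many abstractions the term contains.
countLams :: Term -> Int
countLams (Var _)   = 0
countLams (Lam _ b) = 1 + countLams b
countLams (App f a) = countLams f + countLams a

-- Example: the encoding of (λx.xx) (λx.x) contains two abstractions.
example :: Term
example = App (Lam "x" (App (Var "x") (Var "x"))) (Lam "x" (Var "x"))
-- countLams example == 2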

>why are there so few uses of λ-calculus?
Whether that's true depends on what you mean. Pure λ-calculus is a Turing tarpit. On the other hand, function expressions are ubiquitous.