How many n-state Turing machines are there?
These, in turn, are known to be able to emulate any conventional model of Turing machine. Looking at the first page of the linked PDF, it's not explicit about whether the alphabet must be finite, although that's implied by the summary saying that addition can only be done modulo some finite value.
It is explicit about having an arbitrary alphabet, a single tape that's infinite in both directions, and a head that has to move on every step. I don't think I've ever seen anyone restrict a Turing machine in a novel way and not follow that up with a rigorous proof.

Not arbitrary: it's modulo some constant k, the size of the tape alphabet.
It's just a lookup table. The transition function can only do modular addition within a cell, but the paper also describes how to build a k-ary counter: if your single state has the increment lookup table for a single cell, you just need to move to the next cell to do carries, and move back to the previous cell to leave the counter when you're done.
A special carry character is written instead of zero to make sure you leave the whole counter, and you convert it back to zeroes as you leave.
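Roughly, in Python (a sketch of the idea as I read it, not the paper's construction; K and the CARRY marker are illustrative choices):

    # Increment a little-endian base-K counter on a tape: bump the current
    # cell mod K; on wrap-around, write a CARRY marker instead of zero and
    # move right, then sweep back left converting CARRY markers to zeroes.
    K = 10          # hypothetical alphabet size
    CARRY = "c"     # hypothetical carry marker, distinct from digits 0..K-1

    def increment(tape, head=0):
        while True:
            if head == len(tape):
                tape.append(0)          # extend the tape with a blank cell
            if tape[head] == K - 1:
                tape[head] = CARRY      # wrapped: fix it up on the way back
                head += 1
            else:
                tape[head] += 1
                break
        while head > 0:                 # leave the counter, clearing markers
            head -= 1
            if tape[head] == CARRY:
                tape[head] = 0
        return tape

    print(increment([9, 9, 3]))        # little-endian 399 -> [0, 0, 4] = 400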
In the Turing machine formalism that was taught at my university,[1] the set of symbols that can be stored in a cell must be finite, but can otherwise be anything needed to solve the problem at hand. This means, in particular, that you can annotate symbols on the tape by having different variants of each input symbol, so long as the number of variants is strictly bounded.

I've never understood what the fascination with Turing machines is, or what they actually are.

Coding bum, here! Wait, no, wrong hat. Computer scientist, here! So, once upon a time, Alan Turing needed a formal model of computation, as in: what a mathematician could do to solve a problem with a sufficient supply of pencils and paper.
In the simplest model, the machine starts at one end of the tape, which is unbounded in the other direction. There are two neat facts about this machine: (1) the Turing machine is universal: you can construct a machine that takes as input a description of any other Turing machine and its input, and performs the same computation as the other machine; (2) you can't extend the machine to get more power. Making the tape unbounded in both directions? Adding multiple tapes? A two-dimensional tape? Nope, all can be emulated by a standard machine.
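For the curious, here is a minimal sketch (my own, not from any particular textbook) of that simplest model in Python: a tape closed at the left end, a head, and a finite transition table:

    BLANK = "_"

    def run(delta, tape, state="start", max_steps=10_000):
        """delta maps (state, symbol) -> (new_state, new_symbol, move in {L, R})."""
        tape, head = list(tape), 0
        for _ in range(max_steps):
            if state == "halt":
                return "".join(tape).rstrip(BLANK)
            if head == len(tape):
                tape.append(BLANK)       # the tape is unbounded to the right
            state, tape[head], move = delta[(state, tape[head])]
            head += 1 if move == "R" else -1
            head = max(head, 0)          # the left end is closed
        raise RuntimeError("no halt within the step budget")

    # Example machine: flip every bit, then halt at the first blank.
    FLIP = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", BLANK): ("halt", BLANK, "R"),
    }
    print(run(FLIP, "10110"))   # 01001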
You can also reduce the power of a Turing machine: remove the tape entirely and you have a finite state machine;[2] limit the use of the tape to the most recently written symbol and you have a stack machine.
On one hand, those finite state machines are equivalent to regular languages, stack machines are equivalent to context-free languages, and Turing machines are equivalent to recursively enumerable languages. On the other hand, Turing machines are equivalent to the other "reasonable" definitions of computation: general recursive functions and the lambada calculus, for example. In addition to the Turing machine not being extensible for more power, all of the models of computation that are more powerful "cheat" by, for example, being able to do an unbounded amount of computation in a single step.
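To make the first of those correspondences concrete, a tiny sketch (my own example, not from the thread): a tapeless, two-state machine accepting the regular language of binary strings with an even number of 1s:

    ACCEPTING = {"even"}
    DELTA = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd",  "0"): "odd",  ("odd",  "1"): "even",
    }

    def accepts(s):
        state = "even"               # no tape: the state is all the memory
        for ch in s:
            state = DELTA[(state, ch)]
        return state in ACCEPTING

    print(accepts("1011"))   # False (three 1s)
    print(accepts("1001"))   # True  (two 1s)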
For more details, see your local automata theory course. But that's not important right now.

But we generally hand-wave that away.

I was going to point out a typo (more likely autocorrect), but my head exploded contemplating the awesome power of the "lambada calculus".
In a good way, I think.

Someone noticed! My existence is validated!

What's fascinating, IMO, is the realization that being able to compute anything requires very little: more complex models of machines, whether wildly different or with "obvious" improvements like more tapes, more symbols, or random access, can't actually do more; they can only hope that implementations end up being faster despite having more complex steps.
This is why some "shitty" esoteric programming languages that look like nothing useful could ever be done in them, like Brainfuck, end up being Turing-complete. Even some board games are Turing-complete; MtG is probably not the simplest example, and I bet there are a bunch of other ones I don't know about.
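Brainfuck makes the point nicely because the whole language fits in a screenful of interpreter. A sketch (the interpreter is mine; the hello-world program is the classic one):

    def bf(code, stdin=""):
        jump, stack = {}, []
        for i, c in enumerate(code):      # pre-match brackets so jumps are O(1)
            if c == "[":
                stack.append(i)
            elif c == "]":
                j = stack.pop()
                jump[i], jump[j] = j, i
        tape, ptr, pc, out, inp = [0] * 30_000, 0, 0, [], iter(stdin)
        while pc < len(code):
            c = code[pc]
            if c == ">":   ptr += 1
            elif c == "<": ptr -= 1
            elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".": out.append(chr(tape[ptr]))
            elif c == ",": tape[ptr] = ord(next(inp, "\0"))
            elif c == "[" and tape[ptr] == 0:  pc = jump[pc]
            elif c == "]" and tape[ptr] != 0:  pc = jump[pc]
            pc += 1
        return "".join(out)

    # The classic hello-world program.
    print(bf("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
             ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++."))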
Well, surely the former is a consequence of the latter. It had only been a vague, intuitive concept before.

This is why I've never understood what it is. I see words, and it's not that I don't understand what those words mean, but I simply don't know what is meant by them arranged in this way.
There's some understanding that I'm fundamentally lacking, which I've been trying to correct for years. I've put it down to "math," which seems to be a whole impenetrable language in itself.
A language that, despite many unguided attempts, I still lack an understanding of, which means I'll continue to be downvoted and snorted at when I say I still don't understand what a Turing machine or a monad actually is.

Instead, we use more convenient ones that are themselves derived from the first-principles units.

"You just get used to them." That was John von Neumann talking.

You are correct: you can learn what these things "are," but without at least some of the mathematics it's going to be tough to fully grasp them.
I so, so, so wish that people were taught that mathematics is a language of relationships rather than one of rote numerical calculation. I think more people would stick with it. It's a shame, because the doors it opens are huge.

I don't quite understand how mathematics is better adapted to modeling the world than a programming language is.
Short pithy answer: Exact real arithmetic is still an open research problem. Slightly longer answer: To the extent that they are formal languages, they both have the same ability to express things. More philosophical answer: Mathematics has a tradition of extension. Have a new idea you want to express as concisely as possible because you are going to be using it a lot? Grab a handful of Greek letters, symbols, funny fonts, and random other typography, and go to town.
Outside of Lisp,[1] this flexibility is hard to achieve in most programming languages.

The short answer: they are not. The longer answer? Mathematics is more limited, and thus, at least on the surface, provides a more rigid model. Plus, we have many years' worth of equations that we know to be accurate; not so true for C or any other programming language. Plus, we can model all the way down in mathematics, as it is more abstract in nature. The longest answer? It's turtles all the way down.
They are certainly easier to use for many problems, though.

What they say is that any formal system powerful enough to contain basic arithmetic is either inconsistent or incomplete. However, that doesn't imply that ZF cannot be "a perfect model of the real world," because the AC is irrelevant in the real world: the AC only applies to infinite sets, which do not exist in the real world.
Mathematics is a superset of physics, as it allows explaining all possible universes, not just ours. I've yet to see one concrete statement that ZFC can't explain.

There's a list of such statements on Wikipedia, if I remember well.

OK, let me try. Consider the notion of an algorithm. You are probably familiar with algorithms: a series of steps to be completed by a computer (which, in Turing's day, meant a person).
This is the launch point for what a Turing machine is. Turing asks the question: if there exists a machine that can slavishly follow instructions, what can such a machine do, and what can such a machine NOT do?
To answer this question without invoking the AI problem, Turing started with the simplest possible set of objects to work with: binary numbers. A simple machine, then, would be a machine that reads a "tape" of binary numbers. The machine is simple, so it can only read one number at a time. Based on the number read, the machine may either move the tape left (reading the next number), move the tape right (reading the previous number), replace the digit in the current cell, or halt.
You can see why such a machine is considered simple: the range of inputs and the action space are VERY limited. And yet, despite its simplicity, it can be shown that such a simple machine can do addition. Specifically, it can perform computation on recursively enumerable problems. It is in this very narrow and specific sense that Church's Lambda Calculus and Turing's machines are considered equivalent.
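As a concrete illustration (my own toy machine in this style, not Turing's construction, using unary rather than binary numbers for brevity), here is addition on such a machine: "3+2" is written as 111+11, the machine overwrites '+' with '1', then erases one trailing '1':

    BLANK = "_"
    DELTA = {
        ("scan", "1"):   ("scan", "1", "R"),    # walk right over the first number
        ("scan", "+"):   ("join", "1", "R"),    # fuse the two numbers
        ("join", "1"):   ("join", "1", "R"),    # walk to the right end
        ("join", BLANK): ("trim", BLANK, "L"),
        ("trim", "1"):   ("halt", BLANK, "L"),  # erase one surplus '1'
    }

    def run(tape):
        tape, head, state = list(tape) + [BLANK], 0, "scan"
        while state != "halt":
            state, tape[head], move = DELTA[(state, tape[head])]
            head += 1 if move == "R" else -1
        return "".join(tape).strip(BLANK)

    print(run("111+11"))   # 11111, i.e. 3 + 2 = 5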
That is to say, if a Turing machine can calculate the result of a function that takes a natural number and returns a natural number, the lambda calculus can calculate the same result, and vice versa [sidenote]. Now, having a machine that can do all that is pretty cool. Ideally you'd want to be able to describe the rules of the machine; that's called a programming language. This notion really only formalized itself well after Turing died.
Aho, Ullman, and gang were really the ones who made the connection between languages and automata. Wang's B machine introduced the notion of abstraction in Turing machines by adding compound features that can be translated to simple Turing machines (for example, an instruction that would be "delete the value, and move right").
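A sketch of that idea (my own illustration, with hypothetical helper names): a compound instruction is just a fixed sequence of primitive steps:

    BLANK = "_"

    # Primitive operations on a (tape, head) configuration.
    def write(tape, head, sym):
        tape[head] = sym
        return tape, head

    def move_right(tape, head):
        head += 1
        if head == len(tape):
            tape.append(BLANK)
        return tape, head

    # The compound instruction "delete the value, and move right"
    # desugars into two primitives.
    def delete_and_move_right(tape, head):
        tape, head = write(tape, head, BLANK)
        return move_right(tape, head)

    print(delete_and_move_right(["a", "b"], 0))   # (['_', 'b'], 1)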
This led to the study of programming languages. The family of imperative programming languages is inspired by Turing machines in this manner.

A Gödel number is a huge natural number that represents a function (a toy sketch follows below).

The concept of an algorithm has nothing intrinsic to do with natural numbers. You could use any data structure; the Lambda Calculus is an example, as it doesn't use natural numbers but its own expressions.
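A toy sketch of Gödel numbering (my own; real Gödel numberings encode machines or formulas, not just short sequences): pack a sequence of small numbers into one natural number via prime powers, and unpack it again:

    def primes():
        n, found = 2, []
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1

    def goedel_encode(seq):
        g = 1
        for p, k in zip(primes(), seq):
            g *= p ** (k + 1)        # +1 so zeros are representable
        return g

    def goedel_decode(g):
        seq = []
        for p in primes():
            if g == 1:
                return seq
            k = 0
            while g % p == 0:
                g //= p
                k += 1
            seq.append(k - 1)

    code = goedel_encode([3, 1, 4])   # 2^4 * 3^2 * 5^5
    print(code, goedel_decode(code))  # 450000 [3, 1, 4]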
Church-Turing concerns computations, not algorithms. The difference is subtle. A "computation" is what we would call a function application today: given an input, compute the output based on some rules. An "algorithm" is a set of such rules, usually operational rules (do this, then do this). The class of computations on which Church and Turing are equivalent is the functions from N to N (encoding required, obvs), as proven by Turing. Everything else is conjecture. Turing gives "computable by Turing machine", and Church gives "effectively calculable by the lambda calculus".
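The "encoding required" part is where Church numerals come in. A sketch (my own, using Python lambdas to stand in for lambda terms): naturals are represented as functions, so the lambda calculus can compute N-to-N functions without any built-in numbers:

    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):              # decode a Church numeral back to a Python int
        return n(lambda k: k + 1)(0)

    two   = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))   # 5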
Kleene holds them both to be the same, culminating in Theorem XXX. For example, you cannot define equality of terms in the lambda calculus, nor can the lambda calculus realize itself. Bob Harper's book is the only book I know of that is explicit about this fact, though he dismisses it as a minor issue.
Barendregt surprisingly also missed this. Edit: Also, don't forget recursive functions, dating from I don't know when, actually.

It can either move to the left, move to the right, or not shift at each transition. I tried some myself to figure out how many Turing machines there were, and got the two following incompatible results.
That is why I suggested that you try small cases, where you can almost count the combinations manually. You are not far from the solution. Are there any different assumptions?
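For the small cases, here is one hedged way to count (my own sketch; the convention is an assumption, and other conventions, such as no explicit halt state or a head that must always move, give different totals, which may explain the incompatible results):

    # Count n-state machines assuming a TOTAL transition function
    # delta: Q x Gamma -> (Q plus a halt state) x Gamma x {L, R, N},
    # with |Q| = n states and a tape alphabet of size m.
    def count_tms(n, m=2):
        choices_per_entry = (n + 1) * m * 3   # next state, written symbol, move
        return choices_per_entry ** (n * m)   # one entry per (state, symbol)

    for n in range(1, 4):
        print(n, count_tms(n))   # 1 144, 2 104976, 3 191102976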