-----BEGIN PGP SIGNED MESSAGE-----
On 03/04/2016 05:19 PM, Stephen Rice wrote:
> On 3/4/16, Leo Moser <firstname.lastname@example.org> wrote:
>> On simplicity you may be interested in:
seems vaguely reminiscent of lambda calculus.
The basic idea with lambda calculus is that every function takes a
single argument; to support multiple arguments, applying a function
creates a new function which then takes the next argument, via
currying. For example,
two plus three would have to be represented as
((plus two) three)
where (plus two) creates the function
plusTwo, which is then applied to three.
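To make the currying concrete, here’s a quick Python sketch (the plus/plus_two names are just mine for illustration):

```python
# A curried "plus": instead of taking two arguments at once,
# it takes one argument and returns a new one-argument function.
def plus(x):
    def plus_x(y):        # e.g. plus(2) returns the function "plusTwo"
        return x + y
    return plus_x

plus_two = plus(2)        # partial application: (plus two)
result = plus_two(3)      # applying it to three: ((plus two) three)
print(result)             # 5
```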
Anyways, I’m not sure where the author of the video got that this was
the basis of machine languages, because it just ain’t so.
(Modern) computers are based on Turing machines,
which effectively travel along a piece of “tape” (memory) and flip
bits (1’s and 0’s).
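In that spirit, here’s a toy Turing machine in Python: a head walking a tape and flipping bits. The transition table is a made-up example, not any standard machine:

```python
# A toy Turing machine: the head travels right along the tape,
# flipping each bit, and halts when it reads the blank symbol "_".
def run(tape, rules, state="flip"):
    tape, head = list(tape), 0
    while state != "halt":
        sym = tape[head] if head < len(tape) else "_"
        write, move, state = rules[(state, sym)]
        if head < len(tape):
            tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

rules = {
    ("flip", "0"): ("1", "R", "flip"),  # read 0: write 1, move right
    ("flip", "1"): ("0", "R", "flip"),  # read 1: write 0, move right
    ("flip", "_"): ("_", "R", "halt"),  # read blank: stop
}
print(run("1010", rules))  # prints "0101"
```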
Lambda calculus isn’t just difficult for humans to wrap their heads
around; it also has a rather large overhead when implemented on
computers, since it necessitates “garbage collection”, which isn’t
needed (by default) in lower-level languages like C.
There is actually a computer programming language which is somewhat a
parody of functional (lambda) programming. It’s called “unlambda”, and it
only has 3 core operations: “s”, “k” and “`” (apply).
Technically any conceivable computer program can be written using it;
for instance, a “hello world” program can be written in it.
Though quite obviously that is rather unwieldy, it is still a bit clearer
than the brainf*ck version of hello world,
which is even more primitive: it doesn’t even have character literals,
so ASCII values must be built up by repeated increments.
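To show what I mean, here’s a minimal brainf*ck interpreter sketch in Python (input command omitted), running a tiny program that builds ASCII 72 (“H”) as 8 × 9 by repeated increments:

```python
# Minimal brainf*ck interpreter sketch (handles + - < > [ ] . only).
def bf(program, cells=10):
    tape, ptr, pc, out = [0] * cells, 0, 0, ""
    while pc < len(program):
        op = program[pc]
        if op == "+":   tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ">": ptr += 1
        elif op == "<": ptr -= 1
        elif op == ".": out += chr(tape[ptr])
        elif op == "[" and tape[ptr] == 0:   # jump forward past matching ]
            depth = 1
            while depth:
                pc += 1
                depth += {"[": 1, "]": -1}.get(program[pc], 0)
        elif op == "]" and tape[ptr] != 0:   # jump back to matching [
            depth = 1
            while depth:
                pc -= 1
                depth += {"]": 1, "[": -1}.get(program[pc], 0)
        pc += 1
    return out

# ASCII "H" is 72 = 8 * 9: count down from 8 in cell 0,
# adding 9 to cell 1 each time around the loop.
print(bf("++++++++[>+++++++++<-]>."))  # prints "H"
```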
And if you want to get really hard-core, there are actually “one
instruction set computers” (OISC),
though those are of course mostly for theoretical exploration.
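For a taste of how a one-instruction computer works, here’s a small Python sketch of SUBLEQ (subtract and branch if less than or equal to zero), the classic OISC; the little program is my own toy example:

```python
# SUBLEQ: the classic one-instruction computer. Every instruction is
# three addresses (a, b, c): mem[b] -= mem[a]; if mem[b] <= 0, jump to c.
def subleq(mem, pc=0, max_steps=1000):
    for _ in range(max_steps):
        if pc < 0:                       # negative jump target = halt
            break
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Toy program: two instructions computing mem[7] = -(3 + 4).
# Cells 0-5 are code, cells 6-8 are data (3, accumulator 0, 4).
mem = subleq([6, 7, 3,    # mem[7] -= mem[6]  (0 - 3 = -3), jump to 3
              8, 7, -1,   # mem[7] -= mem[8]  (-3 - 4 = -7), halt
              3, 0, 4])
print(mem[7])  # -7
```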
Or one could just build computers from scratch using NAND or Toffoli gates.
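NAND really is universal on its own; here’s a quick Python sketch building NOT, AND, OR and XOR out of nothing but NAND:

```python
# Every Boolean function can be built from NAND alone.
def nand(a, b): return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  # the classic 4-NAND XOR construction
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# XOR truth table, derived entirely from NAND:
for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```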
A slightly more practical example is MISC (minimal instruction set
computer); these generally have more than one but fewer than 32
instructions. However, as the Wikipedia page states:
“The disadvantage of a MISC is that instructions tend to have more
sequential dependencies, reducing overall instruction-level parallelism.”
A slightly more useful distinction is RISC (reduced instruction set)
versus CISC (complex instruction set), which are the two most common. The
main distinction is that RISC has a fairly strict one-operation-per-instruction
policy, whereas CISC can have multiple operations per
instruction. Intel x86 processors are prominent examples of CISC, and ARM
is a prominent example of RISC.
However, while both RISC and CISC are simple, they can be considered
too simple, since their instruction streams are sequential while
processors tend to be parallel, so CISC and RISC processors need a bunch
of extra circuitry to convert the sequential instructions to the parallel
reality. To put this into linguistic terms, imagine a language
limited to only nominative, accusative, dative and verb.
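To illustrate the RISC/CISC difference, here’s a hypothetical mini-ISA in Python: one CISC-style memory-add versus the three RISC-style single-operation instructions that do the same work. This models the idea only, not actual x86 or ARM semantics:

```python
# Hypothetical mini-ISA sketch (not real x86/ARM encodings).
def cisc_add_mem(mem, regs, addr, reg):
    # CISC style: "add [x], r2" reads memory, adds, and
    # writes back, all in one instruction.
    mem[addr] = mem[addr] + regs[reg]

def risc_program(mem, regs):
    # RISC style: the same work as three one-operation instructions.
    regs["r1"] = mem["x"]                 # load  r1, [x]
    regs["r1"] = regs["r1"] + regs["r2"]  # add   r1, r2
    mem["x"] = regs["r1"]                 # store [x], r1

m1, r1 = {"x": 10}, {"r1": 0, "r2": 5}
m2, r2 = {"x": 10}, {"r1": 0, "r2": 5}
cisc_add_mem(m1, r1, "x", "r2")
risc_program(m2, r2)
print(m1["x"], m2["x"])  # both give 15
```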
So a (relatively) recent trend which takes into account the parallel
nature of computers, and thus simplifies the hardware, is VLIW
(very long instruction word), where each instruction specifies what
each part of the processor does. In case not every part has a task to
do, this can be modified into a variable-length instruction word, or, as
with the Itanium processors, only sequential dependence is marked, so
non-sequentially-dependent operations can run in parallel.
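Here’s a rough Python sketch of the VLIW idea: each instruction is a bundle with one slot per functional unit, and all slots of a bundle read the old register state. This is a toy model of mine, not Itanium’s actual encoding:

```python
# VLIW sketch: each "instruction" is a bundle, one slot per functional
# unit. Slots within a bundle are independent, so hardware could run
# them simultaneously; this toy machine just applies every slot at once.
regs = {"r0": 1, "r1": 2, "r2": 3, "r3": 0, "r4": 0}

def execute(bundles):
    for bundle in bundles:            # bundles run in sequence...
        results = {}
        for op, dst, a, b in bundle:  # ...slots in a bundle "in parallel"
            if op == "add": results[dst] = regs[a] + regs[b]
            if op == "mul": results[dst] = regs[a] * regs[b]
            # "nop" slots do nothing
        regs.update(results)          # all slots saw the old registers

program = [
    [("add", "r3", "r0", "r1"), ("mul", "r4", "r1", "r2")],  # 2 slots busy
    [("add", "r0", "r3", "r4"), ("nop", "", "", "")],        # 1 slot idle
]
execute(program)
print(regs["r0"])  # r3 = 1+2 = 3, r4 = 2*3 = 6, then r0 = 3+6 = 9
```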
In language terms, a long instruction is like having a set number of
correlative conjunctions (and-also), while sequential dependence is marked
with subordinators and (and-then) co-ordinating conjunctions. Of
course sentences are always variable length.
For the SPEL virtual machine, which is also speakable, I’m using
a variable-length instruction word. Though it likely won’t have
subordinators, it will have co-ordinating conjunctions. I’m still
holding off on finalizing it until I have more experience with OpenCL and
probably SystemC also.
As for the number of instructions in the VM, it will probably be like
most assembly languages, having only a few dozen commonly used
instructions.
The virtual machine, however, I consider to be rather too low-level for
use by humans.
Now if we look at humans, we find that the average adult human has a
vocabulary of something like 10,000 words, or 10,000 instructions.
A 5- or 6-year-old, though, is considered to have somewhere around 1,500
words, and that is enough to get by for their purposes.
Though one does have to consider that a bunch of adult vocabulary words
are synonyms of simpler roots.
Currently I have ~4,300 root words defined,
a count which excludes international synonyms;
i.e. in some languages “dad” and “father” are the same word,
so only “dad” is kept, since it is often the first word babies learn.
> It is interesting, but ultimately it seems like a head game:
> workable in theory, but probably not practical for humans.
While simplicity may be great for sequential machines,
it is rather too cumbersome for massively parallel-processing humans.
For example, say I tell you about the beautiful sunset I saw yesterday,
where, looking up toward the horizon from the cliff, I could see two
spruces standing tall, surrounded by a magenta-orange glow.
While you were reading that, you were also generating the imagery in
your mind’s eye, keeping to the context of the conversation, and
figuring out how it fits into the “big picture”.
That is why it makes sense for us to have a healthy selection of root
and grammar words for adult human usage. And anyone who really prefers
a simpler way of being can always use the virtual machine, or simply
limit themselves to simpler statements.
A dream of Gaia’s future.
You can use encrypted email with me,
how to: https://emailselfdefense.fsf.org/en/
BD7E 6E2A E625 6D47 F7ED 30EC 86D8 FC7C FAD7 2729
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
-----END PGP SIGNATURE-----