I’ve been working on the virtual machine for SPEL recently;
it needs to play well with parallel processing to be future-friendly.
SPEL is destined to be the foundation of GI-OS (the General Intelligence Operating System), where many AIs would work together to make it intelligent from the ground up.
With shared memory, each process could search for its own “food”, replace it with a blank block, and then, once it’s done “digesting”/processing, find a blank block to “excrete” the data into.
Input streams would be sources of energy, like the sun (the network) and vents/volcanoes (input devices).
Primary consumers would be the drivers that put that input into shared memory,
secondary consumers would process it, tertiary consumers would output it,
and decomposers would clear dead blocks.
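The claim/blank/excrete cycle above could look something like this minimal sketch. Everything here is hypothetical naming (`BlockPool`, `claim_food`, `excrete`), and a real implementation would live in actual shared memory with atomic swaps rather than a Python list behind a lock:

```python
import threading

BLANK, FOOD, OUTPUT = "blank", "food", "output"

class BlockPool:
    """A toy shared-memory pool: each slot holds a tagged block."""
    def __init__(self, size):
        self.lock = threading.Lock()
        self.slots = [(BLANK, None)] * size

    def claim_food(self):
        """Atomically take a food block, leaving a blank in its place."""
        with self.lock:
            for i, (tag, data) in enumerate(self.slots):
                if tag == FOOD:
                    self.slots[i] = (BLANK, None)
                    return data
        return None

    def excrete(self, tag, data):
        """Write data (fresh food or digested output) into a blank slot."""
        with self.lock:
            for i, (t, _) in enumerate(self.slots):
                if t == BLANK:
                    self.slots[i] = (tag, data)
                    return True
        return False

# primary consumer: a driver drops raw input into the pool
pool = BlockPool(4)
pool.excrete(FOOD, "raw packet")
# secondary consumer: digest a food block, then excrete the result
packet = pool.claim_food()
pool.excrete(OUTPUT, packet.upper())
```

The point of the blank-block swap is that a consumer never holds a slot hostage while it digests: the pool always reflects what is currently edible.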
To keep things predictable and avoid useless processes hanging around,
each process can have at least two timeouts: one for food and one for age.
The age timeout would ensure that obsolete processes aren’t still around;
for instance, after a system update, the old processes will eventually time out and be replaced by their updated versions.
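A sketch of that age timeout, assuming the master thread runs a cooperative reap loop (the names `AgedProcess` and `reap_and_respawn` are mine, not part of SPEL):

```python
import time

class AgedProcess:
    """A worker that dies when its age timeout elapses, so stale
    versions drain out naturally after a system update."""
    def __init__(self, version, max_age_s):
        self.version = version
        self.born = time.monotonic()
        self.max_age_s = max_age_s

    def expired(self):
        return time.monotonic() - self.born >= self.max_age_s

def reap_and_respawn(procs, current_version, max_age_s):
    """Replace expired workers with fresh ones built from the current
    version; survivors keep running untouched."""
    return [p if not p.expired() else AgedProcess(current_version, max_age_s)
            for p in procs]
```

No process needs to be told about the update; the old generation simply ages out.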
The age timeout can also be related to a process’s “cost”/size. Blue giant stars are huge but live relatively short lives, much like large dog breeds. Since large, greedy algorithms require large amounts of resources, it makes sense for them to have fairly short lives, so they don’t linger longer than necessary. On the other hand, there are red dwarfs and cold-blooded turtles, which use fairly little processing power and can have fairly long timeouts; even if they stick around a while, it won’t make much of an impact, unless they reproduce a lot, which is where predators would come in.
Though that may be an intra-species longevity bias: leaner algorithms living longer than others of the same type. Inter-species, larger things tend to live longer than smaller things, i.e. stars live longer than whales, and whales live longer than cats.
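The intra-species rule could be as simple as an inverse relation between cost and age timeout, with a tunable budget per species (the constant and function name here are assumptions, just to make the shape concrete):

```python
def age_timeout(cost, budget=600.0):
    """Hypothetical rule: the age timeout shrinks as resource cost
    grows, so greedy algorithms die young while lean ones may linger.
    `budget` is an assumed per-species tuning constant in seconds."""
    return budget / max(cost, 1.0)
```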
When a process is instantiated, it can have a satiety level equal to the estimated number of food packets it has to process, or some set maximum for its “species”. The hunger timeout will decrement the satiety level every so often; if satiety reaches 0, the process starves to death.
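The satiety bookkeeping might look like this (a sketch under my own naming; in practice the hunger timeout would be a real timer, not a method you call):

```python
class HungryProcess:
    """Satiety starts at the estimated packet count, capped at the
    'species' maximum; eating restores it, hunger ticks drain it,
    and zero means starvation."""
    def __init__(self, estimated_packets, species_max):
        self.satiety = min(estimated_packets, species_max)

    def eat(self, packets=1):
        """Called whenever the process digests a food packet."""
        self.satiety += packets

    def hunger_tick(self):
        """Called by the hunger timeout; returns whether we survived."""
        self.satiety -= 1
        return self.alive()

    def alive(self):
        return self.satiety > 0
```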
On the other hand, there can be a reproduction timer which, upon triggering, checks whether there is a noticeable surplus of satiety; if so, it puts a reproduction request in to the master thread, which can instantiate new processes.
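The reproduction check itself stays deliberately dumb: the worker only files a request, and the master thread decides whether to actually spawn. A sketch, with the surplus threshold as an assumed constant:

```python
import queue

SURPLUS_THRESHOLD = 10  # hypothetical: what counts as a "noticeable surplus"

def reproduction_tick(satiety, master_queue, species):
    """On the reproduction timer: if satiety shows a clear surplus,
    file a spawn request with the master thread; never spawn directly."""
    if satiety >= SURPLUS_THRESHOLD:
        master_queue.put(("spawn", species))
        return True
    return False
```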
The one thing to be careful of is processes that make food for themselves, or otherwise don’t follow the rules, so there might have to be some predatory processes that single out suspect processes and test them to make sure they fit the expected parameters, likely in a sandbox or “den” of some kind.
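The den test might be as simple as feeding a suspect a known probe packet and checking the result against what its species should produce. This is a deliberately naive sketch; a real den would need genuine process isolation, not just an exception handler:

```python
def predator_test(suspect_fn, probe_packet, expected):
    """Run a suspect consumer on a known probe inside the 'den' and
    check it behaves as its species should. Any crash or wrong
    answer marks it as prey."""
    try:
        result = suspect_fn(probe_packet)
    except Exception:
        return False
    return result == expected
```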
The master thread could in theory do everything by itself, but it would be much slower than having the other processes help. There could be something like a disease process that looks for large amounts of stale food packets and notifies the master thread that they need to be consumed; the master thread can then instantiate the appropriate process that consumes those kinds of packets. If there is no way of consuming them, the disease process gives them up to the decomposers, and they are treated as junk and marked as clear.
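The disease process’s scan splits stale packets into two piles: kinds the master thread knows how to consume, and junk for the decomposers. A sketch, with the staleness threshold as an assumed constant:

```python
STALE_AFTER_S = 60.0  # hypothetical: how long a packet may sit before it is stale

def disease_scan(packets, handlers, now):
    """Scan (kind, born) packets for stale ones. Kinds with a known
    handler go back to the master thread for instantiation; the rest
    are handed to the decomposers as junk."""
    consumable, junk = [], []
    for kind, born in packets:
        if now - born < STALE_AFTER_S:
            continue  # still fresh, leave it for the normal food chain
        (consumable if kind in handlers else junk).append(kind)
    return consumable, junk
```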
If this is implemented, it could make for a nice ecosystem viewer, for debugging or even pure enjoyment.
Artificial Gaian Intelligence 🙂
As above, so below,