Hi,
From what I could grasp from the current opencog implementation, *many*simplifications have been made, but a few premises still hold:
Indeed.
* except for the request processor, there are not supposed to be any threads (and consequently no concurrency) accessing the cogserver and/or atomspace
* one may "simulate" an agent's priority using the "frequency" attribute
* agents are supposed to voluntarily release control to the lobe/scheduler (i.e., there is no preemption)
* the 'run' method of an agent shouldn't take too long (so that the other agents won't suffer from starvation). This means that some agents may have to be coded in such a way that their work is broken into multiple "fast and resumable units"
The latter two premises should still hold. The second premise can hold for an initial implementation for simplicity. Ultimately, attention allocation will have something to do with an agent's priority, but this is best left for later, as it can only be properly tuned when we have a large bunch of agents doing PLN and MOSES work.
The first premise isn't really mandatory, but the AtomSpace isn't thread-safe. In early 2001, Senna and Thiago created a thread-safe AtomTable and found that locking had a huge impact on performance, to the point that performance on a two-processor machine wasn't significantly better than on a single-processor machine. The alternative they came up with is the exclusion table, which lists which agents can be run alongside which other agents. These agents can then use concurrent threads, but they need to guarantee they'll never attempt to write to the same objects at the same time.
I'm not convinced this alternative is viable over the long run, especially with many cores. With just a few cores, this wouldn't be too hard to use with good results, as MOSES and Attention Allocation both have a bunch of expensive processing that doesn't write to Truth Values or create new Atoms, which are the main things PLN does. On the other hand, it's possible that very careful locking might give us a thread-safe AtomSpace without a huge performance impact.
In order to really figure this out, we need what I'd call representative dynamics -- a bunch of modules, each with a bunch of agents, doing things that are similar to the desired AGI dynamics. It looks like the shortest path to representative dynamics is to enable, at the same time, attention allocation (we're ready for that AFAIK), background PLN inference, request-driven PLN inference and some MOSES evolution.
So I'd say the plan is:
1. In the near term (before the initial Prime release) we shouldn't worry too much about concurrency, and we should adhere to the premises Gama mentioned.
2. In the medium term we should investigate both options I outlined above, along with any alternative ideas people might have.
Cassio