
opencog-dev team mailing list archive

Re: [OpenCog] Re: Threading and GC in opencog

 

2008/9/30 Cassio Pennachin <cassio@xxxxxxxxxxxxx>:
> Hi,
>
>> What you describe would be a simple programming error
>> on the part of whoever wrote such mind-agents. It would
>> be quite easy to fix.
>
> Whether it's a programming error or not depends on the concurrency model we
> adopt.

Yes, but the concurrency model is a distinct issue, and more or
less unrelated to the threading and GC question, in my understanding.

>> The problem that I'm concerned about is at a different level.
>> Say that some process, after completing its work, is deleting
>> a bunch of stuff. For example, say the NLP subsystem is
>> done processing a bunch of sentences, and so these need
>> to be deleted. No other process will access these sentences,
>> so this should not be a problem.
>
> Why won't attention allocation, background inference and other processes
> access those sentences?

Well, rhetorical question: why would they?

If there's "background inference" being performed on
these sentences, then clearly the NLP system is buggy
if it's trying to delete them -- this would be a
straightforward programming error.

As to attention allocation: it would presumably be
attention allocation that decided "we're done with
these sentences, so clear them out of RAM".

Both these issues are in the domain of the concurrency
model, which is more or less unrelated to the multi-threading
model, as I understand it.

--linas

p.s. as to concurrency:

I am currently hacking up a multi-stage NLP pipeline,
where processing happens in several stages that must
run in sequential order, and I'm using little marker
nodes to tag/flag different sets of atoms as being
complete, or waiting for processing. It's sort of a hack
just so I can move on to other things, but if there's some
"standard" way of declaring that processing for some
collection of atoms is "done" and ready to be handed
off to the next stage, I'm all ears.
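
For concreteness, here is roughly the pattern I mean, as a
standalone C++ toy. This is not the actual AtomSpace API -- the
AtomID, Batch, Stage and run_stage names are all made up for
illustration -- it just shows how a stage marker lets the next
stage know when a batch of atoms has been handed off:

#include <iostream>
#include <set>
#include <string>
#include <vector>

// Toy model of the marker-node idea: each batch of atoms carries a
// stage marker, and a stage only consumes batches whose marker says
// the previous stage is finished with them.  None of this is the real
// AtomSpace API; the names here are made up for illustration.

using AtomID = int;                        // stand-in for a Handle

enum class Stage { Parsed, Tagged, Done }; // hypothetical pipeline stages

struct Batch {
    std::set<AtomID> atoms;   // the atoms belonging to one sentence batch
    Stage marker;             // the "marker node" flag for this batch
};

// Scan for batches whose marker says they are ready for this stage,
// process them, then advance the marker to hand them to the next stage.
void run_stage(std::vector<Batch>& batches, Stage ready, Stage next,
               const std::string& name)
{
    for (auto& b : batches) {
        if (b.marker != ready) continue;   // not yet handed off to us
        std::cout << name << ": processing "
                  << b.atoms.size() << " atoms\n";
        b.marker = next;                   // declare this batch "done" here
    }
}

int main()
{
    std::vector<Batch> batches = {
        { {1, 2, 3}, Stage::Parsed },
        { {4, 5},    Stage::Parsed },
    };
    run_stage(batches, Stage::Parsed, Stage::Tagged, "stage-one");
    run_stage(batches, Stage::Tagged, Stage::Done,   "stage-two");
    return 0;
}

In the real pipeline the marker is itself a node linked to the set
of atoms rather than an enum field, but the hand-off logic is the
same idea.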


