
opencog-dev team mailing list archive

memory and (possible) bug

 

I was working out what the memory blowout would be from using AtomSpace
nodes and links to implement a neural network instead of arrays of
doubles, and I came up with:

Atom = type (4) + atomTable_ptr (4) + incoming (4) + outgoing (4*n)
                + indices (4) + targetTypeIndex (4) + predicateIndexInfo (4)
                + attentionValue (4) + truthValue (4) + flags (1)
                + vtable ptr
         = 45 bytes minimum, plus the size of the truth value, which is
8 for simple, 21 or more for indefinite, and another 4 for composite.
Nodes = Atom + name (4+len)
Link  = Atom + trail (4) + outgoing (4*n)   <- uhhh, repeated?
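
To make the accounting concrete, here is roughly the layout I am picturing.
The names follow my list above rather than the real headers, and I am
assuming 4-byte pointers and handles, so treat it as a sketch, not the
actual classes:

    // Rough sketch only: names follow the list above, not the real headers.
    // Every pointer/handle is assumed to be 4 bytes.
    #include <string>
    #include <vector>

    typedef unsigned int Handle;                    // stand-in for the real handle type

    struct TruthValue { float strength, count; };   // "simple" TV, ~8 bytes

    struct Atom {
        unsigned short       type;
        void*                atomTable;             // back-pointer into the AtomSpace
        std::vector<Handle>* incoming;              // atoms that point at this one
        std::vector<Handle>* outgoing;              // atoms this one points at
        void*                indices;
        void*                targetTypeIndex;
        void*                predicateIndexInfo;
        void*                attentionValue;
        TruthValue*          truthValue;
        unsigned char        flags;
        virtual ~Atom() {}                          // vtable pointer counted above
    };

    struct Node : Atom { std::string name; };       // + name (4 + len)
    struct Link : Atom { void* trail; };            // + trail, and outgoing again?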

I notice that there is an outgoing vector in both Link and Atom. Is
that intentional?

Also, can Nodes have outgoing edges? If so, what does that mean?

For a network I was looking at with 3676 nodes and 1,588,000 links,
I worked out that the blowout was about 6 or 7 to 1.
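
For reference, the back-of-the-envelope arithmetic: a plain weight array
costs 8 bytes per double, versus roughly 45 + 8 bytes per atom with a
simple truth value. The outgoing handles and the rest are ignored here,
so the real ratio would be a bit higher:

    // Back-of-the-envelope check of the 6-or-7-to-1 figure.  Per-atom cost
    // is just the 45-byte minimum plus an 8-byte simple truth value.
    #include <cstdio>

    int main() {
        const double nodes = 3676, links = 1588000;
        const double baseline  = links * sizeof(double);             // dense weight array
        const double atomspace = (links + nodes) * (45 + 8);         // atoms + simple TVs
        std::printf("baseline:  %.1f MB\n", baseline  / 1e6);        // ~12.7 MB
        std::printf("atomspace: %.1f MB\n", atomspace / 1e6);        // ~84.4 MB
        std::printf("blowout:   %.1f to 1\n", atomspace / baseline); // ~6.6
        return 0;
    }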

The algorithms would be slower too, since there is now graph traversal
to do instead of straight array accesses.

So I wonder if some kind of conceit is necessary, like a "layer node"
and a "totally connected link" or something.

A hypergraph is described as nodes being able to contain hypergraphs.
How are we implementing that? It seems this might be a case where the
hypergraph interface can be presented efficiently by creating the
sub-nodes dynamically when they are used (if ever).
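
Something along these lines (again, names made up), where the contained
hypergraph is only built the first time somebody asks for it:

    // Made-up sketch of lazily materialising the contained hypergraph.
    #include <memory>
    #include <string>
    #include <vector>

    struct Subgraph {
        std::vector<std::string> atoms;     // placeholder contents
    };

    struct ContainerNode {
        std::string name;
        std::unique_ptr<Subgraph> inner;    // stays null until first use

        Subgraph& contents() {              // create the sub-nodes only if asked
            if (!inner) inner.reset(new Subgraph());
            return *inner;
        }
    };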

Thoughts appreciated.

Trent


