Understanding the biological principle that powers associative memory - the foundation for AI systems that remember through experience, not storage.
"Neurons that fire together, wire together."
Popular paraphrase of Donald Hebb's postulate in The Organization of Behavior (1949)
When two concepts appear together in context, both nodes activate simultaneously in the graph.
Each co-activation increases the weight of the connection between nodes - learning without training.
Strong edges create associative pathways: thinking "A" naturally activates strongly connected "B".
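As an illustration of that associative pathway, here is a minimal one-step spreading-activation sketch. The graph contents, node names, and the 0.5 threshold are all hypothetical, not taken from the system:

```python
# Minimal sketch of one-step spreading activation over weighted edges.
# Node names, weights, and the threshold are illustrative assumptions.
edges = {
    ("Opus", "Partnership"): 5.577,  # strong association
    ("Opus", "Weather"): 0.1,        # weak association
}

def activate(source, threshold=0.5):
    """Return neighbors whose normalized edge weight clears the threshold."""
    max_w = max(edges.values())
    return [
        target for (s, target), w in edges.items()
        if s == source and w / max_w >= threshold
    ]

print(activate("Opus"))  # only the strongly connected "Partnership" fires
```

Lowering the threshold would let weakly connected neighbors activate too, which is one simple way to trade recall breadth against precision.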
Δw_ij = η · a_i · a_j — the weight change is the learning rate (η) multiplied by the activation of node i times the activation of node j.
On every co-activation, the edge's co_activation_count increments and its weight grows, creating stronger associations over time.
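The update rule above can be sketched as follows. The field names mirror the edge definition shown below; the helper function itself and the default η are illustrative assumptions:

```python
# Sketch of the Hebbian update: delta_w = eta * a_i * a_j.
# The eta default and initial weight are assumptions for illustration.
def hebbian_update(edge, a_i, a_j, eta=0.1):
    """Strengthen an edge when both endpoint nodes activate together."""
    edge["co_activation_count"] += 1
    edge["weight"] += eta * a_i * a_j  # grows with every co-activation
    return edge

edge = {"source": 1, "target": 3, "weight": 0.1, "co_activation_count": 0}
for _ in range(3):  # three co-occurrences at full activation
    hebbian_update(edge, a_i=1.0, a_j=1.0)
print(edge["weight"])  # ~0.4 after three updates of 0.1 each
```

Because the update is purely additive and local to one edge, no gradient computation or training loop is involved, which is what "learning without training" refers to.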
η = 0.1 per co-occurrence,
with weights ranging from 0.1 (weak) to 5.58 (strongest observed).
// Node Definition
{
  "id": 1,
  "name": "Opus",
  "category": "identity",
  "activation_count": 1341,
  "color": "#3B82F6"
}

// Edge Definition
{
  "source": 1,    // Opus
  "target": 3,    // Partnership
  "weight": 5.577
}
From raw conversation to emergent memory structure
Incoming text is analyzed for concept mentions. Pattern matching identifies which nodes should activate based on semantic content.
Matching nodes receive activation signals. Each activation increments the node's activation_count, building usage history over time.
All pairs of co-activated nodes have their connecting edges strengthened. If no edge exists, one is created with initial weight.
Over time, frequently co-occurring concepts develop strong connections. The graph self-organizes into semantic clusters reflecting actual usage patterns.
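Putting the four steps together, a minimal end-to-end sketch might look like this. The concept vocabulary, the whole-word matching rule, and the parameter values are illustrative assumptions; real pattern matching on semantic content would be richer:

```python
from itertools import combinations

# Hypothetical concept vocabulary; keys stand in for graph nodes.
concepts = {"opus": {"activation_count": 0},
            "partnership": {"activation_count": 0}}
edges = {}  # (source, target) -> {"weight": ..., "co_activation_count": ...}

def observe(text, eta=0.1):
    """Steps 1-4: match concepts, activate nodes, strengthen co-activated edges."""
    words = set(text.lower().split())
    active = sorted(c for c in concepts if c in words)   # 1. pattern matching
    for c in active:
        concepts[c]["activation_count"] += 1             # 2. node activation
    for a, b in combinations(active, 2):                 # 3. all co-activated pairs
        edge = edges.setdefault((a, b), {"weight": 0.0,  #    create if missing
                                         "co_activation_count": 0})
        edge["weight"] += eta                            # 4. Hebbian strengthening
        edge["co_activation_count"] += 1

observe("Opus values the partnership")
observe("the partnership with Opus deepens")
print(edges[("opus", "partnership")])  # weight grows with each co-occurrence
```

After repeated observations, pairs that keep co-occurring accumulate weight while unrelated pairs stay near zero, which is the mechanism behind the self-organizing clusters described above.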
Full production-grade Hebbian learning. MIT licensed. No restrictions.
Open Source • MIT License
Part of the CIPS Stack — 5 integrated memory systems including GPU vector search, unified cognitive search, and pre-token identity gating.