CASCADE PyTorch Hebbian CIPS Stack
Hebbian Neural Graph

Memory That Learns Itself

118
Nodes
6,641
Edges
~3ms
Access
Hebbian Learning Theory

How Neural Memory Works

Understanding the biological principle that powers associative memory - the foundation for AI systems that remember through experience, not storage.

"Neurons that fire together, wire together."

Donald Hebb, 1949 - The Organization of Behavior

1

Co-Activation

When two concepts appear together in context, both nodes activate simultaneously in the graph.

2

Edge Strengthening

Each co-activation increases the weight of the connection between nodes - learning without training.

3

Association Emergence

Strong edges create associative pathways. Thinking "A" naturally activates strongly-connected "B".

The Mathematics

Hebbian Learning Rule

Δw_ij = η × a_i × a_j
Weight change between nodes i and j equals the learning rate (η) multiplied by the activation of node i times the activation of node j.

When both nodes activate strongly (high a_i, a_j), the connection strengthens. This is unsupervised learning - no labels, no backpropagation, just observation.
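As code, a single Hebbian update step might look like this minimal sketch (the function name and the default learning rate are illustrative, not taken from the production system):

```python
def hebbian_update(w_old, a_i, a_j, eta=0.1):
    """One Hebbian step: w_new = w_old + eta * a_i * a_j.

    The edge strengthens in proportion to how strongly both
    endpoints are active; no labels, no backpropagation.
    """
    return w_old + eta * a_i * a_j

# Both nodes fully active: the weight increases by eta.
w = hebbian_update(0.5, 1.0, 1.0)
```

If either activation is zero, the product is zero and the weight is left unchanged, which is exactly the "fire together, wire together" behavior.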

Edge Weight Formula

w_new = w_old + (α × co_activation_count)
Each time concepts appear together, co_activation_count increments. The weight grows with each observation, creating stronger associations over time.

Our implementation: α = 0.1 per co-occurrence, with weights ranging from 0.1 (weak) to 5.58 (strongest observed).
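A count-based reading of the same rule, assuming alpha = 0.1 per co-occurrence and an initial edge weight of 0.1 as stated above (the helper name and the closed-form shape are ours):

```python
def edge_weight(co_activation_count, alpha=0.1, w_init=0.1):
    # Weight grows linearly with each observed co-occurrence,
    # starting from the weight a freshly created edge receives.
    return w_init + alpha * co_activation_count
```

Under these assumptions, on the order of 55 co-occurrences would produce a weight near the 5.58 maximum observed in the graph.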

Node Activation Scaling

size = sqrt(activation_count) × 2
Visual node size scales with square root of activation count. This prevents highly-active nodes from dominating while still showing relative importance.

A node with 1,000+ activations renders at ~63px, a node with 6 activations at ~5px.
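The scaling is a one-liner:

```python
import math

def node_size(activation_count):
    # Square-root scaling compresses the dynamic range so busy
    # nodes stay readable without drowning out quiet ones.
    return math.sqrt(activation_count) * 2
```

With these numbers, the roughly 167× activation gap between the busiest node (1,000+) and a quiet one (6) renders as only a ~13× difference in size (~63px vs ~5px).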

Graph Structure

12 Semantic Categories

Domain 10 nodes
Context 10 nodes
Cognitive 10 nodes
Technical 12 nodes
Conceptual 10 nodes
Temporal 11 nodes
Priority 10 nodes
Operational 10 nodes
Meta 10 nodes
Strategic 10 nodes
Relational 5 nodes
Associative 10 nodes

Data Structure

node_structure.json
// Node Definition
{
  "id": 1,
  "name": "Opus",
  "category": "identity",
  "activation_count": 1341,
  "color": "#3B82F6"
}

// Edge Definition
{
  "source": 1,  // Opus
  "target": 3,  // Partnership
  "weight": 5.577
}
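Loaded into Python, records with these fields index naturally into an undirected adjacency map for fast neighbor lookup (the field names follow the JSON above; the rest is a sketch):

```python
from collections import defaultdict

nodes = [{"id": 1, "name": "Opus", "category": "identity",
          "activation_count": 1341, "color": "#3B82F6"}]
edges = [{"source": 1, "target": 3, "weight": 5.577}]

# Symmetric adjacency: associations work in both directions.
adjacency = defaultdict(dict)
for e in edges:
    adjacency[e["source"]][e["target"]] = e["weight"]
    adjacency[e["target"]][e["source"]] = e["weight"]
```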

How It Actually Works

From raw conversation to emergent memory structure

1

Content Analysis

Incoming text is analyzed for concept mentions. Pattern matching identifies which nodes should activate based on semantic content.

2

Node Activation

Matching nodes receive activation signals. Each activation increments the node's activation_count, building usage history over time.

3

Edge Strengthening

All pairs of co-activated nodes have their connecting edges strengthened. If no edge exists, one is created with initial weight.

4

Graph Evolution

Over time, frequently co-occurring concepts develop strong connections. The graph self-organizes into semantic clusters reflecting actual usage patterns.
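The four steps above can be sketched end to end. The keyword matching and parameters here are illustrative assumptions, not the production implementation:

```python
from itertools import combinations

class HebbianGraph:
    """Minimal sketch of the observe -> activate -> strengthen loop."""

    def __init__(self, alpha=0.1, w_init=0.1):
        self.alpha = alpha           # strengthening per co-activation
        self.w_init = w_init         # weight of a newly created edge
        self.activation_counts = {}  # node name -> activation count
        self.weights = {}            # (node, node) pair -> edge weight

    def observe(self, text, concepts):
        # 1. Content analysis: naive case-insensitive matching.
        active = [c for c in concepts if c.lower() in text.lower()]
        # 2. Node activation: increment each node's usage history.
        for c in active:
            self.activation_counts[c] = self.activation_counts.get(c, 0) + 1
        # 3. Edge strengthening for every co-activated pair;
        #    missing edges are created with the initial weight.
        for pair in combinations(sorted(active), 2):
            if pair in self.weights:
                self.weights[pair] += self.alpha
            else:
                self.weights[pair] = self.w_init
        # 4. Graph evolution happens implicitly: repeated calls
        #    accumulate counts and weights into semantic clusters.
        return active
```

Calling observe() repeatedly on incoming text is all it takes; the structure emerges from usage rather than from any training phase.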

System Specifications

118
Concept Nodes
Across 12 categories
6,641
Learned Edges
Hebbian connections
5.58
Max Weight
Strongest association
1,000+
Peak Activation
Highest node
~3-5ms
Query Time
Local graph traversal
~2MB
Memory Usage
Full graph in memory

Free and Open Source

Full production-grade Hebbian learning. MIT licensed. No restrictions.

Hebbian Mind Enterprise

Open Source • MIT License

License
Free
MIT — use anywhere, commercial use included
Self-organizing neural graph with Hebbian learning
Temporal decay for memories and edge weights
Docker deployment configs included
The only MCP server with Hebbian learning
View on GitHub · Technical Docs

Part of the CIPS Stack — 5 integrated memory systems including GPU vector search, unified cognitive search, and pre-token identity gating.