Version history and release notes for NForge.
NForge v1.9.0 is a paradigm shift: instead of gating data collection behind $500/hour fMRI scanners, anyone in the world can now contribute training data from a browser and get paid in SOL for it. NForge Gym is a Solana-wallet-gated portal with three starter games — one per hardware tier — that capture the behavioral, affective, and vocal signals the framework needs to keep improving. Connect a Phantom wallet, play for up to 30 minutes a day, and 0.01 SOL lands in your wallet automatically the moment you hit the cap.
A gamified Stroop task. A colored word flashes on screen — click the button matching either the ink color or the word meaning. Hundreds of fMRI studies map the Stroop task to the anterior cingulate and dorsolateral prefrontal cortex; every trial yields reaction time, RT variance, error rate, and mouse trajectory curvature — the behavioral fingerprint of decision conflict. Dense data, short trials, no webcam or mic required.
An emotion recognition game. An emotional face appears; pick the matching label. Every few rounds the prompt flips to "mimic this expression" and the webcam scores how close you get. MediaPipe Face Mesh runs entirely in your browser — only numerical features (landmarks, pupil proxy, blink rate, head pose) ever leave your device. Produces directly labeled (stimulus emotion, user response, user facial action units) triplets — the gold-standard format for training the Emotion Engine.
A prosody recording game. Read a neutral sentence aloud in a specified emotional tone — happily, sadly, angrily, calmly, fearfully. Meyda.js extracts pitch contour, energy envelope, spectral centroid, and jitter in the browser; raw audio never uploads, only numerical prosody features. Builds a labeled (text, intended emotion, prosody features) corpus that trains Empathy Voice in both directions.
Sign in with Phantom, play up to 30 minutes per UTC day across any of the three games, and the
moment you cross the cap with a passing quality score, the backend signs and sends
0.01 SOL directly to your wallet. A hot wallet soft-capped at 2 SOL keeps blast
radius bounded; a UNIQUE (wallet, day_bucket) Postgres constraint makes
double-payment structurally impossible. A scheduled confirmer job verifies transaction finality
every two minutes.
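The double-payment guard above is just a uniqueness constraint at the storage layer. Here is a minimal sketch of the idea using sqlite3 for illustration (the notes say production uses Postgres); the table name, extra columns, and wallet string are hypothetical.

```python
import sqlite3

# Illustrative schema: one payout row per (wallet, day_bucket).
# Everything beyond the UNIQUE pair is a made-up example.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE payouts (
        wallet     TEXT NOT NULL,
        day_bucket TEXT NOT NULL,   -- UTC day, e.g. '2025-01-31'
        lamports   INTEGER NOT NULL,
        UNIQUE (wallet, day_bucket)
    )
""")

def record_payout(wallet: str, day_bucket: str, lamports: int) -> bool:
    """Insert a payout row; return False if this wallet was already paid today."""
    try:
        with conn:  # transaction: commit on success, rollback on error
            conn.execute(
                "INSERT INTO payouts (wallet, day_bucket, lamports) VALUES (?, ?, ?)",
                (wallet, day_bucket, lamports),
            )
        return True
    except sqlite3.IntegrityError:
        # The constraint, not application logic, rejects the duplicate.
        return False

first = record_payout("WalletA", "2025-01-31", 10_000_000)   # 0.01 SOL in lamports
second = record_payout("WalletA", "2025-01-31", 10_000_000)  # same wallet, same day
```

Because the database enforces the invariant, a retried or raced payout request can never pay twice, no matter what the application code does.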
Webcam and microphone streams are processed only in the browser via WebAssembly-backed libraries. There is no backend endpoint that accepts raw audio or video — only derived numerical features can be uploaded. No email, no password, no PII: the wallet address is the only identifier, hashed with a pepper before appearing in any storage path.
v1.8.0 is a deep foundational upgrade to the NForge encoding model. Instead of new demos, this release ships five well-grounded neuroscience and ML primitives that improve prediction accuracy on cortical fMRI signals. Every primitive is opt-in via a config flag, so existing checkpoints continue to load and train without modification.
The fMRI BOLD signal lags neural activity by ~5 seconds with a stereotyped double-gamma shape.
NForge now ships an HRFConv layer initialised to the canonical SPM hemodynamic
response function and trained end-to-end. Modeling this physiological delay explicitly
frees capacity the transformer previously spent approximating it, improving prediction accuracy.
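The double-gamma shape can be sketched in a few lines. This is a minimal, stdlib-only illustration of the canonical SPM-style kernel (peak gamma minus a scaled undershoot gamma) and its causal convolution with a neural time series — not the shipped HRFConv layer, whose parameters are trained end-to-end.

```python
import math

def gamma_pdf(t: float, shape: float) -> float:
    """Gamma density with unit scale; zero for t <= 0."""
    if t <= 0:
        return 0.0
    return t ** (shape - 1) * math.exp(-t) / math.gamma(shape)

def canonical_hrf(t: float, peak: float = 6.0, undershoot: float = 16.0,
                  ratio: float = 6.0) -> float:
    """Double-gamma HRF: positive response minus a 1/6-scaled undershoot."""
    return gamma_pdf(t, peak) - gamma_pdf(t, undershoot) / ratio

# Discretise at TR = 1 s over 32 s -- the kind of kernel an HRF-aware
# conv layer could be initialised with before training refines it.
kernel = [canonical_hrf(t) for t in range(33)]

def hrf_convolve(neural: list[float], kernel: list[float]) -> list[float]:
    """Causal convolution of a neural time series with the HRF kernel."""
    out = []
    for i in range(len(neural)):
        acc = 0.0
        for k, w in enumerate(kernel):
            if i - k >= 0:
                acc += w * neural[i - k]
        out.append(acc)
    return out

# An impulse of neural activity at t=0 yields a BOLD response that peaks
# ~5 s later and dips below baseline around t=16 s.
impulse = [1.0] + [0.0] * 31
bold = hrf_convolve(impulse, kernel)
peak_time = max(range(len(bold)), key=lambda i: bold[i])
```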
Different cortical regions are best predicted by different layers of foundation models —
early layers carry low-level acoustic and visual features, late layers carry semantics. The new
LayerAttention and GatedLayerAttention modules learn per-layer softmax
weights, replacing naive mean/concatenation aggregation with an adaptive combination per modality.
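The mechanism behind LayerAttention can be shown in miniature. The sketch below (assumed shapes and toy numbers, not the real module) blends per-layer feature vectors with softmax weights over learnable logits; pushing one logit high makes its layer dominate, which is how training can route early layers to sensory ROIs and late layers to semantic ROIs.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def layer_attention(layer_feats, logits):
    """Blend L per-layer feature vectors with learned softmax weights.

    layer_feats: list of L vectors (one per foundation-model layer).
    logits: L learnable scalars turned into mixing weights by softmax.
    """
    w = softmax(logits)
    dim = len(layer_feats[0])
    return [sum(w[l] * layer_feats[l][d] for l in range(len(layer_feats)))
            for d in range(dim)]

# Three toy "layers"; a high first logit makes layer 0 dominate the mix,
# unlike a naive mean which would weight all layers equally.
feats = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
mixed = layer_attention(feats, logits=[5.0, 0.0, 0.0])
```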
Learned absolute positional embeddings limit the model's ability to extrapolate beyond the training
sequence length. The new RoPEAdditiveWrapper adds sinusoidal rotation-based positional
information that generalises to longer temporal contexts — useful for long-form video and
continuous narrative stimuli.
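The generalisation property comes from rotary encoding making attention scores depend only on relative offsets. Below is a generic RoPE sketch (not the RoPEAdditiveWrapper itself): consecutive feature pairs are rotated by position-dependent angles, so shifting both query and key positions by the same amount leaves their dot product unchanged.

```python
import math

def rope_rotate(vec, pos, base=10000.0):
    """Rotate consecutive pairs of a feature vector by position-scaled angles."""
    out = list(vec)
    half = len(vec) // 2
    for i in range(half):
        theta = pos * base ** (-2 * i / len(vec))
        x, y = vec[2 * i], vec[2 * i + 1]
        out[2 * i] = x * math.cos(theta) - y * math.sin(theta)
        out[2 * i + 1] = x * math.sin(theta) + y * math.cos(theta)
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = [1.0, 0.0, 0.5, 0.5]
k = [0.0, 1.0, 0.5, -0.5]

# Relative-position property: the attention score for offset 4 is the same
# whether the pair sits at positions (3, 7) or far outside training range
# at (103, 107) -- which is why RoPE extrapolates to longer contexts.
s1 = dot(rope_rotate(q, 3), rope_rotate(k, 7))
s2 = dot(rope_rotate(q, 103), rope_rotate(k, 107))
```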
Text, audio, and video carry complementary signal — but the original combiner MLP merged them
in isolation. The new CrossModalAttention block lets each modality attend to the others'
full temporal context via multi-head attention before combination, capturing audio-visual binding
and language-grounding effects.
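The core operation is one modality's timesteps attending over another's. This stdlib sketch shows single-head scaled dot-product cross-attention with toy 2-d features (the shipped CrossModalAttention block is multi-head, but the mechanism is the same).

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def cross_attend(queries, keys, values):
    """Each query timestep attends over the other modality's full context."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# Toy example: 2 audio timesteps attend over 3 video timesteps, so each
# audio frame is enriched with the video context it best matches --
# the audio-visual binding the combiner MLP could not capture in isolation.
audio_q = [[1.0, 0.0], [0.0, 1.0]]
video_k = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
video_v = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
fused = cross_attend(audio_q, video_k, video_v)
```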
Not every cortical vertex carries the same signal-to-noise ratio. The new
NoiseCeilingWeightedLoss weights per-vertex MSE or Pearson loss by an estimated
reliability ceiling (computed via split-half correlation on repeated measurements), so the model
focuses capacity on vertices that can actually be predicted instead of memorising noise.
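A minimal sketch of the weighting scheme, under the stated split-half approach: per-vertex reliability is the split-half correlation boosted by the Spearman-Brown formula and clamped to [0, 1], and the loss scales each vertex's squared error by that ceiling. Toy data, not real fMRI.

```python
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

def noise_ceiling(half1, half2):
    """Reliability ceiling per vertex from two repeated measurements.

    Split-half correlation, Spearman-Brown corrected to full-length
    reliability, clamped to [0, 1]. Unreplicable vertices get weight 0.
    """
    r = pearson(half1, half2)
    sb = 2 * r / (1 + r) if r > -1 else 0.0
    return min(max(sb, 0.0), 1.0)

def weighted_mse(pred, target, weights):
    """Per-vertex squared error scaled by each vertex's reliability ceiling."""
    num = sum(w * (p - t) ** 2 for w, p, t in zip(weights, pred, target))
    return num / max(sum(weights), 1e-8)

# Two vertices: one replicates across repeats, one is pure noise.
reliable_r1, reliable_r2 = [1.0, 2.0, 3.0, 4.0], [1.1, 2.2, 2.9, 4.1]
noisy_r1, noisy_r2 = [1.0, -2.0, 3.0, -4.0], [-3.0, 4.0, -1.0, 2.0]
weights = [noise_ceiling(reliable_r1, reliable_r2),
           noise_ceiling(noisy_r1, noisy_r2)]

# The noisy vertex's huge error (49.0) contributes nothing to the loss.
loss = weighted_mse(pred=[2.0, 2.0], target=[2.5, -5.0], weights=weights)
```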
NForge v1.7.0 turns your mental health journey into a living digital garden. Every therapy session shapes the ecosystem — positive sessions bloom flowers, sustained calm grows trees, difficult periods bring storms and wilting, and recovery sprouts new life. Over months, you watch a garden grow that mirrors your healing. Includes a gorgeous retro ASCII terminal demo.
Eight plant types — flowers, trees, shrubs, vines, ferns, grass, mushrooms, and cacti — each with a full lifecycle from seed to sprout to blooming to thriving. Plants are named poetically based on the emotion that planted them: "Serenity Rose", "Courage Oak", "Resilience Pine".
Your emotional trends drive the environment. High valence brings sunshine and rainbows. Low valence triggers storms and fog. Long-term emotional trajectory determines the season — spring for recovery, summer for flourishing, autumn for reflection, winter for difficult periods.
A beautiful terminal garden with picket fences, ASCII flower art, weather effects, and a time-lapse mode that replays 6 months of garden growth in fast-forward. Update the garden with simulated therapy sessions and watch new sprouts appear, flowers bloom, and the weather shift.
Track your garden's health score (0-100), bloom streaks, plant ages, and lifetime statistics. Snapshot history lets you compare your garden at any two points in time. Soil quality improves with consistent positive sessions, amplifying future growth.
NForge v1.6.0 makes your house sentient. Neural Home reads your cortical state in real time and adjusts every device in your home before you even sit down — lights dim to warm amber when anxiety spikes, eucalyptus fills the office when focus is needed, blinds open wide when your brain lights up with joy. Your environment becomes a continuous, invisible therapist. Supports HomeKit, Google Home, Alexa, Philips Hue, Sonos, Nest, MQTT, and Home Assistant.
Six pre-configured environmental scenes mapped to emotional quadrants: calm_retreat (dim 2700K, lavender, ambient 60bpm) for anxiety and stress, comfort (soft 2500K, vanilla, classical 70bpm) for sadness, joy (bright 3500K, citrus, upbeat 110bpm) for excitement, focus_zone (neutral 4000K, eucalyptus, lofi 90bpm) for contentment, energize (cool 5000K, citrus, upbeat 120bpm) for activation, and neutral for baseline. Every parameter — brightness, color temperature, temperature, music genre, tempo, volume, blinds, aroma, fan speed — is fully interpolated between scenes.
DeviceCommand generates native payloads for every major platform without importing any third-party SDK. HomeKit (characteristic-based with mireds conversion), Google Home (action.devices intent format), Alexa (Smart Home Skill API directives), Philips Hue (Clip API v2 with CIE xy color conversion), Sonos and Nest via platform-matched payloads, and MQTT with full Home Assistant auto-discovery support. The transport layer is always yours.
EmotionToSceneMapper doesn't snap between scenes — it continuously blends all numeric parameters (brightness, color temperature, temperature, volume, blinds position, fan speed) between the current and target scene using a configurable step factor. Transition duration is calculated from the Euclidean distance between emotional states in valence-arousal space: small emotional shifts trigger 2-second fades, large shifts stretch to 15 seconds. Every device respects its own min_transition_ms floor to prevent jarring snaps.
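The blending and duration logic described above can be sketched as follows. The step factor, scene dicts, and parameter names here are illustrative, not the real EmotionToSceneMapper API; the 2-15 s range and valence-arousal distance scaling are from the notes.

```python
import math

def step_toward(current: dict, target: dict, step: float = 0.2) -> dict:
    """Move every numeric scene parameter a fraction of the way to the target."""
    return {k: current[k] + step * (target[k] - current[k]) for k in current}

def transition_seconds(v0, a0, v1, a1, min_s=2.0, max_s=15.0):
    """Scale fade duration with Euclidean distance in valence-arousal space.

    Assuming both axes span [-1, 1], the largest possible distance is
    2*sqrt(2); small shifts get ~2 s fades, large shifts stretch toward 15 s.
    """
    dist = math.hypot(v1 - v0, a1 - a0)
    return min_s + (max_s - min_s) * min(dist / (2 * math.sqrt(2)), 1.0)

# Hypothetical parameter subsets of two of the six scenes.
calm_retreat = {"brightness": 30.0, "color_temp_k": 2700.0, "volume": 20.0}
energize = {"brightness": 90.0, "color_temp_k": 5000.0, "volume": 60.0}

blended = step_toward(calm_retreat, energize, step=0.25)
short_fade = transition_seconds(0.1, 0.1, 0.2, 0.1)    # small emotional shift
long_fade = transition_seconds(-0.9, -0.9, 0.9, 0.9)   # large emotional shift
```

Calling `step_toward` once per tick yields an exponential approach to the target scene, so a device never snaps even if the target changes mid-transition.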
Register any number of rooms via RoomProfile, each with its own device list and priority level. Set the active room dynamically as you move through the house — the controller targets the highest-priority room when no active room is set. The built-in create_default_home() scaffolds a full three-room setup: living room (2 Hue lights, Nest thermostat, Sonos, smart blinds), bedroom (2 lights, thermostat, speaker, aromatherapy diffuser, blinds), and office (light, thermostat, speaker). Pause and resume automatic control at any time. Full command history retained for the last 1000 dispatches.
NForge v1.5.0 breaks the ultimate barrier — letting two people share feelings directly. BrainLink tracks multiple participants' brain states simultaneously, detects moments of neural synchrony and emotional rupture, and translates one person's inner experience into stimuli that evoke the same feeling in the other. Galbot mediates the exchange in real-time. Couples therapy will never be the same.
Real-time Pearson correlation across cortical surfaces between participants. Per-ROI synchrony maps show exactly which brain regions are in sync. Phase lag analysis via cross-correlation reveals who is the emotional leader — who feels first, and who follows. Classified into four levels: deep sync, moderate sync, mild sync, and divergent.
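The synchrony and leader-follower analysis reduces to correlation at a range of lags. A minimal sketch on two toy time series (the level thresholds are my assumption, not the shipped classifier's):

```python
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

def lagged_r(a, b, lag):
    """Correlate a[t] with b[t + lag]; positive lag means a leads b."""
    if lag >= 0:
        x, y = a[:len(a) - lag], b[lag:]
    else:
        x, y = a[-lag:], b[:len(b) + lag]
    return pearson(x, y)

def phase_lag(a, b, max_lag=5):
    """Lag with the highest cross-correlation: who feels first, who follows."""
    return max(range(-max_lag, max_lag + 1), key=lambda lag: lagged_r(a, b, lag))

def sync_level(r):
    """Illustrative thresholds for the four synchrony levels."""
    if r >= 0.7:
        return "deep sync"
    if r >= 0.4:
        return "moderate sync"
    if r >= 0.1:
        return "mild sync"
    return "divergent"

# Participant B tracks participant A's emotional trace two samples later.
a = [math.sin(0.5 * t) for t in range(40)]
b = [math.sin(0.5 * (t - 2)) for t in range(40)]
sync_now = pearson(a, b)
leader_lag = phase_lag(a, b)   # positive: A is the emotional leader
```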
The core breakthrough. Translates one participant's emotional state into four stimulus modalities: audio (valence → major/minor key, arousal → tempo), visual (emotion → color palette and motion), narrative (metaphor-rich empathy descriptions), and haptic (Galbot therapeutic touch mapped to emotional intensity). Built-in safety caps prevent overwhelming the recipient.
Galbot automatically intervenes at critical moments. During deep resonance: "You're both feeling this right now. Stay with it." During emotional rupture: "I notice you two are in very different places. Can you share what's happening?" Auto-mediation is configurable with adjustable thresholds for rupture and resonance detection.
Full session reports with synchrony trajectory, peak connection moments, rupture events, transfers performed, and mediator interventions. Supports up to 6 simultaneous participants for group and family therapy. Every event is logged for post-session clinical review.
NForge v1.4.0 gives robots a soul and therapists a window. Empathy Voice integrates Eleven Labs to make Galbot speak with emotionally adaptive prosody — warm and steady for anxious patients, bright and energetic for the disengaged. NeuroSync streams every cortical prediction, emotion trajectory, BrainBeats measure, and journal entry to a real-time browser dashboard over WebSocket, so therapists can monitor sessions live from anywhere.
Five therapeutic voice presets — warm_steady, bright_energetic, gentle_soothing, neutral_calm, and playful_light — automatically selected based on the patient's emotional state. Supports two strategies: counter-regulation (therapeutic opposite) and mirror (empathetic matching). Arousal-weighted voice blending for smooth transitions between emotional states.
A stunning browser-based dashboard with four live panels: Cortical Heatmap (2D brain with thermal colormap), Emotion Trajectory (dual-line valence/arousal chart), BrainBeats Visualizer (20-bar equalizer), and Live Feed (terminal-style session log). Pure Canvas API at 60fps, zero external dependencies, auto-reconnecting WebSocket with simulation fallback.
SSML-like text markers automatically inserted based on detected emotion — pauses before key phrases for anxious patients, emphasis on action words for the disengaged. Voice trajectory logging tracks every voice selection across a session for post-analysis. Built-in cost estimation for Eleven Labs API usage.
NForge is no longer Galbot-exclusive. The Universal ROS2 Bridge publishes cortical predictions, emotion states, therapy actions, BrainBeats, journal entries, and spatial scenes as standard ROS2 topics — making any ROS2-compatible robot brain-aware. Pepper, NAO, Spot, TurtleBot, Unitree G1, or your custom build. One integration, every robot.
Full NForge data published as ROS2 topics: emotion state (10Hz), cortical predictions (30Hz), therapy actions (10Hz), BrainBeats (10Hz), journal entries (1Hz), spatial scenes (30Hz), intent predictions (10Hz), and dream frames (5Hz). Includes auto-generated .msg definitions for custom NForge message types.
Pre-configured profiles for 7 robots (Pepper, NAO, Spot, TurtleBot4, Unitree G1, Unitree Go2, Galbot G1) with capability detection. Abstract gestures like "nod" and "open_palms" auto-map to each robot's specific joint trajectories. If a robot can't speak, speech actions gracefully skip. If it has no display, facial expressions route to LED or gesture fallbacks.
NForge v1.3.0 brings brain activity into physical space. Stream real-time 3D cortical heatmaps, emotion auras, and floating thought streams to Apple Vision Pro — transforming the therapy room into a neural observatory. Therapists wearing Vision Pro see a holographic brain floating above the patient, its surface pulsing with color as emotions shift, surrounded by drifting words that name what the patient feels but cannot say.
Real-time 3D cortical surface visualization with thermal colormap (blue→cyan→green→yellow→red) mapped to neural activation intensity. ROI highlighting mode dims inactive regions to spotlight the brain areas driving the current emotional state. 20,484 vertices rendered as a floating holographic brain at 30fps.
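The thermal colormap is a piecewise-linear ramp across five color stops. A sketch, assuming the stops sit at even intervals on [0, 1] (the exact positions in the shipped renderer are not specified):

```python
def thermal_color(x: float) -> tuple:
    """Map activation intensity in [0, 1] to an RGB triple in [0, 1]^3.

    Linear interpolation through blue -> cyan -> green -> yellow -> red,
    with out-of-range inputs clamped.
    """
    stops = [
        (0.00, (0.0, 0.0, 1.0)),  # blue
        (0.25, (0.0, 1.0, 1.0)),  # cyan
        (0.50, (0.0, 1.0, 0.0)),  # green
        (0.75, (1.0, 1.0, 0.0)),  # yellow
        (1.00, (1.0, 0.0, 0.0)),  # red
    ]
    x = min(max(x, 0.0), 1.0)
    for (x0, c0), (x1, c1) in zip(stops, stops[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return tuple(a + t * (b - a) for a, b in zip(c0, c1))
    return stops[-1][1]
```

Per-vertex activation then becomes a per-vertex color, evaluated once per frame for all 20,484 vertices.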
Particle effects surrounding the brain mesh that reflect emotional state. Golden warm glow for contentment, bright sparks for excitement, red-purple flickers for anxiety, dim blue haze for sadness. High dominance produces structured geometric patterns while low dominance creates soft organic flows. Smooth interpolation between states prevents jarring transitions.
Detected psychological themes (anxiety, hope, nostalgia, confidence) materialize as floating text that drifts through space around the brain. Positive themes float upward, negative themes sink. Font size reflects confidence, color matches the emotion aura. Particles fade naturally over 8 seconds, creating a living stream of consciousness visible in the room.
JSON-based streaming protocol at 30fps with ARKit-compatible spatial payloads. Scene graph architecture with positioned anchors, brain mesh, aura particles, thought streams, emotion indicators, and journal cards — all composited into a single spatial update per frame. Designed for a visionOS RealityKit client.
NForge now automatically writes diary entries from therapy sessions by translating cortical activity patterns and decoded emotions into introspective prose. Galbot captures what patients experience but might not consciously articulate — turning raw neural signals into gentle, psychologically aware journal entries that therapists can review.
The journal doesn't write constantly. It watches for emotional turning points (valence/arousal shifts > 0.3) and intensity spikes, only generating entries at psychologically meaningful moments. A ThemeDetector maps emotion dimensions and active brain regions to themes like anxiety, nostalgia, hope, vulnerability, and agency. During calm periods, the journal stays quiet.
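The turning-point trigger is a simple threshold on frame-to-frame shifts. A sketch of that gating rule, using the > 0.3 threshold from the notes (the event tuple format is illustrative):

```python
def turning_points(valence, arousal, threshold=0.3):
    """Flag timesteps where valence or arousal jumps by more than the threshold.

    Between such moments the journal stays quiet, so entries land only at
    psychologically meaningful points in the session.
    """
    events = []
    for t in range(1, len(valence)):
        dv = abs(valence[t] - valence[t - 1])
        da = abs(arousal[t] - arousal[t - 1])
        if dv > threshold or da > threshold:
            events.append((t, dv, da))
    return events

# A calm stretch, a sudden drop at t=3, and a recovery at t=5.
valence = [0.0, 0.05, 0.1, -0.4, -0.35, 0.2]
arousal = [0.3, 0.3, 0.35, 0.4, 0.4, 0.45]
events = turning_points(valence, arousal)
```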
Generates introspective diary text with intensity-tiered vocabulary. “Anxiety” at low intensity becomes “a quiet unease” while at high intensity it becomes “gripped by dread.” Supports first-person (“I felt…”) and second-person (“You felt…”) voice modes. Opening phrases vary naturally: “Something stirred within…”, “Beneath the surface…”, “Without quite knowing why…”
Detects the narrative shape of each session — rising, falling, volatile, or plateau. Identifies turning points where emotions shifted direction. The compiled diary opens with an arc-aware summary: “Today was a journey from darkness toward light…” or “A quiet session, but not without depth…”
NForge converts cortical activity predictions into real-time adaptive music. Different brain regions produce different musical properties — visual cortex creates bright, high-register melodies while emotional regions modulate tempo and dynamics. The robot literally turns your brain activity into music.
Cortical regions map directly to musical scales: visual→major pentatonic highs, auditory→harmonic mid-range, motor→rhythmic percussion, emotional→expressive minor dynamics, cognitive→contemplative bass. Emotion-driven key selection maps positive valence to major keys and negative valence to minor keys. Arousal-driven tempo spans from 50 BPM (calm) to 140 BPM (excited), creating a fully brain-responsive composition engine.
Set a target mood and the music gently biases toward that emotional state. A 70/30 blend between natural brain-driven composition and the therapeutic target ensures the music feels organic while still guiding the listener. Clinically grounded in established music therapy research.
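The valence-to-key and arousal-to-tempo mappings above, plus the 70/30 therapeutic blend, can be sketched directly (assuming arousal is normalized to [0, 1] and the tempo map is linear; the real engine may differ):

```python
def brain_to_music(valence: float, arousal: float) -> dict:
    """Map emotion dimensions to musical parameters.

    Valence sign picks major vs minor; arousal interpolates tempo
    linearly between 50 BPM (calm) and 140 BPM (excited).
    """
    a = min(max(arousal, 0.0), 1.0)
    return {
        "mode": "major" if valence >= 0 else "minor",
        "tempo_bpm": 50.0 + a * (140.0 - 50.0),
    }

def blend_tempo(brain_bpm: float, target_bpm: float, weight: float = 0.7) -> float:
    """70/30 blend between brain-driven tempo and the therapeutic target."""
    return weight * brain_bpm + (1 - weight) * target_bpm

calm_sad = brain_to_music(valence=-0.6, arousal=0.0)   # minor, 50 BPM
excited = brain_to_music(valence=0.8, arousal=1.0)     # major, 140 BPM

# Gently bias an excited listener toward a calmer 90 BPM target.
therapeutic_bpm = blend_tempo(excited["tempo_bpm"], target_bpm=90.0)
```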
NForge v1.2.0 introduces two groundbreaking capabilities that push Galbot beyond reactive therapy into anticipatory understanding and experiential reconstruction. These features represent a fundamental shift: the robot no longer just responds to what you say — it reconstructs what you experienced and understands what you need before you ask.
Dream Replay reverse-decodes brain activity into vivid experiential narratives that Galbot can narrate aloud. Silent Communication reads micro-expressions, posture, and breathing to predict your needs without a single word. Together, they make Galbot the most perceptive companion robot ever built.
Feed a sequence of cortical predictions (or real fMRI recordings) back through NForge in reverse to reconstruct the sensory experience that produced them. The DreamDecoder neural network maps ~20,484 cortical vertices back to visual (1280-d), auditory (1024-d), and semantic (3072-d) feature spaces via a shared trunk with modality-specific heads. Temporal smoothing (exponential moving average, α=0.3) ensures coherent frame-to-frame transitions. The NarrativeGenerator then converts reconstructed embeddings into evocative, human-readable descriptions — mapping activation intensity to a poetic lexicon ranging from "faint whispers" to "thundering crescendos". Galbot narrates peak emotional moments aloud, complete with matched facial expressions and gestures.
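The temporal smoothing step is a plain exponential moving average over reconstructed feature frames. A sketch with the stated alpha = 0.3 (toy 1-d frames instead of the 1280/1024/3072-d embeddings):

```python
def ema_smooth(frames, alpha=0.3):
    """Exponential moving average across frames.

    smoothed[t] = alpha * frames[t] + (1 - alpha) * smoothed[t - 1],
    so a sudden jump in the reconstruction is softened over several frames
    rather than producing an incoherent cut.
    """
    smoothed = []
    prev = None
    for frame in frames:
        if prev is None:
            prev = list(frame)
        else:
            prev = [alpha * x + (1 - alpha) * p for x, p in zip(frame, prev)]
        smoothed.append(prev)
    return smoothed

# A step change at frame 2 is only 30% absorbed immediately, 51% by frame 3.
frames = [[0.0], [0.0], [1.0], [1.0]]
out = ema_smooth(frames)
```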
Galbot learns to understand you without words. The system fuses three real-time perception streams: MicroExpressionDecoder (468 MediaPipe face mesh landmarks → 7 micro-expressions including brow furrow, lip press, jaw clench), PostureAnalyzer (33 body keypoints → 6 posture classifications), and BreathingMonitor (chest displacement zero-crossing → BPM estimation). An IntentPredictor evaluates pattern rules against a rolling signal history to predict 8 possible needs (comfort, space, engagement, reassurance, silence, stimulation, rest, connection) with urgency scoring. The SilentDialogue session manager tracks rapport quality over time — measuring whether Galbot's proactive responses consistently lead to positive follow-up signals.
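The BreathingMonitor's zero-crossing BPM estimate can be sketched on synthetic data. This counts rising mean-crossings of a chest-displacement trace so each breath cycle is counted once; the sampling rate and signal model are assumptions for illustration.

```python
import math

def breathing_bpm(chest, sample_hz):
    """Estimate breaths per minute from a chest-displacement trace."""
    mean = sum(chest) / len(chest)
    # Count crossings in one direction only (below-mean to at-or-above-mean),
    # so a full inhale/exhale cycle contributes exactly one count.
    rising = sum(1 for a, b in zip(chest, chest[1:]) if a < mean <= b)
    duration_min = len(chest) / sample_hz / 60.0
    return rising / duration_min

# 30 s of synthetic breathing at 12 breaths/min, sampled at 10 Hz.
fs, bpm_true = 10, 12
n = 30 * fs
signal = [math.sin(2 * math.pi * (bpm_true / 60.0) * (t / fs) + 0.5)
          for t in range(n)]
bpm = breathing_bpm(signal, fs)
```

Real chest displacement is noisier than a sine wave, so a production estimator would band-pass filter the trace first; the counting logic stays the same.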
Complete robotics integration layer for the Galbot G1 humanoid robot. Introduced EmotionEngine, GalbotBridge, BiometricFusion, and TherapySession for closed-loop emotional therapy where the robot adapts in real time to predicted brain states.
173 cm humanoid platform with a 4-microphone array, RGB-D cameras, full-body articulation, and expressive face display. Galbot G1 is purpose-built for human-robot interaction in therapeutic and assistive settings, providing a physical presence that adapts to the emotional state of the person it accompanies.
ROI-based valence, arousal, and dominance decoding directly from cortical predictions using the HCP MMP1.0 parcellation atlas. Each HCP region is weighted by its known emotional salience, producing a continuous three-dimensional affect vector updated at prediction frequency.
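The decoding step is a salience-weighted sum over region activations. A sketch with made-up region names and weights — the actual HCP MMP1.0 regions and their salience values are not reproduced here:

```python
def decode_affect(roi_activity: dict, roi_weights: dict) -> dict:
    """Combine ROI activations into a (valence, arousal, dominance) vector.

    roi_weights maps each region name to a 3-vector of emotional salience
    weights; activations scale those weights and the result is normalised
    by total absolute activity.
    """
    vad = [0.0, 0.0, 0.0]
    total = 0.0
    for roi, act in roi_activity.items():
        w = roi_weights.get(roi)
        if w is None:
            continue  # regions without salience weights are ignored
        for i in range(3):
            vad[i] += act * w[i]
        total += abs(act)
    if total:
        vad = [v / total for v in vad]
    return {"valence": vad[0], "arousal": vad[1], "dominance": vad[2]}

# Hypothetical salience weights for two regions.
weights = {"OFC": (0.8, 0.2, 0.4), "amygdala": (-0.5, 0.9, -0.2)}
affect = decode_affect({"OFC": 1.0, "amygdala": 0.5}, weights)
```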
4-channel Galbot action translation maps the affect vector to robot behaviours: speech_tone (warm / neutral / calm), facial_expression (smile / neutral / concern), gesture (open / still / protective), and approach_distance (close / medium / respectful). All channels update in real time as brain state evolves.
EEG 10-20 electrode montage and fNIRS optode signals are fused into the NForge feature space via learned projection heads, augmenting the cortical prediction stream with real-time physiological ground truth. Enables operation without fMRI hardware in clinical or home settings.
Trajectory-aware session management tracks emotional arc across an entire therapy session. Computes engagement metrics (stability, peak valence, recovery rate) and exports structured JSON reports for clinical review. Sessions can define target trajectories and measure deviation in real time.
Planned features for upcoming NForge releases.
First public release of NForge. Restructured from Meta's TRIBE v2 with professional src/ package layout, four new features (ROI Attention Maps, Real-time Streaming, Modality Attribution, Cross-Subject Adaptation), torch.compile support, and comprehensive test coverage.