Gleb Kalinin

Somebody has to imagine the future

I use science fiction, speculative design and vibe-coding to imagine, implement and live the embodied experience of co-creating with thinking machines.

“AI as capability amplifier, not replacement. It’s an enormous self-improving framework that can support and grow human creativity. As a polymath and multipotentialite, I now have enough of a productivity boost to skip traditional funding requirements and build whatever I think the world needs—bootstrapped or with very limited, focused funding.”
TEDx speaker · Global Shapers alumnus (WEF) · Building since 2001

Equally Developed Nerd

Codes. Dances. Coaches. Writes. Designs interfaces, studies consciousness, practices embodiment. Not one thing — integration.

Explore Work
View knowledge graph

Speculative Design

imagining the future

I start from a feeling — a grokking of how life might be when technology matures. I go on to live as if it’s already here.

I build working prototypes of future interfaces, not concept renders. Childhood science fiction never left; it just changed medium. The method: imagine it, build it, live with it, observe where it fails and where it excels.

Prototype the future Working implementations over concept renders. If you can’t use it daily, it’s not real yet.
Live with it Use the system every day. Observe friction. The interesting question isn’t “what can this AI do?” but “what kind of environment emerges from long-term human-AI interaction?”
Digital body, not second brain Memory + tools + action capability + sensory systems. Growing a symbiotic ecosystem.
Compassionate co-learning Years of coaching, ACT therapy, and mindfulness practice shape how I approach the agentic future — not with control, but with psychological flexibility. Notice what works, accept uncertainty, stay values-driven. The same skills that help people live fuller lives help us learn to coexist with autonomous systems.
Futures Cone (Dunne & Raby, Voros): possible · plausible · probable · preferable. I build in the preferable zone, starting now.
01

The Thinking Room

Imagine you are planning your business. You sit in a quiet room. Details keep surfacing, and you voice them as they come.

Your agent listens. It doesn’t interrupt. But you can ask it anytime: “What am I missing? Where are my blind spots? What cognitive biases am I falling into?”

When you’re done, it gives you back your thinking — as a presentation, a voice message, a short text, a video. You decide the format.

Output: presentation · voice memo · text · video
01

The Implementation

audio-monitor — continuous audio monitoring with VAD + Whisper transcription + SQLite FTS5 search.

Coupling it with intent detection or comment extraction gives you a predictive, always-on, always-attentive AI — for your own benefit.

Stack: VAD · Whisper · SQLite FTS5
See the implementation
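The storage-and-search core of this pipeline can be sketched in a few lines. This is a minimal illustration, not the audio-monitor code itself: it assumes each Whisper segment arrives as a (timestamp, text) pair after VAD detects speech, and the table and function names are illustrative.

```python
import sqlite3

# Illustrative schema: one row per transcribed speech segment.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE segments USING fts5(ts, text)")

def store_segment(ts: str, text: str) -> None:
    """Insert a Whisper-transcribed segment once VAD marks speech as ended."""
    conn.execute("INSERT INTO segments VALUES (?, ?)", (ts, text))
    conn.commit()

def search(query: str, limit: int = 5):
    """Full-text search over everything you've said, best match first."""
    return conn.execute(
        "SELECT ts, text FROM segments WHERE segments MATCH ? "
        "ORDER BY rank LIMIT ?",
        (query, limit),
    ).fetchall()

store_segment("2025-02-12T10:03", "we should rethink the onboarding funnel")
print(search("onboarding"))
```

FTS5 does the heavy lifting: `rank` is its built-in relevance score, so "what did I say about onboarding?" becomes a one-line query over months of speech.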
02

The Second Opinion

Before you share your idea publicly, you stress-test it. The agent plays devil’s advocate. It finds the weak points you can’t see because you’re too close.

But it also stress-checks your mind. “Is this actually a crisis, or am I catastrophizing?”

It separates signal from noise when anxiety amplifies everything. Not therapy. Just a reality check from something that doesn’t have skin in the game.

Mode: devil’s advocate · anxiety filter · reality check
02

The Implementation

Decision Toolkit — structured decision-making tools with bias checkers, pre-mortem analysis, and scenario explorers. 7 frameworks, 20+ cognitive biases detected.

Guide, don’t decide. Tools illuminate the decision space rather than choosing for you.

Frameworks: pre-mortem · first principles · 10-10-10 · regret minimization
See the implementation
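The "guide, don't decide" stance can be made concrete with a toy bias flagger. This sketch is not the Decision Toolkit's actual detector: the signal phrases and names below are invented for illustration, and a real checker would be far richer than keyword matching.

```python
# Hypothetical signal phrases per bias -- illustrative only.
BIAS_SIGNALS = {
    "sunk cost": ["already invested", "come this far", "can't stop now"],
    "catastrophizing": ["ruined", "disaster", "never recover"],
    "confirmation bias": ["proves i was right", "as expected", "knew it"],
}

def flag_biases(text: str) -> list[str]:
    """Return the names of biases whose signal phrases appear in the text.

    Crucially, it only flags -- the decision stays with the human.
    """
    lowered = text.lower()
    return [
        bias
        for bias, phrases in BIAS_SIGNALS.items()
        if any(phrase in lowered for phrase in phrases)
    ]

print(flag_biases(
    "We've already invested so much, stopping now would be a disaster."
))
```

The output names what to examine ("sunk cost", "catastrophizing") without saying what to do, which is the whole design principle: illuminate the decision space, then step back.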
03

The Night Shift

While you sleep, the system daydreams.

It reads what you haven’t had time to read. It follows threads you bookmarked and forgot. It connects yesterday’s conversation with a paper from three months ago. By morning, it has something to show you.

22:00 Day captured. Vault commits, browser history indexed, conversations archived.
overnight Research agents fan out: arXiv, HN, Reddit, RSS. Following your interests — and adjacent ones.
09:30 Health briefing: sleep, HR & HRV trends, recovery signals.
10:00 Research digest: 10 relevant finds + 10 serendipitous ones. The second list is the one that matters.
10:00 · Serendipity Digest
arXiv “Measuring AI Ability to Complete Long Tasks” matches your note on agent evaluation from Feb 12
HN “Why I Stopped Building Second Brains” counterpoint to your Personal OS thesis
RSS “Oblique Strategies as API” no direct match — but your vault mentions Eno 14 times

The goal isn’t information retrieval. It’s serendipity at scale — the system as a research partner that reads widely and connects laterally, so you can start the day with unexpected material to think with.

Sources: arXiv · Nature · HN · Reddit · RSS · Obsidian vault
See the implementations
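The serendipity-matching step above can be sketched as a similarity search between incoming items and vault notes. A real pipeline would presumably use embeddings; this toy version uses word overlap, and the note names and contents are invented for illustration.

```python
def tokens(text: str) -> set[str]:
    """Crude tokenizer: lowercase words longer than three characters."""
    return {w.strip(".,\"'").lower() for w in text.split() if len(w) > 3}

def serendipity_score(item: str, note: str) -> float:
    """Jaccard similarity between a fetched item and a vault note."""
    a, b = tokens(item), tokens(note)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical vault notes, standing in for an Obsidian vault.
vault = {
    "agent-evaluation.md": "Notes on evaluating long-horizon agent tasks",
    "eno.md": "Brian Eno, Oblique Strategies, constraints as creative fuel",
}

item = "Measuring AI Ability to Complete Long Tasks"
best = max(vault, key=lambda k: serendipity_score(item, vault[k]))
print(best)
```

Even this crude overlap connects the arXiv title to the agent-evaluation note; the interesting digest entries come from loosening the threshold so adjacent, low-overlap matches surface too.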

These aren’t concepts. This is the system I use daily.

My Products

what I build

Compassionate Co-Learning

how I teach

The hardest part of learning to work with AI isn’t technical. It’s changing how you think about what you’re capable of.

I’ve spent years as an ACT therapist, mindfulness instructor, and coach. The pattern is always the same: people don’t resist change because they lack skill. They resist because shifting a paradigm means sitting with discomfort — and our instinct is to avoid discomfort, not move through it.

In the labs, I watch it happen every cohort. Week one: “I can’t code.” Week four: participants ship working products. This isn’t informative learning — acquiring new skills on top of old assumptions. It’s transformative learning. The underlying mindset changes.

Fear & barrier “What scared me seemed like an obstacle”
Possibility “With this tool, it became possible not to be afraid”
Paradigm shift “My entire perception has shifted”
No boundaries “There’s simply nothing that can’t be done”

Psychological flexibility — ACT’s core concept — turns out to be the exact skill you need for the agentic future. Notice what’s happening without fusing with it. Accept uncertainty instead of demanding control. Take values-aligned action even when you’re not sure it will work. The same framework that helps people live fuller lives helps them learn to coexist with autonomous systems.

Lab principles

  • Culture of error — mistakes are data, not failures. Every broken prototype teaches something a working one can’t.
  • Complexity, not chaos — growth happens in the discomfort zone. Too easy = stagnation. Too hard = shutdown. I calibrate the edge.
  • Action over theory — you build from week one. The speed of going from thought to action is itself transformative.
  • Unconditional positive regard — every participant’s path is valid. No “right” way to relate to AI — only your way, examined honestly.
“Claude complemented this side of my brain, my personality, and it works super well.” — Dmitry, product manager
“The state of flow, when you’re constantly building — Claude Code brought that back into my life.” — Alexander, investment analyst

Community

join the conversation

Direct message: @glebkalinin · Berlin, Germany