Building Your AI Second Brain: Lessons from Singapore

Last week, I taught a workshop in Singapore on building AI second brains for non-technical leaders. Not the abstract, futuristic kind—the practical, “walk out with something working today” kind.

The core insight? Context engineering is your superpower.

The Leadership Decision Crisis

Here’s what I’m seeing: more decisions are being escalated upward to CEOs and senior leaders. Questions like “What risk level will you take to bring AI to customers?” create unsustainable cognitive load at the top.

You need systems that extend your cognitive reach, not just automate tasks.

This is why I lost interest in task automation months ago. The real opportunity is insight productivity—extending your thinking and decision-making capacity.

What Is Context Engineering?

Think of AI models as having two types of memory:

  1. Long-term memory: Knowledge baked into the model's weights during training (this is why you should use the smartest model you have access to)
  2. Short-term memory: What you give it in the moment through text, files, and conversation

Context engineering is managing that short-term memory strategically. It’s briefing your AI like you’d brief a new chief of staff.

Three Things Your AI Needs to Know

1. Goals & Priorities

  • Long term: career, business, personal aspirations
  • Short term: this quarter, this month
  • Method: Run an AI-led interview about your objectives

2. Thinking & Analysis Patterns

  • What questions do you ask repeatedly in meetings?
  • What management frameworks do you use?
  • What books have deeply influenced your thinking?
  • Method: Review meeting transcripts and extract your recurring patterns

Examples that work:

  • “I always ask about second-order effects”
  • “I use working backwards from customer value”
  • “Influenced by Good Strategy/Bad Strategy”

3. Style & Preferences

  • How you communicate (tone, structure, formality)
  • Level of detail you prefer
  • How you like information presented
  • Method: Feed it 3-5 of your best emails or memos. The AI will reverse-engineer your style.
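Put together, the three ingredients above fit in a single preference file. Here is an illustrative sketch (the frameworks and style notes are examples from this post; replace them with your own):

```markdown
# About Me

## Goals & Priorities
- Long term: <career, business, personal aspirations>
- This quarter: <top 2-3 objectives>

## Thinking & Analysis Patterns
- I always ask about second-order effects
- I work backwards from customer value
- Influenced by Good Strategy/Bad Strategy

## Style & Preferences
- Tone: direct, warm, minimal jargon
- Detail: executive summary first, depth on request
- Format: short paragraphs over long bullet lists
```

Keep it under a page. The point is not completeness; it's giving the AI the same briefing you'd give a new chief of staff.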

My Daily Workflow

Morning ritual:

  1. Open Claude Code in my Obsidian vault
  2. Prompt: “Read yesterday’s meeting notes and today’s calendar. Create my morning briefing.”
  3. Claude scans relevant files, identifies open threads, connects to today’s meetings
  4. I review, edit, and start my day fully contextualized

Evening ritual:

  1. Dump today’s meeting transcripts into a folder
  2. Prompt: “Synthesize today’s meetings. Extract key decisions, action items, and insights.”
  3. Claude reads all transcripts, identifies patterns, updates project files, creates structured notes
  4. These notes automatically feed tomorrow morning’s briefing
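The evening ritual is also scriptable. Below is a minimal sketch of a hypothetical helper that gathers the day's transcripts into the synthesis prompt; Claude Code can read the folder directly, so this is just to show how little plumbing the workflow needs:

```python
from pathlib import Path

def build_synthesis_prompt(transcript_dir: str) -> str:
    """Gather every markdown transcript in `transcript_dir` and wrap
    it in the evening synthesis prompt. Hypothetical helper: Claude
    Code can read the folder itself, but preparing the prompt this
    way makes the ritual easy to automate."""
    transcripts = sorted(Path(transcript_dir).glob("*.md"))
    body = "\n\n---\n\n".join(p.read_text() for p in transcripts)
    return (
        "Synthesize today's meetings. Extract key decisions, "
        "action items, and insights.\n\n" + body
    )
```

Pipe the result into your assistant of choice, and save its answer back into the vault so tomorrow's briefing can find it.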

The compounding effect:

  • Week 1: Basic summaries
  • Week 4: AI recognizes recurring themes
  • Week 12: AI understands your priorities and flags what matters
  • Week 24: It’s like having a chief of staff who knows your entire context

Why Local Files Matter

I learned this the hard way. I lost three years of notes when Evernote changed their free plan. Never again.

Platform lock-in risks:

  • Limited uploads (15-20 files in most platforms)
  • Short context windows
  • Pricing changes
  • Features removed
  • What happens when you leave your company?

Local file benefits:

  • You own your data permanently
  • Works across job changes
  • Much larger context windows (500+ files vs 15-20 uploads)
  • AI can write back to files (self-referential memory)
  • Privacy: sensitive notes never leave your computer

The Setup That Works

Tools I use:

  • Obsidian (free): Local markdown notes in plain text files you own forever
  • Claude Code (comes with Claude Pro): AI that can read and write your local files

This creates a self-reinforcing loop: Obsidian holds the notes, Claude Code reads them for context and writes new notes back, and every session makes the next one smarter.

A businessman I know runs 24 custom GPTs—one per company function across his multiple businesses. His staff must ask the relevant GPT before escalating to him. He’s multiplied his cognitive reach 10x.

Common Pitfall: Hallucinations

During the Temasek workshop, someone asked a great question: why do complex prompts sometimes produce confused output, for example when applying game theory models?

The issue: AI models generate plausible-sounding text. They don’t “know” facts—they predict likely continuations. Confidence ≠ accuracy.

How to fix:

  • Simple prompt: “What is X?” → risky for facts
  • Better prompt: “What is X? Before answering, check if this exists and cite your reasoning.”
  • Best prompt: “What is X? If you’re not certain, say so and suggest how I could verify.”

For complex requests, break them into steps:

  1. “First, summarize the situation”
  2. “Now, identify the key players and their incentives”
  3. “Finally, apply game theory framework X”
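In code form, the step-by-step pattern is just a loop that carries the conversation forward. The `ask` callable here is a hypothetical stand-in for whatever assistant you use; the chaining structure is the point:

```python
def chain_prompts(ask, steps):
    """Run a multi-step analysis: each prompt is asked with the full
    history of earlier questions and answers, so later steps build on
    earlier ones instead of one overloaded mega-prompt."""
    history = []  # list of (prompt, answer) pairs
    for step in steps:
        answer = ask(step, history)
        history.append((step, answer))
    return history

# Toy `ask` that just echoes; swap in a real model call.
if __name__ == "__main__":
    for prompt, answer in chain_prompts(
        lambda prompt, history: f"[answer to: {prompt}]",
        [
            "First, summarize the situation",
            "Now, identify the key players and their incentives",
            "Finally, apply game theory framework X",
        ],
    ):
        print(prompt, "->", answer)
```

Each step gets a small, checkable job, which makes it much easier to spot where a hallucination crept in.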

The Mindset Shift

This is not about automation. This is about extending your thinking.

The goal: Make better decisions faster, with more context.

Think of it like this: Would you rather have 10 tools that do tasks for you, or one system that knows how you think and helps you think better?

I choose the latter every time.

Getting Started

The fastest path:

  1. Create a Custom GPT in ChatGPT
  2. Upload a preference file (goals, frameworks, style)
  3. Feed it 3-5 of your best writing samples
  4. Test it with a real task today

The advanced path:

  1. Download Obsidian (free)
  2. Get Claude Pro ($20/month includes Claude Code)
  3. Point Claude Code to your Obsidian vault
  4. Start with a simple prompt
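The advanced path boils down to a few terminal commands. A sketch, assuming Node.js is installed and using the npm package name from Anthropic's install instructions (double-check their current docs, and substitute your own vault path):

```shell
# Install the Claude Code CLI
npm install -g @anthropic-ai/claude-code

# Run Claude Code from inside your Obsidian vault
# so it can read and write your notes
cd ~/Documents/MyVault
claude
```

Once inside, try the morning-briefing prompt from the daily workflow above.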

Either way, the key is to start. Build the habit. Use it daily for two weeks.

Your second brain compounds over time.


Want to dive deeper? I’m running follow-up sessions in January 2025 for hands-on setup and troubleshooting. Reach out at stef@thinkingmachin.es.