
Lochside Studio

VoicePrompt Dev

Speak prompts. Drive agents.

VoicePrompt Dev turns speech into prompts and commands for AI coding, terminal, and chat workflows inside VS Code, so every next instruction doesn't become another typing task.

Built for developers running fast loops with Codex, Copilot, terminals, and execution-heavy workflows.
Peter Steinberger
“I don't write, I talk.”

Peter Steinberger is the creator of OpenClaw, the viral open-source AI agent that drew 2 million visitors in a week and earned 100,000+ GitHub stars.

Code

Speak prompts directly into coding workflows.

Terminal

Keep commands and approvals moving without retyping.

Agents

Stay in fast loops with Codex, Copilot, and chat surfaces.

Stay in flow longer

Less typing. Less context switching.

Move from idea to execution faster

Get prompts into action quickly.

Make agents easier to use all day

Use agents more naturally in real work.

Watch the Demo

Why It Feels Faster

The point is not voice for its own sake. The point is removing the prompt-typing bottleneck in the parts of development where you already think faster than you type.

Talk to the Agent, Not the Input Box

Say the prompt, correction, or follow-up out loud instead of breaking your flow to type every next instruction by hand.

Stay in Codex, Copilot, or Terminal Loops

Route transcripts straight into the surfaces where agent-driven work already happens instead of copying text between tools.

Keep the Loop Moving After You Speak

Review, append, replace, auto-submit, or send approval keys from the panel so the workflow keeps moving after capture.

Built for Real Developer Setups

Use local capture, practical fallbacks, OpenAI, or your own command backend without rebuilding your environment around a voice toy.

Works With

Built around the developer surfaces and platforms where voice input actually matters.
GitHub Copilot · Codex · VS Code · OpenAI · Windows · Linux · macOS

Quick Start

  1. Find VoicePrompt

     Open the Extensions view in VS Code and search for VoicePrompt for Agents.

  2. Install the Extension

     Install it from the Marketplace, then open the VoicePrompt panel inside VS Code.

  3. Start Speaking

     Choose your transcription backend, hold Space or tap R, and route the transcript where you work.

How It Works

The workflow rests on three pieces: capture speech fast, route it to the right place, and keep the next action just as fast.

Capture

Record without leaving VS Code

Start with tap-to-record or hold-to-talk, keep the audio local when a native recorder exists, and fall back cleanly when it does not.

  • Tap R or hold Space to speak
  • Cancel and retry without breaking flow
  • Windows and Linux local capture, with browser fallback when needed

Route

Send the transcript where work happens

VoicePrompt is built around the surfaces developers actually use during agent work instead of making you copy text around by hand.

  • Route into Copilot or Codex chat
  • Send directly to the active terminal or Codex CLI
  • Paste into the active editor when that is the right target

Control

Keep momentum after capture

Review when you want control, or keep things moving with draft management, auto-submit, and terminal approval shortcuts.

  • Append, replace, or clear the current draft
  • Choose paste-only or optional auto-submit
  • Use Y, N, 1, 2, 3, Enter, and T from the panel
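If you'd rather trigger capture from your own keys than the panel's defaults, VS Code's keybindings.json can usually remap an extension's commands. The command ID below is a hypothetical placeholder, not a documented VoicePrompt ID; look up the real commands under the extension's Feature Contributions tab before using it.

```json
// keybindings.json — the command ID here is a guess for illustration.
// Find the actual IDs via Preferences: Open Keyboard Shortcuts.
[
  {
    "key": "ctrl+alt+r",
    "command": "voiceprompt.toggleRecording",
    "when": "editorTextFocus"
  }
]
```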

Trust and Limits

Clear enough to install, honest enough to trust

Local Capture First

VoicePrompt records locally when a native recorder is available and only falls back when the machine needs it.

Backend Choice Stays Yours

Use the OpenAI transcription path or point the extension at your own command-based backend.
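The exact contract for a command-based backend isn't spelled out here; a common shape for such integrations (an assumption, not the documented interface) is a program that receives the audio file path as its last argument and prints the transcript on stdout. A minimal sketch of driving a backend like that, with `cat` standing in for a real transcriber so the demo runs anywhere:

```python
import os
import subprocess
import tempfile

def transcribe_with_command(backend_cmd, audio_path):
    """Invoke a command-style backend (assumed contract: audio path as
    the last argument, transcript printed to stdout)."""
    result = subprocess.run(
        backend_cmd + [audio_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Demo with `cat` as a stand-in transcriber: the "audio" file simply
# contains the text we expect back.
with tempfile.NamedTemporaryFile("w", suffix=".wav", delete=False) as f:
    f.write("fix the failing test in auth.py")
    path = f.name
print(transcribe_with_command(["cat"], path))
os.remove(path)
```

Swapping `["cat"]` for your real transcription command keeps the wrapper unchanged, which is the point of a command-based backend: the editor side only cares about stdout.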

Current Build, Clear Limits

This release is built for VS Code chat and terminal workflows first, with practical guardrails instead of fake perfection.