type4me-macos-voice-input


Type4Me macOS Voice Input


Skill by ara.so — Daily 2026 Skills collection.
Type4Me is a macOS voice input tool that captures audio via global hotkey, transcribes it using local (SherpaOnnx/Paraformer/Zipformer) or cloud (Volcengine/Deepgram) ASR engines, optionally post-processes text via LLM, and injects the result into any app. All credentials and history are stored locally — no telemetry, no cloud sync.

Architecture Overview


```
Type4Me/
├── ASR/                    # ASR engine abstraction
│   ├── ASRProvider.swift          # Provider enum + protocols
│   ├── ASRProviderRegistry.swift  # Plugin registry
│   ├── Providers/                 # Per-vendor config files
│   ├── SherpaASRClient.swift      # Local streaming ASR
│   ├── SherpaOfflineASRClient.swift
│   ├── VolcASRClient.swift        # Volcengine streaming ASR
│   └── DeepgramASRClient.swift    # Deepgram streaming ASR
├── Bridge/                 # SherpaOnnx C API Swift bridge
├── Audio/                  # Audio capture
├── Session/                # Core state machine: record→ASR→inject
├── Input/                  # Global hotkey management
├── Services/               # Credentials, hotwords, model manager
├── Protocol/               # Volcengine WebSocket codec
└── UI/                     # SwiftUI (FloatingBar + Settings)
```

Installation


Prerequisites



Xcode Command Line Tools

```bash
xcode-select --install
```

CMake (for local ASR engine)

```bash
brew install cmake
```

Build & Deploy from Source

```bash
git clone https://github.com/joewongjc/type4me.git
cd type4me
```

Step 1: Compile SherpaOnnx local engine (~5 min, one-time)

```bash
bash scripts/build-sherpa.sh
```

Step 2: Build, bundle, sign, install to /Applications, and launch

```bash
bash scripts/deploy.sh
```

Download Pre-built App


Download `Type4Me-v1.2.3.dmg` from the releases page (cloud ASR only, no local engine):
https://github.com/joewongjc/type4me/releases/tag/v1.2.3

If macOS blocks the app:

```bash
xattr -d com.apple.quarantine /Applications/Type4Me.app
```

Download Local ASR Models

```bash
mkdir -p ~/Library/Application\ Support/Type4Me/Models
```

Option A: Lightweight ~20MB

```bash
tar xjf ~/Downloads/sherpa-onnx-streaming-zipformer-small-ctc-zh-int8-2025-04-01.tar.bz2 \
  -C ~/Library/Application\ Support/Type4Me/Models/
```

Option B: Balanced ~236MB (recommended)

```bash
tar xjf ~/Downloads/sherpa-onnx-streaming-zipformer-ctc-multi-zh-hans-2023-12-13.tar.bz2 \
  -C ~/Library/Application\ Support/Type4Me/Models/
```

Option C: Bilingual Chinese+English ~1GB

```bash
tar xjf ~/Downloads/sherpa-onnx-streaming-paraformer-bilingual-zh-en.tar.bz2 \
  -C ~/Library/Application\ Support/Type4Me/Models/
```

Expected structure for the Paraformer model:

```
~/Library/Application Support/Type4Me/Models/
└── sherpa-onnx-streaming-paraformer-bilingual-zh-en/
    ├── encoder.int8.onnx
    ├── decoder.int8.onnx
    └── tokens.txt
```

Key Protocols


SpeechRecognizer Protocol


Every ASR client must implement this protocol:
```swift
protocol SpeechRecognizer: AnyObject {
    /// Start a new recognition session
    func startRecognition() async throws

    /// Feed raw PCM audio data
    func appendAudio(_ buffer: AVAudioPCMBuffer) async

    /// Stop and get final result
    func stopRecognition() async throws -> String

    /// Cancel without result
    func cancelRecognition() async

    /// Streaming partial results (optional)
    var partialResultHandler: ((String) -> Void)? { get set }
}
```
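When wiring up the session pipeline before a real backend is available, a canned conformer can stand in for an ASR engine. `StubRecognizer` below is purely illustrative — it is not part of the repo:

```swift
import AVFoundation

/// Illustrative stub conforming to SpeechRecognizer; handy in unit tests
/// for the record→ASR→inject flow.
final class StubRecognizer: SpeechRecognizer {
    var partialResultHandler: ((String) -> Void)?
    private var active = false

    func startRecognition() async throws {
        active = true
    }

    func appendAudio(_ buffer: AVAudioPCMBuffer) async {
        // A real client would forward PCM frames; the stub just reports a partial.
        partialResultHandler?("partial")
    }

    func stopRecognition() async throws -> String {
        active = false
        return "stub transcript"
    }

    func cancelRecognition() async {
        active = false
    }
}
```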

ASRProviderConfig Protocol


Each vendor's credential definition:
```swift
protocol ASRProviderConfig {
    /// Unique identifier string
    static var providerID: String { get }

    /// Display name in Settings UI
    static var displayName: String { get }

    /// Credential fields shown in Settings
    static var credentialFields: [CredentialField] { get }

    /// Validate credentials before use
    static func validate(_ credentials: [String: String]) -> Bool

    /// Create the recognizer instance
    static func createClient(
        credentials: [String: String],
        config: RecognitionConfig
    ) throws -> SpeechRecognizer
}
```

Adding a New ASR Provider


Step 1: Create Provider Config


Create `Type4Me/ASR/Providers/OpenAIWhisperProvider.swift`:

```swift
import Foundation

struct OpenAIWhisperProvider: ASRProviderConfig {
    static let providerID = "openai_whisper"
    static let displayName = "OpenAI Whisper"

    static let credentialFields: [CredentialField] = [
        CredentialField(
            key: "api_key",
            label: "API Key",
            placeholder: "sk-...",
            isSecret: true
        ),
        CredentialField(
            key: "model",
            label: "Model",
            placeholder: "whisper-1",
            isSecret: false
        )
    ]

    static func validate(_ credentials: [String: String]) -> Bool {
        guard let apiKey = credentials["api_key"], !apiKey.isEmpty else {
            return false
        }
        return apiKey.hasPrefix("sk-")
    }

    static func createClient(
        credentials: [String: String],
        config: RecognitionConfig
    ) throws -> SpeechRecognizer {
        guard let apiKey = credentials["api_key"] else {
            throw ASRError.missingCredential("api_key")
        }
        let model = credentials["model"] ?? "whisper-1"
        return OpenAIWhisperASRClient(apiKey: apiKey, model: model, config: config)
    }
}
```

Step 2: Implement the ASR Client


Create `Type4Me/ASR/OpenAIWhisperASRClient.swift`:

```swift
import Foundation
import AVFoundation

final class OpenAIWhisperASRClient: SpeechRecognizer {
    var partialResultHandler: ((String) -> Void)?

    private let apiKey: String
    private let model: String
    private let config: RecognitionConfig
    private var audioData: Data = Data()

    init(apiKey: String, model: String, config: RecognitionConfig) {
        self.apiKey = apiKey
        self.model = model
        self.config = config
    }

    func startRecognition() async throws {
        audioData = Data()
    }

    func appendAudio(_ buffer: AVAudioPCMBuffer) async {
        // Convert PCM buffer to raw bytes and accumulate
        guard let channelData = buffer.floatChannelData?[0] else { return }
        let frameCount = Int(buffer.frameLength)
        let bytes = UnsafeBufferPointer(start: channelData, count: frameCount)
        // Convert Float32 PCM to Int16 for Whisper API
        let int16Samples = bytes.map { sample -> Int16 in
            return Int16(max(-32768, min(32767, Int(sample * 32767))))
        }
        int16Samples.withUnsafeBytes { ptr in
            audioData.append(contentsOf: ptr)
        }
    }

    func stopRecognition() async throws -> String {
        // Build multipart form request to Whisper API
        var request = URLRequest(url: URL(string: "https://api.openai.com/v1/audio/transcriptions")!)
        request.httpMethod = "POST"
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")

        let boundary = UUID().uuidString
        request.setValue("multipart/form-data; boundary=\(boundary)",
                         forHTTPHeaderField: "Content-Type")

        var body = Data()
        // Append audio file part
        body.append("--\(boundary)\r\n".data(using: .utf8)!)
        body.append("Content-Disposition: form-data; name=\"file\"; filename=\"audio.raw\"\r\n".data(using: .utf8)!)
        body.append("Content-Type: audio/raw\r\n\r\n".data(using: .utf8)!)
        body.append(audioData)
        body.append("\r\n".data(using: .utf8)!)
        // Append model part
        body.append("--\(boundary)\r\n".data(using: .utf8)!)
        body.append("Content-Disposition: form-data; name=\"model\"\r\n\r\n".data(using: .utf8)!)
        body.append("\(model)\r\n".data(using: .utf8)!)
        body.append("--\(boundary)--\r\n".data(using: .utf8)!)

        request.httpBody = body

        let (data, response) = try await URLSession.shared.data(for: request)
        guard let httpResponse = response as? HTTPURLResponse,
              httpResponse.statusCode == 200 else {
            throw ASRError.networkError("Whisper API returned error")
        }

        let result = try JSONDecoder().decode(WhisperResponse.self, from: data)
        return result.text
    }

    func cancelRecognition() async {
        audioData = Data()
    }
}

private struct WhisperResponse: Codable {
    let text: String
}
```
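One caveat with the client above: it uploads headerless PCM as `audio/raw`, and the transcription endpoint generally expects a standard container format. If uploads are rejected, one fix is to prepend a minimal WAV header before sending. The helper below is an illustrative sketch assuming 16 kHz, 16-bit, mono PCM:

```swift
import Foundation

/// Illustrative: build a minimal 44-byte WAV header for 16-bit mono PCM.
func wavHeader(pcmByteCount: Int, sampleRate: UInt32 = 16_000) -> Data {
    var d = Data()
    func put(_ s: String)   { d.append(contentsOf: Array(s.utf8)) }
    func put32(_ v: UInt32) { withUnsafeBytes(of: v.littleEndian) { d.append(contentsOf: $0) } }
    func put16(_ v: UInt16) { withUnsafeBytes(of: v.littleEndian) { d.append(contentsOf: $0) } }

    put("RIFF"); put32(UInt32(36 + pcmByteCount)); put("WAVE")
    put("fmt "); put32(16)        // PCM format chunk size
    put16(1)                      // audio format: linear PCM
    put16(1)                      // channels: mono
    put32(sampleRate)
    put32(sampleRate * 2)         // byte rate = rate * channels * 2 bytes
    put16(2)                      // block align
    put16(16)                     // bits per sample
    put("data"); put32(UInt32(pcmByteCount))
    return d
}
```

Prepending `wavHeader(pcmByteCount: audioData.count) + audioData` and uploading with filename `audio.wav` and content type `audio/wav` would then produce a well-formed file.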

Step 3: Register the Provider


In `Type4Me/ASR/ASRProviderRegistry.swift`, add to the `all` array:

```swift
struct ASRProviderRegistry {
    static let all: [any ASRProviderConfig.Type] = [
        SherpaParaformerProvider.self,
        VolcengineProvider.self,
        DeepgramProvider.self,
        OpenAIWhisperProvider.self,   // ← Add your provider here
    ]
}
```
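With the registry in place, a provider can be resolved by ID at runtime, e.g. when restoring the user's saved selection. The lookup helper below is a sketch, not an existing API in the repo:

```swift
// Hypothetical convenience: resolve a provider type by its providerID.
extension ASRProviderRegistry {
    static func provider(withID id: String) -> (any ASRProviderConfig.Type)? {
        all.first { $0.providerID == id }
    }
}

// Sketch of use: rebuild the active recognizer from saved settings.
// if let type = ASRProviderRegistry.provider(withID: savedID),
//    type.validate(credentials) {
//     let recognizer = try type.createClient(credentials: credentials, config: config)
// }
```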

Credentials Storage


Credentials are stored at `~/Library/Application Support/Type4Me/credentials.json` with permissions `0600`. Never hardcode secrets — always load via `CredentialStore`:

```swift
// Reading credentials
let store = CredentialStore.shared
let apiKey = store.get(providerID: "openai_whisper", key: "api_key")

// Writing credentials
store.set(providerID: "openai_whisper", key: "api_key", value: userInputKey)

// Checking if configured
let isConfigured = store.isConfigured(providerID: "openai_whisper",
                                      fields: OpenAIWhisperProvider.credentialFields)
```
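The `0600` guarantee has to be enforced at write time. A sketch of how such a store might do this with Foundation (illustrative — the repo's actual `CredentialStore` implementation may differ):

```swift
import Foundation

/// Illustrative: write a credentials blob atomically, then clamp
/// permissions to owner read/write only (0600).
func writeCredentialFile(_ json: Data, to url: URL) throws {
    try json.write(to: url, options: .atomic)
    try FileManager.default.setAttributes(
        [.posixPermissions: 0o600],
        ofItemAtPath: url.path
    )
}
```

Setting permissions after an atomic write covers the case where the process umask would otherwise leave the file group- or world-readable.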

Custom Processing Modes with Prompt Variables


Processing modes use LLM post-processing with three context variables:

| Variable      | Value                                       |
|---------------|---------------------------------------------|
| `{text}`      | Recognized speech text                      |
| `{selected}`  | Text selected in active app at record start |
| `{clipboard}` | Clipboard content at record start           |

Example custom mode prompts:

```swift
// Translate selection using voice command
let translatePrompt = """
The user selected this text: {selected}
Voice command: {text}
Execute the command on the selected text. Output only the result.
"""

// Code review via voice
let codeReviewPrompt = """
Code to review:
{clipboard}

Review instruction: {text}

Provide focused feedback addressing the instruction.
"""

// Email reply drafting
let emailPrompt = """
Original email: {selected}
My reply intent (spoken): {text}
Write a professional email reply. Output only the email body.
"""
```

Built-in Processing Modes


```swift
enum ProcessingMode {
    case fast           // Direct ASR output, zero latency
    case performance    // Dual-channel: streaming + offline refinement
    case englishTranslation  // Chinese speech → English text
    case promptOptimize // Raw prompt → optimized prompt via LLM
    case command        // Voice command + selected/clipboard context → LLM action
    case custom(prompt: String)  // User-defined prompt template
}
```

Session State Machine

The core recording flow in `Session/`:

```
[Idle]
  → hotkey pressed → [Recording] → audio streams to ASR client
  → hotkey released/pressed again → [Processing]
  → ASR returns text → [LLM Post-processing] (if mode requires)
  → [Injecting] → text injected into active app
  → [Idle]
```
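The flow above can be sketched as a small transition function. The type and function names here are assumptions for illustration, not the repo's actual `Session/` types:

```swift
/// Illustrative model of the recording session's states.
enum SessionState: Equatable {
    case idle, recording, processing, postProcessing, injecting
}

/// Illustrative happy-path transition function.
func advance(_ state: SessionState, needsLLM: Bool) -> SessionState {
    switch state {
    case .idle:           return .recording       // hotkey pressed
    case .recording:      return .processing      // hotkey released/pressed again
    case .processing:     return needsLLM ? .postProcessing : .injecting
    case .postProcessing: return .injecting
    case .injecting:      return .idle            // text injected, back to idle
    }
}
```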

Updating After Source Changes

```bash
cd type4me
git pull
bash scripts/deploy.sh
```

SherpaOnnx does NOT need recompiling unless the engine version changed.


Troubleshooting

App won't open (security warning)

```bash
xattr -d com.apple.quarantine /Applications/Type4Me.app
```

Local model not recognized in Settings

Verify the directory structure exactly matches:

```bash
ls ~/Library/Application\ Support/Type4Me/Models/sherpa-onnx-streaming-paraformer-bilingual-zh-en/
```

Must show: encoder.int8.onnx, decoder.int8.onnx, tokens.txt


SherpaOnnx build fails

Ensure cmake is installed:

```bash
brew install cmake
```
Clean and retry:

```bash
rm -rf Frameworks/
bash scripts/build-sherpa.sh
```

New ASR provider not appearing in Settings

  • Confirm the provider type is added to ASRProviderRegistry.all
  • Ensure providerID is unique across all providers
  • Clean build: swift package clean && bash scripts/deploy.sh

Audio not captured / no floating bar

  • Grant microphone permission: System Settings → Privacy & Security → Microphone → Type4Me ✓
  • Grant Accessibility permission for text injection: System Settings → Privacy & Security → Accessibility → Type4Me ✓

Credentials not saving

Check the file exists and has the correct permissions:

```bash
ls -la ~/Library/Application\ Support/Type4Me/credentials.json
```

Should show: -rw------- (0600)

Fix permissions if needed:

```bash
chmod 0600 ~/Library/Application\ Support/Type4Me/credentials.json
```
Export history to CSV

Open Settings → History → select date range → Export CSV. The SQLite database is at:

```
~/Library/Application Support/Type4Me/history.db
```

Direct query:

```bash
sqlite3 ~/Library/Application\ Support/Type4Me/history.db \
  "SELECT datetime(timestamp,'unixepoch'), text FROM records ORDER BY timestamp DESC LIMIT 20;"
```

System Requirements

  • macOS 14.0 (Sonoma) or later
  • Apple Silicon (M1/M2/M3/M4) recommended for local ASR inference
  • Xcode Command Line Tools + CMake for source builds
  • Internet connection only needed for cloud ASR providers