Provides strategies for efficiently transforming large text files (thousands to millions of lines) using tools such as Vim, sed, and awk. This skill should be used when tasks involve bulk text transformations, CSV manipulation at scale, pattern-based edits across massive files, or when keystroke/operation efficiency is constrained. Applicable to tasks requiring macros, regex substitutions, or batch processing of structured text data.
Install the skill:

```shell
npx skill4agent add letta-ai/skills large-scale-text-editing
```

Check the file size before reading anything, then sample from several positions instead of loading the whole file:

```shell
ls -lh input.csv
wc -l input.csv
head -n 100 input.csv > sample_head.csv
sed -n '500000,500100p' input.csv > sample_middle.csv
tail -n 100 input.csv > sample_tail.csv
```

Test the transformation on a small slice, keep a backup before any in-place edit, and verify the output afterwards:

```shell
head -n 100 input.csv > test_input.csv
head -n 100 expected.csv > test_expected.csv
diff test_output.csv test_expected.csv
cp input.csv input.csv.backup
wc -l output.csv
```

| Task | Approach |
|---|---|
| Apply macro to all lines | `:%normal! @q` replays the macro in register `q` on every line |
| Uppercase transformation | `gU` motion, e.g. `:%normal! gUU` uppercases each line |
| Column manipulation | Capture groups with `\(...\)` and `\1` in `:s` or `sed` |
| Delimiter replacement | `:%s/old/new/g` (use `\r` in the replacement to insert newlines) |
| Whitespace removal | `:%s/\s\+//g`, or `sed 's/[[:space:]]\+//g'` |

| Pitfall | Prevention |
|---|---|
| Reading large files directly | Always check file size first; use head/tail/sed for sampling |
| No backup before in-place edit | Create backup copy before any modification |
| Testing only on first few lines | Sample from multiple file locations |
| Assuming uniform structure | Verify structure with samples from different positions |
| Regex over-matching | Use explicit delimiters and non-greedy quantifiers |
| Insufficient timeout | Calculate expected processing time for file size |
| Not verifying exit codes | Check tool exit status after operations |
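The last two preventions can be sketched as a small verification harness (the filenames and the `;`-to-`,` substitution are illustrative assumptions):

```shell
# Demo input (placeholder data).
printf 'x;y\n1;2\n' > input.csv

# Back up before any modification.
cp input.csv input.csv.backup

# Run the transformation, then check the tool's exit status.
sed 's/;/,/g' input.csv > output.csv
status=$?
[ "$status" -eq 0 ] || { echo "sed exited with $status" >&2; exit 1; }

# A pure per-line substitution must preserve the line count.
[ "$(wc -l < input.csv)" -eq "$(wc -l < output.csv)" ] \
  || { echo "line count mismatch" >&2; exit 1; }
```

For timeouts, time the transformation on a fixed-size sample (e.g. 100k lines) and scale linearly to the full line count before choosing a limit.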