Drop #658 (2025-05-28): CLI Thursday

Yazelix; sad; angle-grinder

Today’s Drop explores some CLI tools that might just fundamentally change how you work with your system. Whether you’re hunting for files with lightning speed, grinding through logs like a lumberjack in a dense forest, or setting up a fully integrated terminal-based development environment, we’ve got something that should pique your interest.


TL;DR

(This is an LLM/GPT-generated summary of today’s Drop using Ollama + Qwen 3 and a custom prompt.)

  • Yazelix combines Yazi, Zellij, and Helix into a terminal-based IDE-like environment with seamless integration and efficient workflows (https://github.com/luccahuguet/yazelix)
  • sad introduces a preview-first workflow for safe, interactive find-and-replace operations, reducing the risk of destructive changes (https://github.com/ms-jpq/sad)
  • angle-grinder provides a powerful command-line tool for log file analysis, enabling structured querying, parsing, and aggregation similar to a spreadsheet or analytics platform (https://github.com/rcoh/angle-grinder)

Yazelix

Yazelix is a nascent and (the best word I can use is) “involved” development environment setup that combines three spiffy terminal-based tools into what the creator calls an “IDE-like experience.”

Think of it as a carefully orchestrated set of three tools working together. The name itself spells it out: Yazi + Zellij + Helix = Yazelix. Each tool has a specific role in creating a “unified” development experience that rivals traditional graphical IDEs, but runs entirely in your terminal.

Here’s how the three components work together…

Zellij acts as the conductor. It’s a terminal multiplexer (i.e., a “window manager” for your terminal) that creates different panes and manages how they interact. Zellij handles the overall layout, keyboard shortcuts, and coordination between the other tools.

Yazi serves as the file manager and sidebar. Instead of the typical folder tree you’d see in VS Code or other IDEs, Yazi provides a well-crafted, keyboard-driven file browser that can preview files, show git status, and navigate your project structure efficiently.

Helix is the text editor. It’s a modern, modal editor (similar to Vim but with more sensible defaults) that focuses on multiple cursors and powerful text manipulation capabilities.

We’ve covered Zellij and Helix before. I’m not a big fan of TUI file managers, so y’all can dig into that independently.

What makes Yazelix super cool isn’t just having these three tools running together. The magic happens in how they seamlessly communicate. When you tap Enter on a file in the Yazi sidebar, it automatically opens that file in Helix. If Helix isn’t running yet, Yazelix launches it for you. Conversely, when you’re editing a file in Helix, you can press Alt+Y to reveal that file’s location in the Yazi sidebar.

This bidirectional communication creates a workflow where you can navigate your project structure visually through Yazi while editing files in Helix, all orchestrated by Zellij’s pane management system.

The project includes several layers of configuration and scripting that make this integration possible. Yazelix uses Nushell (a modern shell with structured data capabilities) to write scripts that handle the communication between tools. For example, when you select a file in Yazi, a Nushell script determines whether Helix is already running and either opens the file in an existing Helix instance or launches a new one.

The project also includes Lua plugins for Yazi that enhance its capabilities, such as showing git status information and dynamically adjusting the number of columns based on the terminal width.

The creator has solved several common problems that plague terminal-based development setups. First, they’ve eliminated keybinding conflicts by carefully remapping shortcuts so that Zellij and Helix don’t interfere with each other. This means you can use both tools naturally without having to remember different key combinations for similar actions.

Second, the system automatically manages your workspace context. When you open a file from Yazi, Yazelix automatically renames the Zellij tab to reflect the git repository or directory you’re working in, helping you stay oriented in complex projects.
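If you’re curious what that looks like mechanically, here’s a minimal shell sketch of the idea, assuming the just-opened file’s path arrives as the first argument. Yazelix’s actual scripts are written in Nushell, so treat this as the gist rather than the project’s own code:

# Hypothetical sketch: rename the current Zellij tab after the git repo
# (or the containing directory) of the file opened from Yazi.
file="$1"
dir=$(dirname "$file")

# Prefer the repository name if the file lives inside a git repo
repo_root=$(git -C "$dir" rev-parse --show-toplevel 2>/dev/null)

if [ -n "$repo_root" ]; then
  zellij action rename-tab "$(basename "$repo_root")"
else
  zellij action rename-tab "$(basename "$dir")"
fi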

Third, the setup includes intelligent logging and debugging capabilities, making it easier to troubleshoot issues when they arise.

Traditional IDEs are often heavy, resource-intensive applications that can become slow with large projects. Terminal-based tools like those in Yazelix are typically much faster and more resource-efficient. They also offer superior keyboard-driven workflows that can significantly speed up development once you learn them.

However, the challenge with terminal tools has always been integration; getting multiple tools to work together smoothly is often frustrating. Yazelix solves this by providing a pre-configured, battle-tested setup that the creator uses daily in their own development work.

The setup requires several dependencies (Yazi, Zellij, Helix, Nushell, and optionally Zoxide for smart directory navigation), and the installation involves cloning configuration files and setting up your terminal emulator to launch the environment automatically.

The creator provides configurations for both WezTerm and Ghostty terminal emulators, though the system can work with others with some additional configuration.

This is definitely a power-user setup that requires comfort with terminal environments and some patience to learn the keyboard-driven workflows of each component tool. However, for developers who invest the time to learn it, Yazelix offers a highly efficient, customizable development environment that can be faster and more responsive than traditional graphical IDEs.

The project represents a thoughtful approach to combining existing tools rather than building something entirely new, leveraging the Unix philosophy of small, focused programs working together to create something greater than the sum of its parts.

Since I know many Drop readers are terminal-first and despise “AI”, this seems like a decent alternative to Zed (which is now my daily driver).


sad

Think about the last time you needed to make changes across multiple files. Maybe you wanted to rename a variable, update API endpoints, or fix formatting issues. Traditional approaches like sed are powerful but dangerous, as one mistyped command could accidentally destroy your code. The sed command executes immediately without showing you what it’s about to change, which is like performing surgery while blindfolded.

sad (which playfully stands for “Space Age seD”) solves this by introducing a preview-first workflow. Instead of making changes immediately, it shows you exactly what would change and lets you selectively approve modifications.

The magic of sad is its diff preview approach. When you run a sad command, here’s what happens:

  1. Pattern Matching: sad scans your files and finds all matches for your search pattern
  2. Preview Generation: It creates a beautiful diff showing exactly what would change
  3. Interactive Selection: Using fzf (a fuzzy finder), you can review and selectively approve changes
  4. Safe Application: Only approved changes get written to files

This turns a potentially destructive operation into an interactive, safe process.

Let’s break down the examples from the documentation to see how this works in practice:

Basic Interactive Replacement:

find . -name "*.js" | sad 'oldVariableName' 'newVariableName'

This command would show you a diff of every JavaScript file where oldVariableName appears, highlighting exactly how it would change to newVariableName. You could then pick and choose which files to update.

Regex with Capture Groups:

sad '"(\d+)"' '🌈$1🌈'

This finds quoted numbers like "42" and replaces them with 🌈42🌈. The $1 captures whatever was inside the parentheses. The preview shows you each transformation before you commit to it.

sad is built in Rust, which gives it excellent performance and memory safety.

Here’s how the key components work together:

  • Input Processing: sad can read file paths from stdin (often piped from find or fd) or work with files directly. The --read0 flag handles null-delimited input for files with special characters.
  • Regex Engine: It uses Rust’s regex crate with smart defaults like case-insensitive matching and multiline support. The flags system (-f) lets you customize behavior – lowercase letters enable features, uppercase disable them.
  • Preview System: The diff generation creates unified diff format output (like git diff), with customizable context lines via --unified=<n>.
  • Integration Layer: sad automatically detects and uses tools you already have installed – fzf for selection, delta or diff-so-fancy for colorized output, and respects your GIT_PAGER setting.
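Putting those pieces together, a typical invocation might look something like this (the file extension, pattern, and replacement are placeholders, and the i flag assumes the regex-style flag letters described above):

# Null-delimited paths from fd, case-insensitive matching, 5 lines of diff context
fd -0 -e ts | sad --read0 -f i --unified=5 'oldName' 'newName'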

I think one reason sad is so impressive is that it flowed from a set of key design principles:

  • Progressive Enhancement: sad works with a basic setup but gets better as you install complementary tools. No fzf? It still works but without interactive selection. No syntax highlighter? You still get diffs, just not colorized.
  • Unix Philosophy: It does one thing well (safe find-and-replace) and composes with other tools. The pipe-friendly design means it fits naturally into existing workflows.
  • Safety First: The default behavior prioritizes safety over speed. You have to explicitly use --commit to skip previews, making accidental bulk changes much less likely.

Here’s how one might fit sad into real development work:

  • Code Refactoring: When renaming functions or variables across a large codebase, sad lets you see the impact before committing. You might discover edge cases where the replacement shouldn’t happen.
  • Configuration Updates: Updating API URLs or configuration values across multiple config files becomes much safer when you can preview each change.
  • Documentation Maintenance: Updating examples or links across documentation files, where context matters for each replacement.

The tool includes a “gotta go fast” mode for when you’re confident about changes. Using --commit with output redirection (> /dev/null) skips all interactive elements for batch processing.
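Based on that description, a batch run might look something like this (the file glob and patterns are placeholders):

# Apply every replacement with no preview; redirecting stdout skips the interactive bits
find . -name "*.md" | sad --commit 'http://old.example.com' 'https://new.example.com' > /dev/null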

The Rust implementation means it’s genuinely fast even on large codebases, while the streaming design keeps memory usage low.

Understanding sad becomes clearer when you see how it relates to similar tools:

  • sed: Immediate execution, no preview, powerful but risky
  • sd: Stream-oriented replacement, good for pipes, no interactive preview
  • ripgrep --replace: Fast and capable, but also immediate execution
  • sad: Interactive preview, selective application, safety-focused

Think of sad as bringing the Git workflow philosophy (review before commit) to find-and-replace operations.

This tool represents a thoughtful evolution of Unix text processing, and maintains the composability and power of traditional tools while adding modern interactive elements that make complex operations much safer and more manageable.


angle-grinder

angle-grinder can best be described as a specialized calculator for log files. Just as you might use a spreadsheet to analyze structured data with formulas and pivot tables, angle-grinder lets you perform sophisticated analytics on unstructured or semi-structured log files directly from the command line.

Now, I tend to use DuckDB for, well, everything CLI-data-ops-wise, but specialized tools can come in handy, and this one has not gotten enough 💙 in the past ~7 years.

The tool was created to fill a specific gap: what do you do when your log data isn’t in fancy monitoring systems like Kibana or Splunk, but you still need to extract meaningful insights quickly? It can process [well] over 1 million rows per second, making it practical for analyzing substantial amounts of log data in real-time.

The key to understanding angle-grinder is grasping its jq-esque pipeline architecture. Every query follows this pattern:

Filter → Transform → Aggregate → Display

Think of it like an assembly line where log lines flow through different stations, getting modified at each step. This mirrors how one might think about data analysis: first you select what data you want, then you extract meaningful fields from it, then you group and summarize it, and finally you present the results.

Let’s break down the anatomy of an angle-grinder query:

agrind '<filters> | <operators>'

The filters act as your initial selection criteria and are the decision points for which log lines get to proceed through the pipeline. You have three types:

  • * means “let everything through”
  • ERROR* (no quotes) does case-insensitive matching with wildcards
  • "ERROR" (with quotes) does exact, case-sensitive matching

The operators then transform, manipulate, and aggregate the data. This separation makes queries both readable and powerful.
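For example, this query uses a non-wildcard filter before any operators run (the service field is a made-up stand-in for whatever your JSON logs actually contain):

# Only lines containing ERROR (case-insensitive) ever reach the operators
agrind 'ERROR* | json | count by service'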

Fully grokking angle-grinder requires grasping that operators fall into two fundamental categories:

  • Row Operators (1-to-1 or 1-to-0): These transform individual log lines. Think of them as functions that take one log line and either modify it or drop it entirely. Examples include json (parse JSON), parse (extract fields with patterns), where (filter based on conditions), and fields (select specific columns).
  • Aggregate Operators (many-to-fewer): These combine multiple rows into summary statistics. Once you use an aggregate operator, you’re no longer dealing with individual log lines but with grouped summaries. Examples include count, sum, average, p50 (50th percentile), and sort.

This distinction is important as it affects how one structures queries. Row operators can be chained freely, but once you introduce aggregation, you’re working with a different kind of data.
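Here’s a query that illustrates the split, assuming status and url fields parsed from JSON logs; the row operators run first, then a single aggregation takes over:

# json and where are row operators (1-to-1 / 1-to-0); count by is the aggregate
agrind '* | json | where status >= 500 | count by url'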

A major superpower of angle-grinder lies in its parsing capabilities. Most log files are unstructured text, but angle-grinder gives you several ways to extract structure:

JSON Parsing is the simplest — if your logs are JSON, json automatically creates fields you can reference:

# Log: {"status": 200, "response_time": 45, "user": "hrbrmstr"}
agrind '* | json | count by status'

Pattern Parsing uses wildcards where * matches anything:

# Log: "2023-01-01 ERROR user hrbrmstr failed login"
agrind '* | parse "* * user * *" as date, level, user, action'

Regular Expression Parsing gives you full regex power with named captures:

agrind '* | parse regex "(?P<ip>\d+\.\d+\.\d+\.\d+).*(?P<status>\d{3})"'

Think of parsing as teaching angle-grinder the “grammar” of your log files so it can extract meaningful fields.

Once you have structured data, aggregation lets us answer questions like “how many errors per hour?” or “what’s the average response time by endpoint?”

The aggregation syntax follows this pattern:

<aggregate_function> [by <grouping_fields>]

For example:

# Count total requests
agrind '* | json | count'

# Count requests by status code
agrind '* | json | count by status'

# Multiple aggregations
agrind '* | json | count, average(response_time) by endpoint, status'

The by clause is like SQL’s GROUP BY — it creates separate calculations for each unique combination of the specified fields.

angle-grinder includes a fairly robust expression system that lets us create calculated fields and complex conditions. We can use mathematical operators, string functions, date operations, and conditional logic:

# Create calculated fields
agrind '* | json | response_time * 1000 as response_ms'

# Conditional aggregation
agrind '* | json | count(status >= 400) as error_count by endpoint'

# Complex expressions in grouping
agrind '* | json | count by status >= 400, substring(url, 0, 10)'

Think of expressions as giving you the power of spreadsheet formulas within your log analysis.

Unlike traditional CLI tools that run once and exit, angle-grinder provides live-updating results. When you run an aggregation query on a live log stream (like tail -f), the results refresh ~20 times per second in the terminal. This creates a real-time dashboard effect that’s incredibly useful for monitoring.

The terminal automatically formats the output as a table and adjusts the display to fit your screen. If you redirect output to a file, it switches to a batch mode and outputs results once when complete.
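In practice, that means you can point it at a live stream and watch a table build itself (app.log and the status field here are hypothetical):

# A live, self-refreshing table of requests per status code
tail -f app.log | agrind '* | json | count by status'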

Aliases in angle-grinder let us create reusable query templates for common log formats. This is like creating macros for frequently used parsing patterns:

# In .agrind-aliases/apache.toml
keyword = "apache"
template = """
parse "* - * [*] \\"* * *\\" * *" as ip, name, timestamp, method, url, protocol, status, contentlength
"""

Time Slicing helps with temporal analysis by grouping timestamps into buckets:

agrind '* | json | timeslice(parseDate(timestamp)) 5m | count by _timeslice'

To become proficient with angle-grinder, I recommend this progression:

Start with simple filtering and counting to get comfortable with the basic syntax. Then practice parsing different log formats to understand how to extract structure from unstructured data. Next, experiment with various aggregation functions to see how they transform your data. Finally, explore expressions and advanced features to handle complex analysis scenarios.

The key thing to consider is that angle-grinder bridges the gap between simple command-line text processing tools (like grep and awk) and full-featured analytics platforms. It provides much of the power of systems like Splunk but with the immediacy and simplicity of command-line tools.


FIN

Remember, you can follow and interact with the full text of The Daily Drop’s free posts on:

  • 🐘 Mastodon via @dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev
  • 🦋 Bluesky via https://bsky.app/profile/dailydrop.hrbrmstr.dev.web.brid.gy

☮️
