DuckDBT; Your Shell Ops Are History; Collmmmitted
This weekend Bonus edition of the Drop is doubly bonus-ified with each section also containing a bonus resource link that is subject-matter aligned to the section topic.
TL;DR
(This is an AI-generated summary of today’s Drop using Ollama + llama 3.2 and a custom prompt.)
- DuckDB and dbt integration enables data transformation pipelines with features like spatial functions, external file exports, and reverse ETL to PostgreSQL databases (https://duckdb.org/2025/04/04/dbt-duckdb.html)
- Enhance shell history searching with fuzzy finding tools like fzf or skim to efficiently recall commands through fuzzy matching and multiple result display, significantly improving productivity (https://tratt.net/laurie/blog/2025/better_shell_history_search.html)
- Generate meaningful Git commit messages automatically using Ollama with tools like ollamacommit or Zed’s Git panel, which analyze code changes locally to create descriptive commits without sending code to external services (https://github.com/clianor/ollama-commit)
DuckDBT

The DuckDB folks recently dropped a post that I’ve kind of been waiting for: Transforming Data with DuckDB and dbt.
If you’re not familiar with dbt (a.k.a., the Data Build Tool), it’s an open-source command-line tool that helps analysts and engineers transform data in “data warehouses” more effectively. It focuses specifically on the transformation (T) part of the Extract, Load, Transform (ELT) process: it doesn’t extract or load data, but is designed to efficiently transform data that’s already at your disposal. The platform allows data teams to:
- build trusted data pipelines that deliver reliable, traceable data for business decisions
- accelerate analytics workflows by enabling faster development of high-quality data products
- maintain data quality at scale with features like automated documentation and lineage visualization
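For a flavor of what the combo looks like, here’s a minimal sketch of a dbt-duckdb setup (the project name, file paths, and columns are all made up for illustration; the linked post has the real, cloneable example):

```yaml
# profiles.yml — point dbt at a local DuckDB database file
my_project:
  target: dev
  outputs:
    dev:
      type: duckdb                # provided by the dbt-duckdb adapter
      path: data/analytics.duckdb # created on first run if absent
```

```sql
-- models/daily_hits.sql — dbt materializes this SELECT as a view/table
-- inside the DuckDB file above; the source path and columns are hypothetical
select
    date_trunc('day', seen_at) as day,
    count(*)                   as hits
from read_parquet('data/raw/*.parquet')
group by 1
```

With those two files in place, `dbt run` executes the model inside DuckDB, no separate warehouse required.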
Last fall, we migrated to an entirely new internet sensor framework and an entirely new data collection and analysis pipeline. The new pipeline makes it possible to work in JSON and Parquet land, free from a cost-per-query model I was hesitant to lean into before.
I’ve been meaning to migrate my lovingly hand-crafted Bash and systemd setup to dbt, but had not really had the cycles to take on the cognitive load of mastering dbt’s mental model, and also how to fit my preference of using DuckDB for all the data work into dbt’s structure.
I’m honestly glad I never bothered, as this post covers it top-to-bottom, with a cloneable example that makes grokking the dbt+DuckDB combo way easier than if I had tried to figure it out on my own.
So whether you’re in the same boat I am, or you’re just curious about how to get started with dbt and DuckDB, this post is 100% for you.
While we’re talking DuckDB, I’ll toss in an added bonus of their all-new, spiffy roadmap, so you can stay up-to-date with their latest developments. This is quite the ambitious line-up (they have links to plans/issues in their post):
- Time series optimizations
- Partition-aware optimizations
- Sorting-aware optimizations
- Database file encryption
- Better Filter Cardinality Estimation using automatically maintained table samples
- Parallel Python UDFs
- ALTER TABLE support for adding foreign keys
- Improvements of query profiling (especially for concurrently running queries)
- XML read support
- Materialized views
- MERGE statement
- Support for async I/O
- Support for PL/SQL stored procedures
Your Shell Ops Are History

In a recent blog post, Laurence Tratt (@ltratt.bsky.social) explores how to enhance shell history searching to boost productivity. He begins by noting the repetitive nature of shell commands, estimating that he typically runs 50-100 unique commands per working day, with a small subset executed hundreds of times daily. This repetition creates an opportunity to save time and reduce errors by efficiently searching command history. I bet this sounds familiar if you, too, are a denizen of the shell.
The traditional method (i.e., sans extra tooling) for searching shell history in Unix shells like Bash involves pressing Ctrl-r and entering a substring. This approach cycles backward through matching commands but has limitations. This feature is a bit crude since it’s just basic substring matching — a method that falls short if you remember parts of a command but not their exact order or location.
However, we can pair Ctrl-r with fzf, a fuzzy finder tool we’ve covered before. This combination brings two significant improvements: fuzzy matching (typing “c mo” would match “cat /etc/motd”) and displaying multiple matches simultaneously, which lets you quickly select the right command rather than cycling through matches one by one.
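If you want to try the pairing yourself, the wiring is a couple of lines of shell config (a sketch; the exact key-bindings file path for older fzf packages varies by distro):

```shell
# ~/.bashrc — hook fzf into Ctrl-r history search.
# Recent fzf releases (0.48+) can emit their own key bindings:
eval "$(fzf --bash)"
# Older packages ship a key-bindings file to source instead, e.g.:
#   source /usr/share/fzf/key-bindings.bash   # path varies by distro
```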
This alone roughly doubled Tratt’s shell efficiency overnight. More importantly, it had a long-term effect on his command usage — he became more ambitious with shell commands because he could “outsource his memory” to fzf. For example, he switched to setting environment variables on a per-command basis rather than globally, knowing he could easily recall them later.
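That per-command environment variable pattern looks like this (a minimal sketch; `DEBUG` is a made-up variable name):

```shell
# Set an environment variable for a single command only:
# DEBUG exists for this one invocation and is not exported
# into the surrounding shell session.
DEBUG=1 printenv DEBUG    # → 1
echo "${DEBUG:-unset}"    # → unset
```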
After using fzf for some time, Tratt discovered skim, a Rust-based fzf-alike that better suited his preferences. The differences were minor, but skim’s matching more often found the commands he wanted quickly, had a preferable UI, and was easier to install on various systems.
The post covers quite a bit more of the journey to better shell history navigation, as well as detailed instructions for duplicating Tratt’s setup. It’s well worth the few-minute read if you want to level up your shell history game.
As an added bonus, you may also want to bookmark and check out A Complete Guide to Linux Bash History, which is exactly what it says on the tin (with no cruft to wade through).
Collmmmitted

Ollamacommit is a command-line tool that leverages Ollama, the local-first large language model runner, to generate meaningful Git commit messages. It addresses a common challenge for developers who struggle to write descriptive commit messages that effectively communicate the changes made in their code. Which is way too nice a way of saying that this can save you from the terrible, phoned-in commits your teammate (or, well, me) considers appropriate, but that end up being woefully inadequate.
The project works by analyzing the git diff of staged changes and using this information to prompt a local language model to generate an appropriate commit message. By running locally through Ollama, it offers privacy benefits compared to cloud-based alternatives, as your code never leaves your machine.
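The core idea can be sketched as a one-liner (this is not the tool’s actual code; the model name and prompt wording are illustrative, and it assumes Ollama is installed and running locally):

```shell
# Draft a commit message from the staged diff using a local model
git diff --staged | \
  ollama run llama3.2 "Write a concise, imperative-mood Git commit message for this staged diff:"
```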
Installation is straightforward using npm with the command npm install -g ollamacommit. Before using it, you need to have Ollama installed and running with at least one model. The tool supports various models including Llama 2, CodeLlama, and other compatible models that can be run through Ollama.
Usage is simple — after making changes and staging them with git, you can run ollamacommit to generate a commit message. The tool offers several configuration options including the ability to specify which model to use, customize the prompt template, and adjust other parameters to fine-tune the output according to your preferences.
The project is actively maintained, with regular updates and improvements being made. It represents an interesting intersection of developer tools and AI assistance, streamlining the git workflow while keeping sensitive code data local.
The bonus, here, is me pointing you to Zed’s recently added Git support/panel, which also sports LLM-powered commit message generation superpowers. You can ask whatever LLM provider you want (so, it can be local Ollama, too) to generate a commit message by focusing on the message editor within the Git Panel and either clicking on the pencil icon in the bottom left, or reaching for the git: generate commit message (alt-tab) keybinding. I have been making great use of this feature, and it has significantly improved my commit message writing process.
FIN
Remember, you can follow and interact with the full text of The Daily Drop’s free posts on:
- 🐘 Mastodon via @dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev
- 🦋 Bluesky via https://bsky.app/profile/dailydrop.hrbrmstr.dev.web.brid.gy
☮️
Leave a comment