lint-http; A Recipe For Delight & Disaster; 10 Predictions for Data Infrastructure in 2026
Three diverse topics today that should provide an interesting distraction from *waves arms at everything* for everyone.
TL;DR
(This is an LLM/GPT-generated summary of today’s Drop, produced via Ollama and MiniMax M2.1.)
- lint-http is a Rust-based forward proxy that analyzes HTTP and HTTPS traffic to detect protocol violations and best practice issues during development, catching subtle problems that traditional debugging tools often miss (https://github.com/alganet/lint-http)
- As LLMs threaten traditional food blogs through AI Overviews and low-barrier content farms, independent creators are responding by emphasizing human verification and accountability as competitive advantages (https://www.plainoldrecipe.com/)
- Columnar predicts 2026 will bring continued Apache Arrow ecosystem growth alongside funding challenges, broader ADBC adoption, and AI driving increased focus on making tabular data fast, safe, and accessible (https://columnar.tech/blog/2026-predictions/)
lint-http

When you’re developing web applications or debugging network traffic, it’s easy to miss the small details that can make a big difference in performance and reliability. A relatively new tool (created by Alexandre Gomes Gaigalas) called lint-http addresses this challenge by acting as a helpful set of eyes that sits right between your code and the internet. This Rust-based forward proxy intercepts HTTP and HTTPS traffic to check whether both clients and servers are following proper protocol best practices.
It goes beyond simple traffic capture tools like Wireshark or mitmproxy by actively analyzing the conversation between your application and web servers. It watches for subtle issues that often cause mysterious production problems but are easy to overlook during development (things like missing User-Agent headers, improper Cache-Control settings, incorrect ETag usage, and other protocol violations). After working with it for a bit, I might describe it as “LanguageTool for HTTP requests/responses,” since the items it flags are valid/informative/educational, but I do not necessarily want every client or server I write to conform to presumed perfection.
The proxy runs locally on port 3000 by default, and you simply point your HTTP client toward it. For HTTPS interception, you need to trust the proxy’s generated certificate authority, which is standard for TLS-intercepting proxies. The tool provides an endpoint to download this certificate, making the setup process manageable.
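If you want to see what that looks like from the client side, here’s a minimal sketch of routing a request through the proxy from Python. The 127.0.0.1:3000 address comes from the default mentioned above; the CA-certificate path is a hypothetical placeholder for wherever you saved the cert you downloaded from the proxy’s endpoint.

```python
# Minimal sketch: sending a request through a locally running lint-http
# instance with Python's `requests`. Assumes the proxy is on its default
# port (3000) and that its generated CA certificate has been downloaded
# to the hypothetical path below so HTTPS interception verifies cleanly.
import requests

PROXIES = {
    "http": "http://127.0.0.1:3000",
    "https": "http://127.0.0.1:3000",
}

resp = requests.get(
    "https://example.com/",
    proxies=PROXIES,
    verify="lint-http-ca.pem",  # hypothetical path to the downloaded CA cert
)
print(resp.status_code, resp.headers.get("ETag"))
```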
The section header shows the tool in action.
The technical implementation is spiffy because it’s entirely Rust-native, using rustls and the tokio ecosystem instead of OpenSSL. This means you get a single binary with no system dependencies, making it easy to run on different machines without library conflicts.
Each HTTP transaction gets logged as JSON Lines, with each request-response pair written as a JSON object containing method, URI, headers, status code, timing information, and any detected rule violations. This practical format is easy to parse with tools like jq and integrates well with log processing pipelines. Configuration happens through a TOML file that lets you enable or disable specific rules and control where captures are written.
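As a rough illustration of how those captures might be consumed without jq, here’s a small Python sketch that pulls out only the transactions with flagged violations. The field names (“method”, “uri”, “status”, “violations”) and the capture file path are assumptions based on the description above, not the tool’s documented schema.

```python
# Sketch: filtering a lint-http JSON Lines capture down to transactions
# that had rule violations. Field names and the file path are assumptions;
# adjust them to match the actual capture output.
import json

with open("captures.jsonl", encoding="utf-8") as fh:
    for line in fh:
        record = json.loads(line)
        violations = record.get("violations") or []
        if violations:
            print(record.get("method"), record.get("uri"), record.get("status"))
            for rule in violations:
                print("  -", rule)
```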
For anyone doing serious HTTP client or server development (or, if you’re just über curious), having a tool that catches protocol violations during development can prevent production problems caused by subtle specification misunderstandings. lint-http takes a thoughtful approach to making the invisible parts of web communication visible and measurable.
A Recipe for Delight & Disaster

I re-noted my use of Cooked.wiki in yesterday’s 2026 inaugural Drop. It’s a great site that does what it says on the tin. However, it leans heavily on LLMs to extract and validate recipe content, which is a non-starter for a decent chunk of readers and humans in general. It’s also a “cloud” thing with no way (yet) to export your data, which is a non-starter for another percentage of readers/humans.
One open-source, non-“AI” alternative to Cooked.wiki is Plain Old Recipe (GH) (POR). You can see how it transforms this recipe from Serious Eats at this URL. The section header shows Cooked.wiki’s results next to POR’s.
While you can self-host POR (making sure to note the AGPL license), all the real work is being done by a well-worn Python package: recipe-scrapers. It has a vibrant community, support for a bonkers number of recipe sites, and is regularly updated. You can even find more projects based on it at the showcase (or contribute your own).
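For the curious, the general shape of what recipe-scrapers does is roughly the sketch below. The entry points have shifted between major versions (older releases fetched the page for you; newer ones expect you to hand over the HTML yourself), and the recipe URL here is just a placeholder, so treat this as illustrative rather than gospel.

```python
# Rough sketch of using recipe-scrapers directly (the package doing the
# heavy lifting behind Plain Old Recipe). API details vary by version;
# the URL below is a placeholder, not a real recipe page.
import requests
from recipe_scrapers import scrape_html

url = "https://www.seriouseats.com/some-recipe"
html = requests.get(url, headers={"User-Agent": "my-recipe-box/0.1"}).text

scraper = scrape_html(html, org_url=url)
print(scraper.title())
print(scraper.yields())
for ingredient in scraper.ingredients():
    print("-", ingredient)
print(scraper.instructions())
```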
Hunting for and using recipes from the internet in 2026 is going to be interesting. The industry that is food/recipe blogging has faced challenges before (like algorithm shifts, and folks like me being part of the “jump to recipe” culture), but the “LLM era” introduces a structural threat to the traditional ad-supported business model of these sites (as do scrapers).
The biggest hit comes from the dreaded “AI Overviews” in search results. When you search for “how to make lasagna,” Google or Bing now often provides a synthesized recipe at the very top of the page. The result is that humans get the ingredients and steps without ever clicking on a blog/link. Since bloggers make money through ad impressions on their own sites, a “satisfied” user who stays on the search page represents (does the math) ZERO revenue for the creator. Truth-be-told, they get no revenue from me, either, unless I’m daft enough to tap a recipe link on a mobile device.
After skimming a few search results from Kagi, it also appears that some major food bloggers lost between 30% & 80% of their organic traffic (likely due to the LLMs).
LLMs have also lowered the barrier to entry (for food blogging) to nearly zero. Content farms are now using “AI” to spin up thousands of recipe sites. They’re great! Who doesn’t want untested instructions, hallucinated ratios, and uncanny valley “AI” image slop! Since the popular search engines have no real defenses against industrialized SEO tactics, all these garbage sites end up burying URLs with content made by us meatbags.
One potential “silver lining” for bloggers is that LLMs currently lack accountability. If an “AI” recipe ruins your Thanksgiving turkey, there is no one to complain to. If a favorite blogger’s recipe fails, their reputation is on the line. This “human-in-the-loop” verification is becoming the primary selling point for independent creators of all kinds this year, not just foodies.
While I will still use Cooked.wiki to siphon usable content out of needlessly narrative, pop-up-ridden recipe sites, I will also continue to hoard dead-tree recipe books, as they’ll still work fine in my dream state of living in a cave near the Maine woods without glowing rectangles or TCP/IP stacks.
10 Predictions for Data Infrastructure in 2026

(Not a lot of blather in this last section, due to the word density in the middle one.)
Columnar is an org that’s building on Apache Arrow and ADBC to deliver fundamental improvements in speed, simplicity, and security. With names like Ian Cook, Bryce Mecum, Matt Topol, and others being part of the endeavor, it’s definitely a company to take seriously.
Since their mission is ultimately to define the next era of data connectivity, the team is well-suited to make some bold (and not-so-bold) predictions for what we might be seeing in 2026.
Head over to “10 Predictions for Data Infrastructure in 2026” to see the “why” behind these “whats”:
- The boundaries between analytical and operational systems, which have blurred in recent years, will get even blurrier—but not in the way many anticipated.
- The Apache Arrow ecosystem will continue to grow rapidly, while revealing growing strain around funding and maintenance of critical shared infrastructure.
- ADBC adoption will broaden significantly, with more vendors, drivers, and clients converging on a common database connectivity layer (there’s a short sketch of what that looks like in code after this list).
- Awareness and use of Arrow will expand in the JavaScript and TypeScript ecosystem.
- Open table formats—especially Apache Iceberg—will climb the slope of enlightenment, emerging from hype and disillusionment to become proven infrastructure.
- Multi-engine data stacks will become increasingly mainstream as organizations prioritize faster innovation cycles, better interoperability, and cost savings.
- The availability of composable open source building blocks like DuckDB and DataFusion will continue to fuel an explosion of innovation by vendors.
- As open standards like Arrow, Parquet, and Iceberg see broader adoption, they will be increasingly pulled between two competing forces: the need for simplicity and broad compatibility, and the pressure to innovate and expand.
- The most significant advances in data infrastructure will come not from entirely new systems, but from work on interoperability, standards, and the foundational plumbing required for coordination and efficiency at scale.
- As AI agents move into production, the industry will focus on the surprisingly hard problem of making tabular data fast, safe, and accessible.
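To make the ADBC item a bit more concrete, here’s a minimal Python sketch of the pitch: one DBAPI-style interface, Arrow tables out, and the backend determined by whichever driver package you install. It uses the SQLite driver against an in-memory database purely because that needs no setup; the query code would look essentially the same against other drivers.

```python
# Minimal sketch of ADBC as "a common database connectivity layer":
# the same cursor API regardless of backend, with results returned as
# Arrow data. Requires `pip install adbc-driver-sqlite pyarrow`.
import adbc_driver_sqlite.dbapi

with adbc_driver_sqlite.dbapi.connect() as conn:  # in-memory SQLite by default
    with conn.cursor() as cur:
        cur.execute("SELECT 1 AS answer, 'arrow all the way down' AS note")
        table = cur.fetch_arrow_table()  # a pyarrow.Table, not rows of tuples
        print(table)
```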
FIN
Remember, you can follow and interact with the full text of The Daily Drop’s free posts on:
- Mastodon via @dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev
- Bluesky via <https://bsky.app/profile/dailydrop.hrbrmstr.dev.web.brid.gy>