Bonus Drop #107 (2026-01-11): AI Yai Yai

The Ghost Library; Mine The Gap

I’ve had some “yuge thoughts” around “AI” racing through the noggin this past week, and figured I could take the weekend to let them coalesce and get them into some usable form.

One is on the legacy blog, and discusses “AI Proofing Your IT/Cyber Career”.

The other two are below.

After all this pondering and distillation, I’m left with a deep sense of foreboding for the coming months.


TL;DR

(This is an LLM/GPT-generated summary of today’s Drop. Ollama + MiniMax M2.1.)

  • Ghost libraries like Drew Breunig’s whenwords represent a new open-source model where specifications replace implementations, enabling AI to generate working code on demand while raising concerns about contributor recognition, specification brittleness, and the transformation of open source from a community of contributors into a system of spec authors and consumers (https://github.com/dbreunig/whenwords)
  • AI data center growth is creating a copper supply crisis, with demand projected to rise 50% by 2040 creating a ten million metric ton shortfall, as power-dense GPU clusters require up to 47 metric tons of copper per megawatt and existing mines deplete while new ones take 17 years to come online (https://www.spglobal.com/en/research-insights/special-reports/copper-in-the-age-of-ai)

The Ghost Library

Photo by Zh haris on Unsplash

Drew Breunig just released a library called whenwords. It turns timestamps into phrases like “3 hours ago” or “in 2 days.” Now, libraries like this exist in every language, but what makes whenwords unusual is that it exists in no language. The repository contains a detailed specification, 125 test cases, and instructions for generating an implementation. Those instructions are, essentially, a prompt: paste it into Claude, Cursor (et al.), specify your language, and wait. The “AI” reads the spec, writes the code, runs the tests, and delivers a working library.
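
To make that workflow concrete, here’s a rough sketch of the kind of function a generated implementation might end up containing. To be clear: the function name, thresholds, and phrasing below are my own illustrative assumptions, not the actual whenwords spec (which pins all of that down precisely, along with those 125 test cases).

```python
# Illustrative sketch only: the name, thresholds, and wording here are
# assumptions for demonstration, not the actual whenwords specification.
from datetime import datetime, timezone


def time_ago(then: datetime, now: datetime | None = None) -> str:
    """Render a timestamp as a relative phrase like '3 hours ago' or 'in 2 days'."""
    now = now or datetime.now(timezone.utc)
    delta = (now - then).total_seconds()
    future = delta < 0
    seconds = abs(delta)

    # Hypothetical thresholds; a real spec would define these (and the plural
    # rules, rounding, etc.) exactly so every generated implementation agrees.
    for unit, size in (("day", 86_400), ("hour", 3_600), ("minute", 60), ("second", 1)):
        count = int(seconds // size)
        if count >= 1:
            phrase = f"{count} {unit}{'s' if count != 1 else ''}"
            return f"in {phrase}" if future else f"{phrase} ago"
    return "just now"


# e.g. time_ago(datetime(2026, 1, 8, tzinfo=timezone.utc),
#               now=datetime(2026, 1, 11, tzinfo=timezone.utc))  -> "3 days ago"
```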

Breunig has verified implementations in Ruby, Python, Rust, Elixir, Swift, PHP, Bash, and Excel. The spec is the actual product, though; the code is just a side effect.

He calls this a “ghost library,” and the name fits. There’s something “spectral” about software that materializes on command and leaves no trace in your dependency tree. No version conflicts or supply chain attacks. Plus, no waiting for a maintainer to merge your fix, because there’s nothing to fix. If the generated code fails a test, you regenerate.

My first reaction was “ugh”, but upon further consideration, the pitch is kinda strong. Small utilities that do these sorts of things are everywhere: date handling, string manipulation, validation, config parsing, etc. They’re all solved problems with well-known behavior. If “AI” can reliably generate correct implementations from good specs, why maintain thousands of redundant packages? Why build the same library in twelve languages when you could write one spec and let the implementations spawn as needed?

Then, I sat with the idea for an even longer bit and some uncomfortable questions surfaced.

Let’s start with the “casual contributor”. Open source has always been a ladder: you find a bug in a small library, you fix it, and you ultimately submit a pull request. Your first contribution leads to your second. That rhythm built careers and communities. But, ghost libraries don’t have bugs to fix because the spec is the spec. If your generated code fails, you iterate the spec and regenerate it. There’s no feedback loop between users and maintainers because there are no maintainers, just spec authors and “AI” systems, which makes everyone else a mere consumer.

Then there’s the question of credit. In the case of whenwords, Breunig did the intellectual work: he decided the thresholds, the edge case behavior, and the API surface, all of which lives in the spec. But when you generate an implementation, you own it. The MIT license lets you do whatever you want, and the value of the design work flows downstream without attribution. If this model scales, we’ll have a whole class of people doing foundational work that nobody recognizes.

And here’s the thing: specs are (to me) orders of magnitude harder to write than code. Chris Gregori has a neat phrase: “Code is cheap; software is expensive.” “AI” has cratered the cost of generating code, but the hard part was never typing…it was understanding the problem, anticipating edge cases, designing behavior that holds up under real use. That work doesn’t disappear when you ship a spec instead of an implementation. Ghost libraries might lower the barrier to using open source while raising the barrier to creating it.

There’s also a subtler tension around what actually gets built. A good spec needs clean boundaries. For example, the whenwords spec explicitly avoids localization and timezone handling. So, it’s English only, with pure functions, and no side effects. While these constraints make the spec tractable, they also make the library less useful than messier alternatives that handle the real world. Ghost libraries optimize for what can be specified clearly, not necessarily for what solves hard problems.

And finally, there’s brittleness of a different kind. While ghost libraries solve the fragility of dependency trees (remember, from above, that when your generated code breaks, you just regenerate it), they introduce a new single point of failure: the spec itself. If the spec has a blind spot, every generated implementation inherits it. Traditional libraries can be patched by anyone who spots a flaw. Spec flaws require the author to notice, care, and update. Specs can rot in subtler ways too. The world changes, edge cases emerge that the original author never anticipated, and suddenly ten thousand generated implementations share the same gap.

Whenwords is a clever experiment, and for stable utilities with well-defined behavior, the model makes sense. But if this becomes normal, open source stops being a community of contributors and becomes a community of spec authors and consumers. The gift economy gets harder to see. And we end up running a lot of code that nobody wrote but everybody trusts.


Mine The Gap

Photo by Vlad Chețan on Pexels.com

The server rack used to be a modest piece of infrastructure: it sat in a climate-controlled room, drew somewhere between ~five and ~fifteen kilowatts, and did its job without much fuss. Then the “AI” boom arrived and turned that same footprint into something that demands over a hundred kilowatts (a seven-fold increase!), a jump that ripples outward into consequences nobody bothered to model during the hype cycle.

One of those consequences is copper.

A new S&P Global report [PDF] traces how artificial intelligence is creating a supply crisis in one of civilization’s oldest industrial metals. The power-dense, brittle GPU clusters used for “AI” training require thicker electrical distribution systems, more robust grounding, and cooling infrastructure that leans heavily on copper’s thermal conductivity. Modern “AI” training facilities now need as much as forty-seven metric tons of copper for every megawatt of capacity. Traditional data centers never came close to those figures.
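
To put that figure in perspective, here’s a quick back-of-envelope calculation. The 47 t/MW intensity is the S&P Global number cited above; the 250 MW campus size is my own illustrative assumption, not something from the report.

```python
# Back-of-envelope: copper for a single hypothetical AI training campus.
copper_per_mw_t = 47      # metric tons of copper per megawatt (S&P Global figure)
campus_mw = 250           # hypothetical campus capacity; illustrative assumption

print(f"{copper_per_mw_t * campus_mw:,} t of copper")  # 11,750 t for one campus
```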

These processors run so hot that the old air-cooled designs just cannot keep up. Liquid cooling systems and cold plates have become standard, and copper sits at the center of those designs because nothing else moves heat as efficiently. All those “intelligence” factories being rushed into construction across the country are also copper sinks of unprecedented scale.

Global demand is projected to rise ~fifty percent over the next fifteen years, jumping from twenty-eight million metric tons to forty-two million by 2040, and — without intervention — we face a ten million metric ton shortfall. There are also some stark conditions exacerbating the problem, ranging from the fact that existing mines are depleting, to the sad reality that ore quality also keeps dropping. And, bringing a new mine online takes an average of seventeen years between discovery and production. Sure, recycling can help a bit, but can only supply about a third of what we will need; plus, half of global refining capacity is concentrated in China, adding geopolitical fragility to an already stressed system.
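
For what it’s worth, the headline numbers hang together. Here’s the arithmetic, using only the figures cited above from the report:

```python
# Sanity check on the projection figures from the S&P Global report.
demand_today_mt = 28     # current global copper demand, million metric tons
demand_2040_mt = 42      # projected demand by 2040, million metric tons
shortfall_mt = 10        # projected gap without intervention

growth = (demand_2040_mt - demand_today_mt) / demand_today_mt
print(f"demand growth: {growth:.0%}")                   # 50%
print(f"implied 2040 supply: {demand_2040_mt - shortfall_mt} Mt "
      f"vs. {demand_2040_mt} Mt needed")                # 32 Mt vs. 42 Mt
```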

S&P Global estimates US data center electricity demand will climb from five percent of the national total today to fourteen percent by 2030 (a mere four years from now). Every “AI” company racing to build training capacity, every cloud provider expanding their footprint, every enterprise spinning up their own clusters is pulling from the same finite copper supply that nobody seemed to factor into their breathless roadmaps.

The “AI” industry sold itself as a weightless revolution, pure intelligence floating in the cloud, but what it actually requires is millions of tons of metal ripped from the ground on timelines that make Moore’s Law look quaint. This gold rush demanded that everyone move fast, scale now, worry about consequences later. Copper is one of those consequences, and it turns out you cannot will a mine into existence with a keynote presentation, and you cannot power a hundred-kilowatt rack with investor enthusiasm. The physical world has a way of presenting invoices that the digital prophets forgot to budget for.


FIN

Remember, you can follow and interact with the full text of The Daily Drop’s free posts on:

  • 🐘 Mastodon via @dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev
  • 🦋 Bluesky via https://bsky.app/profile/dailydrop.hrbrmstr.dev.web.brid.gy

☮️
