Drop #673 (2025-06-30): Long[er] Form Monday

LLMs as Symptom, Not Solution; Orange 🔐; Erasing Humans

We’re on our annual all-company break at $WORK, so I may miss more than the Wednesday Drop this week if I do manage to get out and touch some grass.

I sammich’d a tech resource in between two “AI” meta pieces, in case you just want to hit that up.

The article referenced in the first section has spurred a great deal of pondering for me.


TL;DR

(This is an LLM/GPT-generated summary of today’s Drop using Ollama + Qwen 3 and a custom prompt.)


LLMs as Symptom, Not Solution

Photo by Anna Shvets on Pexels.com

Artyom Bologov’s recent piece “LLMs, But Only Because Your Tech SUCKS” delivers a provocative thesis that cuts straight to the heart of modern development practices: our reliance on large language models might say more about our tooling deficiencies than our technological advancement.

The argument is elegantly simple and uncomfortably accurate. Many of us have embraced LLMs not because they represent some breakthrough in software development, but because they paper over fundamental shortcomings in our languages, environments, and workflows.

Consider the boilerplate problem. Languages like Go, Java, and JavaScript have conditioned us to accept verbose, repetitive code as inevitable. Enter your fav coding agent, dutifully generating the same tedious patterns we’ve resigned ourselves to writing. But Bologov points to a different reality: languages with robust macro systems, such as Lisp, Clojure, and Rust, simply don’t have this problem. When you can abstract away repetitive patterns at the syntactic level, you don’t need an “AI” assistant to generate what shouldn’t exist in the first place.
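To make that concrete, here’s a minimal Rust sketch (all names mine, not from Artyom’s piece) of the kind of getter-with-logging boilerplate an LLM will happily stamp out by hand, collapsed into a declarative macro instead:

```rust
// A declarative macro that generates a struct plus one logging getter per
// field. Hand-writing these getters is exactly the repetitive pattern a
// coding agent would otherwise "helpfully" generate for you.
macro_rules! logged_getters {
    ($name:ident { $($field:ident : $ty:ty),* $(,)? }) => {
        struct $name { $($field: $ty),* }
        impl $name {
            $(
                fn $field(&self) -> &$ty {
                    println!("read {}", stringify!($field));
                    &self.$field
                }
            )*
        }
    };
}

// One line replaces a screenful of boilerplate:
logged_getters!(Server { host: String, port: u16 });
```

Add a field to the macro invocation and the getter appears for free; there is simply no repetitive code left for an “AI” assistant to write.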

That made an old-ish saying come to mind: the code you don’t write is often the best code of all.

Artyom’s REPL discussion hits particularly close to home. Strong interactive development environments create a unique relationship with code. In REPLs, we’re not writing programs so much as conversing with them, testing hypotheses and refining ideas in real-time. When one’s development environment supports this kind of interactive exploration, the need for LLM-generated boilerplate simply evaporates. This is one reason I still prefer R over so many other choices. The core R REPL, and the supercharged one in RStudio (I do not and will not use Positron), both feel like I am having a conversation vs. “programming”.

When you labor in an edit-compile-test cycle it’s easy to see why many developers lean on “AI” assistance. If changing and testing code is friction-heavy, of course you’ll reach for tools that promise to reduce that friction, even if they introduce other complexities.

Perhaps most tellingly, the author argues that LLMs excel primarily at navigating poorly designed APIs and inadequate documentation. This observation stings because it’s true. I have no problem admitting I hit up Perplexity or Claude when faced with the arcane syntax of ffmpeg or the labyrinthine documentation of some legacy or even newfangled APIs. But this reveals the real problem: the tools themselves are just poorly designed.

Well-crafted APIs with clear, comprehensive documentation (WITH EXAMPLES, PEOPLE!!!) don’t need “AI” interpreters. They need good design principles and respect for the developer experience.

The mass enthusiasm for LLMs in development workflows might be masking a deeper reluctance to invest in better tools, languages, and environments. It’s easier to add another layer of “AI” assistance than to fundamentally rethink our technological stack.

Bologov’s critique isn’t anti-progress! Rather, it’s pro-thoughtful progress. The question isn’t whether LLMs are useful (I, for one, grant that they obviously are), but whether their utility stems from genuine advancement or from filling gaps that shouldn’t exist.

Instead of asking “How can ‘AI’ help me code faster?” we might ask “Why does this task require ‘AI’ assistance in the first place?” Often, the answer points toward fixable problems in our tools and workflows.

This piece arrives at a crucial moment when the tech industry is making substantial bets on “AI”-assisted development. His argument deserves serious consideration: perhaps the most transformative thing we could do isn’t to build better “AI” assistants, but to build tools and environments that make those assistants unnecessary.

That’s not just a technical challenge—it’s a philosophical one about what good software development actually looks like.


Orange 🔐

Photo by Pixabay on Pexels.com

I’ll admit it upfront: when I see Cloudflare’s name attached to something involving privacy and encryption, my first instinct is to reach for my cyber ten-foot pole. The company that sits in the middle of half the internet’s traffic now wants to handle your video calls? But Orange Meets (GH) (🍊) presents an intriguing case study in how a company known for centralized infrastructure can build genuinely decentralized privacy tools.

This new, open-source video conferencing platform implements true end-to-end encryption using the Messaging Layer Security (MLS) protocol. The key phrase here is “true E2EE” (no “asterisk”). That means even Cloudflare’s own infrastructure cannot decrypt your media streams.

The technical architecture is genuinely clever. Media streams get encrypted frame-by-frame using MLS running in a Rust-based Web Worker compiled to WebAssembly. WebRTC transforms handle the encryption injection via createEncodedStreams(), intercepting data before transmission and after reception.
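To show the shape of that per-frame transform (a toy sketch only — the real worker uses MLS-derived keys and a proper AEAD cipher, and the constants and names here are my assumptions, not Orange Meets code):

```rust
// Toy illustration of per-frame media encryption: a short frame prefix is
// left in the clear (routers/SFUs need to inspect it) while the payload is
// transformed before transmission and un-transformed after reception.
// XOR keystream stands in for real AEAD purely to keep this dependency-free.
const HEADER_LEN: usize = 4; // hypothetical unencrypted prefix length

fn xor_keystream(data: &mut [u8], key: &[u8]) {
    for (i, b) in data.iter_mut().enumerate() {
        *b ^= key[i % key.len()];
    }
}

fn transform_frame(frame: &mut Vec<u8>, key: &[u8]) {
    if frame.len() > HEADER_LEN {
        // Only the payload is encrypted; the header stays readable.
        xor_keystream(&mut frame[HEADER_LEN..], key);
    }
}
```

In the browser, this per-frame hook is what the WebRTC encoded-streams transform gives you: a function that sees each encoded frame on its way in or out.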

The platform implements a “designated committer” protocol — a distributed algorithm that assigns one participant to manage MLS group operations. This keeps the backend stateless and removes server-side MLS logic entirely. They’ve even formally verified the protocol with TLA+ model checking (see link at beginning of this paragraph) to handle edge cases like committer disconnections.
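The reason this lets the backend stay stateless is that every client can derive the committer locally from the shared roster. A hedged sketch of one such deterministic rule (my assumption for illustration — the actual TLA+-verified protocol is more involved):

```rust
// Sketch of a designated-committer selection rule: given a shared roster,
// every participant independently computes the same committer, so no
// server-side MLS state is needed. Here the rule is simply "lowest
// connected user id commits"; on disconnect, the next-lowest takes over.
fn designated_committer(roster: &[(u64, bool)]) -> Option<u64> {
    // roster entries are (user_id, is_connected)
    roster
        .iter()
        .filter(|(_, connected)| *connected)
        .map(|(id, _)| *id)
        .min()
}
```

Edge cases like committer disconnection reduce to every client re-running the same pure function over the updated roster, which is exactly the kind of property a TLA+ model check can pin down.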

The security enhancements show they’re taking the trust problem seriously. Safety numbers provide unique group identifiers in the corner of the screen for out-of-band verification, helping detect malevolence-in-the-middle attacks. The system further prevents malicious app servers from substituting key packages through cross-channel validation. It’s a solid design.
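The safety-number idea compresses shared group state into a short string both ends can compare out-of-band. A toy sketch (real systems hash cryptographic key material; std’s `DefaultHasher` here is a dependency-free stand-in, not the Orange Meets scheme):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Reduce shared group state to a short human-comparable code. If two
// participants read the same digits to each other over a separate channel,
// an interposed attacker with different key material is exposed.
fn safety_number(group_state: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    group_state.hash(&mut h);
    format!("{:05}", h.finish() % 100_000)
}
```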

Orange is not yet the new Black (i.e., Zoom). The repo reveals the typical growing pains of ambitious open-source projects. Video resolution caps out at 640×360 instead of 1080p. There are rendering artifacts, track synchronization errors, and permission grant failures. Feature requests have piled up for things like AI subtitles, custom backgrounds, and higher participant limits.

You can try the E2EE-enabled demo at e2ee.orange.cloudflare.dev or deploy your own instance from the GitHub repository. Whether you trust it enough to use for sensitive conversations is ultimately a personal calculus of threat models, trust assumptions, and alternative options.

I’ll be trying this with some work-mates when we get back from our annual company-wide shutdown.


Erasing Humans

Something kind of unsettling is happening in corporate visual design. Over the past six months, businesses have quietly but dramatically purged humans from their imagery. They’ve replaced collaborative teams and human faces with robots, “AI” avatars, and abstract tech motifs. 🤖

One company’s 3D illustration pack saw human-centered images drop by 87.5%. Where people do appear, they’re often reduced to disembodied fragments: floating hands holding objects, or faceless figures peripheral to the technology they serve.

Meanwhile, “AI” imagery has exploded. Robots now have expressive features and emotional depth that human figures have lost. Companies appear to want to be seen as “AI”-driven, not people-centric.

I don’t think this is a mere design trend. Given the stories I hear on a daily basis, I’m fairly certain this is a reflection of how businesses now perceive value and aspiration. Being a “people company” apparently isn’t the goal anymore. The visual language suggests a future where humans are optional, not central.

The full analysis, “AI Replaced People in Corporate Imagery”, is worth your time if you’re curious about how visual culture reflects our technological anxieties.


FIN

Remember, you can follow and interact with the full text of The Daily Drop’s free posts on:

  • 🐘 Mastodon via @dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev
  • 🦋 Bluesky via https://bsky.app/profile/dailydrop.hrbrmstr.dev.web.brid.gy

☮️
