Drop #578 (2024-12-18): Wonkish Wednesday

HTTP Message Signatures; Guidelines for Human Rights Protocol and Architecture Considerations; Do You Know The Times?

I’ve been working through some saved RFC stacks and plucked out three to cover in this most wonky of Drops.

Please note there is an implied expectation that you’ve at least skimmed each RFC before reading the associated section.


TL;DR

(This is an AI-generated summary of today’s Drop using Ollama + llama 3.2 and a custom prompt + VSCodium extension.)


HTTP Message Signatures


HTTP Message Signatures provide cryptographic proof that specific portions of an HTTP message have remained unchanged during transit.

And, I can hear y’all now…“BUT WE HAVE TLS EVERYWHERE!”

Hear me out.

I think this IETF standard is fairly important in our modern, daft “HTTPS Everywhere™” environment. TLS alone cannot guarantee end-to-end message integrity, such as when messages pass through multiple TLS connections or when application-specific keys need to be bound to the HTTP message. And, not to be “that guy”, but also when we are using certificates that let pretty much anyone* abscond with and read your previous plaintext messages.

*I assert that modern Certificate Authorities are a joke, poorly managed, and have enough incidents a year that if you really think your encrypted connections are safe from someone who truly wants to decrypt them, I have a bridge to sell you.

This proposed signature mechanism works by selecting specific components of an HTTP message, canonicalizing them into a signature base, and applying cryptographic operations. Components can be HTTP fields or derived components like @method or @path.

Message components must be canonicalized because HTTP permits various transformations during transit. For example, intermediaries (“Hi, enterprise/ISP proxies!”) might reorder fields, combine field values, or modify whitespace. The canonicalization rules ensure that signers and verifiers work with identical values despite these transformations.

The signing process involves selecting components to sign, creating the signature base, and applying a cryptographic algorithm. The signature parameters, including covered components and metadata like creation time, become part of the signature itself.

Verification then requires reconstructing the signature base from the received message and validating the cryptographic signature. This process fails if any covered component has been modified or if the signature parameters don’t meet application requirements.
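To make that flow concrete, here’s a minimal sketch in TypeScript using Node’s built-in crypto with HMAC-SHA256 and a hand-picked set of covered components (the key name, parameter values, and component selection are all my choices for illustration, not anything the RFC mandates). A real deployment should reach for a proper RFC 9421 library rather than rolling this by hand:

// Minimal sketch: build a signature base, sign it, verify it (HMAC-SHA256).
import { createHmac, timingSafeEqual } from "node:crypto";

type Components = Record<string, string>; // component identifier -> canonicalized value

// One line per covered component, then the "@signature-params" line last.
function signatureBase(components: Components, params: string): string {
  const lines = Object.entries(components).map(
    ([id, value]) => `"${id}": ${value}`
  );
  lines.push(`"@signature-params": ${params}`);
  return lines.join("\n");
}

function sign(base: string, key: string): string {
  return createHmac("sha256", key).update(base).digest("base64");
}

function verify(base: string, key: string, received: string): boolean {
  const expected = Buffer.from(sign(base, key), "base64");
  const got = Buffer.from(received, "base64");
  return expected.length === got.length && timingSafeEqual(expected, got);
}

// Hypothetical covered components (they match the webhook example further down).
const params =
  `("@method" "@path" "@authority" "date" "content-type");created=1734526800;keyid="webhook-key-1"`;
const base = signatureBase(
  {
    "@method": "POST",
    "@path": "/webhook",
    "@authority": "api.example.com",
    "date": "Wed, 18 Dec 2024 13:00:00 GMT",
    "content-type": "application/json",
  },
  params
);
const sig = sign(base, "shared-secret");         // this is what goes into the Signature field
console.log(verify(base, "shared-secret", sig)); // true; false if any covered component changed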

Perhaps a “why” is in order…

Let’s say you have an API gateway that terminates TLS but you’re clever and careful and feel the need to prove message authenticity to backend services. HTTP signatures allow the gateway to sign the original client’s request components, enabling backends to verify the message hasn’t been tampered with, even though the TLS connection terminated at the gateway. This might be especially useful in a webhook context where HTTP signatures will let you verify that:

  • the webhook actually came from the claimed sender
  • the message content wasn’t tampered with in transit
  • no replay attacks are occurring (using the created/expires fields)

For example:

POST /webhook HTTP/1.1
Host: api.example.com
Date: Wed, 18 Dec 2024 13:00:00 GMT
Content-Type: application/json
Signature-Input: sig1=("@method" "@path" "@authority" "date" "content-type");created=1734526800;keyid="webhook-key-1"
Signature: sig1=:base64-signature-here:

I’m glad the folks behind the standard saw fit to allow for signing of specific message components. An authorization service might only care about the Authorization header and method, while an API might need to verify the request body hasn’t changed by signing the Content-Digest field. This makes it super flexible.

The security model of this signature setup assumes that both signers and verifiers have access to appropriate key material and agree on acceptable algorithms. Applications also must carefully choose which components to sign based on their security requirements. For instance, signing the @path but not @query could allow request tampering through query parameter manipulation.
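A contrived illustration of that last point (the endpoint, amounts, and names are made up): if @query isn’t in the covered components, the signature has nothing to say about it.

Signature-Input: sig1=("@method" "@path");created=1734526800;keyid="webhook-key-1"

GET /transfer?amount=10&to=alice HTTP/1.1       <- what the client sent
GET /transfer?amount=9999&to=mallory HTTP/1.1   <- still verifies, since @query was never covered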

I’m evaluating a few libraries in JS and Go that handle this as middleware. I’ll report back in a less wonkish edition on how they went.


Guidelines for Human Rights Protocol and Architecture Considerations


The development of human rights protocol considerations for internet technologies (RFC 9620) has become critical as we increasingly rely on technology to mediate fundamental human freedoms. This IRTF framework (not endorsed by the IETF) provides guidance for technologists to evaluate how protocol design choices impact human rights.

The internet’s technical architecture directly shapes our ability to exercise basic rights like freedom of expression, privacy, and association. Protocol decisions that may seem purely technical — such as allowing intermediary nodes or exposing metadata — can enable censorship, surveillance, and discrimination.

Technical choices around intermediaries, connectivity, and content signals create choke points that governments and other actors can exploit for censorship and control. For example, protocols that expose identifiers or lack encryption enable selective blocking of traffic and pervasive monitoring.

Given the real and present threats, modern protocols absolutely should intentionally resist them through:

  • end-to-end encryption to prevent intermediary interference
  • decentralized architectures that avoid single points of control
  • privacy-preserving designs that minimize exposed metadata
  • censorship resistance features

The need for human rights considerations in protocols has never been more pressing. Citizens everywhere (U.S. folks: you’re not out of the woods anymore) must realize that state surveillance and censorship capabilities grow more sophisticated with every passing day/week/month/year. Sadly, internet infrastructure is also becoming increasingly centralized (“Hi, CloudFlare, Akamai, Fastly, Google, Apple, Microsoft!”). And, we also rely heavily on this technical infrastructure to engage with other humans and get our messages out to a broader audience.

Protocol developers (which could honestly be any of you reading this) should 100% take the time to evaluate human rights impacts throughout the design process, not as an afterthought. This requires analyzing how technical decisions affect freedom of expression and association, privacy and security, equal access and non-discrimination, and remedy/transparency.

The linked framework provides a structured way to assess these impacts while acknowledging that context matters (i.e., there’s rarely a one-size-fits-all solution that perfectly balances all rights and technical requirements).

Human rights considerations aren’t just ethical guidelines — they’re essential for maintaining the internet as an open, enabling platform for human rights and freedoms rather than a tool for control and oppression.


Do You Know The Times?


If you do any “real” work with dates/time — and, especially, timestamps — you know what a royal pain it can be.

RFC 9557 introduces real improvements to internet timestamps by extending RFC 3339’s format to include additional contextual information while maintaining backward compatibility.

The specification redefines how the “Z” suffix is interpreted in timestamps. Previously, Z implied UTC as the preferred reference point. Now, Z indicates that while the UTC time is known, the local offset remains unknown – matching the semantic meaning of “-00:00” in RFC 3339.

This Internet Extended Date/Time Format (IXDTF) enables timestamps to carry rich metadata through an optional suffix system. Each suffix is enclosed in square brackets and carries either an IANA time zone name or a key-value pair. The format supports both elective and critical tags, with critical tags marked by an exclamation point.

Time zone information can be included using IANA time zone names, allowing applications to handle daylight saving transitions correctly. For example:

1996-12-19T16:39:57-08:00[America/Los_Angeles]

The specification also introduces the u-ca suffix key for indicating preferred calendar systems:

1996-12-19T16:39:57-08:00[America/Los_Angeles][u-ca=hebrew]

It further handles inconsistencies between time offsets and time zones pragmatically. When using critical tags (marked with !), applications must act on inconsistencies. With elective tags, applications may choose to ignore inconsistencies.
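For instance, here’s an offset that doesn’t match the named zone, shown with and without the critical flag (an illustrative example of the behavior described above):

2024-12-18T13:00:00+02:00[America/New_York]     <- elective: a consumer may ignore or resolve the mismatch
2024-12-18T13:00:00+02:00[!America/New_York]    <- critical: a consumer must act on it (typically by rejecting)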

The format supports experimental tags prefixed with underscore for controlled environments:

1996-12-19T16:39:57-08:00[_foo=bar][_baz=bat]
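Here’s a minimal TypeScript sketch of pulling those suffix tags apart. It’s purely illustrative — the "time-zone" key label is my own convention for the bare bracketed zone name, and a real implementation should follow the RFC 9557 grammar (or just use something like the Temporal polyfill, which already speaks this format):

// Split an IXDTF string into its RFC 3339 core and its bracketed suffix tags.
interface IxdtfTag {
  key: string;
  value: string;
  critical: boolean;   // true when the tag starts with "!"
}

function parseIxdtf(input: string): { timestamp: string; tags: IxdtfTag[] } {
  const open = input.indexOf("[");
  const timestamp = open === -1 ? input : input.slice(0, open);
  const tags: IxdtfTag[] = [];
  const re = /\[(!?)([^\]=]+)(?:=([^\]]*))?\]/g;
  for (const m of input.matchAll(re)) {
    const critical = m[1] === "!";
    // Bare bracketed values (e.g. an IANA zone name) get a "time-zone" key here;
    // key=value pairs (e.g. u-ca=hebrew, _foo=bar) keep their own key.
    const key = m[3] === undefined ? "time-zone" : m[2];
    const value = m[3] === undefined ? m[2] : m[3];
    tags.push({ key, value, critical });
  }
  return { timestamp, tags };
}

console.log(
  parseIxdtf("1996-12-19T16:39:57-08:00[America/Los_Angeles][u-ca=hebrew]")
);
// -> { timestamp: "1996-12-19T16:39:57-08:00",
//      tags: [ { key: "time-zone", value: "America/Los_Angeles", critical: false },
//              { key: "u-ca", value: "hebrew", critical: false } ] }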

One caveat: the extended format introduces potential privacy concerns through information disclosure. Implementations should carefully consider what additional metadata they include, particularly when timestamps may be exposed to untrusted parties. The specification recommends following data minimization principles when generating timestamps with extended information.


FIN

We all will need to get much, much better at sensitive comms, and Signal is one of the only ways to do that in modern times. You should absolutely use that if you are doing any kind of community organizing (etc.). Ping me on Mastodon or Bluesky with a “🦇?” request (public or faux-private) and I’ll provide a one-time use link to connect us on Signal.

Remember, you can follow and interact with the full text of The Daily Drop’s free posts on:

  • 🐘 Mastodon via @dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev
  • 🦋 Bluesky via https://bsky.app/profile/dailydrop.hrbrmstr.dev.web.brid.gy

Also, refer to:

to see how to access a regularly updated database of all the Drops with extracted links, and full-text search capability. ☮️
