Bonus Drop #36 (2023-12-16): Knowl-Edge Drop

Self-Hosted, “Serverless” “Edge” Functions For Fun And Profit

While I can’t guarantee the “profit” part, I can at least promise the “fun” part, provided you are, indeed, even mildly interested in self-hosting your own “edge” functions.

“Serverless” is in quotes because, well, there are always servers involved. Whoever coined that term deserves at least some time in Hades (or New Jersey).

The “edge” is in quotes because to truly be edge-y your code needs to run as close as possible to the systems calling it (or to where the data lives). However, the setup we’ll cover will be good for most casual “edge” needs, and provides a way to experiment with the concept before shelling out coin at an edge compute provider.

Defining “Serverless Edge”

“Serverless Edge” technically refers to compute resources that execute functions and are located “geographically” (lots of quotes today; o_O) closer to the end user, at the Internet’s edge.

“Serverless” computing, also referred to as Function as a Service (FaaS), is a model where we do not need to maintain servers or instances for the code that is run. Instead, we simply add a function to some provider’s service and, when requested, the function is executed. This model is supposed to be cost-effective, since it requires almost no standing infrastructure and offers easy scalability.

As noted in the preamble, “edge” computing refers to performing computations at the point where the data (or caller) originates, i.e., at the network’s edge. This proximity of compute to the source of the data/caller creates opportunities to deliver deeper insights, improved response times, and better experiences.

In traditional “serverless edge” computing, after an event is triggered, a platform’s Anycast DNS resolves the function request to the nearest point of presence (PoP) that has a package containing the function’s code. This results in faster response times and improved overall performance due to the reduced latency.

The section header is stolen from the Deno folks who explain the “edge” more completely in a 2022 blog post.

Given these definitions, today’s Drop is taking some liberties with the terms, but the goal is to help you get started being able to setup an environment where you can easily create, install, and swap out/hot load your own “edge” functions on your own systems. This can be a permanent bespoke solution, or practice for using something like Fastly’s Compute@Edge, Cloudflare Workers, Deno Deploy, Supabase, or AWS Lambda@Edge (you’ll have to Kagi those since I’m not shilling any of them, just noting some platforms I’ve tried).

Deno + Supabase FOSS

Deno makes it stupid simple to deploy edge functions with their Deno Deploy service. That service is so gosh darn good that Supabase also uses a flavor of it to power their edge. One of Supabase’s guiding principles is that “everything is portable”, meaning you should be able to take any part of the Supabase stack and host it yourself.

They made good on that promise with a self-hosting demo that you can run with Docker Compose. In the associated Dockerfile you’ll see that it’s just a basic configuration that uses the same edge-runtime Supabase uses in their core offering.

You can/should read their intro blog post about that runtime before continuing.

This runtime is fueled by Deno and lets us run hot-reloadable JavaScript, TypeScript, and WASM services.

All you need to do to get started is clone the repo and then do one more thing before kickstarting the Docker Compose process: make the following change to the docker-compose.yml file:

version: "3.9"
services:
  web:
    build: .
    volumes:
      - type: bind
        source: ./functions
        target: /home/deno/functions
    ports:
      - "127.0.0.1:SOME_OTHER_PORT:9000"

That change only exposes the edge server locally and you should, generally, pick a different port than the example one in any configuration (unless that breaks some assumptions in the encapsulated services).
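With that bind mount in place, the runtime loads code from ./functions. Assuming the per-function layout the runtime expects (one directory per function, each with an index.ts entry point — an assumption based on how the examples below are organized), the tree would look roughly like:

```
functions/
├── hello-world/
│   └── index.ts
└── random/
    └── index.ts
```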

Now you can:

$ docker compose up --build --detach

to start the service.

I’ve used this Caddyfile entry to expose it over TLS/443:

edge.hrbrmstr.dev {
  reverse_proxy localhost:SOME_OTHER_PORT
}

If you have a recent-ish curl handy, you can test out my deployment with:

$ curl -X POST https://edge.hrbrmstr.dev/hello-world --json '{ "name": "there" }'
{"message":"Hello there from Supabase Edge Functions!"}
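For reference, a handler producing that response might look something like this — a hypothetical sketch reconstructed from the curl output above, not the actual code shipped with the demo. In the runtime it would live in the function’s directory and be passed to `serve()` as in the later examples:

```typescript
// Hypothetical sketch of a hello-world handler matching the curl output above.
// In the edge runtime this would be wrapped with serve() from
// https://deno.land/std/http/server.ts rather than called directly.
async function helloWorld(req: Request): Promise<Response> {
  const { name } = await req.json(); // expects a JSON body like { "name": "there" }
  return new Response(
    JSON.stringify({ message: `Hello ${name} from Supabase Edge Functions!` }),
    { headers: { "Content-Type": "application/json" } },
  );
}
```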

That’s So Random

If you even just perused yesterday’s WPE, you saw the Golang random JSON generator that was included in the post. For those who didn’t want to run one, I also made a version that runs on Fastly’s Compute@Edge platform: https://frankly-mature-fly.edgecompute.app/bar.

We can make a different version of that for our new edge service.

If you drop this into a directory named random under the functions directory:

import { serve } from 'https://deno.land/std/http/server.ts';

serve(async (req) => {
  const url = new URL(req.url);
  if (url.pathname === '/random/bar') {
    const data = Array.from({ length: 10 }, () => ({
      name: Math.random().toString(36).substring(2, 15),
      count: Math.floor(Math.random() * 100),
    }));

    return new Response(JSON.stringify(data), {
      headers: { 'Content-Type': 'application/json' },
    });
  } else {
    // If the endpoint is not /random/bar, return a 404 error
    return new Response('Not found', { status: 404 });
  }
});

You can hit that via:

$ curl -s https://edge.hrbrmstr.dev/random/bar | jq
[
  {
    "name": "qldvqrr6qd9",
    "count": 84
  },
  {
    "name": "8affamjgbqd",
    "count": 34
  },
  …
]

I could (but have not) modify that to add, say, a /random/point endpoint that returns random data for a scatterplot, or add parameter support to any of those endpoints, all without force-reloading anything.
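A hypothetical sketch of what that /random/point payload generator might look like (the name, shape, and value ranges are all assumptions — nothing like this is actually deployed):

```typescript
// Hypothetical payload generator for a /random/point endpoint:
// n random (x, y) pairs suitable for a scatterplot.
function randomPoints(n = 10): { x: number; y: number }[] {
  return Array.from({ length: n }, () => ({
    x: Math.round(Math.random() * 100),
    y: Math.round(Math.random() * 100),
  }));
}
```

Wiring it up would just mean adding another url.pathname branch to the serve() handler above.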

A More Practical Example

Back in October we looked at a JS package that generates placeholder images.

We can make a similar endpoint for our edge service:

import { serve } from 'https://deno.land/std/http/server.ts';
import { isHexColor } from "https://deno.land/x/deno_validator/mod.ts";

serve(async (req) => {
  const url = new URL(req.url);
  if (url.pathname === '/placeholder/svg') {
    const height = url.searchParams.get("height");
    const width = url.searchParams.get("width");
    const color = url.searchParams.get("color");
    const fill = url.searchParams.get("fill");

    const heightVal = (height && !isNaN(Number(height))) ? Number(height) : 100;
    const widthVal = (width && !isNaN(Number(width))) ? Number(width) : 100;
    const colorVal = (color && isHexColor(`#${color}`)) ? `#${color}` : "white";
    const fillVal = (fill && isHexColor(`#${fill}`)) ? `#${fill}` : "black";

    const svg = `<svg width="${widthVal}" height="${heightVal}" 
xmlns="http://www.w3.org/2000/svg">
<style>rect { fill: ${fillVal}; stroke: ${colorVal}; }</style>
<rect width="100%" height="100%"/></svg>`;
    
    return new Response(svg, {
      headers: { 'Content-Type': 'image/svg+xml' },
    });
  } else {
    return new Response('Not found', { status: 404 });
  }
});
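The parameter-defaulting logic above can be pulled out into a small pure helper — a sketch, with a plain regex standing in for deno_validator’s isHexColor so it runs anywhere:

```typescript
// Sketch of the query-parameter defaulting used by the /placeholder/svg handler.
// A simple regex stands in for deno_validator's isHexColor here (assumption).
const isHex = (s: string) => /^#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})$/.test(s);

function svgParams(q: URLSearchParams) {
  // numeric params fall back to 100; color params fall back to white/black
  const num = (v: string | null, dflt: number) =>
    v !== null && !isNaN(Number(v)) ? Number(v) : dflt;
  const hex = (v: string | null, dflt: string) =>
    v !== null && isHex(`#${v}`) ? `#${v}` : dflt;
  return {
    width: num(q.get("width"), 100),
    height: num(q.get("height"), 100),
    color: hex(q.get("color"), "white"),
    fill: hex(q.get("fill"), "black"),
  };
}
```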

If you’d like a 100×100 green square, have at thee! https://edge.hrbrmstr.dev/placeholder/svg?color=00ff00&fill=00ff00

Note that I’ve left room to have this generate PNG, JPEG, or other image formats, but that’s left as an exercise for the reader.

FIN

Hopefully this provided some inspiration to set up your own set of edge microservices, and makes it easier/quicker to get a new API function up without spinning up separate mini processing servers. ☮️
