Declare-AI; Aifont & AImoji; Galah
Our semi-regular section is re-upped this first day of February with a diverse array of AI-driven topics, ranging from AI fonts to AI honeypots to whether you want to opt-in to saying you use AI.
TL;DR
This is an AI-generated summary of today’s Drop.
- The blog post discusses the concept of the AI Content Declaration by Declare-ai.org, which aims to bring transparency to the use of generative AI in content creation. The declaration allows creators to indicate the extent to which AI has assisted in their work, from minor tasks like correcting typos to generating the entire content. This transparency benefits both creators and consumers, providing a better understanding of how the content was produced.
- The post also introduces two typefaces, Aifont and AImoji, generated using generative adversarial networks (GANs). These fonts were created by training models with a vast collection of existing fonts, allowing them to learn the characteristics of letters, symbols, numbers, and emojis.
- Lastly, the post presents Galah, a web honeypot developed by Adel Karimi and powered by OpenAI’s Large Language Models (LLMs). Galah uses an LLM to dynamically respond to incoming HTTP requests, creating realistic responses on the fly. This makes it more engaging for attackers, keeping them occupied for longer and providing more valuable data for security teams.
Declare-AI

Declare-ai.org’s AI Content Declaration is an interesting concept that hopes to bring transparency to the use of generative AI in content creation. Just in case there are still folks who’ve managed to avoid this new trend, generative AI, which includes Large Language Models (LLMs), AI chatbots, and similar systems, can produce outputs that seem very human and creative. This can range from writing blog posts, to generating images for a website, to even writing the summary sections of a newsletter edition 🙃.
The AI Content Declaration is designed to give us humans the opportunity to declare the extent to which AI has assisted us in our work. This could be as minimal as helping with typos or as significant as generating the entire content. The declaration can be embedded into the content itself, added as metadata on a web page, or included as special tags in an MP3 (etc.).
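As a concrete (and very much hypothetical) illustration of the "metadata on a web page" option, here's a tiny sketch of what embedding a declaration might look like. The field names and `<meta>` tag name below are illustrative only, not declare-ai.org's actual schema:

```python
import json

# Hypothetical sketch of an AI-content declaration as page metadata.
# Field names are illustrative, NOT declare-ai.org's actual spec.
declaration = {
    "standard": "ai-content-declaration",  # assumed identifier
    "version": "1.0.0-alpha1",             # the alpha version mentioned above
    "tier": "some",                        # e.g. "none" through fully generated
    "note": "AI assisted with the TL;DR summary only.",
}

# One way a page could surface it (again, illustrative only):
meta_tag = f'<meta name="ai-declaration" content=\'{json.dumps(declaration)}\'>'
print(meta_tag)
```

The appeal of something machine-readable like this is that browsers, feed readers, or search engines could eventually surface the declaration automatically, rather than relying on a human spotting a disclosure paragraph.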
This transparency is beneficial for both creators and consumers. Creators can showcase their use of AI, while consumers gain a better understanding of how the content was produced. While I’m not sure one would always need to do this, I can see the appeal of doing so in, say, academic or professional settings, where the use of AI content generators could be seen as a form of plagiarism.
The declaration comes in different tiers, such as the “none” tier, which indicates that no Generative AI was used in the production of the work. This tiered system allows for a nuanced understanding of the role of AI in content creation.
However, I assert there are potential downsides to this approach. One concern is the accuracy of the declarations: the system relies on the honesty of humans, and there’s currently no mechanism to verify the declared level of AI assistance (just ask any educator who has tried to use the various “detectors” out there). Additionally, the concept is still in its alpha stage (1.0.0-alpha1), indicating that it’s in its early development phase and may undergo significant changes. Moreover, much like the “new font standard adoption” we talked about on Tuesday, these declarations only become meaningful after a decent percentage of folks jump on board.
Still, to try to stay positive about a topic (AI) that seems to have only negatives these days, this attempt at codifying a declaration standard does have the potential to foster trust and understanding between folks who blather and those who consume the blather.
Aifont & AImoji

We sneak a bit of Typography Tuesday into today’s AI-infused edition with this section on two typefaces — Aifont & AImoji — generated with the help of generative adversarial networks (GANs); specifically, the techniques described in Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (GH).
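For folks who haven't poked at GANs before: the core idea is a two-player game between a generator (which fakes samples) and a discriminator (which tries to tell fakes from real data). This toy sketch is emphatically not Process Studio's DCGAN — it's the adversarial objective boiled down to 1-D data (a stand-in for font-feature vectors), with a one-parameter affine generator and a logistic-regression discriminator, gradients done by hand:

```python
import numpy as np

rng = np.random.default_rng(42)

def real_batch(n):                       # "real" data: samples from N(4, 1.25)
    return rng.normal(4.0, 1.25, n)

# Generator G(z) = gw*z + gb ; Discriminator D(x) = sigmoid(dw*x + db)
gw, gb, dw, db = 1.0, 0.0, 0.1, 0.0
sig = lambda t: 1.0 / (1.0 + np.exp(-t))
lr, n = 0.05, 64

for _ in range(3000):
    # discriminator step: push D(real) up, D(fake) down
    xr, z = real_batch(n), rng.normal(size=n)
    xf = gw * z + gb
    dr, df = sig(dw * xr + db), sig(dw * xf + db)
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. dw, db
    g_dw = np.mean((dr - 1) * xr) + np.mean(df * xf)
    g_db = np.mean(dr - 1) + np.mean(df)
    dw, db = dw - lr * g_dw, db - lr * g_db

    # generator step: push D(fake) up (non-saturating loss)
    z = rng.normal(size=n)
    xf = gw * z + gb
    df = sig(dw * xf + db)
    # gradients of -log D(fake) w.r.t. gw, gb (chain rule through dw)
    g_gw = np.mean((df - 1) * dw * z)
    g_gb = np.mean((df - 1) * dw)
    gw, gb = gw - lr * g_gw, gb - lr * g_gb

fakes = gw * rng.normal(size=1000) + gb
print(round(float(fakes.mean()), 2))  # should drift toward the real mean of 4
```

Swap the 1-D samples for images of glyphs and the affine maps for deep convolutional networks and you're in DCGAN territory — same game, vastly bigger players.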
Previous Drops have showcased some more modern takes on font-generation using AI, but it’s fun to see what folks were up to before we gobbled up half the world’s water so the average evil person could make free and convincing deepfakes at scale.
Process Studio trained their models with a vast collection of existing fonts, enabling them to suss out the characteristics of letters, symbols, numbers, and emojis. While their work is most assuredly interesting, I’m not a huge fan of either typeface. Plus, some of the emojis are, to use one of their own words, uncanny (and downright disturbing).
If your sensibilities are less disturbed than mine, both of these fonts seem to be available for purchase.
I’m also keeping an eye out for when the previous papers we’ve looked at finally start dropping some software or actual typefaces we can start using.
Galah

Galah is a web honeypot, developed by Adel Karimi, and powered by OpenAI’s Large Language Models (LLMs) (though it seems like it would work with any similar service, provided the same API spec is supported). We should probably start with some definitions, since not every reader works in the space I do (at-scale mass exploitation detection via one of the world’s largest honeypot sensor fleets).
A honeypot (in cybersecurity vernacular) is a decoy system set up to bait and trap malicious actors. These systems are designed to lure them in and keep them busy while the real systems remain safe (kind of the opposite of what most managed detection and response companies do, which is use their customers as bait). Galah takes this honeypot concept and supercharges it. Instead of just being a static trap, Galah uses an LLM to dynamically respond to incoming HTTP requests, crafting realistic responses on the fly. This makes it much more engaging for attackers, keeping them occupied for longer and providing more valuable data for security teams.
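To make the "dynamic response" idea concrete, here's my own rough Python reconstruction of the core loop — raw request in, LLM-drafted HTTP response out. This is not Galah's actual code; the prompt wording is invented, and the model call is stubbed with a canned answer so the sketch runs offline (a real deployment would call a chat-completion API there):

```python
import json

# Sketch of the Galah idea (my reconstruction, NOT Galah's code): hand the
# raw HTTP request to an LLM and ask it to draft a believable HTTP response
# for whatever application the attacker appears to be probing.

PROMPT = (
    "You are emulating a web application. Given the raw HTTP request below, "
    "reply with JSON: {\"status\": int, \"headers\": {...}, \"body\": str}.\n\n"
)

def stub_llm(prompt: str) -> str:
    # Stand-in for a chat-completion call, so this sketch runs offline.
    return json.dumps({
        "status": 200,
        "headers": {"Server": "Apache/2.4.41 (Ubuntu)"},
        "body": "<html><body>It works!</body></html>",
    })

def honeypot_response(raw_request: str, llm=stub_llm) -> str:
    spec = json.loads(llm(PROMPT + raw_request))
    lines = [f"HTTP/1.1 {spec['status']} OK"]
    lines += [f"{k}: {v}" for k, v in spec["headers"].items()]
    return "\r\n".join(lines) + "\r\n\r\n" + spec["body"]

print(honeypot_response("GET /phpmyadmin/ HTTP/1.1\r\nHost: victim\r\n\r\n"))
```

The payoff is that the same loop can impersonate phpMyAdmin, a WordPress login, or a router admin page with zero per-application code — the model improvises the content.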
There are plenty of existing, non-AI honeypot frameworks that try to do the same thing by matching incoming web requests to a known byte-stream pattern, then serving up a custom real (or fake) application matching what the requester is looking for. Galah, at a glance, is far more flexible than this. The aforementioned “traditional” honeypots require quite a bit of work to set up and maintain, as you need code and content to emulate various web applications or vulnerabilities. Galah, on the other hand, leverages LLMs and generative AI methods to dynamically mimic a wide range of applications, making it potentially easier to deploy and manage. It’s like having a whole team of actors playing different roles, all controlled by a single director.
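For contrast, the "traditional" byte-pattern approach described above boils down to something like the following (the patterns and canned bodies here are illustrative, not lifted from any real framework):

```python
import re

# Sketch of a classic static honeypot: match incoming requests against
# known patterns and serve hand-written canned replies. Every new app you
# want to emulate means writing new patterns and new content by hand.
CANNED = [
    (re.compile(r"/wp-login\.php"), "<html>WordPress login</html>"),
    (re.compile(r"/phpmyadmin", re.I), "<html>phpMyAdmin 4.9</html>"),
]
FALLBACK = "<html>404</html>"

def static_honeypot(request_line: str) -> str:
    for pattern, body in CANNED:
        if pattern.search(request_line):
            return body
    return FALLBACK

print(static_honeypot("GET /phpMyAdmin/index.php HTTP/1.1"))
```

That hand-maintained pattern/content table is exactly the upkeep burden the LLM approach sidesteps: anything not in `CANNED` falls through to a giveaway 404, whereas Galah can improvise.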
At work, we’ve been experimenting with what I’ve dubbed the “holodeck”, named after the fantastical rooms in the Star Trek universe. The concept is similar to Galah, but even more aspirational: why stop at just the initial access application? When the chips catch up (in terms of processing speed), why not have an AI-based system fake an entire computer? Or an entire network? Or even an entire internet?
For folks with access to one of the more capable online models, try asking it to emulate a modern Ubuntu system, or a Cisco router console, or a Red Hat Linux box running Oracle (ping me if you need prompt help). You may be surprised at just how good a job it can do.
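If you want a starting point for that experiment, something along these lines works as an opener — to be clear, this is just illustrative wording I'm suggesting, not a tested-and-tuned prompt:

```python
# A hypothetical starting prompt for the "emulate a system" experiment above.
# Paste it into any capable chat model and keep typing commands from there.
PROMPT = """You are a freshly installed Ubuntu 22.04 server. I will type shell
commands; respond ONLY with the terminal output those commands would produce,
with no explanations and no markdown. My first command is: uname -a"""

print(PROMPT)
```

The "respond ONLY with terminal output" constraint is the important part; without it, most models will narrate what the command does instead of playing the machine.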
If you don’t mind paying the OpenAI tax, give Galah a go and see if you can determine whether it’s a honeypot by interacting with the web side of it.
FIN
Remember, you can follow and interact with the full text of The Daily Drop’s free posts on Mastodon via @dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev ☮️