Drop #423 (2024-03-07): Happy ThursdAI

Dripgrep; Yes, AIs ‘Understand’ Things; Generative

(Programming note: I’m going to try to get back into the habit of pre-writing a few Drops, so I can accommodate the crunch days that seem to be happening over the past couple of weeks. Apologies for the missing Wednesday edition.)

Our occasional AI-themed edition is here with a look at bolting AI onto a familiar tool, a thinkpiece on whether our emerging overlords do, indeed, grok the stuff we’re sending their way, and a pretty sobering piece on how those who are embracing AI and thrusting it upon us are just fitting into a sad, familiar mode.

TL;DR

(This is an AI-generated summary of today’s Drop)

(Perplexity neglected to include links, again…sigh)

  • Dripgrep Project: A CLI tool named dripgrep, which combines ripgrep with ChatGPT, is introduced for experimental purposes to explore the potential of integrating large language models with APIs for real-world problem-solving. The project is not intended for serious use but aims to demonstrate the application of LLMs in tool selection and performance optimization through semantic isolation and machine learning predictions.
  • AI Understanding Debate: Robert Wright challenges the long-standing argument against AI’s ability to understand, specifically countering John Searle’s ‘Chinese Room’ thought experiment. Wright argues that the conversational fluency and information processing capabilities of large language models like ChatGPT demonstrate a form of understanding comparable to human cognition, suggesting a reevaluation of what constitutes understanding in AI.
  • Generative AI’s Impact on Creative Industries: Ethan Marcotte’s post “Generative” explores the effects of generative AI on creative and technical professions, highlighting concerns about de-skilling and job loss due to automation. Marcotte acknowledges the potential benefits of generative AI in lowering barriers to creation but also points out the need for regulatory oversight and labor protections to address its societal and ethical implications.

Dripgrep

Photo by Ashish Chavan on Pexels.com

I trust anyone reading this uses ripgrep on an almost daily basis already. If not, hopefully you’re using ag or some other modern version of grep (or you’re just burning seconds needlessly).

One hallmark of ripgrep is speed. It’s fast. So, of course, what we want to do is bolt on some API calls to third-party LLMs/GPTs (and pay the OpenAI tax) so that we slow it down by a few orders of magnitude.

If that’s how you roll, you’ll like dripgrep!

This CLI marries ripgrep and ChatGPT so you can invoke something like this:

$ dripgrep gpt "I need you to do a multistep task for me. First, turn on statistics printing, then set a file type for txt files, and then I want you to think to yourself about what the Hungarian equivalent of the phrase 'Once upon a time' would be and search for it."

and see the output (head to the GH repo to give Frank, the creator, some 👀).

The project is not meant for serious use but rather as a reconnaissance mission to explore the potential of using large language models with APIs to present a set of “tools” or “functions” that the LLM can decide to invoke based on context. The goal is to apply the power of these models to real-world problems effectively.

The project focuses on defining a system that allows presenting a large number of possible actions to an LLM, leveraging Rust types and OpenAI function enumerators for tool calling and parallel processing. The project also looks at the importance of semantic isolation in action descriptions to optimize tool selection and performance.
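To make that pattern concrete, here’s a minimal Rust sketch of the general approach: each ripgrep action becomes a typed enum variant, a tool list (name plus description) is what the model sees, and the model’s chosen tool call is mapped back to a typed action. All names here are hypothetical illustrations, not dripgrep’s actual types.

```rust
// Hypothetical sketch of LLM tool calling with Rust types.
// None of these names come from dripgrep itself.
#[derive(Debug, Clone, PartialEq)]
enum RgAction {
    SetFileType { ext: String },
    ToggleStats,
    Search { pattern: String },
}

// The (name, description) pairs that would be sent to the model as its
// tool list. Keeping descriptions semantically distinct is what makes
// the model's tool selection reliable.
fn tool_specs() -> Vec<(&'static str, &'static str)> {
    vec![
        ("set_file_type", "Restrict the search to files with a given extension"),
        ("toggle_stats", "Toggle printing of search statistics"),
        ("search", "Search for a regex pattern in the current directory"),
    ]
}

// Map a (tool name, argument) pair returned by the model back to a typed action.
fn from_tool_call(name: &str, arg: &str) -> Option<RgAction> {
    match name {
        "set_file_type" => Some(RgAction::SetFileType { ext: arg.to_string() }),
        "toggle_stats" => Some(RgAction::ToggleStats),
        "search" => Some(RgAction::Search { pattern: arg.to_string() }),
        _ => None,
    }
}

fn main() {
    // Pretend the model picked the "search" tool with a Hungarian phrase.
    let action = from_tool_call("search", "Egyszer volt");
    println!("{:?} tools available; chose {:?}", tool_specs().len(), action);
}
```

The payoff of the typed enum is that an unrecognized tool name from the model falls out as `None` instead of silently doing the wrong thing.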

Frank emphasizes the need for differentiation in action descriptions to avoid semantic overlap, which can impact tool selection accuracy. Additionally, the project explores the idea of augmenting similarity filtering with machine learning predictions and intermediate steps to enhance the robustness and consistency of the system’s responses. The developer plans to implement tests that rigorously evaluate adjustments in wording, argument variants, and overall system behavior. There are also considerations for handling long-running processes efficiently by inserting intermediate steps for course correction.
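The “semantic overlap” check can be sketched as a pairwise similarity pass over tool-description embeddings: any two descriptions that embed too close together are candidates for the model to confuse. This toy version uses made-up stand-in vectors and an assumed threshold; a real implementation would use actual embeddings from a model.

```rust
// Toy semantic-overlap check: flag pairs of tool descriptions whose
// embedding vectors have cosine similarity above a threshold.
// The vectors and threshold below are illustrative assumptions.
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (na * nb)
}

fn overlapping_pairs(
    embeds: &[(&str, Vec<f64>)],
    threshold: f64,
) -> Vec<(String, String)> {
    let mut out = Vec::new();
    for i in 0..embeds.len() {
        for j in (i + 1)..embeds.len() {
            if cosine(&embeds[i].1, &embeds[j].1) > threshold {
                out.push((embeds[i].0.to_string(), embeds[j].0.to_string()));
            }
        }
    }
    out
}

fn main() {
    let embeds = vec![
        ("set_file_type", vec![0.90, 0.10, 0.00]),
        ("set_file_glob", vec![0.88, 0.15, 0.02]), // nearly the same direction
        ("toggle_stats", vec![0.00, 0.20, 0.95]),
    ];
    // The two near-identical "file" tools get flagged as overlapping.
    let clashes = overlapping_pairs(&embeds, 0.98);
    println!("{:?}", clashes);
}
```

Flagged pairs would then get their descriptions reworded until they separate, which is the “differentiation” Frank is after.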

And, to be completely fair, even dripgrep’s creator warns us not to use it:

Jokes aside, this is a serious project (in a way), but it is not a serious attempt to make some sort of grep gpt. I’m not making this because I think it is a great idea worth pursuing on its own. Don’t actually use this. Learn real command line tools, not this thing. Your children could one day find out.

Yes, AIs ‘Understand’ Things

Photo by LJ on Pexels.com

Robert Wright’s recent piece, “Yes, AIs ‘understand’ things,” takes a pretty bold stance on the long- (long… long…) debated topic of artificial intelligence and understanding. Wright challenges the classic argument against AI understanding — John Searle’s ‘Chinese Room‘ thought experiment. Searle’s argument, which has been an oft-used foundation of AI skepticism since the 1980s, asserts that no matter how well an AI emulates human behavior, it cannot be said to truly “understand” because it is merely manipulating symbols without comprehension.

Wright argues that the advent of ChatGPT and other large language models has effectively refuted Searle’s argument. These LLMs demonstrate conversational fluency by processing information in ways functionally comparable to human brains, which Wright suggests is a form of understanding. He acknowledges that if one defines understanding to require subjective experience, then we cannot determine whether AIs possess it. Yet, if understanding is defined by the ability to process information similarly to humans, then these LLMs most certainly do exhibit key elements of understanding, including semantics and intentionality.

The post also touches on the evolution of AI in general, and suggests that as AI continues to develop, it will likely gain more elements of understanding. Wright’s critique of Searle’s argument is not to claim that AIs have full understanding but to highlight that they have as many elements of understanding as one might expect at their current level of development.

I think this analysis is significant because it shifts the conversation from whether AIs can understand to what degree they understand and what we mean by “understanding.” It’s a fairly nuanced (hot?) take that recognizes the complexity of the issue and the progress AI has made. The implications of this perspective could be pretty profound, as it challenges us to reconsider our definitions of cognition and the potential of artificial intelligence.

Give the whole thing a solid read. It’s managed to change my perspective (a bit) on this, especially with the emergence of Claude 3 (if you haven’t played with it yet, it’s bonkers cool).

Generative

Photo by Pixabay on Pexels.com

Ethan Marcotte’s (@beep@follow.ethanmarcotte.com) “Generative” post is a timeline that lets us explore the impact of generative artificial intelligence on the creative/technical professions.

Marcotte is known for coining the term “responsive web design,” so he comes at this piece with far more than a mere pundit’s background. His core aim is to examine the implications of AI tools that can produce content — be it text, images, or code — for the skilled labor market and the broader creative industry.

He uses the term “de-skilling” to describe the process by which technology reduces the demand for skilled labor, and reflects on the rapid development of generative AI tools and their potential to automate tasks traditionally performed by professionals.

I’m compelled to point out that this concern is not just theoretical; people are already losing jobs to AI, and the tech industry is experiencing significant layoffs. This has hit my profession (cybersecurity) pretty hard.

This is not a “doom! doom! doom!” piece. Ethan acknowledges the technical “marvel” of these tools and their potential to lower barriers for people to create and express themselves (and, as I’ve said on more than one occasion, they have been a godsend during my long covid brain fog).

However, he also points out the lack of regulatory oversight, privacy safeguards, and labor protections in the country where these tools are being developed. This nuanced view suggests that while generative AI can be a force for good, it also poses significant challenges to the workforce and ethical considerations for society. But, as you scroll through the pseudo-timeline view, you’ll see we’ve been here/done that on far more than one occasion.

I think it serves as a prescient reminder that — as we marvel at the possibilities of AI — we must also grapple with its impact on the human condition and the fabric of our industries (and, may I further posit “each time we use one of these tools”).

FIN

Remember, you can follow and interact with the full text of The Daily Drop’s free posts on Mastodon via @dailydrop.hrbrmstr.dev@dailydrop.hrbrmstr.dev ☮️
