Yes. That's the answer to a question you definitely have by now: did you use AI to write this blog post? I have to say, I was a bit late to jump on the AI bandwagon. Usually, I'm very interested in new technologies and consider myself an early adopter of many things: I had a Facebook account as soon as it was available and was posting pictures of my food on Instagram in 2010. But while the early AI adopters were prompting DALL-E to generate cat pictures in the style of Van Gogh, I was muttering angrily in a corner somewhere. In this blog post, I want to explain why I wasn't an early adopter, why I changed my mind, and where I stand now. I hope it sparks some discussion so that I can learn more about the topic.

Technological innovations have always been disruptive to society. Farming made hunting and gathering obsolete, mechanical looms replaced weavers, and horses gave way to cars as the primary mode of travel. Even Charlie from the chocolate factory's dad lost his job in the toothpaste factory to a robot. But hunter-gatherers became farmers, weavers became factory workers, and horses now run diagonally in a sandbox with braids in their hair. Even Charlie's dad ended up maintaining the robot that replaced him. The point is: though past innovations were disruptive and cost a lot of people their jobs, those same innovations also created new job opportunities.

A while back, I watched "Humans Need Not Apply" by CGPGrey, and as the title suggests, it argues that this innovation will not provide new job opportunities. I became scared. But unlike the previous innovations, I can actually ask this one what its plans are. So I did.

I might have been naive to think it would be honest. Two things combined made me start using AI more, both professionally and personally. The first was something I heard on the Nerdland podcast: in the near future, AI won't take people's jobs, but people using AI will. The second was the concept of the "pre-emptive strike", or "first-mover advantage", which I learned from the book "The Dark Forest". Basically, if I don't start using AI, somebody else will, and they will make me obsolete. My only option is to start using it myself: if you can't beat them, join them. Having used it for a while now, I realize it's not at a level where it will render me obsolete any time soon, so I consider my job safe. I can't tell whether my kids will still have job opportunities, but I guess that's their problem.

The Nerdland podcast (Dutch)

How do I use AI now?

I use GitHub Copilot, but my feelings are mixed. Sometimes it suggests exactly what I want, but mostly it's a one-liner that saves me a couple of keystrokes in exchange for a TAB. Longer suggestions are rarely helpful. I am more impressed by how I can leverage ChatGPT. I ask it for naming suggestions for classes and prompt it for code snippets. It actually helped me with a Filament-specific issue I was having, and although it didn't know about Filament 3, it pushed me in the right direction to fix it.

When writing blog posts, I prompt it for specific sentences and code snippets, and I use it to make a lame joke (see the picture above). For another blog post, it suggested the theme of a maze, which turned out to be a good metaphor for the topic. To sum it up, I use it as a rubber duck to help me generate ideas. That renders part of my job description obsolete, but it also means I no longer have to talk to myself, which I consider a win.

Supervision

Having used it more, I feel like AI is currently in the toddler phase: people are in awe that it can walk and tell the whole world about it, but it's not ready to cross the street unsupervised. It is great as a rubber duck, but I would not use it without verifying its output, because it can still give wrong answers. It's a language model that generates useful output, not a knowledge machine. At least not yet.

At least it apologizes for not being perfect. The lesson learned is that we should not trust AI blindly and should still do our fact-checking.

It's not magic, it's mathematics

The second thing we, as developers, should be mindful of is bias. Training AI models requires extensive data sets, and the quality of those data sets directly affects the quality of the model. Basically, an AI is a set of mathematical algorithms that, combined, can generate output that feels magical. But if there is bias in the data set, there will be bias in the output the model generates. AI judges that rule fairly and without bias might seem like a good idea, but blindly training on all previous rulings means the bias in those rulings will inevitably show up in the model. AI will not magically remove bias; it was not designed to do that.

When you use AI to identify the insect you just took a picture of, the worst thing that can happen is that you mistake a Carpenter Ant (Camponotus sp.) for an Odorous House Ant (Tapinoma sessile). I'm pretty sure the ant won't mind. When we start using AI to determine insurance rates, however, bias might give people higher rates simply because they belong to a minority. So when we use AI models in the software we develop, we should always check for biases.

AI will rule us all

Granted, that would be a very bleak future, and I don't think it will happen. AI is here to stay and will definitely change our future in many ways. It will do wondrous things in medical advancement, but it will also disrupt job markets. I have changed my attitude towards AI, but I'm still cautious about making it more than it is: it's not ready to be used unsupervised, and bias is something we should always be mindful of. I'd love to hear how you currently use AI, because there are probably a million ways I haven't even thought of.

A small addendum: while I'm writing this, Grammarly constantly wants to change "it" into "them" when referring to AI. Either it doesn't know I'm referring to AI, or it has become sentient and wants to be addressed as "them". I'm hoping for the first explanation.