Artificial intelligence (AI) sparks a lot of discussion and strong feelings in my circles, creating two vocal camps: you’re either for or against. Unlike the people in these camps, my sentiment about AI has been going back and forth, and it took me a while to figure out why. One day I’d be excited about the possibilities, and the next day I didn’t want anything to do with AI. It turns out, different aspects of the technology evoke different feelings.
In this post, I want to look at AI from different perspectives and share my thoughts on each.
AI as a technology
It must’ve been around 2019 when I started to actively follow developments in machine learning and neural networks. The concept of function approximation, where an algorithm optimises a model that transforms inputs into outputs, was and still is incredibly exciting. To really get how things worked, I followed courses, coded my own neural network from scratch, and dabbled with TensorFlow and Keras in small projects.
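To give a sense of what “from scratch” means here, the sketch below shows the essence of that idea: a tiny two-layer network trained by hand-written backpropagation and gradient descent to approximate sin(x). It’s a minimal illustration, not my original code.

```python
import numpy as np

# A toy example of function approximation: fit y = sin(x) with a tiny
# two-layer network, hand-written backpropagation, and gradient descent.
rng = np.random.default_rng(seed=0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

# Parameters of a 1 -> 16 -> 1 network with a tanh hidden layer.
W1 = rng.normal(0.0, 0.5, (1, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1))
b2 = np.zeros(1)

lr = 0.05  # learning rate
for step in range(5001):
    # Forward pass: input -> hidden activations -> prediction.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2

    # Mean squared error between prediction and target.
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backward pass: the chain rule, written out by hand.
    grad_y_hat = 2.0 * err / len(x)
    grad_W2 = h.T @ grad_y_hat
    grad_b2 = grad_y_hat.sum(axis=0)
    grad_h = (grad_y_hat @ W2.T) * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = x.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient descent: nudge every parameter against its gradient.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

    if step % 1000 == 0:
        print(f"step {step:5d}  loss {loss:.5f}")
```

Watching the loss shrink as the network bends itself around the target function is, to me, still the clearest demonstration of why this technology is exciting.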
I still believe it’s a transformative technology that lets us solve many problems we previously considered too complex or impractical, so the excitement is still there, but it now comes with a degree of discomfort.
AI as an industry
A part of the industry deserves praise. Many people build meaningful systems: tools that help doctors make more accurate diagnoses, systems that improve farming efficiency and lower emissions, and machinery that measures the quality of produce more accurately and reduces waste. I can’t get enough of applications that measurably benefit humankind.
The reality, unfortunately, is that not every endeavour is a clear positive. To develop large language models (LLMs), various companies blatantly rip off most of humanity’s creative works, exploit and traumatise low-wage workers to label the data, somehow manage to redirect a huge amount of resources from (sometimes publicly funded) civilian infrastructure, and then get filthy rich off of it. Or rather, get filthy rich off promises and investments. It takes hundreds of billions1 worth of investment to buy the data they couldn’t steal, build the data centres, purchase the hardware, and operate the whole show.
So naturally, these achievements and promises are overblown because the money needs to keep flowing in. Artificial general intelligence? Consciousness? Some of these LLMs still struggle with counting the number of r’s in strawberry. Surely we’re several major advancements away from getting even close. Until then, just throwing more data at it yields diminishing returns.2
Consequently, I believe AI is a bubble, though primarily a financial one. I’m sure the technology is here to stay; it’s just grossly overvalued and excessively deployed. For the sake of our wallets, let’s hope the bubble doesn’t burst too violently and doesn’t trigger a recession.
I also worry about how this technological shift disempowers workers and further concentrates wealth in a very small group of people in tech. Perhaps I’d find forgiveness in my heart if they promoted social safety nets to prepare us for their automated utopia. But they often promote hypercapitalism, in which those without work are homeless and hungry.
All in all, there’s a lot wrong with the industry, more than I mentioned, and it’s a large contributor to my conflicting feelings about AI. I hope there’s a future where AI is ethically sourced, doesn’t gobble up resources mankind needs to survive, and doesn’t disrupt the entire working world without offering a better alternative.
AI as an assistant
On the consumer side of things, we have LLM assistants. They have their uses, but the impact of using them is opaque. Attempts to calculate the energy and drinking water consumed by AI vary wildly, but the consumption is likely enormous3 both for training the models and for inference. With ChatGPT having over 700 million weekly users,4 the total impact is tremendous either way. And for what? Homework and slop?
With so many eyeballs pointed at assistants instead of apps and websites, obviously the next move is to serve advertisements.5 We’ve seen how that goes: ad platforms gobble up data to serve personalised ads and trick us into watching more so that more money can be made. Maybe they’ll take a slice of in-assistant purchases too, giving them an incentive to make us spend as well. Look, ads aren’t new, and I’m all for sustainable revenue models, but having ads served by what people believe to be magical truth boxes, which regularly and confidently output false information, is bound to cause more problems than banners ever did.
We definitely need legislation updated for the era of AI assistants, as they’re happy to give us financial advice, play psychologist, or try to sell us a product, all without the accountability.
AI as a geopolitical race
Clearly, the AI race is on, not only between competing companies but also between nations. In a multi-billion-dollar industry1, there’s much to gain economically by coming out on top. It’s also a powerful tool for military purposes. Or for domestic control. Or to sway public opinion and interfere with elections, both domestically and abroad. Everyone wants a piece of that pie.
It would be inaccurate and cynical of me to say that, because of the above, governments are doing nothing to restrain the AI industry. That’s not to say the AI industry doesn’t lobby and doesn’t influence policy, but something is being done. We have the EU AI Act, which prohibits harmful uses of AI, introduces obligations for “high-risk” uses, and introduces some transparency rules.6 While I have plenty to say about wealth accumulation and resource use in the AI industry, these issues aren’t exclusive to the AI space, and perhaps they shouldn’t be mitigated through AI-specific legislation. Even at the risk of slowing down innovation and giving up the first spot, I’d love to see governments take these issues more seriously.
AI as a tool in software engineering
As you’ve read, I have plenty of gripes with the AI industry, and they made me hesitant to start using AI in my work, but I eventually took the plunge after a gentle push.
I started out using open-source models trained on public data. At the time, these weren’t good enough to make it worthwhile; my mileage was poor compared to that of my peers, who used the latest, greatest, and largest models on offer. Despite my reservations, I eventually started to use the Cursor license I was given, but once in a while I come back to open-source models. Surely, at some point, we’ll get ethically sourced, maybe even resource-efficient, models that are competitive.
I’ve come to understand why some engineers feel their work is being taken away from them. My love for coding started out as just that: a love for coding. AI takes that mechanical work away. Over the years, my interest shifted towards problem solving and engineering enablement, so I’m not too upset about offloading the manual labour. In some ways, it’s akin to delegating work to other developers, except that AI doesn’t need to be motivated, doesn’t need breaks, has an incredibly broad knowledge of technologies, and has no ego, but it can be utterly wrong with confidence. Because it’s not human, I’ll dismiss faulty work by AI without hesitation, whereas addressing a bad pull request from a junior developer requires tact. Delegating work to AI feels simpler, but it’s also less fun and engaging.
This analogy, however, creates the expectation that a senior engineer’s output multiplies by some amount because they have a virtual team of agents. This doesn’t exactly match my experience, but I’m still learning how to get the most out of AI. That expectation also devalues our work: have you ever received a raise proportionate to a productivity increase?
Is there a way back? Even if we lose skills along the way, we document our knowledge well enough for it to be learned again, but I think it’s unlikely we’ll need to go back. Once the bubble bursts, or when investors want a sustainable revenue model, I expect price hikes. If that happens tomorrow, at a point where many engineers are still sceptical, maybe we’ll see adoption, and thus investments and developments, stagnate. If it happens later, at a point when we’re confident that coding assistants boost productivity by some amount, price hikes can be overcome. A tool that doubles productivity and costs less than a salary is a good deal, and today a Premium Claude seat is 25-50× cheaper than a salary for a senior software engineer (think $200 per month, about $2,400 a year, against, say, a $60,000-120,000 annual salary).
So, what does moving forward look like? I have high hopes that we’ll see more activity in the class of programming languages that focuses on defining outlines, specifications, validation, and proofs. Today, we can write a suite of unit tests and have AI write a passable implementation; if my tests are comprehensive enough, I can trust the implementation through validation, as in the sketch below. Now imagine a language tailored to writing tests, so we can write and review them more efficiently and without the syntactic baggage of a general-purpose programming language. We’ll need to figure out what level of granularity we’re comfortable with. Maybe some applications have few technological requirements and can be written as user flows, like in Gherkin, while other programs require more in-depth specification and/or validation. For embedded software, for example, we probably want to keep some control over memory usage. I suspect we’ll move from writing implementations to writing specifications that machines can satisfy.
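To make the tests-as-specification idea concrete, here’s a minimal sketch. The slugify function and its slugger module are hypothetical, invented for illustration; the point is that the test suite is the specification, and whatever implementation an assistant produces is accepted only if the suite passes.

```python
# Tests as the specification: these pytest cases describe the behaviour
# of a hypothetical slugify() function. An AI assistant would write the
# implementation; we validate it purely by running this suite.
import pytest

from slugger import slugify  # hypothetical module the assistant writes


def test_lowercases_and_joins_words_with_hyphens():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"


def test_collapses_repeated_separators():
    assert slugify("a  --  b") == "a-b"


def test_rejects_blank_input():
    with pytest.raises(ValueError):
        slugify("   ")
```

A language designed for specifications could express the same contract with far less ceremony than Python, which is exactly the kind of development I’m hoping for.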
Closing thoughts
Moving into the AI era both worries and excites me.
Many of my friends shun everything AI, often for reasons similar to those mentioned in this post. I appreciate their unwavering beliefs and, as you can tell, I share their concerns. I hope governments and the industry start working towards solutions.
But the uncertainty of the future brings out the curious nerd in me that wants to learn, explore, and adapt. My job is to stay on top of technology. It keeps me useful, productive, and employable. From my perspective, AI is here to stay, so it’s something I need to be on top of.
Although I cannot exert a lot of influence, I can try to mitigate some of the bad parts as I learn to use these tools. Instead of throwing AI at every problem that can be solved by other means, I choose to be conservative. I can discourage the most wasteful and least useful uses of AI by avoiding social media slop and asking others to do the same. And instead of rewarding AI companies that train models on stolen data, I can deliberately support (open-source) models trained exclusively on public domain data.