Artificial intelligence is rapidly reshaping the way software gets built, tested, and maintained — but not in the simplistic, headline-grabbing sense of "AI replacing developers."
Over the past few years, I've seen firsthand how AI is starting to change working practices. It's not a sweeping transformation, nor is it irrelevant hype – it is something more nuanced.
For some engineers, AI is quietly becoming part of the everyday engineering toolkit, showing up in code assistants, test generation tools, infrastructure management, and even project planning. In my own work, and in conversations with other engineers, I've noticed a shift. But is this for the better or for the worse?
As an experienced software engineer, I was asked about my own experiences of using AI. Like many of you, I'm starting to feel its impact on my daily work and my approach to tasks.
A brief background: at university I studied chemistry and physics. After a short stint working as a chemist, I moved into programming. Mostly I've programmed on Windows using languages such as Visual Basic, Delphi, C/C++, and C#. I occasionally dabble with Python, and in my spare time, I play music and board games.
Back in the '90s when I started as a software engineer, search engines capable of finding information easily were not available. Generally, engineers learnt their skills from traditional sources: in-person training courses, books, technical newsgroups (remember them?), talking to other engineers, and (perhaps most importantly) whatever code you happened to be working on. Since I did not study "computer science" at uni (until later, when I went back to do my masters), I was constantly exposed to new ideas, data structures and languages, new problems and fresh ways of tackling them. One memorable moment came when an experienced engineer told me about a circular list: a fixed-size list where the insertion point moves in a loop, wrapping back to index 0 when it reaches the end. Simple, yes, but also not something you might be exposed to every day.
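For anyone who has not met one, here is a minimal sketch of the idea in C# (the type and member names are mine, for illustration):

```csharp
// A fixed-size circular list: the insertion point advances on each Add
// and wraps back to index 0 at the end, overwriting the oldest entry.
public class CircularList<T>
{
    private readonly T[] _items;
    private int _next;                        // current insertion point
    public int Count { get; private set; }    // number of slots filled so far

    public CircularList(int capacity) => _items = new T[capacity];

    public void Add(T item)
    {
        _items[_next] = item;
        _next = (_next + 1) % _items.Length;  // wrap back to 0 at the end
        if (Count < _items.Length) Count++;
    }
}
```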
In terms of programming and AI, I should say upfront that I am a light user. Mostly, I rely on experience, muscle memory and traditional search. I lean heavily on the patterns of solid, clean and defensive code, built up through years of trial and error. My aim is code that will be easy for others to read and maintain years into the future. While not exactly an AI skeptic, I do find that the variable, incomplete results served up by AI are, on average, not time savers.
Naturally, I make occasional use of AI tools (primarily Copilot). The quality of results on simple, direct questions is generally very good. And the conversational style of summarizing both technical and non-technical subjects means AI tools often give better answers than a traditional search engine. These are two areas where AI shines. Occasionally, though, the suggested code is crude or just plain wrong - even the most ardent AI enthusiast would surely agree that the time is not right to remove humans from the loop.
Enthusiasts generally pigeonhole developers into one of three categories in terms of AI usage:
Assuming those categories are approximately right, as a light user of AI I guess I fit somewhere between groups 1 and 2.
Even so, my skepticism relates to other issues:
Perhaps after reading that, the enthusiasts out there will put me firmly back into group 1, where I belong.
For some color, here are some of my recent personal experiences with AI:
A company I have worked for manufactures automation machines for factories. Its main product, and much of its codebase, has been around for several years and targets Windows. Naturally it has been modernized over time as hardware, operating systems and development tools have changed. The main application uses Delphi for the front-end, with hardware and peripheral control in C/C++.
Its application is single-purpose - users should not leave the environment provided by the software. We needed a way to display PDF user manuals. Many options suggest themselves, such as hosting a simple PDF reader (perhaps written in C# using a component like WebBrowser), using the system web browser with its built-in PDF support (perhaps Microsoft Edge), or relying on whatever application happened to be associated with PDFs (from the Windows registry).
So I asked Copilot how to read the application associated with PDFs from the registry "in Delphi." Copilot confidently suggested looking only at HKCU\Software\Classes\.pdf. Although the code did not compile in my version of Delphi, the real problem was that details were missing. I finally found a StackOverflow question that covered more of the subtleties.
For those interested, one wrinkle is that the associated application can be overridden by user preference, such as an "Open with" override in Windows Explorer.
The AI gave no hint of these wrinkles. It was the same in C# (which is more commonly used than Delphi).
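To make the wrinkle concrete, here is a minimal C# sketch of a more careful lookup, assuming the documented registry layout; the class and method names are mine, and error handling is pared back for brevity:

```csharp
using Microsoft.Win32;

static class PdfAssociation
{
    // Resolve the ProgID that Windows associates with .pdf, checking the
    // per-user "Open with" override before the per-user and machine-wide
    // defaults - the step the AI's one-key answer skipped.
    public static string? GetPdfProgId()
    {
        // 1. Per-user "Open with" override, set via Windows Explorer.
        using (var key = Registry.CurrentUser.OpenSubKey(
            @"Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.pdf\UserChoice"))
        {
            if (key?.GetValue("ProgId") is string userChoice)
                return userChoice;
        }

        // 2. Per-user default (the only key the AI suggested).
        using (var key = Registry.CurrentUser.OpenSubKey(@"Software\Classes\.pdf"))
        {
            if (key?.GetValue(null) is string perUser)
                return perUser;
        }

        // 3. Machine-wide default.
        using (var key = Registry.ClassesRoot.OpenSubKey(".pdf"))
        {
            return key?.GetValue(null) as string;
        }
    }
}
```

The returned ProgID can then be mapped to a launch command under HKEY_CLASSES_ROOT\&lt;ProgID&gt;\shell\open\command - exactly the kind of multi-step detail that a quick, confident answer glosses over.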
Some will shrug - this is, after all, why humans should review and test AI suggestions. But it does at least show the limitations of blindly following the AI. Had I relied on its confident initial answer, a quick test might not have uncovered the subtle problems. The code might easily have been released and then made its way into factories, where it is hard to update (the machines are rarely connected to the internet).
A more positive example was my question on how to suppress warnings when loading a URI in the C# WebBrowser component. The AI pointed out the "ScriptErrorsSuppressed" property of the control, which resolved my problems. This is the simple type of contained question at which AI shines.
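For completeness, here is a minimal WinForms sketch of that fix (the file path is illustrative):

```csharp
using System.Windows.Forms;

// The legacy WinForms WebBrowser control, with script error dialogs
// suppressed before navigating to a document.
var browser = new WebBrowser
{
    ScriptErrorsSuppressed = true,  // no script error pop-ups
    Dock = DockStyle.Fill
};
browser.Navigate("file:///C:/Manuals/user-guide.pdf");
```

One documented caveat: setting ScriptErrorsSuppressed to true can suppress other dialog boxes raised by the control, not only script errors.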
I was speaking with an experienced Test Manager at a major UK betting company. When I asked him about his experiences with AI, particularly for writing unit tests, he mentioned that the results have been poor. In his experience it would usually take several cycles of modifying the prompt (a skill sometimes known as "prompt engineering" or "query formulation") before the answer was usable. And then integrating the code into their (sizable) codebase required the suggestions to be completely rewritten. He thought that the AI was a net time-waster, especially for more experienced testers.
The context matters here: for greenfield projects, copy-pasting AI code into the (growing) codebase can be productive. But for larger, more complex real-world codebases, much greater care is required. Often, the AI-suggested code is difficult to use or simply wrong.
In summary, my experiences with AI have been mixed. Asking the AI for suggestions on simple, direct queries, and then integrating those suggestions into the existing application is sometimes effective. But attempting to integrate code into a codebase which must be maintained many years into the future is much riskier.
Some of us have grown up in a world where smartphones have always existed. Many may struggle to imagine a world without their smartphone. We're increasingly chained (some might say enslaved) to these devices, requiring them for bank payments, ticket collection, and more, on an ever-growing list. How many of us could confidently navigate a physical map, having used Google Maps (or its equivalent) for years? What happens when the services we rely on disappear behind a paywall, or simply stop working altogether?
I fear the same is happening with AI. By all means, make occasional use of these tools to learn and be productive. But recognize the risks. Before you turn to that smartphone app or AI tool, attempt the problem yourself.
Stay in control and think for yourself – the tool you rely on today might not be here tomorrow. ®
Source: The Register