It’s surprising when a new technology I predicted actually arrives. On a hiking trip in the southern part of my US state, I realized that the progression of software creation was toward more abstraction: the programming languages, libraries, and frameworks of the time were moving closer to how one’s natural language works (the language you converse in, not a programming language). Just think about how domain-driven design is all about capturing the “ubiquitous language” that “domain experts” use, which leaves its marks on the code base. We went from writing machine code directly on proprietary computers, to higher- and higher-level languages… And then LLMs came onto the scene.
Today’s programmer gets to interact with a new paradigm of software creation: so-called “AI”, specifically applied machine learning via large language models. If we aren’t already ‘there’, then soon every effective programmer will be using some sort of AI tooling. We’re still defining what this paradigm looks like and how to leverage AI correctly. For my part, I’ve enjoyed conversing with models to learn domain-specific vocabulary (which helps with digging in), to learn the history of topics, and to work through hypotheticals to better understand a subject.
When I dive into creating software solutions now, I’m comfortable using agentic coders. There is a craft to writing a prompt clear enough for an agent to follow, and it requires a real level of understanding from the person behind the prompt.
I also enjoy opening up open source projects like Blender and Godot and asking an agent about the codebase. This is where I’ve found value that I didn’t initially anticipate.
The field of software engineering is constantly in a state of change. This is what attracts me to computers: I get to indulge my desire to learn about things and systems. LLMs and related AI technology have been a boon.