Programming with ChatGPT

For fun, I’ve decided to spend a few days creating a fairly complex client/server (CRUD) application using Python/FastAPI, Vue.js, and a PostgreSQL database. Also for fun, I’ve decided to let ChatGPT do 100% of the programming, so my role is restricted entirely to writing prompts, pasting the code into Visual Studio Code, and testing. Even in cases where I could easily do something by hand, I asked ChatGPT to do it. Think of the movie Super Size Me, but for IT.

I want to record my thoughts as I work on this project!

What’s worked well

  1. ChatGPT excels where I want and need the help. I’m not a full-time application developer, so I have no desire to fill my brain with things I will never remember, like HTML and CSS and stuff like that. ChatGPT has done a great job: “Can you please make the box a bit longer and a slightly brighter shade of green?”
  2. ChatGPT doesn’t just give me the code; it explains what’s happening and highlights important topics. This is important, because my acting as a “dumb scribe” for ChatGPT does not work; see the next section for bad examples! That means I need to have as deep a knowledge of the code as possible.
  3. ChatGPT helps me learn. Before I started this project I had no experience with Vue.js. Thanks to ChatGPT’s constant input and help, after just 2 days I have a much better knowledge of the framework.

What’s worked badly

  1. For whatever reason, ChatGPT seems to have good days and bad days. I have no logical explanation for this; I can only guess it has to do with my prompt history (i.e., the “context”) – but that’s just a guess. One example: yesterday I had to stop our session many times: I’d give it a source code file, instruct it clearly to change only the business logic and not the look-and-feel, and ChatGPT would change both. But today, working with that exact same file, ChatGPT only makes the changes I request. Another example: yesterday, no matter what I tried, ChatGPT would always regurgitate the complete source file; but today, it (thankfully) only gives me the bits and pieces that have changed. Degree of frustration: *****
  2. Infinite debugging loops. Yesterday was a particularly bad day for this, but it happens frequently. ChatGPT gives me code that has an obscure bug. I feed the error message into ChatGPT, and it spits out essentially the same code that created the bug. I call this an “infinite debugging loop” – and I’ve needed to “break out of the loop” many times – but sometimes telling ChatGPT to take a different approach (and having it really take a different approach) is easier typed than done! Degree of frustration: ***
  3. More than I need. Yesterday was also a particularly bad day for this. I needed a simple update to a single file (e.g. changing the button text from “Save” to “SAVE”). ChatGPT would then give me not just the needed file, but also 4-5 other files (usually with the text “Please ensure file XYZ looks like this:”). Degree of frustration: *
  4. Unable to debug some problems. There have been a number of bugs during development that ChatGPT was incapable of solving. After quite some frustrating time and back-and-forth, I carried out my own debugging. I found the need for this could be reduced by ensuring ChatGPT always took “baby-steps” that I could continually test. But I did want to point out: total elimination of the human factor in debugging is not yet possible.
  5. ChatGPT often goes into “guess mode.” I encountered this just this morning, when a Vue component with a dynamic table suddenly started adding duplicate entries to the table. ChatGPT: “Let’s see if this is a race condition…” It was not. ChatGPT: “OK, let’s see if they are being added to the local array and not the server…” They were not. Guess after guess – you get the idea. I gave up. Degree of frustration: ***

Other considerations

  1. I’ve done “extreme programming” or “pair programming” many times, and the good social interaction between programming partners can really keep the momentum going. Even with ChatGPT instructed to treat me nicely and call me Dr. Ken, I can only stomach about 90 minutes of working with it before having to take a “frustration break.”

Advice for students and others

Naturally your mileage may vary, but I’ve found that programming with ChatGPT requires not only constant attention on my part and a deep knowledge of the code, but also the ability to carry out my own debugging. I can easily see that without some prior programming experience – and especially debugging experience – working in this way might be very slow and tedious. When you do enough debugging, you develop a “gut instinct” for where the problem could lie – and that was essential for me to climb out of the holes ChatGPT would occasionally dig for me.

NOTE: This article was published on LinkedIn

No Such Agency

Most people think the NSA is located in Ft. Meade, Maryland – and indeed, part of it is there. But according to rumors — and mind you, these are just rumors! — another NSA complex is located in San Antonio.

Of course, San Antonio is HOT – it actually broke global heat records in 2023 for the most continuous days above 40C/100F. So as you can imagine, IF the rumors are true – and I have no way of knowing whether they are – and IF the NSA were located in San Antonio and operated a huge datacenter there, then it would be quite reasonable to expect a LOT of air conditioning.

Well, rumors aside, right next to a Walmart I spotted a HUGE field of massive air conditioning units – with no buildings in sight! To give a sense of scale, these air conditioning units easily cover an area of 10 football fields! So it does make one think: what exactly is being cooled, where, and for what reason?

Run on water

Texans can be crazy. After the world’s hottest and longest summer, with 100+ continuous days of temperatures higher than 40C/100F, San Antonio had a cold spell where the temperatures dropped below zero.

Because this never happens, the Texans of San Antonio panicked. The television stations broadcast instructions about the 3 P’s: Pets, Plants, Pipes. And there was a “run” on bottled water, since the Texans thought the world might actually end.

When programs write programs for programs

The evolution of programming languages from the electromechanical 0GL to the advanced 5GL has fundamentally altered human-computer interaction. High-level languages and Low-Code/No-Code platforms have democratized programming, leading to the recent integration of AI tools that challenge traditional programming roles. The confluence of AI with coding practices may not be merely a further incremental change; it could represent the inception of a new paradigm in software development, a symbiosis of human creativity and computational efficiency.

The human/computer interaction

How humans program computers has changed only a handful of times in the last 130 years. The first tabulating machines were electromechanical. Introduced by Herman Hollerith for the 1890 US census, these business machines eventually put the BM in IBM. They could do limited digital processing on data provided to them via punched cards. An operator would program them with jumper wires and plugs on a pin board, telling the electricity where to flow and thereby which calculations to carry out. Let’s call this programming approach the Zeroth Generation Language, or 0GL.

The first large computers that followed borrowed Joseph Jacquard’s loom approach from 1803, using a defined instruction set encoded in ones and zeros; these were the First Generation Languages (1GL). Often the programs were fed in on giant rolls of black tape with punched holes, a technology dating back to Basile Bouchon in 1725. The computing power was limited, but the only limit on the size of your application was how much tape your rolls could hold.

The Second Generation (2GL) assembly languages increased human usability by replacing 0’s and 1’s with symbolic names. But in fact this was a small paradigm change, because these languages were just as tied to their hardware as were the wires in the tabulating machines 50 years before them.

The next great jump was to the Third Generation Languages (3GL): FORTRAN (in 1957) and COBOL (in 1959). These languages were more human-readable than assembly, but that was not the key point. The key point was abstraction, achieved via machine-dependent compilers, so that a single FORTRAN or COBOL application would, in principle, give the same answers on any machine on which it was run.

The transition to Fourth Generation Languages (4GL) was all about a leap in usability. SQL, invented in the early 1970s, is the most notable example, with its human-like syntax: you tell it what you want, and it figures out how to get it. Despite its age, SQL has never been replaced and remains the gold standard for interacting with relational databases today.
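
To make that declarative idea concrete, here is a minimal, illustrative sketch – the table and data are made up, and it uses Python’s built-in sqlite3 module rather than the PostgreSQL database from my project. The SQL statement says what result is wanted; the loop below it spells out, step by step, how to compute the same answer.

    import sqlite3

    # Illustrative example: made-up table and data, in-memory SQLite instead of PostgreSQL.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
    conn.executemany(
        "INSERT INTO employees VALUES (?, ?, ?)",
        [("Ana", "IT", 70000), ("Ben", "IT", 55000), ("Cleo", "HR", 60000)],
    )

    # 4GL, declarative: state WHAT you want; the database decides HOW to get it.
    rows = conn.execute(
        "SELECT name FROM employees WHERE department = 'IT' AND salary > 60000"
    ).fetchall()
    print(rows)  # [('Ana',)]

    # 3GL-style, imperative equivalent: spell out HOW to find the same answer, row by row.
    matches = []
    for name, department, salary in conn.execute("SELECT * FROM employees"):
        if department == "IT" and salary > 60000:
            matches.append(name)
    print(matches)  # ['Ana']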

Many computer scientists argue that the newest Low-Code/No-Code programming environments, such as Microsoft PowerApps, are the latest addition to the 4GL cadre since they similarly require little knowledge of traditional programming structures. This paradigm is exploding in popularity and transforming the enterprise IT landscape: business users (not IT professionals) create ephemeral applications to solve specific and often short-term business problems. But how ironic that with their GUIs and controls and connectors, they are the modern digital equivalents of the 0GL tabulating machine pin boards from 130 years ago!

Some people have argued there are now Fifth Generation Languages (5GL), used for artificial intelligence and machine learning, where the focus is on the results expected, not on how to achieve them.

From coding by hand to AI collaboration

The evolution from 0GL to 5GL is all about leaps in how humans interact with machines. Not surprisingly, the advent of ChatGPT (and its cousins like Bard and GitHub Copilot) has brought about a new paradigm in how we develop applications. As today’s college computer science students well know, you don’t have to write your own Java/PHP/Python… code anymore; instead, you can ask ChatGPT to write it for you. You can also feed ChatGPT buggy or low-quality code and ask it to remedy the situation, or to create the tests and documentation. To be sure, there are limits, and a good human understanding of the language is essential to avoid errors and ensure you get the results you want. But the technology is advancing rapidly, its limits are contracting, and the amount of correction the user has to apply shrinks every day.

If we project this situation forward – even just a bit – its ludicrousness becomes self-evident: humans asking AIs to create human-readable code for humans who no longer need to read the code! This paradox underscores a new era in which the traditional roles of human programmers are not just assisted but fundamentally altered by artificial intelligence; it marks a significant evolution in computational development.

With artificial intelligence now a key player in the realm of code creation, we need to examine its repercussions on the craft. The present state may be the start of a larger change, in which artificial intelligence becomes a collaborative partner in code creation and the line between developer and programming tool grows increasingly blurred – in other words, a symbiosis of human creativity and computational power.