The Transactional Echo Necessity

The Echo Chamber and the Mountain

Imagine standing inside an echo chamber and letting out a full-throated yell. The response is instantaneous. The walls throw your voice back at you before the sound even finishes leaving your mouth. It doesn't matter what you said. It doesn't matter whether it was brilliant or nonsense. The echo chamber doesn't care about quality. It only cares about volume and proximity. You yell. It answers. That's the deal.

Now walk outside. Stand on the summit of a tall mountain or at the edge of the ocean where the water meets the shore and yell the same thing. Nothing. The wind takes it. The waves don't care. The open air swallows your voice whole and gives you nothing in return — unless someone else is standing near enough to hear you. And if they are, you don't even need to yell. You can speak at a normal volume. You can lean in and whisper. What you say travels across that short distance and lands in another mind, and what comes back is a real response — something shaped by understanding, not just proximity and resonance.

This is where I find myself on a Friday morning, somewhere along an hour-and-forty-five-minute commute, thinking out loud into my phone. I've been spending time in some very active echo chambers in the technology world. The IBM i community on LinkedIn is a warm and supportive one — and I am genuinely proud to be a part of it. But it is, without question, an echo chamber. The same names. The same discussions. The same cautious pace of change. A handful of recognized voices whose words get amplified while many others go unheard. There is deep value in that community, and I will always honor it. But I've also been listening to what is happening outside of it — the open sky, the mountainside, the roaring shoreline of the broader development world. And the conversation happening out there is loud, fast, brilliant, and urgent.

This article is my attempt to step outside and speak to both groups at once — but especially to the developers who are newer to the craft, those who learned in the modern era and are moving at the speed of what's new. I want to reach across the echo chamber wall and talk with you, not talk at you. I'm cheering for you. What you're doing with AI-assisted development, agent workflows, vibe coding, and modern platform tooling is genuinely exciting. You are building at a pace, and with a collective brainpower, that is off the charts, and I have enormous respect for what you're creating.

But I have something to say about the foundation beneath all of it. And I think it matters more right now, in the AI coding era, than it ever has before.

The Systems Nobody Talks About in the Systems Everybody Uses

Let me introduce you to a side of computing that tends to live at the back of the room.

Not the data centers you read about in the news today — the ones drawing protest signs and city council debates about power consumption and water usage and the environmental impact of running AI inference at scale. Those are real concerns and worth discussing. But that is a different room entirely.

I am talking about data centers of a different vintage. Climate-controlled facilities that have been running since computing was young. Secure, purpose-built environments that house large systems not because someone wanted to build a monument to technology, but because expensive, powerful, precision machines need the right conditions to do their work reliably. These rooms exist because the work inside them has never stopped.

Behind at least half of the transactions you conduct in any given week — and for many people, every single day — there are large Online Transaction Processing systems. OLTP. You probably haven't thought much about them. They don't run flashy demos at conferences. They don't have influencer advocates. They are not the subject of viral LinkedIn posts. They simply work. Day in. Day out. Year after year. Decade after decade.

The two platforms I want to focus on are IBM Mainframe and IBM i — and before any eyes glaze over, stay with me.

IBM Mainframe systems run COBOL applications that sit behind virtually every ATM transaction, bank transaction, insurance claim, financial exchange, and major government service in the developed world. We are talking about the backbone of global commerce. These systems process more transactions per second with a lower failure rate than almost any other computing platform in existence. The software running on them has been carefully maintained for thirty, forty, fifty years. It is not old in the way a broken-down car is old. It is old in the way a cathedral is old — engineered with precision, refined over centuries of use, and still standing because the people who built it understood what they were building for.

IBM i — which long-time practitioners will remember by the names iSeries and AS/400 — is similarly battle-tested. Fortune 500 companies run on it. You've likely interacted with it at Home Depot, Lowe's, Costco, on cruise ships, in casinos, and through hundreds of other businesses without ever knowing it was there. The applications on these systems are written primarily in RPG, a language engineered for business transaction processing from its first day of existence. It does what it does with the kind of dependability that modern developers rarely encounter firsthand.

Here is what I hear from voices in the broader developer community — the loud, fast-moving, AI-empowered world:

"Why don't we just rewrite all of this? The code is old. The people who understand it are retiring. Let's port it to a modern platform and be done with it."

I understand the instinct. I do. But I want to gently, respectfully, and firmly push back. Not because the goal is wrong, but because the reasoning underneath it is missing a critical piece.

What's Actually Being Lost When "Rewriting" Goes Wrong

The problem is not the code itself. The problem is the engineering philosophy the code represents — and the fact that we have largely stopped teaching it.

Here is what I mean.

These older systems — Mainframe, IBM i — are built around a concept that sounds almost too simple to be profound: the transaction. A complete unit of work with a defined beginning, a controlled middle, and an explicit end. Either the entire transaction succeeds, or it rolls back cleanly. There are no partial states. There are no orphaned records. There is no ambiguity about whether the thing happened or didn't. The system knows. The data knows. And the next transaction begins on solid ground.
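The all-or-nothing behavior described above can be shown in a few lines. This is a minimal illustrative sketch using Python's built-in sqlite3 module — the table, names, and simulated failure are mine, not from the article — demonstrating that a failure in the middle of a transaction leaves no partial state behind:

```python
import sqlite3

# In-memory database with two illustrative account rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds as one atomic unit of work: both updates land, or neither."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            if amount > 60:
                # Simulated mid-transaction failure (power loss, validation error...)
                raise RuntimeError("failure between the two updates")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except RuntimeError:
        pass  # the rollback has already restored the pre-transaction state

transfer(conn, "alice", "bob", 70)  # fails mid-flight; rolled back cleanly
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # no partial state: alice still 100, bob still 50
```

The point is not the library — it is that the "orphaned record" (money debited from one account but never credited to the other) is structurally impossible when the whole unit of work lives inside one transaction.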

This sounds obvious until you spend years working in environments where it has been forgotten, ignored, or never taught in the first place. Then you start seeing what happens when transactional thinking is missing from the architecture of modern systems.

Problems don't announce themselves the next morning. They accumulate quietly, in bits and pieces, over months. A financial record that doesn't balance. An inventory count that can't be reconciled. A customer order that is in an impossible state — not complete, not open, not cancelled, just stuck somewhere the system was never designed to handle. And then one day accounting calls the development team and wants to know why the numbers don't add up. The root cause is traced back to a process that was refactored six months ago by someone who didn't understand what they were actually touching.

I have lived this. I am looking at a system right now — one I've been studying for a couple of weeks — that on its surface looked strange. Why in the world do they do it this way? And then I understood. Completely. This process runs faithfully, day in and day out, because every edge case, every unexpected event, every "what if the power goes out mid-transaction" scenario was considered, engineered for, and handled. Not logged to a file that no admin will read. Handled. Right there, in the moment, inside the transaction itself.

That kind of software engineering is hard. It is not glamorous. You cannot vibe-code your way into it. You cannot ask an AI agent to generate a transactionally correct event-driven architecture if you don't understand what that means in the first place.

The Pyramids Didn't Come With a README

I use this analogy often because it holds up every time I try to break it.

The pyramids of Egypt are engineering monuments that have stood for thousands of years. We have theories about how they were built. We have compelling arguments. We do not have proof. And the uncomfortable truth is that with everything we know about modern engineering, materials science, logistics, and construction, we could not build an equivalent structure with the same confidence their original builders evidently had. The knowledge that produced those monuments is partially, irretrievably lost.

I am not suggesting that COBOL and RPG will be lost to the same degree. I am saying the engineering philosophy behind them is already eroding in ways we may not fully recognize until the damage is done.

Modern development education teaches methods. It teaches syntax, frameworks, patterns, and paradigms. What it often fails to teach is how to think through the business process behind the software before you write a single line of code. How to trace a transaction from initiation to completion and understand every branch, every failure state, every edge case, before you design the data model. How to make the error handling as robust as the happy path — not as an afterthought, but as an integral part of the architecture.

The developers who built these long-running systems thought that way. It was expected of them. It was taught. And the systems they produced just work in ways that are difficult to fully explain to someone who has never worked inside one.

The Cafeteria That Ran Itself

I want to share a story that I think captures this perfectly. It comes from my own experience, but the lesson belongs to anyone who has ever had to build something that had to work every single time without exception.

A few years ago, I was in a meeting at a manufacturing facility when an idea came up: what if we automated the employee cafeteria benefit program? The facility had a few thousand employees who received employer-paid meal and break credits — a snack during break time, a lunch at midday, each with specific limits and controls. No extra food. No double-dipping at snack time. No gaming the system. A simple benefit, but one with a lot of rules around it.

The existing process was manual, imprecise, and expensive to administer. The idea became a project. The project became software.

I built it in .NET — C# on a Windows server, as far from IBM i as you can get technologically. But I built it the way I build everything: transactionally. The architecture was procedural. Each employee badge scan initiated a transaction. The transaction evaluated eligibility, validated the time of day, confirmed the benefit limits, applied the selection, updated the records, and closed. Every event was handled inside the transaction. Every exception was handled inside the transaction. The database connection was the only object in the true sense — everything else was procedure and event.
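The production system was C# on Windows; purely as an illustrative sketch of the same shape — every table name, benefit window, and rule below is hypothetical, not the real design — the badge-scan-as-transaction pattern looks like this in Python:

```python
import sqlite3
from datetime import datetime

# Illustrative schema: one redemption row per benefit claimed per day.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE redemptions (
    badge TEXT, benefit TEXT, day TEXT,
    UNIQUE (badge, benefit, day))""")  # "one per day" enforced in the data model

# Hypothetical benefit windows (start hour, end hour); the real rules were the business's.
WINDOWS = {"snack": (9, 11), "lunch": (12, 14)}

def process_scan(conn, badge, benefit, now):
    """One badge scan = one transaction: eligibility, time-of-day validation,
    limit checks, and the record update succeed together or leave no trace."""
    window = WINDOWS.get(benefit)
    if window is None or not (window[0] <= now.hour < window[1]):
        return "rejected: outside benefit window"
    try:
        with conn:  # commit on success, rollback on any failure inside
            conn.execute("INSERT INTO redemptions VALUES (?, ?, ?)",
                         (badge, benefit, now.date().isoformat()))
    except sqlite3.IntegrityError:
        return "rejected: benefit already used today"  # no double-dipping
    return "approved"

now = datetime(2024, 5, 3, 12, 30)
print(process_scan(conn, "B123", "lunch", now))  # approved
print(process_scan(conn, "B123", "lunch", now))  # rejected: already used today
print(process_scan(conn, "B123", "snack", now))  # rejected: outside window
```

Note that the double-dip rule is not a log entry or an after-the-fact report; it is handled right there, inside the transaction, which is the design discipline the story is about.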

The result was a system that just worked. For thousands of employees. Every shift. Every day. Hundreds of thousands of dollars saved per quarter in food benefit management costs and administrative overhead. The idea was born in a conference room. The software delivered real, measurable value to real people's working lives.

I tell this story not to congratulate myself, but to make a specific point: you can write transactionally correct procedural software in any language on any platform. This is not about RPG versus C# versus Python versus Java. This is about engineering philosophy — the discipline of thinking through the entire transaction before you write a single character of code. Knowing the flow. Drawing it on paper. Understanding every branch. Being able to describe what happens when anything goes wrong before you decide how the system will handle it.

That is the skill that built the systems running behind global commerce. And that is the skill that built a cafeteria management system in a factory in Reynosa, Mexico. Same philosophy. Different platforms.

For the Developers Who Are New to This Conversation

I want to speak directly to you now — the developers who learned in the modern era, who are comfortable with AI-assisted coding, who may have never written a line of RPG or COBOL and have no particular desire to. I am not here to tell you that you should. I am not here to tell you that your tools are inferior or your approach is wrong.

I am here to tell you that the engineering principles behind those old systems are not optional extras. They are the foundation. And as you build more complex software — especially with AI-generated code, which can produce large volumes of syntactically correct but architecturally thin solutions — the absence of transactional thinking will catch up with you. Not tomorrow. Maybe not for months. But it will catch up.

The IBM i world sometimes seems small and self-contained because it is, in some ways. Its community is tight-knit. Its pace of adoption is measured. Its culture can be resistant to perspectives from outside its own walls. But the counterpoint is not that the IBM i world is right to stay isolated — it is that the broader development world needs to stop treating OLTP architecture, transactional processing, and procedural event design as legacy curiosities from a museum.

These are not old techniques. They are proven techniques. The distinction matters.

The pace of change in the modern development world is extraordinary. The brain power being applied to new problems is genuinely inspiring. The tools coming out of the AI-assisted development ecosystem are remarkable. But speed and power applied without architectural discipline produces technical debt that compounds faster than anyone anticipates. The bugs are quiet at first. The inconsistencies are small. And then one day accounting calls.

Study the art of software engineering. Not just the syntax of your favorite language. Not just the API documentation for your framework of choice. Study what a transaction actually is. Understand why rollback exists and what it protects. Learn what an event-driven procedure looks like when it's designed to be bulletproof, not just functional. Draw the entire flow of a business process on paper before you write a single line of code.

You do not need to become an IBM i developer to learn these things. You need to become a complete software engineer.

Engineering for the Future by Honoring the Heritage

The IBM i and Mainframe platforms are not going anywhere, because the software engineering philosophy that built them is sound, and because the businesses running on them have learned — sometimes painfully — what happens when you move away from that philosophy without fully understanding what you're leaving behind.

But the developers who know these systems deeply are aging out of the workforce. The institutional knowledge they carry is not being transferred at the pace it should be. In some cases it is not being transferred at all. And the broader development world — the fast-moving, AI-empowered, cloud-native world — is accelerating into an era of complexity that will demand exactly the kind of transactional rigor that these old systems were built around.

This is the necessity I'm pointing to. Not nostalgia. Not a defense of old tools for old tools' sake. The necessity of understanding that the architecture matters as much as the technology. That procedures and transactions and event handling are not constraints to be worked around — they are the engineering that makes software trustworthy. And trustworthy software is the only kind worth building.

Honor the heritage. Engineer the future. Not as slogans. As practice.

The people still working in these systems are not standing in the echo chamber yelling about how things used to be better. They are standing outside, on the shoreline, saying something important at a normal volume. You just have to be willing to get close enough to hear it.

Summary

This article explores the tension between the fast-moving modern development world and the deep, often overlooked engineering principles embedded in legacy OLTP systems — particularly IBM Mainframe and IBM i platforms. Through the metaphor of the echo chamber versus genuine communication, Mike Moegling argues that the IBM i community's insularity and the broader developer community's dismissiveness of "old" systems both create blind spots that will become costly as software complexity continues to increase.

The central thesis is that transactional processing — the discipline of designing complete, atomic units of work with explicit beginning, controlled middle, and defined end — is not a feature of old technology. It is a philosophy of software engineering that produces reliable, trustworthy systems regardless of platform or language. A real-world example, a factory cafeteria management system built in C# and deployed to thousands of employees, demonstrates that this philosophy applies as readily to modern platforms as it does to RPG on IBM i.

As AI-assisted and agent-driven development accelerates the production of code, the absence of architectural discipline becomes an increasingly serious risk. Code can be generated faster than ever; the engineering thinking behind it cannot be automated. The call to action is directed especially at newer developers: study the art of software engineering, not just the syntax of your preferred tools. Draw the transaction before you design the system. Understand every failure state before you write the happy path.

Executive Summary

Modern development culture moves fast and produces extraordinary things — but it increasingly de-emphasizes the transactional and procedural engineering discipline that powers the most reliable systems in the world. IBM Mainframe and IBM i platforms, despite being viewed as legacy technology by many, run the financial, commercial, and logistical infrastructure of global business. Their durability is not accidental. It reflects a philosophy of software engineering centered on complete, atomic transactions, explicit error handling at the moment of failure, and procedural design that traces the full business process before a single line of code is written.

As AI-assisted development accelerates, generating code faster than architectural judgment can evaluate it, the risk of producing syntactically correct but transactionally fragile systems increases significantly. Developers who do not understand what a transaction actually is — not as a database concept but as an engineering discipline — will build systems that work until they don't, and fail in ways that take months to fully diagnose.

The opportunity is not to force modern developers to become IBM i programmers. It is to ensure that the engineering philosophy embedded in these long-running, high-reliability systems is studied, understood, and carried forward into every platform and language where complex business software is being built. The tools are new. The engineering principles are timeless.
