Ignore at your own risk: The hidden price tag of bad code

By Ben Matthews, Senior Director of Engineering at Stack Overflow.

Tuesday, 10th March 2026

AI tools have democratised the power of software, giving novices the ability to generate functioning code in moments. The emergence of vibe coding captures the excitement felt by non-technical users who suddenly find themselves capable of building applications - a milestone all developers remember - now through conversational prompts rather than structured learning. Yet beneath this sense of empowerment lies a growing risk for businesses. While AI can generate code at speed, it cannot guarantee that code's quality for the scenarios an application is targeting. For many organisations, this ease of creation is ushering in a new layer of technical fragility - one that carries tangible operational and financial costs.

When quick fixes cover deeper flaws

The rise of AI-generated code has produced software that appears functional but often fails under pressure. Code produced through automated tools tends to generalise architecture, ignore bespoke requirements, and lack context for the security concerns an application may introduce - things senior developers know and value through time-tested experience. This pattern reflects a growing problem across industries: code that looks correct on the surface can mask a foundation without the rigour that traditional development and QA practices provide.

Organisations that adopt AI-generated development without proper stewardship from engineers who understand what the code is doing are discovering that these shortcuts inevitably lead to bottlenecks and time sinks elsewhere. What began as an efficient solution becomes a long-term liability, demanding more scrutiny, developer time and overhead to address the gaps.

The rising financial burden of poor software

Bad code is not just a technical concern - it is one of the most persistent and underestimated financial drains in modern organisations. Developers have long warned that the cost of fixing code defects - often called technical debt - grows exponentially the later they are addressed or, worse, discovered. As a result, much of the most valuable engineering knowledge today is summarised rather than explored and understood.

Even when bad code does not lead to system failure, its inefficiencies can quietly escalate costs as LLMs build things behind the scenes without quality or expense in mind. One of the ironic lessons junior engineers learn early on is that fatal errors are often better than non-fatal ones, simply because they are easier to identify; teams spend far more time tracking down errors that fly under the radar than the obvious ones. Organisations that cannot find problems, are building on an unstable base and are constantly trying to realign their applications absorb higher operating expenses without gaining the competitive advantage that well-engineered software is designed to deliver.
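The "fatal errors are better than non-fatal ones" lesson can be sketched in a few lines of Python (an illustrative example, not code from any real incident): a swallowed exception keeps the program running while quietly corrupting every total computed downstream.

```python
def parse_price_loud(raw: str) -> float:
    # Fails fast on bad input: the error surfaces immediately in tests or logs.
    return float(raw)

def parse_price_quiet(raw: str) -> float:
    # Swallows the error: bad input silently becomes 0.0 and poisons
    # every aggregate built on top of it, with nothing in the logs.
    try:
        return float(raw)
    except ValueError:
        return 0.0

# "N/A" contributes 0.0 to the sum - the books are quietly wrong,
# and nobody is alerted that a value was dropped.
total = sum(parse_price_quiet(p) for p in ["9.99", "N/A", "5.00"])
```

The loud version would crash on `"N/A"` and be fixed the same afternoon; the quiet version can ship to production and misreport revenue for months.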

Security risks hidden in plain sight

The most dangerous consequence of low-quality code is the security lapses it can create. AI-generated code often pulls patterns from training data without regard for security best practice. Input validation may be weak or non-existent, authentication steps may be oversimplified, and outdated dependencies may be silently included. These vulnerabilities become opportunities for bad actors - and because the code appears functional, many organisations fail to detect the risks until it is too late. In one instance, vibe coding produced an account verification email that contained the user's actual passcode in the message body. While it looked correct at first glance, the AI tool had immediately compromised security.
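As a minimal sketch of the safer pattern an experienced reviewer would insist on (illustrative only - not the code from the incident above), a verification flow should store only a hash of the one-time code, so that neither the database nor any email template ever holds a reusable secret:

```python
import hashlib
import hmac
import secrets

def issue_verification_code() -> tuple[str, str]:
    """Generate a one-time code; persist only its hash, never the raw value."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit code, crypto-strength RNG
    digest = hashlib.sha256(code.encode()).hexdigest()
    # The raw code goes to the user out-of-band (email/SMS); only the
    # digest is stored, so a leaked database row reveals nothing usable.
    return code, digest

def verify(submitted: str, stored_digest: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    candidate = hashlib.sha256(submitted.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_digest)
```

The vibe-coded version emailed the stored passcode itself; here the secret exists only transiently, and `hmac.compare_digest` guards the check against timing attacks - exactly the kind of detail that looks like pedantry until it is exploited.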

Security teams then face the challenge of reactively fortifying systems that were never built with protection in mind, retracing the AI's steps in a perpetual defensive posture. This approach is expensive, time-consuming, and insufficient in a threat landscape defined by rapid exploitation and sophisticated attack chains. Using AI to learn about and react to threats makes sense, but using it as the foundation you build on can be an expensive path.

The human cost: Burnout and bottlenecks

Developers feel the impact of bad code more than anyone. In organisations where rapid output is prioritised over reliable expertise, engineers become trapped in a reactive cycle. Instead of building new features or exploring innovation, they spend their time stabilising systems that were created hastily or without technical understanding. If you think building at a steady pace is expensive, wait until you see how long it takes when you build too fast.

This environment is a direct contributor to burnout. Repeatedly solving preventable problems erodes morale, and teams lose the space to think strategically, improve architectures, or propose meaningful changes. As technical debt grows, engineering velocity slows. Eventually, the organisation finds itself constrained not by a lack of ideas, but by the weight of past decisions.

Why skilled developers still matter

Despite fears that AI will replace developers, the value of experienced engineering has only increased in an era where “anyone” can write code. Skilled developers do more than produce syntax; they anticipate edge cases, plan for scalability, embed security principles, and build systems designed to evolve. Not only do they know how to write code, they can also recognise which patterns are safe, scalable, and battle-tested.

One study suggests that developers who know how to instruct AI achieve higher accuracy scores, with scores jumping from 17.7% to 78.7% in some cases. The value of this experience lies not just in hand-writing code, but in understanding how a new tool functions.

AI can accelerate tasks, support prototyping, and inspire creativity. However, it cannot replicate the intuition, judgement, or foresight into downstream consequences that it depends on to function correctly. As more people gain the ability to generate code, the role of curated, experience-driven knowledge becomes even more important. Organisations that treat AI as a shortcut rather than a tool risk undermining the very foundation of their platforms.

Building a future where speed doesn’t compromise quality

The path forward for businesses is not to reject AI-assisted coding, but to implement it with discipline. Governance frameworks must ensure that all code - regardless of how it was generated - is reviewed, tested, and validated by people with the expertise to judge its reliability. Engineering standards should remain uncompromised, and the culture should reward careful design as much as rapid experimentation. The goal is not fewer questions, but fewer avoidable mistakes baked into the codebase.

Organisations that strike this balance will gain the benefits of AI, which can be plentiful, without falling victim to its pitfalls. They will develop faster while maintaining the integrity of their systems. They will innovate without accumulating destructive technical debt. Most importantly, they will empower their engineering teams instead of overwhelming them. In doing so, they transform shared technical knowledge into a strategic asset rather than an afterthought.

Quality remains the competitive advantage

Bad code carries a cost that touches budgets, security, operations, employee wellbeing, and long-term competitiveness. These costs are avoidable, however, by pairing the creativity and speed of AI tools with the expertise and critical thinking of trained developers. Reliable, community-validated knowledge plays a crucial role in ensuring that speed does not come at the expense of quality. The companies that succeed in the AI era will be those that understand that while anyone can generate code, building good software still demands skill, foresight and a commitment to excellence.