Bug Prediction in Code Changes


Wheel Pros, an aftermarket wheel manufacturer and distributor with a network of more than 25,000 dealers in over 30 countries, worked with systems integrator Presidio to deploy Amazon CodeGuru from Amazon Web Services. Wheel Pros, which manages more than 300 microservices, uses CodeGuru to improve code quality and application performance while reducing manual effort. “Amazon CodeGuru Profiler analyzes the application’s run-time performance and, using machine learning, provides recommendations that can speed up the application, so we don’t have to have our developers figure out what is the best way to configure from a performance perspective,” says Rich Benner, Wheel Pros chief information officer.

Ihomer, a Netherlands-based provider of electric charging and energy-management services, uses AI coding assistants to accelerate software development but recognized that AI-generated code could inadvertently violate best practices or security policies if left unchecked. The company deployed Codacy to mitigate those risks, achieving a 20% reduction in duplicate code across key repositories, according to a Codacy case study.

The growing use of AI coding assistants is driving interest in bug-prediction systems. A 2024 GitHub international survey of 2,000 software professionals found that 97% had used AI coding assistants at some point, though the share who said their companies encouraged or permitted those tools was smaller and varied by country. An Apiiro study found that while AI coders reduced minor errors, cutting trivial syntax errors by 76% compared with non-AI code development, they generated more serious ones: a 150% increase in architectural flaws and a 300% increase in privilege issues.

Ironically, “the only solution is to use more AI” by training another tool to spot bad AI-created code, said Chris Wysopal, co-founder and chief security evangelist at Veracode, in an interview with trade publication Bank Info Security.

Indeed, organizations are turning to AI to solve this problem. The market for AI-based debugging systems was valued at $1.18 billion in 2024 and is projected to reach $1.33 billion in 2025 and $3.92 billion by 2034, a CAGR of 12.8% over the forecast period, according to Polaris Research.

Success factors include comprehensive data collection, continuous model refinement, and strong integration with existing workflows. Organizations that have successfully deployed these systems report that predictive models can identify high-risk code sections, enabling developers to focus testing efforts more efficiently.
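At their core, such predictive models score each code change on features drawn from the project's history and rank changes so the riskiest get the most scrutiny. The sketch below illustrates the idea with a hand-weighted logistic scorer; every feature name, weight, and example change is illustrative, not drawn from any vendor's product (a real system would learn the weights from the project's own commit and defect history):

```python
import math

def change_risk(lines_changed, files_touched, past_bug_fixes, author_commits):
    """Score a code change's bug risk in (0, 1) from simple change metrics.

    Weights are illustrative placeholders; a deployed model would fit
    them to historical commits labeled with later bug fixes.
    """
    # Larger, more scattered changes in historically buggy files score higher;
    # author familiarity (prior commits) pulls the score down.
    z = (0.012 * lines_changed
         + 0.30 * files_touched
         + 0.50 * past_bug_fixes      # prior bug fixes in the touched files
         - 0.002 * author_commits     # author experience lowers risk
         - 2.0)                       # bias so small routine changes score low
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to a 0..1 score

# Rank hypothetical pending changes so review and testing effort
# concentrates on the highest-risk ones first.
changes = {
    "fix-typo":      change_risk(3,   1, 0, 500),
    "refactor-auth": change_risk(420, 9, 6, 40),
    "add-logging":   change_risk(60,  2, 1, 200),
}
for name, score in sorted(changes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

The ranking, not the absolute score, is what matters operationally: teams typically set a threshold above which a change triggers extra review or targeted tests.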