Why Some Devs Remain Wary of AI

Updated August 21, 2025

by Hannah Hicklen, Content Marketing Manager at Clutch

AI tools are now deeply integrated into traditional software development workflows. From AI-generated code to automated testing and code reviews, this technology is augmenting the work of human developers to increase speed, productivity, and consistency.

But despite the promises and hype, some software developers remain cautious about relying on AI tools in their work. In June 2025, Clutch surveyed 800 experienced software developers and engineers at companies across North America about AI and software development. While the majority are excited about the use of AI, 10% have serious concerns about the technology and the future of software engineering. They worry not only about the accuracy of the tools themselves but also about the ethics and long-term consequences of their use.

[Chart: Clutch data on AI and software engineering]

This skepticism may prove to be a healthy influence on the industry as a whole, driving progress towards more reliable and ethical AI tools. As businesses navigate both the promises and perils of AI-augmented workflows, healthy AI skepticism will remain an important part of the conversation.

AI's Integration Into Development Workflows

AI is rapidly being integrated across the entire software development lifecycle, transforming how developers write, test, and maintain code. What started with code generation has now expanded to include tasks like automated code reviews, test creation, bug detection, and even architectural recommendations.

Now, AI is becoming a hands-on collaborator at nearly every stage of development, including design, programming, and code review. Here is how AI is being used throughout the development process:

Software Design

Software design involves creating a comprehensive plan for how a software system will work, based on relevant business needs and requirements. Several AI tools can aid in design:

  • MyMap.AI: Generates charts and diagrams for visualizing the design of a software system.
  • Aqua: Automates many requirements management tasks, including extracting and summarizing software requirements from different sources.
  • ChatGPT: Drafts complete software design documents drawing on user specifications and uploaded sources.

Programming

Programming is the phase where the ideas in the design plan are implemented into functional code. Programmers are increasingly relying on a number of AI aids:

  • GitHub Copilot: Autocompletes code fragments or writes entire blocks of code based on natural language inputs (see the sketch after this list).
  • Claude Code: Works from within the terminal to understand your entire codebase and make coordinated changes across multiple files.
  • CodePair: Allows for real-time collaboration between programmers in an integrated development environment with AI assistance.
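
To make "natural language inputs" concrete, here is a hypothetical exchange: the developer writes a plain-English comment, and the assistant proposes an implementation. The prompt and the resulting function below are invented for illustration, not actual output from any of these tools.

```python
import re
from collections import Counter

# Developer's prompt, written as a comment:
# "Return the n most common words in a text file, ignoring case."

def most_common_words(path: str, n: int = 10) -> list[tuple[str, int]]:
    # The kind of completion an assistant might propose. It runs, but it
    # still needs human review (e.g., how should hyphenated words count?).
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    return Counter(words).most_common(n)
```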

Review

Software reviews involve applying quality checks to completed code. AI tools such as these can be used in reviews:

  • DeepCode AI: Analyzes and fixes security vulnerabilities in codebases using fine-tuned AI models.
  • Codiga: Applies static code rules to test for and fix common errors and vulnerabilities, automating code reviews and security analysis (see the sketch after this list).
  • CodeRabbit: Initiates full AI code reviews and provides simple summaries of changed files and pull requests.
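
As an illustration of the "static code rules" such tools apply, here is a classic Python pitfall that most static analyzers flag. The example is generic, not output from any particular tool:

```python
def add_tag(tag, tags=[]):
    # Flagged by static analysis: the mutable default list is created
    # once and shared across every call, so tags silently accumulate.
    tags.append(tag)
    return tags

def add_tag_fixed(tag, tags=None):
    # The conventional fix: default to None and create a fresh list.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```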

There's no doubt that incorporating some of these tools into your software development workflow could significantly boost your team's speed and productivity. However, there are potential drawbacks, and not everyone is fully on board.

Is AI-Generated Code Really Reliable?

Even today's most advanced AI systems make mistakes. That's why 14% of the developers and engineers we surveyed said they do not fully trust AI-generated code.

[Chart: Clutch data on AI and development]

“AI can also carry forward biases from its training data and sometimes produce confident but wrong logic,” says Harish Kumar, VP of Growth & Product at DianApps Technologies Pvt. Ltd. “The biggest risk is developers accepting suggestions without understanding them or testing them properly.”

Biases and hallucinations are primary challenges facing current approaches to AI. And the problem doesn't seem to be getting better: in recent testing, the hallucination rates of OpenAI's newer o3 and o4-mini models were actually higher than those of the older o1 model.

AI hallucination occurs for a variety of reasons. Modern language models are trained on vast swaths of internet content, and because that data contains many errors and inaccuracies, AI outputs can reproduce them. AI tools also struggle with novel use cases that are not well represented in the training data. The more novel your software design is, the more careful you need to be about relying on AI code to implement it.

While AI hallucinations are always a concern, they are especially pernicious in programming because they can cause:

  • Bugs: Defects in a program's code or design. These can lead to crashes, unintended behaviors, and compatibility issues. In one severe recent example, bugs in a July 2024 update from security vendor CrowdStrike crashed millions of critical devices worldwide, costing billions of dollars and disrupting the global air travel and banking industries.
  • Security vulnerabilities: Flaws that leave a software system open to exploitation by intentional attacks, including zero-day exploits, distributed denial-of-service (DDoS) attacks, and Structured Query Language (SQL) injections (see the first sketch after this list). A recent critical zero-day vulnerability in Microsoft SharePoint, a common enterprise web platform for sharing and storing digital assets, allowed hackers to exfiltrate sensitive data and steal cryptographic keys from hundreds of organizations globally.
  • Logical errors: Mistakes in the basic reasoning and control flow of a program, such as confusing a logical AND, which runs a block of code only when both of two conditions are met, with a logical OR, which runs when at least one condition is met (see the second sketch after this list).
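
To make these failure modes concrete, here are two minimal Python sketches. First, the injection risk; the table, input, and queries are invented for illustration, using only the standard library's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "alice'; DROP TABLE users; --"  # hostile input

# Vulnerable pattern: splicing user input directly into the SQL string.
#   query = f"SELECT * FROM users WHERE name = '{user_input}'"

# Safer pattern: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
```

Second, the AND/OR confusion. The access rule and function names here are hypothetical:

```python
def can_access(is_admin: bool, owns_resource: bool) -> bool:
    # Intended rule: allow access only when BOTH conditions are met.
    return is_admin and owns_resource

def can_access_buggy(is_admin: bool, owns_resource: bool) -> bool:
    # Logic error: `or` grants access when EITHER condition is met,
    # quietly widening permissions far beyond the intended rule.
    return is_admin or owns_resource
```

A test that only exercises the happy path (both conditions true) passes both versions, which is why reviewers need to read AI-suggested conditions carefully.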

In addition to these sources of error, modern AI systems still lack the common sense and contextual awareness that human developers possess. For example, a human is likely to know that something has gone wrong if a variable representing a number of users has a value other than a positive integer, whereas an AI system might keep chugging along as if everything is fine.
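
As a minimal sketch of that instinct (the function and the check are invented for illustration):

```python
def record_user_count(count):
    # A human developer knows a count of users should be a positive
    # integer; anything else signals a bug somewhere upstream.
    if not isinstance(count, int) or count < 1:
        raise ValueError(f"Invalid user count: {count!r}")
    return count
```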

Ethical Gray Areas in AI-Assisted Development

Even if AI tools were 100% reliable, there would still be ethical concerns about their use. Developers who are hesitant to use AI-generated code raise questions about who is responsible for that code, whether it violates intellectual property rights, and more.

Who's responsible when AI code causes harm?

Even the best AI systems sometimes produce bugs and other errors in code. If that malformed code makes its way into vital infrastructure, transportation, medical, and enterprise systems, it's all but certain that harm will occur.

In these cases, who should be held liable: the producers of AI systems, or the users of those systems? AI producers are likely to deflect responsibility by pointing to disclaimers nestled in terms of service pages. OpenAI's Terms of Use, for example, stipulate that "you must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services."

Even if such disclaimers hold up legally, however, it is unclear whether this should absolve AI producers of all moral responsibility, especially given the aggressive marketing of AI tools for software development.

Are AI-generated suggestions plagiarized?

Developers are also worried about intellectual property issues related to AI developer tools. AI systems learn to program by ingesting examples of programs in their training data. The similarity of AI suggestions to code in their training data may lead some to characterize AI outputs as a form of plagiarism.

AI proponents, on the other hand, are likely to view any such similarities as innocuous. They point out that the practice of copying open-source code was well-established among human coders long before the rise of AI-generated code.

How are AI decisions made?

Even the most capable AI engineer often can't tell you why an AI tool produced a specific output in a specific situation. This lack of transparency cautions against relying too heavily on AI code in situations where you need not just working code but an understanding of why that code is the way it is.

A Major Risk: Overreliance on AI

Of the software professionals we surveyed, 8% expressed skepticism about AI’s impact on software development roles. They are worried about:

[Chart: Clutch data on AI and software development]

  • Developers becoming overly dependent on AI suggestions, leading to a lack of human creativity and perspective in software design.
  • Deeper understanding and problem-solving skills eroding, yielding suboptimal code and an inability to tackle more complex problems.
  • Junior developers relying on AI to solve simple problems instead of building their fundamentals by cutting their teeth on common issues.

The throughline for all of these risks is the potential for human developers to use AI as a substitute for critical thinking rather than as an aid that frees them up for higher-level challenges.

Why Skepticism Might Be a Good Thing

While it is easy to dismiss AI skeptics as behind the times or resistant to progress, a healthy dose of skepticism could be a positive influence on the industry as a whole. Skepticism breeds accountability and curbs the worst excesses of AI hype.

More specifically, skepticism can help drive both the development of more reliable AI tools and a focus on using them more ethically through deeper questioning and audits. It encourages an emphasis on educating developers about best practices in the use of AI to avoid the risks of overreliance. Put simply, not all resistance is anti-AI; much of it is pro-responsible AI.

Building Trust in AI Tools Going Forward

To some extent, the ship has sailed when it comes to AI and software development: AI is already firmly integrated into development workflows. But that doesn't mean AI skepticism has no place in the future of software engineering.

By remaining wary and demanding that trust in AI tools be earned rather than given, developers can incentivize the creation of more reliable and ethical AI systems while avoiding the skill erosion that comes from overreliance. The end result can be an overall healthier software development ecosystem.

About the Author

Hannah Hicklen, Content Marketing Manager at Clutch
Hannah Hicklen is a content marketing manager who focuses on creating newsworthy content around tech services, such as software and web development, AI, and cybersecurity. With a background in SEO and editorial content, she now specializes in creating multi-channel marketing strategies that drive engagement, build brand authority, and generate high-quality leads. Hannah leverages data-driven insights and industry trends to craft compelling narratives that resonate with technical and non-technical audiences alike. 
