Updated August 21, 2025
AI tools are now deeply integrated into traditional software development workflows. From AI-generated code to automated testing and code reviews, this technology is augmenting the work of human developers to increase speed, productivity, and consistency.
But despite the promises and hype, some software developers remain cautious about relying on AI tools in their work. In June 2025, Clutch surveyed 800 experienced software developers and engineers at companies across North America about AI and software development. While the majority of developers are excited about using AI, 10% have serious concerns about AI and the future of software engineering. They worry not only about the accuracy of the tools themselves, but also about the ethics and long-term consequences of their use.
This skepticism may prove to be a healthy influence on the industry as a whole, driving progress towards more reliable and ethical AI tools. As businesses navigate both the promises and perils of AI-augmented workflows, healthy AI skepticism will remain an important part of the conversation.
AI is rapidly being integrated across the entire software development lifecycle, transforming how developers write, test, and maintain code. What started with code generation has now expanded to include tasks like automated code reviews, test creation, bug detection, and even architectural recommendations.
AI is becoming a hands-on collaborator at nearly every stage of development, including design, programming, and code review. Here is how AI is being used throughout the development process:
Software design involves creating a comprehensive plan for how a software system will work, based on relevant business needs and requirements. Several AI tools can aid in design:
Programming is the phase where the ideas in the design plan are turned into functional code. Programmers increasingly rely on a number of AI aids:
Software reviews involve applying quality checks to completed code. Several AI tools can assist with reviews:
There's no doubt that incorporating some of these tools into your software development workflows could significantly boost your team's speed and productivity. However, there are potential drawbacks, and not everyone is fully on board.
Even today's most advanced AI systems make mistakes. That's why 14% of the developers and engineers we surveyed said they do not fully trust AI-generated code.
“AI can also carry forward biases from its training data and sometimes produce confident but wrong logic,” says Harish Kumar, VP of Growth & Product at DianApps Technologies Pvt. Ltd. “The biggest risk is developers accepting suggestions without understanding them or testing them properly.”
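To make Kumar's point concrete, consider a minimal, hypothetical sketch in Python (the `median` function and its bug are invented for illustration, not taken from the survey). The suggestion below reads as correct at a glance, and a single small test is enough to expose it before it gets accepted:

```python
def median(values: list[float]) -> float:
    """Hypothetical AI suggestion: plausible, confident, and subtly wrong."""
    ordered = sorted(values)
    # Bug: for even-length input this returns the upper-middle element
    # instead of the mean of the two middle elements.
    return ordered[len(ordered) // 2]


def test_median_even_length():
    # A developer who tests the suggestion instead of accepting it on faith
    # catches the confident-but-wrong logic immediately.
    assert median([1, 2, 3, 4]) == 2.5  # fails: the suggestion returns 3
```

The specific bug matters less than the habit: every AI suggestion gets at least one test that encodes the developer's own understanding of what the code should do.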
Bias and hallucination are primary challenges facing current approaches to AI. And the problem doesn't seem to be getting better: in recent testing, the hallucination rates of OpenAI's newer o3 and o4-mini models actually increased compared with the older o1 model.
AI hallucination occurs for a variety of reasons. Modern language models are trained on vast swaths of internet text, and because that data contains errors and inaccuracies, model outputs can reproduce them. AI tools also struggle with novel use cases that are poorly represented in their training data. The more novel your software design is, the more careful you need to be about relying on AI-generated code to implement it.
While AI hallucinations are always a concern, they are especially pernicious in programming because they can cause:
In addition to these sources of error, modern AI systems still lack the common sense and contextual awareness that human developers possess. For example, a human will immediately recognize that something has gone wrong if a variable representing a number of users holds anything other than a positive integer, whereas an AI system might keep chugging along as if everything were fine.
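As a concrete sketch (the function name and the check are illustrative, not drawn from the survey), this is the kind of sanity check a human developer encodes almost without thinking:

```python
def record_user_count(count: object) -> int:
    # A human knows a user count must be a positive integer; encoding that
    # invariant turns silent nonsense (-3, 2.5, "many") into a loud,
    # debuggable failure. (bool is excluded because it is a subclass of
    # int in Python, so True would otherwise slip through.)
    if not isinstance(count, int) or isinstance(count, bool) or count <= 0:
        raise ValueError(f"expected a positive integer user count, got {count!r}")
    return count
```

An AI system that has never seen this invariant stated anywhere has no reason to enforce it; the human-added guard is what makes the assumption visible.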
Even if AI tools were 100% reliable, there would still be ethical concerns about their use. Developers who are hesitant to use AI-generated code raise questions about who is responsible for the code, whether it violates intellectual property rights, and more.
Even the best AI systems sometimes produce bugs and other errors in code. If that malformed code makes its way into vital infrastructure, transportation, medical, or enterprise systems, harm is all but certain.
In these cases, who should be held liable: the producers of AI systems or the users of those systems? AI producers are likely to deflect responsibility by pointing to disclaimers nestled in their terms of service. OpenAI's Terms of Use, for example, state that "you must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services."
Even if such disclaimers hold up legally, however, it is unclear whether this should absolve AI producers of all moral responsibility, especially given the aggressive marketing of AI tools for software development.
Developers are also worried about intellectual property issues related to AI developer tools. AI systems learn to program by ingesting examples of programs in their training data. The similarity of AI suggestions to code in their training data may lead some to characterize AI outputs as a form of plagiarism.
AI proponents, on the other hand, are likely to view any such similarities as innocuous. They point out that the practice of copying open-source code was well-established among human coders long before the rise of AI-generated code.
Even the most capable AI engineer often can't tell you why an AI tool produced a specific output in a specific situation. This lack of transparency cautions against relying too heavily on AI-generated code in situations where you need not just working code but an understanding of why that code is the way it is.
Of the software professionals we surveyed, 8% expressed skepticism about AI’s impact on software development roles. They are worried about:
The throughline for all of these risks is the potential for human developers to use AI as a substitute for critical thinking rather than as an aid that frees them up to tackle higher-level challenges.
While it is easy to dismiss AI skeptics as behind the times or resistant to progress, a healthy dose of skepticism could be a positive influence on the industry as a whole. Skepticism breeds accountability and curbs the worst excesses of AI hype.
More specifically, skepticism can help drive both the development of more reliable AI tools and a focus on using them more ethically through deeper questioning and audits. It encourages an emphasis on educating developers about best practices for using AI so they can avoid the risks of overreliance. Put simply, not all resistance is anti-AI; much of it is pro-responsible AI.
To some extent, the ship has already sailed when it comes to AI and software development — AI is already firmly integrated into development workflows. But that doesn't mean AI skepticism has no place in the future of software engineering.
By remaining wary and demanding that trust in AI tools be earned rather than given, developers can incentivize the creation of more reliable and ethical AI systems while avoiding the skill erosion that comes from overreliance. The end result can be an overall healthier software development ecosystem.