AI in software development: what it can and can’t do (expert interview)
Everyone in software development is talking about AI right now.
We talked with Ion Hrytsku, head of backend development at SapientPro, about what AI can and can’t do for software teams.
Ion has seven years of experience building web app architecture, plus hands-on work with machine learning and AI.
His take is practical: treat AI like an assistant and keep humans accountable for the decisions.
In this article, we break down what teams get wrong about AI, where it truly adds value, where it falls short, and how to use it without sacrificing quality.
Key takeaways:
- AI is a multiplier, not a replacement. It speeds up drafting, writing code, and exploration. But architectural thinking, domain knowledge, and accountability stay human.
- Most risks come from weak processes, not AI itself. Blind trust, relaxed reviews, and unclear boundaries create problems. Strong engineering discipline prevents them.
- AI works best for structured, repeatable tasks. Boilerplate, test generation, documentation, and early prototyping benefit most from AI. Core architecture and critical domain logic still require experienced oversight.
- Review and validation become more important, not less. As code generation gets faster, evaluation becomes the core skill. Teams must question assumptions, test edge cases, and think about scale.
Current perceptions of AI: what do people underestimate and overestimate?
AI triggers strong opinions. Some teams expect a 10x boost overnight. Others see it as a gimmick that writes messy code.
The truth sits somewhere in between. We’ll take a look at where people overestimate AI in development first.
Claim: “AI can replace developers.”
This is the loudest claim. It’s also the most misleading.
AI can generate code. It can successfully refactor smaller codebases. It can explain a function.
But it doesn’t understand your business context, own architectural decisions, or carry responsibility.
As Ion told us:
AI is a tool. It doesn’t replace engineers. It helps them work faster, but you still need someone who understands the system and takes responsibility for the result.
Ion Hrytsku, Head of Backend Development at SapientPro
When teams treat AI like an autonomous senior engineer, they lower their guard. That’s when quality drops.
Claim: “If the code compiles, it’s good.”
AI often produces code that looks correct. It compiles. It even passes a few tests.
But that doesn’t mean it fits your architecture, performance constraints, or your long-term maintainability goals.
Ion pointed out another common mistake:
Sometimes AI gives you something that looks very confident. But confidence doesn’t mean correctness. You still need to validate everything.
Ion Hrytsku, Head of Backend Development at SapientPro
AI predicts the most likely answer. It doesn’t reason about trade-offs the way a senior engineer does.
Claim: “AI understands the whole system.”
It doesn’t.
Even with large context windows, AI coding tools still struggle with complex, multi-service architectures. They don’t truly grasp domain logic and work by approximating patterns.
That works well for isolated tasks. It breaks down in deep system design.
Now the other side.
Many teams still underuse AI because they only see it as a code generator.
That’s selling it short. Let’s see where people underestimate AI.
AI as a thinking partner
Used well, AI is a powerful sounding board. You can quickly validate an idea, explore alternative implementations, and find edge cases you forgot.
Ion was clear about the upside:
AI is very powerful when you use it correctly. It can significantly speed up development, especially in the early stages when you’re exploring solutions.
Ion Hrytsku, Head of Backend Development at SapientPro
That early exploration phase matters. You can test ideas faster and move forward with more confidence.
He also framed it like this:
I don’t use AI to think instead of me. I use it to think with me. It helps me explore options faster, but I still make the final call.
Ion Hrytsku, Head of Backend Development at SapientPro
AI in repetitive tasks
Test scaffolding. Data mapping. Documentation drafts.
These tasks drain developers’ focus. AI handles them well.
And when you remove that friction, your team can spend more time on architecture and product logic.
That’s where human judgment matters most.
AI as a learning accelerator
Junior engineers can use AI to understand unfamiliar libraries. Senior engineers can use it to quickly scan new frameworks.
It won’t replace experience. But it shortens the feedback loop.
The risk appears when developers accept answers blindly. The benefit appears when they challenge them.
In short, most problems don’t come from AI itself. They come from how teams frame AI adoption.
If you expect magic, you’ll be disappointed. If you expect nothing, you’ll miss real gains.
AI works best when you treat it like a powerful assistant, not a decision-maker or a shortcut around engineering fundamentals.
AI use cases in software development: dos and don’ts
AI works well in specific situations. It creates problems in others.
The difference usually comes down to one thing: who’s in control.
Let’s look at where it adds real value. And where you should slow down.
Do: use AI for scaffolding and boilerplate
AI is strong at pattern-based tasks, like:
- Generating CRUD operations
- Writing DTOs
- Mapping data structures
- Drafting API documentation
These tasks follow predictable structures, which means AI handles them well.
This is low-risk acceleration. As long as someone reviews the output.
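To make the "pattern-based" point concrete, here is a sketch of the kind of mapping code AI assistants generate reliably. The `User` and `UserDTO` names are invented for illustration; the review note in the comments is the human part of the loop:

```python
from dataclasses import dataclass

# Hypothetical domain record and DTO -- the repetitive mapping
# code that AI assistants handle well.
@dataclass
class User:
    id: int
    email: str
    password_hash: str  # internal field, must never leak into a DTO

@dataclass
class UserDTO:
    id: int
    email: str

def to_dto(user: User) -> UserDTO:
    # Predictable field-by-field mapping: low-risk to generate,
    # but a reviewer still confirms no sensitive field leaks through.
    return UserDTO(id=user.id, email=user.email)

print(to_dto(User(1, "a@example.com", "hash")))
```

The structure is boring on purpose. That is exactly why generation is safe here, as long as someone checks which fields made it across.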
Don’t: let AI design your architecture
Architecture requires trade-offs.
Performance versus cost. Speed versus maintainability. Simplicity versus flexibility.
AI doesn’t understand your long-term roadmap or sit in product strategy meetings. It doesn’t own technical debt.
Ion was clear about this boundary:
AI can suggest patterns or approaches. But architectural decisions should always stay with experienced engineers who understand the full context.
Use AI for ideas, not for final calls.
Do: use AI to explore edge cases
AI can surface scenarios you might miss.
Things like unusual input combinations, validation gaps, and error-handling paths.
When you ask it the right questions, it becomes a fast stress-testing partner.
This works especially well in early development.
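For instance, asking an assistant "what inputs could break this validator?" tends to surface cases like the ones below. The `validate_username` function and its edge cases are hypothetical, but the pattern, including the Unicode surprise, is typical of what a good prompt shakes loose:

```python
def validate_username(name: str) -> bool:
    # Minimal validator used as the stress-test target.
    return name.isalnum() and 3 <= len(name) <= 20

# Edge cases an AI assistant might surface when prompted:
edge_cases = {
    "": False,             # empty string
    "ab": False,           # too short
    "a" * 21: False,       # too long
    "user name": False,    # embedded whitespace
    "Пользователь": True,  # str.isalnum() accepts Unicode letters --
                           # is that intended? Worth asking.
    "user123": True,
}

for value, expected in edge_cases.items():
    assert validate_username(value) == expected, value
print("all edge cases covered")
```

The valuable output is not the test code itself but the question it raises about non-ASCII input, which the original happy-path thinking missed.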
Don’t: blindly trust generated logic
This is where a lot of developers get burned.
AI can produce code that looks clean and convincing. It may even reference libraries or methods that don’t exist. Or misuse ones that do.
Ion highlighted this risk during our conversation:
You always need to double-check what AI gives you. It can hallucinate APIs or suggest solutions that seem correct but are not.
If your review standards drop because “AI wrote it,” quality drops with them.
Your process should stay the same: code reviews, testing, and validation. No shortcuts.
Do: use AI in code reviews, carefully
AI can summarize pull requests, spot obvious inconsistencies, and suggest naming improvements.
It’s helpful as a second set of eyes. But it shouldn’t replace peer review.
Human reviewers understand intent. They know past decisions and can sense when something feels off.
Don’t: use AI where domain depth matters most
The deeper you go into domain-specific logic, the more careful you need to be.
This is especially true for:
- Highly regulated industries
- Complex financial calculations
- Healthcare workflows
- Custom algorithms
In these areas, small mistakes carry large consequences.
AI performs best in structured, repeatable tasks. It performs worst when judgment, context, and accountability are the most important.
The takeaway is simple: Define clear boundaries around AI use.
Let AI handle repetitive tasks and speed up your team, but keep humans responsible for direction and decisions.
Next, we’ll look at how AI changes day-to-day development workflows. And what that means for your team structure and processes.
AI’s impact on development workflows
AI is fundamentally changing how software development teams work.
Some of those changes are subtle. Others are much more obvious.
The biggest shift is that development becomes more about validation than generation.
Before AI tools, engineers spent most of their time writing code from scratch. Now, they often start with a generated draft. That changes the equation.
Instead of asking, “Can you build this?” you start asking, “Can you evaluate this?”
Ion described this shift clearly:
The role of the developer is changing. It’s less about typing every line and more about reviewing, validating, and making sure the solution actually fits the system.
Strong review skills become even more important, especially architectural awareness and critical thinking.
If your team lacks those skills, AI will amplify its weaknesses instead of its strengths.
AI also shortens the feedback loop.
You can prototype faster, test different approaches quickly, and refactor with less friction.
That speed feels great. But it also increases responsibility.
When you work faster, you also introduce mistakes faster. Ion put it simply during our conversation:
AI can increase productivity. But productivity without control can create more problems than it solves.
Speed only helps if your quality standards stay intact.
AI also changes how engineers onboard onto new projects and teams.
Junior developers can ask questions instantly. They can explore unfamiliar codebases with guided explanations and get examples tailored to your stack.
That can accelerate learning. But it can also create dependency.
For junior developers, AI can be a great learning tool. But they still need to understand why something works, not just copy the solution.
If juniors rely on AI for every decision, they will struggle to build deep understanding. Engineering leaders need to encourage curiosity, not just copy-pasting AI code.
Teams using AI well also tend to experiment more in discovery and early development.
With AI, you can easily compare different patterns and stress-test ideas before committing.
But one workflow risk stands out.
AI-generated code can look clean and structured. But it can introduce subtle inefficiencies or inconsistent patterns.
If you don’t review it carefully, those small issues accumulate. Over time, you get silent technical debt.
Ion made a similar point when talking about oversight:
AI can help you move faster. But if you don’t have strong review practices, you can accumulate problems very quickly.
Not because AI is bad, but because oversight is.
So, the question isn’t whether to use AI. It’s whether your processes evolve alongside it.
Code reviews may need to be stricter. Architectural guidelines may need to be clearer. Documentation standards may need reinforcement.
Implementation advice for software teams: how to get AI right
Using AI isn’t the hard part. Using it well is.
If you’re a CTO or engineering manager, your job isn’t to decide whether AI is “good” or “bad.” It’s to create an environment where it improves output without lowering standards.
Here’s a practical way to approach it.
Define clear boundaries
Start by setting clear boundaries.
Where is AI allowed to assist? Where is human review mandatory? Where is AI off-limits?
For example:
- Allowed: boilerplate, test scaffolding, documentation drafts
- Assisted but reviewed carefully: business logic
- Human-led only: architecture, critical domain decisions
Ion emphasized that structure matters:
You need clear rules. If developers don’t know where AI is appropriate, they will use it everywhere.
Boundaries reduce confusion and risk.
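One lightweight way to make such rules concrete is a shared policy map that documentation or review tooling can reference. The tiers below mirror the example boundaries above; the structure itself is a sketch, not any standard:

```python
# A sketch of AI-usage boundaries as data, mirroring the tiers above.
AI_POLICY = {
    "allowed": ["boilerplate", "test scaffolding", "documentation drafts"],
    "assisted_reviewed": ["business logic"],
    "human_led_only": ["architecture", "critical domain decisions"],
}

def review_level(task: str) -> str:
    # Look up which tier a task falls into; anything unlisted
    # defaults to the strictest tier.
    for level, tasks in AI_POLICY.items():
        if task in tasks:
            return level
    return "human_led_only"

print(review_level("boilerplate"))    # allowed
print(review_level("architecture"))  # human_led_only
```

The useful design choice is the default: when a task is not explicitly classified, it falls to human-led review rather than silently becoming fair game for AI.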
Don’t relax code reviews
AI-generated code should not skip review. If anything, it deserves more scrutiny.
Encourage reviewers to:
- Question assumptions – Check what the code assumes about inputs, business rules, and system behavior. Make sure those assumptions match reality.
- Verify API usage – Confirm that methods, libraries, and configurations are valid, current, and correctly implemented.
- Check performance implications – Think beyond correctness. Consider scale, complexity, and production load.
- Test edge cases – Look past the happy path. Validate how the code behaves with invalid input, high traffic, or unexpected states.
Make it explicit that “AI wrote it” is not a quality signal by itself.
Train developers to prompt well
AI output quality heavily depends on input quality.
Vague prompts produce generic code. Specific prompts produce usable drafts.
Teach your team how to:
- Provide context
- Define constraints
- Describe expected behavior clearly
- Iterate on responses
Prompting is becoming a real skill. Treat it like one.
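A concrete way to teach it is a shared prompt template that forces those four habits. The helper below is a sketch; `build_prompt` and its field names are our own invention, not any tool's API:

```python
def build_prompt(context: str, constraints: list[str], behavior: str) -> str:
    # Assemble a structured prompt: each section maps to one habit
    # above -- context, constraints, expected behavior.
    lines = [f"Context: {context}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Expected behavior: {behavior}")
    return "\n".join(lines)

prompt = build_prompt(
    context="Python 3.11 service parsing dates from a legacy CSV export",
    constraints=[
        "accept DD.MM.YYYY and YYYY-MM-DD only",
        "raise ValueError on anything else",
        "no third-party dependencies",
    ],
    behavior='parse_erp_date("31.12.2024") returns date(2024, 12, 31)',
)
print(prompt)
```

A prompt built this way encodes every review criterion up front, so the generated draft needs fewer correction rounds. Iterating on the response then means tightening one section, not rewriting the whole request.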
Encourage understanding over copy-paste
This is especially important for junior engineers.
Ask them to explain AI-generated code in their own words. Have them modify it and let them defend design choices.
Ion made this point clearly in our discussion:
AI should support learning, not replace it. Developers still need to understand what they are building.
If your team can’t explain the code, they shouldn’t merge it.
Track impact realistically
Don’t assume AI improves productivity. Measure it.
Look at:
- Cycle time
- Defect rates
- Rework frequency
- Code review duration
- Production incidents
If velocity increases but defect rates rise, something is off.
AI adoption should always improve outcomes.
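As a sketch, those signals can be compared per sprint, before and after adoption. The numbers below are invented, but they show the exact failure mode described above: velocity rises while the defect rate quietly rises with it:

```python
# Hypothetical sprint metrics before and after AI adoption.
before = {"merged_prs": 40, "defects": 4, "avg_cycle_time_days": 3.0}
after = {"merged_prs": 60, "defects": 9, "avg_cycle_time_days": 2.0}

def defect_rate(metrics: dict) -> float:
    # Defects per merged PR -- a rising rate alongside rising
    # velocity is the red flag.
    return metrics["defects"] / metrics["merged_prs"]

velocity_up = after["merged_prs"] > before["merged_prs"]
quality_down = defect_rate(after) > defect_rate(before)

# Here velocity improved, but the defect rate went from 0.10 to 0.15:
# the "something is off" signal, invisible if you only watch cycle time.
print(velocity_up and quality_down)  # True
```

The point of tracking a ratio rather than raw counts is that faster teams merge more PRs, so absolute defect counts alone will always look worse after a speedup.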
Start small, then expand
You don’t need a 30-page AI transformation strategy to start using AI in development.
Start with a pilot project. Define use cases, gather feedback, and adjust guidelines.
When you see stable improvements, expand. This reduces resistance and helps you avoid organization-wide chaos.
But no matter how advanced AI becomes, accountability must stay with people.
Engineers own the code, tech leads own architecture, and CTOs own technical direction.
AI can be a powerful multiplier. But only if your fundamentals are strong.
Next, we’ll look ahead. Where is AI in software development realistically heading over the next few years?
Future outlook of AI in software development
AI will not replace engineering teams. But it will completely reshape how they work.
The next few years won’t be about magical breakthroughs. They’ll be about deeper AI integration.
AI will become a standard assistant
Right now, AI adoption is still spotty in some development teams.
That won’t last.
Just like version control and automated testing became standard, AI agents and assistants will likely become part of the default toolkit.
Ion sees this as evolution, not disruption:
AI will become part of the normal development process. The key question is not whether to use it, but how to use it responsibly.
The competitive edge won’t come from using AI; it will come from using it well.
Skill profiles will shift
The strongest engineers will still need deep technical fundamentals.
But two skills will grow in importance:
- Critical thinking
- System-level awareness
If AI generates more first drafts, engineers must excel at reviewing, validating, and improving them.
Prompting will still matter, of course. But judgment will matter more.
Architecture will stay human-led
As tools improve, they’ll suggest better patterns and catch more inconsistencies. They might even simulate real architectural trade-offs.
But final responsibility will stay human.
Developing complex systems requires business awareness, long-term thinking, and accountability. AI is not there yet.
Ion put it clearly during our discussion:
AI can help with many tasks. But responsibility for architecture and system design must remain with experienced engineers.
That principle is unlikely to change soon.
Regulation and governance will increase
As AI becomes more embedded in workflows, companies will formalize policies.
Expect clearer internal guidelines around:
- Where AI can be used
- How generated code is reviewed
- How data is handled in prompts
- How accountability is documented
In regulated industries, this will move quickly.
But the real transformation will be cultural.
Teams will need to normalize experimentation and stay open to new tools without abandoning engineering discipline.
AI won’t eliminate the need for a strong engineering culture. It will expose weaknesses in teams that don’t have one.
And if you treat it as a disciplined multiplier, not a shortcut, you will see the strongest results.
Conclusion
AI in software development is neither a miracle nor a threat.
It’s a tool.
If you use it without structure, it creates risk. And if you use it with discipline, it becomes an extremely powerful multiplier.
And AI does not remove the need for strong developers. It just raises the bar for them.
If you’re exploring how to introduce AI into your development workflows without lowering standards, start with structure.
And if you want a grounded, practical perspective on building AI-ready engineering teams, we’re always open to a conversation.



