How I Turn AI into Leverage in My Day-to-Day Work as a Software Developer
Now we’ve reached a point where even non-frontend programmers can say that many things change overnight (as is often the case with frontend frameworks). Last year was crazy: new models, new improvements in LLMs, new tools, agents, and new patterns for context management — all evolving at an extremely rapid pace. Despite this fast-moving environment, we as programmers have adopted AI in our day-to-day jobs. Thanks to significantly improved models and tools, we can focus more on building and thinking about what we build instead of how we build it — while at the same time having many additional things to learn.
There are many aspects to consider — from prompting, through MCPs and agents, to choosing which LLM to use. At the same time, we still need to grow our software development knowledge, as vibe coding is not a viable pattern in the real world when delivering high-quality, secure, and scalable software. One of the things I appreciate most in my job is the opportunity to evolve and learn new, exciting things. Sharing knowledge about ways of using AI for programming is especially important in these early days of its adoption, as we establish the best patterns for now and for the future. Here is my approach to using AI in programming, forged through experimentation and real-world work with AI.
Prompting
When we talk about AI in programming, we usually think about tools that leverage LLMs — and LLMs need input to generate output. Prompting is a fundamental aspect of working with LLMs: the better the prompt you craft, the better the result you get. Even though models are getting better and better and are capable of producing acceptable output even when a prompt contains minor errors, prompt quality still matters.
Here are the most important rules I follow when crafting a prompt:
- **Organize your prompt.** Name the task you give to the LLM and place it at the beginning of the prompt. If you want to refactor part of your code, explicitly ask for “refactor”. If you want a code review, start with “Review the code…”, then provide details and relevant context. Think about what you want to achieve and break it down into smaller parts. Use punctuation and TODO-like lists, include keywords that guide the LLM, and avoid ambiguity at all costs. Focus on a single, well-defined task — no one builds an app with a one-shot prompt.
- **Provide context.** Provide context through effective references: file mentions, code snippets, error messages, terminal output, screenshots, or by guiding the agent on where and what it should search for. This is no different from how you talk with a colleague or team lead when discussing code or during planning sessions. There are many ways to do this — AI tools such as terminal-based agents or AI-powered IDEs follow similar patterns, often treating the @ symbol as a way to provide direct references to your code. Avoid unnecessary context; be concise.
- **Provide examples.** If we hand the steering wheel over to the AI, we end up with code written in an inconsistent style. Sometimes we want the AI to follow a similar function, a pattern we’ve used elsewhere, or our way of writing tests — and it matters that the current task follows those same patterns. In those cases, provide a concrete example from your codebase. An example that fits the particular task’s requirements is often a better guide than rules alone, since rules are more generic and less suited to showing concrete patterns.
- **“Sacrifice grammar for the sake of concision.”** These are Matt Pocock’s words from one of his videos, and it’s great advice. LLMs can feel like conversational partners, which may give the impression that we need to use correct grammar. In reality, they are probabilistic systems that predict the next tokens, so concision is crucial when prompting for code. The more unnecessary words you include in a prompt, the higher the chance the model will hallucinate or drift away from the intended output.
- **Mix the ways you feed text to LLMs.** We create prompts using natural language, so why should we always type instead of just speaking? Many AI tools allow you to generate prompts using voice, which is often much faster than typing. I frequently mix both approaches — typing and speaking. Using voice to create prompts has an additional benefit: it helps you articulate your thoughts more clearly and trains your ability to explain complex problems, which pays off over time.
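Putting these rules together, a prompt for a small refactoring task might look like this (the file names are illustrative):

```text
Refactor @src/services/userService.ts

TODO:
- extract the duplicated retry logic into a single function
- follow the error-handling pattern from @src/services/orderService.ts
- keep the public API unchanged
- run the existing tests and fix any regressions
```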
Rules
Letting AI generate code without rules almost guarantees low-quality output. To ensure consistency, LLMs need clear guidelines. Of course, nobody wants to paste or rewrite these rules every time they craft a new prompt. Most AI tools provide more or less standardized ways to handle rules, usually through Markdown files. Many tools respect an AGENTS.md file as a source of instructions. You can place this file at different scopes of your project or even globally, often in the configuration directory of your AI tool. If you have rules you are confident you will use across every project, you can define them globally. If you have project-specific rules, you can place AGENTS.md in the root of the project, or even in a subfolder — for example, for components or a specific package in a monorepo. It’s important not to repeat yourself across AGENTS.md files, as the rules defined there are aggregated and sent to the LLM together with your prompt.
I put the following in my AGENTS.md:
- Commands used in the specific project, such as running the application, Docker commands, and testing.
- Clear boundaries defining what the AI is not allowed to do, either at the project or global level — for example, not adding new dependencies without explicit consent.
- Rules describing what the AI should and should not do when interacting with Git.
- A description of the project structure and a list of the technologies used, including important package versions and dependency constraints.
- Code style guidelines — in my case, things like early returns, preferring function declarations over function expressions, and using ESM instead of CommonJS. I often include very short examples here, showing what is considered a bad example and what a good one.
- Rules for working with tests. Here, I require the AI to write tests, run them, and either fix the implementation or propose changes to the codebase until everything goes green. When working in a TDD workflow, it’s important to explicitly tell the AI about it and allow tests to be red when appropriate.
There are certainly more things you can encode as rules. My rule of thumb is to write down any rule you expect to repeat — or have already repeated — in your prompts, while keeping every rule as concise as possible.
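To make this concrete, here is a trimmed-down sketch of what such an AGENTS.md can look like (the commands and constraints are placeholders from a hypothetical project, not a recommendation):

```markdown
## Commands
- dev server: `npm run dev`
- tests: `npm test`
- build: `npm run build`

## Boundaries
- do NOT add new dependencies without explicit consent
- do NOT push, rebase, or rewrite Git history

## Code style
- prefer early returns
- function declarations over function expressions
- ESM only, no CommonJS (`import`, never `require`)

## Tests
- write tests for new code, run them, and iterate until everything is green
```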
Plan
Having a plan is crucial when solving problems and building features. Most problems need to be broken down into smaller tasks that are logically ordered within a plan — and the same applies when working with AI. AI is not capable of delivering reliable software with a one-shot prompt or through vibe coding. A plan helps structure the work on a project or feature, creating a source of truth and a point of reference when running multiple agents. You can work on each phase separately, iterating on every chunk of the plan. Some modern AI tools offer a planning mode that allows you to create a plan with an LLM simply by discussing the requirements.

My preferred approach is to break the plan into phases, where each phase represents a larger chunk of work. For example, implementing authentication using OpenID Connect can be split into two sub-phases: frontend authentication and backend authentication. Each sub-phase then contains concrete steps — for the backend, for instance: implementing the Google OpenID strategy, implementing the authentication controller, and so on. Plans can also be stored in Markdown files, which LLMs can easily read. I always strive to follow the same loop: plan → execute → test → stage/commit → repeat.
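A minimal sketch of such a plan file, reusing the OpenID Connect example from above (the exact steps are illustrative):

```markdown
# Plan: authentication via OpenID Connect

## Phase 1: backend authentication
- [ ] implement the Google OpenID strategy
- [ ] implement the authentication controller
- [ ] add session handling and tests

## Phase 2: frontend authentication
- [ ] implement the login/logout flow
- [ ] guard protected routes
- [ ] add tests
```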
Context Management
Despite the fact that LLM context windows keep growing — with some models now supporting up to 1 million tokens — context management remains critical. The larger the context, the more noise is introduced into the conversation, which makes the generated output less accurate and increases the likelihood of model drift. There are moments when it’s better to start a new conversation, and this is where the plan from the previous section becomes invaluable. Having a source of truth, such as a plan, makes restarting a conversation easy and safe. Another technique I often use is explicitly instructing the model to rely on sub-agents for specialized tasks. A common example is using sub-agents to explore the codebase. The main agent then receives a summarized output from the sub-agent, saving a significant portion of the context window — almost as if you were combining the context windows of multiple LLMs. In some cases, to save even more context, I ask the model to locate relevant code on its own using sub-agents instead of pasting code directly into the prompt.
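In practice, such an instruction can be as simple as the one below (the phrasing is illustrative; how sub-agents are spawned depends on the tool you use):

```text
Use a sub-agent to explore the codebase and find where user sessions
are created and invalidated. Report back only a short summary with
file paths and function names; do not paste full file contents.
```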
MCPs also play a very important role here. Having too many MCPs enabled everywhere is usually not a good idea. Every MCP needs to expose its tool definitions, which can quickly clutter the context. I try to avoid enabling MCPs that are redundant for a given project. There are several ways to manage this, depending on the AI agent or tool you use.
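In Claude Code, for instance, MCP servers can be scoped to a single project through a `.mcp.json` file checked into the repository, so only the servers that project actually needs get loaded. A sketch, with a made-up server name and package:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "some-postgres-mcp-server"],
      "env": { "DATABASE_URL": "postgres://localhost:5432/app" }
    }
  }
}
```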
There is also a relatively new concept called Skills, recently introduced by Anthropic. Skills are folders — essentially packages — that contain resources which can be used by AI. These resources typically include instructions for the LLM, scripts that can be executed on the local machine, documentation, and related context. What’s especially important is that Skills are context-efficient: they load resources progressively, only when needed, which saves a significant amount of context. A good practice is to review your MCPs and check whether some of them can be replaced by Skills. In general, MCPs are better suited for dynamic or remote data, such as external API integrations, while Skills really shine when it comes to local automations and repeatable workflows.
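A Skill is, roughly, a folder like the sketch below: the SKILL.md frontmatter (a name plus a description) is what the model sees up front, and everything else is loaded only when the skill is actually used (the names here are made up):

```text
release-notes/
├── SKILL.md              # frontmatter: name + description; loaded up front
├── TEMPLATE.md           # pulled in only when the skill is triggered
└── scripts/
    └── collect-commits.sh  # executed locally, on demand
```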
Parallelism
Current AI tools have reached a point where we can spawn agents that work for us in parallel. The feature I appreciate the most is the ability to use sub-agents that are orchestrated by a main agent. In terminal-based agents such as Claude Code or OpenCode, we can create our own specialized agents — for example, for performing code reviews or searching for security vulnerabilities in a codebase. As mentioned in the previous section, sub-agents handle specialized tasks and report back to the main agent, which helps save context window space. What’s equally important is that they can focus on narrow, well-defined responsibilities, resulting in higher-quality output for each individual subtask.
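In Claude Code, for example, such a specialized agent is defined as a Markdown file in `.claude/agents/`, along these lines (simplified, and the prompt itself is illustrative):

```markdown
---
name: code-reviewer
description: Reviews diffs for bugs, style violations, and missing tests.
tools: Read, Grep, Glob
---

You are a strict code reviewer. Inspect the provided changes for bugs,
violations of the project's style rules, and missing test coverage.
Report findings as a prioritized list. Do not modify any files.
```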
Another thing I do is spawn multiple agents at the same time on different Git worktrees (more on worktrees in the Leverage Git section below). If there is a solid plan and it contains tasks that can be done independently — or if some features are already well defined — I can run them in parallel. Once the first agent finishes its work, I start reviewing its output, and I repeat the process for the remaining agents. After each task is completed, I merge the code.
When the development environment is well configured and switching between worktrees is fast, this approach can lead to noticeable time savings compared to a purely synchronous workflow. However, there are important trade-offs. Context switching introduces cognitive overhead, additional automation is required (for example, scripts that quickly set up each worktree), and these automations are often not reusable across repositories, which can become problematic. In practice, it may turn out that the time savings are minimal or even nonexistent.
That’s why I use this approach only for very small subtasks when there are many of them, or for well-documented, mid-sized features where I know I’ll have idle time while previous tasks are being executed by AI. As a rule of thumb, I also avoid having too many tasks open at once to prevent my attention from becoming diluted.
Leverage Git
Git was created long before AI agents existed, and I’m sure Linus Torvalds did not anticipate how useful it would become for AI-assisted coding.
For me, Git is extremely useful in AI-driven development in at least two areas:
- **Reviewing changes and diffs generated by AI.** Git, as a version control system, fits this task perfectly. When an agent generates code, I can clearly see what has changed, review the diff, and stage only the parts I’ve already approved. Git also gives me the ability to roll back changes at any time. At the end of the day, AI agents are just like other programmers — they introduce changes that need to be reviewed.
- **Git worktrees.** As mentioned in the previous section, Git worktrees are ideal for agents working in parallel. In simple terms, Git worktrees are multiple working copies of the same repository that share a single Git history. This allows me to have multiple worktrees, each checked out on a different branch, and switch between them without stashing or modifying code. For multiple agents, this is a huge advantage: unrelated changes stay isolated, and I have full control over conflicts, resolving them in the version control system the same way I’ve done countless times before.
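The underlying commands are simple (the paths and branch names are illustrative):

```bash
# create a second working copy of the repo on its own branch
git worktree add ../myapp-auth feature/auth

# and a third one for an independent task
git worktree add ../myapp-search feature/search

# see all working copies
git worktree list

# clean up once a branch is merged
git worktree remove ../myapp-auth
```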
Things I do not do with AI
For tasks like organizing imports, extracting functions, moving code between files, or initializing a new project, I prefer using dedicated tools. This video from CJ, one of the great folks from the Syntax podcast, captures this perfectly — I agree with him 100%.
For small tweaks and quick fixes, I also tend to write the code manually. When you know your tools well, type fast, use shortcuts instinctively, and your editor feels like an extension of your hand, you can often be faster doing these small edits yourself — corrections, refactors, code movement, and minor tweaks. I love Vim motions and use them with pleasure. Whenever I’m forced to use something other than Neovim, I make sure to bring my Vim motions and shortcuts with me.
I’d even say that having strong skills in navigating a codebase, editing quickly, and managing code efficiently is more important now than ever. Combined with AI agents, these skills allow you to be extremely efficient.
Wrapping up
Thanks to AI, I can ship faster. I can finally build things I planned a long time ago but never had the time to code. I feel the superpowers AI gives me — I see how quickly I can plan a feature, gather requirements, create code, and learn new things along the way.
At the same time, I clearly see how important my role still is. At the end of the day, LLMs are just mechanisms that predict the next token — the responsibility for the code is on my side. There are no shortcuts. I need to decide, review, and make sure I fully understand the code created on my behalf. AI cannot think for you — that is your role, and it always has been.
Being a software developer is not about typing code. It’s about creation. It’s about solving real problems. We still need to know how to code, understand patterns, and recognize what is good and what is bad.
Thanks for reading! 👋🏻
