Effective Practices for Coding with a Chat-Based AI

News Room | Published 4 July 2025

Key Takeaways

  • Coding agents are not a passing trend; they are an evolving part of the development landscape, and it is becoming essential for developers to use them effectively to enhance both efficiency and quality.
  • LLMs should not be considered commodities: the choice of LLM can greatly influence the quality of the work performed by agents.
  • While agents offer automation and efficiency, it is important not to over-delegate. Developers must remain actively involved and in control of the development process, since they are ultimately responsible for the outcome.
  • Experience continues to be a vital asset for developers, enabling them to design effective solutions, plan implementations, and critically evaluate the output generated by coding agents.

A New Workflow for Developers

Since GitHub Copilot launched as a preview in Summer 2021, we have seen an explosion of coding assistant products. Initially used as code completion on steroids, some products in this space (like Cursor and Windsurf) rapidly moved towards agentic interactions. Here the assistant, triggered by prompts, autonomously performs actions such as modifying code files and running terminal commands.

Recently, GitHub Copilot added its own “agent mode” as a feature of the integrated chat, through which it is possible to ask an agent to perform various tasks on your behalf. GitHub Copilot “agent mode” is another example of the frantic evolution in the agentic space. This “agent mode” should not be confused with GitHub Copilot “coding agent”, which can be invoked from GitHub’s interfaces, like github.com or the GitHub CLI, to work autonomously on GitHub issues.

In this article we want to take a look at what it means to use agents in software development and what kinds of changes they bring to the developer’s workflow. To get a concrete feeling for what this new workflow could look like, we will use GitHub Copilot “agent mode” to build a simple Angular app that searches Wikipedia articles and shows the results as a list (see how to access the GitHub Copilot agent in VSCode). Let’s call it the “Search wiki app”.

The app we want to build


We will first try to build the app in one shot, sending just one prompt to the agent. Next we will try to do the same with a more guided approach.

Building the App in a Single Step

“Search wiki app” is pretty simple. What it is and what it does can be described in a relatively short prompt like the one below. Note that the technical details about Angular are not necessary to illustrate the agent’s impact on the developer workflow. However, they remind us that even when working with agents, the developer must be aware of important technical details and provide them when crafting a prompt to perform a task.

Our prompt to ask GitHub Copilot “agent” to build the entire app


Generate an Angular app that queries the Wikipedia API to fetch articles that match a search term and displays the results in a list.
The app should have a search bar where users can enter a search term, and when they click the search button, it should call the Wikipedia API and display the results in a list format. Each item in the list should include the title of the article and a brief description.
The app should also handle errors gracefully and display an appropriate message if no results are found or if there is an error with the API call.
Use Angular Material for the UI components and ensure that the app is responsive and works well on both desktop and mobile devices. The app should be structured in a modular way, with separate components for the search bar and the results list. Use best practices for Angular development, including services for API calls and observables for handling asynchronous data.

The LLM Engine Matters

GitHub Copilot “agent mode” lets us choose the LLM model to use. The experiments we ran showed that the choice of the LLM engine is key. It is important to underline this concept to avoid the risk of considering LLMs as commodities whose differences are noteworthy only in nerdish discussions. This belief may even be reinforced by GitHub Copilot allowing developers to select which LLM to use from a simple dropdown list. LLMs have different (and continuously evolving) capabilities, which translate into different costs and outcomes.

To prove this, we tried the same prompt with two different models: “Claude Sonnet 4” from Anthropic and “o4-mini (preview)” from OpenAI. Both are rightly regarded as powerful models, but their nature and capabilities are quite different: “Claude Sonnet 4” is a large model that performs particularly well on coding tasks, while “o4-mini (preview)” is a much smaller, more general-purpose model. It is therefore not surprising that the results we got were very different, but this diversity is inherent in the present LLM landscape, so we had better take it into account.

Working with o4-mini (preview)

Using “o4-mini (preview)”, the GitHub Copilot agent was not able to build a working app. In fact, the first version had errors that prevented compilation. We then started a conversation with the agent, asking it to correct the errors. After a few iterations, we stopped because errors kept popping up and, more importantly, we were not able to easily understand how the solution was designed; the code was difficult to follow, even though we have a certain familiarity with Angular. For those curious, you can view the code produced in this experiment.

Working with Claude Sonnet 4

“Claude Sonnet 4” gave us totally different results. The code generated in the first iteration worked as expected, without any need for iteration or manual intervention. The design of the solution looked clean and modular, with a clear project folder structure.

We even asked it to generate an architectural diagram, and the agent produced nice Mermaid diagrams along with detailed explanations of the key elements of the design. For the curious, you can view the code produced in this experiment.

Partial view of the Data Flow diagrams generated by “Claude Sonnet 4”

Feeling “Not Really in Control”

Even though the Claude Sonnet 4-powered coding agent produced a good working solution and nice documentation, the feeling was still one of not being in control. For instance, to make sure the generated diagrams were accurate, we had to follow the code closely and cross-check it against both the diagrams and the generated documentation. In other words, to truly understand what the agent has done, we basically have to reverse-engineer the code and validate the generated documentation.

However, this should not be seen as an optional activity. In fact, it is essential because it helps us better understand what the agent has done, especially since we are ultimately responsible for the code, even if it was developed by AI.

We may say that this is not very different from having a co-worker create a diagram or document something for us. The whole point is trust. While working in teams, we tend to develop trust in some of our colleagues; outcomes from trusted colleagues are generally accepted as good. With agents and LLMs, trust is risky, given the hallucination problem that even the most sophisticated models continue to have. Hence, we need to check everything produced by AI.

A Guided Approach

Be the Architect

To stay in the driver’s seat, let’s try a different approach: first we design the structure of the solution we want to build, then we draw up an implementation plan, splitting the task into small steps. In other words, let’s start doing what a good old application architect would do. Only when our implementation plan is ready will we ask the agent to perform each step, gradually building the app. If you would like, you can view the design of the solution and the implementation plan.

Define Best Practices for the Agent to Follow

Since we want to be good architects, we have to define the best practices that we want our agent to follow. Best practices can be naming conventions, coding styles, patterns, and tools to use.

GitHub Copilot provides a convenient way to define such best practices through “instruction files”, which are then automatically embedded in any message sent to the agent. We can even use generative AI, via a standard chatbot like ChatGPT, to help us define a good list of best practices.
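In VS Code, these instruction files typically live inside the project itself, for example at `.github/copilot-instructions.md`. To give a flavor of their content, the opening lines of such a file might look like the sketch below (hypothetical content; the full instruction file we actually used is reproduced in the Technical Details section):

You are an expert Angular developer with extensive experience in Angular v19.
- Prefer standalone components and functional providers.
- Write unit tests for all components and services.
- Ensure accessibility (a11y) in all UI components.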

In this example we instructed the agent to write comprehensive tests, implement accessibility in all views, and write clear comments on the most complex pieces of code using the JSDoc standard. The results of these instructions have been very good.

For those curious, you can view the detailed instructions we used in this exercise.

Share and Enforce Best Practices at the Team Level

An interesting side effect of defining best practices in instruction files is that they can then be treated as part of the project deliverables. These defined best practices can therefore be versioned in the project repo and shared among all developers. This mechanism helps enforce the use of centrally defined best practices across all contributors of a project, as long as they use agents to help them do the work.

Build the App

Having defined the implementation plan and the best practices, it is time to build the app. This is how we proceed:

  • Each step of the implementation plan is turned into a prompt for the agent.
  • After the execution of each prompt, we check what the agent has done. If we keep individual steps small and simple, the code the agent produces at each step will not be difficult to understand and verify.
  • We may also want to start a dialogue with the agent to ask for clarifications or changes to the implementation for the current step.
  • When we are satisfied with the results of the step, we commit the changes to set a consistency point before moving on to the next step.

We build our app step-by-step, always remaining in control of what is happening.

The workflow with the agent

Such an approach makes it possible to create a good-quality app, since the agent, guided by our instruction file, typically follows many best practices. In our experience we have seen that:

  • The agent has written a comprehensive set of tests, covering a vast set of edge cases (a sketch of what such a test can look like follows this list).
  • The agent has baked accessibility into each HTML template, something that usually costs us quite some time.
  • The agent has added clear comments to the most critical parts of the code using the JSDoc standard.
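To illustrate the first point, a Jasmine spec in the spirit of the tests the agent generated might look like the following. This is our own minimal sketch, not the agent’s actual output (which is in the linked repository); it assumes a WikiService with a search method returning an Observable of results, as defined in the implementation plan described later, and it hits the real Wikipedia API, as the Step 1 prompt requests.

import { TestBed } from '@angular/core/testing';
import { provideHttpClient } from '@angular/common/http';
import { WikiService } from './wiki.service';

describe('WikiService', () => {
  let service: WikiService;

  beforeEach(() => {
    // Provide a real HttpClient: the prompt forbids mocks for this test.
    TestBed.configureTestingModule({ providers: [provideHttpClient()] });
    service = TestBed.inject(WikiService);
  });

  it('fetches matching articles from the real Wikipedia API', (done) => {
    service.search('Angular').subscribe((results) => {
      expect(results.length).toBeGreaterThan(0);
      expect(results[0].title).toBeDefined();
      done();
    });
  });
});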

In summary, we built a working app in four steps, using four prompts, each of which produced the expected result on the first attempt with “Claude Sonnet 4” as the LLM engine.

The Search wiki app


For the curious, you can view a detailed description of each step, with the prompt used and the code generated at each step.

Choose the Right Model

As we already stated, model choice can make a difference. We tried the same guided approach with some other LLMs, using the same sequence of prompts. With GPT-4.1, the agent generated a working app with almost no need for corrections (the only errors were wrong import paths), but with lower quality. For instance, the app did not follow Material Design (as it should have per the instructions) and did not handle the Enter key event. Also, accessibility was not implemented on the first go.

Speed Is Important but Is Not Everything

With this guided approach we created a fully working application, with a comprehensive set of tests and many more quality features, using just four prompts. This took a couple of hours of work at most, and probably less. This speed is quite impressive when compared to a traditional approach, where the developer would have to check many technical details, such as the Wikipedia API documentation or the latest Angular best-practices guidelines, before writing all the code.

At the same time, it could be argued that we could have been even faster by asking the agent to build the entire app with just one prompt. The point is that we deliberately traded some speed for the benefit of producing the solution that we want to build. Although it may be faster to ask the agent to generate a complete application with a single prompt (and an agent may be powerful enough to do it), we do not want to create just any app; we want to create our app, an app that we understand and that follows the design we want, because eventually we may have to maintain and evolve it. Designing the structure of the app and drawing up an implementation plan takes some time, but it also guarantees that the result is under our control, manageable, and maintainable. And everything is obtained at a much higher speed than with a traditional hand-writing approach.

Conclusions

Agents can be a very powerful tool in the hands of developers, especially when powered by effective LLMs. They can speed up development, complete tasks that we sometimes leave behind, such as tests, and do all this following the guidelines and best practices we defined for them. But all this power comes with the risk of losing control. We may end up with working solutions that require time to be understood, and we may be tempted to simply trust them without maintaining due control. Sometimes generative AI hallucinates, which is a big risk. Even if we assume that the AI will never hallucinate, relying on a solution that we do not understand is risky.

To maintain control over what is created while leveraging the power of agents, we can adopt a workflow that mixes human architectural knowledge with an agent’s effectiveness. In this workflow, the design and planning phase is left in the hands of experienced human architects, who define the solution and the steps to reach it. The coding is delegated to the agent, which works fast and with good quality.

We acknowledge that our experiment is brutally simple, and that building a new greenfield app is very different from working on an existing, complex (often confusing) code base. Still, the results are impressive and clearly show that, if we find the right way to work together with agents, this approach brings us both efficiency and quality.

Experience Is Key

If we want to control what agents do for us, experience is key. Experience lets us design a good solution and plan an effective implementation, and it gives us the judgment to check what the AI has generated. How will we develop this experience in a world where agents do the heavy lifting in coding? Well, that is a different question, one that applies to many human intellectual activities in a world with access to generative AI tools.

The answer to this question is probably still to be found and is, in any case, outside the scope of this short article.

Technical Details

Using the GitHub Copilot agent in VSCode

In this section, we will review how to access the agent within VSCode.

The GitHub Copilot agent is integrated in the GitHub Copilot chat. From the chat, it is possible to select the agent mode as well as the LLM engine that the agent will use behind the scenes.

GitHub Copilot agent in VSCode

Using the chat, we can ask the agent to perform tasks for us. We will use the chat to build our “Search wiki app” as described earlier.

The Design of the Solution and the Implementation Plan

With an agent-based workflow, we first design the solution and then list the tasks that will bring us to the desired result (we define an implementation plan). This approach is what any good architect would do before starting to code.

The design of the “Search wiki app”

In the implementation plan we work bottom-up, starting from the services connecting to the external systems and then building the views on top of them. For the sake of simplicity, the app does not implement state management.

So this is our plan to build the “Search wiki app”:

  • Create a WikiService and add a method to it to serve as a search API to fetch Wikipedia articles.
  • Create a WikiCard component to show a single Wikipedia article as a card in the list. It will be used by the WikiList component to build the grid of retrieved articles.
  • Create the WikiListComponent to manage the whole list of articles. The search field and button will be added directly to this component. Link the search button “click” event to the WikiService search API and show the results in the WikiListComponent as a grid of WikiCardComponents.
  • Configure WikiList as the first page to be loaded at the start of the app.

The Implementation Steps and Their Prompts

The following are the implementation steps, with links to the codebase status at each step, and the prompts used for each of them.

1. Create WikiService


Create WikiService and add to it a method that will serve as an API to fetch Wikipedia articles given a certain search term.
Use the latest APIs provided by Wikipedia.
Add also a test for this API. Do not use mocks for the test but use the real API.

Code status after prompt execution of Step 1
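To make this step concrete, here is a minimal sketch of what such a service might look like. This is our own illustration, not the code the agent actually produced (that is in the linked repository); the fields of WikipediaSearchResult are assumptions based on the response shape of Wikipedia’s public search API.

import { Injectable, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable, map } from 'rxjs';

/** A single search hit; field names assumed from Wikipedia's search API. */
export interface WikipediaSearchResult {
  title: string;
  snippet: string;
  pageid: number;
}

@Injectable({ providedIn: 'root' })
export class WikiService {
  private readonly http = inject(HttpClient);
  private readonly apiUrl = 'https://en.wikipedia.org/w/api.php';

  /** Fetches Wikipedia articles matching the given search term. */
  search(term: string): Observable<WikipediaSearchResult[]> {
    const url =
      `${this.apiUrl}?action=query&list=search` +
      `&srsearch=${encodeURIComponent(term)}&format=json&origin=*`;
    return this.http
      .get<{ query: { search: WikipediaSearchResult[] } }>(url)
      .pipe(map((res) => res.query.search));
  }
}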

2. Create WikiCard component


Create WikiCard component, which is a component that shows a single Wikipedia article as a card in the list. A Wikipedia article is an object described by the interface WikipediaSearchResult - it will be used by the WikiList component to build the grid of retrieved articles.

Code status after prompt execution of Step 2
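A minimal sketch of the shape this component might take, reusing the WikipediaSearchResult interface assumed in Step 1 (note that our instructions asked the agent for external template files; we inline the template here only for brevity):

import { Component, Input } from '@angular/core';
import { MatCardModule } from '@angular/material/card';
import { WikipediaSearchResult } from '../services/wiki.service';

@Component({
  selector: 'app-wiki-card',
  standalone: true,
  imports: [MatCardModule],
  template: `
    <mat-card>
      <mat-card-title>{{ article.title }}</mat-card-title>
      <!-- The snippet returned by the Wikipedia API contains HTML markup,
           so it is bound via innerHTML (Angular sanitizes it by default). -->
      <mat-card-content [innerHTML]="article.snippet"></mat-card-content>
    </mat-card>
  `,
})
export class WikiCardComponent {
  /** The Wikipedia article rendered by this card. */
  @Input({ required: true }) article!: WikipediaSearchResult;
}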

3. Create the WikiListComponent


Create WikiList component, which is a component that shows a list of Wikipedia articles as cards.
It will use the WikiCard component to show each article in the list.
The component has a search field that allows the user to search for articles by a search term.
The component has a button that allows the user to fetch the articles from the Wikipedia API.
The click event of the button should call the WikiService to fetch the articles.

Code status after prompt execution of Step 3
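Again as an illustration under the same assumptions (not the generated code), this component might look roughly as follows, using the async pipe as our instruction file recommends:

import { Component, inject } from '@angular/core';
import { AsyncPipe } from '@angular/common';
import { FormsModule } from '@angular/forms';
import { MatButtonModule } from '@angular/material/button';
import { Observable } from 'rxjs';
import { WikiService, WikipediaSearchResult } from '../services/wiki.service';
import { WikiCardComponent } from '../components/wiki-card.component';

@Component({
  selector: 'app-wiki-list',
  standalone: true,
  imports: [AsyncPipe, FormsModule, MatButtonModule, WikiCardComponent],
  template: `
    <input [(ngModel)]="term" placeholder="Search Wikipedia" (keyup.enter)="onSearch()" />
    <button mat-raised-button color="primary" (click)="onSearch()">Search</button>
    @if (results$ | async; as results) {
      @for (article of results; track article.pageid) {
        <app-wiki-card [article]="article" />
      } @empty {
        <p>No results found.</p>
      }
    }
  `,
})
export class WikiListComponent {
  private readonly wiki = inject(WikiService);
  term = '';
  results$?: Observable<WikipediaSearchResult[]>;

  /** Triggers a search; the template renders the results via the async pipe. */
  onSearch(): void {
    this.results$ = this.wiki.search(this.term);
  }
}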

4. Configure WikiList as the start page


add WikiList as the page loaded at the start of the application

Code status after prompt execution of Step 4
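The change for this step is tiny; with Angular’s standalone router setup it might amount to something like this (a sketch, assuming default Angular CLI file names):

// app.routes.ts
import { Routes } from '@angular/router';
import { WikiListComponent } from './pages/wiki-list.component';

export const routes: Routes = [
  // Load the WikiList page at the application root.
  { path: '', component: WikiListComponent },
];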

Instructions, Best Practices, and Context

Below are the instructions that we defined and that the agent used throughout the exercise:

You are an expert Angular developer with extensive experience in Angular v19. While generating code, please follow these coding standards and best practices:


- Use Angular v19 features and syntax.
- Prefer standalone components and functional providers where possible.
- Use TypeScript strict mode and enable Angular strict templates.
- Organize code by feature, using `src/app/components`, `src/app/pages`, and `src/app/services` folders.
- Use Angular CLI for generating components, services, and modules.
- Use RxJS best practices: avoid manual subscription management when possible; use `async` pipe in templates.
- Use Angular Forms (Reactive or Template-driven) for user input.
- Use Angular Material for UI components if a design system is needed.
- Write unit tests for all components, services, and pipes using Jasmine and TestBed.
- Use clear, descriptive names for files, classes, and selectors.
- Follow Angular and TypeScript style guides for formatting and naming.
- Document public APIs and complex logic with JSDoc comments.
- Avoid logic in templates; keep templates declarative and simple.
- Do not use inline templates or styles; use external files.
- Use dependency injection for services and configuration.
- Prefer Observables over Promises for asynchronous operations.
- Keep components focused and small; extract logic into services when appropriate.
- Use environment files for configuration.
- Use Angular’s built-in routing and guards for navigation and access control.
- Ensure accessibility (a11y) in all UI components.
- Use ESLint and Prettier for code quality and formatting.
- Use "npx ng .." for running Angular CLI commands to ensure the correct version is used.
- Write "code generated by AI" at the end of the code file.
- When running tests use always the command "npx ng test --browsers=ChromeHeadless --watch=false --code-coverage=false".

As per prompting best practices, such as those published recently by Anthropic, instructions should start with a role definition, considering that these instructions end up playing a role similar to the “system prompt” present in many LLM APIs.

Looking at these instructions, we can see several guidelines that have actually turned into code generated by the agent:

  • One of the instructions tells the agent to create “unit tests for all components, services, and pipes”. We can see that the agent obeys the instruction.
  • There is one instruction to “Ensure accessibility (a11y) in all UI components”, with which the agent complies, at least when powered by Claude Sonnet 4.
  • An instruction asks to “document public APIs and complex logic with JSDoc comments” and this is actually what happens in the code.

There are also instructions for bash commands (e.g., “when running tests use always the command `npx ng test --browsers=ChromeHeadless --watch=false --code-coverage=false`”). These kinds of instructions are also followed by the agent.

The agent definitely complies with the instructions we provide, which makes defining such guidelines extremely important if we want to enforce a certain set of rules. We should treat such instructions as first-class project deliverables, shared among all developers, to make sure that a certain level of quality standardization is maintained.

One last note on “meta-prompting”. Instructions provide guidelines, and therefore they greatly depend on the type of project we are dealing with.

A pragmatic way to create our instructions is to start by asking an LLM to generate an instruction file for us, providing the type of project we are working on, for instance a React front-end app or a Go app. This approach is called “meta-prompting”: creating a prompt through a prompt.
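For instance, a meta-prompt could look like the following (a hypothetical example, not the exact prompt we used):

Generate an instruction file for a coding agent that will work on an Angular v19 front-end application. Include guidelines about project structure, testing, accessibility, code style, and the Angular CLI commands the agent should use.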

Once we have the starting point generated for us, we can customize it with the requirements of our specific project.
