
[Video Podcast] AI-Driven Development with Olivia McVicker

News Room | Published 19 January 2026

Watch the video:

Transcript

Introduction [00:19]

Thomas Betts: Hello, and welcome to the InfoQ Podcast. I’m Thomas Betts, and today I’m speaking with Olivia McVicker, a senior cloud advocate at Microsoft. In this episode, we’ll be discussing AI-driven software development. We’ll start with the current and mainstream tooling, and then move into some of the innovative and cutting-edge things that are just coming out.

Then I think we want to get into where this is all headed and what benefits software developers and architects can expect to see from AI in the next few years. Olivia, welcome to the InfoQ Podcast.

Olivia McVicker: Thank you so much, Thomas. I’m really excited to be here, as you said, kind of mapping out what we’re going to be talking about today. The AI tooling space is a very exciting one. There’s a lot of really cool things happening, and there’s a lot of things to look back on and look forward towards. So really excited here to talk about the journey and look forward to what’s next.

Current state of AI software development tooling [01:02]

Thomas Betts: Yes. Well, before we get into the AI tooling, I wanted to actually back up a little bit, because I think developers have always been looking for ways to improve the code-writing process. That’s the tedium of our job sometimes. That’s why IDEs became popular. We went from having just Notepad and text editors. I wanted autocomplete and easy refactoring tools. All of those got built in, but now those static analysis tools are being complemented or even replaced with generative AI coding assistants. Are those modern AI coding tools just the latest iteration of tools that help developers write code, or are they really changing what it means to be a developer and how we do our jobs?

Olivia McVicker: Yes. I mean, it’s a great question. I think it’s kind of both. It is the next evolution, the next iteration of coding tools. As you said, if you look at IDEs, the goal is to essentially help the developer be more productive. So if you go back to things like IntelliSense, right? You don’t have to memorize every single possible method that’s on your variable and you can pop that up. A natural evolution of that was the early iterations of AI-assisted coding, where you have that autocomplete, that ghost text that uses an LLM to actually predict what you’re going to say next. And then we move on to the agentic space and bringing all that into the developer tooling space. And so I do see it as the next iteration of helping developers be more productive, but I don’t know that I would say it necessarily changes what it means to be a developer, but more so that it changes the proportion of what our mental energy can be used for.

So when I say that, I mean things like dealing with refactoring code, that’s something that you can maybe now just hand off to an AI assistant. If you look at something as basic as onboarding onto a project, you look at what it would look like really traditional and you bring someone on, you read them through all this documentation and then they have a bunch of questions and you try to make time for it or they try to figure it out themselves. Well, now they can go and ask AI about, “Oh, okay, I have this issue here. Where can I find this code?” And you can get to being productive quicker. So again, I don’t know that it necessarily changes the heart of what being a software developer is or really reshapes what we do, but rather it gives us that time back to focus on how can I get productive quicker using the tools that I have. And so it kind of goes hand in hand with that new process and more advanced tooling.

Shifting from coding assistants to AI-driven development [03:30]

Thomas Betts: Yes. I think the idea of the cognitive load factor, like how much do I have to keep in my memory, there’s always been this push to work at higher levels of abstraction. We went from assembly code to C and then C++ and object-oriented. All of that was to get us closer to the business model of what we’re talking about, the business problem, and be able to describe it in that language. And now LLMs are allowing natural language processing to come into it: I’m going to write software by describing it in human language, and we don’t have to go through the joke from Office Space: I take the requirements from the business analysts and I give them to the developers. I can just take that and it creates code. Is that how we’re moving into this era of AI-driven development instead of just AI-assisted?

Olivia McVicker: Yes. Yes. And I think that that’s a good distinction. I think that there is a little bit of a mental shift there between that AI assistant, like I said, kind of those early days of autocomplete and just predicting what your next code is going to be, stubbing out the rest of your method. As you said, AI-driven development. I like to think of it as shifting to the thought of an AI teammate that can help you throughout the entire software development lifecycle. So instead of just a simple assist of, okay, stub out the rest of this method, or I have this tiny syntax error, can you fix this? It’s now throughout the whole process. Where can I use AI throughout my entire software development lifecycle process? Whether that’s… What I do a lot of times when I’m starting an app or any sort of project, I have an idea in my head and I’ll actually just start with the coding assistant to brainstorm where I can go with that.

So I have these ideas, these are the things I want to do with it. What else could I do here? What edge cases am I missing? And I use it just even from the start there to help me brainstorm it. And then from there, okay, we have these cool requirements. Let’s go ahead and start the initial implementation here. I’ll be kind of the human in the loop there to help guide it. And then, okay, there’s these other requirements that I want to do, help me make these issues in GitHub to go ahead and track this work. And okay, now I’m ready. Let’s go ahead and try to set up some deployment pipelines. It’s really the idea of the AI teammate and, okay, I’m still in control. I’m still kind of having this cognitive load of doing the problem solving, but I’m able to have an AI teammate who I can share that cognitive load with and you can help me brainstorm and we can have that back and forth just like you would as a traditional teammate.

And I’m not saying that means no person teammates anymore, no human teammates, but it’s the idea of just having one extra resource available to you that understands your code base, that understands what you’re doing, can work with you to come up with that. So that’s kind of the shift in my mind of what this AI-driven development is. It’s throughout the entire process and someone that you really iterate over with and have that collaborative relationship with.

Thomas Betts: Yes. I think we get into the sociotechnical factors of software development, that teams build software, and you build better software when you have people to bounce those ideas off of; you think better. I know in remote work situations, I sit here in my house by myself, there’s no one around. I do have the rubber duck that I still talk to occasionally because I’m having a problem, and thinking through and explaining the problem helps you get to a better solution like, “Oh, I didn’t realize that. And just the fact that I went through and had to explain what I got to, aha, there’s my issue”. The AI can be that rubber duck that answers back and fills in the gaps for you as well.

Olivia McVicker: Right. It gives you that sounding board. Like you said, with remote work, it helps that way. If people on your team are dispersed across different time zones, that’s a huge help as well too. “Okay, help me out until I can talk to the senior architect when he comes on tomorrow”, or something like that. It really enables you, in my opinion, to get unblocked a lot quicker. It also can help foresee some of those blockers too. And so I’ll use it a lot there, like, “Where are my blind spots? What am I missing here?”, and have it help throughout that process too, again, having that collaborative relationship.

The role of AI in the software development lifecycle [07:28]

Thomas Betts: I personally believe, and I’ve said this a few times on the podcast and other places, that companies don’t hire software engineers to write code. They hire software engineers to solve problems. And writing code is one of the tools that’s just part of the job. But if the AI is able to write the code, how is that changing the role of the software engineer? We’ve kind of been getting to it, and it’s kind of that idea of thinking about it, but I think one of the resistances we see is people are like, “This is changing my job”. Do you see it as a fundamental change, or a good augmentation and a change for the better?

Olivia McVicker: Yes. I mean, I think that’s always one of the first things that you hear when people talk about the rise of AI coding assistants. “Oh my gosh, okay, well, am I going to be out of a job?” First of all, I agree 100% with your comment that software developers are hired to problem solve. I’m a firm believer that we’re not here to just regurgitate syntax or the definition of things you can Google really quickly. We’re here to actually solve those hard problems. And I don’t foresee that going away at all with this. To your point about, okay, well, if AI can write all the code, what are we going to do? I would challenge that initially by saying, can AI write all the code? Sure. Should AI write all the code? No, at least not unchecked, not unreviewed. Sure, you can spin up something really quickly.

AI can actually do a really great one shot, but do I recommend you just go and deploy that to production without reviewing? Absolutely not. You still always need that interaction there. And so as you mentioned, what does that really mean for software developers? If AI can write a lot of the code, are we just AI reviewers? Is that what our job becomes? And no, I don’t think so at all. I think that there’s a couple things. One that we, again, have kind of alluded to is the idea of now we actually can spend more time problem solving. So if you look at all the things you’ve done as a software engineer, there’s probably a lot of times that it’s just been like, oh, go update this documentation real quick here or go copy this over there, those little things that are maybe a little bit more routine, I guarantee you spent probably days and days of your career doing those, right?

Maybe months. It adds up after a while. And so the idea is those sorts of things that in my mind are not quite as mentally taxing and don’t require that intense problem solving, those are things that we can give to AI now, and ideally they’re a pretty quick code review too, because they’re the simpler things; they don’t involve super complex algorithms or refactoring or anything like that. So in my mind, there’s that aspect of, yes, let’s give AI all that code that I don’t want to write and that doesn’t need to have me write it. It’s something that follows a very set pattern that’s predictable. Obviously, again, review the code, but we give AI that code to write. Now our minds are freed up for more of that problem solving, for solving those hard problems. We’re having those discussions about what does this application actually look like for our company?

Prompt engineering is a required skill for developers [10:19]

How does this affect the different orgs that we work with? How do we interact with this other business logic with this app? How does that all work? And really figuring out those sorts of problems. On top of that, kind of going off of that actually, is the idea that LLMs are only as good as the information that you give them. So we still will always have a role in figuring out the best way to interact with LLMs. I firmly believe that AI coding assistants are a tool. They are not a replacement, and like any tool, you have to learn to wield them properly. There are tips that you can use to make sure that it has the right context so that you’re going to get the best responses. So that’s things like understanding how to prompt best. I think that we’re seeing a rise in software development of having this competency in AI dev tooling.

And that’s, like I said, learning how to prompt better, learning which models are best for which tasks. I’ve seen people, and I’ve done this myself, where I’m like, oh my gosh, why is this taking so long to do this two-second task? Because I’m using a reasoning model when I should have just used a little mini model, and then we would’ve been there. So there’s a whole new skillset that’s emerging of how to use these tools.

And I think that that’s kind of the shift that we’re seeing with our roles. So just to summarize: one, it means that it’s not replacing us. AI code is not going to replace us, but it gives us a chance to focus our capacity on harder problems to solve. And two, there’s the idea that we are growing this new skillset to learn how to wield this tool properly.

Thomas Betts: Yes. It’s going to date me a little bit, but when I interviewed for an internship way back when, one of the skills was, are you able to search the internet?

Olivia McVicker: Yes. Yes. Right.

Thomas Betts: Because that was a new skill back in the ’90s and that was what they wanted someone to be able to do. It’s like, we need you to be able to go and help find these answers. And actually, as the intern, you had the time to figure out how to do that. The people who had been around for a while may not actually have had as good an experience doing that. I think that’s what we’re also seeing just in general with using LLMs, using ChatGPT or whatever. The current generation of college students are past the initial “thou shalt not ever use these things” phase. They’re at, “I can use this to help my research and help do these things. I still have to use it appropriately”. And that generation’s going to come in being AI and LLM aware, while the current generation of software developers are having to learn it on the job as we go.

And so we’re making some mistakes, but we’re improving that process. And I think you mentioned learning how to prompt things like… So we use GitHub Copilot and we have a Copilot instructions file in our repo. And that was one of the tips that somebody said, start putting these in all of the repos. And it starts with like, here’s what this project is, here’s our company standards, here’s our team standards, and here’s what this repository does. And I think that’s also going to the idea you talked about of onboarding people. I have to onboard the AI to here’s how we write code. That’s also that same documentation is now useful for the new developer that starts because they can see, here’s how we write code, here’s how we write tests, we use this framework, we use these libraries. So I think it’s a big way of saying that this AI enabled world is still fairly new. We’re getting into it, but it’s going to become just table stakes for how we’re developing software.
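A repository-level instructions file like the one Thomas describes conventionally lives at `.github/copilot-instructions.md`. The outline below is only an illustrative sketch of the structure he mentions (project summary, company standards, team standards, what the repository does); all of the project details are invented for the example:

```markdown
# Copilot instructions

## What this project is
An order-processing service for our storefront.

## Company and team standards
- All public APIs get doc comments.
- Every bug fix ships with a regression test.

## What this repository does
- Exposes a REST API for creating and querying orders.
- Persists orders to PostgreSQL.

## How we write code and tests
- Tests use our standard unit-test framework; prefer small, focused cases.
- New modules follow the existing folder layout under `src/`.
```

As the conversation notes, the same file doubles as onboarding documentation for new human developers.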

Olivia McVicker: Yes. Yes. I think there are so many good points in there. I love your comparison to using the web, because I think that it’s a really strong comparison in the sense that pre-AI, the last five, 10 years, no one really batted an eye when you would go look up, oh gosh, okay, what’s the syntax to do this? That was just normal. But to your point, in the ’90s, that was like, whoa, how does this look? What do we do here? But you adapt to it. And I think we’re seeing, to your point, the exact same thing with AI tooling. And I think that you touched on education. Obviously, there’s a hard balance between making sure that students are learning properly and responsibly and not just relying on the tools. I think that that’s a whole hard problem in itself, but I think that it’s absolutely a problem that we should figure out because, to your point, people need to come in with this competency.

Going back to the web example, it would be the same thing now as if, day one on the job, you were like, “Okay, you can never Google anything”. It’s like, “What?” That’s unrealistic, right? Five, 10 years ago, people would go on Stack Overflow and just copy things over. You would know the people who actually didn’t know what was going on, who just copied it over and didn’t know how to adapt, and they pushed this code and you’re like, “This is not at all what we’re looking for”. We’re going to see the same thing with AI tooling. You’re going to see people who just accept what AI gave, push it, and don’t actually understand it. And so we’re still going to see those same sorts of comparisons and that same sort of growing and that same sort of realization that people need to actually understand what is happening.

Take the time to learn the AI tools [15:12]

Olivia McVicker: And then to another point you made in there, you mentioned GitHub Copilot. Full disclaimer: I work on the GitHub Copilot and VS Code team, so big fan there. But you mentioned using custom instructions. I think that there are so many best practices like that that are out there that are so hard to keep up with, because there’s so much that’s always changing, but those best practices are very much things that people should be focusing on to try to figure out, okay, I didn’t quite get what I expected from this tool, whether it’s GitHub Copilot or any other AI coding tool.

Instead of just writing that off like, “this tool is trash”, let’s actually take a step back and ask how we can make sure it has the context that we want it to have. So going back to what I said earlier, that LLMs are only as good as the information that you give them, there’s that skillset of making sure that you are giving it the right information. And with that, there are best practices and tips and tricks for using these tools to make sure that you can give it that context.

Thomas Betts: Yes. I think the way that we’re writing stuff down to keep the LLMs getting better, like every time we use it, like the one tip someone gave me was anytime you fix a bug, have it put the way that it fixed that bug, if it’s a pattern that was broken, put that into the instructions file so you don’t write another one of those bugs. And I think back to having postmortems and incident analysis where we say, “Hey, the root cause was the code was written this way and we’re going to put all these checks and balances in place to make sure we never do that again”.

Well, I’m very bad at remembering things like that. There’s too much knowledge. It goes back to that cognitive load problem. If you put it in the instructions, then hopefully it doesn’t do that again. Or you can say, “I wrote the code myself, please review it”. And it’ll be able to say, “Hey, according to the instructions, you may have introduced this bug”. And that’s where we get to having AI agents not just write code, but also maybe review pull requests of the code that I wrote. Yes.

Olivia McVicker: All throughout that process is definitely what we see it shaping, going back to what AI-driven development means. I think we all have those moments where it’s like, “Oh, I know I ran into this a year ago. I have no idea what we did or what that meant”. It’s almost like by documenting things for AI agents, or whatever coding assistant you’re using, it’s forcing you to have better practices for yourself, to document those practices and things like, “Oh, make sure to look out for this in our tech stack so we don’t introduce this bug”. Or even things like, “Oh, these are the specific styling preferences that we have for our code base”. Because everyone kind of has their own particular code base. You can go from project to project, and some people love ternary operators and other people love other things. And it gives it that set of principles, and you can make sure that you are referencing that, and you can even just do one little ask at the end of it.

Okay, go make sure that this fits all of our best practices for formatting and then it will do that instead of you having to go through the code review process and then you get a comment being like, “No, we don’t like to do that. Go change this”. You can either one, catch that beforehand or two, at the code review process involve AI coding agents throughout that, they can fix it. It really just opens up this whole new way of working throughout the entire process.

AI coding tools are still software and all software needs clear instructions [18:10]

Thomas Betts: Yes. And I think that’s another case where we’ve had linting tools. We’ve had editor config files that say, “Hey, I like two spaces and semicolons at the end and curly braces here”. We’ve had ways to define that, and then the IDEs get better at interpreting those. So yes, again, we could bring in the LLM to do the review, but that might not be the most efficient way to do it. That might be tooling that’s already built in. These things should work hand in hand. I don’t like it when my coding assistant doesn’t follow the rules and then I get a bunch of red squigglies because it didn’t put the right number of spaces in or something like that. I’m like, “Come on, you’re smarter, aren’t you?” No, it only does exactly what you tell it. So you have to learn how to tell it. And once I tell it something three times, put it in the instructions files so I don’t have to tell it ever again.
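Preferences like "two spaces and semicolons at the end" are usually captured declaratively so the IDE, the linter, and the coding assistant all read the same source of truth. A minimal `.editorconfig` sketch (the semicolon rule itself would live in a linter config, since EditorConfig only covers whitespace-level settings):

```ini
# .editorconfig: whitespace conventions the editor and tools can both honor
root = true

[*]
indent_style = space
indent_size = 2
insert_final_newline = true
trim_trailing_whitespace = true
```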

Olivia McVicker: And really, if you think about it, that’s kind of the essence of software development in general, right? You’re teaching this computer who knows nothing; you’re writing a program for it to learn how to do something. I think when I was first learning computer science, I remember one of the comparisons someone made, I’m sure it’s quoted from someone, was just the idea that when you’re writing a program, just think about explaining something to, oh, a really, really dumb toddler who doesn’t know anything. You have to tell it everything; you need to tell it all the steps that it needs to do and write it all out. It’s not just going to assume to know to do something. You have to tell it to do something. And that’s the heart of software development in general. And now we’re just taking that to a natural language perspective of telling the AI coding assistants that as well.

It’s that same concept, just abstracting out a level back to what you said at the very start of this. We’re always looking for ways to abstract that out so we can have those natural language interactions and be able to interact with the computer and coding agents now and coding assistants to, again, be as productive as possible.

Thomas Betts: Yes. I think people haven’t yet figured out the relationship between the text that it spits out, even like ChatGPT or something; it spits out text and it sounds like it knows everything, and reconciling that with the fact that it doesn’t know anything. When the hallucinations happen, it’s the same text as if it’s completely factual, and there’s no filter for us to be able to tell. And so people don’t have that natural instinct of, this is the toddler that I’m explaining things to. They think this thing sounds smart. I shouldn’t have to explain it, talk to it like it’s a toddler.

Olivia McVicker: Right. How many people have toddlers who just very competently say something and they are not right. It’s a good point. How many times have you hallucinated? It’s a meme now. And then the model returns like, “You’re absolutely right. That’s not how it goes”. I think that that’s a really important caveat for, again, whatever AI assistant you’re using to always really take the time to understand what’s being said. And if you don’t quite… you can always ask questions. You tap into your own brain too and be like, “Okay, well, let me go actually verify this”. You can ask things like, “Okay, well, show me the code exactly where you pulled this from”. And there’s steps that you can do with any sort of problem solving or learning process in general, there’s steps you can do to really dig in and check the resources and fact check to make sure it’s there.

And I would say, just like any other source, it’s important to not take it completely at face value and make sure you are understanding the underlying content.

Use subagents to break down the work [21:14]

Thomas Betts: Yes. So getting outside just writing the code, into the bigger software development lifecycle, where are the AI… let’s leave coding agents out, let’s just say AI agents… going to start helping us with that software development process? Is it going to be that someone just writes requirements and there are a bunch of agents that do the work? And on the same thing we were just talking about: garbage in, garbage out. If you don’t have good requirements… Forget the AI world. If you don’t have good requirements, software developers will produce software that is not what you need. Now that’s going to be magnified: if you don’t have good requirements and you start handing it off to agents, you’re going to get bad results. Is this possibly going to improve the software requirement process? Because we’re going to have various agents, and where do you see those agents coming into play?

Olivia McVicker: Yes, I think it’s a good question. The short answer is yes, the goal is that it improves and it makes that whole process more seamless. We all get better at learning how to prompt it and get those really robust requirements and then it sees those blind spots that maybe we didn’t see. But long answer to your question, right? If you look at the state of coding agents in general right now, so you have your local coding agents. So I’m just going to give VS Code and GitHub Copilot as an example. So that would be like agent mode in VS Code is your local agent. Then there’s the idea of cloud agents or background agents. And so that would be like Copilot coding agent in that sort of ecosystem. Those local agents are going to be that way to actually just be in your editor, be in whatever tooling space you are and have those direct conversations, you’re manually approving tools, you’re much more with it there, right?

It’s all on your local machine. Whereas we then also have this concept of background agents, which run in a cloud environment; it’s way more hands off. You’re basically like, “Hey, I’m going to go give you this task, go do it”. And then you go get coffee, you go do whatever, and you come back and check in on it when it’s done. And then from there you can iterate over it. But it’s the idea that you’re much more hands off and it can just go and do your task. So right now we’re at a stage where there are those local and those background agents, and how those interact with each other. And so we’re getting to a good place where you can choose your preference, whether you want to maybe start with a local agent session, then do some of your own manual coding, then be like, okay, background agent, go ahead and take this over the finish line.

So that’s kind of the current state that we’re in. Where we’re currently moving is going to be building upon that with the idea of subagents. So like what you were saying, the idea of very specialized agents for specific tasks, and that you can then chain those subagents together. So the idea is that you can start with some sort of planning agent or product management requirements agent, and that subagent can take your requirements and then chain that and hand it over to the development agent or the architect agent or something like that. And then it goes to the development subagent, and then it goes to the testing subagent, and then it goes to the documentation subagent. So it’s the idea that you can then take these personas into their own subagent spaces and hand them over to help throughout there.

Again, we’re talking like production level code. I don’t foresee this as totally autonomous, I just click this, go, go deploy, we’re in production. There’s still a human in the loop kind of in between those chains to make sure that things are looking good in between and you can always iterate over the different pieces. But yes, I absolutely think that that’s where the agentic space is going to, is the idea of subagents and being able to chain all that together and really break down what the current pieces of that software development lifecycle are, get those robust, whether it’s requirements or outcomes from each of those processes and be able to chain it together so you can kind of pass it on from one to the next.
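The hand-off described here, planning to development to testing to documentation with a human check between stages, can be sketched as a simple pipeline. Everything below is a toy stand-in: the `Subagent` type, the stub agents, and the `approve` callback are invented for illustration and are not a real agent-framework API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subagent:
    """A specialized agent: takes the previous stage's output, returns its own."""
    name: str
    run: Callable[[str], str]

def make_stub_agent(name: str) -> Subagent:
    # Stand-in for an LLM-backed agent; it just annotates the hand-off.
    return Subagent(name, lambda work: f"{work} -> [{name}]")

def run_pipeline(task: str, agents: list[Subagent],
                 approve: Callable[[str, str], bool]) -> str:
    """Chain subagents, keeping a human in the loop between stages."""
    work = task
    for agent in agents:
        work = agent.run(work)
        if not approve(agent.name, work):  # the human review gate
            raise RuntimeError(f"Reviewer rejected output of the {agent.name} agent")
    return work

stages = ["planning", "development", "testing", "documentation"]
pipeline = [make_stub_agent(s) for s in stages]
result = run_pipeline("build login page", pipeline, approve=lambda name, work: True)
print(result)
# -> build login page -> [planning] -> [development] -> [testing] -> [documentation]
```

The key shape is that each stage's output is the next stage's input, and the `approve` hook is where a person inspects intermediate results before the chain continues, matching the "human in the loop between those chains" point above.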

Humans collaborating with AI agents [24:56]

Thomas Betts: I think what you described sounds like the full virtual team. Every role that you just talked about is a person that’s on my team. So when you say there should be a human in the loop, which humans? Am I going to have the docs person reviewing the documentation agent and the dev people reviewing the dev agents? Or is there ever any overlap, like, “Oh, well, this person can oversee these three different roles because the PM is actually doing the QA as well, as long as we have the QA agent saying, I’ve added extra tests and this is ready for a human to do the final review”?

Olivia McVicker: Yes, it’s a great question. I think it can be both. I think one of the coolest things about AI coding assistants is that they can take you out of what your own personal specialized place is. If you’re someone who’s really only used to backend development, for example, well, now it’s a lot easier to inspect front end code and ask more questions about it. If you’re someone who’s a product manager, it’s so much easier to get into the code. So I do like the idea in general, from a robust team building perspective, of people being involved at all parts of that, but from a very practical perspective, you need to have someone who’s an expert reviewing it as well. One pitfall that I have seen with the rise of background coding agents and code reviews and things like that is you’ll have someone kick off a coding agent for an area that they’re maybe not an expert in, which is fine.

Then they get a teammate to review the code who also is not an expert in the area, or has just really not seen it. And then they both just say, “Looks good to me”. And it turns out there was something missing because neither of them was comfortable with that area. It’s really important to make sure, again, I’ve been talking about context for the LLM, but that applies to humans as well, that you have a human in the loop who has the necessary context. Otherwise, going back to what you were saying, it’s so easy to just see a response and be like, “Oh yes, that totally looks right”. And so you’re just going to be like, “Check, yep, we’re good”. So I think it’s a little bit of both. There’s the idea of getting people out of their own comfort zones, or their normal verticals, just to learn more and be able to see other areas of the lifecycle.

But from an absolute practical perspective, you need someone who has that domain area expertise to be reviewing that, at least at a minimum before it goes to production. Maybe you have someone kind of monitoring throughout the intermediate steps, but you have one final big push at the end of making sure that you have all those experts in the loop.

Pros and cons of the shift to AI-assisted development [27:14]

Thomas Betts: Yes. And I think there are some people who think, oh, we’re going to… There have been people claiming in the press, “Oh, we’re going to eliminate all of our developers. We’re not going to have any developers. We’re just going to have AI agents”. And then they’re like, “Oh, that was a bad idea. Bring them back”. But I think there is going to be some impact on team dynamics and how we structure teams and how these teams work together, positive and negative, but just changing in general. How do you think this is going to affect individuals’ creativity or autonomy, or just how people interact with each other, whether they’re in the office or fully remote?

Olivia McVicker: Yes, I think it’s a fair question. I think there can be pros and cons, especially if people aren’t used to thinking in that way. If we focus on the pros first, team dynamics wise, I like to think it actually frees you up to have a lot more of those just fun conversations like, “Let’s really dig into this problem that we need to solve. Let’s go have that back and forth while the agent’s going off and doing the documentation that we don’t want to spend time on doing”. So I like to think that it opens up the opportunity for more time just to have more of those conversations. I do think on the flip side, though, it’s a new process, it’s a new mindset.

And so because of that, there’s always going to be growing pains. So there’s always going to be some frustration of, “Oh, how did you let this slip by?” Or depending on where you’re at in your own AI tooling acceptance, you might look at someone’s code and be like, “Wow, you just had AI generate this. This is absolute garbage. I can’t believe you did this”. And it can lead to resentment if you’re someone who’s a little bit more skeptical about AI-assisted coding. And so I think that with any sort of industry change, there’s going to be growing pains, and so you’re going to see that.

But I think as everyone kind of gets up to speed and figures out the right way for it to work with their team, and everyone’s team is going to be a little bit different, I like to think that it will end up creating that really positive team dynamic of getting in the right flow of, okay, who should be reviewing this? What work should we be giving to coding agents and what work should we be handling? I think that there are going to be growing pains throughout that process. Once you kind of get in that right routine, it really frees you up to be able to have more of those fun conversations and those human-to-human software developer conversations.

For your piece about creativity and autonomy, same thing, I’m hoping it actually gives you more autonomy because then you can kind of pick which work you want to work on. It’s not the idea of, okay, we’re just automating all of your code and now you don’t have anything to do. It’s the idea of, okay, you don’t have to spend time going and fixing these typos over here, or formatting this code this way. Let’s go put you on this really cool new feature that we need to implement, and we need to go talk to all our stakeholders here, and we need to figure out exactly what we need, and then we need to speed up the processing time by this much, and let’s really put you on those sorts of hard problems. I like to think that that will give people more autonomy over things they want to do because they can spend their brainpower on those problems.

Going back to what you said, I think most software developers are there to problem solve. And then creativity as well. I think I mentioned earlier that one of the things I really like to do is just kind of brainstorm back and forth. And so I think it can be a really good way, if you’re remote or your teammates aren’t available, to really have those rubber-ducking sessions or those brainstorming sessions, just be like, “Oh, hey, I was thinking about this. What are some other things we could do here?” And just get that creativity flowing even quicker.

Thomas Betts: Yes. I’m waiting for the day when someone says, “Oh, we’re going to have whatever the next version after Scrum is, or SAFe. Here’s how you have an AI-enabled team and how we’re supposed to structure our teams with AI following all these rules”. If you go back to capital-A Agile and the Agile Manifesto, all the stuff that AI agents can help us do is still the same thing. We want to have more collaboration versus writing documentation and requirements. Having the ability to refine the requirements, it’s like, “Oh, I didn’t get a good output. Let me add more requirements. Let me clarify those things where it was lacking in detail”. That communication used to mean keeping the product owner and the engineer close together. Now it’s maybe the product owner and the engineer and the AI close together, so we’re all collaborating and having those discussions.

Olivia McVicker: Exactly.

Security, trust, and ethics [31:19]

Thomas Betts: I want to pivot a little bit towards security and trust because it’s a big topic for anything with AI that a lot of companies are now realizing, “Hey, we should probably have an AI governance committee. We build software and we want to use some AI stuff or we buy all of our software, but we’re going to check the box. How are they using AI? Where’s my data going? Are you building models? Is my data leaving and being used somewhere else?” So it’s like we’re getting more cautious, probably as cautious as we should have been about our data before, but now people are seeing some of the effects. What are some of the questions that you’re seeing in the companies you work with about AI regarding security, trust, or maybe even ethics coming in?

Olivia McVicker: Yes. I think you touched on some of the big ones. One of the big things is: I have this proprietary code at my work. What are you going to do with it if I’m using an AI coding assistant? Where does this go? When I hit send on my request, what happens there? I would say that’s probably the top one that comes up, especially if we’re talking enterprise customers. You have people working on proprietary data and they want to know what’s happening to it. We have questions about things like, okay, I get a code block recommended to me. It seems really confident. How do I know that it’s secure? Should I be second-guessing all of this? How can I make sure I’m not going to push something that I’ll ultimately be on the hook for? Because I’m the one pushing it.

I don’t want to be using this code. And then it makes people want to just jerk back and be like, “I just don’t even want to touch that can of worms”. I agree with you. I think it’s great that people are asking these questions. And I think that there’s a lot of work being put in to make sure we have these sorts of safeguards and these answers in place. There’s also the ethical piece of, okay, where are these coding agents and these coding assistants getting their data from? Is it ethical to be using this public data? How is this all being sourced? And so these are all really tough questions, and they’re all very valid questions. I think especially the larger players in the game have done a good job of reacting to those questions how they can and setting up things like trust centers.

So again, I work on GitHub Copilot and VS Code. So there’s a whole trust center that kind of goes over things like that, and the protections for enterprise and business customers, and the opt-in process for things like sharing your data. So I would just say, ultimately, these are all very valid questions. Do the research that you can. Whatever AI coding assistant you are using should have all of those answers publicized.

If they don’t, I would maybe be wary if they can’t answer things like that. Those are all very, very valid questions. Another thing that VS Code does that I really love is they have open-sourced the GitHub Copilot chat extension functionality. So I won’t go all into that, but the idea of having that piece open source means that people can actually go into the code and see what telemetry is being captured. And so I think that ultimately it’s up to the user, or the enterprise admin, to do the research and make sure that they are comfortable with the safeguards that are in place. But it’s absolutely something that people should be evaluating when they’re deciding what AI coding assistant they should use.

Looking ahead [34:21]

Thomas Betts: So let’s look ahead. What’s some of the advice that you have for teams that are just trying to stay current, or trying to get ahead, and aren’t interested in just chasing the AI hype cycle? How do you know that this is real and worth pursuing, and how do I keep from falling too far behind the curve?

Olivia McVicker: Yes, I think that that’s such a good question. I mean, even in general, we’re talking about AI right now, but there’s always some fad, there’s always some hype, there’s always some trend. And I think my advice, no matter what, is kind of always the same, in the sense that you just need to try it yourself. Obviously, there’s the question of: how can I even know what’s the latest? And so there are the blanket answers of: find trusted voices that you like to follow, follow podcasts like this, so you can hear about what the latest and greatest is, follow brand accounts for whatever tooling you want to use to get the latest there. So you can just be aware. I think that there’s a level of just awareness of, “Oh, I kind of heard about that”. And then it kind of lingers in your mind for a bit and you decide to try it out.

But ultimately, everyone’s team is different. Everyone’s processes are different. Everyone’s workflows are different. And so something that maybe is not just a fad and is going to stick around, it still might not be something that’s super applicable to what you’re doing right now. And so I think at the end of the day, you can listen to all these trusted voices and take their opinions and take their best practices, but really just go and try it out.

Most coding assistants at this point have some sort of free tier. Just go try it out, go do a hobby project and see what it’s about. See if it’s something that would be valuable to the process you have. And I would also maybe just add an extra caution: things are changing so quickly here in the AI space in general. So if you try something the day it comes out and you’re like, “This is garbage”, maybe revisit it in three months and see where it’s at, because I do think that people are so excited to get things out to users day one, and it’s easy to try something and decide that it’s not there, but things are, oh my God, it’s insane how quickly things are changing.

And so if you try something out and you’re like, this just feels like all hype, but then a few months later you’re still hearing about it, I would just also caution like, “Go ahead and try it again. Don’t just write something off completely because the space is moving so quickly”. I would say take in all the information and then use your own judgment for how it fits into your workflows. Really try it out because that’s really the only way you can tell if something’s going to stick around is if it works and sticks in your own workflow.

Thomas Betts: Yes. Since you’re from Microsoft, I’ll say this. The rule of thumb used to be wait for version two. Version one is just to get out there, but version two is like we fixed a lot of the bugs. I was on beta one of .NET. I’m like, “It worked, but man, version two was much better”. And that’s true for all software. It takes a little bit of maturity. I think the big factor here is the next version, sometimes it’s an order of magnitude better. It’s not just, “Oh, we added a few more features and fixed some bugs”.

The models that are inside them are getting better, because people think, “Oh, I’m using this LLM”. Well, no, you’re using GitHub Copilot or Cursor or Claude. That’s the interface that has an LLM underneath it. There’s a lot of software written in between that makes it… And those things are changing. For example, Copilot went from having ask mode to agent mode inside VS Code. I’m like, “Oh, that just showed up one day. That was a game changer for me”. So if you thought you knew it three months ago, it’s not the same thing. I think that’s a very valid point.

Olivia McVicker: And I think to your point, there are so many different pieces to it. You are maybe just seeing the interface where you send the request, but there are so many things that go into that. That’s things like capacity changes, token limits change, models change, the IDE integration changes. There are so many things that go into just you asking a question or prompting to fix a bug. And because of that, all of those pieces are improving every single day. And so that’s why we’re seeing such crazy gains in just a matter of a couple of months, because maybe the model got a lot better, and then right after that the integration got a lot better, and then also we increased your token limits, and also this happened. And so suddenly it’s a completely different experience. So I think it’s easy to very quickly write something off as hype if it doesn’t work.

But to your point, in such a fast-moving space, you have to be a little bit more open-minded there and just see, okay, was it just hype and it’s not going anywhere, or was it just not quite there in that first version?

Thomas Betts: Well, I think before this podcast gets out of date, we’re going to wrap it up there before the adage catches up to us. But Olivia, thanks again for joining me today.

Olivia McVicker: Thank you so much, Thomas, for having me. I really appreciate it. It was a great conversation.

Thomas Betts: And listeners, we hope you’ll join us again soon for another episode of the InfoQ Podcast.
