Growing and Cultivating Strong Machine Learning Engineers

Transcript

Vivek Gupta: My name is Vivek Gupta. I'm the Director of the AI Rotational Program at Microsoft. I've spent a little over 30 years in software development, in a variety of fields: I've done consulting, done software development, worked on products, worked with customers. I've tried to do a variety of things over the years. The last 12 or so years have been spent specifically on AI and machine learning, in a variety of contexts. I started at Nokia, working with maps and data analysis of users on their mobile devices. Then I moved to Microsoft. I've been here a little over 11 years, and for 5 of those years I was actually a data scientist on the team, working with our AI platform team to figure out what people wanted to do with a cloud-based system, and helping customers onboard to it.

Just generally, we were seeing what was needed in order to scale up machine learning, and convincing people that it was safe to be on the cloud. It's a hard thing to think about now, but just 10 years ago, people were just not willing to put the data that they use for AI on the cloud at all. The last 7 years, I've spent as a manager of AI teams: machine learning engineers, applied scientists, as well as product managers and TPMs. In those years, I've learned a lot about how things work in that space. Towards the end, I'll talk about some more detailed experience I've had, specifically in the rotation program, but most of the talk comes from across the entire journey of these last 12 years in this space.

Outline

The main topics I want to cover here are, what do I need to know, me as a manager? We’re looking at growing our engineers, specifically our machine learning engineers, but there are some things that I myself need to know. How do we nourish those engineers, especially when they’re just getting started in their careers? Then, what do we do to cultivate relevant skills over the longer term, looking at more senior engineers? Then, finally, let’s look at what it is to be production machine learning. We talk a lot about experimentation and things when you hear about AI. There’s a lot of tools that we all use to leverage AI. How do we get to that point of actually using those tools? Then a bit about my team. That’s where I’ll go into a little bit more about what it is I do currently.

What Do I Need to Know?

What do I need to know? I actually have to know a little bit of everything. I have to put on my PM hat when I'm working with the PMs. I have to know about the applied science side, or at least understand it enough to know where the value could be. Then, as an engineer, I actually need to stay current. There's a lot of tools and technology coming out. I'm probably not going to sit down most of the time and implement things, but I am going to spend time reading the various blog posts by other large companies or individuals who try these things out, and look for ideas that might lead me down a path, or that let me ask one of my engineers, go explore this. This is something interesting and I'd like to see somebody try it with this data that we have. I actually need to have a general sense of things. I don't need to be an expert in all these things. Hopefully my senior engineers are the experts who will dive deeply, but I need to have ideas. I need to help drive the team forward into thinking about new things.

Nourishing Engineers, and Cultivating Relevant Skills

How do we go about this nourishing of early in career engineers? They want feedback. That’s one of the main things that they’re looking for. That’s because they’ve just come out of school, in my case, most of them, and they’re used to getting grades, and they want to know how they can do better. They meet with their advisors. They meet with their teachers. They get some feedback that tells them, if you do this for the next exam, you’ll get a better grade. They’re hoping for their next promotion in our case. Sometimes it comes along and sometimes it doesn’t, and they’re disappointed, and we have to explain what else they need to do. Feedback is a very varied thing. Some of it will be feedback in the form of giving them PR reviews. How are they doing in their coding areas? Did you remember to add unit tests to your code? We’ll have to give feedback on that as well.

Some of it is actually on how to interact with others or how to deal with other people and teams that they’re working with, how to prioritize the work that they’re doing, how to make best use of their one-on-ones. There’s a lot that they need feedback on that helps engineers grow in their skills over time. For early career engineers, those are a few of the things that we look at. We need to give them time to learn. There are a lot of things going on. As I said, I have to be reading all the time. They’re actually implementing this stuff. They actually need to have time to actually try out new things and practice with it. That can be a difficult thing to carve out of the busy schedules and the rapid release cycles that we have, the continuous deployments, and everything else.

If we don’t carve out that time, they will never develop that comfort of switching between languages, trying out new tools quickly and learning how they work. Fostering that is a good thing to do. We actually do, I think on the order of four or five hackathons a year on our team, scheduled between various cycles that we have. Those hackathons are an opportunity for them to try something that they otherwise wouldn’t have an opportunity to try.

We need to get them to ask questions. Early in career folks, again, coming from universities, are of the opinion that asking questions indicates that they're not understanding what they have to do. The finding typically is that they're not asking questions until they've been stuck for much too long. We need to actually encourage them to go to the senior engineers, go to their managers, and say, I am trying these things, but I'm still stuck; I'll keep working on it, but if you have somebody who might be able to unblock me, that would be useful. I'll go over what my team does later, but it takes us almost six months to get them to the point where they feel comfortable doing that. That is an unfortunately long time, where you find that the first set of projects they worked on actually took much longer than they really needed to.

Then they need mentoring. You need to provide assistance both as a manager and by finding them somebody, maybe a more senior peer, who can actually help them and guide them through some of these learnings that we're talking about. Then, finally, collaboration. One of the things we want people to do is actually talk to other disciplines, talk to other people on other projects, in order to foster collaboration, not only in the sense of, I'm working with the people on my team really well. We're building this product together. It's going great.

Often, there are ideas on other teams that you are working with that may actually be allowing them to leverage work that somebody else has done or share something that they’ve done with that team that reduces this duplicate effort that goes on in teams most of the time. Encourage that type of collaboration or listening in on other people’s talks or project design presentations and things like that, so that they can learn from that.

Senior engineers, what are we looking for there? They’re looking for feedback too. It just needs to be from more senior people about how they grow their careers. They need time to learn too. The learning process doesn’t stop. Mentoring, same thing. Here, we’re not only looking for them to get mentoring, we want them to learn to be mentors to the more junior engineers. By giving them coaching on, how do you give mentorship to somebody else, we’re actually building up a nice cycle here where we are in some ways offloading that work, but we’re actually making it more scalable as an organization. Collaboration, this should be a group that is actually widening that circle of impact. You don’t want them just looking at their team or maybe the neighboring teams. This is the one where you want them looking to get across your organization or maybe across your different verticals in your company and looking for opportunities to collaborate or work together.

At the senior and principal engineer levels, you really want them looking at these ideas a little bit differently. You may also want them to be coming in and proposing ideas for things like the hackathons. They may look at an opportunity and say, for our next hackathon, we're not just opening it up to people doing whatever they want. Here are some ideas that would be useful to us maybe six months out, but we can actually start experimenting on them now.

Cultivating relevant skills. We've got all the standard things you'd expect for engineering. You want people to be able to code, to write unit and integration tests. You need them to be able to do DevOps, MLOps as the case may be, or LLMOps as the latest one is, cloud infrastructure and monitoring, and system design. Those are gimmes. None of these change just because you have ML engineers. They still need to be able to do all these things as well. That doesn't change at all.

Production Machine Learning

What changes for machine learning in a production environment? What are the skills that you need to help people develop? They actually need to understand how AI and machine learning is done by data scientists. Data scientists, as many of you may work with them already, often do work in notebooks. There are ways to automate notebooks and push the code directly out, but that’s not necessarily the most scalable, the most manageable code, the easiest way to do it. They also do a lot with data. They need data from different sources. They need to pull it in. They need different types of feedback loops coming in.

The more your engineers, and even your PMs, understand how the data science process works, how that code works, how that code is developed, the more they're able to assist in making that code scalable: building processes that are more resilient, that actually have the right telemetry, and that give you the ability to make this production-ready, as we're used to with the more typical systems that we build. That's the big thing here. Creating environments where these people actually work together is one of the first things that helps your machine learning engineers become more aware of what they can do to scale these processes.
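
To make that concrete, here is a minimal, hypothetical sketch of the kind of refactoring involved: a typical notebook cell (group events by user, compute session length) turned into a typed, logged function that can be unit tested and monitored. The column names are invented for the example.

```python
import logging

import pandas as pd

logger = logging.getLogger("feature_pipeline")


def add_session_features(events: pd.DataFrame) -> pd.DataFrame:
    """Notebook-style transformation wrapped so it can be tested and monitored."""
    required = {"user_id", "timestamp"}          # invented column names
    missing = required - set(events.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")

    sessions = (
        events.groupby("user_id")["timestamp"]
        .agg(first="min", last="max", n_events="count")
        .reset_index()
    )
    sessions["duration_s"] = (sessions["last"] - sessions["first"]).dt.total_seconds()

    # The telemetry hook that rarely exists in the original notebook.
    logger.info("built session features for %d users", len(sessions))
    return sessions
```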

Data management. Typically, we’ve dealt with databases. We have database administrators. We have people dealing with this. We deal with scaling out of databases. Data management is a little different in the machine learning world. You actually have to keep track of what data is used to train your models. You have to keep track of test sets that you might use to validate your models. You have to keep track of moving data from one place to another, maybe reformatting the data. You have to maybe do aggregations that the data scientists require in order to get that data in the right place. You’re building lots of pipelines to move the data around from one place to another, transform it in different ways, and then keep track of it. You can’t just say, I’ll just do a query and I’ll grab that same data. You have to know that you are actually using the right data for this particular instance. You have to monitor the data to see if there’s actually data shifts occurring in order to see if your models have to be retrained.
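
As a rough illustration of that bookkeeping (not something prescribed in the talk), the sketch below fingerprints the exact training data used for a run and computes a simple drift statistic against newly arrived data. The file paths, column name, and the 0.2 threshold are all placeholder choices.

```python
import hashlib
import json

import numpy as np
import pandas as pd


def dataset_fingerprint(df: pd.DataFrame) -> str:
    """Stable hash recorded alongside a training run: 'this exact data was used'."""
    row_hashes = pd.util.hash_pandas_object(df, index=False).values
    return hashlib.sha256(row_hashes.tobytes()).hexdigest()


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between training data and fresh data."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


train = pd.read_parquet("training_set.parquet")   # placeholder paths and column
fresh = pd.read_parquet("last_24h.parquet")

manifest = {
    "train_hash": dataset_fingerprint(train),
    "psi_load": psi(train["load_mw"].to_numpy(), fresh["load_mw"].to_numpy()),
}
print(json.dumps(manifest, indent=2))
if manifest["psi_load"] > 0.2:                    # common rule-of-thumb threshold
    print("data shift detected: consider retraining")
```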

That’s another instance where that’s new for the machine learning engineers. Lots of different tools for this, lots of different ways to do it, but it is a skill that’s actually necessary as your machine learning engineers work with everyone else. Privacy and security. We all worry about security and illegal access to things. Do people have the right authorization to access systems? We also need to worry as we’re gathering this data about our customers or others. Are we actually maintaining privacy in the right ways? Are we not logging data that shouldn’t be logged? Are we training our models with data in an eyes-off fashion where we don’t actually look at the data? What’s our proxy data for those purposes? Are we making sure that we’re enforcing all those rules as engineers on everyone else who’s using the system? Privacy and security play hand in hand, but it takes on a different flavor than some of the other cases where we may be using it.

Training pipelines, I mentioned this earlier. When you’re training models or fine-tuning models in some of the LLM cases or others, you’re actually looking at building out a way of training the model on a set of data. If you want to be able to make valid comparisons from one training run to another, you actually have to make sure you’re using the same data. If you don’t, your comparison is not really valid anymore. You have some golden set of data that you use to validate your training. Training means making sure you have consistency in how you manage your data for training. Again, where engineers come in on this is very likely the data scientists will pull some data down, train their model, all is good, but now we need to automate that process. Maybe we need to actually retrain the model monthly on some set of data, and we need to have the right criteria.

In the case of forecasting, we actually have to retrain the model every single 15-minute increment in order to do energy forecasting. In the case of something else, maybe it’s a fine-tuned model for LLMs. Maybe we’re not doing it as frequently, but if the model version changes, we have to redo it anyways. When do we do it? How frequently do we do it? Engineers actually need to think about what does that schedule of data movement and automation look like in order to actually build up these training pipelines. Model and prompt management. Model management with MLOps has been around for the last 6 or 7 years, 8 years maybe now at this point. That was the idea that I train this model, I’m going to keep a version of it, and I need to remember that this is the version. Now I’m going to train up a new one. I’m going to deploy it to production.
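
A sketch of that scheduling decision, assuming hypothetical hooks for data loading, training, and evaluation: retrain when the model is stale or when drift crosses a threshold, and always score against the same golden set so runs stay comparable.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class RetrainPolicy:
    max_age: timedelta          # e.g. 15 minutes for the energy-forecasting case
    drift_threshold: float      # e.g. PSI above 0.2

    def should_retrain(self, last_trained: datetime, drift_score: float) -> bool:
        stale = datetime.now(timezone.utc) - last_trained > self.max_age
        return stale or drift_score > self.drift_threshold


def retrain_if_needed(policy, last_trained, drift_score,
                      load_training_window, train_model, evaluate, golden_set):
    """load_training_window / train_model / evaluate are placeholder callables."""
    if not policy.should_retrain(last_trained, drift_score):
        return None
    model = train_model(load_training_window())
    # Always the same golden set, so metrics are comparable from run to run.
    metrics = evaluate(model, golden_set)
    return model, metrics
```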

I had a failure, I want to be able to roll back to the previous version, so I actually kept the previous version around as well. Now we’re going to this whole thing where we’re using these large language models that you’re not necessarily training. Most people probably aren’t fine-tuning them at this point. They’re just using them out of the box as APIs. What happens when that API version changes? Does your prompt still work correctly? How are you testing that? How do you know when that model has changed? Are you keeping track of that? Are you making sure? What if they decide that they’re just ending that particular version of the model? Are you prepared for those model changes? Are your prompts staying up to date?
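
One way to make prompt management concrete (a sketch, not any particular product's mechanism): pin each released prompt to the model version it was evaluated against, and refuse to serve it silently when the underlying API model changes.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptRelease:
    prompt_id: str
    prompt_text: str
    validated_model: str    # the model version this prompt was evaluated against
    eval_run_id: str        # link back to the evaluation that approved it


# Illustrative in-memory registry; a real one lives in a database or config service.
REGISTRY = {
    ("summarize", "prod"): PromptRelease(
        prompt_id="summarize-v7",
        prompt_text="Summarize the document below in three bullet points...",
        validated_model="gpt-4o-2024-08-06",
        eval_run_id="eval-2025-06-12-a",
    ),
}


def get_prompt(name: str, stage: str, serving_model: str) -> str:
    release = REGISTRY[(name, stage)]
    if serving_model != release.validated_model:
        # The API behind you changed; re-run evaluation rather than shipping blind.
        raise RuntimeError(
            f"{release.prompt_id} was validated on {release.validated_model}, "
            f"but the serving model is {serving_model}"
        )
    return release.prompt_text
```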

Evaluation. We talked a little bit about this in terms of validating our models. With LLMs, evaluations have become even more difficult. There is no right or wrong answer most of the time with the LLMs. It's all probabilistic. That's probably driving most of your engineers nuts. There are evaluation techniques and methods for this. There's the need for dashboards, and for being able to keep track of the model versions you're calling, keep track of the prompts you're using, and actually run these evaluation pipelines on a regular basis. There's a feedback loop coming in from your production side; I have that a little bit further on. You want to use those signals to determine whether or not you are still giving what you consider valid answers. You can see that most applications using LLMs now are not guaranteeing that the answer is absolutely correct, because it's a summarization of something else; they're telling you that these answers are generated.

They’re pointing you back to the original source and they’re giving you footnotes essentially, go look here to see what the original source said. You have to think, how am I evaluating that I’m giving the right answers? You’re also evaluating to see whether you’re susceptible to jailbreaks, people trying to understand what’s going on in your prompts. You’re also trying to evaluate that you’re not doing any harm. Your output of your LLM is not giving any harmful answers.

Again, building pipelines, maybe with a dashboard that shows all these things happening, is another thing that your ML engineers are looking to create. Again, it's about automation and scaling. Your data scientists will probably build this once and show the results for a particular instance. What you need to be able to do is scale that out as the environment keeps changing and updating: can you continue to say that everything is really working as designed? Can you do that in a test environment, so that you can test out what a new model version is going to produce, and be able to know that we need to send this back and write new prompts for this before we release it?
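
For illustration, a minimal evaluation harness of the kind being described: score one model-plus-prompt configuration against a fixed evaluation set and emit the rates a dashboard would chart. `call_llm` and `judge` are placeholders; teams often use task-specific checks or an LLM-as-judge behind them.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    prompt_input: str
    reference: str                      # the source document being summarized


def run_eval(cases: list[EvalCase],
             call_llm: Callable[[str], str],
             judge: Callable[[str, str], bool]) -> dict:
    """Run a fixed eval set through one model+prompt configuration."""
    grounded = 0
    leaked = 0
    for case in cases:
        answer = call_llm(case.prompt_input)
        grounded += judge(answer, case.reference)          # faithful to the source?
        leaked += "system prompt" in answer.lower()        # crude jailbreak signal
    n = len(cases)
    return {
        "n": n,
        "grounded_rate": grounded / n,
        "prompt_leak_rate": leaked / n,
    }
```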

Telemetry. Again, telemetry is not much different from what we've always done. We do want to know sometimes what answers we're giving. There are always the opt-ins, where we actually want to see the questions people are asking and the answers that they're getting, especially if they provide us the authorization in feedback that says thumbs up or thumbs down, because that's an opportunity to add to the data that we have for our evaluation as well. Telemetry becomes important with LLMs, but in regular machine learning as well, we want to know, are our answers accurate? Think about a forecasting example. We could do forecasting and we could give a certain set of results.

Obviously, 15 minutes from now in the energy case, we’d actually have the answer. We want to look at that telemetry and say, we know the answer now. We gave this answer previously. We want to actually compare the two and then put that into our telemetry so we can decide if our model is due for some retraining or update because there’s been a sudden shift in the environment or what’s being done. Then, finally, human in the loop. Human in the loop is one of the things that we want to do. I think GitHub Copilot is a good example. It will give you an answer or it will maybe do your PR thing for you. It’ll ask you to look through all the answers it gave. You can mark off whether you want to accept these or not. We’re generating code in some cases. Do we want to automatically have something generate code, check it in, and have it merged into your main branch?

Probably not. You probably still want somebody to be looking at it and evaluating whether that code is something that you want to actually release. We have several cases of this in a variety of different scenarios. You want to think about a human in the loop. I think Office does this as well. It gives you multiple outputs. It lets you decide, do you want it to rewrite or do you want it to be assistive? We want to think about how to incorporate human in the loop in a lot of the different systems that we build as gatekeeping before some of this stuff goes out or before it’s accepted. Something to think about in that process.
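
A bare-bones sketch of that gatekeeping pattern: generated artifacts sit in a pending state, and nothing downstream consumes them until a named human approves. This is a pattern illustration, not any specific product's workflow.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class GeneratedChange:
    diff: str
    status: str = "pending"             # pending -> approved or rejected
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

    def approve(self, reviewer: str) -> None:
        self.status, self.reviewer = "approved", reviewer
        self.reviewed_at = datetime.now(timezone.utc)


def merge(change: GeneratedChange) -> None:
    if change.status != "approved":
        raise PermissionError("generated code requires human approval before merge")
    print(f"merging change approved by {change.reviewer}")
```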

Then, finally, user feedback is what closes the loop. We've all seen it. Thumbs up, thumbs down, three stars, whatever your type of rating system is, most of these systems do need some feedback loop that ties back to the model training and the model evaluation and everything else. Your engineers, similar to doing telemetry, need to be thinking about incorporating that into whatever they're building, so that it can come back in as data that actually enables us to make our systems incrementally better at each release. It's not something that is an automatic gimme. Thumbs up, thumbs down isn't just about how good a job you're doing; it actually gives you feedback on how your model or models are performing, and which ones may need some modification.
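
Closing the loop can be as small as this sketch: store each thumbs up or down together with the model and prompt versions that produced the answer, so the signal can feed evaluation sets and retraining decisions later. The `sink` argument stands in for whatever telemetry pipeline is already in place.

```python
import json
from datetime import datetime, timezone


def record_feedback(request_id: str, rating: str, model_version: str,
                    prompt_id: str, sink=print) -> None:
    """rating is 'up' or 'down'; sink stands in for your telemetry pipeline."""
    if rating not in {"up", "down"}:
        raise ValueError(f"unexpected rating: {rating}")
    sink(json.dumps({
        "request_id": request_id,
        "rating": rating,
        "model_version": model_version,   # lets you compare versions later
        "prompt_id": prompt_id,
        "ts": datetime.now(timezone.utc).isoformat(),
    }))


# Down-rated answers are natural candidates for the evaluation set.
record_feedback("req-123", "down", "gpt-4o-2024-08-06", "summarize-v7")
```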

Questions and Answers

Participant 1: The user feedback, is it part of testing or production?

Vivek Gupta: Yes, it should be part of the system. It should be part of your design that there is a feedback loop. Similar to how you design for telemetry, user feedback should actually be considered. Often, your applied scientists or data scientists will think about that as well. Feedback loop is a form of telemetry, essentially, and you really want to incorporate that at your design time because then you can build the pipeline that you need to incorporate that with your model evaluation.

Participant 2: Do you see changes in the new people coming in now that AI is in the picture, how you train them?

Vivek Gupta: The biggest change we’ve definitely seen in the last year or so is how people are using things like GitHub Copilot, or Claude, or any of these other tools, to actually write their code. I’m seeing a split in how people are doing it. There is a group of people who will have it write their code, and then they will write their unit tests to see that it’s doing the right thing. Then there’s another group of people that are writing unit tests first, and then based on the unit test and the specification, having it generate the code. That way they’re getting their correctness in first, and then getting the results afterwards. I’m not sure which one is better yet.
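
The second workflow can be as simple as writing the specification as a test file before any implementation exists, then asking the assistant to generate code until the tests pass. Everything in this sketch, including the `slugify` function and its module, is invented for illustration.

```python
# test_slugify.py -- written by the engineer before any implementation exists.
# The assistant is then asked to implement slugify() so that these tests pass.
from slugify_util import slugify   # hypothetical module the assistant will create


def test_lowercases_and_hyphenates():
    assert slugify("Machine Learning Engineers") == "machine-learning-engineers"


def test_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"


def test_collapses_whitespace():
    assert slugify("  too   many   spaces ") == "too-many-spaces"
```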

Participant 2: Are you encouraging them to use Copilot to do their work?

Vivek Gupta: We definitely are. We do have a lot of checks in place to see that we’re testing everything, and integration tests and stuff like that. Those are still largely handwritten for the integration tests. We are definitely encouraging people to use it as an assistant. Don’t use it as a replacement for yourself and hope that everything goes perfect, but we are encouraging people to use it.

What is MAIDAP? It's the Microsoft AI Development Acceleration Program: an early in career program. We have software engineers, applied scientists, and product managers on our team. We're cohort-based. The different disciplines come in at different levels. We have bachelor's degrees, master's degrees, PhDs, and MBAs. It spans the spectrum. We're based in Cambridge, Massachusetts, and out in Redmond, Washington. We're a team that's split up. As many probably know, Microsoft's stance is that you can work from home, it just depends on your team. We actually are hybrid. Since these are early in career folks, we do want them in the office, we want them interacting with each other, so they're in a hybrid mode. They do work across the two sites together. We don't treat the two sites as two separate entities. We actually make them work together, dealing with the time zone differences and everything. It is centrally located in Microsoft's organization.

The advantage there has been, when I talk about the projects, that as a central organization, we get to see what’s going on across Microsoft in terms of AI and machine learning. In the two years that these folks are with us, and the four different projects that they work on, they work across Microsoft in different types of technologies, tech stacks, different purposes that they’re trying to accomplish. Some are UI based. Some are backend systems. They’re getting a breadth of experience about how AI and machine learning is deployed and delivered in production systems.

Then they go to a product team at the end of the two years and join them permanently, or as permanently as permanent is. They go off to those teams and they’ll work there. They maintain that network of coming back to us and asking questions, sponsoring projects on our team, and things like that. It’s an opportunity for them to get some breadth before they pick the area that they’re interested in.

The challenges at MAIDAP. All of you have similar challenges. We have 10 projects that we run with 10 different teams, and those projects are very diverse. One of the differences is, every 6 months, they change projects and they have to onboard to a new team. Each of these individuals, over the course of 2 years, might go through four different programming languages, four different tech stacks, four different mechanisms for deploying their code. Microsoft does not have a unified system across the entire company. That variable tech stack and rapidly moving technology hardens them a little bit towards dealing with new things.

As I mentioned earlier, we try to do four to five hackathons a year as well. Again, in order to keep them current, we have reading groups, and we do a bunch of different things so that people are staying current with what's going on. That makes them more valuable to us as they transition to the product teams. Even while they're with us, we get comments from the teams that we work with that these individuals, even given the fact that they just came out of school and some of them just have a bachelor's degree, are more creative than a lot of the people they have on their team, because they're willing to take risks. We encourage that risk-taking. It is a challenge that they're dealing with, but it's also an opportunity.

Then, what are the highlights? These are curious, bright, driven individuals. They are fun to work with. They bring an energy that's totally different, and having them in the office with us gives all of us a lot more energy for doing these things. It's actually a lot of fun. The hackathons I've mentioned are really a highlight, I think, for us a lot of the time. Then, the career development of these individuals is what led to this talk in particular: I get to focus on that. We are delivering innovative features to products across Microsoft with the partner teams that we work with, but the career development of these individuals, working with them and getting them to be more senior engineers over their two years, is actually a very joyous thing to be able to spend time doing. Then, I do enjoy the fast-paced environment, that constant change, doing something new each time, looking at what the latest thing is. I'll give you an example. In the fall of 2022, ChatGPT got announced.

At that time, I think we were doing one LLM project, so large language model related project. It was a random one. January of the following year when our next cycle began, our next 6-month program, 9 out of 10 of our projects were LLM projects. We have that ability to completely shift what we’re doing and focus on the immediate need of the teams that we’re working with and help them jump ahead on the work that they’re doing. That was an opportunity for us to keep ahead of things. That’s where the giving people room to learn, room to have reading groups, the room to read research papers and things like that, really helps us continue to be innovative in what we’re doing and what we’re proposing to the teams that we work with.

Participant 3: Do you have any resources for any entry-level engineers or senior engineers to start with machine learning and AI programming?

Vivek Gupta: I think most of the time, most of us are just out there reading arXiv or elsewhere. We're looking at papers, looking at what the latest trends are. If you're starting with something, I think GitHub Copilot is free now, at least for some basic level of use, so grab one of those papers and implement something, and then work your way up from there. Once you implement something, figure out how to deploy it. We do a big onboarding week. We have everyone start on the same day. July 7th this year, we have 34 people starting, 17 in Redmond, 17 in Cambridge, and we actually do a week-long onboarding for them to get used to being at Microsoft, which, for one, is a new company for them, and to what they'll be doing. We actually create a two-day project, a mini hackathon, where our previous cohort designs an experiment.

I think the one right now is, how do I build something that uses MCP, and actually build a bot around it using some of the tools that are available? Each of the individuals on the team, in those two days, will actually build a bot, add MCP, and add some functionality where it's calling out to something else and getting a result. Building a little project like that is, I think, the fastest way to understand how the pieces fit together. Like I said, it's the prior cohort, which now only has a year of experience, that is building this based on what they've learned in the last year, and saying, this is what we think will be useful for the incoming cohort to learn in the first week that they get here. Pick some small project, something that you want to build an answer to, and just have people go out and try doing it.
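
In the spirit of that exercise, here is a small sketch of an MCP server exposing one tool that calls out to something else and returns a result. It assumes the official Python SDK (`pip install mcp`) and its FastMCP helper; treat the exact imports, and the tool itself, as illustrative.

```python
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("onboarding-bot")


@mcp.tool()
def fetch_snippet(url: str) -> str:
    """Fetch a page and return its first 200 characters."""
    with urllib.request.urlopen(url) as resp:      # demo only: no auth, no retries
        return resp.read(200).decode("utf-8", errors="replace")


if __name__ == "__main__":
    mcp.run()   # any MCP-capable chat client can now call fetch_snippet
```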

Participant 4: As a senior developer, I’m senior in terms of overall experience, but I am definitely a beginner in terms of AI. Would you suggest that for seniors, the same thing, just start experimenting, play around a little bit?

Vivek Gupta: Same thing. The tools are fantastic. There’s a lot of built-in things. Whether you pick up Azure, AWS, GCP, you pick up any one you want or pick some tools that you download, there’s enough out there, to say, I have this problem. One that I’m working on, I can tell you right now. I do photography. I’ve gotten back into film photography. I take notes while I’m taking my photos in order to keep track of, what settings did I use? Did that picture turn out the way I want? I’m building myself a little chatbot that I can talk to with my headphone and keep notes. That’s just for me, but it’s letting me actually see how all these pieces fit together so that, again, I’m staying fresh with how some of this works. I really think that the best way is pick something that’s interesting to you and just try doing it.

Participant 5: How do you ensure that ML engineers are integrating their work, the specialized ML work they do with other types of engineers at the company who might not be specialized or might be specialized in something different?

Vivek Gupta: That was one of the first things. ML engineers are still engineers. They really have to be just like any other engineer in terms of how they do their work together. We still have PMs specifying functionally what needs to be built. At the end of the day, there are these integration points that have to happen. If you were integrating with a library or something else or another set of web services, you’re ending up doing the same sort of integration that has to be there. That doesn’t change much. The product teams that we work with, some of the teams we work with don’t do any ML themselves. What ends up happening is we take on the work of building the pipelines to move the data.

The work that we may give them is, here’s the specification of where we need the data to be in a particular format or something else. Or, could you build this data pipeline of moving this data from here to here, which they’re quite capable of doing. They don’t necessarily need to know what’s going to happen with the data. Obviously, we do talk to them about what’s going to happen with it because that gives us an opportunity to help them learn what ML engineers do if they want to help do it in the future.

Participant 5: Is then the incentive structure for full-stack engineer versus ML engineer the same?

Vivek Gupta: I think it ends up being interest. We have had a few people come in and say, it was great going through the program, but I want to be a full-stack engineer instead where I’m getting to work on more UI and other things and actually building the application side of things. Then we’ve had others who are like, I want nothing to do with that frontend stuff or the full-stack bit, I want to build all these backend processes that scale AI. I think part of the program for us is they find where their niche is and they move into that. I don’t know that there’s a particular benefit of one over the other. I think it ends up being interest to some extent.

Participant 1: The program itself, is it specific to early career, or are there similar programs for software engineers switching from a senior role?

Vivek Gupta: This program in particular is early in career. I think Apple and Facebook have one for people who want to switch what they’re doing in their path, but ours is specifically early in career, pretty much just graduating from college. My busy recruiting season is September and October when I’m hiring people.