- Software developers at a Business Insider roundtable said that while AI can increase productivity, stakeholders need to understand its limitations.
- This article is part of the ‘CXO AI Playbook’: conversations with business leaders about how they test and use AI.
At a Business Insider roundtable in November, Neeraj Verma, head of applied AI at Nice, argued that generative AI “makes a good developer better and a worse developer worse.”
He added that some companies expect employees to be able to use AI to create a web page or HTML file and simply copy and paste solutions into their code. “Right now,” he said, “they expect everyone to be a developer.”
During the virtual event, software developers from companies including Meta, Slack, Amazon, and Slalom discussed how AI has affected their roles and career paths.
They said that while AI can help with tasks like writing routine code and translating code between programming languages, fundamental coding skills are still needed to use the tools effectively. Communicating this reality to non-technical stakeholders is a primary challenge for many software developers.
Understanding limitations
Coding is only part of a developer’s job. As AI adoption increases, testing and quality assurance may become more important for verifying the accuracy of AI-generated work. The U.S. Bureau of Labor Statistics predicts that the number of software developers, quality assurance analysts and testers will grow 17% over the next decade.
Productivity expectations may overshadow concerns about the ethics and safety of AI.
“Interacting with ChatGPT or Claude is so simple and natural that it can be surprising how difficult it is to monitor AI behavior,” said Igor Ostrovsky, co-founder of Augment, during the roundtable. “It’s actually very difficult, and there’s a lot of risk involved, to try to get AI to behave in a way that consistently gives you a delightful user experience that people expect.”
Companies have faced some of these issues in recent AI launches. Microsoft’s Copilot was found to have problems with oversharing and data security, though the company created internal programs to address the risks. Tech giants are investing billions of dollars in AI technology – Microsoft alone plans to spend more than $100 billion on graphics processing units and data centers to power AI by 2027 – but far less in AI governance, ethics, and risk analysis.
AI integration in practice
For many developers, managing stakeholder expectations – communicating the boundaries, risks and overlooked aspects of the technology – is a challenging but crucial part of the job.
Kesha Williams, head of enterprise architecture and engineering at Slalom, said during the roundtable that one way to frame these conversations with stakeholders is to outline specific use cases for AI. Focusing on concrete applications of the technology can highlight potential pitfalls while keeping an eye on the big picture.
“Good developers understand how to write good code and how to integrate good code into projects,” said Verma. “ChatGPT is just another tool to write some of the code that goes into the project.”
Ostrovsky predicted that the way employees interact with AI will change in the coming years. In an age of rapidly evolving technology, he said, developers will need a “desire to adapt and learn and the ability to solve difficult problems.”