As a new year begins to unfold, the operative word surrounding the enterprise world’s adoption of artificial intelligence is uncertainty.
This lack of confidence surfaced in survey results recently released by Cisco Systems Inc., which found that only 13% of responding companies were fully ready to capture AI's potential, down one percentage point from a year earlier.
“They believe there’s potential, but they have a lot of fear about what’s unknown,” said Chuck Robbins (pictured), chairman and chief executive officer of Cisco. “There’s so much promise, there’s still more to figure out.”
Robbins made his remarks at the company’s AI Summit, a small thought leadership gathering of 150 company executives and analysts held this month in Palo Alto, California. While the event provided an opportunity to showcase Cisco’s release of AI Defense, a new tool for safeguarding AI systems, it also offered an intriguing look at the challenges confronting an ecosystem of companies that are focused on moving the technology forward.
Those challenges run the gamut, from intense competition for silicon market share, model pricing and security to regulatory concerns, agentic deployment and how to generate meaningful productivity gains from investment. “The ability to make people more productive is hugely powerful,” David Solomon, chairman and CEO of the Goldman Sachs Group Inc., said during an appearance at the Cisco event.
Solomon, who noted that his firm had been “using artificial intelligence for two decades,” revealed that AI is now being used to write the very documents that companies rely on investment banks to provide when submitting public filings.
A prime example is the S-1, the registration statement that companies must file with the Securities and Exchange Commission before an initial public offering of stock. "Today, you can basically have something that's 95% of the way there in a few minutes," Solomon said. "You still have to put a lot of time into differentiating that last 5%, but that's a huge productivity gain."
Models show signs of reasoning
Those productivity gains may become more significant as AI's capabilities continue to expand. One of the participants in Cisco's AI Summit was Aidan Gomez, founder and CEO of Cohere Inc. Gomez, whose previous work at Google LLC contributed to the development of the Transformer deep learning architecture, outlined a progression in which AI models were beginning to move beyond mere knowledge aggregation toward the creation of genuine insight.
“Are the models just regurgitating stuff they’ve already seen or are they actually delivering us new ideas, things that they haven’t seen before out on the internet or when they were trained?” Gomez said. “We’ve already seen instances of generalization and new ideas. They can create things that a human hasn’t created before.”
An example of how far AI has progressed in a short period of time can be found in the September release of OpenAI's o1 model. Gomez noted that the model, which can decode scrambled text and accurately answer questions requiring Ph.D.-level knowledge, is an important step forward in AI's capability.
“The big thing right now is reasoning,” Gomez said. “The input space of the system is literally anything. The expectation from the user is you have to respond immediately and you have to be correct. That’s an insane expectation. Of course we should spend different amounts of compute, different amounts of energy on different problems. What reasoning has unlocked is the ability to do that.”
Reasoning is expected to play a significant role in the development of AI agents, intelligent pieces of software that can perform tasks autonomously. The rise of AI agents was one of the key tech storylines of 2024, and it has been a major focus for industry players such as Salesforce Inc., Boomi LP and ServiceNow Inc.
The effectiveness of AI agents will depend on an ability to gather data from a variety of sources, and one of the primary technologies for accomplishing this is enterprise search. This has propelled companies such as Glean Technologies Inc. into the spotlight as users of AI agents must find ways to access massive amounts of data spread across different applications and products.
“Search has now become a core foundation for any AI agent you want to build in the enterprise,” Arvind Jain, founder and CEO of Glean, said during a discussion at the Cisco event. “That’s why it has become such a hot topic now. There is going to be this amazing team of surrounding AI agents that are assisting you.”
Yet several of the summit participants expressed a belief that the agentic AI wave may still be a few miles offshore. Cohere’s Gomez noted that enterprises must still work on creating an infrastructure that will support the type of data integration necessary to drive agentic AI that is fully autonomous.
“It’s totally augmentation right now, it’s not automation,” Gomez said. “We’re 18 months away from something really compelling, where you can see roles that can have their day-to-day changed.”
Riding the inference wave
Not to be forgotten in the world’s adoption of AI is the role that hardware will play in facilitating productivity and usage. Although Nvidia Corp. has forged a platform that integrates hardware, software and systems engineering into a massive ecosystem, there are still other companies working with silicon to support the booming AI market.
One of these is Groq Inc., a processor company founded in 2016 by a group of former Google engineers who were instrumental in designing the TPU, or tensor processing unit. The firm focuses on AI inferencing through its LPU, or language processing unit, which performs inference tasks at lightning-fast speeds.
Groq is happy to leave dominance in the GPU or graphics processing unit market to Nvidia, according to founder and CEO Jonathan Ross.
“We spotted that inference was going to be a wave,” Ross said. “You cannot compete with Nvidia, and the world does not need another GPU manufacturer. What you do is find an unsolved problem and you solve it. The problem we solved was AI was slow. We’re still 10x faster than a GPU.”
Though Groq may have found a solution to speed in AI inferencing, the tech industry is still grappling with an insatiable appetite for data. Scale AI Inc., a startup that feeds data to OpenAI and Nvidia for training models, built its business on an ability to capture and channel vast amounts of information for driving artificial intelligence.
However, there are growing concerns that the data well may soon run dry. In December, OpenAI co-founder Ilya Sutskever warned that the industry had achieved "peak data" and that the practice of pretraining models on massive amounts of information would soon come to an end. That was very much on the mind of Scale AI co-founder and CEO Alexandr Wang.
“We need new sources of data,” Wang said at the Cisco event. “How do you produce advanced frontier datasets to fuel the forward progress of these industries? Synthetic data is not going to magically solve all of our data problems.”
Potential regulatory roadblocks
The AI ecosystem is also being forced to confront the prospect of government regulation that could hamper future growth. The industry managed to dodge one attempt at regulation last fall when California Governor Gavin Newsom vetoed a bill that would have imposed new AI safety regulations. However, he did sign legislation mandating transparency in generative AI.
At the federal level, President Donald Trump signed an executive order this month that repealed previous government policies on AI which contained guardrails that companies in the U.S. were required to follow. Yet, Trump’s order calls for the development of a new action plan for AI within the next 180 days, and it remains uncertain what that will contain.
“I’m very skeptical of overregulating AI right now,” said Aaron Levie, co-founder and CEO at Box Inc., who expressed concern that companies such as OpenAI, Meta and Anthropic would have been subjected to undue scrutiny if the proposed California law had been signed. “All of a sudden, the AI frontier labs now have to hire dozens, hundreds of regulatory people, policy people, legal folks every time they want to do a new training run. Nearly a dozen players are running at full speed to leapfrog each other in model capability and pricing and innovation. What would happen if every single one of them now went 50% slower?”
Despite potential roadblocks involving a lack of data and regulatory oversight, the AI industry continues to roll onward. This was captured in remarks by Fei-Fei Li, one of the world’s foremost researchers in AI and deep learning. Li, who was formerly chief scientist of AI and machine learning at Google Cloud and now serves as co-founder and CEO of World Labs Inc., described her current initiative to build LWMs or large world models that perceive and interact with the 3D world.
Li’s work in spatial intelligence involves equipping robots with the ability to navigate physical spaces. It represents what Li characterized as the “next generation foundation model” for robots, another hallmark in the era of innovation that AI has spawned.
“We believe the ability to generate and interact with 3D worlds, real or virtual, is fundamental for intelligent agents,” Li said. “AI is real. It really is genuinely the new computing.”
Photo: Cisco/livestream