How Exposed Endpoints Increase Risk Across LLM Infrastructure

News Room | Published 23 February 2026 | Last updated 23 February 2026 at 7:28 AM

The Hacker NewsFeb 23, 2026Artificial Intelligence / Zero Trust

As more organizations run their own Large Language Models (LLMs), they are also deploying more internal services and Application Programming Interfaces (APIs) to support those models. Modern security risks are being introduced less from the models themselves and more from the infrastructure that serves, connects and automates the model. Each new LLM endpoint expands the attack surface, often in ways that are easy to overlook during rapid deployment, especially when endpoints are trusted implicitly. When LLM endpoints accumulate excessive permissions and long-lived credentials are exposed, they can provide far more access than intended. Organizations must prioritize endpoint privilege management because exposed endpoints have become an increasingly common attack vector for cybercriminals to access the systems, identities and secrets that power LLM workloads.

What is an endpoint in modern LLM infrastructure?

In modern LLM infrastructure, an endpoint is any interface where something — whether it be a user, application or service — can communicate with a model. Simply put, endpoints allow requests to be sent to an LLM and for responses to be returned. Common examples include inference APIs that handle prompts and generate outputs, model management interfaces used to update models and administrative dashboards that allow teams to monitor performance. Many LLM deployments also rely on plugin or tool execution endpoints, which allow models to interact with external services such as databases that may connect the LLM to other systems. Together, these endpoints define how the LLM connects to the rest of its environment.

The main challenge is that most LLM endpoints are built for internal use and speed, not long-term security. They are typically created to support experimentation or early deployments and then are left running with minimal oversight. As a result, they tend to be poorly monitored and granted more access than necessary. In practice, the endpoint becomes the security boundary, meaning its identity controls, secrets handling and privilege scope determine how far a cybercriminal can go.

How LLM endpoints become exposed

LLMs are rarely exposed through one failure; more often, exposure happens gradually through small assumptions and decisions made during development and deployment. Over time, these patterns transform internal services into externally reachable attack surfaces. Some of the most common exposure patterns include:

  • Publicly accessible APIs without authentication: Internal APIs are sometimes exposed publicly to speed up testing or integration. Authentication is delayed or skipped entirely, and the endpoint remains accessible long after it was meant to be restricted.
  • Weak or static tokens: Many LLM endpoints rely on tokens or API keys that are hardcoded and never rotated. If these secrets are leaked through misconfigured systems or repositories, unauthorized users can access an endpoint indefinitely.
  • The assumption that internal means safe: Teams often treat internal endpoints as trusted by default, assuming they will never be reached by unauthorized users. However, internal networks are frequently reachable through VPNs or misconfigured controls.
  • Temporary test endpoints that become permanent: Endpoints designed for debugging or demos are rarely cleaned up. Over time, these endpoints remain active but unmonitored and poorly secured while the surrounding infrastructure evolves.
  • Cloud misconfigurations that expose services: Misconfigured API gateways or firewall rules can unintentionally expose internal LLM endpoints to the internet. These misconfigurations often occur gradually and go unnoticed until the endpoint is already exposed.
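The weak-or-static-token pattern above is often catchable with even a naive scan of configuration and code. The following sketch flags lines that look like hardcoded credentials; the patterns are illustrative assumptions, not an exhaustive ruleset, and real secret scanners add entropy analysis and many more signatures:

```python
import re

# Naive shapes for common secret formats. These are illustrative
# assumptions; production scanners use far richer rule sets.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API key shape
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{8,}['\"]"),  # generic
]

def find_hardcoded_secrets(text: str) -> list[str]:
    """Return the lines of a config/code blob that appear to contain
    hardcoded credentials, with their line numbers."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits
```

Running a check like this in CI makes it harder for a "temporary" hardcoded key to survive into production unnoticed.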

Why exposed endpoints are dangerous across LLM infrastructure

Exposed endpoints are particularly dangerous in LLM environments because LLMs are designed to connect multiple systems within a broader technical infrastructure. When cybercriminals compromise a single LLM endpoint, they can often gain access to much more than the model itself. Unlike traditional APIs that perform one function, LLM endpoints are commonly integrated with databases, internal tools or cloud services to support automated workflows. Therefore, one compromised endpoint can allow cybercriminals to move quickly and laterally across systems that already trust the LLM by default.

The real danger doesn’t derive from the LLM being too powerful but rather from the implicit trust placed in the endpoint from the beginning. Once an LLM endpoint is exposed, it can act as a force multiplier; cybercriminals can use a compromised endpoint for various automated tasks instead of manually exploring systems. Exposed endpoints can jeopardize LLM environments through:

  • Prompt-driven data exfiltration: Cybercriminals can create prompts that cause the LLM to summarize sensitive data it has access to, turning the model into an automated data extraction tool.
  • Abuse of tool-calling permissions: When LLMs call internal tools or services, exposed endpoints can be used to abuse these tools by modifying resources or performing privileged actions.
  • Indirect prompt injection: Even when access is limited, cybercriminals can manipulate data sources or LLM inputs, causing the model to execute harmful actions indirectly.
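One way to blunt tool-calling abuse is a deny-by-default policy check in front of every model-initiated tool call. This is a hypothetical sketch; the registry, identity names and action strings are invented for illustration:

```python
# Hypothetical policy: each endpoint identity is registered with the
# exact tool actions it may trigger. Anything unlisted is denied.
TOOL_POLICY = {
    "ticket-bot": {"crm.read"},               # read-only integration
    "ops-agent": {"crm.read", "crm.update"},  # broader automation scope
}

class ToolCallDenied(Exception):
    pass

def call_tool(identity: str, action: str, payload: dict) -> dict:
    """Gate every model-initiated tool call against the caller's policy.

    Denying by default means a compromised endpoint cannot reach actions
    it was never granted, even if the model is tricked into requesting them.
    """
    allowed = TOOL_POLICY.get(identity, set())
    if action not in allowed:
        raise ToolCallDenied(f"{identity} may not perform {action}")
    # Actual dispatch to the downstream service is stubbed out here.
    return {"action": action, "payload": payload, "status": "executed"}
```

Because injected prompts can only request actions, not grant them, a policy layer like this turns a successful prompt injection into a denied call rather than a privileged one.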

Why NHIs are especially dangerous in LLM environments

Non-Human Identities (NHIs) are credentials used by systems instead of human users. In LLM environments, service accounts, API keys and other non-human credentials enable models to access data, interact with cloud services and perform automated tasks. NHIs pose a significant security risk in LLM environments because models rely on them continuously. Out of convenience, teams often grant NHIs broad permissions but fail to revisit and tighten access controls later. When an LLM endpoint is compromised, cybercriminals inherit the NHI’s access behind that endpoint, allowing them to operate using trusted credentials. Several common problems worsen this security risk:

  • Secrets sprawl: API keys and service account credentials are often spread across configuration files and pipelines, making them difficult to track and secure.
  • Static credentials: Many NHIs use long-lived credentials that are rarely, if ever, rotated. Once those credentials are exposed, they remain usable for long periods of time.
  • Excessive permissions: Broad access is often granted to NHIs to avoid delays, and those grants are rarely revisited. Over time, NHIs accumulate permissions beyond what is actually necessary for their tasks.
  • Identity sprawl: Growing LLM systems produce large numbers of NHIs across environments. Without proper oversight and management, this expansion of identities reduces visibility and increases the attack surface.
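A simple audit that addresses the static-credential problem is a credential-age check across the NHI inventory. Assuming a hypothetical inventory mapping each NHI to its last rotation time (in practice pulled from a secrets manager's API), stale credentials can be flagged like this:

```python
from datetime import datetime, timedelta, timezone

# Assumed rotation policy: anything older than 90 days is overdue.
MAX_CREDENTIAL_AGE = timedelta(days=90)

def find_stale_credentials(inventory: dict[str, datetime],
                           now: datetime) -> list[str]:
    """Return the NHIs whose credentials exceed the rotation window,
    sorted by name for stable reporting."""
    return sorted(
        name for name, rotated_at in inventory.items()
        if now - rotated_at > MAX_CREDENTIAL_AGE
    )
```

Even this crude report surfaces the quiet accumulation of long-lived secrets before a cybercriminal finds them first.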

How to reduce risk from exposed endpoints

Reducing risk from exposed endpoints starts with assuming that cybercriminals will eventually reach exposed services. Security teams should aim not just to prevent access but to limit what can happen once an endpoint is reached. An easy way to do this is by applying zero-trust security principles to all endpoints: access should be explicitly verified, continuously evaluated and tightly monitored in all cases. Security teams should also do the following:

  • Enforce least-privilege access for human and machine users: Endpoints should only have access to what is necessary to perform a specific task, regardless of whether the user is human or non-human. Reducing permissions limits how much damage a cybercriminal can do with a compromised endpoint.
  • Use Just-in-Time (JIT) access: Privileged access should not be available all the time on any endpoint. With JIT access, privileges are only granted when necessary and automatically revoked after a task is completed.
  • Monitor and record privileged sessions: Monitoring and recording privileged activity helps security teams detect privilege misuse, investigate security incidents and understand how endpoints are actually being used.
  • Rotate secrets automatically: Tokens, API keys and service account credentials must be rotated on a regular basis. Automated secrets rotation reduces the risk of long-term credential abuse if secrets are exposed.
  • Remove long-lived credentials when possible: Static credentials are one of the biggest security risks in LLM environments. Replacing them with short-lived credentials limits how long compromised secrets remain useful in the wrong hands.
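The JIT-access idea above can be sketched as a small in-memory grant store: a credential is issued with one scope and a TTL, and validation revokes it once expired. All names here are hypothetical; a production system would persist grants and tie issuance to an approval workflow:

```python
import secrets
import time

# In-memory grant store, for illustration only.
_grants: dict[str, dict] = {}

def grant_jit_access(identity: str, scope: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived, single-scope credential."""
    token = secrets.token_urlsafe(16)
    _grants[token] = {
        "identity": identity,
        "scope": scope,
        "expires_at": time.monotonic() + ttl_seconds,
    }
    return token

def check_access(token: str, scope: str) -> bool:
    """Validate a grant: it must exist, match the scope, and not be expired.
    Expired grants are revoked the first time they are checked."""
    grant = _grants.get(token)
    if grant is None or grant["scope"] != scope:
        return False
    if time.monotonic() >= grant["expires_at"]:
        del _grants[token]
        return False
    return True
```

Because nothing holds standing privilege, a credential stolen from a compromised endpoint is useful only within its narrow scope and short lifetime.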

These security measures are especially important in LLM environments because LLMs rely heavily on automation. Since models operate continuously without human oversight, organizations must protect access by keeping it time-limited and closely monitored.

Prioritize endpoint privilege management to enhance security

Exposed endpoints amplify risk quickly in LLM environments, where models are deeply integrated with internal tools and sensitive data. Traditional access models are insufficient for systems that act autonomously and at scale, which is why organizations must rethink how they grant and manage access in AI infrastructure. Endpoint privilege management shifts the focus from trying to prevent breaches on endpoints to limiting the impact by eliminating standing access and controlling what both human and non-human users can do after an endpoint is reached. Solutions like Keeper support this zero-trust security model by helping organizations remove unnecessary access and better protect critical LLM systems.

Note: This article was thoughtfully written and contributed for our audience by Ashley D’Andrea, Content Writer at Keeper Security.

