The Java MCP Server Configuration Generator, a new utility created by Max Rydahl Andersen, allows Java developers to run Model Context Protocol (MCP) servers using JBang. While there are already several Java implementations of MCP servers, the MCP Java project aims to make them easier to run. JBang, a tool designed to run Java code as scripts and small utilities without the hassle of setting up a project and dependencies, is a natural fit for distributing such servers.
Model Context Protocol (MCP), introduced by Anthropic at the end of 2024, is an open standard for applications to provide context to Large Language Models (LLMs). Companies like OpenAI and Google have announced support for it, and most recently, GitHub announced support for the MCP server for VS Code users. MCP gives developers the ability to expose functionality in the form of tools that integrate with LLMs. MCP servers can communicate via standard input/output (stdio) or Server-Sent Events (SSE).
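On the wire, MCP messages are JSON-RPC 2.0 payloads; with the stdio transport, a client writes requests to the server's standard input. The following sketch illustrates the envelope shape (the method names and protocol version string follow the MCP specification; the helper class itself is illustrative, not part of any SDK):

```java
// Sketch: the JSON-RPC 2.0 envelope used by MCP's stdio transport.
// The helper below is illustrative; real clients use an MCP SDK or JSON library.
public class McpStdioSketch {

    // Build a minimal JSON-RPC 2.0 request by hand; params are passed
    // as a pre-serialized JSON object string to avoid a JSON dependency.
    static String request(int id, String method, String paramsJson) {
        return "{\"jsonrpc\":\"2.0\",\"id\":" + id
                + ",\"method\":\"" + method + "\""
                + ",\"params\":" + paramsJson + "}";
    }

    public static void main(String[] args) {
        // An MCP session begins with "initialize"; the client can then
        // discover the server's tools with "tools/list".
        System.out.println(request(1, "initialize",
                "{\"protocolVersion\":\"2024-11-05\"}"));
        System.out.println(request(2, "tools/list", "{}"));
    }
}
```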
The MCP Java project maintains a JBang catalog of MCP servers. JBang also provides bindings for uv and npm, which is uncommon for a Java tool, but it makes sense given that developers often use multiple languages within a project. The following commands list the servers in the catalog:
```shell
## JBang
jbang catalog list mcp-java

## UVX
uvx jbang catalog list mcp-java

## NPM
npx -y @jbangdev/jbang catalog list mcp-java
```
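Under the hood, a JBang catalog is simply a `jbang-catalog.json` file that maps alias names to runnable scripts or artifacts. A sketch of what an entry might look like (the alias name and script reference below are illustrative, not the actual contents of the mcp-java catalog):

```json
{
  "aliases": {
    "my-mcp-server": {
      "script-ref": "org.example:my-mcp-server:1.0.0",
      "description": "Example MCP server exposed as a JBang alias (illustrative)"
    }
  }
}
```

Once an alias is published in a catalog, it can be launched with `jbang <alias>@<catalog>`, which is what makes JBang convenient for distributing MCP servers.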
With all the momentum behind generative AI, Java frameworks are no exception: LangChain4j, Quarkus, Spring AI, the Model Context Protocol SDK, and JBang have all announced MCP support in the last couple of months.
Consider the following timeline:
Jakarta EE and some other frameworks haven't announced support yet, although WildFly already appears to have an alpha implementation. MCP has shaken up the LLM tools and function-calling landscape and is becoming the de facto way of writing and exposing tools to the developer community.
Java frameworks have seen an explosion of support for MCP. Java's footprint in enterprise and business applications provides a unique opportunity to integrate with LLMs, adding more value for end users. However, these integrations are not without risk: as always with rapid innovation, security is one of the most overlooked pillars, and exposing data to LLMs can have side effects such as hallucinations and the leaking of sensitive information.