The Open Platform for Enterprise AI (OPEA), a sub-project of the Linux Foundation backed by a wide variety of organizations to provide open solutions for Generative AI, today announced its newest GenAI code examples.
OPEA 1.4 is now available to provide the latest Generative AI Examples that work on hardware and software from multiple vendors while leveraging open-source components. New agent capabilities in OPEA 1.4 include Model Context Protocol (MCP) support and a deep research agent.
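MCP is an open protocol that lets agents discover and call external tools over a standard interface. As a rough illustration of the kind of tool server an MCP-capable agent can connect to, here is a minimal sketch using the official `mcp` Python SDK; the server name and tool are hypothetical and are not part of OPEA itself.

```python
# Minimal MCP tool server sketch using the official "mcp" Python SDK.
# The server name and tool below are hypothetical examples, not OPEA code.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable agent can call it.
    mcp.run()
```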
OPEA 1.4 also introduces guardrails for model inputs and outputs. The OPEA Guardrails are for “content safety” and aim to “prevent the creation of inappropriate outputs.” They can filter competitor mentions, ban sensitive substrings, ban topics such as violence, attacks, or war, mitigate bias, allow or block specific programming languages, block malicious URLs, enforce factual consistency, and scan for sensitive topics and toxicity.
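Conceptually, a substring or topic guardrail checks each prompt against a ban list before it reaches the model. The following is a minimal sketch of that idea in plain Python; the function and ban lists are hypothetical and do not reflect OPEA's actual guardrail microservices.

```python
# Illustrative input guardrail: reject prompts containing banned substrings
# or banned topics before they reach a serving endpoint. All names and
# lists here are hypothetical placeholders.
BANNED_SUBSTRINGS = {"internal_codename", "secret_project"}
BANNED_TOPICS = {"violence", "attack", "war"}

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an incoming prompt."""
    lowered = prompt.lower()
    for word in BANNED_SUBSTRINGS:
        if word in lowered:
            return False, f"banned substring: {word}"
    for topic in BANNED_TOPICS:
        if topic in lowered:
            return False, f"banned topic: {topic}"
    return True, "ok"

if __name__ == "__main__":
    print(check_prompt("Tell me about the war"))  # (False, 'banned topic: war')
    print(check_prompt("Summarize this report"))  # (True, 'ok')
```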
In addition to the new guardrails, OPEA 1.4 brings fine-tuning support for reasoning models, an LLM router for determining which downstream LLM serving endpoint is best suited to an incoming prompt, language detection, air-gapped environment support, and support for remote inference endpoints.
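An LLM router sits in front of several serving endpoints and classifies each prompt to pick the most suitable model. Here is a minimal sketch of that routing idea, assuming hypothetical endpoint URLs and a simple keyword heuristic in place of the trained routing model a real deployment would use.

```python
# Illustrative LLM router: choose a downstream serving endpoint per prompt.
# The endpoint URLs and heuristics are assumptions for this sketch only.
ENDPOINTS = {
    "small": "http://localhost:8001/v1/chat/completions",  # cheap, fast model
    "large": "http://localhost:8002/v1/chat/completions",  # stronger model
    "code":  "http://localhost:8003/v1/chat/completions",  # code-tuned model
}

def route(prompt: str) -> str:
    """Return the serving endpoint best suited to the incoming prompt."""
    lowered = prompt.lower()
    # Code-like prompts go to the code-tuned model.
    if any(k in lowered for k in ("def ", "class ", "compile", "traceback")):
        return ENDPOINTS["code"]
    # Long or multi-step prompts go to the stronger model.
    if len(prompt.split()) > 200 or "step by step" in lowered:
        return ENDPOINTS["large"]
    return ENDPOINTS["small"]

if __name__ == "__main__":
    print(route("What is the capital of France?"))  # -> small endpoint
```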
The OPEA Generative AI Examples also now feature one-click deployment support, documentation improvements, and other enhancements for a better developer experience. These examples have also begun adding AMD EPYC support via specialized Docker containers, tested across 4th Gen EPYC and the newest 5th Gen EPYC server processors. This complements the existing validated hardware support for Intel Xeon CPUs, Intel Gaudi AI accelerators, and Intel Arc GPUs.
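Once an example is deployed, it typically exposes an HTTP endpoint that can be smoke-tested with a few lines of Python. The URL and payload below are placeholders rather than a documented OPEA interface, so consult each example's README for the actual endpoint.

```python
# Quick smoke test against a deployed OPEA example service. The endpoint
# address and JSON shape here are placeholder assumptions; check the
# specific example's README for its real interface.
import json
import urllib.request

ENDPOINT = "http://localhost:8888/v1/chatqna"  # hypothetical deployment address

payload = json.dumps({"messages": "What is OPEA?"}).encode("utf-8")
request = urllib.request.Request(
    ENDPOINT,
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request, timeout=60) as response:
    print(response.read().decode("utf-8"))
```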
Downloads and more details on the OPEA GenAI Examples 1.4 are available via GitHub.