Red Hat has announced Red Hat Enterprise Linux AI (RHEL AI), its foundation model platform, with which users will be able to develop, test and deploy generative AI models with greater agility. RHEL AI brings together the open source Granite family of large language models from IBM Research with model alignment tools based on the LAB methodology and a community-driven approach to model development through the InstructLab project.
This entire solution is packaged as a bootable, optimized RHEL image for individual server deployments in the hybrid cloud. Additionally, it is included as part of OpenShift AI, Red Hat’s hybrid machine learning operations platform, for running models at scale in distributed cluster environments.
IBM Research created the Large-scale Alignment for chatBots (LAB) technique as an approach to model alignment that uses taxonomy-guided synthetic data generation and a multi-phase tuning framework. This approach makes the development of AI models more open and accessible because it reduces the dependency on human annotations and proprietary models.
With the LAB method, models can be improved by specifying skills and knowledge as entries in a taxonomy. From these entries, synthetic data is generated at scale and then used to train the model.
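To make the taxonomy idea concrete, here is a minimal sketch of adding a skill entry to a local clone of an InstructLab-style taxonomy. The directory path, the `qna.yaml` filename and the field names follow the layout used by the upstream InstructLab taxonomy repository, but treat the exact schema, version number and example content as illustrative assumptions rather than an authoritative reference.

```shell
# Sketch: contribute a new skill to a local taxonomy clone.
# Path and field names are illustrative; check the upstream taxonomy
# repository for the schema your InstructLab version expects.
mkdir -p taxonomy/compositional_skills/writing/summarization
cat > taxonomy/compositional_skills/writing/summarization/qna.yaml <<'EOF'
version: 2
task_description: Summarize short passages in one sentence.
created_by: example-user
seed_examples:
  - question: Summarize in one sentence what RHEL AI provides.
    answer: RHEL AI packages IBM's open source Granite models together with InstructLab alignment tooling in a bootable RHEL image.
EOF
```

Seed examples like these are what the LAB pipeline expands into large volumes of synthetic training data, so a handful of well-written question/answer pairs can stand in for thousands of human-annotated samples.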
Having found that the LAB method could improve model performance, IBM and Red Hat decided to launch the open source community built around it, the InstructLab project, seeded with IBM's open source Granite models. InstructLab aims to put the development of large language models in the hands of developers, making building a model, or contributing to one that has already been developed, as easy as contributing to any open source project.
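In practice, the "contribute like any open source project" loop described above maps onto a short command-line workflow. The command names below follow recent InstructLab documentation for its `ilab` CLI, but subcommands and flags vary between versions and this sketch assumes an installed `ilab` with access to a local model, so read it as an outline rather than a reference.

```shell
# Illustrative InstructLab contributor loop (assumes the `ilab` CLI is
# installed; subcommand names may differ by version).
ilab config init     # create a local config and pull the taxonomy repository
ilab data generate   # synthesize training data from taxonomy entries (the LAB step)
ilab model train     # tune the local model on the generated data
ilab model chat      # chat with the tuned model to spot-check the new skill
```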
As part of the launch of InstructLab, IBM has also made a family of English-language and code Granite models available to the public. The models have been released under the Apache 2.0 license, and the English Granite 7B model has been integrated into the InstructLab community, where users can improve it collectively.
RHEL AI is built on this open approach to AI innovation. It combines an enterprise-ready version of the InstructLab project with the Granite code and language models and Red Hat's enterprise Linux platform. This simplifies deployment across a hybrid infrastructure environment and creates a foundation model platform that facilitates the adoption of open source generative AI models in the enterprise.
In addition to the Granite models and code, RHEL AI incorporates a supported, lifecycle-managed distribution of InstructLab, offering a scalable solution for extending the capabilities of large language models. It also provides optimized, bootable model runtime instances: Granite models and InstructLab tool packages delivered as bootable RHEL images via RHEL image mode. These include PyTorch and optimized runtime libraries, with hardware accelerator support for AMD Instinct MI300X, Intel and Nvidia GPUs, and NeMo frameworks.
RHEL AI also comes with full Red Hat enterprise support and lifecycle management, starting with a trusted enterprise product distribution and continuing with 24×7 production support and extended lifecycle support.
As enterprises fine-tune new AI models on RHEL AI, they will have a ready path to scale those workflows with Red Hat OpenShift AI, which will include RHEL AI and can leverage OpenShift's Kubernetes engine to train and serve AI models at scale. OpenShift AI's integrated MLOps capabilities will handle management of the model lifecycle.
RHEL AI is now available as a developer preview. Ahead of the final release, IBM plans to add IBM Cloud support for Red Hat Enterprise Linux AI and OpenShift AI, building on the GPU infrastructure available on that cloud.