OVHcloud has announced the availability of AI Endpoints, a solution intended to give developers easy access to AI models and to simplify the integration of advanced artificial intelligence features into their applications. The solution offers more than 40 open-source models, including both LLMs and generative models, covering different uses: conversational agents, voice models, and code assistants, among others.
AI Endpoints removes the need to manage the infrastructure required to work with these models, as well as the need for specialized machine learning expertise. The solution thus provides simple access to open-source AI models hosted in the cloud.
OVHcloud AI Endpoints lets you try out AI features in a test environment before deploying them at scale in applications, internal tools, or business processes. Use cases include, among others, integrating large language models into applications, text extraction using advanced machine learning models, voice integration through APIs, and coding assistance.
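As an illustration of the first use case, hosted LLM services of this kind typically expose an HTTP chat-completion API. The sketch below builds such a request with Python's standard library; the base URL, environment variable names, and model identifier are placeholders for illustration, not documented AI Endpoints values.

```python
# Sketch: building (not sending) a chat-completion request for an
# OpenAI-compatible hosted LLM API. BASE_URL, the env var names, and
# the model name are assumptions, not official OVHcloud values.
import json
import os
import urllib.request

BASE_URL = os.environ.get("AI_ENDPOINTS_BASE_URL",
                          "https://your-endpoint.example/v1")


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble a POST request with a single user message."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('AI_ENDPOINTS_API_KEY', '')}",
        },
        method="POST",
    )


# Hypothetical model name used purely as an example.
req = build_chat_request("Mixtral-8x7B", "Summarize this ticket in one line.")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would return a JSON body containing the model's reply; the request-building step is separated out here so it can be inspected without network access.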
In the latter case, with tools such as Continue, developers can integrate AI assistants and agents in real time directly into their integrated development environments. This gives them error detection, task automation, and code suggestions that speed up the coding process.
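As a sketch of how such an IDE integration is typically wired up, Continue can be pointed at any OpenAI-compatible endpoint through its configuration file. The endpoint URL, model identifier, and key below are placeholders, not documented OVHcloud values.

```json
{
  "models": [
    {
      "title": "OVHcloud AI Endpoints (example)",
      "provider": "openai",
      "model": "Qwen2.5-Coder-32B",
      "apiBase": "https://your-endpoint.example/v1",
      "apiKey": "YOUR_API_KEY"
    }
  ]
}
```

With a model entry like this in place, the assistant's completions and chat features are served by the configured remote endpoint rather than a locally run model.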
OVHcloud's sovereign cloud infrastructure guarantees that data used with the solution is hosted in Europe, and is therefore protected from non-European regulations. In addition, AI Endpoints runs on OVHcloud infrastructure built on water-cooled servers installed in the company's data centers, which reduces the environmental impact of AI workloads without reducing their performance.
AI Endpoints also promotes full model transparency through open-weight models, which can be deployed on a company's own infrastructure or integrated into another cloud service, guaranteeing that control over the data is maintained.
Following a preview phase in which the service was developed and tested, OVHcloud has incorporated customer feedback and the most requested features, such as compatibility with stable open-source models and improved API key management.
The models offered by AI Endpoints include LLMs (Llama 3.3 70B, Mixtral 8x7B), SLMs (Mistral Nemo, Llama 3.1 8B), code models (Qwen 2.5 Coder 32B, Codestral Mamba), reasoning models (DeepSeek-R1), multimodal models (Qwen 2.5 VL 72B), and image generation (SDXL). There are also voice and speech models, such as ASR for speech-to-text and TTS for text-to-speech.
The service is now available in Europe, Canada, and the Asia-Pacific region, deployed from the Gravelines data center. It offers a pay-per-use pricing model, with prices varying according to the model used and the number of tokens consumed per minute.