Developers on Apple platforms often face a fragmented ecosystem when using language models. Local models via Core ML or MLX offer privacy and offline capabilities, while cloud services like OpenAI, Anthropic, or Google Gemini provide advanced features. AnyLanguageModel, a new Swift package, simplifies integration by offering a unified API for both local and remote models.
AnyLanguageModel mirrors the API of Apple’s Foundation Models framework, letting developers switch between providers with minimal code changes while keeping session and response structures intact. It supports Core ML, MLX, llama.cpp (via llama.swift), Ollama-hosted models, and cloud services from OpenAI, Anthropic, Google Gemini, and Hugging Face. Swift package traits let developers compile in only the backends they need, keeping the dependency footprint small.
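In practice, swapping the built-in system model for a cloud provider is meant to be a small, localized change. The sketch below assumes the package's stated design of reusing Foundation Models' LanguageModelSession and respond(to:) types; the AnthropicLanguageModel initializer and the model identifier are illustrative and may differ from the shipped API:

```swift
import AnyLanguageModel

// On-device: Apple's built-in system model, via the Foundation Models types.
let localModel = SystemLanguageModel.default

// Remote: an Anthropic-backed model. Initializer name and parameters are
// illustrative; check the package documentation for the exact API.
let remoteModel = AnthropicLanguageModel(
    apiKey: ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"] ?? "",
    model: "claude-sonnet-4-5"
)

// The same session and response types work with either backend.
let session = LanguageModelSession(model: remoteModel)
let response = try await session.respond(to: "Summarize this release in one sentence.")
print(response.content)
```

Package traits, available since Swift 6.1, would let a consumer pull in a single backend along these lines (the trait name and version are illustrative):

```swift
// Package.swift (excerpt): opt into only the MLX backend.
.package(
    url: "https://github.com/mattt/AnyLanguageModel.git",
    from: "0.1.0",
    traits: ["MLX"]
)
```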
AnyLanguageModel also extends beyond the current capabilities of Foundation Models by supporting vision-language prompts, allowing developers to send images alongside textual queries. This includes interactions with models like Anthropic’s Claude, enabling tasks such as image description, text extraction, and visual analysis without waiting for Apple’s own framework to support these features.
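Under that design, a vision-language request could take a shape like the following. This is a hypothetical sketch: the image-attachment parameter is not a confirmed part of the package's API and is shown only to illustrate a multimodal prompt; consult the repository for the actual signature:

```swift
import AnyLanguageModel
import Foundation

// Hypothetical multimodal call: the `image:` parameter is illustrative,
// not a documented part of the package's API.
let model = AnthropicLanguageModel(
    apiKey: ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"] ?? "",
    model: "claude-sonnet-4-5"
)
let session = LanguageModelSession(model: model)

let response = try await session.respond(
    to: "Extract any visible text from this screenshot.",
    image: .init(url: URL(fileURLWithPath: "/tmp/screenshot.png"))
)
print(response.content)
```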
Mattt, the developer behind the package, explained the reason for targeting the Foundation Models API:
Most apps use some mix of local & remote models from different providers, and it’s annoying to get them all to play nicely together. Apple’s Foundation Models provides a kind of ‘public option’ — a fallback built into all macOS and iOS devices. Since that’s available only through Foundation Models, it made sense to target that as the API for supporting other providers, too.
The library is currently pre-1.0, with ongoing development aimed at implementing tool calling, structured output generation, and performance optimizations for local inference. The accompanying demonstration application, chat-ui-swift, showcases streaming responses, chat persistence, Apple Foundation Models integration, and Hugging Face OAuth authentication. The app is intended as a starting point for developers to explore, extend, and adapt the API to their own projects.
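Structured output here refers to Foundation Models' @Generable pattern, in which a Swift type annotated with @Generable and @Guide constrains what the model generates. The sketch below uses Apple's documented Foundation Models API against the system model; AnyLanguageModel's goal is to make the same pattern work against other providers, and that support is still in development:

```swift
import FoundationModels

// A type the model must populate; @Guide hints steer each field.
@Generable
struct ReleaseSummary {
    @Guide(description: "One-sentence summary of the release")
    var headline: String

    @Guide(description: "Supported backends mentioned in the text")
    var backends: [String]
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Summarize the AnyLanguageModel announcement.",
    generating: ReleaseSummary.self
)
print(response.content.headline)
```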
Early community feedback has been positive. Krzysztof Zabłocki commented:
Great work, mate. I have been using it in a new project, eagerly waiting for your branch with OpenAI support for Generable to land.
AnyLanguageModel and the chat-ui-swift demo are available on GitHub, allowing developers to experiment, report issues, and contribute enhancements. The project represents a step toward reducing friction in AI application development while promoting consistent, multi-provider LLM workflows on Apple platforms.
