Ollama is now powered by MLX on Apple Silicon in preview


Today, we’re previewing the fastest way to run Ollama on Apple silicon, powered by MLX, Apple’s machine learning framework.

This unlocks new performance to accelerate your most demanding work on macOS:

- Accelerate coding agents like Pi or Claude Code (see the hookup sketch after this list)
- OpenClaw now responds much faster
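For coding agents that speak the OpenAI API, pointing them at a local model goes through Ollama's OpenAI-compatible endpoint. Below is a minimal sketch in Python; the server address is Ollama's default, while the model tag `qwen3.5:35b-a3b` is a hypothetical placeholder for the preview model:

```python
# Minimal sketch: point an OpenAI-compatible client at a local Ollama server.
# Assumptions: Ollama is serving on its default port (11434), and the preview
# model is available under the tag "qwen3.5:35b-a3b" (a hypothetical placeholder).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="qwen3.5:35b-a3b",
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
)
print(response.choices[0].message.content)
```

Agents that let you configure a base URL and a model name can usually be pointed at the same endpoint without any code changes.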
Fastest performance on Apple silicon, powered by MLX
Ollama on Apple silicon is now built on top of MLX, Apple's machine learning framework, which takes advantage of the unified memory architecture shared by the CPU and GPU.

This results in a significant speedup across all Apple silicon devices.

On Apple’s M5, M5 Pro and M5 Max chips, Ollama uses the new GPU Neural Accelerators to improve both time to first token (TTFT) and generation speed (tokens per second).
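Both metrics are easy to check on your own hardware. Here is a rough sketch using the `ollama` Python client, streaming a response and timing the first chunk; the model tag is again a hypothetical placeholder, and streamed chunks only approximate tokens:

```python
# Rough sketch: measure time to first token (TTFT) and decode speed against
# a local Ollama server with the official Python client ("pip install ollama").
import time
import ollama

start = time.perf_counter()
first = None
chunks = 0

for chunk in ollama.chat(
    model="qwen3.5:35b-a3b",  # hypothetical tag for the preview model
    messages=[{"role": "user", "content": "Explain unified memory in one paragraph."}],
    stream=True,
):
    if first is None:
        first = time.perf_counter()  # first streamed chunk arrived
    chunks += 1

elapsed = time.perf_counter() - first
print(f"TTFT: {first - start:.2f}s")
print(f"decode: {chunks / max(elapsed, 1e-9):.1f} chunks/s (~tokens/s)")
```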

Testing was conducted on March 29, 2026, comparing Alibaba’s Qwen3.5-35B-A3B model quantized to `NVFP4` on the new MLX backend against Ollama’s previous implementation (Ollama 0.18) with the model quantized to `Q4_K_M`.

Ollama 0.19 will bring even higher performance (1851 tokens/s prefill and 134 tokens/s decode when running with `int4`).

NVFP4 support: higher quality responses and production parity
Ollama now leverages NVIDIA’s NVFP4 format to maintain model accuracy while reducing memory bandwidth and storage requirements for inference workloads.

It also enables Ollama to run models optimized with NVIDIA’s Model Optimizer.

Support for other precisions will follow, based on the design and usage intent of Ollama’s research and hardware partners.
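In spirit, NVFP4 is a block-scaled 4-bit floating-point format: small groups of weights share a scale, and each weight is snapped to the nearest representable 4-bit value. The toy sketch below shows that round trip in plain NumPy; the block size and E2M1 value table reflect the general design, but this is an illustration of the idea, not NVIDIA’s actual implementation (which also stores FP8 block scales):

```python
# Toy sketch of block-scaled 4-bit float quantization in the spirit of NVFP4.
# Illustrative only: the real format packs 4-bit codes with FP8 block scales;
# here we just do the quantize->dequantize round trip to show the idea.
import numpy as np

E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # positive FP4 magnitudes
BLOCK = 16  # number of values sharing one scale

def fake_quantize(x: np.ndarray) -> np.ndarray:
    """Round-trip x through block-scaled FP4 and return the reconstruction."""
    out = np.empty_like(x)
    for i in range(0, len(x), BLOCK):
        block = x[i:i + BLOCK]
        scale = max(np.abs(block).max() / E2M1[-1], 1e-12)  # largest value maps to 6.0
        mags = np.abs(block) / scale
        # Snap each magnitude to the nearest representable FP4 value.
        snapped = E2M1[np.argmin(np.abs(mags[:, None] - E2M1[None, :]), axis=1)]
        out[i:i + BLOCK] = np.sign(block) * snapped * scale
    return out

weights = np.random.randn(64)
err = np.abs(weights - fake_quantize(weights)).mean()
print(f"mean abs error after the FP4 round trip: {err:.4f}")
```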

Improved caching for more responsiveness
Ollama’s cache has been upgraded to make coding and agentic tasks more efficient.

Intelligent checkpoints: Ollama now stores snapshots of its cache at strategic points in the prompt, reducing repeated prompt processing and speeding up responses.
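The idea behind checkpointing can be sketched in a few lines: keep snapshots of already-processed prompt prefixes, and when a new request arrives, reuse the longest matching snapshot so only the new tail needs processing. The toy Python below illustrates the concept only; it is not Ollama’s cache implementation:

```python
# Toy illustration of checkpointed prompt caching: reuse the longest stored
# prefix so only the new tail of a prompt is reprocessed. Concept only;
# this is not Ollama's implementation.
checkpoints: dict[tuple[str, ...], str] = {}  # prompt prefix -> cached state

def process(tokens: list[str]) -> None:
    # Find the longest checkpointed prefix of this prompt.
    best = 0
    for prefix in checkpoints:
        if len(prefix) > best and tuple(tokens[:len(prefix)]) == prefix:
            best = len(prefix)
    print(f"reused {best} tokens, processing {len(tokens) - best}")
    # ... run the model over tokens[best:] only, then checkpoint the result.
    checkpoints[tuple(tokens)] = "kv-cache snapshot"  # placeholder state

process("system prompt plus the first question".split())
process("system prompt plus the first question and a follow-up".split())  # reuses prefix
```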

This preview release of Ollama accelerates the new Qwen3.5-35B-A3B model, with sampling parameters tuned for coding tasks.

Please make sure you have a Mac with more than 32GB of unified memory.
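Assuming the preview model is published under a regular Ollama tag (the one below is a hypothetical placeholder), pulling and running it from the Python client works like any other model:

```python
# Minimal sketch: pull and run the preview model with the ollama Python client.
# The tag "qwen3.5:35b-a3b" is a hypothetical placeholder.
import ollama

ollama.pull("qwen3.5:35b-a3b")  # downloads the model if it is not already local

response = ollama.chat(
    model="qwen3.5:35b-a3b",
    messages=[{"role": "user", "content": "Refactor this loop into a comprehension: ..."}],
)
print(response["message"]["content"])
```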

We are actively working to support additional models.

For users with custom models fine-tuned on supported architectures, we will introduce an easier way to import models into Ollama.

In the meantime, we will expand the list of supported architectures.
