Unsloth Studio



Run and train AI models locally with Unsloth Studio.

Today, we’re launching Unsloth Studio (Beta): an open-source, no-code web UI for training, running and exporting open models in one unified local interface.

Run GGUF and safetensors models locally on Mac, Windows, and Linux.

Train 500+ models 2x faster with 70% less VRAM (no accuracy loss).
Run and train text, vision, TTS/audio, and embedding models.
macOS and CPU work for chat GGUF inference.

MLX training coming soon.

No dataset needed.

Auto-create datasets from PDF, CSV, JSON, DOCX, TXT files.

Self-healing tool calling / web search + code execution.
Auto inference parameter tuning and editable chat templates.

Search and run GGUF and safetensors models with self-healing tool calling / web search, auto inference parameter tuning, code execution, and APIs (very soon).

Upload images, docs, audio, code files.

Battle models side by side.

Powered by llama.cpp + Hugging Face, we support multi-GPU inference and most models.
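As a rough illustration of what a local GGUF chat call looks like, here is a minimal sketch using the llama-cpp-python bindings. This is not Studio's own code, and the model path is a placeholder:

```python
from llama_cpp import Llama

# Load a local GGUF file; the path and context size are placeholders.
llm = Llama(model_path="models/my-model.gguf", n_ctx=2048)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
)
print(response["choices"][0]["message"]["content"])
```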

Upload PDF, CSV, JSON docs, or YAML configs and start training instantly on NVIDIA GPUs.

Unsloth’s kernels optimize LoRA, FP8, FFT (full fine-tuning), and PT (pretraining) across 500+ text, vision, TTS/audio, and embedding models.
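For readers who script with the Unsloth Python package directly, the LoRA setup that Studio configures through its UI looks roughly like this (the model name and hyperparameters are illustrative, not Studio's defaults):

```python
from unsloth import FastLanguageModel

# Load a base model in 4-bit; the repo name is an example.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; r and target_modules are illustrative choices.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```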

Fine-tune the latest LLMs like Qwen3.5 and NVIDIA Nemotron 3.

Multi-GPU works automatically, with a new version coming.

Data Recipes transforms your docs into usable / synthetic datasets via a graph-node workflow.

Upload unstructured or structured files like PDFs, CSVs, and JSON.

Unsloth Data Recipes, powered by NVIDIA DataDesigner, automatically turns documents into your desired formats.

Gain complete visibility into and control over your training runs.

Track training loss, gradient norms, and GPU utilization in real time, and customize to your liking.

You can even view the training progress on other devices like your phone.

Export any model, including your fine-tuned models, to safetensors or GGUF for use with llama.cpp, vLLM, Ollama, LM Studio, and more.
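Continuing the Python sketch above, the equivalent export calls in the Unsloth package look roughly like this (paths and quantization method are examples; Studio does this through the UI):

```python
# Export to GGUF for llama.cpp / Ollama / LM Studio;
# "q4_k_m" is one of llama.cpp's standard quantization presets.
model.save_pretrained_gguf("my_model", tokenizer, quantization_method="q4_k_m")

# Or save merged 16-bit safetensors for vLLM and transformers.
model.save_pretrained_merged("my_model", tokenizer, save_method="merged_16bit")
```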

Studio stores your training history, so you can revisit runs, export again, and experiment.

Chat with and compare 2 different models, such as a base model and a fine-tuned one, to see how their outputs differ.

Just load your first GGUF/model, then the second, and voilà!

Inference loads for one model first, then the second.

Unsloth Studio can be used 100% offline and locally on your computer.

Its token-based authentication, including password and JWT access/refresh flows, keeps your data secure and under your control.
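As a rough illustration of how an access/refresh token flow works in general, here is a generic sketch using the PyJWT library. This is not Studio's actual implementation; the key, lifetimes, and claims are all hypothetical:

```python
import time
import jwt  # PyJWT

SECRET = "change-me"  # hypothetical signing key; Studio's scheme may differ

def issue_tokens(user_id: str) -> dict:
    """Issue a short-lived access token and a longer-lived refresh token."""
    now = int(time.time())
    access = jwt.encode({"sub": user_id, "exp": now + 15 * 60},
                        SECRET, algorithm="HS256")
    refresh = jwt.encode({"sub": user_id, "exp": now + 7 * 86400, "typ": "refresh"},
                         SECRET, algorithm="HS256")
    return {"access": access, "refresh": refresh}

def refresh_access(refresh_token: str) -> str:
    """Exchange a valid refresh token for a new access token."""
    claims = jwt.decode(refresh_token, SECRET, algorithms=["HS256"])
    if claims.get("typ") != "refresh":
        raise ValueError("not a refresh token")
    return issue_tokens(claims["sub"])["access"]
```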

Please note this is the BETA version of Unsloth Studio.

Expect many improvements, fixes, and new features in the coming days and weeks.

One improvement we’re actively working on is shipping precompiled llama.cpp binaries to significantly speed up install times.

Unsloth Studio works on Windows, Linux, WSL, and macOS (chat only currently).

CPU: Unsloth still works without a GPU, but only for chat inference.

Training: Works on NVIDIA GPUs: RTX 30, 40, and 50 series, Blackwell, DGX Spark/Station, etc.

Mac: Like CPU, chat only works for now.

MLX training coming very soon.

Coming soon: Support for Apple MLX, AMD, and Intel.

Multi-GPU: Works already, with a major upgrade on the way.

First install may take 5-10 minutes.

This is normal, as llama.cpp needs to compile binaries.

We're working on precompiled binaries so future installs won't take as long.

Our Docker image now works for Studio!

We're working on Mac compatibility for the unsloth/unsloth image.

Read our Docker guide.

For more details about installation, please visit the Unsloth Studio Install section.

You can also view NVIDIA's video tutorial here.

Google Colab notebook
We’ve created a free Google Colab notebook so you can explore all of Unsloth’s features on Colab’s T4 GPUs.

You can train and run most models up to 22B parameters, and switch to a larger GPU for bigger models.

Just click 'Run all' and the UI should pop up after installation.

It'll take 30+ minutes for llama.cpp to compile on a T4 GPU, so we recommend using a bigger GPU for faster speeds.

Once installation is complete, scroll to Start Unsloth Studio and click Open Unsloth Studio in the white box shown on the left.

Sometimes the Studio link may return an error.

This happens because Google Colab expects you to stay on the Colab page; if it detects inactivity, it may shut down the GPU session.

Here is a typical workflow of Unsloth Studio to get you started:

Launch Studio from the install instructions.

Load a model from local files or a supported integration.

Import training data from PDFs, CSVs, or JSONL files, or build a dataset from scratch (see the JSONL sketch after this list).

Clean, refine, and expand your dataset in Data Recipes.

Start training with recommended presets or customize the config yourself.

Chat with the trained model and compare its outputs against the base model.
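For JSONL imports, here is a minimal sketch of a common conversational format. It mirrors the widely used chat-messages style; the exact schema Studio expects may differ:

```python
import json

# One training example per line; "messages" follows the common chat format.
example = {
    "messages": [
        {"role": "user", "content": "What does Unsloth Studio do?"},
        {"role": "assistant", "content": "It trains, runs, and exports open models locally."},
    ]
}

with open("dataset.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```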

You can read our individual deep dives into each section of Unsloth Studio.

NVIDIA has also created a video tutorial to get you started with Studio.
Does Unsloth collect or store data?

We do not collect usage telemetry.

We only collect the minimal hardware information required for compatibility, such as GPU type and device (e.g. Mac).

Unsloth Studio runs 100% offline and locally.

How do I use an old / existing model that I downloaded previously from Hugging Face?

Yes, you can use pre-existing or old models or GGUFs that you previously downloaded from Hugging Face, etc.

Read our instructions here.

Does Unsloth Studio support OpenAI-compatible APIs?

Yes, for our Data Recipes it does.

For inference, we are working on this and hope to release support as soon as this week, so stay tuned!
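For reference, pointing the standard OpenAI Python client at any OpenAI-compatible endpoint looks like this. The URL, key, and model name are placeholders, and how Studio stores these settings may differ:

```python
from openai import OpenAI

# Any OpenAI-compatible server works; the localhost URL and model are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="my-local-model",
    messages=[{"role": "user", "content": "Summarize this document."}],
)
print(response.choices[0].message.content)
```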

Is Unsloth now licensed under AGPL-3.0?

Unsloth uses a dual-licensing model of Apache 2.0 and AGPL-3.0.

The core Unsloth package remains licensed under Apache 2.0, while certain optional components, such as the Unsloth Studio UI, are licensed under AGPL-3.0.

This structure helps support ongoing Unsloth development while keeping the project open source and enabling the broader ecosystem to continue growing.

Does Studio only support LLMs?

No.

Studio supports a range of transformers-compatible model families, including text, multimodal, text-to-speech, audio, embedding, and BERT-style models.

Can I use my own training config?

Yes.

Import a YAML config and Studio will pre-fill the relevant settings.
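As a rough example of what such a config might contain, here is a sketch that loads one with PyYAML. Every field name is illustrative and mirrors common Unsloth/TRL hyperparameters; Studio's exact schema may differ:

```python
import yaml  # PyYAML

# A hypothetical training config; all field names are illustrative.
config_text = """
model_name: unsloth/Llama-3.2-1B-Instruct
max_seq_length: 2048
load_in_4bit: true
lora_r: 16
learning_rate: 2.0e-4
num_train_epochs: 1
per_device_train_batch_size: 2
"""

config = yaml.safe_load(config_text)
print(config["learning_rate"])  # 0.0002
```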

Do you need to train models to use the UI?

No, you can download and run any GGUF or model without fine-tuning anything.

We're working hard to make open-source AI as accessible as possible.

Coming next for Unsloth and Unsloth Studio, we're releasing official support for multi-GPU, Apple Silicon/MLX, AMD, and Intel.

Reminder: this is the BETA version of Unsloth Studio, so expect a lot of announcements and improvements in the coming weeks.

We’re also working closely with NVIDIA on multi-GPU support to deliver the best and simplest experience possible.

A huge thank you to NVIDIA and Hugging Face for being part of our launch.

Also thanks to all of our early beta testers for Unsloth Studio, we truly appreciate your time and feedback.

We’d also like to thank llama.cpp, PyTorch and open model labs for providing the infrastructure that made Unsloth Studio possible.
