
LLM

This is a port of several models from the MLX examples repository (https://github.com/ml-explore/mlx-examples), using the Hugging Face swift-transformers package to provide tokenization.

Models.swift provides minor overrides and customization. If you need to override the tokenizer or customize the prompt for a particular model, add it there.
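As a rough illustration of the kind of per-model customization described above, here is a hedged sketch. `ModelConfiguration` and its fields are assumptions for illustration, not the library's actual API; the model identifier is taken from the list below.

```swift
// Hypothetical sketch of a per-model configuration entry, the sort of thing
// Models.swift might hold. Names here are illustrative assumptions.
import Foundation

struct ModelConfiguration {
    let id: String                   // Hugging Face identifier or local path
    var tokenizerId: String?         // override when the tokenizer lives in a different repo
    var prepare: (String) -> String  // wrap a raw prompt in the model's expected template

    init(id: String,
         tokenizerId: String? = nil,
         prepare: @escaping (String) -> String = { $0 }) {
        self.id = id
        self.tokenizerId = tokenizerId
        self.prepare = prepare
    }
}

// Example: an instruct-tuned model that expects [INST] ... [/INST] wrapping.
let codeLlamaInstruct = ModelConfiguration(
    id: "mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX"
) { prompt in
    "[INST] \(prompt) [/INST]"
}
```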

This is set up to load models from Hugging Face, e.g. https://huggingface.co/mlx-community

The following models have been tried:

  • mlx-community/Mistral-7B-v0.1-hf-4bit-mlx
  • mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX
  • mlx-community/phi-2-hf-4bit-mlx
  • mlx-community/quantized-gemma-2b-it
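The tool accepts either a local path or a Hugging Face identifier for the model. A minimal sketch of how that resolution could work, assuming a hypothetical `resolveModel` helper (not part of the library):

```swift
// Sketch: treat an argument as a local directory if it exists on disk,
// otherwise as a Hugging Face repo identifier to download.
import Foundation

enum ModelSource {
    case localDirectory(URL)
    case huggingFaceRepo(String)   // e.g. "mlx-community/phi-2-hf-4bit-mlx"
}

func resolveModel(_ idOrPath: String) -> ModelSource {
    let expanded = (idOrPath as NSString).expandingTildeInPath
    var isDirectory: ObjCBool = false
    if FileManager.default.fileExists(atPath: expanded, isDirectory: &isDirectory),
       isDirectory.boolValue {
        return .localDirectory(URL(fileURLWithPath: expanded))
    }
    return .huggingFaceRepo(idOrPath)
}
```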

Currently supported model types are:

  • Llama / Mistral
  • Gemma
  • Phi

See Configuration.swift for more info.

See llm-tool for an example of running these models from the command line.

LoRA

Lora.swift contains an implementation of LoRA based on the MLX LoRA example: https://github.com/ml-explore/mlx-examples/tree/main/lora
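The core idea of LoRA is to freeze the base weight W and learn a low-rank update, so the forward pass becomes y = Wx + (alpha/r) * B(Ax) with only A and B trained. A minimal self-contained sketch of that computation using plain arrays (the real implementation operates on MLX tensors; the type and names here are illustrative):

```swift
// Illustrative LoRA forward pass: y = W x + scale * B (A x).
// B is initialized to zero, so the adapted layer initially matches the base model.
import Foundation

func matVec(_ m: [[Double]], _ v: [Double]) -> [Double] {
    m.map { row in zip(row, v).map(*).reduce(0, +) }
}

struct LoRALinear {
    let weight: [[Double]]   // frozen base weight W, shape (out, in)
    var a: [[Double]]        // trainable A, shape (r, in)
    var b: [[Double]]        // trainable B, shape (out, r), starts at zero
    let scale: Double        // alpha / r

    func callAsFunction(_ x: [Double]) -> [Double] {
        let base = matVec(weight, x)
        let low  = matVec(b, matVec(a, x))
        return zip(base, low).map { $0 + scale * $1 }
    }
}
```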

See llm-tool/LoraCommands.swift for an example driver, and llm-tool for examples of how to run it.