Update to swift-transformers #117


Closed
johnmai-dev opened this issue Sep 2, 2024 · 3 comments
@johnmai-dev
Contributor

Hi @davidkoski @awni!
Is it possible to upgrade the dependencies?

The latest version of swift-transformers supports chat templates (huggingface/swift-transformers#104).

ChatMLX depends both on the libraries here (mlx-libraries) and on the latest version of swift-transformers, but mlx-libraries pins an older swift-transformers version, so the dependency versions conflict. Looking forward to an upgrade.

.package(url: "https://github.com/huggingface/swift-transformers", from: "0.1.9"),
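For context, a minimal sketch of how the bumped dependency could be wired into mlx-swift-examples/Package.swift. Only the swift-transformers package line above comes from this issue; the tools version, platforms, target name, and path are assumptions for illustration.

// swift-tools-version: 5.9
// Abridged, hypothetical manifest: only the swift-transformers entry is from the issue.
import PackageDescription

let package = Package(
    name: "mlx-swift-examples",
    platforms: [.macOS(.v14), .iOS(.v16)],  // assumed platforms
    dependencies: [
        .package(url: "https://github.com/huggingface/swift-transformers", from: "0.1.9")
    ],
    targets: [
        .target(
            name: "LLM",  // assumed target name
            dependencies: [
                .product(name: "Transformers", package: "swift-transformers")
            ],
            path: "Libraries/LLM"  // assumed path
        )
    ]
)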

After upgrading, Libraries/LLM/Models.swift can also be modified, for example:

extension ModelConfiguration {
    public static let smolLM_135M_4bit = ModelConfiguration(
        id: "mlx-community/SmolLM-135M-Instruct-4bit",
        defaultPrompt: "Tell me about the history of Spain."
    ) { prompt in
        "<|im_start|>user\n\(prompt)<|im_end|>\n<|im_start|>assistant\n"
    }

    public static let mistralNeMo4bit = ModelConfiguration(
        id: "mlx-community/Mistral-Nemo-Instruct-2407-4bit",
        defaultPrompt: "Explain quaternions."
    ) { prompt in
        "<s>[INST] \(prompt) [/INST] "
    }

    public static let mistral7B4bit = ModelConfiguration(
        id: "mlx-community/Mistral-7B-Instruct-v0.3-4bit",
        defaultPrompt: "Describe the Swift language."
    ) { prompt in
        "<s>[INST] \(prompt) [/INST] "
    }

    public static let codeLlama13b4bit = ModelConfiguration(
        id: "mlx-community/CodeLlama-13b-Instruct-hf-4bit-MLX",
        overrideTokenizer: "PreTrainedTokenizer",
        defaultPrompt: "func sortArray(_ array: [Int]) -> String { <FILL_ME> }"
    ) { prompt in
        // given the prompt: func sortArray(_ array: [Int]) -> String { <FILL_ME> }
        // the python code produces this (via its custom tokenizer):
        // <PRE> func sortArray(_ array: [Int]) -> String { <SUF> } <MID>
        "<PRE> " + prompt.replacingOccurrences(of: "<FILL_ME>", with: "<SUF>") + " <MID>"
    }

    public static let phi4bit = ModelConfiguration(
        id: "mlx-community/phi-2-hf-4bit-mlx",
        // https://www.promptingguide.ai/models/phi-2
        defaultPrompt: "Why is the sky blue?"
    )

    public static let phi3_5_4bit = ModelConfiguration(
        id: "mlx-community/Phi-3.5-mini-instruct-4bit",
        defaultPrompt: "What is the gravity on Mars and the moon?",
        extraEOSTokens: ["<|end|>"]
    ) { prompt in
        "<s><|user|>\n\(prompt)<|end|>\n<|assistant|>\n"
    }

    public static let gemma2bQuantized = ModelConfiguration(
        id: "mlx-community/quantized-gemma-2b-it",
        overrideTokenizer: "PreTrainedTokenizer",
        // https://www.promptingguide.ai/models/gemma
        defaultPrompt: "what is the difference between lettuce and cabbage?"
    ) { prompt in
        "<start_of_turn>user\n\(prompt)<end_of_turn>\n<start_of_turn>model\n"
    }

    public static let gemma_2_9b_it_4bit = ModelConfiguration(
        id: "mlx-community/gemma-2-9b-it-4bit",
        overrideTokenizer: "PreTrainedTokenizer",
        // https://www.promptingguide.ai/models/gemma
        defaultPrompt: "What is the difference between lettuce and cabbage?"
    ) { prompt in
        "<start_of_turn>user\n\(prompt)<end_of_turn>\n<start_of_turn>model\n"
    }

    public static let gemma_2_2b_it_4bit = ModelConfiguration(
        id: "mlx-community/gemma-2-2b-it-4bit",
        overrideTokenizer: "PreTrainedTokenizer",
        // https://www.promptingguide.ai/models/gemma
        defaultPrompt: "What is the difference between lettuce and cabbage?"
    ) { prompt in
        "<start_of_turn>user \(prompt)<end_of_turn><start_of_turn>model"
    }

    public static let qwen205b4bit = ModelConfiguration(
        id: "mlx-community/Qwen1.5-0.5B-Chat-4bit",
        overrideTokenizer: "PreTrainedTokenizer",
        defaultPrompt: "why is the sky blue?"
    ) { prompt in
        "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>user\n\(prompt)<|im_end|>\n<|im_start|>assistant"
    }

    public static let openelm270m4bit = ModelConfiguration(
        id: "mlx-community/OpenELM-270M-Instruct",
        // https://huggingface.co/apple/OpenELM
        defaultPrompt: "Once upon a time there was"
    ) { prompt in
        "\(prompt)"
    }

    public static let llama3_1_8B_4bit = ModelConfiguration(
        id: "mlx-community/Meta-Llama-3.1-8B-Instruct-4bit",
        defaultPrompt: "What is the difference between a fruit and a vegetable?"
    ) { prompt in
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\nYou are a helpful assistant<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\(prompt)<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>"
    }

    public static let llama3_8B_4bit = ModelConfiguration(
        id: "mlx-community/Meta-Llama-3-8B-Instruct-4bit",
        defaultPrompt: "What is the difference between a fruit and a vegetable?"
    ) { prompt in
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\nYou are a helpful assistant<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\(prompt)<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>"
    }

    private enum BootstrapState: Sendable {
        case idle
        case bootstrapping
        case bootstrapped
    }

    // … remainder of Models.swift unchanged
}
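Once the dependency is bumped, the hand-written prompt closures above could eventually be driven by the tokenizer's chat template instead. A minimal sketch, assuming the applyChatTemplate(messages:) API added in huggingface/swift-transformers#104 (exact signatures may differ between releases):

import Tokenizers

// Sketch only: the model id is taken from the registry above; the chat-template
// API is the one introduced in huggingface/swift-transformers#104.
func chatPrompt(for userPrompt: String) async throws -> String {
    let tokenizer = try await AutoTokenizer.from(
        pretrained: "mlx-community/SmolLM-135M-Instruct-4bit")
    let messages: [[String: String]] = [
        ["role": "user", "content": userPrompt]
    ]
    // The template renders and tokenizes in one step; decode to inspect the text.
    let tokens = try tokenizer.applyChatTemplate(messages: messages)
    return tokenizer.decode(tokens: tokens)
}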

@davidkoski
Collaborator

Yes, of course! If you want to submit a PR, that is the fastest way; otherwise I will try to get to this in the coming week.

@johnmai-dev
Contributor Author

Okay, thank you very much @davidkoski!
On my end, I can only modify mlx-swift-examples/Package.swift; I might not have time to update mlx-swift-examples/Libraries/LLM/Models.swift at the moment.

Hi @pcuenca, could you please tag a new release for https://github.com/huggingface/swift-transformers?

@davidkoski
Collaborator

I think we are all set here.
