Strathweb Phi Engine - now with Safe Tensors support


This summer, I announced the Strathweb Phi Engine, a cross-platform library for running Phi inference anywhere. Up until now, the library only supported models in the quantized GGUF format. Today, I'm excited to share that the library now also supports the Safe Tensor model format.

This significantly expands the use cases and interoperability of the Strathweb Phi Engine. With Safe Tensor support, you can now load and run models in a format that is not only performant but also prioritizes security and memory safety. Notably, all the Phi models published by Microsoft use the Safe Tensor format by default.

Using the Safe Tensor format

In Strathweb Phi Engine, models are loaded by supplying the model location via PhiModelProvider. (Note: The casing used here and in the examples below is specific to the Swift implementation. Other supported languages may use slightly different naming conventions to align with their respective language standards.)

There are now four possibilities for loading models:

  1. PhiModelProvider.huggingFaceGguf: Loads a model in the GGUF format directly from Hugging Face. This requires specifying the repository name, model filename, and the model version (branch).
  2. PhiModelProvider.huggingFace: Loads a model in the Safe Tensor format directly from Hugging Face. This requires specifying the repository name and model version (branch). The library automatically detects the necessary model files by looking for model.safetensors.index.json and config.json in the repository.
  3. PhiModelProvider.fileSystemGguf: Loads a model in the GGUF format from the local filesystem. You must provide the absolute path to the GGUF file.
  4. PhiModelProvider.fileSystem: Loads a model in the Safe Tensor format from the local filesystem. You must provide the absolute paths to both the model.safetensors.index.json and config.json files (see the sketch just after this list).
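
For the two filesystem providers, loading might look like the sketch below. The descriptions above tell us which files each provider needs, but the exact Swift argument labels (and the paths) are my assumptions here, so check the Strathweb Phi Engine repository for the precise signatures.

// Hypothetical argument labels and paths - verify against the repository
let localGgufProvider = PhiModelProvider.fileSystemGguf(
    modelPath: "/models/Phi-3-mini-4k-instruct-q4.gguf")

let localSafeTensorProvider = PhiModelProvider.fileSystem(
    indexPath: "/models/phi-3-mini-4k/model.safetensors.index.json",
    configPath: "/models/phi-3-mini-4k/config.json")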

Here's a complete example of how to load and use the Phi-3-mini-4k model in both GGUF and Safe Tensor formats using Swift:

import Foundation

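// Pass --quantized on the command line to use the quantized GGUF model; Safe Tensor mode is the default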
let isQuantizedMode = CommandLine.arguments.contains("--quantized")

if isQuantizedMode {
    print(" 🏃 Quantized mode is enabled.")
} else {
    print(" 💪 Safe tensors mode is enabled.")
}

let modelProvider = isQuantizedMode ? 
    PhiModelProvider.huggingFaceGguf(modelRepo: "microsoft/Phi-3-mini-4k-instruct-gguf", modelFileName: "Phi-3-mini-4k-instruct-q4.gguf", modelRevision: "main") : 
    PhiModelProvider.huggingFace(modelRepo: "microsoft/Phi-3-mini-4k-instruct", modelRevision: "main")

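// Configure inference options, such as the sampling temperature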
let inferenceOptionsBuilder = InferenceOptionsBuilder()
try! inferenceOptionsBuilder.withTemperature(temperature: 0.9)
let inferenceOptions = try! inferenceOptionsBuilder.build()

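// Local directory where downloaded model files are cached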
let cacheDir = FileManager.default.currentDirectoryPath.appending("/.cache")

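// Event handler that streams tokens to the console and announces when the model has loaded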
class ModelEventsHandler: PhiEventHandler {
    func onInferenceStarted() {}

    func onInferenceEnded() {}

    func onInferenceToken(token: String) {
        print(token, terminator: "")
    }

    func onModelLoaded() {
        print("""
 🧠 Model loaded!
****************************************
""")
    }
}

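// Wire up the engine: event handler, optional GPU acceleration, and the chosen model provider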
let modelBuilder = PhiEngineBuilder()
try! modelBuilder.withEventHandler(eventHandler: BoxedPhiEventHandler(handler: ModelEventsHandler()))
let gpuEnabled = try! modelBuilder.tryUseGpu()
try! modelBuilder.withModelProvider(modelProvider: modelProvider)

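// Build a stateful model instance with a system instruction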
let model = try! modelBuilder.buildStateful(cacheDir: cacheDir, systemInstruction: "You are a hockey poet. Be brief and polite.")

// Run inference
let result = try! model.runInference(promptText: "Write a haiku about ice hockey", inferenceOptions: inferenceOptions)

print("""

****************************************
 πŸ“ Tokens Generated: \(result.tokenCount)
 πŸ–₯️ Tokens per second: \(result.tokensPerSecond)
 ⏱️ Duration: \(result.duration)s
 🏎️ GPU enabled: \(gpuEnabled)
""")

And that's it! Whether you're working with quantized models or exploring Safe Tensor use cases, this update opens up new possibilities. You can find more info and samples in the Strathweb Phi Engine repository. Have fun experimenting with Phi!
