Installation

1 Install the package

The package urban-worm can be installed with pip:

pip install urban-worm
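
After installation, a quick sanity check confirms that pip can see the package (a minimal sketch, assuming `python3` and pip are available on your PATH):

```shell
# Ask pip whether urban-worm is installed; record the answer without failing the script.
INSTALLED=$(python3 -m pip show urban-worm >/dev/null 2>&1 && echo yes || echo no)
echo "urban-worm installed: $INSTALLED"
```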

2 Inference with llama.cpp

To run more pre-quantized models with vision capabilities, please install a pre-built version of llama.cpp:

# Windows
winget install llama.cpp

# Mac and Linux
brew install llama.cpp
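
Once installed, the pre-built packages place the llama.cpp command-line tools (such as `llama-cli` and `llama-server`) on your PATH. A quick probe, sketched below, shows whether they are available before you try to run a model:

```shell
# Probe for the llama.cpp tools; report each one without aborting if it is missing.
LLAMA_TOOLS_FOUND=0
for tool in llama-cli llama-server; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
    LLAMA_TOOLS_FOUND=$((LLAMA_TOOLS_FOUND + 1))
  else
    echo "$tool: not found on PATH"
  fi
done
```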

More information about the installation is available here

More GGUF models can be found on the Hugging Face pages here and here

3 Inference with the Ollama client

If you plan to rely on Ollama, please make sure the Ollama client is installed before using urban-worm

On Linux, Ollama can also be installed by running the following in the terminal:

curl -fsSL https://ollama.com/install.sh | sh

On macOS, Ollama can also be installed using brew:

brew install ollama

To install brew, run the following in the terminal:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Windows users should install the Ollama client directly
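
Whichever platform you install on, you can check that the Ollama client is reachable before using urban-worm's Ollama-backed features (a minimal sketch; urban-worm itself may perform its own check):

```shell
# Check whether the ollama binary is on PATH before relying on the Ollama backend.
if command -v ollama >/dev/null 2>&1; then
  OLLAMA_FOUND=1
  echo "ollama client found"
else
  OLLAMA_FOUND=0
  echo "ollama client not found; install it before using the Ollama backend"
fi
```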

To install the development version from this repo:

pip install -e git+https://github.com/billbillbilly/urbanworm.git#egg=urban-worm