open-interpreter: a natural language interface for computers

Open Interpreter lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with Open Interpreter through a ChatGPT-like interface in your terminal by running $ interpreter after installing.

This provides a natural-language interface to your computer's general-purpose capabilities:

  • Create and edit photos, videos, PDFs, etc.
  • Control a Chrome browser to perform research
  • Plot, clean, and analyze large datasets
  • ...etc.

Note: You'll be asked to approve code before it's run.

Setup

Installation from pip

If you are familiar with Python, we recommend installing Open Interpreter via pip:

pip install open-interpreter

You’ll need Python 3.10 or 3.11. Run python --version to check yours.

It is recommended to install Open Interpreter in a virtual environment.
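
For example, you can create and activate one with Python's built-in venv module (the environment name oi-env below is just an example):

python -m venv oi-env
source oi-env/bin/activate   # On Windows: oi-env\Scripts\activate
pip install open-interpreter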

Install optional dependencies from pip

Open Interpreter has optional dependencies for different capabilities:

Local Mode dependencies

pip install open-interpreter[local]

OS Mode dependencies

pip install open-interpreter[os]

Safe Mode dependencies

pip install open-interpreter[safe]

Server dependencies

pip install open-interpreter[server]

Experimental one-line installers

There are also experimental one-line installers. On a Mac, for example, open your Terminal with admin privileges, then paste the following command:

curl -sL https://raw.githubusercontent.com/KillianLucas/open-interpreter/main/installers/oi-mac-installer.sh | bash

These installers will attempt to download Python, set up an environment, and install Open Interpreter for you.

No Installation

If configuring your computer's environment is challenging, you can press the , key on the GitHub repository page to create a codespace. After a moment, you'll receive a cloud virtual machine with open-interpreter pre-installed. You can then interact with it directly and freely approve its execution of system commands without worrying about damaging the system.

Basic Usage

Interactive Chat

To start an interactive chat in your terminal, either run interpreter from the command line or call interpreter.chat() from a .py file.

interpreter
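
Or, equivalently, from a .py file:

from interpreter import interpreter

interpreter.chat()  # starts an interactive chat session in your terminal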

Programmatic Chat

For more precise control, you can pass messages directly to .chat(message) in Python:

interpreter.chat("Add subtitles to all videos in /videos.")

# ... Displays output in your terminal, completes task ...

interpreter.chat("These look great but can you make the subtitles bigger?")

# ...

Start a New Chat

In your terminal, Open Interpreter behaves like ChatGPT and will not remember previous conversations. Simply run interpreter to start a new chat.

interpreter

In Python, Open Interpreter remembers conversation history. If you want to start fresh, you can reset it.

interpreter.messages = []

Save and Restore Chats

In your terminal, Open Interpreter will save previous conversations to <your application directory>/Open Interpreter/conversations/.

You can resume any of them by running interpreter --conversations. Use your arrow keys to select one, then press ENTER to resume it.

interpreter --conversations

In Python, interpreter.chat() returns a List of messages, which can be used to resume a conversation with interpreter.messages = messages.

# Save messages to 'messages'
messages = interpreter.chat("My name is Killian.")


# Reset interpreter ("Killian" will be forgotten)
interpreter.messages = []


# Resume chat from 'messages' ("Killian" will be remembered)
interpreter.messages = messages
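
Because messages is a plain Python list, you can also persist a conversation between sessions. Here is a minimal sketch using the standard json module (the filename is just an example, and it assumes your messages are JSON-serializable):

import json

# Save the conversation to disk
with open("conversation.json", "w") as f:
    json.dump(messages, f)

# Later: load it back and resume
with open("conversation.json") as f:
    interpreter.messages = json.load(f)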

Configure Default Settings

We save default settings to the default.yaml profile, which can be opened and edited by running the following command:

interpreter --profiles

You can use this to set your default language model, system message (custom instructions), max budget, etc.
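
For illustration, a profile might look something like the sketch below. The key names are assumptions that mirror the Python API used elsewhere in this document, so verify them against your generated default.yaml:

# default.yaml (illustrative sketch -- verify keys against your own file)
llm:
  model: "gpt-4o"  # mirrors interpreter.llm.model
custom_instructions: "Prefer Python for scripting tasks."  # extra context added to the system message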

Note: The Python library also inherits settings from the default profile file; run interpreter --profiles and edit default.yaml to change them.

Customize System Message

In your terminal, modify the system message by editing your profile file, as described in Configure Default Settings above.

In Python, you can inspect and configure Open Interpreter’s system message to extend its functionality, modify permissions, or give it more context.

interpreter.system_message += """
Run shell commands with -y so the user doesn't have to confirm them.
"""
print(interpreter.system_message)

Change your Language Model

Open Interpreter uses LiteLLM to connect to language models.

You can change the model by setting the model parameter:

interpreter --model gpt-3.5-turbo
interpreter --model claude-2
interpreter --model command-nightly

In Python, set the model on the object:

interpreter.llm.model = "gpt-3.5-turbo"

Find the appropriate “model” string for your language model in the LiteLLM documentation.

Running Locally

Open Interpreter can be run fully locally.

You'll need to install separate software to serve local LLMs. Open Interpreter supports multiple local model providers, such as Ollama, Llamafile, Jan, and LM Studio.

Local models perform better with extra guidance and direction. You can improve performance for your use-case by creating a new Profile.

Terminal Usage

Local Explorer

The Local Explorer simplifies the process of using Open Interpreter locally. To access this menu, run interpreter --local.

Select your chosen local model provider from the list of options.

Most providers require you to specify which model you are using. Provider-specific instructions are shown in the menu.

Custom Local

If you want to use a provider other than the ones listed, set the --api_base flag to your custom endpoint.

You will also need to select a model with the --model flag.

interpreter --api_base "http://localhost:11434" --model ollama/codestral

Other terminal flags are explained in Settings.

Python Usage

To use Open Interpreter locally from a Python script, a few fields need to be set:

from interpreter import interpreter

interpreter.offline = True
interpreter.llm.model = "ollama/codestral"
interpreter.llm.api_base = "http://localhost:11434"

interpreter.chat("how many files are on my desktop?")

Helpful settings for local models

Local models benefit from extra coercion and guidance, but adding that context to every message verbatim can degrade the conversational experience of Open Interpreter. The following settings apply templates to messages, improving the steering of the language model while maintaining the natural flow of conversation.

interpreter.user_message_template lets the user's message be wrapped in a template. This can help steer a language model toward a desired behaviour without requiring the user to add extra context to their message.

interpreter.always_apply_user_message_template, when True, wraps every user message in the template. If False, only the last user message is wrapped.

interpreter.code_output_template wraps the output from the computer after code is run. This can help nudge the language model to continue working or to explain outputs.

interpreter.empty_code_output_template is the message that is sent to the language model if code execution results in no output.
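
For example, here is a sketch of how these settings could be combined. It assumes the templates use a {content} placeholder for the text being wrapped (verify against your installed version), and the template strings themselves are only illustrations:

from interpreter import interpreter

# Wrap user messages to steer a local model toward concise, code-first answers
# (assumes {content} is replaced with the user's actual message)
interpreter.user_message_template = "{content}\nRespond with a short plan, then code."
interpreter.always_apply_user_message_template = False  # wrap only the last user message

# Encourage the model to interpret results instead of stopping early
interpreter.code_output_template = "Code output: {content}\nExplain this output and continue if the task is unfinished."
interpreter.empty_code_output_template = "The code ran with no output. Continue, or confirm the task is complete."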