How to use AutoGPT with Docker: A step-by-step guide
AutoGPT is an autonomous GPT-4 agent: essentially, ChatGPT talking to itself. It can write code, execute code, and access the internet. By conversing with itself it can, among other things, verify sources and create and debug programs on its own. It’s one of the latest big things in AI. In this article I walk you step by step through running AutoGPT using Docker.
I assume you know how to work with the terminal and that you have Git and Docker installed.
Step 1: Open your terminal
Step 2: Clone the AutoGPT git repository.
The command below creates an Auto-GPT folder inside your current working directory.
git clone https://github.com/Significant-Gravitas/Auto-GPT.git Auto-GPT
Step 3: Checkout the stable branch
AutoGPT is under active development, and the master branch is often in a broken state. If you just want to use AutoGPT, check out the stable branch instead; it contains the latest stable version of AutoGPT.
cd ./Auto-GPT # move into the newly created folder
git fetch
git checkout stable
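To confirm the checkout worked, `git branch --show-current` should print `stable`. The flow can be sketched end to end with a throwaway repository (the branch name `stable` matches the real repo; everything else here is illustrative):

```shell
# Illustrative throwaway repo standing in for the Auto-GPT clone
git init -q demo-repo && cd demo-repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git branch stable                 # stand-in for the repo's stable branch
git checkout -q stable            # same command you run in Auto-GPT
git branch --show-current         # prints: stable
```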
Step 4: Create a .env file
cp .env.template .env
Step 5: Get an OpenAI API key
5.1 Go to https://platform.openai.com/.
5.2 Create an account or log in with your OpenAI account.
5.3 Click your account name (top right).
5.4 Go to “View API Keys”.
5.5 Go to “Billing” and configure billing by following the steps under “Payment methods”.
5.6 After setting up payments, go back to “API Keys” and click “Create new secret key”.
5.7 Give the key a name, click “Create secret key”, and copy the key.
Step 6: Save the key in the .env file
Open the .env file in the Auto-GPT directory. Where it says OPENAI_API_KEY=, paste your API key:
OPENAI_API_KEY=#paste your API key here
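If you prefer doing this from the terminal, here is a sketch (assuming GNU sed; the template contents and the placeholder key `sk-your-key-here` are illustrative, not the real repo file or a real key):

```shell
# Illustrative only: fabricate a minimal .env.template standing in for the
# repo's, then copy it and insert a placeholder key (replace sk-your-key-here
# with your actual API key)
printf 'OPENAI_API_KEY=\n' > .env.template
cp .env.template .env
sed -i 's/^OPENAI_API_KEY=.*/OPENAI_API_KEY=sk-your-key-here/' .env
grep '^OPENAI_API_KEY=' .env   # prints: OPENAI_API_KEY=sk-your-key-here
```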
Step 7: Start Docker
Step 8: Run Auto-GPT with docker-compose.
Execute the following command in your terminal:
docker-compose run --build --rm auto-gpt
Step 9: Enjoy AutoGPT!
Tip: Run AutoGPT in continuous mode:
docker-compose run --build --rm auto-gpt --continuous
Step 10: If you want to stay up to date on how AI can help you develop software, follow me.
Update: use local memory with Auto-GPT
If you want to use local memory with Auto-GPT (instead of Redis), do the following:
Step 1: Create a file named “auto-gpt.json” in the “Auto-GPT” directory.
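From the terminal, that looks like this (an empty JSON object is a safe starting point; my assumption is that Auto-GPT populates the file as it runs):

```shell
# Create the local memory file as an empty JSON object
echo "{}" > auto-gpt.json
cat auto-gpt.json   # prints: {}
```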
Step 2: Open the file “docker-compose.yml” in your editor.
Step 3: Bind auto-gpt.json to the Docker container
In the “docker-compose.yml” file, add the following line under “volumes:”
- "./auto-gpt.json:/home/appuser/auto-gpt.json"
You end up with a “volumes” section that looks like this:
volumes:
- "./autogpt:/app"
- ".env:/app/.env"
- "./auto-gpt.json:/home/appuser/auto-gpt.json"