Note
This template uses Google's Gemini model via their OpenAI-compatible API.
This Bolt for Python template demonstrates how to build Agents & Assistants in Slack.
Before getting started, make sure you have a development workspace where you have permissions to install apps. If you don't have one set up, go ahead and create one.
Join the Slack Developer Program for exclusive access to sandbox environments for building and testing your apps, along with tooling and resources created to help you build and grow.
- Open https://api.slack.com/apps/new and choose "From an app manifest"
- Choose the workspace you want to install the application to
- Copy the contents of manifest.json into the text box that says *Paste your manifest code here* (within the JSON tab) and click Next
- Review the configuration and click Create
- Click Install to Workspace and Allow on the screen that follows. You'll then be redirected to the App Configuration dashboard.
Before you can run the app, you'll need to store some environment variables.
- Open your app configuration page from this list, click OAuth & Permissions in the left hand menu, then copy the Bot User OAuth Token. You will store this in your environment as `SLACK_BOT_TOKEN`.
- Click Basic Information from the left hand menu and follow the steps in the App-Level Tokens section to create an app-level token with the `connections:write` scope. Copy this token. You will store this in your environment as `SLACK_APP_TOKEN`.
This project uses python-dotenv to automatically load environment variables from a .env file when you run the app. A .env-example file is included at the project root with all required variables.
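A filled-in .env looks something like this (all values below are placeholders):

```
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_APP_TOKEN=xapp-your-app-token
GEMINI_API_KEY=your-gemini-api-key
GEMINI_API_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai/
GEMINI_MODEL=gemini-2.5-flash
```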
Setup:

```
# Copy the example to a working .env file
cp .env-example .env

# Edit .env and replace the placeholder values with your real tokens,
# then run the app; python-dotenv will load the variables automatically:
python3 app.py
```

Note
The .env file is ignored by git (via .gitignore) to keep your secrets safe.
If you prefer not to use a .env file, you can export variables directly in your shell (macOS / Linux / zsh or bash):
```
# Replace with your app token and bot token
# For Windows OS, use $env:SLACK_BOT_TOKEN = "<your-bot-token>" in PowerShell
export SLACK_BOT_TOKEN=<your-bot-token>
export SLACK_APP_TOKEN=<your-app-token>

# This sample uses Google's Gemini model through its OpenAI-compatible API by default, but you can switch to any other solution!
export GEMINI_API_KEY=<your-gemini-api-key>
export GEMINI_API_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai/
export GEMINI_MODEL=<your-preferred-gemini-model> # e.g. gemini-2.5-flash
```

Tip
To get your Gemini API key, go to Google AI Studio and create a new API key. If you want to learn more about Gemini's OpenAI-compatible API, check out the official documentation.
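As a quick sanity check, you can point the official OpenAI Python client at the Gemini base URL above. This is only a sketch: it assumes the openai package is installed and the environment variables from the previous step are set.

```python
import os

from openai import OpenAI

# Reuse the same environment variables the app expects.
client = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url=os.environ.get(
        "GEMINI_API_BASE_URL",
        "https://generativelanguage.googleapis.com/v1beta/openai/",
    ),
)

# Send a trivial prompt to confirm the key and base URL work.
response = client.chat.completions.create(
    model=os.environ.get("GEMINI_MODEL", "gemini-2.5-flash"),
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)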
```
# Clone this project onto your machine
git clone https://github.com/jonigl/slack-ai-agent-with-google-gemini.git

# Change into this project directory
cd slack-ai-agent-with-google-gemini

# Set up your Python virtual environment
python3 -m venv .venv
source .venv/bin/activate # for Windows OS, .\.venv\Scripts\Activate instead should work

# Install the dependencies
pip install -r requirements.txt

# Start your local server
python3 app.py
```

Start talking to the bot! Start a new DM or thread and click the feedback button when it responds.
```
# Run flake8 from root directory for linting
flake8 *.py && flake8 listeners/

# Run black from root directory for code formatting
black .
```

manifest.json is a configuration for Slack apps. With a manifest, you can create an app with a pre-defined configuration, or adjust the configuration of an existing app.
app.py is the entry point for the application and is the file you'll run to start the server. This project aims to keep this file as thin as possible, primarily using it as a way to route inbound requests.
Every incoming request is routed to a "listener". This directory groups each listener based on the Slack Platform feature used, so /listeners/events handles incoming events, /listeners/shortcuts would handle incoming Shortcuts requests, and so on.
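As a rough sketch of this wiring (file contents below are illustrative; the template's actual modules may differ slightly):

```python
# app.py (illustrative): kept thin, it only builds the app and routes to listeners
import os

from dotenv import load_dotenv
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

from listeners import register_listeners

load_dotenv()  # pull SLACK_BOT_TOKEN, SLACK_APP_TOKEN, etc. from .env

app = App(token=os.environ["SLACK_BOT_TOKEN"])
register_listeners(app)

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()


# listeners/__init__.py (illustrative): each feature module registers itself
# from listeners import assistant
#
# def register_listeners(app):
#     assistant.register(app)
```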
/listeners/assistant
Configures the new Slack Assistant features, providing a dedicated side panel UI for users to interact with the AI chatbot. This module includes:
assistant.py, which contains two listeners:
- The `@assistant.thread_started` listener receives an event when users start a new app thread.
- The `@assistant.user_message` listener processes user messages in app threads or from the app Chat and History tab.
ai/llm_caller.py, which handles the LLM integration and message formatting through the OpenAI-compatible client. It includes the call_llm() function that sends conversation threads to the configured Gemini model.
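A condensed, hypothetical sketch of how these pieces fit together is shown below. It assumes a recent slack_bolt release that ships the Assistant middleware and the openai package pointed at Gemini's OpenAI-compatible endpoint; the handler and helper names are illustrative and may not match the template's files exactly.

```python
import os

from openai import OpenAI
from slack_bolt import App, Assistant

app = App(token=os.environ["SLACK_BOT_TOKEN"])
assistant = Assistant()

# OpenAI-compatible client pointed at Gemini, as configured earlier.
client = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url=os.environ["GEMINI_API_BASE_URL"],
)


def call_llm(messages_in_thread):
    # Send the conversation so far to the configured Gemini model.
    response = client.chat.completions.create(
        model=os.environ.get("GEMINI_MODEL", "gemini-2.5-flash"),
        messages=messages_in_thread,
    )
    return response.choices[0].message.content


@assistant.thread_started
def start_thread(say):
    # Greet the user when a new assistant thread is opened.
    say("Hi! How can I help you today?")


@assistant.user_message
def respond(payload, say, set_status):
    # Show a typing status, then reply with the model's answer.
    set_status("is typing...")
    say(call_llm([{"role": "user", "content": payload["text"]}]))


# Attach the assistant middleware to the Bolt app.
app.use(assistant)
```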
Only implement OAuth if you plan to distribute your application across multiple workspaces. A separate app_oauth.py file can be found with relevant OAuth settings.
When using OAuth, Slack requires a public URL where it can send requests. In this template app, we've used ngrok. Check out this guide for setting it up.
Start ngrok to access the app on an external network and create a redirect URL for OAuth.
ngrok http 3000
This output should include a forwarding address for http and https (we'll use https). It should look something like the following:
Forwarding https://3cb89939.ngrok.io -> http://localhost:3000
Navigate to OAuth & Permissions in your app configuration and click Add a Redirect URL. The redirect URL should be set to your ngrok forwarding address with the slack/oauth_redirect path appended. For example:
https://3cb89939.ngrok.io/slack/oauth_redirect
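If you do enable OAuth, the Bolt app is typically constructed with OAuth settings instead of a single bot token. The sketch below is only illustrative (the environment variable names and scopes are assumptions), so defer to app_oauth.py for the template's actual configuration.

```python
import os

from slack_bolt import App
from slack_bolt.oauth.oauth_settings import OAuthSettings

# OAuth needs the app's client credentials and signing secret
# rather than a pre-installed bot token.
oauth_settings = OAuthSettings(
    client_id=os.environ["SLACK_CLIENT_ID"],
    client_secret=os.environ["SLACK_CLIENT_SECRET"],
    scopes=["assistant:write", "chat:write", "im:history"],  # assumed scopes
)

app = App(
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
    oauth_settings=oauth_settings,
)

if __name__ == "__main__":
    # Slack redirects to <your-ngrok-host>/slack/oauth_redirect by default.
    app.start(port=3000)
```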