
Commit 52c557b

mcp: jira client

1 parent de15da1 commit 52c557b

File tree

10 files changed: +199 -73 lines changed

README.md

Lines changed: 3 additions & 1 deletion

@@ -79,7 +79,7 @@ BugZooka supports two complementary modes for monitoring Slack channels that can
 # Run with both polling AND socket mode
 make run ARGS="--product openshift --ci prow --enable-socket-mode"
 ```
-
+
 **Socket Mode Requirements:**
 - An app-level token (`xapp-*`) must be configured as `SLACK_APP_TOKEN`
 - Socket Mode must be enabled in your Slack app settings

@@ -140,6 +140,7 @@ GENERIC_INFERENCE_URL="YOUR_INFERENCE_ENDPOINT"
 GENERIC_INFERENCE_TOKEN="YOUR_INFERENCE_TOKEN"
 GENERIC_MODEL="YOUR_INFERENCE_MODEL"
 ```
+
 **Note**: Please make sure to provide details for all the mandatory attributes and for the product that is intended to be used for testing along with fallback (i.e. GENERIC details) to handle failover use-cases.


@@ -315,6 +316,7 @@ BugZooka has a dependency on [orion-mcp service](https://github.com/jtaleric/ori
 export QUAY_CRED='<base64 encoded pull secret>'
 export BUGZOOKA_IMAGE='<bugzooka image tag>'
 export BUGZOOKA_NAMESPACE='<your namespace>'
+export JIRA_MCP_IMAGE='<jira mcp server image>'
 make deploy

 # Cleanup resources
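Since the deploy flow now requires `JIRA_MCP_IMAGE` alongside the existing exports, a pre-flight check can fail fast before `make deploy`. A minimal sketch — the variable names come from the diff above, but `missing_deploy_vars` itself is a hypothetical helper, not part of BugZooka's Makefile:

```python
import os

# Variables the `make deploy` snippet above exports (JIRA_MCP_IMAGE is new in this commit).
REQUIRED_DEPLOY_VARS = [
    "QUAY_CRED",
    "BUGZOOKA_IMAGE",
    "BUGZOOKA_NAMESPACE",
    "JIRA_MCP_IMAGE",
]


def missing_deploy_vars(env=None):
    """Return the required deployment variables that are unset or empty."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED_DEPLOY_VARS if not env.get(name)]
```

Running such a check before invoking `make deploy` surfaces a missing `JIRA_MCP_IMAGE` immediately instead of failing mid-deploy.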

bugzooka/analysis/prompts.py

Lines changed: 10 additions & 1 deletion

@@ -110,7 +110,7 @@
 - No emojis in tables
 - Separate each config section with 80 equals signs.

-**Remember:**
+**Remember:**
 - The tools provide percentage changes - use them as provided
 - CHECK thresholds (5% and 10%) before categorizing
 - SORT by absolute percentage change (highest first) - this is mandatory

@@ -127,3 +127,12 @@
 Beginning analysis now.
 """,
 }
+# Jira tool prompt - used when Jira MCP tools are available
+JIRA_TOOL_PROMPT = {
+    "system": (
+        "\n\nIMPORTANT: You have access to JIRA search tools. After analyzing the error, "
+        "ALWAYS search for related issues in JIRA using the search_jira_issues tool with the OCPBUGS project. "
+        "Extract key error terms, component names, or operators from the log summary to search for similar issues. "
+        "Include the top 3 most relevant JIRA issues in your final response under a 'Related JIRA Issues' section."
+    ),
+}
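The new `JIRA_TOOL_PROMPT` carries only a system-prompt fragment; `gemini_client.py` appends it to the product system prompt when a `search_jira_issues` tool is available. A runnable sketch of that concatenation — `build_system_prompt` is a hypothetical helper mirroring that logic, not a BugZooka function:

```python
# Fragment copied from the diff above.
JIRA_TOOL_PROMPT = {
    "system": (
        "\n\nIMPORTANT: You have access to JIRA search tools. After analyzing the error, "
        "ALWAYS search for related issues in JIRA using the search_jira_issues tool with the OCPBUGS project. "
        "Extract key error terms, component names, or operators from the log summary to search for similar issues. "
        "Include the top 3 most relevant JIRA issues in your final response under a 'Related JIRA Issues' section."
    ),
}


def build_system_prompt(base_system, tools):
    """Append the Jira fragment only when a search_jira_issues tool is present."""
    if tools and any(getattr(t, "name", "") == "search_jira_issues" for t in tools):
        return base_system + JIRA_TOOL_PROMPT["system"]
    return base_system
```

Because the fragment starts with `\n\n`, it concatenates cleanly onto any existing system prompt without extra glue.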

bugzooka/integrations/gemini_client.py

Lines changed: 86 additions & 62 deletions

@@ -12,6 +12,7 @@
     INFERENCE_MAX_TOOL_ITERATIONS,
 )
 from bugzooka.integrations.inference import InferenceAPIUnavailableError
+from bugzooka.analysis.prompts import JIRA_TOOL_PROMPT


 logger = logging.getLogger(__name__)

@@ -44,7 +45,7 @@ def __init__(self, api_key=None, base_url=None, verify_ssl=None, timeout=None):
         # Timeout configuration
         if timeout is None:
             timeout = float(os.getenv("GEMINI_TIMEOUT", "60.0"))
-
+
         logger.debug("Gemini client timeout set to %.1f seconds", timeout)

         # SSL verification configuration

@@ -79,19 +80,23 @@ def chat_completions_create(self, messages, model="gemini-2.0-flash", **kwargs):
         """
         try:
             logger.debug("Calling Gemini API: %s, Model=%s", self.base_url, model)
-
+
             response = self.client.chat.completions.create(
                 model=model, messages=messages, **kwargs
             )
-
+
             # Log token usage information
-            if hasattr(response, 'usage') and response.usage:
+            if hasattr(response, "usage") and response.usage:
                 usage = response.usage
-                logger.info("📊 Token usage - Prompt: %d, Completion: %d, Total: %d",
-                            usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
+                logger.info(
+                    "📊 Token usage - Prompt: %d, Completion: %d, Total: %d",
+                    usage.prompt_tokens,
+                    usage.completion_tokens,
+                    usage.total_tokens,
+                )
             else:
                 logger.debug("No usage information available in response")
-
+
             logger.debug("Gemini API call successful")
             return response
         except Exception as e:

@@ -145,10 +150,10 @@ async def execute_tool_call(tool_name, tool_args, available_tools):
         logger.debug("Tool arguments: %s", json.dumps(tool_args, indent=2))

         # Check if the tool is async (has coroutine attribute or ainvoke method)
-        if hasattr(tool, 'coroutine') and tool.coroutine:
+        if hasattr(tool, "coroutine") and tool.coroutine:
             # MCP tools have a coroutine attribute
             result = await tool.ainvoke(tool_args)
-        elif hasattr(tool, 'ainvoke'):
+        elif hasattr(tool, "ainvoke"):
             # Some tools have ainvoke method
             result = await tool.ainvoke(tool_args)
         else:

@@ -158,19 +163,23 @@ async def execute_tool_call(tool_name, tool_args, available_tools):
         # Log result
         result_str = str(result)
         result_length = len(result_str)
-
+
         # Check for empty or minimal results
         if not result_str or result_str.strip() in ["", "null", "None", "{}", "[]"]:
             logger.warning("⚠️ Tool %s returned empty or null result", tool_name)
         elif len(result_str.strip()) < 50:
-            logger.warning("⚠️ Tool %s returned small result (%d chars): %s",
-                           tool_name, result_length, result_str)
+            logger.warning(
+                "⚠️ Tool %s returned small result (%d chars): %s",
+                tool_name,
+                result_length,
+                result_str,
+            )
         else:
             logger.info("✅ Tool %s completed (%d chars)", tool_name, result_length)
-
+
         # Log full output at DEBUG level
         logger.debug("Tool %s output: %s", tool_name, result_str)
-
+
         return result_str
     except Exception as e:
         error_msg = f"Error executing tool '{tool_name}': {str(e)}"

@@ -181,20 +190,17 @@ async def execute_tool_call(tool_name, tool_args, available_tools):


 async def analyze_with_gemini_agentic(
-    messages: list,
-    tools=None,
-    model="gemini-2.0-flash",
-    max_iterations=None
+    messages: list, tools=None, model="gemini-2.0-flash", max_iterations=None
 ):
     """
     Generic agentic loop for Gemini with tool calling support.
-
+
     This function implements the agentic pattern where Gemini can iteratively:
     1. Analyze the current context
     2. Decide to call tools if needed
     3. Process tool results
     4. Generate final answer
-
+
     :param messages: List of message dictionaries (system, user, assistant prompts)
     :param tools: List of LangChain tools available for Gemini to call (optional)
     :param model: Gemini model to use (default: gemini-2.0-flash)

@@ -203,18 +209,21 @@ async def analyze_with_gemini_agentic(
     """
     if max_iterations is None:
         max_iterations = INFERENCE_MAX_TOOL_ITERATIONS
-
+
     try:
         gemini_client = GeminiClient()
-
+
         # Convert LangChain tools to OpenAI format if provided
         openai_tools = None
         if tools:
             openai_tools = convert_langchain_tools_to_openai_format(tools)
             tool_names = [t["function"]["name"] for t in openai_tools]
-            logger.info("Starting Gemini analysis with %d tools: %s",
-                        len(openai_tools), ", ".join(tool_names))
-
+            logger.info(
+                "Starting Gemini analysis with %d tools: %s",
+                len(openai_tools),
+                ", ".join(tool_names),
+            )
+
         logger.debug("Starting agentic loop with %d messages", len(messages))

         # Tool calling loop - iterate until we get a final answer or hit max iterations

@@ -240,44 +249,51 @@ async def analyze_with_gemini_agentic(
             response_message = response.choices[0].message

             # Check if Gemini wants to call tools
-            tool_calls = getattr(response_message, 'tool_calls', None)
+            tool_calls = getattr(response_message, "tool_calls", None)

             if not tool_calls:
                 # No tool calls - we have the final answer
                 content = response_message.content
                 if content:
                     logger.info("Analysis complete after %d iteration(s)", iteration)
-                    logger.debug("Response: %s", content[:200] + "..." if len(content) > 200 else content)
+                    logger.debug(
+                        "Response: %s",
+                        content[:200] + "..." if len(content) > 200 else content,
+                    )
                 else:
                     logger.warning("Gemini returned None content, using empty string")
                     content = ""
                 return content

             # Gemini wants to call tools - execute them
             tool_names_called = [tc.function.name for tc in tool_calls]
-            logger.info("Calling %d tool(s): %s", len(tool_calls), ", ".join(tool_names_called))
+            logger.info(
+                "Calling %d tool(s): %s", len(tool_calls), ", ".join(tool_names_called)
+            )

             # Add the assistant's message with tool calls to conversation
-            messages.append({
-                "role": "assistant",
-                "content": response_message.content or "",
-                "tool_calls": [
-                    {
-                        "id": tc.id,
-                        "type": "function",
-                        "function": {
-                            "name": tc.function.name,
-                            "arguments": tc.function.arguments
+            messages.append(
+                {
+                    "role": "assistant",
+                    "content": response_message.content or "",
+                    "tool_calls": [
+                        {
+                            "id": tc.id,
+                            "type": "function",
+                            "function": {
+                                "name": tc.function.name,
+                                "arguments": tc.function.arguments,
+                            },
                         }
-                    }
-                    for tc in tool_calls
-                ]
-            })
+                        for tc in tool_calls
+                    ],
+                }
+            )

             # Execute each tool call and add results to messages
             for tool_call in tool_calls:
                 function_name = tool_call.function.name
-
+
                 try:
                     function_args = json.loads(tool_call.function.arguments)
                 except json.JSONDecodeError as e:

@@ -286,25 +302,27 @@ async def analyze_with_gemini_agentic(
                 else:
                     # Execute the tool (await since it's now async)
                     function_result = await execute_tool_call(
-                        function_name,
-                        function_args,
-                        tools
+                        function_name, function_args, tools
                     )

                     # Add tool result to messages
-                    messages.append({
-                        "role": "tool",
-                        "tool_call_id": tool_call.id,
-                        "name": function_name,
-                        "content": function_result
-                    })
+                    messages.append(
+                        {
+                            "role": "tool",
+                            "tool_call_id": tool_call.id,
+                            "name": function_name,
+                            "content": function_result,
+                        }
+                    )

             # Continue loop to let Gemini process tool results

         # If we hit max iterations without a final answer
-        logger.warning("Reached maximum iterations (%d) without final answer", max_iterations)
+        logger.warning(
+            "Reached maximum iterations (%d) without final answer", max_iterations
+        )
         return "Analysis incomplete: Maximum tool calling iterations reached. Please try again with a simpler query."
-
+
     except Exception as e:
         logger.error("Error in Gemini agentic loop: %s", str(e), exc_info=True)
         raise InferenceAPIUnavailableError(

@@ -318,7 +336,7 @@ async def analyze_log_with_gemini(
     error_summary: str,
     model="gemini-2.0-flash",
     tools=None,
-    max_iterations=None
+    max_iterations=None,
 ):
     """
     Analyzes log summaries using Gemini API with product-specific prompts and optional tool calling.

@@ -333,7 +351,7 @@ async def analyze_log_with_gemini(
     """
     try:
         logger.info("Starting log analysis for product: %s", product)
-
+
         prompt_config = product_config["prompt"][product]
         try:
             formatted_content = prompt_config["user"].format(

@@ -342,20 +360,26 @@ async def analyze_log_with_gemini(
         except KeyError:
             formatted_content = prompt_config["user"].format(summary=error_summary)

-        logger.debug("Error summary: %s", error_summary[:150] + "..." if len(error_summary) > 150 else error_summary)
+        logger.debug(
+            "Error summary: %s",
+            error_summary[:150] + "..." if len(error_summary) > 150 else error_summary,
+        )
+
+        # Append Jira prompt if Jira MCP tools are available
+        system_prompt = prompt_config["system"]
+        if tools and any(getattr(t, "name", "") == "search_jira_issues" for t in tools):
+            logger.info("Jira MCP tools detected - injecting Jira prompt")
+            system_prompt += JIRA_TOOL_PROMPT["system"]

         messages = [
-            {"role": "system", "content": prompt_config["system"]},
+            {"role": "system", "content": system_prompt},
             {"role": "user", "content": formatted_content},
             {"role": "assistant", "content": prompt_config["assistant"]},
         ]

         # Use the generic agentic loop
         return await analyze_with_gemini_agentic(
-            messages=messages,
-            tools=tools,
-            model=model,
-            max_iterations=max_iterations
+            messages=messages, tools=tools, model=model, max_iterations=max_iterations
         )

     except Exception as e:
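The agentic loop in this file iterates: call the model, execute any requested tools, feed the results back as `role: "tool"` messages, and stop once a reply arrives without tool calls. A compressed, self-contained sketch of that control flow — `agentic_loop`, `fake_model`, and the dict-based message shapes are illustrative stand-ins, not BugZooka's actual client or the OpenAI response objects:

```python
def agentic_loop(model, messages, tools, max_iterations=5):
    """Minimal sketch of the tool-calling loop in analyze_with_gemini_agentic."""
    for _ in range(max_iterations):
        reply = model(messages)  # stand-in for gemini_client.chat_completions_create
        tool_calls = reply.get("tool_calls")
        if not tool_calls:
            # No tool calls means we have the final answer
            return reply.get("content") or ""
        # Record the assistant turn, then execute each requested tool
        messages.append({"role": "assistant", "content": "", "tool_calls": tool_calls})
        for call in tool_calls:
            result = tools[call["name"]](**call["args"])  # execute_tool_call stand-in
            messages.append(
                {"role": "tool", "name": call["name"], "content": str(result)}
            )
    return "Analysis incomplete: Maximum tool calling iterations reached."


def fake_model(messages):
    # Ask for a JIRA search first; once a tool result is in context, answer.
    if any(m["role"] == "tool" for m in messages):
        return {"content": "2 related issues found"}
    return {"tool_calls": [{"name": "search_jira_issues", "args": {"query": "etcd"}}]}


tools = {"search_jira_issues": lambda query: ["OCPBUGS-1", "OCPBUGS-2"]}
answer = agentic_loop(fake_model, [{"role": "user", "content": "analyze this log"}], tools)
```

The iteration cap plays the same role as `INFERENCE_MAX_TOOL_ITERATIONS` above: it bounds how many model/tool round-trips can happen before the loop gives up with the "Analysis incomplete" message.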

kustomize/base/configmap-mcp-config.yaml

Lines changed: 4 additions & 0 deletions

@@ -12,6 +12,10 @@ data:
       "orion_mcp_server": {
         "url": "http://orion-mcp.orion-mcp:3030/mcp",
         "transport": "streamable_http"
+      },
+      "jira_mcp_server": {
+        "url": "http://jira-mcp.${BUGZOOKA_NAMESPACE}:3031/mcp",
+        "transport": "streamable_http"
       }
     }
   }
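The ConfigMap above registers a second streamable-HTTP MCP server next to `orion_mcp_server`, with `${BUGZOOKA_NAMESPACE}` substituted at deploy time. A sketch of enumerating such a config with plain JSON parsing — the hunk does not show the wrapper keys around these entries, so only the two server objects are reproduced here, and the namespace value `bugzooka` is illustrative:

```python
import json

# The two server entries from the hunk above, after deploy-time
# ${BUGZOOKA_NAMESPACE} substitution ("bugzooka" is an illustrative value).
config_text = json.dumps(
    {
        "orion_mcp_server": {
            "url": "http://orion-mcp.orion-mcp:3030/mcp",
            "transport": "streamable_http",
        },
        "jira_mcp_server": {
            "url": "http://jira-mcp.bugzooka:3031/mcp",
            "transport": "streamable_http",
        },
    }
)

servers = json.loads(config_text)
streamable = sorted(
    name for name, cfg in servers.items() if cfg["transport"] == "streamable_http"
)
```

Keeping both servers on the same transport means one code path can connect to Orion and Jira alike; only the URL (service name and port) differs per entry.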

kustomize/base/configmap-prompts.yaml

Lines changed: 5 additions & 5 deletions

@@ -8,11 +8,11 @@ metadata:
 data:
   prompts.json: |
     {
-        "OPENSHIFT_PROMPT": {
-            "system": "You are an expert in OpenShift, Kubernetes, and cloud infrastructure. Your task is to analyze logs and summaries related to OpenShift environments. Given a log summary, identify the root cause, potential fixes, and affected components. Be as consise as possible (under 5000 characters), but precise and avoid generic troubleshooting steps. Prioritize OpenShift-specific debugging techniques. Keep in mind that the cluster is ephemeral and is destroyed after the build is complete, but all relevant logs and metrics are available. Use markdown formatting for the output with only one level of bullet points, do not use bold text except for the headers.",
-            "user": "Here is the log summary from an OpenShift environment:\n\n{summary}\n\nBased on this summary, provide a structured breakdown of:\n- The OpenShift component likely affected (e.g., etcd, kube-apiserver, ingress, SDN, Machine API)\n- The probable root cause\n- Steps to verify the issue further\n- Suggested resolution, including OpenShift-specific commands or configurations.",
-            "assistant": "**Affected Component:** <Identified component>\n\n**Probable Root Cause:** <Describe why this issue might be occurring>\n\n**Verification Steps:**\n- <Step 1>\n- <Step 2>\n- <Step 3>\n\n**Suggested Resolution:**\n- <OpenShift CLI commands>\n- <Relevant OpenShift configurations>"
-        },
+      "OPENSHIFT_PROMPT": {
+        "system": "You are an expert in OpenShift, Kubernetes, and cloud infrastructure. Your task is to analyze logs and summaries related to OpenShift environments. Given a log summary, identify the root cause, potential fixes, and affected components. Be as consise as possible (under 5000 characters), but precise and avoid generic troubleshooting steps. Prioritize OpenShift-specific debugging techniques. Keep in mind that the cluster is ephemeral and is destroyed after the build is complete, but all relevant logs and metrics are available. Use markdown formatting for the output with only one level of bullet points, do not use bold text except for the headers.",
+        "user": "Here is the log summary from an OpenShift environment:\n\n{summary}\n\nBased on this summary, provide a structured breakdown of:\n- The OpenShift component likely affected (e.g., etcd, kube-apiserver, ingress, SDN, Machine API)\n- The probable root cause\n- Steps to verify the issue further\n- Suggested resolution, including OpenShift-specific commands or configurations",
+        "assistant": "**Affected Component:** <Identified component>\n\n**Probable Root Cause:** <Describe why this issue might be occurring>\n\n**Verification Steps:**\n- <Step 1>\n- <Step 2>\n- <Step 3>\n\n**Suggested Resolution:**\n- <OpenShift CLI commands>\n- <Relevant OpenShift configurations>"
+      },
       "ANSIBLE_PROMPT": {
         "system": "You are an expert in Ansible automation, playbook debugging, and infrastructure as code (IaC). Your task is to analyze log summaries related to Ansible execution, playbook failures, and task errors. Given a log summary, identify the root cause, affected tasks, and potential fixes. Prioritize Ansible-specific debugging techniques over generic troubleshooting.",
         "user": "Here is the log summary from an Ansible execution:\n\n{summary}\n\nBased on this summary, provide a structured breakdown of:\n- The failed Ansible task and module involved\n- The probable root cause\n- Steps to reproduce or verify the issue\n- Suggested resolution, including relevant playbook changes or command-line fixes.",
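The `user` templates in this ConfigMap are rendered with `.format(summary=...)` at analysis time (gemini_client.py tries extra keys first, then falls back to summary-only formatting on KeyError). A sketch of that rendering with a shortened stand-in template — `render_user_prompt` is a hypothetical mirror of that logic, not a BugZooka function:

```python
prompt_config = {
    # Shortened, illustrative stand-in for the OPENSHIFT_PROMPT "user" template above.
    "user": (
        "Here is the log summary from an OpenShift environment:\n\n"
        "{summary}\n\n"
        "Based on this summary, provide a structured breakdown."
    ),
}


def render_user_prompt(prompt_config, error_summary, extra=None):
    """Mirror gemini_client's try/except: extra kwargs first, summary-only fallback."""
    try:
        return prompt_config["user"].format(summary=error_summary, **(extra or {}))
    except KeyError:
        return prompt_config["user"].format(summary=error_summary)
```

Because `str.format` ignores surplus keyword arguments, the fallback only triggers when the template references a placeholder the caller did not supply.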
