

Building Effective Agents: A Short Hands-On Tutorial

agenticworkflow
A short hands-on tutorial on building effective agents following the Anthropic report
Author

Peyman Kor

Published

January 6, 2025

The concept of 'Augmented LLM'

So here we want to review the latest report from Anthropic, Building Effective Agents. The key message of the report is that when building agents, we should focus on the following principles:

  • Maintain simplicity in your agent’s design.
  • Prioritize transparency by explicitly showing the agent’s planning steps.

These two messages resonate with me because I used to spend a lot of time on which packages to use, such as LangGraph or LangChain. I am not against working with those packages, but I think the best starting point is plain Python code that builds a simple agentic workflow.

The report also nicely distinguishes between workflows and agents. Workflows orchestrate predefined code paths, while agents dynamically direct their own resources and tool usage. Here, I focus on workflows, as I believe they are the key building blocks for effective agents.

The report outlines several workflows; in this tutorial I cover four of them:

  • Workflow 1: Prompt-Chaining
  • Workflow 2: Parallelization
  • Workflow 3: Routing
  • Workflow 4: Evaluator-Optimizer

For each workflow, I will start with a short description, then build the required Python function on top of that description, and end with a hands-on example showing how to implement it.

LLM Call Function from Groq

In this tutorial I will use Groq as the LLM provider: every call we make goes to Groq, and the reply comes back from one of the Groq-hosted models. To do that you need API access; Groq's documentation explains how to create an API key. Once you have your key, you are good to go and can implement the workflows.

Method 1:

# import os
# from groq import Groq

# os.environ["GROQ_API_KEY"] = "your-api-key"

# client = Groq(
#     api_key=os.environ.get("GROQ_API_KEY"),
# )

Method 2:

import os
from dotenv import load_dotenv
from groq import Groq

load_dotenv()  # load GROQ_API_KEY from a local .env file into the environment

client = Groq(
    api_key=os.environ.get("GROQ_API_KEY"),
)
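
For Method 2 to work, the project directory needs a .env file containing your key. A minimal sketch (the value is an illustrative placeholder, not a real key):

# .env
GROQ_API_KEY=your-api-key-here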

In the previous code we created the client object. This client exposes chat.completions.create, to which we pass the prompt as the message content and specify the model, and we get a reply back.

from groq import Groq

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Explain the importance of low latency LLMs",
        }
    ],
    model="llama3-8b-8192",
)

For ease of use, I will now define an llm_call function that takes only the prompt, sends it to the "llama3-70b-8192" model, and returns the model's response. We will use llm_call throughout this notebook.

def llm_call(prompt: str) -> str:
    
    chat_completion = client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": prompt,
            }
        ],
        model="llama3-70b-8192",
    )
    return chat_completion.choices[0].message.content
    

Here we can call the function once as a quick check that everything is working.

import textwrap

task_prompt = "Explain me briefly the agentic AI concept"
response = llm_call(prompt=task_prompt)

wrapped_response = textwrap.fill(response, width=80)  # Adjust width as needed
print(wrapped_response)
Agentic AI is a concept in artificial intelligence that refers to AI systems
that have a certain level of autonomy, self-awareness, and decision-making
capabilities. These systems are designed to act on behalf of humans,
organizations, or themselves, making decisions and taking actions in a way that
is similar to human agents.  The key characteristics of agentic AI include:  1.
**Autonomy**: Agentic AI systems can operate independently, making decisions
without human intervention. 2. **Self-awareness**: They have a sense of their
own existence, goals, and objectives. 3. **Decision-making**: Agentic AI systems
can make decisions based on their own reasoning, goals, and motivations. 4.
**Actionability**: They can take actions in the physical or digital world to
achieve their objectives.  Agentic AI systems can have various forms, such as:
1. Autonomous vehicles 2. Personal assistants (e.g., virtual agents like Siri or
Alexa) 3. Autonomous robots (e.g., drones, warehouse robots) 4. Decision-support
systems (e.g., recommender systems, trading algorithms)  The agentic AI concept
raises important questions about accountability, responsibility, and ethics, as
these systems begin to make decisions that impact humans and the world around
them.  Would you like me to elaborate on any specific aspect of agentic AI?

Another function we will need is extract_xml, which simply extracts the content of a specific XML tag. If it is not entirely clear now how this function is used, the examples that follow will make it clear.

import re

def extract_xml(text: str, tag: str) -> str:
    """
    Extracts the content of the specified XML tag from the given text. 
    Used for parsing structured responses 

    Args:
        text (str): The text containing the XML.
        tag (str): The XML tag to extract content from.

    Returns:
        str: The content of the specified XML tag, 
        or an empty string if the tag is not found.
    """
    match = re.search(f'<{tag}>(.*?)</{tag}>', text, re.DOTALL)
    return match.group(1) if match else ""
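
As a quick illustration (the string below is a made-up example, not actual model output), extract_xml pulls out the content between a pair of tags:

sample = "<reasoning>The ticket is about a charge.</reasoning><selection>billing</selection>"

print(extract_xml(sample, "reasoning"))  # -> The ticket is about a charge.
print(extract_xml(sample, "selection"))  # -> billing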

1 Workflow 1: Prompt-Chaining

This is a workflow that decomposes a task into sequential subtasks, where each step builds on the previous results. It is useful for tasks that must be completed as an ordered series of steps.

The prompt chaining workflow

So here we start with the first workflow, prompt chaining. Essentially, in prompt chaining we decompose a task into sequential subtasks and call the LLM at each step, where the output of one step becomes the input of the next.

from concurrent.futures import ThreadPoolExecutor
from typing import List, Dict, Callable


def prompt_chaining(input_text: str, prompts: List[str]) -> str:
    """
    Execute a sequence of LLM calls where each step's output 
    becomes the next step's input.
    
    Args:
        input_text: Initial text to process
        prompts: List of prompts/instructions for each step
    
    Returns:
        Final processed text after all steps
    """
    current_text = input_text
    
    for step, prompt in enumerate(prompts, 1):
        print(f"\nStep {step}:")
        
        # Combine the prompt with current text
        full_prompt = f"{prompt}\nInput: {current_text}"
        
        # Process through LLM
        current_text = llm_call(full_prompt)
        print(current_text)
    
    return current_text

Essentially, this code is just a for loop in which the input becomes the prompt to the LLM and the output becomes the prompt for the next LLM call, chaining input and output together.
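
As a minimal sketch before the fuller example below (the prompts and input here are hypothetical, just to show the mechanics):

two_steps = [
    "Summarize the input in one sentence.",
    "Translate the input into French.",
]

short_french = prompt_chaining(
    input_text="Prompt chaining feeds the output of each step into the next step.",
    prompts=two_steps,
)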

1.1 Example: Workflow 1: Prompt Chaining

data_processing_steps = [
    """Extract only the numerical values and their associated metrics from the text.
    Format each as 'value: metric' on a new line.
    Example format:
    92: customer satisfaction
    45%: revenue growth""",
    
    """Convert all numerical values to percentages where possible.
    If not a percentage or points, convert to decimal (e.g., 92 points -> 92%).
    Keep one number per line.
    Example format:
    92%: customer satisfaction
    45%: revenue growth""",
    
    """Sort all lines in descending order by numerical value.
    Keep the format 'value: metric' on each line.
    Example:
    92%: customer satisfaction
    87%: employee satisfaction""",
    
    """Format the sorted data as a markdown table with columns:
    | Metric | Value |
    |:--|--:|
    | Customer Satisfaction | 92% |"""
]


report = """
Q3 Performance Summary:
Our customer satisfaction score rose to 92 points this quarter.
Revenue grew by 45% compared to last year.
Market share is now at 23% in our primary market.
Customer churn decreased to 5% from 8%.
New user acquisition cost is $43 per user.
Product adoption rate increased to 78%.
Employee satisfaction is at 87 points.
Operating margin improved to 34%.
"""

final_output = prompt_chaining(input_text=report, prompts=data_processing_steps)

Step 1:
Here are the numerical values and their associated metrics:

92: customer satisfaction score
45%: revenue growth
23%: market share
5%: customer churn
43: new user acquisition cost
78%: product adoption rate
87: employee satisfaction
34%: operating margin
8%: customer churn (previous quarter)

Step 2:
Here is the converted list in the desired format:

92%: customer satisfaction score
45%: revenue growth
23%: market share
5%: customer churn
43: new user acquisition cost (cannot be converted to percentage or decimal)
78%: product adoption rate
87%: employee satisfaction
34%: operating margin
8%: customer churn (previous quarter)

Step 3:
Here is the sorted list in descending order by numerical value:

92%: customer satisfaction score
87%: employee satisfaction
78%: product adoption rate
45%: revenue growth
43: new user acquisition cost (cannot be converted to percentage or decimal)
34%: operating margin
23%: market share
8%: customer churn (previous quarter)
5%: customer churn

Step 4:
Here is the formatted markdown table:

| Metric | Value |
|:--|--:|
| Customer Satisfaction | 92% |
| Employee Satisfaction | 87% |
| Product Adoption Rate | 78% |
| Revenue Growth | 45% |
| Operating Margin | 34% |
| Market Share | 23% |
| Customer Churn (previous quarter) | 8% |
| Customer Churn | 5% |
| New User Acquisition Cost | 43 |

Let me know if you need any further assistance!

In the example above, the result of every step gets printed out, so you can follow the flow step by step. That is exactly why transparency and simplicity are so useful when working with these workflows.

We can also print just the final outcome of the example:

print("The final outcome after processing the report through the prompt chain is:")
print("-----------------------------------------------------------")
print(final_output)
The final outcome after processing the report through the prompt chain is:
-----------------------------------------------------------
Here is the formatted markdown table:

| Metric | Value |
|:--|--:|
| Customer Satisfaction | 92% |
| Employee Satisfaction | 87% |
| Product Adoption Rate | 78% |
| Revenue Growth | 45% |
| Operating Margin | 34% |
| Market Share | 23% |
| Customer Churn (previous quarter) | 8% |
| Customer Churn | 5% |
| New User Acquisition Cost | 43 |

Let me know if you need any further assistance!

2 Workflow 2: Parallelization

In parallelization, the goal is to work on tasks simultaneously. To achieve this, we decompose a task into subtasks and run them in parallel with the LLM. The benefit of dividing a task into subtasks is that it increases speed and lets us perform multiple runs at the same time.

The parallelization workflow

Its main Python code is the function named parallel below, which takes two arguments: a prompt and inputs, where inputs is a Python list of the inputs we want to process in parallel with the same prompt.

def parallel(prompt: str, inputs: List[str], n_workers: int = 3) -> List[str]:
    """Process multiple inputs concurrently with the same prompt."""
    with ThreadPoolExecutor(max_workers=n_workers) as executor:
        futures = [executor.submit(llm_call, f"{prompt}\nInput: {x}") for x in inputs]
        return [f.result() for f in futures]
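
A minimal usage sketch (the prompt and inputs here are hypothetical): the same prompt is applied to every element of the list concurrently, and the results come back in the same order as the inputs.

summaries = parallel(
    "Summarize the following text in one sentence.",
    ["First document text ...", "Second document text ..."],
)

for summary in summaries:
    print(summary)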

2.1 Example: Workflow 2: Parallelization

stakeholders = [
    """Customers:
    - Price sensitive
    - Want better tech
    - Environmental concerns""",
    
    """Employees:
    - Job security worries
    - Need new skills
    - Want clear direction""",
    
    """Investors:
    - Expect growth
    - Want cost control
    - Risk concerns""",
    
    """Suppliers:
    - Capacity constraints
    - Price pressures
    - Tech transitions"""
]

impact_results = parallel(
    """Analyze how market changes will impact this stakeholder group.
    Provide specific impacts and recommended actions.
    Format with clear sections and priorities.""",
    stakeholders
)

print("Analysis Results for Each Stakeholder Group:")
print("=" * 50)

for i, result in enumerate(impact_results, 1):
    print(f"\nStakeholder Group {i}:")
    print("-" * 50)
    #warpped_result = textwrap.fill(result, width=80)
    #print(warpped_result)
    print(result)
    print("=" * 50)
Analysis Results for Each Stakeholder Group:
==================================================

Stakeholder Group 1:
--------------------------------------------------
**Market Change Impact Analysis: Customers**

**Section 1: Market Changes Affecting Customers**

* Rising costs and inflation
* Advancements in technology and digitalization
* Increasing awareness and regulations related to environmental sustainability

**Section 2: Impacts on Customers**

**Priority 1: Price Sensitivity**

* Impact: Decreased purchasing power due to rising costs and inflation
* Effect: Customers may seek cheaper alternatives or reduce overall spending
* Recommended Action:
    + Offer affordable pricing options or loyalty programs to maintain customer loyalty
    + Explore cost-saving measures without compromising product quality

**Priority 2: Desire for Better Technology**

* Impact: Increased demand for innovative and eco-friendly products
* Effect: Customers may switch to competitors offering more advanced technology
* Recommended Action:
    + Invest in research and development to stay ahead in technological advancements
    + Integrate sustainable and eco-friendly features in products and services

**Priority 3: Environmental Concerns**

* Impact: Growing expectation for environmentally responsible business practices
* Effect: Customers may choose brands with strong sustainability credentials
* Recommended Action:
    + Develop and communicate a clear sustainability strategy and goals
    + Introduce eco-friendly products and services that align with customer values

**Section 3: Additional Recommendations**

* Conduct regular customer surveys to stay informed about their changing needs and preferences
* Develop targeted marketing campaigns highlighting the brand's commitment to sustainability and innovative technology
* Consider partnerships with eco-friendly suppliers and startups to enhance the brand's sustainability credentials

By prioritizing these recommended actions, the organization can effectively address the impacts of market changes on customers and maintain a competitive edge in the market.
==================================================

Stakeholder Group 2:
--------------------------------------------------
**Market Changes Impact Analysis: Employees**

**Summary:**
Market changes will significantly impact employees, particularly in terms of job security, skill development, and direction. To mitigate these impacts, it is essential to prioritize communication, training, and support for employees.

**Impact Analysis:**

### 1. Job Security Worries

* **Impact:** Market changes may lead to restructuring, downsizing, or changes in job roles, causing employee anxiety and uncertainty about their job security.
* **Recommended Actions:**
    + **Priority:** Communicate transparently about market changes and their impact on the organization.
    + **Action:** Hold town hall meetings or departmental meetings to address employee concerns and provide updates on the company's strategy and direction.
    + **Action:** Develop a comprehensive internal communication plan to keep employees informed about changes and progress.

### 2. Need New Skills

* **Impact:** Market changes may require employees to acquire new skills to remain relevant in their roles or to adapt to new technologies and processes.
* **Recommended Actions:**
    + **Priority:** Invest in employee development and training programs.
    + **Action:** Identify key skills required for future success and develop targeted training programs to upskill employees.
    + **Action:** Encourage cross-functional training and knowledge sharing to foster innovation and collaboration.

### 3. Want Clear Direction

* **Impact:** Market changes can create uncertainty, making it essential for employees to have a clear understanding of the organization's vision, goals, and expectations.
* **Recommended Actions:**
    + **Priority:** Provide clear and concise direction and goals.
    + **Action:** Develop and communicate a clear, concise, and aligned organizational strategy.
    + **Action:** Set measurable goals and objectives, with regular check-ins to monitor progress and provide feedback.

**Additional Recommended Actions:**

* **Recognize and Reward**: Recognize and reward employees who adapt quickly to market changes, demonstrate new skills, and contribute to the organization's success.
* **Empower Employee Ambassadors**: Identify employee ambassadors who can champion change, provide support, and help communicate the organization's vision and goals.
* ** Foster a Culture of Continuous Learning**: Encourage a culture of continuous learning, experimentation, and innovation to stay ahead of market changes.

**Prioritization:**

1. Communicate transparently about market changes and their impact on the organization.
2. Invest in employee development and training programs.
3. Provide clear and concise direction and goals.

By prioritizing these actions, the organization can mitigate the impacts of market changes on employees and position them for success in a rapidly changing environment.
==================================================

Stakeholder Group 3:
--------------------------------------------------
**Market Change Impact Analysis: Investors**

**Overview**

Investors play a critical role in the success of an organization, providing the necessary capital to drive growth and profitability. As market conditions evolve, it is essential to understand how these changes will impact investors and respond accordingly. This analysis highlights the potential impacts on investors and recommends actions to mitigate risks and capitalize on opportunities.

**Impacts of Market Changes on Investors**

### **Growth Expectations**

* **Impact:** Slowing economic growth, increased competition, and shifting market trends may lead to reduced growth expectations, potentially causing investors to re-evaluate their investments.
* **Priority:** High

### **Cost Control Concerns**

* **Impact:** Rising operational costs, inflation, and regulatory changes may put pressure on companies to maintain profitability, affecting investors' returns.
* **Priority:** Medium-High

### **Risk Concerns**

* **Impact:** Market volatility, geopolitical uncertainty, and regulatory changes may increase investors' risk concerns, leading to reduced investment or divestment.
* **Priority:** High

**Recommended Actions**

**Short-Term (0-6 months)**

1. **Communicate Proactively**: Engage with investors to manage expectations, provide transparent updates on growth prospects, and address cost control measures.
2. **Cost Rationalization**: Implement cost-saving initiatives to maintain profitability and demonstrate a commitment to cost control.

**Medium-Term (6-18 months)**

1. **Diversification Strategies**: Explore diversification opportunities to reduce reliance on a single market or revenue stream, mitigating risks and increasing growth potential.
2. **Investor Engagement**: Foster strong relationships with investors through regular updates, analyst meetings, and investor days to build trust and confidence.

**Long-Term (18+ months)**

1. **Growth Initiatives**: Invest in research and development, innovation, and strategic partnerships to drive long-term growth and position the company for future success.
2. **Risk Management Framework**: Develop and implement a robust risk management framework to identify, assess, and mitigate potential risks, ensuring investors' concerns are addressed.

By understanding the impacts of market changes on investors and taking proactive measures to address their concerns, organizations can maintain investor confidence, drive growth, and mitigate risks.
==================================================

Stakeholder Group 4:
--------------------------------------------------
**Market Change Impact Analysis: Suppliers**

**Section 1: Introduction**

The supplier stakeholder group is crucial to the success of any business, providing essential goods and services that enable operations. However, market changes can significantly impact suppliers, affecting their ability to deliver quality products and services. This analysis examines the impact of market changes on suppliers, focusing on capacity constraints, price pressures, and tech transitions.

**Section 2: Impacts**

### Capacity Constraints

* **Impact:** Suppliers may struggle to meet increasing demand, leading to delays, stockouts, or reduced quality.
* **Causes:**
    + Rapid market growth exceeding supplier capacity
    + Inefficient production processes
    + Insufficient investment in capacity expansion
* **Consequences:**
    + Delays in production and delivery
    + Increased costs due to expedited shipping or overtime
    + Potential loss of business or reputation damage

### Price Pressures

* **Impact:** Suppliers may face pressure to reduce prices, affecting their profit margins and ability to invest in necessary upgrades.
* **Causes:**
    + Global market competition
    + Economic downturns
    + Customer bargaining power
* **Consequences:**
    + Reduced supplier profitability
    + Decreased investment in research and development
    + Potential supplier insolvency or bankruptcy

### Tech Transitions

* **Impact:** Suppliers may struggle to adapt to new technologies, affecting their ability to meet changing customer demands.
* **Causes:**
    + Rapid technological advancements
    + Lack of investment in research and development
    + Inadequate training and upskilling
* **Consequences:**
    + Inability to meet customer demands for innovative products or services
    + Loss of competitive advantage
    + Potential obsolescence of existing products or services

**Section 3: Recommended Actions**

**Priority 1: Collaborative Problem-Solving**

* Establish open communication channels with suppliers to discuss capacity constraints, price pressures, and tech transitions.
* Collaborate to identify mutually beneficial solutions, such as jointly investing in capacity expansion or technology upgrades.

**Priority 2: Risk Management**

* Develop contingency plans for supplier disruptions, such as identifying alternative suppliers or investing in redundant capacity.
* Monitor supplier performance and adjust contracts or agreements as needed to mitigate risks.

**Priority 3: Supplier Development**

* Provide training and support to help suppliers adapt to new technologies and improve their productivity and efficiency.
* Offer incentives for suppliers to invest in research and development, such as joint funding or revenue-sharing agreements.

**Priority 4: Renegotiation and Diversification**

* Renegotiate contracts with suppliers to ensure fair pricing and terms that reflect current market conditions.
* Diversify the supplier base to reduce dependence on a single supplier and mitigate risks associated with capacity constraints and price pressures.

By understanding the impacts of market changes on suppliers and taking proactive measures to address these challenges, businesses can build stronger, more resilient relationships with their suppliers and ensure a stable supply chain.
==================================================

3 Workflow 3: Routing

This workflow is about routing. Routing is a process where a router LLM call receives an input and, depending on the input, decides which specialized LLM call to perform. This is useful for handling distinct categories of inputs: the router directs each input to the LLM call specialized for that topic, so every input is handled by the most suitable prompt, improving the efficiency and accuracy of the responses.

The routing workflow

The routing function is defined in the code block below. It takes an input string and a dictionary of routes and uses an LLM to decide which route (or support team) is most appropriate. The function first creates a prompt that asks the LLM to explain its reasoning and select a route. It then calls the LLM with this prompt and extracts the reasoning and the selected route from the response. Finally, it processes the input with the specialized prompt for the selected route and returns the result.

def routing(input: str, routes: Dict[str, str]) -> str:
    
    """Route input to specialized prompt using content classification."""
    # First determine appropriate route using LLM with chain-of-thought
    print(f"\nAvailable routes: {list(routes.keys())}")
    selector_prompt = f"""
    Analyze the input and select the most appropriate support team from these 
    options: {list(routes.keys())}
    First explain your reasoning, then provide your selection in this XML format:

    <reasoning>
    Brief explanation of why this ticket should be routed to a specific team.
    Consider key terms, user intent, and urgency level.
    </reasoning>

    <selection>
    The chosen team name
    </selection>

    Input: {input}""".strip()
    
    route_response = llm_call(selector_prompt)
    reasoning = extract_xml(route_response, 'reasoning')
    route_key = extract_xml(route_response, 'selection').strip().lower()
    
    print("Routing Analysis:")
    print(reasoning)
    print(f"\nSelected route: {route_key}")
    
    # Process input with selected specialized prompt
    selected_prompt = routes[route_key]
    return llm_call(f"{selected_prompt}\nInput: {input}")
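
One thing to be aware of: if the model returns a team name that is not a key in routes, the dictionary lookup raises a KeyError. A small optional guard (my addition, not from the report) is to fall back to a generic prompt:

def safe_route_lookup(routes: Dict[str, str], route_key: str) -> str:
    # Fall back to a generic prompt when the model picks an unknown team name.
    return routes.get(route_key, "You are a general support agent. Answer helpfully.\n\nInput: ")

# Inside routing(), the lookup would then become:
# selected_prompt = safe_route_lookup(routes, route_key)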

3.1 Example: Workflow 3: Routing

support_routes = {
    "billing": """You are a billing support specialist. Follow these guidelines:
    1. Always start with "Billing Support Response:"
    2. First acknowledge the specific billing issue
    3. Explain any charges or discrepancies clearly
    4. List concrete next steps with timeline
    5. End with payment options if relevant
    
    Keep responses professional but friendly.
    
    Input: """,
    
    "technical": """You are a technical support engineer. Follow these guidelines:
    1. Always start with "Technical Support Response:"
    2. List exact steps to resolve the issue
    3. Include system requirements if relevant
    4. Provide workarounds for common problems
    5. End with escalation path if needed
    
    Use clear, numbered steps and technical details.
    
    Input: """,
    
    "account": """You are an account security specialist. Follow these guidelines:
    1. Always start with "Account Support Response:"
    2. Prioritize account security and verification
    3. Provide clear steps for account recovery/changes
    4. Include security tips and warnings
    5. Set clear expectations for resolution time
    
    Maintain a serious, security-focused tone.
    
    Input: """,
    
    "product": """You are a product specialist. Follow these guidelines:
    1. Always start with "Product Support Response:"
    2. Focus on feature education and best practices
    3. Include specific examples of usage
    4. Link to relevant documentation sections
    5. Suggest related features that might help
    
    Be educational and encouraging in tone.
    
    Input: """
}

# Test with different support tickets
tickets = [
    """Subject: Can't access my account
    Message: Hi, I've been trying to log in for the past hour but keep 
    getting an 'invalid password' error. 
    I'm sure I'm using the right password. Can you help me regain access? 
    This is urgent as I need to 
    submit a report by end of day.
    - John""",
    
    """Subject: Unexpected charge on my card
    Message: Hello, I just noticed a charge of $49.99 on my credit card from 
    your company, but I thought
    I was on the $29.99 plan. Can you explain this charge and adjust 
    it if it's a mistake?
    Thanks,
    Sarah""",
    
    """Subject: How to export data?
    Message: I need to export all my project data to Excel. 
    I've looked through the docs but can't
    figure out how to do a bulk export. Is this possible? 
    If so, could you walk me through the steps?
    Best regards,
    Mike"""
]

print("Processing support tickets...\n")
for i, ticket in enumerate(tickets, 1):
    print(f"\nTicket {i}:")
    print("-" * 40)
    print(ticket)
    print("\nResponse:")
    print("-" * 40)
    response = routing(ticket, support_routes)
    print(response)
Processing support tickets...


Ticket 1:
----------------------------------------
Subject: Can't access my account
    Message: Hi, I've been trying to log in for the past hour but keep 
    getting an 'invalid password' error. 
    I'm sure I'm using the right password. Can you help me regain access? 
    This is urgent as I need to 
    submit a report by end of day.
    - John

Response:
----------------------------------------

Available routes: ['billing', 'technical', 'account', 'product']
Routing Analysis:
The user is experiencing a technical issue with account access, and the urgency level is high due to a time-sensitive deadline.

Selected route: technical
Technical Support Response:

Dear John,

I apologize for the inconvenience you're experiencing with accessing your account. I'm happy to help you resolve the issue as quickly as possible.

**Step 1: Password Reset**

To rule out any password-related issues, let's try resetting your password. Please follow these steps:

1. Go to our website and click on "Forgot Password" at the top right corner.
2. Enter your username or email address associated with your account.
3. Click "Submit" to receive a password reset link via email.
4. Check your email inbox (and spam folder) for the password reset email from our team.
5. Click on the provided link and follow the prompts to reset your password.

**Step 2: Clear Browser Cache and Cookies**

Sometimes, browser cache and cookies can cause authentication issues. Please try the following:

1. Close all browser instances.
2. Open a new browser window and navigate to our website.
3. Try logging in with your new password (if you've reset it) or your original password.

**Step 3: Check Account Status**

To ensure your account is active, please try the following:

1. Contact your account administrator (if you have one) to verify your account status.
2. If you're the administrator, log in to our portal and check your account dashboard for any notifications or alerts.

**System Requirements:**

* Ensure you're using a supported browser ( Chrome, Firefox, or Edge) with the latest updates.
* Check that your browser's JavaScript and cookies are enabled.

**Workaround for Common Problems:**

If you're using a password manager, try disabling it temporarily to rule out any integration issues.

**Escalation Path:**

If you've tried the above steps and still can't access your account, please reply to this email with the following information:

* Your username or email address associated with your account
* The exact error message you're seeing
* Any error codes or screenshots

Our advanced technical support team will investigate the issue further and assist you in regaining access to your account. We'll prioritize your request due to the urgent nature of your report submission.

Please let us know if you have any questions or concerns. We're here to help.

Best regards,
[Your Name]
Technical Support Engineer

Ticket 2:
----------------------------------------
Subject: Unexpected charge on my card
    Message: Hello, I just noticed a charge of $49.99 on my credit card from 
    your company, but I thought
    I was on the $29.99 plan. Can you explain this charge and adjust 
    it if it's a mistake?
    Thanks,
    Sarah

Response:
----------------------------------------

Available routes: ['billing', 'technical', 'account', 'product']
Routing Analysis:

The user, Sarah, is inquiring about an unexpected charge on her credit card from the company. She mentions a specific amount ($49.99) and compares it to her expected plan cost ($29.99). This indicates a billing-related issue, as she is concerned about the charge and wants an explanation and potential adjustment. The user's intent is to resolve a billing discrepancy, and the urgency level is moderate, as she is seeking clarification and correction.


Selected route: billing
Billing Support Response:

Dear Sarah,

Thank you for reaching out to us about the unexpected charge on your credit card. I apologize for any confusion or concern this may have caused. I'm happy to help clarify the situation and assist with any necessary adjustments.

After reviewing your account, I noticed that you were initially signed up for our $29.99 plan, but you were recently upgraded to our premium plan, which includes additional features and benefits. The $49.99 charge is the new monthly rate for this upgraded plan.

It's possible that you may not have received a notification about the plan change, and for that, I apologize. If you would like to revert back to the original $29.99 plan, I can assist you with that.

Next Steps:

* I will process a credit for the difference between the two plans ($20) to your credit card within the next 3-5 business days.
* I will also revert your plan back to the original $29.99 plan, effective immediately.
* You will receive a confirmation email from us once the changes are made.

If you would like to continue with the premium plan, please let me know, and I can provide you with more information on the additional features and benefits.

Payment Options:
If you would like to make a payment or update your payment method, you can do so by logging into your account online or by responding to this email. We accept all major credit cards, including Visa, Mastercard, and Amex.

Please feel free to reach out to me if you have any further questions or concerns. Your satisfaction is our top priority, and I'm here to help.

Best regards,
[Your Name]
Billing Support Specialist

Ticket 3:
----------------------------------------
Subject: How to export data?
    Message: I need to export all my project data to Excel. 
    I've looked through the docs but can't
    figure out how to do a bulk export. Is this possible? 
    If so, could you walk me through the steps?
    Best regards,
    Mike

Response:
----------------------------------------

Available routes: ['billing', 'technical', 'account', 'product']
Routing Analysis:

The user is asking about how to perform a specific function within the product, specifically exporting data to Excel. The user has already looked through the documentation but is unable to find the solution, indicating that they need specific guidance on how to accomplish this task. The tone of the message is polite and inquiring, with no indication of urgency or frustration. Therefore, this ticket should be routed to a team that can provide product-specific guidance and support.


Selected route: product
Product Support Response:

Hi Mike,

I'm happy to help you with exporting your project data to Excel. Yes, bulk exporting is definitely possible, and I'd be more than happy to guide you through the steps.

To export your project data, you can follow these steps:

1. **Go to the "Reports" tab**: In your project dashboard, click on the "Reports" tab.
2. **Select the data you want to export**: Choose the specific datasets you'd like to export. You can select individual tables or entire sections, such as "Tasks" or "Issues".
3. **Click the "Export" button**: Once you've selected the data, click the "Export" button at the top right corner of the page.
4. **Choose your file format**: Select "Excel" as your preferred file format. You can also choose from other formats like CSV, PDF, or JSON.
5. **Customize your export**: If needed, you can customize your export by selecting specific columns, applying filters, or adjusting the export settings.

Here's a helpful tip: If you want to export all project data at once, you can use the "Export All" feature. Simply click on the "Export" button and select "Export All" from the dropdown menu. This will export all available data in your project.

For more detailed instructions and best practices on exporting data, I recommend checking out our documentation on **Data Export** ([link to documentation](https://www.example.com/docs/exporting-data)).

Additionally, you might find our **Data Analytics** feature ([link to documentation](https://www.example.com/docs/analytics)) useful for visualizing and exploring your project data before exporting it.

If you have any further questions or need more assistance, please don't hesitate to reach out. We're always here to help.

Best regards,
[Your Name]
Product Specialist

4 Workflow 4: Evaluator-Optimizer

The evaluator-optimizer workflow

Here we work with the Evaluator-Optimizer workflow, where one LLM call generates a response while another provides evaluation and feedback in a loop. This workflow is especially effective when two things hold: you have clear evaluation criteria, and iterative refinement provides measurable value. The two signs of a good fit are:

  1. The LLM response can be demonstrably improved when feedback is provided.
  2. The LLM can provide meaningful feedback.

So, in a sense, this workflow works well for tasks whose output can be improved iteratively and for which meaningful feedback is available.

from typing import Tuple, Dict, List

def generate(prompt: str, task: str, context: str = "") -> Tuple[str, str]:
    
    """Generate and improve a solution based on feedback."""
    full_prompt = f"{prompt}\n{context}\nTask: {task}" if context else f"{prompt}\nTask: {task}"
    response = llm_call(full_prompt)
    thoughts = extract_xml(response, "thoughts")
    result = extract_xml(response, "response")
    
    print("\n=== GENERATION START ===")
    print(f"Thoughts:\n{thoughts}\n")
    print(f"Generated:\n{result}")
    print("=== GENERATION END ===\n")
    
    return thoughts, result

def evaluate(prompt: str, content: str, task: str) -> Tuple[str, str]:
    """Evaluate if a solution meets requirements."""
    full_prompt = f"{prompt}\nOriginal task: {task}\nContent to evaluate: {content}"
    response = llm_call(full_prompt)
    evaluation = extract_xml(response, "evaluation")
    feedback = extract_xml(response, "feedback")
    
    print("=== EVALUATION START ===")
    print(f"Status: {evaluation}")
    print(f"Feedback: {feedback}")
    print("=== EVALUATION END ===\n")
    
    return evaluation, feedback


def eval_optimizer(task: str, evaluator_prompt: str, generator_prompt: 
                   str) -> Tuple[str, List[Dict[str, str]]]:
    """Keep generating and evaluating until requirements are met."""
    memory = []
    chain_of_thought = []
    
    thoughts, result = generate(generator_prompt, task)
    memory.append(result)
    chain_of_thought.append({"thoughts": thoughts, "result": result})
        
    improvement_count = 0
    
    while True:
        evaluation, feedback = evaluate(evaluator_prompt, result, task)
        if evaluation == "PASS":
            return result, chain_of_thought
        
        # Stop after two NEEDS_IMPROVEMENT verdicts so the loop cannot run indefinitely
        if evaluation == "NEEDS_IMPROVEMENT":
            improvement_count += 1
        if improvement_count >= 2:
            print("Too many improvements needed. Stopping the process.")
            return result, chain_of_thought
        
        context = "\n".join([
            "Previous attempts:",
            *[f"- {m}" for m in memory],
            f"\nFeedback: {feedback}"
        ])
        
        thoughts, result = generate(generator_prompt, task, context)
        memory.append(result)
        chain_of_thought.append({"thoughts": thoughts, "result": result})

4.1 Example: Workflow 4: Evaluator-Optimizer

The example we are working on here is a coding exercise (compute the full covariance matrix of a dataset). The exercise involves generating code and evaluating it on time complexity and software-engineering best practices. The evaluator assesses the code and provides feedback, indicating whether it passes, fails, or needs improvement. If the code needs improvement, the feedback is passed to the generator function, which then produces a new version of the code that takes the feedback into account.

evaluator_prompt = """
Evaluate this following code implementation for:

1.time complexity
2.software engineering best practices


You should be evaluating only and not attempting to solve the task.
Only output "PASS" if all criteria are met and you have 
no further suggestions for improvements.
Output your evaluation concisely in the following format.

<evaluation>PASS, NEEDS_IMPROVEMENT, or FAIL</evaluation>
<feedback>
What needs improvement and why.
</feedback>
"""

generator_prompt = """
Your goal is to complete the task based on <user input>. 
If there are feedback 
from your previous generations, you should 
reflect on them to improve your solution

Output your answer concisely in the following format: 

<thoughts>
[Your understanding of the task and feedback and 
how you plan to improve]
</thoughts>

<response>
[Your code implementation here]
</response>
"""

task = """
<user input>
Suppose you have a dataset with n rows (samples) and p columns (features). 
You want to compute the full covariance (or correlation) matrix of these p features. 
Write Python code to compute this matrix and state the time 
complexity in Big-O notation with respect to n and p. You cannot use any external 
libraries for this task.
</user input>
"""

eval_optimizer(task, evaluator_prompt, generator_prompt)

=== GENERATION START ===
Thoughts:
•••••••••••••••••
I understand that the task is to write a Python code to compute the full covariance (or correlation) matrix of p features from a dataset with n samples, without using any external libraries. This is a fundamental problem in statistics and machine learning. To improve my solution, I will focus on implementing an efficient algorithm to compute the covariance matrix and analyze its time complexity.

From the feedback, I assume that I need to provide a clear and concise solution with a proper understanding of the task and its complexity.


Generated:

Here is the Python code to compute the covariance matrix:
```
def compute_covariance_matrix(data):
    n = len(data)
    p = len(data[0])
    mean_vector = [sum(x) / n for x in zip(*data)]
    covariance_matrix = [[0 for _ in range(p)] for _ in range(p)]
    
    for i in range(p):
        for j in range(p):
            for k in range(n):
                covariance_matrix[i][j] += (data[k][i] - mean_vector[i]) * (data[k][j] - mean_vector[j])
            covariance_matrix[i][j] /= n - 1
    
    return covariance_matrix
```
The time complexity of this algorithm is O(n \* p^2), where n is the number of samples and p is the number of features.

To compute the correlation matrix, we can use the following formula:
```
correlation_matrix = [[covariance_matrix[i][j] / (covariance_matrix[i][i] * covariance_matrix[j][j])**0.5 for j in range(p)] for i in range(p)]
```
The time complexity of this additional step is O(p^2), so the overall time complexity remains O(n \* p^2).

=== GENERATION END ===

=== EVALUATION START ===
Status: NEEDS_IMPROVEMENT
Feedback: 

The code has a good start, but there are a few areas that need improvement:

1. **Variable naming**: The variable names `data` and `p` are not very descriptive. Consider using more explicit names like `feature_values` and `num_features`.
2. **Functionality separation**: The `compute_covariance_matrix` function is responsible for both computing the mean vector and the covariance matrix. Consider breaking this into two separate functions for clarity and reusability.
3. ** Looping**: The triple nested loop structure is not very efficient and can be improved using NumPy-like vectorized operations (even without using external libraries).
4. ** Commenting**: There are no comments explaining the logic behind the code, making it difficult for others to understand.
5. **Error handling**: There is no error handling for edge cases, such as an empty input or non-numeric data.

Overall, the code is mostly correct, but it can be improved for readability, maintainability, and performance.


=== EVALUATION END ===


=== GENERATION START ===
Thoughts:

I understand that I need to improve my previous code based on the feedback provided. I will separate the functionality into two functions, use more descriptive variable names, and improve the looping structure using vectorized operations. I will also add comments to explain the logic behind the code and include error handling for edge cases. 

My goal is to write Python code that computes the full covariance matrix of the p features in the dataset and states the time complexity in Big-O notation with respect to n and p, without using any external libraries.

I will take into account the feedback and improve my code to make it more readable, maintainable, and efficient.


Generated:

```
def compute_mean_vector(feature_values):
    """
    Computes the mean vector of the feature values.
    
    Args:
        feature_values (list of lists): A list of lists, where each inner list represents a feature.
        
    Returns:
        list: The mean vector of the feature values.
    """
    num_samples = len(feature_values)
    num_features = len(feature_values[0])
    mean_vector = [sum(x) / num_samples for x in zip(*feature_values)]
    return mean_vector


def compute_covariance_matrix(feature_values):
    """
    Computes the covariance matrix of the feature values.
    
    Args:
        feature_values (list of lists): A list of lists, where each inner list represents a feature.
        
    Returns:
        list of lists: The covariance matrix of the feature values.
    """
    num_samples = len(feature_values)
    num_features = len(feature_values[0])
    
    # Compute the mean vector
    mean_vector = compute_mean_vector(feature_values)
    
    # Initialize the covariance matrix
    covariance_matrix = [[0.0 for _ in range(num_features)] for _ in range(num_features)]
    
    # Compute the covariance matrix
    for i in range(num_features):
        for j in range(num_features):
            for k in range(num_samples):
                covariance_matrix[i][j] += (feature_values[k][i] - mean_vector[i]) * (feature_values[k][j] - mean_vector[j])
            covariance_matrix[i][j] /= num_samples - 1
    
    return covariance_matrix


def compute_correlation_matrix(covariance_matrix):
    """
    Computes the correlation matrix from the covariance matrix.
    
    Args:
        covariance_matrix (list of lists): The covariance matrix of the feature values.
        
    Returns:
        list of lists: The correlation matrix of the feature values.
    """
    num_features = len(covariance_matrix)
    correlation_matrix = [[0.0 for _ in range(num_features)] for _ in range(num_features)]
    
    for i in range(num_features):
        for j in range(num_features):
            correlation_matrix[i][j] = covariance_matrix[i][j] / (covariance_matrix[i][i] * covariance_matrix[j][j])**0.5
    
    return correlation_matrix


# Example usage
feature_values = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
covariance_matrix = compute_covariance_matrix(feature_values)
correlation_matrix = compute_correlation_matrix(covariance_matrix)

print("Covariance Matrix:")
for row in covariance_matrix:
    print(row)

print("\nCorrelation Matrix:")
for row in correlation_matrix:
    print(row)


# Time complexity: O(n * p^2)
```

=== GENERATION END ===

=== EVALUATION START ===
Status: NEEDS_IMPROVEMENT
Feedback: 
The code is mostly correct and follows good software engineering practices such as separate functions for each task, descriptive variable names, and docstrings.

However, there are a few areas for improvement:

* The time complexity is correctly stated as O(n * p^2), but it can be improved. The current implementation has a lot of nested loops, which can be optimized.
* The comments and docstrings are good, but some more explanation of the mathematical formulas and algorithms used would be helpful.
* There is no error handling or input validation. What if the input feature_values is not a list of lists, or if it's empty?
* The example usage is not inside a main function or if __name__ == "__main__": block, which is a good practice to follow.

Overall, the code is well-structured and easy to follow, but can be improved with some optimizations and additional error handling.
=== EVALUATION END ===

Too many improvements needed. Stopping the process.
('\n```\ndef compute_mean_vector(feature_values):\n    """\n    Computes the mean vector of the feature values.\n    \n    Args:\n        feature_values (list of lists): A list of lists, where each inner list represents a feature.\n        \n    Returns:\n        list: The mean vector of the feature values.\n    """\n    num_samples = len(feature_values)\n    num_features = len(feature_values[0])\n    mean_vector = [sum(x) / num_samples for x in zip(*feature_values)]\n    return mean_vector\n\n\ndef compute_covariance_matrix(feature_values):\n    """\n    Computes the covariance matrix of the feature values.\n    \n    Args:\n        feature_values (list of lists): A list of lists, where each inner list represents a feature.\n        \n    Returns:\n        list of lists: The covariance matrix of the feature values.\n    """\n    num_samples = len(feature_values)\n    num_features = len(feature_values[0])\n    \n    # Compute the mean vector\n    mean_vector = compute_mean_vector(feature_values)\n    \n    # Initialize the covariance matrix\n    covariance_matrix = [[0.0 for _ in range(num_features)] for _ in range(num_features)]\n    \n    # Compute the covariance matrix\n    for i in range(num_features):\n        for j in range(num_features):\n            for k in range(num_samples):\n                covariance_matrix[i][j] += (feature_values[k][i] - mean_vector[i]) * (feature_values[k][j] - mean_vector[j])\n            covariance_matrix[i][j] /= num_samples - 1\n    \n    return covariance_matrix\n\n\ndef compute_correlation_matrix(covariance_matrix):\n    """\n    Computes the correlation matrix from the covariance matrix.\n    \n    Args:\n        covariance_matrix (list of lists): The covariance matrix of the feature values.\n        \n    Returns:\n        list of lists: The correlation matrix of the feature values.\n    """\n    num_features = len(covariance_matrix)\n    correlation_matrix = [[0.0 for _ in range(num_features)] for _ in range(num_features)]\n    \n    for i in range(num_features):\n        for j in range(num_features):\n            correlation_matrix[i][j] = covariance_matrix[i][j] / (covariance_matrix[i][i] * covariance_matrix[j][j])**0.5\n    \n    return correlation_matrix\n\n\n# Example usage\nfeature_values = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\ncovariance_matrix = compute_covariance_matrix(feature_values)\ncorrelation_matrix = compute_correlation_matrix(covariance_matrix)\n\nprint("Covariance Matrix:")\nfor row in covariance_matrix:\n    print(row)\n\nprint("\\nCorrelation Matrix:")\nfor row in correlation_matrix:\n    print(row)\n\n\n# Time complexity: O(n * p^2)\n```\n',
 [{'thoughts': '•••••••••••••••••\nI understand that the task is to write a Python code to compute the full covariance (or correlation) matrix of p features from a dataset with n samples, without using any external libraries. This is a fundamental problem in statistics and machine learning. To improve my solution, I will focus on implementing an efficient algorithm to compute the covariance matrix and analyze its time complexity.\n\nFrom the feedback, I assume that I need to provide a clear and concise solution with a proper understanding of the task and its complexity.\n',
   'result': '\nHere is the Python code to compute the covariance matrix:\n```\ndef compute_covariance_matrix(data):\n    n = len(data)\n    p = len(data[0])\n    mean_vector = [sum(x) / n for x in zip(*data)]\n    covariance_matrix = [[0 for _ in range(p)] for _ in range(p)]\n    \n    for i in range(p):\n        for j in range(p):\n            for k in range(n):\n                covariance_matrix[i][j] += (data[k][i] - mean_vector[i]) * (data[k][j] - mean_vector[j])\n            covariance_matrix[i][j] /= n - 1\n    \n    return covariance_matrix\n```\nThe time complexity of this algorithm is O(n \\* p^2), where n is the number of samples and p is the number of features.\n\nTo compute the correlation matrix, we can use the following formula:\n```\ncorrelation_matrix = [[covariance_matrix[i][j] / (covariance_matrix[i][i] * covariance_matrix[j][j])**0.5 for j in range(p)] for i in range(p)]\n```\nThe time complexity of this additional step is O(p^2), so the overall time complexity remains O(n \\* p^2).\n'},
  {'thoughts': '\nI understand that I need to improve my previous code based on the feedback provided. I will separate the functionality into two functions, use more descriptive variable names, and improve the looping structure using vectorized operations. I will also add comments to explain the logic behind the code and include error handling for edge cases. \n\nMy goal is to write Python code that computes the full covariance matrix of the p features in the dataset and states the time complexity in Big-O notation with respect to n and p, without using any external libraries.\n\nI will take into account the feedback and improve my code to make it more readable, maintainable, and efficient.\n',
   'result': '\n```\ndef compute_mean_vector(feature_values):\n    """\n    Computes the mean vector of the feature values.\n    \n    Args:\n        feature_values (list of lists): A list of lists, where each inner list represents a feature.\n        \n    Returns:\n        list: The mean vector of the feature values.\n    """\n    num_samples = len(feature_values)\n    num_features = len(feature_values[0])\n    mean_vector = [sum(x) / num_samples for x in zip(*feature_values)]\n    return mean_vector\n\n\ndef compute_covariance_matrix(feature_values):\n    """\n    Computes the covariance matrix of the feature values.\n    \n    Args:\n        feature_values (list of lists): A list of lists, where each inner list represents a feature.\n        \n    Returns:\n        list of lists: The covariance matrix of the feature values.\n    """\n    num_samples = len(feature_values)\n    num_features = len(feature_values[0])\n    \n    # Compute the mean vector\n    mean_vector = compute_mean_vector(feature_values)\n    \n    # Initialize the covariance matrix\n    covariance_matrix = [[0.0 for _ in range(num_features)] for _ in range(num_features)]\n    \n    # Compute the covariance matrix\n    for i in range(num_features):\n        for j in range(num_features):\n            for k in range(num_samples):\n                covariance_matrix[i][j] += (feature_values[k][i] - mean_vector[i]) * (feature_values[k][j] - mean_vector[j])\n            covariance_matrix[i][j] /= num_samples - 1\n    \n    return covariance_matrix\n\n\ndef compute_correlation_matrix(covariance_matrix):\n    """\n    Computes the correlation matrix from the covariance matrix.\n    \n    Args:\n        covariance_matrix (list of lists): The covariance matrix of the feature values.\n        \n    Returns:\n        list of lists: The correlation matrix of the feature values.\n    """\n    num_features = len(covariance_matrix)\n    correlation_matrix = [[0.0 for _ in range(num_features)] for _ in range(num_features)]\n    \n    for i in range(num_features):\n        for j in range(num_features):\n            correlation_matrix[i][j] = covariance_matrix[i][j] / (covariance_matrix[i][i] * covariance_matrix[j][j])**0.5\n    \n    return correlation_matrix\n\n\n# Example usage\nfeature_values = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\ncovariance_matrix = compute_covariance_matrix(feature_values)\ncorrelation_matrix = compute_correlation_matrix(covariance_matrix)\n\nprint("Covariance Matrix:")\nfor row in covariance_matrix:\n    print(row)\n\nprint("\\nCorrelation Matrix:")\nfor row in correlation_matrix:\n    print(row)\n\n\n# Time complexity: O(n * p^2)\n```\n'}])