Configuration

AI Agents are configured entirely through Lime Admin.

Agent Configuration Parameters

Each Agent has the following configuration parameters:

Basic Information

Name

The display name for the agent. This name appears in dropdowns when selecting agents in Automations or Actions.

Model

The AI model to use for this agent.

Available options are Fast and Powerful:

  • Use Fast for:
    • Simple categorization or classification tasks
    • Quick decisions based on clear criteria
    • High-volume operations where speed matters
    • Cost-sensitive applications
  • Use Powerful for:
    • Complex analysis requiring deep reasoning
    • Nuanced decisions with multiple factors
    • Detailed text analysis or generation

Instructions

Natural language instructions that tell the agent what to do and how to think about the task.

Best Practices

  • Be specific about what to analyze
  • Define evaluation criteria clearly
  • Provide domain context the agent needs
  • Include examples for complex logic
  • Explain how to handle edge cases

Example:

Analyze this sales opportunity and determine:

1. Win probability (0-100%) based on:
- Decision maker engagement level
- Budget confirmation status
- Timeline clarity
- Competitive situation

2. Primary risk factors that could prevent closing

3. Recommended next action for the sales rep

Consider our typical sales cycle is 60-90 days. Opportunities
without budget confirmation after 30 days are historically 40%
less likely to close.

Input Configuration

Input Lime Type

The type of Lime object this agent analyzes (e.g., "company", "person", "deal"). When the agent runs, it receives one object of this type as its primary input.

Input Object Properties

Defines which properties from the input object to provide to the agent.

Best Practices

  • Only include properties the agent needs
  • Provide descriptions to give the agent context

Additional Input Data

Queries

Additional Lime Queries that provide supplementary context to the agent.

Best Practices

  • Use filters to only fetch data the agent actually needs
  • Use filters to make sure the data is related to the input object

Example Use Cases:

  • Related objects (e.g., "Fetch all open deals for this company")
  • Historical data (e.g., "Get last 10 interactions with this person")
  • Reference data (e.g., "Include our product catalog")
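Conceptually, the agent receives the selected properties of its input object together with the results of any configured queries as one combined context. The sketch below illustrates that shape; the field names and payload structure are assumptions for illustration only, not Lime's actual data format:

```python
# Hypothetical sketch of the combined context an agent receives.
# Structure and field names are illustrative assumptions, not Lime's API.
agent_context = {
    # Primary input: one object of the configured Input Lime Type
    "input_object": {
        "limetype": "company",
        "properties": {
            "name": "Acme Corp",
            "industry": "Manufacturing",
        },
    },
    # Supplementary context from Additional Input Data queries,
    # filtered so only data related to the input object is included
    "queries": {
        "open_deals": [
            {"name": "Acme expansion", "value": 50000, "status": "open"},
        ],
        "recent_interactions": [
            {"date": "2024-05-02", "type": "meeting"},
        ],
    },
}

# Filtering queries (e.g. to open deals for this company) keeps the
# context small, which reduces both latency and credit usage
assert all(d["status"] == "open" for d in agent_context["queries"]["open_deals"])
```

The point of the sketch is the filtering: every query result the agent sees should be scoped to the input object, since unrelated rows only inflate the context.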

Output Configuration

Defines the structured output the agent will return. Each output parameter has:

Key

The name of the output parameter.

Type

The data type of this output parameter.

See How It Works - Output Types for detailed information about each type.

Instructions

Parameter-specific instructions that clarify for the agent what this output parameter represents and how to populate it.

Things you might include

  • Format requirements, e.g. "Format as a phone number with country code"
  • Value boundaries, e.g. "Score between 0 and 100"
  • Language or tone, e.g. "Write in Swedish" or "Use formal business language"
  • Content guidance, e.g. "Focus on actionable recommendations"
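To make keys, types, and parameter instructions concrete, here is a hedged Python sketch of how an output configuration might be represented and checked. The schema format and the `validate_output` helper are illustrative assumptions, not Lime Admin's internal representation:

```python
# Hypothetical output configuration for a sales-opportunity agent.
# Keys, types, and instruction texts are illustrative assumptions.
output_config = [
    {"key": "win_probability", "type": "number",
     "instructions": "Score between 0 and 100"},
    {"key": "risk_factors", "type": "string",
     "instructions": "Focus on actionable recommendations"},
    {"key": "next_action", "type": "string",
     "instructions": "Use formal business language"},
]

def validate_output(config, output):
    """Check an agent's output against the configured keys and types."""
    type_map = {"number": (int, float), "string": str, "boolean": bool}
    for param in config:
        value = output.get(param["key"])
        if not isinstance(value, type_map[param["type"]]):
            raise ValueError(f"'{param['key']}' must be of type {param['type']}")
    return True

# An output matching the configured keys and types validates cleanly
result = {
    "win_probability": 72,
    "risk_factors": "No budget confirmed after 30 days",
    "next_action": "Schedule a budget discussion with the decision maker",
}
validate_output(output_config, result)
```

This is why well-defined output types matter: a missing key or a string where a number is expected fails validation immediately, rather than producing silently malformed data downstream.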

Testing Agents

Agents can be tested directly in Lime Admin without saving configuration changes.

Testing an Existing Agent

  1. Scroll to the Test section at the bottom of the page
  2. In the LimeObject input field, search for and select an object matching the Input Lime Type
  3. Click the Run test button

Configuration Best Practices

Start Simple:

  • Begin with minimal instructions and a simple output configuration
  • Test with representative data
  • Iteratively refine based on results

Be Specific:

  • Clear instructions lead to better results
  • Well-defined output types prevent validation errors
  • Field-level instructions clarify expectations

Consider Performance:

  • Additional Input Data queries add latency
  • Larger contexts use more credits
  • Evaluate your results with different models; if a faster model produces equally good results, choose it