bug: Error: Required API key PERPLEXITY_API_KEY for provider 'perplexity' is not set in environment, session, or .env file. #722

Closed
opened 2025-10-14 16:10:52 -06:00 by navan · 0 comments
Owner

Originally created by @celgost on 5/16/2025

Description

The documentation is not very clear about how to actually use another API provider in Cursor.

I am happy to make a small PR to update the documentation if you tell me how to use another provider within the Cursor MCP implementation.

I assumed I needed to:

  • Provide my OpenAI key in mcp.json.
  • Use provider openai and the modelId in config.

I also removed every mention of the research flag from the auto-generated Cursor rules, but it made no difference.

See files below.

Steps to Reproduce

  1. In Cursor's Agent chat, ask it to go to the next task and update the task.
  2. The Agent makes the tool call automatically, as seen in the screenshot.
    (I also tried changing `4.1-mini` to `4-1-mini` to match the models list.)

Expected Behavior

When updating tasks, the MCP tool should check the API key in `mcp.json` and the configured models in `.taskmasterconfig`, then use the providers and models defined there.
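The fallback chain named in the error message (environment, then session, then `.env` file) can be sketched as follows. This is a hypothetical illustration of that documented lookup order, not Task Master's actual implementation; all function and parameter names here are invented:

```typescript
// Hypothetical sketch of the key-resolution order implied by the error
// message: process environment -> MCP session env -> parsed .env file.
// None of these names come from Task Master's source; they only
// illustrate the documented fallback chain.
function resolveApiKey(
  keyName: string,
  processEnv: Record<string, string | undefined>,
  sessionEnv: Record<string, string | undefined>,
  dotEnv: Record<string, string | undefined>
): string | undefined {
  // First match wins: environment, then session, then .env file.
  return processEnv[keyName] ?? sessionEnv[keyName] ?? dotEnv[keyName];
}

// The reported setup: the key is only present in mcp.json's env block,
// which the MCP server receives as its session environment.
const key = resolveApiKey(
  "OPENAI_API_KEY",
  {},
  { OPENAI_API_KEY: "sk-proj-myapikey" },
  {}
);
console.log(key); // "sk-proj-myapikey"
```

Under this model, the `env` block in `mcp.json` should satisfy the lookup for `OPENAI_API_KEY`, which is why the `PERPLEXITY_API_KEY` error is surprising when no role is configured to use perplexity.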

Actual Behavior

![Image](https://github.com/user-attachments/assets/064a7906-9ef2-4159-90a2-7181861e5a90)

![Image](https://github.com/user-attachments/assets/d2522f98-8550-438a-a00c-c59eadc83ff3)

Error: Required API key PERPLEXITY_API_KEY for provider 'perplexity' is not set in environment, session, or .env file.

Environment

  • Task Master version: latest
  • Node.js version: 22.14.0
  • Operating system: Ubuntu 24.04 LTS
  • IDE (if applicable): Cursor
  • Model Used in Cursor: Gemini 2.5 Pro Exp 03 25

Additional Context

mcp.json:

```json
"mcpServers": {
    "taskmaster-ai": {
      "command": "npx",
      "args": ["-y", "--package=task-master-ai", "task-master-ai"],
      "env": {
        "OPENAI_API_KEY": "sk-proj-myapikey"
      }
    }
}
```

`.taskmasterconfig`:

```json
{
  "models": {
    "main": {
      "provider": "openai",
      "modelId": "gpt-4-1-mini",
      "maxTokens": 120000,
      "temperature": 0.2
    },
    "research": {
      "provider": "openai",
      "modelId": "gpt-4-1-mini",
      "maxTokens": 8700,
      "temperature": 0.1
    },
    "fallback": {
      "provider": "openai",
      "modelId": "gpt-4-1-mini",
      "maxTokens": 120000,
      "temperature": 0.1
    }
  },
  "global": {
    "logLevel": "info",
    "debug": false,
    "defaultSubtasks": 5,
    "defaultPriority": "medium",
    "projectName": "Project"
  }
}
```

Reference: github/claude-task-master#722