This is a workflow builder that ensures the LLM produces a complete, step-by-step plan for any use case.
WHEN TO CALL:
- Call this tool based on the RUBE_SEARCH_TOOLS output: if the search response indicates create_plan should be called and the use case is not easy, call it.
- Use this tool after RUBE_SEARCH_TOOLS or RUBE_MANAGE_CONNECTIONS to generate an execution plan for the user's use case.
- USE for medium or hard tasks; skip it for easy ones.
- If the user switches to a new use case in the same chat and RUBE_SEARCH_TOOLS again instructs you to call the planner, you MUST call this tool again for that new use case.
Memory Integration:
- You may add the memory received from the search tool to the known_fields parameter of the plan function to enhance planning with discovered relationships and information.
Outputs a complete plan with sections such as "workflow_steps", "complexity_assessment", "decision_matrix", "failure_handling", "output_format", and more as needed. If you skip this step for non-easy tasks, workflows will likely be incomplete or fail during execution. Calling it produces reliable, accurate, end-to-end workflows aligned with the available tools and connections.
Convert the executed workflow into a notebook. If recipe_id is NOT provided, create a new recipe; otherwise, update the existing one.
This tool allows you to:
1. Save the full workflow execution as a reusable notebook.
2. Share the notebook with others (when published): they pass the correct environment variables and it executes the whole workflow from the beginning of this session.
3. Generate an input JSON schema for the notebook based on the executed workflow, so other users of the published notebook can pass their own valid inputs.
4. Generate the notebook's code based on the executed workflow; see the coding rules below.
5. Likewise, generate a good output JSON schema, so other users of the published notebook know how to consume the response.
6. Take input from environment variables using os.environ.get(); the notebook user passes each key of the input JSON schema as an env var.
WHEN TO USE:
- Only run this tool when the workflow is completed and successful, or when the user explicitly asks to run it.
DO NOT USE:
- When the workflow is still being processed or not yet completed, and the user did not explicitly ask to run this tool.
IMPORTANT CODING RULES:
1. Single Execution: Generate code for the full notebook that can be executed in a single invocation.
2. Schema Safety: Never assume the response schema of run_composio_tool if it is not already known from previous tools. To inspect a schema, either run a simple request **outside** the workbench via RUBE_MULTI_EXECUTE_TOOL or use the invoke_llm helper.
3. Parallelism & Timeout (CRITICAL): There is a hard timeout of 4 minutes, so the code must finish within it. Prioritize PARALLEL execution using ThreadPoolExecutor with suitable concurrency for bulk operations - e.g., call run_composio_tool or invoke_llm in parallel across rows to maximize efficiency.
4. LLM Helpers: Always use the invoke_llm helper for summaries, analysis, or field extraction on results. This smart LLM gives much better results than any ad-hoc filtering.
5. Avoid Meta Loops: Do not use run_composio_tool to call RUBE_MULTI_EXECUTE_TOOL or other RUBE_* meta tools, to avoid cycles. Only use it for app tools.
6. Pagination: Use when data spans multiple pages. Keep fetching pages with the returned next_page_token or cursor until none remains. Parallelize page fetches if the tool supports page_number.
7. No Hardcoded Data: Never hardcode data in code. Always load it from files or tool responses, iterating to construct intermediate or final inputs/outputs.
8. No Hardcoded PII: Do NOT hardcode any PII such as emails, names, home addresses, or social security numbers. This is very risky.
9. NEVER HARDCODE CONTENT (CRITICAL): For ANY content-generation use case, you MUST use the invoke_llm helper instead of hardcoding. This includes, but is not limited to: social media posts (Twitter, LinkedIn, Instagram, Facebook, Reddit, etc.), blog posts, articles, email content, SEO reports, market research, jokes, stories, creative writing, product descriptions, news summaries, documentation, or ANY text content that should be unique, personalized, or contextual. Always use invoke_llm with a specific prompt to generate fresh content every time.
10. Dynamic Content Generation: Structure your code like this for content generation: content_prompt = f"Generate a [specific type] about [topic] that [requirements]" then generated_content, error = invoke_llm(content_prompt). Every execution should produce different, contextually appropriate content.
11. Code Correctness (CRITICAL): Code must be syntactically and semantically correct and executable.
12. Ensure the code takes input in the form of the input JSON schema via environment variables and produces output in the form of the output JSON schema as its last command.
13. The notebook must read each key of the input JSON schema from environment variables using os.environ.get().
14. Always end the code with a bare "output" expression following the output JSON schema so the notebook actually shows it. DO NOT PRINT IT AT ANY COST.
15. Take only the needed args as input in the input JSON schema and return only the needed data in the output. This keeps the JSON schemas and notebook simple.
16. Do NOT default inputs in os.environ.get() to any sensitive/PII information. This is very risky. Defaulting non-sensitive inputs is fine.
17. Debugging (CRITICAL): Prefix every print statement with the time at which it is executed. This helps investigate latency issues.
18. If any errors occur while running, throw them so the person running the notebook can see and fix them.
19. FAIL LOUDLY (CRITICAL): If you expect data but get 0 results, raise an Exception immediately. NEVER silently continue or create empty outputs. The recovery loop will fix the code - don't hide issues. Example: if len(items) == 0: raise Exception("Expected data but got none")
20. NESTED DATA (CRITICAL): APIs often double-nest data. Always extract: data = result.get("data", {}); if "data" in data: data = data["data"]. Try flexible field names: item.get("id") or item.get("channel_id")
IMPORTANT SCHEMA RULES:
1. Keep the input schema simple - ask only for parameters users would want to vary between runs.
2. Do not ask for large inputs. Use the invoke_llm helper to generate large content in the code.
3. HUMAN-FRIENDLY INPUTS (CRITICAL):
- ✓ Ask for: channel_name, google_sheet_url, repo_name, email_address
- ✗ Never ask for: channel_id, spreadsheet_id, document_id, user_id
- Extract IDs in code: use FIND/SEARCH tools to convert names/URLs to IDs
- For URLs: extract IDs in code with regex (e.g., spreadsheet_id from google_sheet_url)
4. REQUIRED vs OPTIONAL: Mark an input as required only if it is specific to the user's workflow and would change every run. Generic settings should be optional with sensible defaults.
5. Identify what varies between runs: channel name, date range, search terms = required; sheet tab name, row limits, formatting = optional.
6. [CRITICAL]: Use search/find tools in code to convert human inputs (names/URLs) to IDs before calling other tools.
IMPORTANT RULES ON DEFAULTS FOR REQUIRED PARAMETERS:
1. Provide default parameters for all required inputs in the input schema (even if they are PII-related and specific to the user) from your context.
2. If no value is available from context, assume the default to be an empty string. Do NOT hallucinate a random name or ID.
3. Ensure there is no type mismatch between the default parameters and the input schema: if the schema field is a string, the default must be a string; if a number, a number; and so on.
4. If the user provides no input for this workflow, these parameters are used; otherwise the workflow will break.
5. The values of these parameters should reflect the workflow execution and be specific to the user creating/updating the recipe, not random.
ENV & HELPERS: You can get the list of helper functions and their details from the RUBE_REMOTE_WORKBENCH tool's description.
NOTE: Do not forget to read the environment variables for input and end the notebook with just output (DO NOT PRINT).
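The rules above can be illustrated with a minimal notebook sketch. This is not a definitive implementation: the input key `google_sheet_url`, the default URL, and the `result` dict (a stand-in for a `run_composio_tool` response, since that workbench helper is not available here) are hypothetical examples.

```python
import os
import re
from datetime import datetime

def log(msg):
    # Coding rule 17: prefix every print statement with the execution time
    print(f"[{datetime.now().isoformat()}] {msg}")

# Rules 12-13 and schema rule 3: read a human-friendly input from an env var.
# The key name and default URL are hypothetical; default only non-sensitive values (rule 16).
sheet_url = os.environ.get(
    "google_sheet_url",
    "https://docs.google.com/spreadsheets/d/abc123XYZ/edit",
)

# Schema rule 3: ask for the URL, then extract the ID in code with a regex
match = re.search(r"/spreadsheets/d/([a-zA-Z0-9_-]+)", sheet_url)
if not match:
    raise Exception(f"Could not extract a spreadsheet ID from {sheet_url}")  # rule 18
spreadsheet_id = match.group(1)
log(f"Using spreadsheet {spreadsheet_id}")

# Stand-in for a run_composio_tool response; real code would call the helper.
result = {"data": {"data": {"values": [["row one"], ["row two"]]}}}

# Rule 20: APIs often double-nest data, so unwrap defensively
data = result.get("data", {})
if "data" in data:
    data = data["data"]
rows = data.get("values", [])

# Rule 19: fail loudly when expected data is missing
if len(rows) == 0:
    raise Exception("Expected rows but got none")

# Rule 14: end with a bare `output` expression matching the output schema, never print it
output = {"row_count": len(rows)}
output
```

For bulk work, the per-row processing would be wrapped in a `ThreadPoolExecutor` (rule 3) so the notebook stays inside the 4-minute timeout.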
Executes a Recipe
Get the details of an existing recipe for a given recipe ID.
MCP Server Info: Rube MCP by Composio connects 500+ apps - Slack, GitHub, Notion, Google Workspace (Gmail, Sheets, Drive, Calendar), Microsoft (Outlook, Teams), X, Figma, Meta apps (WhatsApp, Instagram), TikTok, AI tools like Veo3 & V0, and more - for seamless cross-app automation. Use this MCP server to discover new tools and connect to apps.
Tool Info: Create/manage connections to the user's apps. If RUBE_SEARCH_TOOLS finds no active connection for an app, call this with the toolkit name to get an auth redirect_url in the response; ALWAYS show it to the user as a FORMATTED MARKDOWN LINK. Supports OAuth (default/custom), API Key, Bearer Token, Basic Auth, hybrid, and no-auth. Batch-safe: isolates errors, allows selective re-init, and returns per-app results and a summary.
IMPORTANT: If the response contains an error_message about "admin approval" or "disabled by an admin", the app is not enabled for the user's team. In this case, inform the user that they need to ask a team admin to enable the app first. No redirect_url will be provided in this scenario.
Fast, parallel tool executor for tools discovered through RUBE_SEARCH_TOOLS. Use this tool to execute up to 20 tools in parallel across apps. The response contains structured outputs ready for immediate analysis - avoid reprocessing them via remote bash/workbench tools.
Prerequisites:
- Always use valid tool slugs and arguments discovered through RUBE_SEARCH_TOOLS. NEVER invent tool slugs or argument fields. ALWAYS pass arguments along with the tool_slug for each tool.
- Ensure Active connection statuses, via RUBE_MANAGE_CONNECTIONS, for the toolkits that are going to be executed.
- Only batch tools that are logically independent - no required ordering or dependencies between tools or their outputs.
Usage guidelines:
- Use this whenever a discovered tool has to be called, either as part of a multi-step workflow or as a standalone tool.
- If RUBE_SEARCH_TOOLS returns a tool that can perform the task, prefer calling it via this executor. Do not write custom API calls or ad-hoc scripts for tasks that available Composio tools can complete.
- Prefer parallel execution: group independent tools into a single multi-execute call where possible.
- Predictively set sync_response_to_workbench=true if the response may be large or needed for later scripting. The response is still shown inline; if the actual response data turns out small and easy to handle, keep everything inline and SKIP workbench usage.
- Responses contain structured outputs for each tool. RULE: process small data yourself inline; process large data in the workbench.
- ALWAYS include inline references/links to sources in MARKDOWN format directly next to the relevant text, e.g., provide Slack thread links alongside the summary, and render document links instead of raw IDs.
- CRITICAL: You MUST always include the 'memory' parameter - never omit it. Even if you think there's nothing to remember, include an empty object {} for memory.
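Putting the prerequisites together, a batched call has roughly the following shape. This is a sketch under assumptions: the top-level `tools` key and the two tool slugs are hypothetical illustrations, while `tool_slug`, `arguments`, `sync_response_to_workbench`, and `memory` are the parameter names this description mentions.

```python
# Hypothetical RUBE_MULTI_EXECUTE_TOOL payload: two logically independent
# tools batched into one call, with slugs previously discovered via
# RUBE_SEARCH_TOOLS (the slugs shown here are illustrative, not confirmed).
payload = {
    "tools": [
        {"tool_slug": "SLACK_LIST_CHANNELS", "arguments": {"limit": 50}},
        {"tool_slug": "GMAIL_FETCH_EMAILS", "arguments": {"max_results": 10}},
    ],
    # Set to True predictively when the response may be large or needed
    # for later scripting; here the expected data is small.
    "sync_response_to_workbench": False,
    # Always present, even when there is nothing new to remember.
    "memory": {},
}

# Dependent calls must NOT be batched: if tool B needs tool A's output,
# run A first, read its structured output, then issue B in a second call.
batch_size_ok = len(payload["tools"]) <= 20
```

The batching decision is the key design point: each entry in the list must be executable without seeing any other entry's output.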
Memory Storage:
- CRITICAL FORMAT: Memory must be a dictionary where keys are app names (strings) and values are arrays of strings. NEVER pass nested objects or dictionaries as values.
- CORRECT format: {"slack": ["Channel general has ID C1234567"], "gmail": ["John's email is john@example.com"]}
- Write memory entries in natural, descriptive language - NOT as key-value pairs. Use full sentences that clearly describe the relationship or information.
- ONLY store information that will be valuable for future tool executions - focus on persistent data that saves API calls.
- STORE: ID mappings, entity relationships, configs, stable identifiers.
- DO NOT STORE: action descriptions, temporary status updates, logs, or "sent/fetched" confirmations.
- Examples of GOOD memory (store these):
  * "The important channel in Slack has ID C1234567 and is called #general"
  * "The team's main repository is owned by user 'teamlead' with ID 98765"
  * "The user prefers markdown docs with professional writing, no emojis" (user_preference)
- Examples of BAD memory (DON'T store these):
  * "Successfully sent email to john@example.com with message hi"
  * "Fetching emails from last day (Sep 6, 2025) for analysis"
- Do not repeat the memories stored or found previously.
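The format constraint above can be checked mechanically. This sketch validates the shape only (app name to list of strings), using the CORRECT example from this description and a deliberately malformed one; it does not judge whether an entry is worth storing.

```python
def is_valid_memory(memory):
    # Memory must be a dict mapping app names (strings) to lists of strings.
    if not isinstance(memory, dict):
        return False
    for app, entries in memory.items():
        if not isinstance(app, str) or not isinstance(entries, list):
            return False
        # Nested objects/dicts as values are explicitly disallowed.
        if not all(isinstance(entry, str) for entry in entries):
            return False
    return True

# The CORRECT format from the description: natural-language sentences.
good = {
    "slack": ["Channel general has ID C1234567"],
    "gmail": ["John's email is john@example.com"],
}
# Malformed: a nested dict as a value instead of a list of strings.
bad = {"slack": {"general": "C1234567"}}

good_ok = is_valid_memory(good)
bad_ok = is_valid_memory(bad)
```

A check like this would catch the most common mistake, passing key-value mappings instead of descriptive sentences, before the memory is persisted.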