Professional Dify workflow DSL/YML file generator: it produces complete Dify workflow configuration files from a user's business requirements, supporting all common node types and complex workflow logic.
Installation:

npx skill4agent add wwwzhouhui/skills_collection dify-dsl-generator

Example prompts:

Generate a Dify workflow for [business requirement description]

Help me generate a Dify workflow DSL file:
- Function: [Function to be implemented by the workflow]
- Input: [User input content]
- Processing Steps: [Detailed processing logic]
- Output: [Expected output results]
- Plugins Used: [Required plugins, optional]

App configuration:

app:
description: 'Workflow description'
icon: 🤖
icon_background: '#FFEAD5'
mode: advanced-chat # or workflow, agent-chat
name: Workflow Name
use_icon_as_answer_icon: false

Supported mode values: advanced-chat, workflow, agent-chat.

Plugin dependencies:

dependencies:
- current_identifier: null
type: marketplace
value:
marketplace_plugin_unique_identifier: Plugin unique identifier

Common plugin identifiers include langgenius/openai_api_compatible and bowenliang123/md_exporter.

Basic file skeleton:

kind: app
version: 0.3.0
workflow:
conversation_variables: []
environment_variables: []
features:
file_upload:
enabled: false
speech_to_text:
enabled: false
text_to_speech:
enabled: false
graph:
edges: []
nodes: []

The node definitions below go under graph.nodes.

Start node:

- data:
desc: ''
title: Start
type: start
variables:
- label: User Input
max_length: 1000
options: []
required: true
type: paragraph # or text-input, select, file
variable: query
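# Hypothetical example (not from the source): a select-type variable
# would list its choices under options, e.g.
# - label: Output Format
#   options:
#   - markdown
#   - html
#   required: true
#   type: select
#   variable: output_format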
id: 'start'
position:
x: 100
y: 300
type: custom
width: 244
height: 90

Input variable type values: paragraph, text-input, select, file, number.

LLM node:

- data:
context:
enabled: false
variable_selector: []
model:
completion_params:
temperature: 0.7
max_tokens: 2000
mode: chat
name: gpt-4
provider: openai
prompt_template:
- id: Unique ID
role: system
text: System prompt
- id: Unique ID
role: user
text: User prompt {{#variable reference#}}
title: LLM Node
type: llm
vision:
enabled: false
id: 'Node ID'
position:
x: 400
y: 300
type: custom

Common provider values: openai, langgenius/openai_api_compatible/openai_api_compatible, anthropic, alibaba.

Prompts reference variables as {{#Node ID.output variable#}}, for example {{#sys.query#}} or {{#Node ID.text#}}.

Code execution node:

- data:
code: |
  def main(arg1: str, arg2: str) -> dict:
      # Placeholder logic: replace with the node's real processing
      result = f"{arg1}-{arg2}"
      return {
          "result": result,
          "status": "success"
      }
code_language: python3
outputs:
result:
type: string
status:
type: string
title: Code Execution
type: code
variables:
- value_selector:
- 'Previous Node ID'
- Output variable
variable: arg1
id: 'Node ID'
position:
x: 700
y: 300
type: custom

code_language values: python3, javascript. Output types: string, number, object, array[string], array[number], array[object].

HTTP request node:

- data:
authorization:
config: null
type: no-auth
body:
data: '{"key": "{{#variable#}}"}'
type: json
headers: ''
method: post
timeout:
max_connect_timeout: 0
max_read_timeout: 0
max_write_timeout: 0
title: HTTP Request
type: http-request
url: https://api.example.com/endpoint
id: 'Node ID'
position:
x: 1000
y: 300
type: custom

method values: get, post, put, patch, delete. Authorization types: no-auth, api-key, bearer.

Condition (if-else) node:

- data:
cases:
- case_id: case1
conditions:
- comparison_operator: contains
id: Condition ID
value: Expected value
variable_selector:
- 'Node ID'
- Variable name
id: case1
logical_operator: and # or 'or'
title: Condition Judgment
type: if-else
id: 'Node ID'
position:
x: 1300
y: 300
type: custom

comparison_operator values: contains, not contains, is, is not, empty, not empty. logical_operator values: and, or.

Tool call node:

- data:
provider_id: Tool Provider ID
provider_name: Tool Provider Name
provider_type: builtin # or api, plugin
title: Tool Call
tool_configurations: {}
tool_label: Tool Label
tool_name: Tool Name
tool_parameters:
Parameter name:
type: mixed
value: '{{#variable#}}'
type: tool
id: 'Node ID'
position:
x: 1600
y: 300
type: custom

provider_type values: builtin, api, plugin.

Direct response (answer) node:

- data:
answer: |
{{#LLM Node ID.text#}}
{{#Code Node ID.result#}}
title: Direct Response
type: answer
variables: []
id: answer
position:
x: 1900
y: 300
type: custom

Variable aggregator node:

- data:
advanced_settings: null
desc: ''
groups:
- group_name: Group 1
output_type: string
variables:
- value_selector:
- 'Node ID'
- Variable name
variable: Output variable name
title: Variable Aggregator
type: variable-aggregator
id: 'Node ID'
position:
x: 2200
y: 300
type: custom

Parameter extraction node:

- data:
instruction: Extraction instruction description
is_array: false
model:
completion_params: {}
mode: chat
name: gpt-4
provider: openai
parameters:
- description: Parameter description
name: Parameter name
required: true
type: string
query:
- role: user
text: '{{#Input variable#}}'
reasoning_mode: prompt
title: Parameter Extraction
type: parameter-extractor
id: 'Node ID'
position:
x: 2500
y: 300
type: custom

Edge (connection) definitions:

edges:
- data:
isInIteration: false
isInLoop: false
sourceType: start
targetType: llm
id: Source Node ID-source-Target Node ID-target
source: 'Source Node ID'
sourceHandle: source
target: 'Target Node ID'
targetHandle: target
type: custom
zIndex: 0

sourceType and targetType must match the types of the source and target nodes.

Recommended node layout (x, y):

Start(100,300) → LLM(400,300) → Code(700,300) → Answer(1000,300)

If-Else(800,300) → Branch1(1100,150)
                 → Branch2(1100,450)

Example system prompt: SQL generation

You are a professional SQL expert. Generate accurate SQL query statements based on user natural language questions.
Database schema:
{{#Database structure#}}
User question: {{#sys.query#}}
Requirements:
1. Only output SQL statements, no other explanations
2. Ensure correct SQL syntax
3. Use appropriate JOIN and WHERE conditions

Example prompt: information extraction

Extract specified information from the following text:
Text content:
{{#Input text#}}
Extraction requirements:
- Extract all dates
- Extract all names
- Extract all amounts
Output in JSON format:
{
"dates": [],
"names": [],
"amounts": []
}

Example prompt: HTML page generation

Generate complete HTML pages based on user requirements.
Requirement: {{#User requirement#}}
Requirements:
1. Generate complete HTML documents
2. Include necessary CSS styles
3. Add necessary JavaScript interactions
4. Ensure standardized code format
5. Only output HTML code, no markdown code block markers

Node IDs are typically millisecond timestamps, for example 1747991890414.

Variable reference syntax: {{#Node ID.variable name#}} is correct; {{Node ID.variable}} (without the # markers) is wrong; {{#Node ID.variable#}} is correct.

Overall file template:

app:
# App configuration
dependencies:
# Dependency list
kind: app
version: 0.3.0
workflow:
conversation_variables: []
environment_variables: []
features:
# Feature configuration
graph:
edges:
# Connection relationships
nodes:
# Node definitions
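As a quick sanity check, the template above can be assembled programmatically. This is a minimal sketch; make_dsl_skeleton and its default values are illustrative helpers, not part of Dify or this generator. Serializing the result with a YAML library would yield a file shaped like the template.

```python
def make_dsl_skeleton(name: str, mode: str = "workflow") -> dict:
    """Build the top-level DSL mapping shown in the template above.

    Hypothetical helper: field names mirror the template; defaults are
    illustrative.
    """
    if mode not in {"advanced-chat", "workflow", "agent-chat"}:
        raise ValueError(f"unsupported mode: {mode}")
    return {
        "app": {
            "description": "",
            "icon": "🤖",
            "icon_background": "#FFEAD5",
            "mode": mode,
            "name": name,
            "use_icon_as_answer_icon": False,
        },
        "dependencies": [],
        "kind": "app",
        "version": "0.3.0",
        "workflow": {
            "conversation_variables": [],
            "environment_variables": [],
            "features": {},
            # Nodes and edges from the sections above are appended here.
            "graph": {"edges": [], "nodes": []},
        },
    }

dsl = make_dsl_skeleton("Demo Workflow")
```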