add-cucumber-tests
Generates Tzatziki-based Cucumber BDD tests (.feature files) from a functional specification. Use this skill whenever a user wants to write Cucumber tests, add BDD scenarios, create feature files, generate tests, or test application behaviors with Gherkin — especially in Java/Spring projects using Tzatziki step definitions for HTTP, JPA, Kafka, MongoDB, OpenSearch, logging, or MCP. Also use when the user mentions writing integration tests, acceptance tests, or end-to-end tests in a project that already has Tzatziki/Cucumber dependencies, including TestNG-based setups.
Source: decathlon/tzatziki
Install: npx skill4agent add decathlon/tzatziki add-cucumber-tests

SKILL.md Content
BDD Test Generation with Tzatziki
Generate valid Cucumber .feature files from a user's functional specification, using Tzatziki's
step definition library as the source of truth for legal step patterns.

Principles
These explain the reasoning behind the workflow — understanding them helps you handle edge cases
the workflow doesn't explicitly cover.
- Steps come from code, not imagination. Tzatziki provides hundreds of pre-built @Given/@When/@Then patterns in its *Steps.java files. Inventing step text that doesn't match a real definition produces an UndefinedStepException at runtime. Per-module reference files in references/steps-*.md contain every legal step pattern — read the relevant ones before writing any scenario.

- Verify the environment before writing tests. Run at least one existing Cucumber test in the target module before creating new feature files. This catches missing dependencies, broken bootstrap, or misconfigured runners early — before you've invested effort in writing scenarios that can't execute. If no test exists yet, create the minimal bootstrap first and confirm it discovers at least one scenario.
- YAML by default for structured data. Tzatziki scenarios are most readable when request bodies, database fixtures, Kafka payloads, and expected results use """yml doc strings. Fall back to Gherkin tables for naturally tabular data, or raw JSON only when the contract requires it.

- Cover exactly what the user asked for — then help them think about what they missed. Generate scenarios for every functional behavior in the user's specification — not just the happy path. But don't silently add extra scenarios either. Instead, after covering the requested scope, actively identify edge cases and present them to the user as optional additions. To identify edge cases effectively, look at three things: (a) every external service call in the scenario — what happens if it returns an error (4xx, 5xx) or times out? (b) every data collection — what happens if it's empty or contains unexpected values? (c) existing test files in the project that test similar features — they often contain error-handling patterns you can adapt for the new scenario. The user decides which edge cases to include, but your job is to surface them so nothing important is missed.
- Reuse what exists. If the project already has a runner, bootstrap class, feature location convention, or glue configuration — reuse them. Creating duplicates causes classpath conflicts and confuses test discovery.
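To illustrate the YAML-by-default principle, here is a hypothetical HTTP scenario using """yml doc strings. The step phrasing and endpoint are a sketch only, not verified Tzatziki patterns; real step text must be copied from the references/steps-*.md files, never assumed:

```gherkin
Feature: User lookup

  Scenario: Fetching an existing user returns its profile
    # Hypothetical step phrasing: verify every line against the real step reference
    Given that calling "/users/42" will return:
      """yml
      id: 42
      name: Jane Doe
      """
    When we call "/users/42"
    Then we receive:
      """yml
      id: 42
      name: Jane Doe
      """
```

Note how the YAML doc strings keep the request fixture and the expected response visually parallel, which is the readability payoff this principle is after.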
Workflow
1. Understand the Specification
Analyze the user's input and break it into a checklist of distinct functional behaviors. Each
behavior will become one or more scenarios. If anything is ambiguous, ask before proceeding.
While analyzing, also extract:
- External dependencies — every API, service, or data source the feature interacts with. Each one is a potential source of edge-case scenarios (errors, timeouts, empty responses).
- Performance or reliability hints — if the spec mentions performance concerns, large data volumes, or error handling, note them. These signal scenarios the user likely cares about even if they didn't write explicit acceptance tests for them.
- Implicit error paths — if the spec says "call service X to get Y", the spec is describing the happy path. The failure path (service X returns an error) is an edge case to suggest.
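To make the happy-path / error-path split concrete: a single spec sentence like "call service X to get Y" usually expands into at least two checklist entries. A hypothetical sketch, scenario names only, with steps omitted:

```gherkin
# One spec sentence, two behaviors to track in the checklist
Feature: Fetch Y from service X

  Scenario: Service X returns Y and we use it
    # happy path: explicitly stated in the spec

  Scenario: Service X returns an error and we handle it
    # implicit error path: not in the spec, surfaced as an optional edge case
```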
2. Discover the Project
Detect the build tool (prefer wrappers: ./mvnw over mvn, ./gradlew over gradle), then:

- Always read references/steps-core.md — core step definitions (ObjectSteps) are used in every Tzatziki project for variables, assertions, and data manipulation.
- Detect which modules the project depends on by inspecting pom.xml or build.gradle for tzatziki-* dependency declarations. A quick way to extract them:

  ```bash
  grep -o 'tzatziki-[a-z-]*' pom.xml | sort -u
  ```

- Read the matching per-module reference(s) based on detected dependencies:
  - HTTP/REST testing → references/steps-http.md
  - Spring context → references/steps-spring.md
  - JPA/database → references/steps-spring-jpa.md
  - Kafka messaging → references/steps-spring-kafka.md
  - MongoDB → references/steps-spring-mongodb.md
  - OpenSearch → references/steps-opensearch.md
  - Logging assertions → references/steps-logback.md
  - MCP/AI testing → references/steps-mcp.md
- When in doubt between a step pattern inferred from a .feature example and one read from the Java source (*Steps.java), trust the Java source.
- Catalog error-handling patterns from existing .feature files you encounter during discovery. Note how the project tests HTTP error responses (e.g., mocking 404/500 from external APIs), empty data, cache misses, or retry behavior. You'll use these patterns as templates when suggesting edge cases in Step 5.
3. Check the Bootstrap
Search for the existing test infrastructure:
- Runner: @Suite + @IncludeEngines("cucumber") (JUnit 5) or AbstractTestNGCucumberTests
- Feature location: @SelectClasspathResource(...) or @SelectDirectories(...)
- Spring context: @CucumberContextConfiguration + @SpringBootTest (if Spring is used)
If any piece is missing, plan to create it. Read references/bootstrap-templates.md for the
JUnit 5 runner and Spring configuration templates.

4. Verify the Environment
Run the existing test suite in the target module to confirm it works:
```bash
# Maven
./mvnw -pl <module> -Dtest=<RunnerClass> test

# Gradle
./gradlew :<module>:test --tests <RunnerClass>
```

A passing or functionally-failing run is fine — what matters is that Cucumber discovers and
executes scenarios. Tests run: 0 means the runner or feature discovery is broken and needs
fixing before you write anything new.

5. Propose the Plan
⛔ This is a mandatory checkpoint. Do not proceed to Step 6 without the user's explicit approval. Even if the user's specification already includes detailed acceptance tests, present the plan anyway — the user expects to review it, may want to adjust scope, and needs the chance to approve or add edge cases they care about. Skipping this step risks generating the wrong tests and wasting effort.
Output the full plan as regular response text first (never embed a long plan inside the
ask_user question — the UI truncates it). Include:
- Files to create or modify
- A table mapping each requested functional behavior to a scenario
- Any bootstrap work needed
- Suggested edge cases (clearly marked as optional) — see below for how to identify them
Then use the ask_user tool with a short, focused question asking only for the user's
decision (e.g. "Does this plan look good, and which optional edge cases would you like to
include?"). Do not write any .feature file until the user confirms.

Identifying edge cases
Use the external dependencies, error patterns, and spec hints you gathered in Steps 1 and 2
to build a concrete list of optional edge-case scenarios. Work through these categories:
- External service failures — For each external API call in the scenario, consider what happens when it returns:
  - A client error (400, 404) — e.g., the resource doesn't exist
  - A server error (500, 502, 503) — e.g., the service is down
  - An empty or unexpected response body

- Empty and boundary data — For each data collection involved:
  - What if the collection is empty? (e.g., a super model with no models, a model with no articles, an overridden rule with an empty value list)
  - What if it contains a single element vs. many?
  - What if values are null or missing?
- State and ordering — Cache hit vs. miss, pre-existing vs. fresh data, concurrent modifications, retry after failure (CREATED → TO_RETRY → ERROR).
- Patterns from existing tests — In Step 2 you cataloged error-handling patterns from the project's existing .feature files. Adapt those patterns to the new feature. For example, if existing tests mock a 404 from /masterdata/v2/arbo/models/{id}/articles and assert "0 kafka messages", suggest the same pattern for any new external API the feature calls.
Present each suggested edge case with a one-line description, marked as (optional).
The user decides which to include.
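A suggested edge case, once approved, might be written along these lines. The step phrasing, trigger, and topic name below are all hypothetical; verify each step against the per-module references before use:

```gherkin
Scenario: The articles service returns a 500, so nothing is published
  # Hypothetical steps: mock a server error, trigger the flow, assert no output
  # (check references/steps-http.md and references/steps-spring-kafka.md)
  Given that calling "/masterdata/v2/arbo/models/42/articles" will return a status 500
  When the model 42 synchronization is triggered
  Then no message is published on the article-events topic
```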
6. Implement and Validate
- Write the .feature file using exact step patterns from step 2 and matching the existing project's style (scenario naming, Background usage, tags, data format conventions).
- If no feature convention exists, default to src/test/resources/features with @SelectClasspathResource("features").
src/test/resources/features.@SelectClasspathResource("features") - Run the tests and inspect the output. The test output is the ground truth for step
validity — no amount of visual inspection replaces it. Look specifically for these errors:
  - "step(s) are undefined" + "You can implement these steps" — means the step text doesn't match any registered step definition. This is the most common mistake and it must be fixed before the task is considered done. Go back to the per-module references/steps-*.md files, find the correct pattern, and rewrite the step.
  - UndefinedStepException — same root cause, different presentation.
  - ParseError — malformed Gherkin syntax.
  - Tests run: 0 — runner or feature discovery is broken. Accept functional assertion failures (e.g. expected 200 but got 404) — they're expected in BDD when the product behavior isn't implemented yet.
- Repeat until every requested behavior from step 1 is present and zero undefined-step errors remain in the test output.
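As an illustration of that fix loop: an invented step gets flagged as undefined and is then replaced with a pattern actually found in the reference file. Both step lines here are hypothetical examples, not real Tzatziki patterns:

```gherkin
# Before: invented step text, flagged as "1 step(s) are undefined" in the output
When I send a GET request to "/users/42"
# After: rewritten to match a pattern listed in references/steps-http.md
# (illustrative phrasing; always copy the real pattern from the reference file)
When we call "/users/42"
```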
Running a single feature from the CLI: When you need to target one specific feature file or
scenario line, use cucumber.features as the selector — not -Dtest=... or cucumber.filter.name.
Read references/cli-execution.md for the full details, because cucumber.features triggers
standalone Cucumber execution that bypasses the runner's @ConfigurationParameter annotations.

Success Criteria
- At least one existing test was run and confirmed operational before new scenarios were written.
- The plan was presented to the user and approved before any .feature file was written.
- Generated scenarios cover 100% of the functional behaviors the user explicitly requested.
- All step text comes from real Tzatziki step definitions (no invented steps).
- Tests were run and the output contains zero "undefined step" errors. This is the definitive
  validation — if the test output says "step(s) are undefined", the task is not done.
- Any required runner or Spring bootstrap files are in place.
- Tests are discovered and executed with the correct build tool command.
- Edge-case scenarios were identified, presented to the user, and included only if approved.
- Any remaining failures are functional assertions, not technical setup problems.
Reference Files
Read these when needed — they contain templates and detailed guidance that would clutter
the main workflow:
Per-Module Step References (read based on detected dependencies)
- references/steps-core.md — ObjectSteps: variables, assertions, data manipulation, type conversions. Always read this — core steps are used in every Tzatziki project.
- references/steps-http.md — HttpSteps: HTTP mocking, request/response assertions, REST API testing. Read when the project uses tzatziki-http.
- references/steps-spring.md — SpringSteps: Spring context, properties, bean manipulation. Read when the project uses tzatziki-spring.
- references/steps-spring-jpa.md — SpringJPASteps: JPA entity management, database fixtures, persistence assertions. Read when the project uses tzatziki-spring-jpa.
- references/steps-spring-kafka.md — KafkaSteps: Kafka topic management, message producing/consuming, async assertions. Read when the project uses tzatziki-spring-kafka.
- references/steps-spring-mongodb.md — SpringMongoSteps: MongoDB collection management, document fixtures, query assertions. Read when the project uses tzatziki-spring-mongodb.
- references/steps-opensearch.md — OpenSearchSteps: index management, document indexing, search assertions. Read when the project uses tzatziki-opensearch.
- references/steps-logback.md — LoggerSteps: log assertion, log level management. Read when the project uses tzatziki-logback.
- references/steps-mcp.md — McpSteps: MCP server testing, tool invocation, AI integration. Read when the project uses tzatziki-test-mcp.
Other References
- references/bootstrap-templates.md — JUnit 5 runner and Spring @CucumberContextConfiguration templates. Read when you need to create a new runner or Spring bootstrap class.
- references/cli-execution.md — How to run a single feature or scenario from the CLI using cucumber.features, including the Maven/Gradle property-mirroring mechanism. Read when you need to target a specific .feature file instead of running the full suite.