Welcome to the Divekit documentation. Choose a section to get started.
Divekit Documentation
- 1: Quick Start
- 1.1: Hello there 👋
- 1.2: Installation
- 1.3: First steps
- 1.4: Individualization
- 1.5: Distribution
- 1.6: Glossary
- 2: Divekit CLI
- 2.1: divekit install
- 2.2: divekit init
- 2.3: divekit doctor
- 2.4: divekit distribute
- 2.5: divekit patch
- 2.6: divekit index
- 2.7: divekit overview
- 2.8: divekit config
- 2.9: divekit passcheck
- 2.10: divekit update
- 2.11: divekit run
- 3: Development
- 3.1: Architecture
- 3.1.1: Overview
- 3.1.2: Components
- 3.1.3: Configuration
- 3.2: Contributing
- 3.2.1: Development Setup
- 3.2.2: Error Handling
- 3.2.3: Contributing Guidelines
- 3.3: Work in Progress
- 3.3.1: Config Redesign
- 3.3.2: Deployment
- 3.4: Testing
- 3.4.1: Go Testing Guide
- 3.4.2: Testrepo
- 4: Archive
- 4.1: Access Manager
- 4.2: Access Manager 2.0
- 4.3: Automated Repo Setup
- 4.4: Divekit Language Plugin
- 4.5: Divekit Language Server
- 4.6: Evaluation Processor
- 4.7: Operator
- 4.8: Passchecker
- 4.9: Plagiarism Detector
- 4.10: Repo Editor
- 4.11: Report Mapper
- 4.12: Report Visualizer
- 4.13: Test Library
- 4.14: Test page generator
1 - Quick Start
Divekit is a command-line tool for managing individualized programming assignments at scale. It helps educators create, distribute and evaluate programming exercises for large groups of students.
Key Features
- Assignment Individualization: Generate unique variations of programming assignments for each student
- Automated Distribution: Create and manage GitLab repositories for students automatically
- Test Integration: Built-in support for automated testing and evaluation
- Bulk Operations: Efficiently manage assignments for large classes
- Access Control: Manage repository access rights and permissions
Benefits
- Prevent Plagiarism: Each student receives a slightly different version of the assignment
- Save Time: Automate repetitive tasks like repository setup and access management
- Ensure Fairness: Standardized testing and evaluation across all variations
- Scale Easily: Handle large classes with minimal additional effort
Use Cases
Divekit is primarily used in educational settings where:
- Programming assignments need to be distributed to many students
- Each student should receive an individualized version
- Automated testing and evaluation is desired
- Manual administrative overhead should be minimized
The tool consolidates functionality that was previously spread across multiple separate tools into a single, easy-to-use CLI application.
1.1 - Hello there 👋
What is Divekit?
Divekit is a command-line tool for managing individualized programming assignments at scale. It helps educators create, distribute and evaluate programming exercises for large groups of students.
Visual overview: from origin to distributed repositories
Locally:
my-assignment/
├── .divekit/
├── src/$Vehicle$.java   # placeholder - used by individualization
├── test_sandbox/        # filter - excluded from student repos
└── README.md
GitLab: “Individualized repositories”:
gitlab.example.com/.../my-group/ # Code repositories (main target)
├── my-assignment-4d2c0752-f400-4690-9bac-97c927e68e17/
│   ├── src/Car.java   # $Vehicle$ → Car
│   └── README.md
├── my-assignment-4d243142-292b-4b95-abbb-af13f102c857/
│   ├── src/Bike.java  # $Vehicle$ → Bike
│   └── README.md
└── my-assignment-8153ded6-ee91-4135-b59e-ace2625f83c3/
    ├── src/Plane.java # $Vehicle$ → Plane
    └── README.md
GitLab: “Secure pipeline” (students cannot access):
gitlab.example.com/.../my-sandbox/
├── my-assignment-4d2c0752-f400-4690-9bac-97c927e68e17/
│   ├── src/Car.java   # $Vehicle$ → Car
│   ├── test/
│   └── README.md
├── my-assignment-4d243142-292b-4b95-abbb-af13f102c857/
│   ├── src/Bike.java  # $Vehicle$ → Bike
│   ├── test/
│   └── README.md
└── my-assignment-8153ded6-ee91-4135-b59e-ace2625f83c3/
    ├── src/Plane.java # $Vehicle$ → Plane
    ├── test/
    └── README.md
Who needs Divekit?
Divekit is designed for different stakeholders in programming education:
Course Instructors
Primary users who:
- Create and manage programming assignments
- Need to distribute exercises to large groups of students
- Want to prevent plagiarism through individualization
- Need efficient ways to evaluate student submissions
Benefits
- Automated repository management
- Built-in individualization system
- Integrated testing capabilities
- Bulk operations for large classes
Teaching Assistants
Support staff who:
- Help manage student repositories
- Assist with assignment evaluation
- Provide technical support to students
Benefits
- Standardized repository structure
- Automated access management
- Consistent testing environment
- Clear overview of student progress
Students
End users who:
- Receive individualized assignments
- Submit their solutions through Git
- Get automated feedback on their work
Benefits
- Personal GitLab repositories
- Immediate feedback through automated tests
- Clear assignment structure
- Consistent submission process
1.2 - Installation
Prerequisites
Before installing Divekit, ensure your system meets the following requirements:
System Requirements
- Operating System: Linux, macOS, or Windows
- GitLab: Access to a GitLab instance with API permissions
GitLab Setup
- Access to a GitLab instance
- Admin rights to create groups and repositories
- Personal Access Token with `api` scope
Creating a Personal Access Token
- Navigate to your GitLab profile settings
- Go to “Access Tokens”
- Create a new token with required scopes
- Save the token securely - you’ll need it during installation
Storage Requirements
- Minimum 1GB free disk space
- Additional space for repositories (varies by project size)
Network Requirements
- Stable internet connection
- Access to GitLab API endpoints
- No blocking firewalls for HTTP requests
Optional Requirements
- Docker: For running tests in containers
- Maven/Gradle: For Java project support
- IDE: Any Git-compatible IDE for development
- Java: Version 11 or higher (only required for UMLet diagram processing)
Installation
Download the latest release:
- Navigate to Divekit CLI Releases
- Download a built version for your operating system
Install Divekit:
Navigate to the download directory and run the installation script:
./divekit.exe install # On Windows
./divekit install # On Linux/macOS
Environment Setup
After installation, configure your GitLab hosts and tokens using the `divekit config` command. Tokens are stored securely in your operating system’s credential store (macOS Keychain, Windows Credential Manager, Linux Secret Service).
The optional `DIVEKIT_MEMBERS` environment variable sets the default path to your members file.
GitLab Host and Token Configuration
Use `divekit config set @hosts` to add your GitLab instance(s). This records the host configuration in `~/.divekit/hosts.json` and stores the token in the OS credential manager.
Steps
Create a Personal Access Token (if not already done):
- Go to your GitLab profile > Access Tokens.
- Create a token with `api` scope.
- Copy the token.
Add the Host:
# Simplified syntax with automatic token prompt
divekit config set @hosts git-nrw https://gitlab.git.nrw/
# Or with direct token (for non-interactive use)
divekit config set @hosts git-nrw https://gitlab.git.nrw/ --token glpat-xxx...
# Traditional syntax
divekit config set @hosts.git-nrw.host https://gitlab.git.nrw/
divekit config set @hosts.git-nrw.token glpat-xxx...
If the token is not provided, you are prompted for it; it is then stored in the OS credential manager.
Verify:
divekit config get @hosts.git-nrw
Output:
Host: https://gitlab.git.nrw/
Token: [stored]
Multiple Hosts
Add additional instances:
divekit config set @hosts gitlab-com https://gitlab.com/ --token glpat-yyy...
Member List Configuration
`DIVEKIT_MEMBERS`: Optional environment variable for the default path to `members.json`. Set it as a system environment variable:
Linux/macOS
Add to `~/.bashrc` or `~/.zshrc`:
export DIVEKIT_MEMBERS="/path/to/members.json"
Then:
source ~/.zshrc
Windows
Add via System Properties > Environment Variables > User Variables:
- Variable name: `DIVEKIT_MEMBERS`
- Variable value: `C:\path\to\members.json`
Alternatively, set per-distribution:
divekit config set @origin.my-distribution.members.path "/path/to/members.json"
Security Notes
- Tokens are stored in the operating system’s credential manager (Keychain on macOS, Credential Manager on Windows, Secret Service on Linux); `~/.divekit/hosts.json` stores only host metadata and a token-key reference.
- Restrict permissions: `chmod 600 ~/.divekit/hosts.json` (Linux/macOS)
- Use minimal scopes (`api` for full access, `read_api` for read-only).
- For CI/CD, use temporary tokens or pipeline variables (override with the `GITLAB_TOKEN` env var).
- `hosts.json` is covered by .gitignore by default.
- Avoid .env files for tokens; use the config instead.
Verify Installation
Run the doctor command to verify your setup:
divekit doctor
This will check if:
- Divekit is properly installed
- Required environment variables are set
- System requirements are met
Troubleshooting
If you encounter any issues:
- Run `divekit doctor` for detailed diagnostics
- Verify host configuration: `divekit config get @hosts`
- Check tokens in the OS keychain:
  - macOS: Keychain Access app > search for “divekit-cli”
  - Windows: Credential Manager > Windows Credentials > search for “divekit-cli”
  - Linux: `secret-tool search divekit-cli <hostname>`
- Check logs in `~/.divekit/logs`
- Ensure correct permissions for the `~/.divekit/` directory (`chmod 700 ~/.divekit` on Linux/macOS)
1.3 - First steps
First Steps After Installation
Create Your First Assignment
- Create and navigate to a new directory:
mkdir my-assignment
cd my-assignment
- Initialize a new Divekit project:
divekit init
Assignment Content Creation
- Add your assignment files to the repository
- Mark solution parts in your code:
public class Example {
    public int solve() {
        //unsup
        return actualSolution; // code between the markers is removed from student repositories
        //unsup
    }
}
- Add variables for individualization:
public class $EntityClass$ {
// ...
}
Assignment Distribution
- Verify your setup:
divekit doctor
- Distribute the repositories:
divekit distribute
Next Steps
For more detailed information, please refer to:
- Configuration options in the Configuration section
- Detailed system requirements in the Prerequisites section
1.4 - Individualization
Overview
Divekit individualizes each student’s (or group’s) repository during distribution using two configuration files: `variation.json` and `individualization.json`. They define object/relationship variants, logic options, and per-member assignments, including structural changes, file selection by logic, and Lua-based customizations. Each distributed repository is unique while the core task stays consistent.
Configuration Files
Configs live in `.divekit/distributions/<distribution-name>/`:
- variation.json: Declares available variations for objects, relations, and logic.
- individualization.json: Assigns these variations to members (fixed or random) and sets per-member options.
They are created/copied during `divekit init` and can be edited directly or via `divekit config`.
variation.json Structure
The file lists arrays of `objects`, `relations`, and `logic` definitions:
{
"objects": [{
"ids": "Vehicle",
"objectVariations": [
{"name": "Car", "fields": [{"name": "speed", "type": "int", "values": ["100", "150"]}]},
{"name": "Truck", "fields": [{"name": "weight", "type": "int", "values": ["2000", "3000"]}]}
]
}],
"relations": [{
"id": "relation1", "type": "association", "source": "entity1", "target": "entity2",
"options": ["one-to-one", "one-to-many"]
}],
"logic": [{
"id": "logic1", "type": "businessLogic", "location": "src/logic",
"options": ["simple", "complex"]
}]
}
- objects: Entity variations (e.g., class alternatives with fields).
- relations: Relationship variants (e.g., multiplicities or types).
- logic: Logic variants that select files or scripts (e.g., `ShoppingCart_simple.java`).
Variables may use placeholders like `{{...}}` or Lua for dynamic values.
individualization.json Structure
This file assigns variations to members and controls application:
{
"version": "1.0",
"logicId": "logic1",
"objectAssignments": [
{"memberId": "student1", "objectVariations": {"entity1": "Car"}, "relationOptions": {"relation1": "one-to-many"}},
{"memberId": "student2", "objectVariations": {"entity1": "Truck"}, "randomLogic": true}
],
"globalSettings": {"delimiter": "$", "warnUnresolved": true}
}
- logicId: Global or per-member logic variant.
- objectAssignments: Per-member object and relation choices (fixed or random).
- globalSettings: Placeholder delimiter and unresolved-token warnings.
Without per-member overrides, values are chosen randomly from `variation.json`.
Individualization Process
Individualization runs during `divekit distribute`:
- Load `variation.json` and `individualization.json`.
- Resolve each member’s variation set (fixed or random).
- Generate content:
  - Replace placeholders (e.g., `$EntityName$` → “Car”).
  - Include/rename files by logic selection (e.g., `_logic1`).
  - Apply relation-driven code changes.
  - Run Lua for custom generation.
- Validate unresolved placeholders (warn if enabled).
- Create the new repo with the individualized content.
Using Placeholders
Placeholders in origin files are replaced during individualization:
public class $EntityName$ {
private String name = "$DefaultName$";
private int $AttributeName$ = $RandomNumber$;
// Relation example
public $RelatedEntity$ getRelated() {
return related;
}
}
- `$EntityName$`: Selected object name (e.g., “Car”).
- `$DefaultName$`: From object-variation fields.
- Relations may generate extra code or files.
For logic variants, files like `ShoppingCart_$LogicId$.java` are renamed or included based on the selected logic.
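On disk, such a layout might look like this (illustrative file names, following the “simple”/“complex” options from the examples above):
src/logic/ShoppingCart_simple.java  # kept when logic "simple" is selected
src/logic/ShoppingCart_complex.java # kept when logic "complex" is selected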
Creating the Configuration Files
During `divekit init`:
- `variation.json`: Empty skeleton with default sections.
- `individualization.json`: Copied from user config or created with defaults (optionally copied from another distribution).
Manual Editing
Edit both JSON files in `.divekit/distributions/<name>/`. Validate with `divekit doctor`.
individuals.json (optional)
Created during distribution to record assigned variations per member for reuse (e.g., patches). Not created at init.
Examples
Simple Object Variation
- Define object “Person” with variations “Student” (age 18–25) and “Teacher” (30–50). Assign “Student” randomly to half the members. Result: corresponding class and fields per repo.
Logic Variant with Lua
- Logic options “simple” vs “complex” with `Algorithm_simple.java`/`Algorithm_complex.java`. Lua generates data based on selected complexity.
Relation Individualization
- Relation “hasFriend”: choose “one-to-one” or “one-to-many”. Fixed “one-to-many” yields list-based code.
Quality Assurance and Troubleshooting
- Unresolved tokens: Use `--warn-unresolved-tokens true` to log warnings.
- Validation: Run `divekit doctor` to check configs.
- Dry runs: Use the simulated provider (`--provider simulated`).
- Reproducibility: Assigned variations persist in `individuals.json` (e.g., for late joiners).
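A cautious pre-flight run could combine these checks before touching GitLab (a sketch using the flags documented for `divekit distribute`; the distribution name is illustrative):
divekit doctor
divekit distribute --distribution my-assignment --provider simulated --warn-unresolved-tokens true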
Best Practices
- Start simple; grow complexity gradually.
- Use Lua for advanced generation.
- Test with a small member list first.
- Document variation IDs and logic options.
- Provide fallbacks to avoid unresolved tokens.
For advanced setups, see the full schema in code or the RE docs under `docs/re`.
1.5 - Distribution
Overview
Divekit can distribute your assignment to multiple repositories on GitLab, creating individualized versions for each student or group. This process includes:
- Creating code repositories for each student/group
- Optionally creating secure pipeline repositories for automated testing
- Assigning the correct permissions to students
- Individualizing the content based on your configuration
- Setting up automated CI/CD pipelines
Initialize Distribution
Before you can distribute assignments, you need to initialize a distribution configuration:
divekit init
This command will guide you through creating a new distribution and create a `.divekit/distributions/<name>/config.json` file in the current directory.
The distribution name is used to identify the distribution configuration.
Distribution Guide
After initializing your distribution, use the distribute command to start the distribution process:
divekit distribute
Or with the `--distribution` flag:
divekit distribute --distribution <distribution-name>
The command will:
- Load or prompt for the specified distribution configuration
- Ask for members to process
- Check if all configured members exist on GitLab
- Show you a summary of what will be created
- Create the repositories after your confirmation
Example Flow
# First, initialize a new distribution
$ divekit init
[interactive prompts for configuration...]
# Then distribute the assignment
$ divekit distribute --distribution my-assignment
--- Distribution Plan ---
Distribution Name: my-assignment
Target: https://gitlab.git.nrw/
- Repo Name Template: assignment-{{uuid}}
- Target Group ID: 12345
- Members to process: 5 members across 2 groups
Would create 2 repositories with name "assignment-{{uuid}}" and assign 5 members.
Secure pipeline enabled - will also create sandbox repositories.
? Continue? [Y/n]: y
Creating main repositories at group #234567:
[██████████████████████████████████████████████████] 100% (2/2)
Creating secure pipeline repositories at group #345678:
[██████████████████████████████████████████████████] 100% (2/2)
Setting up repository linking (main → sandbox):
[██████████████████████████████████████████████████] 100% (2/2)
Assigning members:
[█████████████████████████                         ] 50% (1/2)
What Happens During Distribution?
- Divekit creates a new code repository for each student/group
- If secure pipeline is enabled, sandbox repositories are created in a separate group
- Repository contents are individualized based on your variable configuration
- Students are assigned with appropriate permissions (developer/maintainer)
- Repository linking is established between code and sandbox repositories
- CI/CD pipelines are configured for automated testing
- Each repository gets a unique identifier via UUID
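The `{{uuid}}` placeholder in the name template is what produces names like `assignment-4d2c0752-f400-4690-9bac-97c927e68e17`. To double-check the configured template, a sketch using the documented config alias (the distribution name is illustrative):
divekit config get @origin.my-assignment.name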
Next Steps
- Learn more about configuration options
- Understand how to individualize assignments
- Check the CLI commands reference for advanced options
1.6 - Glossary
This glossary provides definitions for terms used throughout the Divekit documentation.
| Term | Definition |
|---|---|
| Divekit | A command-line tool for managing individualized programming assignments at scale. |
| CLI | Command Line Interface - the primary way to interact with Divekit. |
| Origin Repository | The source repository containing the assignment template and Divekit configuration. |
| Distribution | A specific configuration for creating and managing student repositories. |
| Individualization | The process of creating unique variations of assignments for each student. |
| Secure Pipeline | Optional test repositories that run automated evaluations separately from student code. |
| Members | Students or users who will receive access to distributed repositories. |
| UUID | Universally Unique Identifier - used to create unique repository names. |
| Linking | The connection between code and test repositories in secure pipeline setups. |
| Variables | Placeholders in assignment files that are replaced with random values during individualization. |
| Remote | Configuration for connecting to different GitLab instances or environments. |
2 - Divekit CLI
2.1 - divekit install
Installs the divekit CLI and the required modules by creating a symbolic link in the system PATH with proper executable permissions.
Usage
divekit install [flags]
Flags
- `--help`: Show help information
Description
The `divekit install` command makes the Divekit CLI globally available by creating a symbolic link to the executable in a directory included in your system’s PATH. It supports cross-platform installation and handles permissions appropriately.
Key features:
- Creates system PATH symlinks
- Cross-platform support (macOS, Linux, Windows)
- Proper permission handling
- Displays current version after installation
Examples
Basic Installation
$ divekit install
Installing divekit in home directory...
[✓] divekit installed in home directory
[✓] divekit executable added to PATH
Verify Installation
$ divekit --version
divekit version v1.0.0
Updates (Planned)
- Check for newer versions: `divekit update --check` (displays current and available versions)
- Automatic update application (future release)
- Support for package managers like Homebrew or Snap for easier distribution
Cross-Platform Support
- macOS: Symlink created in `/usr/local/bin/divekit`
- Linux: Symlink created in `~/bin/divekit` or `/usr/local/bin/divekit` (depending on permissions)
- Windows: Adds to PATH via environment variable (requires admin rights for system-wide)
Notes
- Installation currently requires manual PATH setup on some systems if the symlink directory is not in PATH
- Exit codes indicate success/failure
- Supports both interactive and non-interactive modes
- Reinstallation handles existing symlinks gracefully
2.2 - divekit init
Initializes a new Divekit distribution by creating the necessary configuration files with the simplified Version 2.0 format under `.divekit/`.
Usage
divekit init [flags]
Flags
- `--help`: Show help information
- No other flags; the command is interactive
Description
The `divekit init` command guides you through creating a new distribution configuration, or editing or copying an existing one. It creates the `.divekit/distributions/<name>/config.json` file with sensible defaults and allows customization through an interactive flow. All generated files are placed under the `.divekit/` directory to ensure consistency and avoid root-level clutter.
Key features:
- Interactive initialization with validation and clear error messages
- Options to create new, edit existing, or copy from another distribution
- Automatic linking between main and test targets if secure pipeline is enabled
- Copy of additional files like `individualization.json`, `variation.json`, `individuals.json` (with prompts)
- Defaults to configuration version 2.0
Initialization Flow
- Choose Action: Create new distribution, edit existing, or copy from another
- Distribution Name: Prompt for name (checks for conflicts)
- Repository Template: Customize naming pattern (e.g., `assignment-{{uuid}}`)
- Group IDs: Enter GitLab group ID for code repositories
- Secure Pipeline: Optional (y/N); if enabled, prompt for test group ID
- Members: Path to `members.json` (defaults to `$DIVEKIT_MEMBERS/members.json`) and permissions (developer/maintainer)
- Save: Creates `.divekit/distributions/<name>/config.json` and optional files
Examples
Create a new distribution (interactive)
$ divekit init
Welcome to Divekit! This will help you create a new distribution.
What do you want to do?
> Create new distribution
Edit existing distribution
Copy from another distribution
? Distribution name: my-assignment
? Repository name template: assignment-{{uuid}}
? GitLab group ID for code repositories: 123456
? Enable secure pipeline? (y/N): y
? GitLab group ID for test repositories: 234567
? Path to members file: $DIVEKIT_MEMBERS/members.json
? Member permissions (developer/maintainer): developer
Configuration created at ./.divekit/distributions/my-assignment/config.json
Updating project index...
Edit an existing distribution
$ divekit init
What do you want to do?
Create new distribution
> Edit existing distribution
Copy from another distribution
Select distribution:
> my-assignment
ws2024-java
Loaded ./.divekit/distributions/my-assignment/config.json
# You will be prompted for the same fields and can update values.
Saving configuration...
Updating project index...
Copy from another distribution (with optional file copy)
$ divekit init
What do you want to do?
Create new distribution
Edit existing distribution
> Copy from another distribution
Select source distribution:
> ws2024-java
ws2023-java
? New distribution name: ws2024-java-b
? Copy individualization/variation configs if present? (y/N): y
? Copy individuals.json from source distribution? (y/N): n
Created:
- ./.divekit/distributions/ws2024-java-b/config.json
- ./.divekit/distributions/ws2024-java-b/variation.json (copied)
- ./.divekit/distributions/ws2024-java-b/individualization.json (copied)
Updating project index...
Generated Files
- `.divekit/distributions/<name>/config.json`: Main configuration
- Optional: `.divekit/distributions/<name>/individualization.json`, `variation.json`, `individuals.json`
- No root-level files or directories like `config/` or `origin/` are created
Generated Configuration Example
{
"version": "2.0",
"remote": "default",
"groupId": 123456,
"name": "assignment-{{uuid}}",
"members": {
"path": "$DIVEKIT_MEMBERS/members.json",
"permissions": "developer"
},
"securePipeline": {
"enabled": true,
"groupId": 234567
}
}
Notes
- No `config/`, `origin/`, or root-level `members.json` directories/files are created; everything lives under `.divekit/`
- The CLI validates inputs and writes the distribution config by default
- After saving, the CLI updates the local project index if available
- For copy operations, confirm copying of sensitive files like `individuals.json`
- If secure pipeline is enabled, automatic linking from main to test is added
2.3 - divekit doctor
`divekit doctor` runs a series of focused checks to validate your local setup. It helps identify misconfigurations early and prints PASS/FAIL results for each check.
Usage
divekit doctor
What it currently checks
The current implementation performs these checks:
- Divekit Home Directory
  - Ensures your Divekit home directory exists and is initialized.
- Configuration Availability
  - Verifies that a valid Divekit home can be determined for configuration access.
- GitLab Connection (basic)
  - Attempts to read a token from the OS credential store and a default host from `~/.divekit/hosts.json`, then performs a basic API connectivity check.
  - Notes:
    - Tokens are retrieved from your OS credential manager (Keychain on macOS, Credential Manager on Windows, Secret Service on Linux).
    - `hosts.json` stores host metadata and a token-key reference, not the token itself.
- Unresolved Token Detection (sanity)
  - Runs a lightweight sanity check for unresolved template tokens in example content using your effective token delimiters.
Output format
The command prints a compact PASS/FAIL report:
--- Doctor Results ---
PASS Divekit Home Directory
PASS Configuration File
PASS GitLab Connection & Token (Basic Check)
PASS Unresolved Token Detection
--------------------
If a check fails, a FAIL line is printed with an error explanation.
Preparing your environment for doctor
To enable the GitLab connection check:
- Configure a GitLab host and token
# Simplified (prompts for token and stores it in OS keychain)
divekit config set @hosts git-nrw https://gitlab.git.nrw/
# Or non-interactive with token flag
divekit config set @hosts git-nrw https://gitlab.git.nrw/ --token glpat-xxx...
- Verify the host entry
divekit config get @hosts.git-nrw
Troubleshooting
- Token not found:
  - Add a host as shown above or open your OS credential manager and verify an entry exists for “divekit-cli”.
- Base URL unknown:
  - Ensure `~/.divekit/hosts.json` contains a default host or the host you intend to use.
- Keychain inspection:
  - macOS: Keychain Access → search “divekit-cli”
  - Windows: Credential Manager → Windows Credentials → search “divekit-cli”
  - Linux: `secret-tool search service divekit-cli`
Notes and roadmap
- The doctor command focuses on essential checks. Additional in-depth validations (e.g., distribution remotes, module presence) may be added in future iterations.
- No subcommands are currently available; the legacy “doctor list/check …” interface mentioned in older drafts is not implemented.
2.4 - divekit distribute
Creates multiple repositories on GitLab based on the Version 2.0 configuration in `config.json` under `.divekit`.
Usage
divekit distribute [distribution-name] [flags]
Flags
- `--base-url <url>`: Base URL for the GitLab instance
- `--token <value>`: GitLab access token (required for GitLab provider)
- `--distribution <name>`, `-d <name>`: Specify which distribution to use (or as argument)
- `--members-file <path>`: Path to a file with member names (one per line)
- `--add`: Add users to an existing distribution
- `--fresh`: Delete remotes.json/individuals.json before distributing
- `--message <text>`, `-m <text>`: Commit message for initial commit
- `--max-workers <int>`: Maximum number of concurrent workers (0 = default: 3)
- `--rate-limit-delay <duration>`: Delay between GitLab API requests (0 = default: 100ms)
- `--provider <type>`: Provider to use: gitlab (default), local (filesystem), simulated (dry-run)
- `--concurrent`: Enable concurrent execution (default: true)
- `--warn-unresolved-tokens <mode>`: Warn about unresolved tokens: auto|true|false (overrides config)
- `--dangerously-delete-group-projects`: Enable deletion of existing projects without prompting (use with caution)
- `--help`: Show help information
Credentials
- Tokens are stored in your operating system’s credential manager (macOS Keychain, Windows Credential Manager, Linux Secret Service).
- Configure hosts via:
divekit config set @hosts git-nrw https://gitlab.git.nrw/ # prompts for token and stores it in OS keychain
- The flags `--token` and `--base-url` are optional overrides. If omitted, Divekit resolves credentials and base URL from your configured hosts (hosts.json contains only host metadata and a token-key reference).
Description
The divekit distribute
command creates individualized repositories for each student or group based on your configuration. It is idempotent, meaning it skips existing repositories and only processes new members. The command supports concurrent processing with configurable worker pools and automatic rate limit handling with exponential backoff.
Key features:
- Repository creation and configuration from origin templates
- Member permission assignment with validation
- Idempotent operations for safe re-execution
- Concurrent processing (default 3 workers)
- Automatic rate limit handling with exponential backoff (up to 5 retries)
- Custom commit message support
- Provider support (GitLab, local, simulated for dry-run)
- Individualization using variables and content variations
- Linking between code and sandbox repositories
The flow:
- Validates the configuration and member data
- Loads members from file or config
- Creates code repositories in the specified GitLab group (or simulates)
- Optionally creates secure pipeline repositories
- Sets up repository linking between code and test repos
- Assigns members with appropriate permissions
- Applies individualization rules to content
- Handles unresolved tokens with configurable warnings
Examples
Distribute with interactive confirmation
$ divekit distribute --distribution my-assignment
starting distribution (provider="gitlab", add=false, fresh=false, workers=3, concurrent=true, non-interactive=false, delete=false)
Loading configuration: ./.divekit/distributions/my-assignment/config.json
Resolving credentials from host config...
Pre-checks:
[✓] Members loaded (27)
[✓] Configuration valid
[✓] Targets detected: main, test
Plan (summary):
- Create 27 code repositories in group #123456
- Create 27 test repositories in group #234567
- Link code→test repositories
- Assign members with role: developer
Continue? [y/N]: y
Creating code repositories...
Creating test repositories...
Linking repositories...
Assigning members...
Done.
Adding to existing distribution (latecomers)
$ divekit distribute --distribution my-assignment --add --members-file ./new-members.txt
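Dry run before distributing (a sketch using the documented simulated provider; no remote changes are made):
$ divekit distribute --distribution my-assignment --provider simulated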
What Gets Created
- Code Repositories: Main repositories where students work on their assignments, with individualized content
- Test Repositories: Optional secure repositories for automated testing (if securePipeline enabled)
- Repository Links: Automatic linking between code and test repositories
- Member Assignments: Students assigned with configured permissions (developer/maintainer)
- Initial Commits: With custom message and origin content variations
2.5 - divekit patch
The `divekit patch` command applies local file changes to all remote repositories of a chosen distribution. It uses the same distribution configuration and authentication resolution as `divekit distribute`, including token retrieval from the OS credential manager.
Usage
divekit patch [files...] [flags]
- At least one file path must be provided.
- If `--distribution` is omitted, you will be prompted to select one (interactive flow).
Flags
- `--distribution <name>`, `-d <name>`: Distribution to patch (prompt if omitted)
- `--dry-run <mode>`: Simulate changes without committing
  - empty or `true`: simulated provider (no remote changes)
  - `fs`: use local filesystem provider for previewing changes
- `--message <text>`, `-m <text>`: Commit message (default is a timestamped “Patch applied …”)
- `--warn-unresolved-tokens <auto|true|false>`: Warn about unresolved tokens in content (default: `auto`)
- Concurrency:
  - `--max-workers <int>`: Max concurrent workers (0 = default: 3)
  - `--rate-limit-delay <duration>`: Delay between API requests (0 = default: 100ms)
  - `--concurrent <bool>`: Enable/disable concurrency (default: true)
- GitLab credentials (usually resolved automatically):
  - `--token`, `-t`: GitLab token (optional; retrieved from OS credential store if not provided)
  - `--remote`, `-r`: GitLab base URL (optional; resolved from host configuration)
- `--help`: Show help
Notes:
- Tokens are stored in the OS credential manager (Keychain on macOS, Credential Manager on Windows, Secret Service on Linux). Host metadata and token-key references are stored in `~/.divekit/hosts.json`. Configure hosts via:
divekit config set @hosts git-nrw https://gitlab.git.nrw/ # prompts for token, stores in OS keychain
Description
For each remote repository in the selected distribution:
- The command loads the provided patch files from your local working directory.
- It individualizes the content per target/remote (respecting distribution tokens and routing markers).
- It compares against the current remote content and creates commits with only the necessary changes.
Patches are applied safely and idempotently:
- Missing files are created.
- Changed files are updated.
- Unchanged files are skipped.
Content is individualized per repository (e.g., tokens replaced, target routing markers stripped) before comparison and commit.
Examples
Apply two files to the βsupervisorβ distribution
divekit patch --distribution supervisor E2WhateverTests.java pom.xml -m "Make some tests optional"
Dry-run (simulate without committing)
# Simulated provider (no remote changes, logs planned actions)
divekit patch --distribution supervisor E2WhateverTests.java pom.xml --dry-run true
# Filesystem preview mode (fs)
divekit patch --distribution supervisor E2WhateverTests.java pom.xml --dry-run fs
Interactive distribution selection
divekit patch E2WhateverTests.java pom.xml
# Prompts to select a distribution if multiple are detected
Control concurrency
divekit patch --distribution supervisor E2WhateverTests.java --max-workers 5 --rate-limit-delay 250ms
Handle unresolved tokens more strictly
divekit patch --distribution supervisor README.md --warn-unresolved-tokens true
Behavior details
- Individualization: Patch content is individualized per project using the configured token delimiters and per-project selections before diff/commit.
- Routing markers: Target/no-repo routing markers (as configured) are stripped from patch content before commit.
- Commit message: Uses `-m/--message` if provided; otherwise uses a default, timestamped message.
Troubleshooting
- “No remote projects found”:
  - Ensure the distribution has remotes configured (remotes.json) and that you selected the correct distribution.
- “Credentials not resolved”:
  - Configure a host and token via: `divekit config set @hosts git-nrw https://gitlab.git.nrw/`
  - Verify the token exists in your OS credential manager under “divekit-cli”.
- API rate limits:
  - Increase `--rate-limit-delay` and/or reduce `--max-workers`, or disable concurrency with `--concurrent=false`.
- Increase
2.6 - divekit index
Manage the Divekit project index, which allows you to quickly find and navigate to your Divekit projects.
Usage
divekit index <command> [flags]
Available Commands
add – Add projects to the index
- Usage: `divekit index add [path]`
- Flags:
  - `--find-up`: Find and add projects in the current or parent directories
  - `--find-down`: Find and add projects recursively in the specified or current directory

search – Search for projects in the index
- Usage: `divekit index search [term...] [flags]`
- Flags:
  - `-i`, `--interactive`: Launch in interactive mode (prints selected project path)
  - `--user <name>`: Filter by user
  - `--distribution <name>`: Filter by distribution

refresh – Update the index with current project information
- Usage: `divekit index refresh`
Description
The project index helps you manage multiple Divekit projects efficiently. It maintains a searchable database of your projects, allowing you to:
- Quickly find projects by name, distribution, or user
- Navigate to project directories with auto-completion
- Keep track of all your Divekit projects in one place
Examples
Adding Projects
# Add current project to index
divekit index add
# Add specific project path
divekit index add /path/to/project
# Find and add all projects in current directory tree
divekit index add --find-down
# Find and add projects in parent directories
divekit index add --find-up
Searching Projects
# Interactive search
divekit index search -i
# Search by name
divekit index search "assignment-1"
# Search by distribution
divekit index search --distribution "ws2024"
# Search by user
divekit index search --user "john.doe"
# Combine filters
divekit index search "java" --distribution "ws2024"
Refreshing the Index
# Update all projects in index
divekit index refresh
Index Location
The project index is stored as a SQLite database at `~/.divekit/index.db`. It contains:
- Project paths (absolute)
- Distributions detected per project
- Optional metadata used for searching and filtering
Note:
- The database is managed by the CLI; manual editing is not required or recommended.
- The index is updated via `divekit index add ...` and `divekit index refresh`.
Integration with Shell
The index integrates well with shell commands:
# Navigate to project found interactively
cd $(divekit index search -i)
# Open project in editor
code $(divekit index search "assignment-1")
2.7 - divekit overview
The `divekit overview` command loads a distribution configuration from `.divekit/distributions/<name>/config.json`, resolves credentials, and displays information about remote repositories (e.g., targets, UUIDs, users). With `--serve`, it starts a local web server to present an interactive HTML overview.
Usage
divekit overview [flags]
Flags
- `--distribution <name>`, `-d <name>`: Select the distribution to overview (prompt if omitted)
- `--token <value>`: GitLab token (optional; typically resolved from OS credential store)
- `--remote <url>`: GitLab base URL (optional; typically resolved from host config)
- `--serve`, `-s`: Serve the overview as an HTML page
- `--dry-run`: Perform a dry run without making API calls
- `--help`: Show help
Notes:
- Tokens are stored in your OS credential manager (Keychain on macOS, Credential Manager on Windows, Secret Service on Linux). Host metadata and token-key references are kept in `~/.divekit/hosts.json`. Configure hosts via:
divekit config set @hosts git-nrw https://gitlab.git.nrw/ # prompts for token and stores it in OS keychain
Description
When executed, the command:
- Determines the project root (directory containing `.divekit/`).
- Selects a distribution (via `--distribution` or prompt).
- Loads its config from `.divekit/distributions/<name>/config.json`.
- Resolves credentials for a target (host URL + token).
- Displays or serves an overview of repositories for the distribution.
With `--serve`, an HTTP server is started and the default browser is opened. The HTML template lives in the CLI and is rendered at runtime.
Examples
Basic CLI Output
# Show overview for a chosen distribution (prompt if not provided)
divekit overview
Specific Distribution
divekit overview --distribution students
Serve an HTML Overview
divekit overview --distribution students --serve
# Serving on http://localhost:8080
# Opening browser...
Dry Run (no API calls)
divekit overview --distribution students --dry-run
Troubleshooting
- “Distribution not found”:
  - Ensure you are inside a Divekit project (has `.divekit/`).
  - Verify `.divekit/distributions/<name>/config.json` exists.
- “Credentials not resolved”:
  - Configure a host and token via: `divekit config set @hosts git-nrw https://gitlab.git.nrw/`
  - Check OS keychain entries for “divekit-cli”.
- “Cannot serve/port in use”:
  - Ensure port 8080 is free or set an alternate port if supported in your environment.
2.8 - divekit config
Usage
divekit config <command> [flags]
Available Commands
- `get [key]`: Get a configuration value
- `set [key] [value]`: Set a configuration value
Flags
- `--token <value>`: Set the access token directly (for host URLs, only in the set command; usually resolved from the OS credential manager)
- `--help`: Show help information
Description
The `divekit config` command allows you to view and modify your Divekit configuration settings using the subcommands `get` and `set`. It supports aliases for targeting different configuration files and provides validation for all changes.
Key features:
- Configuration management
- CLI access for getting and setting values
- Schema validation
- Security best practices for sensitive data
- Environment variable override support (e.g., DIVEKIT_MEMBERS)
$ divekit config
Usage:
divekit config <command>
Available Commands:
get Get configuration values
set Set configuration values
For more information about a command, run:
divekit config <command> --help
Configuration Aliases
The `divekit config` command uses special aliases to target different configuration files:
- `@hosts`: Targets the global `hosts.json` file for GitLab remote configurations
- `@origin.<distribution>`: Targets the `config.json` for a specific distribution
- `@origin.<distribution>.remotes.*`: Targets the `remotes.json` for that distribution
Examples
Managing GitLab Hosts
Simplified Hosts Syntax
# Set up a new GitLab host (automatic token prompt)
divekit config set @hosts git-nrw https://gitlab.git.nrw/
# With direct token
divekit config set @hosts git-nrw https://gitlab.git.nrw/ --token glpat-xxx...
# Get host information
divekit config get @hosts.git-nrw
Traditional Syntax
# Set host URL
divekit config set @hosts.git-nrw.host https://gitlab.git.nrw/
# Set token
divekit config set @hosts.git-nrw.token glpat-xxx...
Managing Distribution Configuration
# Set repository name template
divekit config set @origin.my-distribution.name "assignment-{{uuid}}"
# Get current group ID
divekit config get @origin.my-distribution.groupId
# Update member permissions
divekit config set @origin.my-distribution.members.permissions "maintainer"
# Set remote URL for distribution
divekit config set @origin.main.remotes.gitlab.url https://gitlab.com/my-group/my-project
Managing Members
Members are typically managed through the members.json
file referenced in your distribution configuration. The file should contain an array of member objects:
{
"members": [
{"username": "alice", "name": "Alice Johnson"},
{"username": "bob", "name": "Bob Smith"},
{"username": "charlie", "name": "Charlie Brown"}
]
}
Credentials and Security
- Tokens are stored in your operating system’s credential manager (macOS Keychain, Windows Credential Manager, Linux Secret Service). They are NOT written to `hosts.json`.
- The file `~/.divekit/hosts.json` stores only host metadata and a token-key reference (the key used to retrieve the token from the OS credential store).
- Configure hosts using:
# Prompts for token and stores it in the OS credential manager
divekit config set @hosts git-nrw https://gitlab.git.nrw/
# Or provide the token explicitly (useful for non-interactive setups)
divekit config set @hosts git-nrw https://gitlab.git.nrw/ --token glpat-xxx...
- Optional: `DIVEKIT_MEMBERS` can be used to provide a global default path to your members.json (can also be configured per distribution):
export DIVEKIT_MEMBERS="/path/to/members.json"
Configuration Migration (Planned)
- Automatic migration from v1.0 to v2.0 format when reading legacy files
- Schema validation and auto-migration capabilities
- Security warnings for sensitive member data placement with suggested secure alternatives
Configuration File Structure
The config.json
file uses the Version 2.0 flat format:
{
"version": "2.0",
"remote": "default",
"groupId": 123456,
"name": "assignment-{{uuid}}",
"members": {
"path": "$DIVEKIT_MEMBERS/members.json",
"permissions": "developer"
},
"securePipeline": {
"enabled": true,
"groupId": 234567
}
}
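Individual values can be read back with the documented aliases and dot notation, for example (assuming a distribution named my-assignment):
divekit config get @origin.my-assignment.groupId
divekit config get @origin.my-assignment.members.permissions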
Notes
- Supports dot notation for nested keys (e.g., `members.permissions`)
- Invalid values trigger specific error messages with correction guidance
- Automatic backup of configuration files before modifications
- Environment variable overrides for CI/CD scenarios (see the sketch below)
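A minimal CI sketch based on these overrides (the `GITLAB_TOKEN` override is documented in the installation security notes; the secret variable name is assumed):
# Hypothetical CI job steps
export DIVEKIT_MEMBERS="/ci/members.json"
export GITLAB_TOKEN="$CI_DIVEKIT_TOKEN" # assumed CI secret name
divekit distribute --distribution my-assignment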
2.9 - divekit passcheck
Check test results for secure pipeline repositories and generate reports.
Usage
divekit passcheck [flags]
Flags
- `--distribution <name>`: Specific distribution to check (auto-detect if not provided)
- `--detail`: Show detailed test results per student
- `--report`: Generate Excel report
- `--output-path <path>`: Output path for generated reports (default: “./reports/”)
- `--token <token>`: GitLab token
- `--remote <url>`: Remote repository URL (GitLab instance)
Credentials
- Tokens are stored in your operating system’s credential manager (macOS Keychain, Windows Credential Manager, Linux Secret Service).
- Prefer configuring hosts via:
divekit config set @hosts git-nrw https://gitlab.git.nrw/ # prompts for token and stores it in OS keychain
- The flags `--token` and `--remote` are optional overrides. If omitted, credentials and base URL are resolved from your configured hosts.
Description
The `passcheck` command automatically detects the current distribution, validates that secure pipeline is enabled, and checks test results from test repositories. It provides an overview of passed/failed tests and can generate detailed reports.
Features
- Automatic Detection: Finds the current distribution and secure pipeline setup
- Test Result Analysis: Checks test results from secure pipeline repositories
- Detailed Reporting: Shows per-student test results when using `--detail`
- Excel Reports: Generates comprehensive Excel reports with `--report`
- Flexible Output: Customizable output paths for reports
Examples
Basic Usage
# Check test results for current distribution
divekit passcheck
# Check specific distribution
divekit passcheck --distribution my-assignment
Detailed Analysis
# Show detailed results per student
divekit passcheck --detail
# Generate Excel report
divekit passcheck --report
# Both detailed and report
divekit passcheck --detail --report
Custom Output
# Save reports to custom directory
divekit passcheck --report --output-path ./my-reports/
# Use specific GitLab instance
divekit passcheck --remote https://gitlab.example.com --token glpat-xxxxx
Output Format
Summary View (Default)
Distribution: my-assignment
Test Results Summary:
├── Total Students: 25
├── Passed: 20 (80%)
├── Failed: 3 (12%)
└── Not Submitted: 2 (8%)
Recent Activity:
├── alice.jones: PASSED (2 hours ago)
├── bob.smith: FAILED (4 hours ago)
└── charlie.brown: PASSED (1 day ago)
Detailed View (`--detail`)
Distribution: my-assignment
Detailed Test Results:
Student: alice.jones
├── Status: PASSED
├── Test Suites: 5/5
├── Last Run: 2024-01-15 14:30:00
└── Test Details:
    ├── Unit Tests: 25/25 passed
    ├── Integration Tests: 8/8 passed
    └── Code Quality: 95% score
Student: bob.smith
├── Status: FAILED
├── Test Suites: 3/5
├── Last Run: 2024-01-15 10:15:00
└── Test Details:
    ├── Unit Tests: 20/25 passed
    ├── Integration Tests: 2/8 passed
    └── Code Quality: 78% score
Report Generation
When using `--report`, the command generates an Excel file with:
- Student overview with pass/fail status
- Detailed test results per student
- Historical trends and statistics
- Charts and visualizations
- Timestamp and metadata
Prerequisites
- Secure pipeline must be enabled in the distribution configuration
- Test repositories must exist and be accessible
- Valid GitLab token with API access
- Network connectivity to GitLab instance
Integration
The `passcheck` command integrates with:
- Secure Pipeline: Works with test repositories created by `divekit distribute`
- CI/CD Systems: Reads results from automated test pipelines
- GitLab API: Fetches test results and repository information
- Excel Generation: Creates formatted reports for instructors
2.10 - divekit update
Status: Planned
The `divekit update` command is planned but not yet implemented in the current CLI.
When implemented, this command is intended to:
- Check the currently installed Divekit CLI version
- Compare against the latest available release
- Offer a non-interactive flag to auto-apply updates
- Provide clear logs and exit codes suitable for CI usage
Planned flags (subject to change):
- `-y`, `--yes`: Auto-confirm applying an available update
- `--check`: Only check and print the current/latest versions without applying
Current approach:
- Download the latest binary from the releases page and re-run `divekit install`
- Verify installation with: `divekit --version`
2.11 - divekit run
Execute a single Lua script for custom automation. The command resolves a script by name from trusted directories or accepts an absolute path.
Usage
divekit run <script|/absolute/path/to/script.lua>
- Exactly one argument is required.
- When a name (without path) is provided, Divekit searches in trusted directories.
- When an absolute path is provided, the script is executed directly (if it exists and is a .lua file).
Script Resolution
Resolution happens in this order:
- Absolute path (when you pass an absolute path)
- Accepted as-is if it points to an existing .lua file.
- Trusted directories in the current Divekit project (highest priority)
- .divekit/scripts/
- .divekit/lua/
- .divekit/hooks/
- Trusted directories in the user’s home
- ~/.divekit/scripts/
- ~/.divekit/lua/
- ~/.divekit/hooks/
For name-only arguments (e.g., my-script), Divekit will try both my-script and my-script.lua in each trusted directory.
Security and Guards
- Only .lua files are executed.
- Symlinks are resolved and the resolved path must remain within the trusted root being checked (prevents path traversal).
- Relative paths are not executed directly; they must resolve from trusted directories.
Examples
Run by name (search in trusted locations)
# Looks up ".divekit/scripts/my-script.lua", then other trusted directories
divekit run my-script
Run by name with extension
divekit run my-script.lua
Run by absolute path
divekit run /full/path/to/my-script.lua
Run a script from the user’s home
# If placed under ~/.divekit/lua/report.lua
divekit run report
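End-to-end sketch (the script name and its content are illustrative):
mkdir -p .divekit/scripts
cat > .divekit/scripts/hello.lua <<'EOF'
-- illustrative script body
print("Hello from Divekit")
EOF
divekit run hello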
Troubleshooting
- Script not found:
- Ensure the file exists in one of the trusted directories or pass an absolute path.
- If using a name, try adding the .lua extension.
- Permission denied:
- Check file permissions and ownership of the script file.
- Wrong file type:
- Only .lua files are accepted.
Notes
- This command executes a single Lua file; there are no additional subcommands or flags for run.
- If your script depends on project context, run the command from inside a Divekit project (a directory that contains .divekit/).
3 - Development
Resources for developers who want to contribute to Divekit.
3.1 - Architecture
This section covers Divekit’s technical architecture:
The architecture documentation helps developers understand how Divekit works internally.
Components
- Core Components: Detailed documentation of core components and their interactions
3.1.1 - Overview
Divekit is a tool that helps instructors to create and distribute repositories to students.
High-Level Overview
graph TB
    INST((Instructors))
    ORIGIN[Origin Repository]
    CLI[Divekit CLI]
    DIST[Distribution]
    REPOSTUDENT[Student Repositories]
    REPOTEST[Test Repositories]
    STUDENTS((Students))
    TPAGE[Test Pages]
    INST -->|Develop| ORIGIN
    INST -->|Use| CLI
    ORIGIN -->|Input| CLI
    CLI -->|Generate| DIST
    DIST --- REPOTEST
    DIST --- REPOSTUDENT
    STUDENTS -->|Work on| REPOSTUDENT
    TPAGE -->|Get feedback| STUDENTS
    REPOSTUDENT --->|Update| REPOTEST
    REPOTEST --->|Update| TPAGE
    style CLI fill:#42b050,stroke:#333
    style ORIGIN fill:#fcf,stroke:#333
    style DIST fill:#a3e87e,stroke:#333
    style INST fill:#ff9,stroke:#333
    style STUDENTS fill:#ff9,stroke:#333
    style REPOSTUDENT fill:#6fc5ff,stroke:#333
    style REPOTEST fill:#6fc5ff,stroke:#333
Component Details
Divekit CLI
The CLI serves as the central interface for instructors. It controls the entire process of task distribution and management. All necessary commands for creating, distributing, and managing repositories are executed through the CLI.
Origin Repository
The Origin Repository contains the initial version of assignments and tests. It serves as a master template from which individualized versions for students are generated. This is where the original assignments, code scaffolds, and test cases are maintained.
Distribution
A Distribution is the result of the distribution process and consists of two main components:
Student Repositories
Individualized repositories for each student or group, containing:
- Personalized assignments
- Adapted code scaffolds
- Specific resources
Test Repositories
Separate repositories containing test cases and evaluation criteria:
- Automated tests
- Assessment metrics
- Feedback mechanisms
Test Page
A page where students can get feedback on their work.
Students
Students are the users who are working on the repositories. They can be individuals or groups.
Instructor
Instructor is the user who is creating the repositories and distributing them to the students.
3.1.2 - Components
This document describes the core components of Divekit and how they interact.
Components Overview
graph TB
    subgraph interfaces
        CLI[CLI Interface]
        WebUI[Web Interface]
    end
    style WebUI stroke-dasharray: 5 5
    subgraph core[Modules]
        ModuleEntry(( ))
        style ModuleEntry fill:none,stroke:none
        Config[Configuration Manager]
        GitAdapter[GitLab Adapter]
        Indiv[Individualization]
        Pass[Passchecker]
        Plag[Plagiarism Checker]
        User[Usermanagement]
    end
    CLI --> ModuleEntry
    WebUI -.-> ModuleEntry
    Pass --> GitAdapter
    Plag --> GitAdapter
    User --> GitAdapter
    GitAdapter --> GitLab[GitLab API]
Interfaces
- CLI Interface: Central command-line interface for all user interactions
- Web Interface (planned): Alternative user interface that uses the same modules as the CLI
Modules
- Configuration Manager: Manages all configuration files and user settings
- GitLab Adapter: Central component for all GitLab interactions
- 🚧 Individualization: Handles the individualization of tasks
- 🚧 Passchecker: Checks submissions and communicates with GitLab
- 🚧 Plagiarism Checker: Detects possible plagiarism and interacts with GitLab
- 🚧 Usermanagement: Manages users and their permissions through GitLab
3.1.3 - Configuration
Divekit uses a hierarchical configuration system with both global and project-specific settings.
Configuration Levels
Divekit uses a multi-level configuration system based on the frequency of changes:
[0] Installation
Configurations that are set once during DiveKit installation and rarely changed afterwards. These contain global defaults and environment settings.
~
└── .divekit/
    ├── .env                        # Environment variables
    ├── hosts.json                  # Hosts configuration
    ├── members                     # Members configuration
    │   ├── 2025-01-21_12-28-15_pear_members.json
    │   ├── 2025-01-27_12-29-00_raspberry_members.json
    │   └── 2025-01-27_12-40-02_sandwich_members.json
    ├── origin.json                 # Origin configuration
    └── variation                   # Variation configuration (not finalized)
        ├── relations.json          # Relations configuration
        ├── variableExtensions.json # Variable extensions configuration
        └── variations.json         # Variations configuration
Environment Configuration
~/.divekit/.env:
API_TOKEN=YOUR_ACCESS_TOKEN
DEFAULT_BRANCH=main
Remotes
Default:
~/.divekit/hosts.json:
{
"version": "1.0",
"hosts": {
"default": {
"host": "https://gitlab.git.nrw/",
"token": "DIVEKIT_API_TOKEN"
}
}
}
Example:
~/.divekit/hosts.json:
{
"version": "1.0",
"hosts": {
"default": {
"host": "https://gitlab.git.nrw/",
"tokenAt": "DIVEKIT_API_TOKEN"
},
"archilab": {
"host": "https://gitlab.archi-lab.io/",
"tokenAt": "DIVEKIT_API_TOKEN_ARCHILAB"
},
"gitlab": {
"host": "https://gitlab.com/",
"tokenAt": "DIVEKIT_API_TOKEN_GITLABCOM"
}
}
}
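To make the tokenAt indirection concrete, here is a minimal Go sketch (not the CLI's actual implementation; the type and function names are assumptions) that resolves a host entry from hosts.json and reads its access token from the environment variable named by tokenAt:

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// HostsFile mirrors the hosts.json layout shown above (field names assumed).
type HostsFile struct {
    Version string          `json:"version"`
    Hosts   map[string]Host `json:"hosts"`
}

type Host struct {
    Host    string `json:"host"`
    TokenAt string `json:"tokenAt"` // name of the env variable holding the token
}

// resolveHost loads hosts.json and looks up a named host entry.
func resolveHost(path, name string) (Host, string, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return Host{}, "", err
    }
    var hf HostsFile
    if err := json.Unmarshal(data, &hf); err != nil {
        return Host{}, "", err
    }
    h, ok := hf.Hosts[name]
    if !ok {
        return Host{}, "", fmt.Errorf("host %q not configured", name)
    }
    // The token itself never lives in hosts.json; only the env var name does.
    return h, os.Getenv(h.TokenAt), nil
}

func main() {
    h, token, err := resolveHost(os.ExpandEnv("$HOME/.divekit/hosts.json"), "default")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println(h.Host, "token set:", token != "")
}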
[1] Semester
Configurations that are typically set at the beginning of each semester. These define course-wide settings and distribution templates.
{ORIGIN_DIR}
└── .divekit/                  # Project configuration
    └── distributions/
        ├── ST1-M1/            # Sandbox environment config
        │   └── config.json    # Distribution settings
        └── ST1-M2/            # Student environment config
            └── config.json    # Distribution settings
Distribution Configuration (Version 2.0)
{ORIGIN}/.divekit/distributions/<distribution>/config.json:
{
"version": "2.0",
"remote": "default",
"groupId": 12345,
"name": "assignment-{{uuid}}",
"members": {
"path": "$DIVEKIT_MEMBERS/members.json",
"permissions": "developer"
},
"securePipeline": {
"enabled": true,
"groupId": 67890
}
}
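For orientation, the configuration above maps naturally onto Go types. The following declarations are a sketch whose field names simply mirror the JSON keys shown here; they are not taken from the actual CLI source:

// DistributionConfig mirrors the version 2.0 distribution config (sketch).
type DistributionConfig struct {
    Version        string          `json:"version"`
    Remote         string          `json:"remote"` // key into hosts.json
    GroupID        int             `json:"groupId"`
    Name           string          `json:"name"` // may contain a {{uuid}} placeholder
    Members        MembersRef      `json:"members"`
    SecurePipeline *SecurePipeline `json:"securePipeline,omitempty"` // optional
}

type MembersRef struct {
    Path        string `json:"path"`        // may reference $DIVEKIT_MEMBERS
    Permissions string `json:"permissions"` // e.g. "developer"
}

type SecurePipeline struct {
    Enabled bool `json:"enabled"`
    GroupID int  `json:"groupId"`
}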
Key Changes in Version 2.0:
- Flat Structure: Removed the nested targets object for simpler configuration
- Secure Pipeline: Optional securePipeline object replaces the separate test target
- Simplified Members: Direct path reference instead of a complex group structure
- Automatic Linking: Repository linking is handled automatically during distribution
[2] Milestone
Configurations that change with each milestone or assignment. These include specific repository settings and member assignments.
{ORIGIN_DIR}
└── .divekit/
    └── distributions/
        └── <distribution>/    # e.g. ST1-M1
            └── config.json    # Milestone-specific settings
Members Configuration
Members are configured in a simple JSON file referenced by the distribution configuration:
members.json:
{
"members": [
{"username": "tbuck", "name": "Torben Buck"},
{"username": "ada", "name": "Ada Lovelace"},
{"username": "charles", "name": "Charles Babbage"},
{"username": "jobs", "name": "Steve Jobs"},
{"username": "woz", "name": "Steve Wozniak"}
]
}
Alternative CSV Format:
username,name
tbuck,Torben Buck
ada,Ada Lovelace
charles,Charles Babbage
jobs,Steve Jobs
woz,Steve Wozniak
The members file is referenced in the distribution configuration using the $DIVEKIT_MEMBERS environment variable:
{
"members": {
"path": "$DIVEKIT_MEMBERS/members.json",
"permissions": "developer"
}
}
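A small Go sketch of how such a reference could be resolved, assuming the path is expanded with standard environment-variable expansion before the file is read (illustrative only; not the CLI's actual code):

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

type Member struct {
    Username string `json:"username"`
    Name     string `json:"name"`
}

type MembersFile struct {
    Members []Member `json:"members"`
}

// loadMembers expands variables such as $DIVEKIT_MEMBERS in the configured
// path and then parses the members file.
func loadMembers(configuredPath string) ([]Member, error) {
    path := os.ExpandEnv(configuredPath)
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, err
    }
    var mf MembersFile
    if err := json.Unmarshal(data, &mf); err != nil {
        return nil, err
    }
    return mf.Members, nil
}

func main() {
    members, err := loadMembers("$DIVEKIT_MEMBERS/members.json")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, m := range members {
        fmt.Printf("%s (%s)\n", m.Name, m.Username)
    }
}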
[3] 🚧 Call
Configurations that can be overridden during command execution. Any configuration value from the previous levels can be overridden using command-line arguments.
Examples:
# Specify individual files for patching
divekit patch --distribution="sandbox" src/main/java/Exercise.java src/test/java/ExerciseTest.java
# set debug loglevel
divekit patch --loglevel=debug
3.2 - Contributing
Learn how to contribute to the Divekit project.
3.2.1 - Development Setup
This guide will help you set up your development environment for contributing to Divekit.
Prerequisites
- Command-line access
- Internet connection
- Go 1.23 or higher
- GitLab
  - Access token
  - Group IDs
- (Git)
- (npm)
Setting Up the Development Environment
- Clone the repository:
git clone https://gitlab.git.nrw/divekit/tools/divekit-cli.git
- Navigate to the project directory:
cd divekit-cli
- Install the required dependencies:
go mod download
Install the local modules (this may become optional later, but it is a huge help during development):
mkdir pkg
cd pkg
git clone https://gitlab.git.nrw/divekit/modules/gitlab-adapter
git clone https://gitlab.git.nrw/divekit/modules/config-management
cd ..
go work init
go work use ./pkg/gitlab-adapter
go work use ./pkg/config-management
- Build the CLI:
chmod +x build.sh
./build.sh
Then answer the questions or just press Enter for the default values (windows, amd64).
This will create a divekit executable in the bin directory. You can run this executable from the command line to use the CLI, or run install on it to install it globally.
For example:
./bin/divekit_windows_amd64.exe install
This will install the divekit command globally on your system. You can now run divekit from any directory.
- Run the CLI:
./bin/divekit_windows_amd64.exe
# or
divekit
…or if you want to execute directly from the source code:
go run cmd/divekit/main.go
- Run the tests:
go test ./...
- Make your changes and submit a merge request.
3.2.2 - Error Handling
The project implements a structured error handling system that distinguishes between critical and non-critical errors. This pattern is currently implemented in the distribute package and can serve as a template for other packages.
Error Pattern
Each package can define its own error types and handling behavior. The pattern consists of:
- A custom error type that implements the error interface
- Specific error types as constants
- Methods to determine error severity and behavior
Example from the distribute package:
// Custom error type
type CustomError struct {
    ErrorType ErrorType
    Message   string
    Err       error
}

// Error types
type ErrorType int

const (
    // Critical errors that lead to termination
    ErrConfigLoad ErrorType = iota // Configuration loading errors
    ErrWorkingDir                  // Working directory access errors

    // Non-critical errors that trigger warnings
    ErrMembersNotFound // Member lookup failures
)
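The snippets above leave out the constructor and the methods that the example implementation below relies on. A minimal completion could look like this (a sketch based on the described behavior, not code copied from the distribute package; it assumes a fmt import):

// NewCustomError wraps an underlying error with a type and a message.
func NewCustomError(t ErrorType, msg string, err error) *CustomError {
    return &CustomError{ErrorType: t, Message: msg, Err: err}
}

// Error implements the error interface and preserves the wrapped error.
func (e *CustomError) Error() string {
    if e.Err != nil {
        return fmt.Sprintf("%s: %v", e.Message, e.Err) // requires "fmt"
    }
    return e.Message
}

// Unwrap lets errors.Is and errors.As inspect the original error.
func (e *CustomError) Unwrap() error { return e.Err }

// IsCritical reports whether the error should terminate the operation.
func (e *CustomError) IsCritical() bool {
    switch e.ErrorType {
    case ErrConfigLoad, ErrWorkingDir:
        return true
    default:
        return false
    }
}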
Example Implementation
Here’s how to implement this pattern in your package:
// Create a new error
if err := loadConfig(); err != nil {
    return NewCustomError(ErrConfigLoad, "failed to load configuration", err)
}

// Handle non-critical errors
if err := validateData(); err != nil {
    var cerr *CustomError
    if errors.As(err, &cerr) && !cerr.IsCritical() {
        log.Warn(cerr.Error())
        // Continue execution...
    } else {
        return err
    }
}
Error Behavior
Each package can define its own error behavior, but should follow these general principles:
- Critical Errors: Should terminate the current operation
- Non-Critical Errors: Should generate warnings but allow continuation
- Wrapped Errors: Should preserve the original error context
Each error should include:
- An error type indicating its severity
- A descriptive message
- The original error (if applicable)
- A method to determine if it’s critical
This pattern provides consistent error handling while remaining flexible enough to accommodate different package requirements. The distribute package provides a reference implementation of this pattern.
3.2.3 - Contributing Guidelines
Thank you for considering contributing to Divekit! This document outlines our contribution process and guidelines.
Code of Conduct
- Be respectful and inclusive
- Follow professional standards
- Help others learn and grow
- Report unacceptable behavior
Getting Started
- Fork the repository
- Set up your development environment
- Create a feature branch
- Make your changes
- Submit a pull request
Development Process
Branching Strategy
- main: Production-ready code
- develop: Integration branch
- Feature branches: feature/your-feature
- Bugfix branches: fix/issue-description
Commit Messages
Follow conventional commits:
type(scope): description
[optional body]
[optional footer]
The commit message header consists of three parts:
- type: Categorizes the type of change (see below)
- scope: Indicates the section of the codebase being changed (e.g. cli, core, config, parser)
- description: Brief description of the change in imperative mood
Examples:
feat(cli): add new flag for verbose output
fix(parser): handle empty config files correctly
docs(readme): update installation instructions
test(core): add tests for user authentication
Types:
- feat: New feature or functionality
- fix: Bug fix
- docs: Documentation changes
- style: Formatting, missing semicolons, etc. (no code changes)
- refactor: Code restructuring without changing functionality
- test: Adding or modifying tests
- chore: Maintenance tasks, dependencies, etc.
The body should explain the “why” of the change, while the description explains the “what”.
Pull Requests
- Update documentation
- Add/update tests
- Ensure CI passes
- Request review
- Address feedback
Code Style
- Follow Go best practices and idioms
- Use gofmt for consistent formatting
- Follow the official Go Code Review Comments
- Use golint and golangci-lint
- Write clear, idiomatic Go code
- Keep functions focused and well-documented
Testing
- Write unit tests using the standard testing package
- Use table-driven tests where appropriate
- Aim for good test coverage
- Write integration tests for complex functionality
- Use go test for running tests
- Consider using testify for assertions
Documentation
- Write clear godoc comments
- Update README.md and other documentation
- Include examples in documentation
- Document exported functions and types
- Keep documentation up to date with changes
Review Process
- Automated checks (golangci-lint, tests)
- Code review
- Documentation review
- Final approval
- Merge
Release Process
- Version bump
- Changelog update
- Tag release
- Documentation update
3.3 - Work in Progress
3.3.1 - Config Redesign
Current State
ARS
- {ARS}/.env » INIT
- {ARS}/originRepositoryConfig.json » INIT
- {ARS}/relationsConfig.json » INIT
- {ARS}/variationsConfig.json » SEMESTER
- {ARS}/repositoryConfig.json » MILESTONE
- {ARS}/variableExtensionsConfig.json ( » INIT )
  - $.[i].variableExtensions.ClassPath.preValue » SEMESTER
RepoEditor (-> PatchTool)
OriginRepo
{OriginRepo}/repositoryConfig.json
- $.general
- $.repository
  - repositoryName » CALL
  - repositoryCount » INIT
  - repositoryMembers » MILESTONE
- $.individualRepositoryPersist
- $.local
  - originRepositoryFilePath » MILESTONE
  - subsetPaths » CALL
- $.remote
  - originRepositoryId » MILESTONE
  - codeRepositoryTargetGroupId » MILESTONE
  - testRepositoryTargetGroupId » MILESTONE
  - deleteExistingRepositories » CALL
  - addUsersAsGuests » CALL
- $.overview
  - generateOverview » INIT
  - overviewRepositoryId » SEMESTER
  - overviewFileName » MILESTONE
Assigned Configurations
[0] INIT
Configurations that typically only need to be defined once during installation.
Optimally in: {$HOME}/.divekit/
- {ARS}/.env » INIT
- {ARS}/originRepositoryConfig.json » INIT
- {ARS}/relationsConfig.json » INIT
- {ARS}/variableExtensionsConfig.json ( » INIT )
  - $.[i].variableExtensions.ClassPath.preValue » SEMESTER
- {OriginRepo}/repositoryConfig.json
[1] SEMESTER
Configurations that typically only need to be defined once per semester. They are best stored in the OriginRepo.
Optimally in: {OriginRepo}/.divekit_norepo/{distribution}/
- {ARS}/variationsConfig.json » SEMESTER
- {OriginRepo}/repositoryConfig.json
  - $.overview.overviewRepositoryId » SEMESTER
[2] MILESTONE
Configurations that typically only need to be defined once per milestone. They are best stored in the OriginRepo.
Optimally in: {OriginRepo}/.divekit_norepo/{distribution:{milestone}}/
- {ARS}/repositoryConfig.json » MILESTONE
- {OriginRepo}/repositoryConfig.json
  - $.repository.repositoryMembers » MILESTONE
  - $.local.originRepositoryFilePath » MILESTONE
  - $.remote
    - originRepositoryId » MILESTONE
    - codeRepositoryTargetGroupId » MILESTONE
    - testRepositoryTargetGroupId » MILESTONE
  - $.overview.overviewFileName » MILESTONE
[3] CALL
Configurations that must be defined with each call.
Optimally in: CLI flags
Future
[0] INIT
{ARS}/.env will be stored in {$HOME}/.divekit/:
ACCESS_TOKEN=YOUR_ACCESS_TOKEN
HOST=https://git.st.archi-lab.io
BRANCH=main
{ARS}/originRepositoryConfig.json -> {$HOME}/.divekit/origin.json
Will be stored here during installation and then copied to the new Origin Repos during divekit init.
{
"variables": {
"variableDelimiter": "$"
},
"solutionDeletion": {
"deleteFileKey": "//deleteFile",
"deleteParagraphKey": "//delete",
"replaceMap": {
"//unsup": "throw new UnsupportedOperationException();",
"//todo": "// TODO"
}
},
"warnings": {
"variableValueWarnings": {
"typeWhiteList": ["json", "java", "md"],
"ignoreList": ["name", "type"]
}
}
}
Suggested change:
{
"version": "2.0",
"variables": {
"delimiter": "$"
},
"solutionCleanup": {
"deleteFile": "//deleteFile",
"replaceParagraph": {
"//unsup": "throw new UnsupportedOperationException();",
"//todo": "// TODO",
"//delete": null
}
},
"warnings": {
"variation": {
"fileTypes": ["json", "java", "md"],
"ignore": ["name", "type"]
}
}
}
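The suggested replaceParagraph map mixes two behaviors: a string value replaces the marked paragraph, while null deletes it. The following Go sketch illustrates those proposed semantics (this is an illustration of the proposal, not existing Divekit code; null is modelled as a nil pointer):

package main

import "fmt"

// cleanupParagraph sketches the semantics implied by the suggested
// replaceParagraph map: a string value replaces the marked paragraph,
// while JSON null deletes it.
func cleanupParagraph(marker string, replaceParagraph map[string]*string) (replacement string, remove bool, known bool) {
    v, ok := replaceParagraph[marker]
    if !ok {
        return "", false, false // unknown marker: leave the text untouched
    }
    if v == nil {
        return "", true, true // JSON null: delete the paragraph
    }
    return *v, false, true // JSON string: replace the paragraph
}

func main() {
    unsup := "throw new UnsupportedOperationException();"
    rules := map[string]*string{
        "//unsup":  &unsup,
        "//delete": nil, // null in the suggested JSON
    }
    fmt.Println(cleanupParagraph("//unsup", rules))
    fmt.Println(cleanupParagraph("//delete", rules))
}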
{ARS}/relationsConfig.json -> {$HOME}/.divekit/variation/relations.json
Will be stored here during installation and then copied to the new Origin Repos during divekit init.
[!NOTE]
I don’t fully understand what this is for - it may remain here forever and not need to be copied to the Origin Repo?
(what is UmletRev? What does the star mean?)
[
{
"id": "OneToOne",
"Umlet": "lt=-\nm1=1\nm2=1",
"UmletRev": "lt=-\nm1=1\nm2=1",
"Short": "1 - 1",
"Description": "one to one"
},
{
"id": "OneToMany",
"Umlet": "lt=-\nm1=1\nm2=*",
"UmletRev": "lt=-\nm1=*\nm2=1",
"Short": "1 - n",
"Description": "one to many"
},
{
"id": "ManyToOne",
"Umlet": "lt=-\nm1=*\nm2=1",
"UmletRev": "lt=-\nm1=1\nm2=*",
"Short": "n - 1",
"Description": "many to one"
},
{
"id": "ManyToMany",
"Umlet": "lt=-\nm1=*\nm2=*",
"UmletRev": "lt=-\nm1=*\nm2=*",
"Short": "n - m",
"Description": "many to many"
}
]
Suggested change:
id -> key?
{
"version": "2.0",
"relations": [
{
"id": "OneToOne",
"umlet": "lt=-\nm1=1\nm2=1",
"umletRev": "lt=-\nm1=1\nm2=1",
"short": "1 - 1",
"description": "one to one"
},
{
"id": "OneToMany",
"umlet": "lt=-\nm1=1\nm2=*",
"umletRev": "lt=-\nm1=*\nm2=1",
"short": "1 - n",
"description": "one to many"
},
{
"id": "ManyToOne",
"umlet": "lt=-\nm1=*\nm2=1",
"umletRev": "lt=-\nm1=1\nm2=*",
"short": "n - 1",
"description": "many to one"
},
{
"id": "ManyToMany",
"umlet": "lt=-\nm1=*\nm2=*",
"umletRev": "lt=-\nm1=*\nm2=*",
"short": "n - m",
"description": "many to many"
}
]
}
{ARS}/variableExtensionsConfig.json -> {$HOME}/.divekit/variation/variableExtensions.json
Will be stored here during installation and then copied to the new Origin Repos during divekit init.
[
{
"id": "Basic",
"variableExtensions": {
"": {
"preValue": "",
"value": "id",
"postValue": "",
"modifier": "NONE"
},
"Class": {
"preValue": "",
"value": "id",
"postValue": "",
"modifier": "NONE"
},
"Package": {
"preValue": "",
"value": "Class",
"postValue": "",
"modifier": "ALL_LOWER_CASE"
},
"ClassPath": {
"preValue": "thkoeln.st.st2praktikum.racing.", // ??? deprecated ???
"value": "Class",
"postValue": ".domain",
"modifier": "ALL_LOWER_CASE"
}
}
},
{
"id": "Getter",
"variableExtensions": {
"GetToOne": {
"preValue": "get",
"value": "Class",
"postValue": "",
"modifier": "NONE"
},
"GetToMany": {
"preValue": "get",
"value": "s",
"postValue": "",
"modifier": "NONE"
}
}
}
]
Questions
From my notes
I thought I had written this somewhere already, but I can’t find it anymore.
- [0] INIT -> “Installation” exists twice
- Once during DiveKit installation
- Once during DiveKit initialization in a new OriginRepo
So what should go where (have ideas)?
- Is the preValue still needed?
I unfortunately don’t remember exactly what/why, but this was causing some significant issues.
3.3.2 - Deployment
[!WARNING]
Not implemented this way yet - the current process is shown in the gif below.
This guide covers the process of deploying and releasing new versions of Divekit.
Version Management
Semantic Versioning
Divekit follows Semantic Versioning:
- MAJOR version for incompatible API changes
- MINOR version for new functionality
- PATCH version for bug fixes
Version Tagging
# Current version is v2.0.0
# Bump patch version (e.g., v2.0.0 -> v2.0.1)
./deploy.sh patch
# Bump minor version (e.g., v2.0.0 -> v2.1.0)
./deploy.sh minor
# Bump major version (e.g., v2.0.0 -> v3.0.0)
./deploy.sh major
# Create alpha/beta versions
./deploy.sh minor -alpha.1 # Creates v2.1.0-alpha.1
./deploy.sh patch -beta.2 # Creates v2.0.1-beta.2
# Rollback options
./deploy.sh rollback # Removes current tag and returns to previous version
./deploy.sh rollback v2.1.0 # Removes specific version tag
Example (current state)
Release Process
- Update version using deploy.sh:
./deploy.sh <patch|minor|major> [-alpha.N|-beta.N]
- Update CHANGELOG.md:
## [2.0.1] - YYYY-MM-DD
### Added
- New feature X
- Command Y support
### Changed
- Improved Z performance
### Fixed
- Bug in command A
- Create release branch:
git checkout -b release/v2.0.1
- Build and test locally:
go test ./...
go build
- Create GitLab release:
- Tag version is created automatically
- Changelog from CHANGELOG.md is included automatically
- CI pipeline automatically:
- Runs all tests
- Builds binaries for all supported platforms
- Creates release artifacts
- Uploads binaries to the release
Deployment Checklist
- All tests passing locally (go test ./...)
- Documentation updated
- CHANGELOG.md updated
- Version tagged using deploy.sh
- GitLab CI/CD pipeline completed successfully:
  - Binaries built successfully
  - Release artifacts generated
- Release created and verified in GitLab
- Generated binaries tested on sample installation
Rollback Procedure
If issues are found:
- Execute rollback using deploy.sh:
./deploy.sh rollback [version] # Version is optional
This automatically executes the following steps:
- Deletes the specified tag (or current tag if no version specified) locally and remote
- Reverts to the previous version
- Creates a new hotfix branch if desired
Examples:
./deploy.sh rollback # Removes the most recent tag
./deploy.sh rollback v2.1.0 # Removes specific version v2.1.0
./deploy.sh rollback v2.0.0-alpha.1 # Removes a specific alpha version
If manual rollback is necessary:
git tag -d v2.0.1
git push origin :refs/tags/v2.0.1
git checkout -b hotfix/2.0.2
3.4 -
3.4.1 - Go Testing Guide
What should be tested in this project?
Given that this CLI is the entry point for the user to interact with Divekit, it is essential to test all commands.
Currently there is only one command, patch, but all commands should be tested with the following aspects in mind:
- Command Syntax: Verify that the command syntax is correct
- Command Execution: Ensure that executing the command produces the expected behavior or output
- Options and Arguments: Test each option and argument individually to ensure they are processed correctly and test various combinations of options and arguments
- Error Handling: Test how the command handles incorrect syntax, invalid options, or missing arguments
Additionally, testing the utility functions is necessary, as they are used throughout the entire project. For that the following aspects should be considered:
- Code Paths: Every possible path through the code should be tested, which should include “happy paths” (expected input and output) as well as “edge cases” (unexpected inputs and conditions).
- Error Conditions: Check that the code handles error conditions correctly. For example, if a function is supposed to handle an array of items, what happens when it's given an empty array? What about an array with only one item, or an array with the maximum number of items?
How should something be tested?
Commands should be tested with integration tests since they interact with the entire project. Integration tests are utilized to verify that all components of this project work together as expected in order to test the mentioned aspects.
To detect early bugs, utility functions should be tested with unit tests. Unit tests are used to verify the behavior of specific functionalities in isolation. They ensure that individual units of code produce the correct and expected output for various inputs.
How are tests written in Go?
Prerequisites
It’s worth mentioning that the following packages are utilized in this project for testing code.
The testing package
The standard library provides the testing package, which is required to support testing in Go. It offers different types from the testing library [1, pp. 37-38]:
- testing.T: To interact with the test runner, all tests must use this type. It contains methods for declaring failing tests, skipping tests, and running tests in parallel.
- testing.B: Similar to the test runner, this type is a benchmark runner. It shares the same methods for failing and skipping tests and runs benchmarks concurrently. Benchmarks are generally used to determine the performance of written code.
- testing.F: This type generates a randomized seed for the testing target and collaborates with the testing.T type to provide test-running functionality. Fuzz tests are unique tests that generate random inputs to discover edge cases and identify bugs in written code.
- testing.M: This type allows for additional setup or teardown before or after tests are executed.
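For illustration, minimal hypothetical examples of the testing.B and testing.F signatures could look like this; they would live in a _test.go file and assume the Divide function introduced later in this guide (duplicated here so the sketch is self-contained):

package main

import "testing"

// Divide as defined later in this guide (repeated for self-containment).
func Divide(a, b int) float64 {
    return float64(a) / float64(b)
}

// BenchmarkDivide measures Divide by running it b.N times.
func BenchmarkDivide(b *testing.B) {
    for i := 0; i < b.N; i++ {
        Divide(5, 2)
    }
}

// FuzzDivide feeds randomly generated int pairs into Divide.
func FuzzDivide(f *testing.F) {
    f.Add(5, 2) // seed corpus entry
    f.Fuzz(func(t *testing.T, a, b int) {
        Divide(a, b) // must not panic for any input
    })
}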
The testify toolkit
The testify toolkit provides several packages to work with assertions, mock objects and testing suites [4]. Primarily, the assertion package is used in this project for writing assertions more easily.
Test signature
To write unit or integration tests in Go, it is necessary to construct test functions following a particular signature:
func TestName(t *testing.T) {
// implementation
}
This test signature highlights the following requirements [1, p. 40]:
- Exported functions with names starting with “Test” are considered tests.
- Test names can have an additional suffix that specifies what the test is covering. The suffix must also begin with a capital letter. In this case, “Name” is the specified suffix.
- Tests are required to accept a single parameter of the *testing.T type.
- Tests should not include a return type.
Unit tests
Unit tests are small, fast tests that verify the behavior of specific functionalities in isolation. They ensure that individual units of code produce the correct and expected output for various inputs.
To illustrate unit tests, a new file named divide.go is generated with the following code:
package main
func Divide(a, b int) float64 {
return float64(a) / float64(b)
}
By convention, tests are located in the same package as the function being tested. All test files must end with the _test.go suffix in order to be detected by the test runner. Accordingly, divide_test.go is also created within the main package:
package main
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestDivide(t *testing.T) {
// Arrange
should, a, b := 2.5, 5, 2
// Act
is := Divide(a, b)
// Assert
assert.Equal(t, should, is, "Got %v, want %v", is, should)
}
Writing unit or integration tests in the Arrange-Act-Assert (AAA) pattern is a common practice. This pattern establishes a standard for writing and reading tests, reducing the cognitive load for both new and existing team members and enhancing the maintainability of the code base [1, p. 14].
In this instance, the test is formulated as follows:
Arrange: All preconditions and inputs get set up.
Act: The Act step executes the actions outlined in the test scenario, with the specific actions depending on the type of test. In this instance, it calls the Divide function and utilizes the inputs from the Arrange step.
Assert: During this step, the precondition from the Arrange step is compared with the output. If the output does not match the precondition, the test is considered failed, and an error message is displayed.
It’s worth noting that the Act and Assert steps can be iterated as many times as needed, proving beneficial, particularly in the context of table-driven tests.
Table-driven tests for unit and integration tests
To cover all test cases it is required to call Act and Assert multiple times. It would be possible to write one test per case, but this would lead to a lot of duplication, reducing the readability. An alternative approach is to invoke the same test function several times. However, in case of a test failure, pinpointing the exact point of failure may pose a challenge [2]. Instead, in the table-driven approach, preconditions and inputs are structured as a table in the Arrange step.
As a consequence, divide_test.go gets adjusted in the following steps [1, pp. 104-109]:
Step 1 - Create a structure for test cases
In the first step a custom type is declared within the test function. As an alternative the structure could be declared outside the scope of the test function. The purpose of this structure is to hold the inputs and expected preconditions of the test case.
The test cases for the previously mentioned Divide function could look like this:
package main
import (
"math"
"testing"
)
func TestDivide(t *testing.T) {
// Arrange
testCases := []struct {
name string // test case name
dividend int // input
divisor int // input
quotient float64 // expected
}{
{"Regular division", 5, 2, 2.5},
{"Divide with negative numbers", 5, -2, -2.5},
{"Divide by 0", 5, 0, math.Inf(1)},
}
_ = testCases // not used yet; the cases are executed in Step 2 below
}
The struct type wraps name, dividend, divisor and quotient. name describes the purpose of a test case and can be used to identify a test case in case an error occurs.
Step 2 - Executing each test and assert it
Each test case from the table will be executed as a subtest. To achieve this, the testCases are iterated over and each testCase is executed in a separate goroutine [3] with t.Run(). The purpose of this is to fail tests individually without concerns about disrupting other tests. Within t.Run(), the Act and Assert steps get performed:
package main
import (
"github.com/stretchr/testify/assert"
"math"
"testing"
)
func TestDivide(t *testing.T) {
// Arrange
testCases := []struct {
name string // test case name
dividend int // input
divisor int // input
quotient float64 // expected
}{
{"Regular division", 5, 2, 2.5},
{"Divide with negative numbers", 5, -2, -2.5},
{"Divide by 0", 5, 0, math.Inf(1)},
}
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
// Act
quotient := Divide(testCase.dividend, testCase.divisor)
// Assert
assert.Equal(t, testCase.quotient, quotient)
})
}
}
Setup and teardown
Setup and teardown before and after a test
Setup and teardown are used to prepare the environment for tests and clean up after tests have been executed.
In Go, the type testing.M from the testing package fulfills this purpose and is used as a parameter for the TestMain function, which controls the setup and teardown of tests. To use this function, it must be included within the package alongside the tests, as the scope of a function is limited to the package in which it is defined. This implies that each package can only have one TestMain function; consequently, it is called only when a test is executed within the package [5].
The following example illustrates how it works [1, p. 51]:
package main

import (
    "log"
    "os"
    "testing"
)

func TestMain(m *testing.M) {
// setup statements
setup()
// run the tests
e := m.Run()
// cleanup statements
teardown()
// report the exit code
os.Exit(e)
}
func setup() {
log.Println("Setting up.")
}
func teardown() {
log.Println("Tearing down.")
}
TestMain runs before any tests are executed and defines the setup and teardown functions. The Run method from testing.M is used to invoke the tests and returns an exit code that is used to report the success or failure of the tests.
Setup and teardown before and after each test
To tear down after each test, the t.Cleanup function provided by the testing package can be used [2]. Since the testing package offers no dedicated hook for per-test setup, a setup function is simply called at the start of each test.
This example shows how this can be used:
package main

import "testing"

func TestWithSetupAndCleanup(t *testing.T) {
setup() // setup function as defined above
t.Cleanup(func() {
// cleanup logic
})
// more test code here
}
Write integration tests
Integration tests are used to verify the interaction between different components of a system. However, the mentioned principles for writing unit tests also apply to integration tests. The only difference is that integration tests involve a greater amount of code, as they encompass multiple components.
How to run tests?
To run tests from the CLI, the go test command is used, which is part of the Go toolchain [6].
The list shows some examples of how to run tests:
- To run a specific test, the -run flag can be used. For example, to run the TestDivide test from the divide_test.go file, the following command can be used: go test -run TestDivide. Note that the argument for -run is a regular expression, so it is possible to run multiple tests at once.
- To run all tests in a package, run go test <packageName>. Note that the package name should include a relative path if the package is not in the working directory.
- To run all tests in a project, run go test ./... (the argument is a wildcard matching all subdirectories); therefore, it is crucial for the working directory to be set to the root of the project to recursively run all tests.
Additionally, tests can be run from the IDE. For example, in GoLand, the IDE will automatically detect tests and provide a gutter icon to run them [7].
How the command patch
is tested?
Prerequisites
Before patch
can be tested, it is necessary to do the following:
- Replace the placeholders in the file
.env.example
and rename it to.env
. If you have no api token, you can generate one here. - Run the script
setup.ps1
as administrator. This script will install all necessary dependencies and initialize the ARS-, Repo-Editor- and Test-Origin-Repository.
Test data
To test patch
, it was necessary to use a
test origin repository as test data. In this context the test origin repository is a repository that contains
all the necessary files and configurations from ST1 to test different scenarios.
Additionally, a test group was created to test if the Repo-Editor-repository actually pushes the generated files to remote repositories. Currently, the test group contains the following repositories:
coderepos:
ST1_Test_group_8063661e-3603-4b84-b780-aa5ff1c3fe7d
ST1_Test_group_86bd537d-9995-4c92-a6f4-bec97eeb7c67
ST1_Test_group_8754b8cb-5bc6-4593-9cb8-7c84df266f59
testrepos:
ST1_Test_tests_group_446e3369-ed35-473e-b825-9cc0aecd6ba3
ST1_Test_tests_group_9672285a-67b0-4f2e-830c-72925ba8c76e
Structure of a test case
patch is tested with a table-driven test, which is located in the file patch_test.go.
The following example shows the structure of a test case:
package patch
func TestPatch(t *testing.T) {
testCases := []struct {
name string
arguments PatchArguments // input
generatedFiles []GeneratedFile // expected
error error // expected
}{
{
"example test case",
PatchArguments{
dryRun: true | false,
logLevel: "[empty] | info | debug | warning | error",
originRepo: "path_to_test_origin_repo",
home: "[empty] | path_to_repositories",
distribution: "[empty] | code | test",
patchFiles: []string{"patch_file_name"},
},
[]GeneratedFile{
{
RepoName: "repository_name",
RelFilePath: "path_to_the_generated_file",
Distribution: Code | Test,
Include: []string{"should_be_found_in_the_generated_file"},
Exclude: []string{"should_not_be_found_in_the_generated_file"},
},
},
error: nil | errorType,
},
}
// [run test cases]
}
The name field is the name of the test case and is used to identify the test case in case of an error.
The struct PatchArguments contains all the necessary arguments to run the patch command:
- dryRun: If true, generated files will not be pushed to a remote repository.
- logLevel: The log level of the command.
- originRepo: The path to the test origin repository.
- home: The path to the divekit repositories.
- distribution: The distribution to patch.
- patchFiles: The patch files to apply.
The struct GeneratedFile is the expected result of the patch command and contains the following properties:
- RepoName: The name of the generated repository.
- RelFilePath: The relative file path of the generated file.
- Distribution: The distribution of the generated file.
- Include: Keywords that should be found in the generated file.
- Exclude: Keywords that should not be found in the generated file.
The error field is the expected error of the patch command. It can be nil when no error is expected or contain a specific error type if an error is expected.
Process of a test case
The following code snippet shows how test cases are processed:
package patch
func TestPatch(t *testing.T) {
// [define test cases]
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
generatedFiles := testCase.generatedFiles
dryRunFlag := testCase.arguments.dryRun
distributionFlag := testCase.arguments.distribution
deleteFilesFromRepositories(t, generatedFiles, dryRunFlag) // step 1
_, err := executePatch(testCase.arguments) // step 2
checkErrorType(t, testCase.error, err) // step 3
if err == nil {
matchGeneratedFiles(t, generatedFiles, distributionFlag) // step 4
checkFileContent(t, generatedFiles) // step 5
checkPushedFiles(t, generatedFiles, dryRunFlag) // step 6
}
})
}
}
Each test case runs the following sequence of steps:
1. deleteFilesFromRepositories deletes the specified files from their respective repositories. Prior to testing, it is necessary to delete these files to ensure that they are actually pushed to the repositories, given that they are initially included in the repositories.
2. executePatch executes the patch command with the given arguments and returns the output and the error.
3. checkErrorType checks if the expected error type matches the actual error type.
4. matchGeneratedFiles checks if the found file paths match the expected files and throws an error when there are any differences.
5. checkFileContent checks if the content of the files is correct.
6. checkPushedFiles checks if the generated files have been pushed correctly to the corresponding repositories.
References
[1] A. Simion, Test-Driven Development in Go, Packt Publishing Ltd., 2023.
[2] "Comprehensive Guide to Testing in Go | The GoLand Blog," The JetBrains Blog (accessed Jan. 29, 2024).
[3] "Goroutines in Golang - Golang Docs" (accessed Jan. 29, 2024).
[4] "Using the Testify toolkit | GoLand," GoLand Help (accessed Jan. 29, 2024).
[5] "Why use TestMain for testing in Go?" (accessed Jan. 29, 2024).
[6] "Go Toolchain - Go Wiki" (accessed Jan. 29, 2024).
[7] "Run tests | GoLand," GoLand Help (accessed Jan. 29, 2024).
3.4.2 - Testrepo
The documentation is not yet written. Feel free to add it yourself ;)
Testing Package structure
static final String PACKAGE_PREFIX = "thkoeln.divekit.archilab.";
@Test
public void testPackageStructure() {
try {
Class.forName(PACKAGE_PREFIX + "domainprimitives.StorageCapacity");
Class.forName(PACKAGE_PREFIX + "notebook.application.NotebookDto");
Class.forName(PACKAGE_PREFIX + "notebook.application.NotebookController");
Class.forName(PACKAGE_PREFIX + "notebook.domain.Notebook");
// using individualization and the variableExtensionConfig.json this could be simplified to
// Class.forName("$entityPackage$.domain.$entityClass$");
// ==> Attention: If used, the test can't be run in the origin repo itself
} catch (ClassNotFoundException e) {
Assertions.fail("At least one of your entities is not in the right package, or has a wrong name. Please check package structure and spelling!");
}
}
Testing REST Controller
@Autowired
private MockMvc mockMvc;
@Test
public void notFoundTest() throws Exception {
mockMvc.perform(get("/notFound")
.accept(MediaType.APPLICATION_JSON))
.andDo(print())
.andExpect(status().isNotFound());
}
@Transactional
@Test
public void getPrimeNumberTest() throws Exception {
final Integer expectedPrimeNumber = 13;
mockMvc.perform(get("/primeNumber")
.accept(MediaType.APPLICATION_JSON))
.andDo(print())
.andExpect(status().isOk())
.andExpect(jsonPath("$", Matchers.is(expectedPrimeNumber))).andReturn();
}
Testing …
4 - Archive
Archive
Documentation for legacy tools that are being replaced by the CLI.
4.1 - Access Manager
The documentation is not yet written. Feel free to add it yourself ;)
4.2 - Access Manager 2.0
Setup & Run
- Install Python 3 or higher
- Install python-gitlab using pip install python-gitlab
- Check the file config.py to configure the tool
- Run AccessManager.py using python AccessManager.py
Configuration
Option | Purpose
---|---
GIT_URL | URL of your GitLab server
AUTH_TOKEN | Your personal GitLab access token
GROUP_ID | ID of the GitLab group you want to modify
ACCESS_LEVEL | Access level you want to provide: 1 for Maintainer, 0 for Guest
STUDENTS | List of users to modify. Users not in this list will be ignored.
4.3 - Automated Repo Setup
Setup & Run
Install NodeJs (version >= 12.0.0) which is required to run this tool. NodeJs can be acquired on the website nodejs.org.
To use this tool you have to clone the repository to your local drive.
This tool uses several libraries in order to use the GitLab API etc. Install these libraries by running the command npm install in the root folder of this project.
Local/GitLab usage
- For local use only:
  - Copy the origin repository into the folder resources/test/input. If this folder does not exist, create the folder test inside the resources folder and then create the folder input in the newly created folder test
  - The generated repositories will be located under resources/test/output after running the tool
- For use with GitLab:
  - Navigate to https://git.st.archi-lab.io/profile/personal_access_tokens (if you are using the GitLab instance git.st.archi-lab.io) and generate an Access Token / API token in order to get access to the GitLab API
  - Copy the Access Token
  - Rename the file .env.example to .env
  - Open .env and replace YOUR_API_TOKEN with the token you copied
  - Configure the source repository and target group in the config
Before you can configure or run this tool you have to copy all the example config files inside the resources/examples/config folder to the resources/config folder in order to create your own config files. If you want to change the standard behaviour you can configure this tool by editing the configs.
To run the application, navigate into the root folder of this tool and run npm start. The repositories will now be generated.
Configuration
Before you can configure this tool you have to copy all the relevant example config files inside the resources/examples/config folder to the resources/config folder in order to create your own config files. If you want to change the standard behaviour you can configure this tool by editing the configs.
The Divekit uses two types of configs: technical configs and domain-specific configs. The contents of technical configs often change each time repositories are generated using the Divekit. Therefore, these types of configs are located in the resources/config folder of the Divekit. Domain configs do not change each time new repositories are generated because they depend on the type of exercise and the corresponding domain. As a result, these configs should be contained in the Origin Project (they don't have to be). In the following, the different configs, their purpose and their type are listed:
Config | Purpose | Type
---|---|---
repositoryConfig | Configure the process of repository generation | Technical config
originRepositoryConfig | Configure solution deletion and variable warnings | Domain config
variationsConfig | Configure different types of variations | Domain config
variableExtensionsConfig | Configure different extensions which are used to generate derived variables | Domain config
relationsConfig | Configure properties of relations which are used to generate relation variables | Domain config
Features
Repository Generation
When the Divekit is run, the tool will generate repositories based on the configured options defined in the repositoryConfig. The following example shows the relevant options, where each option is explained in short:
{
  "general": {
    # Decide whether you just want to test locally. If set to false the GitLab API will be used
    "localMode": true,
    # Decide whether test repositories should be generated as well. If set to false only one code repository will be generated for each learner
    "createTestRepository": true,
    # Decide whether the repositories should be randomized using the variationsConfig.json
    "variateRepositories": true,
    # Decide whether the existing solution should be deleted using the SolutionDeleter
    "deleteSolution": false,
    # Activate warnings which will warn you if there are suspicious variable values remaining after variable placeholders have been replaced
    "activateVariableValueWarnings": true,
    # Define the number of concurrent repository generation processes. Keep in mind that high numbers can overload the GitLab server if localMode is set to false
    "maxConcurrentWorkers": 1,
    # Optional flag: set the logging level. Valid values are "debug", "info", "warn", "error" (case insensitive). Default value is "info".
    "globalLogLevel": "debug"
  },
  "repository": {
    # The name of the repositories. Multiple repositories will be named <repositoryName>_group_<uuid>, <repositoryName>_tests_group_<uuid> ...
    "repositoryName": "st2-praktikum",
    # The number of repositories which will be created. Only relevant if no repositoryMembers were defined
    "repositoryCount": 0,
    # The user names of the members which get access to repositories
    "repositoryMembers": [
      ["st2-praktikum"]
    ]
  },
  "local": {
    # The file path to an origin repository which should be used for local testing
    "originRepositoryFilePath": ""
  },
  "remote": {
    # Id of the repository you want to clone
    "originRepositoryId": 1012,
    # The ids of the target groups where all repositories will be located
    "codeRepositoryTargetGroupId": 161,
    "testRepositoryTargetGroupId": 170,
    # If set to true all existing repositories inside the defined groups will be deleted
    "deleteExistingRepositories": false,
    # Define whether users are added as maintainers or as guests
    "addUsersAsGuests": false
  }
}
If localMode is set to true, the application will only generate possible variable variations and randomize files based on a folder which contains the origin repository. This folder should be located in the folder resources/test/input. If the folder resources/test/input does not exist, create it within the root folder of this tool or run the tool once in test mode, which will generate this folder automatically. This can be used to get an idea of which repositories will result based on the configs. The following example shows the location of the origin folder:
root_of_tool
- build
- node_modules
- src
- .gitignore
- .Readme
- resources
- test
- input
- origin-folder
- src
- .gitignore
- .Readme
If you don't want to copy the origin repository each time you want to test a new version, specify the file path to the origin repository in the config under local.originRepositoryFilePath.
Partial repository generation
While running the automated-repo-setup in local mode, you have the option to partially generate repositories.
To do so, just configure the repositoryConfig.json* as such:
{
"general": {
"localMode": true
},
"local": {
"subsetPaths": [
"README.md",
"path/to/malfunction/file.eof"
]
}
}
*only partially shown
Start generation
npm start
Generated files are located under: resources/output/
File Assignment
Although code and test files are separated into two repositories, the exercise only consists of one repository, called the origin. It would be really troublesome if you had to update two repositories all the time while creating a new exercise. Because of that, there has to be a way to determine whether a file has to be copied to the code project, the test project or both. If you want some files to only be copied to a specific repository, you can express this behaviour in the filename (see the sketch after the following list).
- If the filename contains the string _coderepo the file will only be copied to the code repository.
- If the filename contains the string _testrepo the file will only be copied to the test repository.
- If the filename contains the string _norepo the file will not be copied to the repositories. This can be used to store config files from this tool directly in the origin repository.
- If the filename contains none of those the file will be copied to both repositories.
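The suffix rules above can be sketched as follows (illustrative Go only; the actual tool is written in NodeJS, and the function name is hypothetical):

package main

import (
    "fmt"
    "strings"
)

// targetRepos applies the filename suffix rules described above.
func targetRepos(filename string) (code, test bool) {
    switch {
    case strings.Contains(filename, "_norepo"):
        return false, false // stays in the origin repository only
    case strings.Contains(filename, "_coderepo"):
        return true, false // copied to the code repository only
    case strings.Contains(filename, "_testrepo"):
        return false, true // copied to the test repository only
    default:
        return true, true // no marker: copied to both repositories
    }
}

func main() {
    for _, f := range []string{"Readme.md", "ExerciseTest_testrepo.java", "hints_coderepo.md", "variationsConfig_norepo.json"} {
        code, test := targetRepos(f)
        fmt.Printf("%-35s code=%v test=%v\n", f, code, test)
    }
}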
File Converter
If you want to convert or manipulate certain repository files during the repository generation process, File Converters (File Manipulators) can be used. Currently there is only one type of File Manipulator available. Additional converters can easily be added by extending the codebase of the Divekit. The already existing File Manipulator is called UmletFileManipulator. This manipulator is used to convert the individualized XML representations of Umlet diagrams to image formats. This conversion step cannot be skipped because it is not possible to replace variables in image representations of Umlet diagrams. Therefore, the process of individualizing UML diagrams created with Umlet is as follows:
UML diagram with placeholder variables (xml) -> UML diagram with already replaced content (xml) -> UML diagram with already replaced content (image file format)
Test Overview
To give an overview of passed and failed tests of a repository, a test overview page will be generated using the projects report-mapper and report-visualizer. The tools are called within the .gitlab-ci.yml file in the deploy stage.
Repository Overview
If you have the generation of the overview table enabled in the repositoryConfig the destination and the name of the overview table can be defined in the file repositoryConfig as well:
{
"overview": {
"generateOverview": true,
"overviewRepositoryId": 1018,
"overviewFileName": "st2-praktikum"
}
}
Given the config shown above, a markdown file will be generated which includes a summary of all generated repositories and their members. After that, the file will be uploaded to the configured repository.
Solution Deletion
If you want solutions which are contained in your origin project to be removed while creating the code and test repositories enable solution deletion in the repositoryConfig. The originRepositoryConfig specifies the keywords which are used to either
- delete a file
- delete a paragraph
- replace a paragraph
This can be shown best with an example:
// TODO calculate the sum of number 1 and number 2 and return the result
public static int sumInt(int number1, int number2) {
//unsup
return number1 + number2;
//unsup
}
// TODO calculate the product of number 1 and number 2 and return the result
public static int multiplyInt(int number1, int number2) {
//delete
return number1 * number2;
//delete
}
will be changed to:
// TODO calculate the sum of number 1 and number 2 and return the result
public static int sumInt(int number1, int number2) {
throw new UnsupportedOperationException();
}
// TODO calculate the product of number 1 and number 2 and return the result
public static int multiplyInt(int number1, int number2) {
}
The corresponding config entry in the originRepositoryConfig would be:
{
"solutionDeletion": {
"deleteFileKey": "//deleteFile",
"deleteParagraphKey": "//delete",
"replaceMap": {
"//unsup": "throw new UnsupportedOperationException();"
}
}
}
A file containing the string “//deleteFile” would be deleted.
Individualization
If you want your project to be randomized slightly, use the configuration files variationsConfig.json, variableExtensionsConfig and relationsConfig to create variables. Variables can be referenced later by their name encapsulated in configured signs, e.g.: $ThisIsAVariable$.
Variable Generation
There are three types of variables:
Object Variables
Object Variables are used to randomize Entities and Value Objects. Such variables are created by defining one or multiple ids and an array of possible object variations. Object variations can contain attributes which will later be transformed into a variable. An example attribute could be Class, which contains the class name of an entity. Keep in mind that attributes are not limited to a single primitive value but can also be expressed as a nested object inside the JSON. The following json shows a possible declaration of two object variations inside the variationsConfig:
{
"ids": "Vehicle",
"objectVariations": [
{
"id": "Car",
"Class": "Car",
"RepoClass": "CarRepository",
"SetToOne": "setCar",
"SetToMany": "setCars"
},
{
"id": "Truck",
"Class": "Truck",
"RepoClass": "TruckRepository",
"SetToOne": "setTruck",
"SetToMany": "setTrucks"
},
{
"id": "Train",
"Class": "Train",
"RepoClass": "TrainRepository",
"SetToOne": "setTrain",
"SetToMany": "setTrains"
}
],
"variableExtensions": [
"Getter"
]
},
{
"ids": ["Wheel1", "Wheel2"],
"objectVariations": [
{
"id": "FrontWheel",
"Class": "FrontWheel",
"RepoClass": "FrontWheelRepository",
"SetToOne": "setFrontWheel",
"SetToMany": "setFrontWheels"
},
{
"id": "BackWheel",
"Class": "BackWheel",
"RepoClass": "BackWheelRepository",
"SetToOne": "setBackWheel",
"SetToMany": "setBackWheels"
}
],
"variableExtensions": ["Getter"]
}
The defined object variations are now randomly assigned to the variables Vehicle, Wheel1 and Wheel2. The following dictionary shows variables which result from above declaration:
VehicleClass: 'Truck',
VehicleRepoClass: 'TruckRepository',
VehicleGetToOne: 'getTruck',
VehicleGetToMany: 'getTrucks',
VehicleSetToOne: 'setTruck',
VehicleSetToMany: 'setTrucks',
Wheel1Class: 'Backwheel',
Wheel1RepoClass: 'BackwheelRepository',
Wheel1GetToOne: 'getBackWheel',
Wheel1GetToMany: 'getBackWheels',
Wheel1SetToOne: 'setBackWheel',
Wheel1SetToMany: 'setBackWheels',
Wheel2Class: 'FrontWheel',
Wheel2RepoClass: 'FrontWheelRepository',
Wheel2GetToOne: 'getFrontWheel',
Wheel2GetToMany: 'getFrontWheels',
Wheel2SetToOne: 'setFrontWheel',
Wheel2SetToMany: 'setFrontWheels'
In the example above you can see that some variables could be derived from already existing variables. The setter variables are a perfect example for this. Such variables can also be defined through variable extensions. This is done for the getter variables in the example. Two steps are required to define such derived variables:
- Define a rule for a variable extension in the config variableExtensionsConfig.json:
{
"id": "Getter",
"variableExtensions": {
"GetToOne": {
"preValue": "get",
"value": "CLASS",
"postValue": "",
"modifier": "NONE"
},
"GetToMany": {
"preValue": "get",
"value": "PLURAL",
"postValue": "",
"modifier": "NONE"
}
}
}
The value attribute references an already existing variable which is modified through the given modifier. Valid modifiers can for example convert the given variable to an all lower case variant.
The resulting value is then concatenated with the preValue and postValue like so: preValue + modifier(value) + postValue (see the sketch after this list).
- Define a certain variable extension for an object by adding the id of the variable extension to the list of variable extensions of an object (see example above).
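As a sketch of this concatenation rule (in Go for illustration; the tool itself is written in NodeJS, and only the modifiers that appear in this documentation are handled):

package main

import (
    "fmt"
    "strings"
)

// applyExtension sketches the rule preValue + modifier(value) + postValue.
func applyExtension(pre, value, post, modifier string) string {
    if modifier == "ALL_LOWER_CASE" {
        value = strings.ToLower(value)
    }
    // "NONE" leaves the value unchanged
    return pre + value + post
}

func main() {
    fmt.Println(applyExtension("get", "Truck", "", "NONE"))                          // getTruck
    fmt.Println(applyExtension("thkoeln.st.", "Truck", ".domain", "ALL_LOWER_CASE")) // thkoeln.st.truck.domain
}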
Relation Variables
Relation Variables are used to randomize relations between entities. They are defined by declaring an array of relationShips and an array of relationObjects inside the variationsConfig. Both arrays must be of equal length because each set of relationObjects will be assigned to a relationShip.
In order to define a relationShip you have to provide an id and a reference to a relationShip type. These types are defined in the file relationsConfig and can contain any kind of attributes:
{
"id": "OneToOne",
"Umlet": "lt=-\nm1=1\nm2=1",
"Short": "1 - 1",
"Description": "one to one"
}
In order to define a set of relationObjects you have to provide an id and two object references. The following json shows an example definition for relations:
{
"relationShips": [
{
"id": "Rel1",
"relationType": "OneToOne"
},
{
"id": "Rel2",
"relationType": "OneToMany"
}
],
"relationObjects": [
{
"id": "RelVehicleWheel1",
"Obj1": "Vehicle",
"Obj2": "Wheel1"
},
{
"id": "RelVehicleWheel2",
"Obj1": "Vehicle",
"Obj2": "Wheel2"
}
]
}
For each relationship, two kinds of variables will be generated.
One kind of variable clarifies which objects belong to a certain relationship. These variables start with, for example, Rel1 as defined in the section relationShips.
Another kind of variable clarifies which relationship belongs to a set of objects. These variables start with, for example, RelVehicleWheel1 as defined in the section relationObjects.
For each of these two kinds a set of variables will be generated. The first set contains attributes of the relation types defined in the relationsConfig. The other set contains attributes of the objects defined in the variationsConfig.
The following json shows a set of variables which will be generated for a single relationship:
Rel1_Umlet: 'lt=-\nm1=1\nm2=1',
Rel1_Short: '1 - 1',
Rel1_Description: 'one to one',
Rel1_Obj1Class: 'Truck',
Rel1_Obj1RepoClass: 'TruckRepository',
Rel1_Obj1GetToOne: 'getTruck',
Rel1_Obj1GetToMany: 'getTrucks',
Rel1_Obj1SetToOne: 'setTruck',
Rel1_Obj1SetToMany: 'setTrucks',
Rel1_Obj2Class: 'Backwheel',
Rel1_Obj2RepoClass: 'BackwheelRepository',
Rel1_Obj2GetToOne: 'getBackWheel',
Rel1_Obj2GetToMany: 'getBackWheels',
Rel1_Obj2SetToOne: 'setBackWheel',
Rel1_Obj2SetToMany: 'setBackWheels',
RelVehicleWheel1_Umlet: 'lt=-\nm1=1\nm2=1',
RelVehicleWheel1_Short: '1 - 1',
RelVehicleWheel1_Description: 'one to one',
RelVehicleWheel1_Obj1Class: 'Truck',
RelVehicleWheel1_Obj1RepoClass: 'TruckRepository',
RelVehicleWheel1_Obj1GetToOne: 'getTruck',
RelVehicleWheel1_Obj1GetToMany: 'getTrucks',
RelVehicleWheel1_Obj1SetToOne: 'setTruck',
RelVehicleWheel1_Obj1SetToMany: 'setTrucks',
RelVehicleWheel1_Obj2Class: 'Backwheel',
RelVehicleWheel1_Obj2RepoClass: 'BackwheelRepository',
RelVehicleWheel1_Obj2GetToOne: 'getBackWheel',
RelVehicleWheel1_Obj2GetToMany: 'getBackWheels',
RelVehicleWheel1_Obj2SetToOne: 'setBackWheel',
RelVehicleWheel1_Obj2SetToMany: 'setBackWheels',
Logic variables
Logic Variables are used to randomize logic elements of an exercise. The idea behind this concept is that you can define multiple groups of business logic, but only one group of business logic is assigned to each individual exercise. Logic variables can also be used to define text which describes a certain business logic. Here is an example for the definition of logic variables:
{
"id": "VehicleLogic",
"logicVariations": [
{
"id": "VehicleCrash",
"Description": "Keep in mind that this text is just an example. \nThis is a new line"
},
{
"id": "VehicleShop",
"Description": "The Vehicle Shop exercise was selected"
}
]
}
The above example will generate only one variable, which is called VehicleLogicDescription. The interesting part of the logic variations are the ids. If you add an underscore followed by such an id to the end of a file name, this file is only inserted into an individual repository if the said id was selected during the randomization.
e.g.: The file VehicleCrashTest_VehicleCrash.java is only inserted if the logic VehicleCrash was selected. The file VehicleShopTest_VehicleShop.java is only inserted if the logic VehicleShop was selected.
This can be used to dynamically insert certain test classes which test a specific business logic. If a certain test class was not inserted to an individual repository the one who solves this exercise does not have to implement the corresponding business logic.
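The file selection rule just described can be sketched as follows (illustrative Go; the actual tool is NodeJS, and the function name is hypothetical):

package main

import (
    "fmt"
    "strings"
)

// includeForLogic sketches the rule described above: a file whose base name
// ends in _<logicId> is only included when that logic id was selected.
func includeForLogic(filename, selectedID string, allIDs []string) bool {
    base := filename
    if dot := strings.LastIndex(base, "."); dot >= 0 {
        base = base[:dot] // strip the extension
    }
    for _, id := range allIDs {
        if strings.HasSuffix(base, "_"+id) {
            return id == selectedID // keep only if this logic was selected
        }
    }
    return true // no logic suffix: always included
}

func main() {
    ids := []string{"VehicleCrash", "VehicleShop"}
    fmt.Println(includeForLogic("VehicleCrashTest_VehicleCrash.java", "VehicleCrash", ids)) // true
    fmt.Println(includeForLogic("VehicleShopTest_VehicleShop.java", "VehicleCrash", ids))   // false
}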
Variable Post Processing
Often variable values are needed not only in capital letters but also in lower-case format. Therefore, for each generated variable three different types are generated:
The first type is the variable itself without further changes, e.g.: VehicleClass -> MonsterTruck
The second type sets the first char to lower case, e.g.: vehicleClass -> monsterTruck
The third type sets all chars to lower case, e.g.: vehicleclass -> monstertruck
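As an illustration of these three variants (a Go sketch; the actual tool is NodeJS and the function name is hypothetical):

package main

import (
    "fmt"
    "strings"
    "unicode"
)

// firstLower lowers only the first character of a string.
func firstLower(s string) string {
    if s == "" {
        return s
    }
    r := []rune(s)
    r[0] = unicode.ToLower(r[0])
    return string(r)
}

// variants returns the three generated forms described above for one
// variable name/value pair: unchanged, first char lower case, all lower case.
func variants(name, value string) [][2]string {
    return [][2]string{
        {name, value},                                   // VehicleClass -> MonsterTruck
        {firstLower(name), firstLower(value)},           // vehicleClass -> monsterTruck
        {strings.ToLower(name), strings.ToLower(value)}, // vehicleclass -> monstertruck
    }
}

func main() {
    for _, v := range variants("VehicleClass", "MonsterTruck") {
        fmt.Printf("%s -> %s\n", v[0], v[1])
    }
}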
Variable Replacement
In the process of repository individualization, all defined variables will be replaced in all origin repository files with their corresponding value. Typically, every variable which should be replaced is decorated with a specific string at the start and the end of the variable, e.g.: $VehicleClass$ or xxxVehicleClassxxx. This string helps identify variables. If needed, this string can be set to an empty string. In this case the variable name can be inserted in specific files without further decoration. This can lead to problems in terms of variable replacement, so the Divekit will take certain measures to ensure that all variables are replaced correctly. This decoration string can be configured in the originRepositoryConfig:
{
"variables": {
"variableDelimeter": "$"
}
}
Variable Value Warnings
If this feature is activated within the repositoryConfig, the tool will emit warnings which will inform you if there are suspicious variable values remaining after variable placeholders have been replaced. If, for example, a learner has to solve an exercise which contains Trucks instead of Cars (see config above), then the solution of this learner should not contain variable values like "Car", "CarRepository", "setCar" or "setCars". In the originRepositoryConfig you can define a whitelist of file types which should be included in the warning process.
Additionally, an ignoreList can be configured. If a variable value is contained in one of the defined values inside the ignoreList, this specific variable value will not trigger a warning. In addition, the ignoreFileList can contain filenames which should be completely excluded from the warning process.
The following json is an example for the discussed configurable options:
{
"warnings": {
"variableValueWarnings": {
"typeWhiteList": [
"json",
"java",
"md"
],
"ignoreList": [
"name",
"type"
],
"ignoreFileList": [
"individualizationCheck_testrepo.json",
"variationsConfig_testrepo.json",
"IndividualizationTest_testrepo.java"
]
}
}
}
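A minimal TypeScript sketch of how such a warning scan could apply this configuration (all names are illustrative, not the tool's actual API):
interface VariableValueWarningsConfig {
  typeWhiteList: string[];  // file extensions to scan
  ignoreList: string[];     // values whose occurrences never warn
  ignoreFileList: string[]; // files excluded from the scan
}

// Returns the variable values that still occur in a file and should trigger a warning.
function findSuspiciousValues(
  fileName: string,
  content: string,
  variableValues: string[],
  config: VariableValueWarningsConfig
): string[] {
  const extension = fileName.split(".").pop() ?? "";
  if (!config.typeWhiteList.includes(extension)) return [];
  if (config.ignoreFileList.includes(fileName)) return [];
  return variableValues.filter(
    (value) =>
      // skip values contained in an ignoreList entry, warn on the rest
      !config.ignoreList.some((ignored) => ignored.includes(value)) &&
      content.includes(value)
  );
}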
Individual Repository Persist
If you run the tool, the default behaviour is that it generates individual variables for each repository specified in the repositoryConfig. If you want to reuse already generated variables, you can set "useSavedIndividualRepositories" to "true" and define a file name under "savedIndividualRepositoriesFileName". The file name is relative to the folder "resources/individual_repositories". These options are defined in the repositoryConfig:
{
"individualRepositoryPersist": {
"useSavedIndividualRepositories": true,
"savedIndividualRepositoriesFileName": "individual_repositories_22-06-2021 12-58-31.json"
}
}
A single entry in such an individual repositories file can be edited with a normal text editor and could look like this:
{
"id": "67e6be38-ae36-4fbf-9d03-0993d97f7559",
"members": [
"user1"
],
"individualSelectionCollection": {
"individualObjectSelection": {
"Vehicle": "Truck",
"Wheel1": "BackWheel",
"Wheel2": "FrontWheel"
},
"individualRelationSelection": {
"Rel1": "RelVehicleWheel2",
"Rel2": "RelVehicleWheel1"
},
"individualLogicSelection": {
"VehicleLogic": "VehicleCrash"
}
}
}
Components
The component diagram above shows the components of Divekit which are used in the process of generating and individualizing repositories. In the following, the repository generation process is explained step by step, and the components relevant in each step are described:
The Repository Creator delegates most of the tasks involved in the repository generation process to other components. Before repositories are generated, the Repository Creator calls the Repository Adapter to prepare the environment. This includes, for example, creating empty folders for repositories or deleting previous data contained in the destination folder. A Repository Adapter functions like an interface to the environment in which new repositories are generated. At the moment there are two kinds of Repository Adapters: one for the local file system and one for GitLab.
The Content Retriever retrieves all files from the configured origin repository. In order to access the origin repository, the component uses a Repository Adapter. If solution deletion is activated, the solution contained inside the origin repository is deleted from the retrieved origin files (not from the origin repository itself).
For each configured repository or learner, a specific configuration is generated by the Individual Repository Manager. This configuration is used by other components while generating repositories and contains, for example, a unique id and the usernames of learners. If individualization is activated, specific variations and corresponding variables are generated for each configuration by the Variation Generator. These variations and variables are also contained in the separate configurations generated by the Individual Repository Manager.
For each repository configuration generated by the Individual Repository Manager in the previous step, a Content Provider is instantiated. After varying the content by using the randomly generated variations from the previous step, the defined File Manipulators (File Converters) are executed. Finally, the resulting files are pushed to a new repository using a Repository Adapter.
After all Content Providers have finished generating their corresponding repositories, the Overview Generator collects basic information from the Content Providers and generates an overview of all links leading to Code Projects, Test Projects and Test Pages.
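The flow above can be summarized in a rough TypeScript sketch; all interfaces and names here are hypothetical and do not mirror the actual classes in the codebase:
interface RepositoryAdapter {
  prepareEnvironment(): Promise<void>;
  pushFiles(repoId: string, files: string[]): Promise<void>;
}

interface RepoConfig {
  id: string;
  members: string[];
}

async function generateRepositories(
  adapter: RepositoryAdapter,
  retrieveOriginFiles: () => Promise<string[]>,
  createConfigs: () => RepoConfig[],
  individualize: (files: string[], config: RepoConfig) => string[]
): Promise<string[]> {
  // Repository Creator: prepare the target environment first
  await adapter.prepareEnvironment();
  // Content Retriever: fetch the origin files (solution deletion already applied)
  const originFiles = await retrieveOriginFiles();
  const repoIds: string[] = [];
  // Individual Repository Manager: one configuration per repository/learner
  for (const config of createConfigs()) {
    // Content Provider: vary the content and run the File Manipulators
    const files = individualize(originFiles, config);
    // push the result via a Repository Adapter
    await adapter.pushFiles(config.id, files);
    repoIds.push(config.id);
  }
  // the Overview Generator collects links for these repositories afterwards
  return repoIds;
}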
The following table lists the relevant packages inside the codebase for each component:
Component | Relevant Packages
---|---
Repository Creator | repository_creation
Individual Repository Manager | repository_creation
Variation Generator | content_variation
Content Retriever | content_manager, solution_deletion
Content Provider | content_manager, content_variation, file_manipulator
Overview Generator | generate_overview
Repository Adapter | repository_adapter
Design-Decisions
Design-Decision | Explanation
---|---
Typescript chosen as programming language | Easy handling of dynamic json structures, Good API support for Gitlab, Platform independent, Can be executed locally with nodejs |
4.4 - Divekit Language Plugin
The documentation is not yet written. Feel free to add it yourself ;)
4.5 - Divekit Language Server
The documentation is not yet written. Feel free to add it yourself ;)
4.6 - Evaluation Processor
The documentation is not yet written. Feel free to add it yourself ;)
4.7 - Operator
The documentation is not yet written. Feel free to add it yourself ;)
Developed in a “Praxisprojekt” and not yet tested in practice.
4.8 - Passchecker
The documentation is not yet written. Feel free to add it yourself ;)
4.9 - Plagiarism Detector
The documentation is not yet written. Feel free to add it yourself ;)
4.10 - Repo Editor
The documentation is in a very early stage and some parts might be outdated.
The divekit-repo-editor allows the subsequent adjustment of individual files over a larger number of repositories.
The editor has two different functionalities: one is to adjust a file identically in all repositories, the other is to adjust individual files per repository based on the project name.
Setup & Run
- Install NodeJs (version >= 12.0.0), which is required to run this tool. NodeJs can be acquired from nodejs.org.
- To use this tool you have to clone this repository to your local drive.
- This tool uses several libraries in order to use the GitLab API etc. Install these libraries by running the command npm install in the root folder of this project.
- Configure the token:
  - Navigate to your profile and generate an Access Token / API token in order to get access to the GitLab API
  - Copy the Access Token
  - Rename the file .env.example to .env
  - Open .env and replace YOUR_API_TOKEN with the token you copied.
- Configure the application via src/main/config/ and add files to assets/, see below for more details.
- To run the application, navigate into the root folder of this tool and run npm start. All assets will be updated. Use npm run useSetupInput if you want to use the latest output of the automated-repo-setup as input for the edit.
Configuration
Place all files that should be edited in the corresponding directories:
input
└── assets
    ├── code
    │   ├── PROJECT-NAME-WITH-UUID
    │   │   └── <add files for a specific student here>
    │   └── ...
    ├── test
    │   ├── PROJECT-NAME-WITH-UUID
    │   │   └── <add files for a specific student here>
    │   └── ...
    └── <add files for ALL repos here>
src/main/config/editorConfig.json: Configure which groups should be updated and define the commit message:
{
"onlyUpdateTestProjects": false,
"onlyUpdateCodeProjects": false,
"groupIds": [
1862
],
"logLevel": "info",
"commitMsg": "individual update test"
}
Changelog
1.0.0
- add individual updates per project
0.1.1
- add feature to force create/update
0.1.0
- add feature to update or create files based on the given structure in asset/*/ for all repositories
0.0.1
- initialize project based on the divekit-evaluation-processor
4.11 - Report Mapper
Architecture overview
Usage in the pipeline
For usage in the pipeline you just need node as a prerequisite; then install and use the report-mapper as follows:
npm install @divekit/report-mapper
npx report-mapper
Keep in mind to provide the needed input data based on your configuration.
Complete sample test-repo pipeline-script
image: maven:3-jdk-11
stages:
- build
- deploy
build: # Build test reports
stage: build
script:
- chmod ugo+x ./setup-test-environment.sh
- ./setup-test-environment.sh # copy code from code repo and ensure that tests are NOT overridden
- mvn pmd:pmd # build clean code report
- mvn verify -fn # always return status code 0 => Continue with the next stage
allow_failure: true
artifacts: # keep reports for the next stage
paths:
- target/pmd.xml
- target/surefire-reports/TEST-*.xml
pages: # gather reports and visualize via gitlab-pages
image: node:latest
stage: deploy
script:
- npm install @divekit/report-mapper
- npx report-mapper # generate the unified.xml file
- npm install @divekit/report-visualizer
- npx report-visualizer --title $CI_PROJECT_NAME # generate page
artifacts:
paths:
- public
only:
- master
Configuration
The report mapper is configurable in two main ways:
- By defining which inputs are expected and therefore should be computed. This is configurable via parameters. You can choose from the following: pmd, checkstyle* and surefire. If none are provided, it defaults to surefire and pmd.
npx report-mapper [surefire pmd checkstyle]
- The second option is specific to PMD. PMD itself has a configuration file pmd-ruleset.xml which configures which PMD rules should be checked. The report mapper also reads from this file and shapes its output based on the available rules.
Note: The assignment of PMD rules to clean code and solid principles is as of now hardcoded and not configurable.
*The checkstyle-mapper is currently not included in the testing and therefore should be used with caution.
Example of a simplified pmd-ruleset.xml:
<?xml version="1.0"?>
<ruleset name="Custom Rules"
xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
<description>
Clean Code Rules
</description>
<!-- :::::: CLEAN CODE :::::: -->
<!-- Naming rules -->
<rule ref="category/java/codestyle.xml/ClassNamingConventions"/>
<rule ref="category/java/codestyle.xml/FieldNamingConventions"/>
<!-- :::::::: SOLID :::::::: -->
<!-- SRP (Single Responsibility Principle) rules -->
<rule ref="category/java/design.xml/TooManyFields"/> <!-- default 15 fields -->
<rule ref="category/java/design.xml/TooManyMethods"> <!-- default is 10 methods -->
<properties>
<property name="maxmethods" value="15" />
</properties>
</rule>
</ruleset>
Getting started
Install
Clone the repository and install everything necessary:
# HTTP
git clone https://github.com/divekit/divekit-report-mapper.git
# SSH
git clone git@github.com:divekit/divekit-report-mapper.git
cd ./divekit-report-mapper
npm ci # install all dependencies
npm test # check that everything works as intended
Provide input data
The input data should be provided in the following structure:
divekit-report-mapper
├── target
│   ├── surefire-reports
│   │   ├── fileForTestGroupA.xml
│   │   ├── fileForTestGroupB.xml
│   │   └── ...
│   ├── checkstyle-result.xml
│   └── pmd.xml
└── ...
You can find some examples for valid and invalid inputs in the tests: src/test/resources
npm run dev
Understand the Output
The result from the divekit-report-mapper is an XML file (target/unified.xml). It contains the results of all input sources in a uniform format. This also includes errors if some or all inputs provided invalid or unexpected data.
Example with only valid data:
<?xml version="1.0" encoding="UTF-8"?>
<suites>
<testsuite xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation=""
name="E2CleanCodeSolidManualTest" failures="0" type="JUnit" status="failed">
<testcase name="testCleanCodeAndSolidReview" status="failed" hidden="false">
<error message="-%20break%20pipeline%20%3C--%0A" type="java.lang.Exception"><![CDATA[java.lang.Exception:
- break pipeline <--
at thkoeln.st.st2praktikum.exercise1.E2CleanCodeSolidManualTest.testCleanCodeAndSolidReview(E2CleanCodeSolidManualTest.java:13)]]>
</error>
</testcase>
</testsuite>
<testsuite name="Clean-Code-Principles by PMD" status="failed" type="CleanCode">
<testcase name="Keep it simple, stupid" status="passed" hidden="false"></testcase>
<testcase name="Meaningful names" status="failed" hidden="false">
<error type="LocalVariableNamingConventions" location="Line: 90 - 90 Column: 13 - 22"
file="C:\work\gitlab-repos\ST2MS0_tests_group_d5535b06-ae29-4668-8ad9-bd23b4cc5218\src\main\java\thkoeln\st\st2praktikum\bad_stuff\Robot.java"
message="The local variable name 'snake_case' doesn't match '[a-z][a-zA-Z0-9]*'"></error>
</testcase>
</testsuite>
</suites>
For further examples see the tests in src/test/resources.
Deployment
All pipeline scripts normally use the latest version from npmjs.com.
The repository is set up with three different GitHub Actions workflows which trigger on pushes to the branches main, stage and development.
- main: Build, run tests and publish new npm package. Fails if: build/tests fail, the version is a beta version or the version has not been updated
- stage: same as main but the version must be a beta-version and the package is tagged as beta
- development: Build and run all tests
Version
Complete packages available at npmjs.com. The versioning is mostly based on semantic versioning.
1.1.2
- fixed a bug which caused packages to fail if they were built through the automated GitHub Actions workflow
1.1.1
- moved docs from readme to divekit-docs
- add continuous delivery pipeline
- switch to eslint
- add configurability of pmd principles
- add surefire parsing error flag
- update scripts according to new report-visualizer naming
1.0.8
- Parameters processing added, which allow a restriction of the used mappers
- Error handling: If a mapper does not deliver a valid result, an error is indicated in the unified.xml.
4.12 - Report Visualizer
Architecture overview
Usage in the pipeline
For usage in the pipeline you just need node as a prerequisite, and you must provide the input data target/unified.xml.
Install and use the report-visualizer as follows:
npm install @divekit/report-visualizer
npx report-visualizer --title PROJECT_NAME
Complete sample test-repo pipeline-script
image: maven:3-jdk-11
stages:
- build
- deploy
build: # Build test reports
stage: build
script:
- chmod ugo+x ./setup-test-environment.sh
- ./setup-test-environment.sh # copy code from code repo and ensure that tests are NOT overridden
- mvn pmd:pmd # build clean code report
- mvn verify -fn # always return status code 0 => Continue with the next stage
allow_failure: true
artifacts: # keep reports for the next stage
paths:
- target/pmd.xml
- target/surefire-reports/TEST-*.xml
pages: # gather reports and visualize via gitlab-pages
image: node:latest
stage: deploy
script:
- npm install @divekit/report-mapper
- npx report-mapper # generate the unified.xml file
- npm install @divekit/report-visualizer
- npx report-visualizer --title $CI_PROJECT_NAME # generate page
artifacts:
paths:
- public
only:
- master
Getting started
Install
Clone the repository and install everything necessary:
# HTTP
git clone https://github.com/divekit/divekit-report-visualizer.git
# SSH
git clone git@github.com:divekit/divekit-report-visualizer.git
cd ./divekit-report-visualizer
npm ci # install all dependencies
Provide input data
The input data should be provided in the following structure:
divekit-report-visualizer
├── target
│   └── unified.xml
└── ...
Run it
Run directly with the provided input target/unified.xml:
node bin/report-visualizer
Use the predefined input assets/xml-examples/unified.xml:
npm run dev
Or use the divekit-report-mapper result*:
npm run dev++
*Requires that the divekit-report-visualizer is located in the same directory as the divekit-report-mapper.
Output (GitLab Pages)
The output is written to the /public directory, which is used for GitLab Pages or can be mounted anywhere.
divekit-report-visualizer
├── target
│   └── unified.xml
├── public
│   ├── index.html
│   └── style.css
└── ...
The following picture shows an example output with passed tests (green), test failures (orange), errors (red) and a note (gray).
Deployment
Currently completely manual. In the future this will be done similarly to the report-mapper.
All pipeline scripts normally use the latest version from npmjs.com.
Version
Complete packages available at npmjs.com. The versioning is mostly based on semantic versioning.
1.0.3
- Updated naming: from divekit-new-test-page-generator to divekit-report-visualizer
1.0.2
- Added hidden metadata in the header indicating the number of failed tests.
- Added possibility to pass a special ‘NoteTest’ test case which is displayed separately.
- Updated the error message for generation problems so that it is displayed even if only parts of the test page could not be generated.
- Fixed an error where the test page could not be generated if there was no input.
4.13 - Test Library
The documentation is not yet written. Feel free to add it yourself ;)
4.14 - Test page generator
The documentation is not yet written. Feel free to add it yourself ;)