
Development

Documentation for Divekit developers and contributors.

Resources for developers who want to contribute to Divekit.

1 - Architecture

Technical architecture and component documentation.

This section covers Divekit’s technical architecture.

The architecture documentation helps developers understand how Divekit works internally.

Components

  • Core Components: Detailed documentation of core components and their interactions

1.1 - Overview

Overview of Divekit’s architecture and how the components work together.

Divekit is a tool that helps instructors to create and distribute repositories to students.

High-Level Overview

graph TB
    INST((Instructors))
    ORIGIN[Origin Repository]
    CLI[Divekit CLI]
    DIST[Distribution]
    REPOSTUDENT[Student Repositories]
    REPOTEST[Test Repositories]
    STUDENTS((Students))
    TPAGE[Test Pages]

    INST -->|Develop| ORIGIN
    INST -->|Use| CLI
    ORIGIN -->|Input| CLI
    CLI -->|Generate| DIST
    DIST --- REPOTEST
    DIST --- REPOSTUDENT
    STUDENTS -->|Work on| REPOSTUDENT
    TPAGE -->|Get feedback| STUDENTS
    REPOSTUDENT --->|Update| REPOTEST
    REPOTEST --->|Update| TPAGE

    style CLI fill:#42b050,stroke:#333
    style ORIGIN fill:#fcf,stroke:#333
    style DIST fill:#a3e87e,stroke:#333
    style INST fill:#ff9,stroke:#333
    style STUDENTS fill:#ff9,stroke:#333
    style REPOSTUDENT fill:#6fc5ff,stroke:#333
    style REPOTEST fill:#6fc5ff,stroke:#333

Component Details

Divekit CLI

The CLI serves as the central interface for instructors. It controls the entire process of task distribution and management. All necessary commands for creating, distributing, and managing repositories are executed through the CLI.

Origin Repository

The Origin Repository contains the initial version of assignments and tests. It serves as a master template from which individualized versions for students are generated. This is where the original assignments, code scaffolds, and test cases are maintained.

Distribution

A Distribution is the result of the distribution process and consists of two main components:

Student Repositories

Individualized repositories for each student or group, containing:

  • Personalized assignments
  • Adapted code scaffolds
  • Specific resources

Test Repositories

Separate repositories containing test cases and evaluation criteria:

  • Automated tests
  • Assessment metrics
  • Feedback mechanisms

Test Page

A page where students can get feedback on their work.

Students

Students are the users who work on the repositories. They can be individuals or groups.

Instructor

The instructor is the user who creates the repositories and distributes them to the students.

1.2 - Components

Detailed documentation of Divekit’s core components and their interactions.

This document describes the core components of Divekit and how they interact.

Components Overview

graph TB
    subgraph interfaces
        CLI[CLI Interface]
        WebUI[Web Interface]
    end
    style WebUI stroke-dasharray: 5 5

    subgraph core[Modules]
        ModuleEntry((   ))
        style ModuleEntry fill:none,stroke:none
        
        Config[Configuration Manager]
        GitAdapter[GitLab Adapter]
        Indiv[Individualization]
        Pass[Passchecker]
        Plag[Plagiarism Checker]
        User[Usermanagement]
    end
    
    CLI --> ModuleEntry
    WebUI -.-> ModuleEntry
    
    Pass --> GitAdapter
    Plag --> GitAdapter
    User --> GitAdapter
    GitAdapter --> GitLab[GitLab API]

Interfaces

  • CLI Interface: Central command-line interface for all user interactions
  • Web Interface (planned): Alternative user interface that uses the same modules as the CLI

Modules

  • Configuration Manager: Manages all configuration files and user settings
  • GitLab Adapter: Central component for all GitLab interactions
  • 🚧 Individualization: Handles the individualization of tasks
  • 🚧 Passchecker: Checks submissions and communicates with GitLab
  • 🚧 Plagiarism Checker: Detects possible plagiarism and interacts with GitLab
  • 🚧 Usermanagement: Manages users and their permissions through GitLab
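
To illustrate the dependency direction shown above, where the modules access GitLab only through the GitLab Adapter, here is a purely hypothetical Go sketch. None of the identifiers below are taken from the actual codebase:

// Purely illustrative: shows how a module can depend on the GitLab Adapter
// instead of calling the GitLab API directly.
package core

// GitLabAdapter is a hypothetical interface wrapping the GitLab API calls
// that the other modules need.
type GitLabAdapter interface {
	CreateRepository(name string) (projectID int, err error)
	AddMember(projectID int, username string, accessLevel string) error
}

// UserManagement receives the adapter via dependency injection and never
// talks to the GitLab API itself.
type UserManagement struct {
	GitLab GitLabAdapter
}

// Grant gives a user access to a project with the given rights.
func (u *UserManagement) Grant(projectID int, username, rights string) error {
	return u.GitLab.AddMember(projectID, username, rights)
}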

1.3 - Configuration

Divekit uses a hierarchical configuration system with both global and project-specific settings.


Configuration Levels

Divekit uses a multi-level configuration system based on the frequency of changes:

[0] Installation

Configurations that are set once during DiveKit installation and rarely changed afterwards. These contain global defaults and environment settings.

~
└── .divekit/
    ├── .env                  # Environment variables
    ├── hosts.json            # Hosts configuration
    ├── members               # Members configuration
    │   ├── 2025-01-21_12-28-15_pear_members.json
    │   ├── 2025-01-27_12-29-00_raspberry_members.json
    │   └── 2025-01-27_12-40-02_sandwich_members.json
    ├── origin.json           # Origin configuration
    └── variation             # Variation configuration (not finalized)
        ├── relations.json    # Relations configuration
        ├── variableExtensions.json # Variable extensions configuration
        └── variations.json   # Variations configuration

Environment Configuration

~/.divekit/.env:

API_TOKEN=YOUR_ACCESS_TOKEN
DEFAULT_BRANCH=main

Remotes

Default: ~/.divekit/hosts.json:

{
    "version": "1.0",
    "hosts": {
        "default": {
            "host": "https://gitlab.git.nrw/",
            "token": "DIVEKIT_API_TOKEN"
        }
    }
}

Example: ~/.divekit/hosts.json:

{
    "version": "1.0",
    "hosts": {
        "default": {
            "host": "https://gitlab.git.nrw/",
            "tokenAt": "DIVEKIT_API_TOKEN"
        },
        "archilab": {
            "host": "https://gitlab.archi-lab.io/",
            "tokenAt": "DIVEKIT_API_TOKEN_ARCHILAB"
        },
        "gitlab": {
            "host": "https://gitlab.com/",
            "tokenAt": "DIVEKIT_API_TOKEN_GITLABCOM"
        }
    }
}
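
As an illustration of how such a configuration can be consumed, the following Go sketch reads ~/.divekit/hosts.json and resolves the access token via the environment variable named in tokenAt (compare the .env file above). All type and function names here (Host, HostsConfig, loadHost) are hypothetical and not the actual Divekit code:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// Host mirrors one entry of ~/.divekit/hosts.json as shown in the example above.
type Host struct {
	Host    string `json:"host"`
	TokenAt string `json:"tokenAt"` // name of the env variable holding the token
}

type HostsConfig struct {
	Version string          `json:"version"`
	Hosts   map[string]Host `json:"hosts"`
}

// loadHost reads hosts.json and resolves the access token via the
// environment variable named in tokenAt (e.g. set through ~/.divekit/.env).
func loadHost(name string) (Host, string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return Host{}, "", err
	}
	data, err := os.ReadFile(filepath.Join(home, ".divekit", "hosts.json"))
	if err != nil {
		return Host{}, "", err
	}
	var cfg HostsConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		return Host{}, "", err
	}
	host, ok := cfg.Hosts[name]
	if !ok {
		return Host{}, "", fmt.Errorf("host %q not found in hosts.json", name)
	}
	return host, os.Getenv(host.TokenAt), nil
}

func main() {
	host, token, err := loadHost("default")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(host.Host, "token set:", token != "")
}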

[1] Semester

Configurations that are typically set at the beginning of each semester. These define course-wide settings and distribution templates.

{ORIGIN_DIR}
└── .divekit/                 # Project configuration
    └── distributions/        
        ├── ST1-M1/           # Sandbox environment config
        │   └── config.json   # Distribution settings
        └── ST1-M2/           # Student environment config
            └── config.json   # Distribution settings

Distribution Configuration (Example)

{ORIGIN}/.divekit/distributions/<distribution>/config.json:

{
  "version": "2.0",
  "targets": {
    "default": {
      "remote": "default", // optional
      "groupId": 12345,    // optional (if set in global config)
      "name": "ST1-M1-{{uuid}}",
      "members": {
        "path": "$DIVEKIT_MEMBERS/2025-01-25_13-37_ST1-M1_members.json",
        "rights": "reporter"
      }
    },
    "test": {
      "remote": "gitlab",  
      "groupId": 67890,    // optional (if set in global config)
      "name": "ST1-M1-{{uuid}}_test",
      "members": {
        "path": "$DIVEKIT_MEMBERS/2025-01-25_13-37_ST1-M1_members.json",
        "rights": null
      }
    }
  }
}
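
The name fields contain a {{uuid}} placeholder, which is presumably expanded per group with the group's uuid from the members file. A minimal sketch of that substitution (repoName is a hypothetical helper, not the actual Divekit implementation):

package main

import (
	"fmt"
	"strings"
)

// repoName expands the {{uuid}} placeholder from a target's "name" template
// for one group.
func repoName(template, groupUUID string) string {
	return strings.ReplaceAll(template, "{{uuid}}", groupUUID)
}

func main() {
	// Example group UUID as it would appear in a members file.
	fmt.Println(repoName("ST1-M1-{{uuid}}", "4a28af44-f2cd-4a9e-a93f-2f4c29d6dfc0"))
	// Output: ST1-M1-4a28af44-f2cd-4a9e-a93f-2f4c29d6dfc0
}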

[2] Milestone

Configurations that change with each milestone or assignment. These include specific repository settings and member assignments.

{ORIGIN_DIR}
└── .divekit/
    └── distributions/
        └── <distribution>/   # e.g. ST1-M1
            └── config.json   # Milestone-specific settings

Members Configuration

members.csv:

username
tbuck
ada
charles
jobs
woz

generates:

~/.divekit/members/2025-01-25_13-37_ST1-M1_members.json:

{
  "version": "2.0",
  "groups": [                       // ? rename to "members"?
    {
      "uuid": "4a28af44-f2cd-4a9e-a93f-2f4c29d6dfc0",
      "members": [                  // ? rename to "group"?
        "torben.buck"
      ]
    },
    {
      "uuid": "3dc6bbc1-a4eb-44fd-80fc-230bea317bc1",
      "members": [
        "ada"
      ]
    },
    {
      "uuid": "1fe6aa82-e04b-435f-8023-10104341825d",
      "members": [
        "charles"
      ]
    },
    {
      "uuid": "eb64c6af-67da-4f55-ae3a-d4b2a02baae6",
      "members": [
        "jobs"
      ]
    },
    {
      "uuid": "ade17515-bdb9-4398-90c1-cfc078f5ec36",
      "members": [
        "woz"
      ]
    }
  ]
}
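
How the JSON is generated from the CSV is sketched below, purely for illustration: the struct names, the one-group-per-username assumption, and the use of the google/uuid package are assumptions, not the actual Divekit implementation.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/google/uuid"
)

// Group and MembersFile mirror the generated members JSON shown above.
type Group struct {
	UUID    string   `json:"uuid"`
	Members []string `json:"members"`
}

type MembersFile struct {
	Version string  `json:"version"`
	Groups  []Group `json:"groups"`
}

func main() {
	// Usernames as read from members.csv (header line skipped).
	usernames := []string{"tbuck", "ada", "charles", "jobs", "woz"}

	file := MembersFile{Version: "2.0"}
	for _, name := range usernames {
		// One single-member group per username; group work would list
		// several usernames in the same Members slice.
		file.Groups = append(file.Groups, Group{
			UUID:    uuid.NewString(),
			Members: []string{name},
		})
	}

	out, _ := json.MarshalIndent(file, "", "  ")
	fmt.Println(string(out))
}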

[3] 🚧 Call

Configurations that can be overridden during command execution. Any configuration value from the previous levels can be overridden using command-line arguments.

Examples:

# Specify individual files for patching
divekit patch --distribution="sandbox" src/main/java/Exercise.java src/test/java/ExerciseTest.java

# set debug loglevel
divekit patch --loglevel=debug

2 - Contributing

Guidelines for contributing to Divekit.

Learn how to contribute to the Divekit project.

2.1 - Development Setup

How to set up your development environment for contributing to Divekit.

This guide will help you set up your development environment for contributing to Divekit.

Prerequisites

  • Command Line access
  • Internet connection
  • Go 1.23 or higher
  • GitLab
    • Access Token
    • Group IDs
  • (Git)
  • (npm)

Setting Up the Development Environment

  1. Clone the repository:
git clone https://gitlab.git.nrw/divekit/tools/divekit-cli.git
  2. Navigate to the project directory:
cd divekit-cli
  3. Install the required dependencies:
go mod download

Install the local modules (this may become optional later, but it is a huge help during development):

mkdir pkg
cd pkg
git clone https://gitlab.git.nrw/divekit/modules/gitlab-adapter
git clone https://gitlab.git.nrw/divekit/modules/config-management

cd ..
go work init
go work use ./pkg/gitlab-adapter
go work use ./pkg/config-management
  4. Build the CLI:

chmod +x build.sh
./build.sh

Then answer the questions or just press Enter for the default values (windows, amd64).

This will create a divekit executable in the bin directory. You can run this executable from the command line to use the CLI or run install on it to install it globally.

For Example:

./bin/divekit_windows_amd64.exe install

This will install the divekit command globally on your system. You can now run divekit from any directory.

  5. Run the CLI:
./bin/divekit_windows_amd64.exe

# or

divekit

…or if you want to execute directly from the source code:

go run cmd/divekit/main.go
  6. Run the tests:
go test ./...
  7. Make your changes and submit a merge request.

2.2 - Error Handling

Guidelines and patterns for error handling in Divekit.

The project implements a structured error handling system that distinguishes between critical and non-critical errors. This pattern is currently implemented in the distribute package and can serve as a template for other packages.

Error Pattern

Each package can define its own error types and handling behavior. The pattern consists of:

  1. A custom error type that implements the error interface
  2. Specific error types as constants
  3. Methods to determine error severity and behavior

Example from the distribute package:

// Custom error type
type CustomError struct {
    ErrorType ErrorType
    Message   string
    Err       error
}

// Error types
const (
    // Critical errors that lead to termination
    ErrConfigLoad       // Configuration loading errors
    ErrWorkingDir      // Working directory access errors
    
    // Non-critical errors that trigger warnings
    ErrMembersNotFound // Member lookup failures
)
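
To make the pattern concrete, here is a minimal sketch of the constructor and helper methods that the usage example below relies on. NewCustomError and IsCritical are taken from that example; the exact severity mapping is an assumption based on the comments in the const block above:

// Constructor for the custom error type
func NewCustomError(errorType ErrorType, message string, err error) *CustomError {
    return &CustomError{ErrorType: errorType, Message: message, Err: err}
}

// Error implements the error interface.
func (e *CustomError) Error() string {
    if e.Err != nil {
        return e.Message + ": " + e.Err.Error()
    }
    return e.Message
}

// Unwrap preserves the original error context for errors.Is / errors.As.
func (e *CustomError) Unwrap() error {
    return e.Err
}

// IsCritical reports whether the error should terminate the current operation
// (assumed mapping: configuration and working directory errors are critical).
func (e *CustomError) IsCritical() bool {
    switch e.ErrorType {
    case ErrConfigLoad, ErrWorkingDir:
        return true
    default:
        return false
    }
}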

Example Implementation

Here’s how to implement this pattern in your package:

// Create a new error
if err := loadConfig(); err != nil {
    return NewCustomError(ErrConfigLoad, "failed to load configuration", err)
}

// Handle non-critical errors
if err := validateData(); err != nil {
    if !err.IsCritical() {
        log.Warn(err.Error())
        // Continue execution...
    } else {
        return err
    }
}

Error Behavior

Each package can define its own error behavior, but should follow these general principles:

  • Critical Errors: Should terminate the current operation
  • Non-Critical Errors: Should generate warnings but allow continuation
  • Wrapped Errors: Should preserve the original error context

Each error should include:

  • An error type indicating its severity
  • A descriptive message
  • The original error (if applicable)
  • A method to determine if it’s critical

This pattern provides consistent error handling while remaining flexible enough to accommodate different package requirements. The distribute package provides a reference implementation of this pattern.

2.3 - Contributing Guidelines

Guidelines and best practices for contributing to the Divekit project.

Thank you for considering contributing to Divekit! This document outlines our contribution process and guidelines.

Code of Conduct

  • Be respectful and inclusive
  • Follow professional standards
  • Help others learn and grow
  • Report unacceptable behavior

Getting Started

  1. Fork the repository
  2. Set up your development environment
  3. Create a feature branch
  4. Make your changes
  5. Submit a pull request

Development Process

Branching Strategy

  • main: Production-ready code
  • develop: Integration branch
  • Feature branches: feature/your-feature
  • Bugfix branches: fix/issue-description

Commit Messages

Follow conventional commits:

type(scope): description

[optional body]

[optional footer]

The commit message header consists of three parts:

  • type: Categorizes the type of change (see below)
  • scope: Indicates the section of the codebase being changed (e.g. cli, core, config, parser)
  • description: Brief description of the change in imperative mood

Examples:

  • feat(cli): add new flag for verbose output
  • fix(parser): handle empty config files correctly
  • docs(readme): update installation instructions
  • test(core): add tests for user authentication

Types:

  • feat: New feature or functionality
  • fix: Bug fix
  • docs: Documentation changes
  • style: Formatting, missing semicolons, etc. (no code changes)
  • refactor: Code restructuring without changing functionality
  • test: Adding or modifying tests
  • chore: Maintenance tasks, dependencies, etc.

The body should explain the “why” of the change, while the description explains the “what”.
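
For example, a complete commit message with body and footer might look like this (the issue reference is purely illustrative):

fix(parser): handle empty config files correctly

An empty config file previously caused the parser to fail with a nil
pointer error. Treat an empty file as an empty configuration and log a
warning instead.

Closes #42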

Pull Requests

  1. Update documentation
  2. Add/update tests
  3. Ensure CI passes
  4. Request review
  5. Address feedback

Code Style

  • Follow Go best practices and idioms
  • Use gofmt for consistent formatting
  • Follow the official Go Code Review Comments
  • Use golint and golangci-lint
  • Write clear, idiomatic Go code
  • Keep functions focused and well-documented

Testing

  • Write unit tests using the standard testing package
  • Use table-driven tests where appropriate
  • Aim for good test coverage
  • Write integration tests for complex functionality
  • Use go test for running tests
  • Consider using testify for assertions

Documentation

  • Write clear godoc comments
  • Update README.md and other documentation
  • Include examples in documentation
  • Document exported functions and types
  • Keep documentation up to date with changes

Review Process

  1. Automated checks (golangci-lint, tests)
  2. Code review
  3. Documentation review
  4. Final approval
  5. Merge

Release Process

  1. Version bump
  2. Changelog update
  3. Tag release
  4. Documentation update

3 - Work in Progress

Currently under development.

3.1 - πŸ“ Notes

Notes for Divekit development

2024-10-01 Stefan, Torben (via Discord)

divekit patch

  • Individual files are passed to the command
  • Local testing is important for verification
  • Variables are also replaced during patching
  • Files are currently patched individually (can also be done in one commit)

divekit distribute

  • Not push because:
    • git push
      • Performs consistency checks (merge needed, missing pulls)
      • There are differences between Origin and Remote (variables)
      • The target is not the client but a creation operation within the server (?)

2024-09-12 Stefan, Fabian, Torben (in Person)

config

  • Distribution “test” -> “supervisor” -(later)-> “sandbox”
  • Distribution “code” -> “student”

divekit doctor

  • move to another “error control” command?
  • execute before other appropriate commands and possibly abort

divekit install

  • Possibly look into open source to see how others do it
  • Offer executables, divekit install copies the/an executable into the home directory and writes the path to the divekit executable in the PATH (and an update executable?).
  • divekit install, which copies divekit into the user directory and adds the divekit path to the PATH (and maybe already prepares all the doctor preparations)

divekit init

  • Merge with members for latecomers
  • Also update overview (new members missing)
  • Re-running ensures everything is in place

2024-09-12 Stefan, Fabian, Torben (in person)

divekit doctor

  • move to another “error control” command?
  • execute before other appropriate commands and possibly abort

divekit distribute

  • push -> create?
  • push -> distribute! (favorite)

3.2 - Config Redesign

Documentation of the configuration system redesign for DiveKit. This page describes the current state, planned changes, and future configuration structure.

Current State

  • ARS
  • RepoEditor (-> PatchTool)
  • OriginRepo

Assigned Configurations

[0] INIT

Configurations that typically only need to be defined once during installation.

Optimally in: {$HOME}/.divekit/

[1] SEMESTER

Configurations that typically only need to be defined once per semester. They are best stored in the OriginRepo.

Optimally in: {OriginRepo}/.divekit_norepo/{distribution}/

[2] MILESTONE

Configurations that typically only need to be defined once per milestone. They are best stored in the OriginRepo.

Optimally in: {OriginRepo}/.divekit_norepo/{distribution:{milestone}}/

[3] CALL

Configurations that must be defined with each call.

Optimally in: CLI flags

Future

[0] INIT

{ARS}/.env will be stored in {$HOME}/.divekit/

ACCESS_TOKEN=YOUR_ACCESS_TOKEN
HOST=https://git.st.archi-lab.io
BRANCH=main

{ARS}/originRepositoryConfig.json -> {$HOME}/.divekit/origin.json

Will be stored here during installation and then copied to the new Origin Repos during divekit init.

{
    "variables": {
        "variableDelimiter": "$"
    },
    "solutionDeletion": {
        "deleteFileKey": "//deleteFile",
        "deleteParagraphKey": "//delete",
        "replaceMap": {
            "//unsup": "throw new UnsupportedOperationException();",
            "//todo": "// TODO"
        }
    },
    "warnings": {
        "variableValueWarnings": {
            "typeWhiteList": ["json", "java", "md"],
            "ignoreList": ["name", "type"]
        }
    }
}

Suggested change:

{
    "version": "2.0",
    "variables": {
        "delimiter": "$"
    },
    "solutionCleanup": {
        "deleteFile": "//deleteFile",
        "replaceParagraph": {
            "//unsup": "throw new UnsupportedOperationException();",
            "//todo": "// TODO",
            "//delete": null
        }
    },
    "warnings": {
        "variation": {
            "fileTypes": ["json", "java", "md"],
            "ignore": ["name", "type"]
        }
    }
}

{ARS}/relationsConfig.json -> {$HOME}/.divekit/variation/relations.json

Will be stored here during installation and then copied to the new Origin Repos during divekit init.

[!NOTE]
I don’t fully understand what this is for - it may remain here forever and not need to be copied to the Origin Repo?
(what is UmletRev? What does the star mean?)

[
    {
        "id": "OneToOne",
        "Umlet": "lt=-\nm1=1\nm2=1",
        "UmletRev": "lt=-\nm1=1\nm2=1",
        "Short": "1 - 1",
        "Description": "one to one"
    },
    {
        "id": "OneToMany",
        "Umlet": "lt=-\nm1=1\nm2=*",
        "UmletRev": "lt=-\nm1=*\nm2=1",
        "Short": "1 - n",
        "Description": "one to many"
    },
    {
        "id": "ManyToOne",
        "Umlet": "lt=-\nm1=*\nm2=1",
        "UmletRev": "lt=-\nm1=1\nm2=*",
        "Short": "n - 1",
        "Description": "many to one"
    },
    {
        "id": "ManyToMany",
        "Umlet": "lt=-\nm1=*\nm2=*",
        "UmletRev": "lt=-\nm1=*\nm2=*",
        "Short": "n - m",
        "Description": "many to many"
    }
]

Suggested change:

  • id->key ?
{
    "version": "2.0",
    "relations": [
        {
            "id": "OneToOne",
            "umlet": "lt=-\nm1=1\nm2=1",
            "umletRev": "lt=-\nm1=1\nm2=1",
            "short": "1 - 1",
            "description": "one to one"
        },
        {
            "id": "OneToMany",
            "umlet": "lt=-\nm1=1\nm2=*",
            "umletRev": "lt=-\nm1=*\nm2=1",
            "short": "1 - n",
            "description": "one to many"
        },
        {
            "id": "ManyToOne",
            "umlet": "lt=-\nm1=*\nm2=1",
            "umletRev": "lt=-\nm1=1\nm2=*",
            "short": "n - 1",
            "description": "many to one"
        },
        {
            "id": "ManyToMany",
            "umlet": "lt=-\nm1=*\nm2=*",
            "umletRev": "lt=-\nm1=*\nm2=*",
            "short": "n - m",
            "description": "many to many"
        }
    ]
}

{ARS}/variableExtensionsConfig.json -> {$HOME}/.divekit/variation/variableExtensions.json

Will be stored here during installation and then copied to the new Origin Repos during divekit init.

[
    {
        "id": "Basic",
        "variableExtensions": {
            "": {
                "preValue": "",
                "value": "id",
                "postValue": "",
                "modifier": "NONE"
            },
            "Class": {
                "preValue": "",
                "value": "id",
                "postValue": "",
                "modifier": "NONE"
            },
            "Package": {
                "preValue": "",
                "value": "Class",
                "postValue": "",
                "modifier": "ALL_LOWER_CASE"
            },
            "ClassPath": {
                "preValue": "thkoeln.st.st2praktikum.racing.", // ??? deprecated ???
                "value": "Class",
                "postValue": ".domain",
                "modifier": "ALL_LOWER_CASE"
            }
        }
    },
    {
        "id": "Getter",
        "variableExtensions": {
            "GetToOne": {
                "preValue": "get",
                "value": "Class",
                "postValue": "",
                "modifier": "NONE"
            },
            "GetToMany": {
                "preValue": "get",
                "value": "s",
                "postValue": "",
                "modifier": "NONE"
            }
        }
    }
]

Questions

From my notes

I thought I had written this somewhere already, but I can’t find it anymore.

  • [0] INIT -> “Installation” exists twice
    • Once during DiveKit installation
    • Once during DiveKit initialization in a new OriginRepo

So what should go where (have ideas)?

  • Is the preValue still needed?
    I unfortunately don’t remember exactly what/why, but this was causing some significant issues.

3.3 - Deployment

How to deploy and release new versions of Divekit.

[!WARNING]
Not implemented this way yet - the current process is shown in the gif below.

This guide covers the process of deploying and releasing new versions of Divekit.

Version Management

Semantic Versioning

Divekit follows Semantic Versioning:

  • MAJOR version for incompatible API changes
  • MINOR version for new functionality
  • PATCH version for bug fixes

Version Tagging

# Current version is v2.0.0

# Bump patch version (e.g., v2.0.0 -> v2.0.1)
./deploy.sh patch

# Bump minor version (e.g., v2.0.0 -> v2.1.0)
./deploy.sh minor

# Bump major version (e.g., v2.0.0 -> v3.0.0)
./deploy.sh major

# Create alpha/beta versions
./deploy.sh minor -alpha.1  # Creates v2.1.0-alpha.1
./deploy.sh patch -beta.2   # Creates v2.0.1-beta.2

# Rollback options
./deploy.sh rollback        # Removes current tag and returns to previous version
./deploy.sh rollback v2.1.0 # Removes specific version tag

Example (current state)

[GIF: Versioning]

Release Process

  1. Update version using deploy.sh:
./deploy.sh <patch|minor|major> [-alpha.N|-beta.N]
  2. Update CHANGELOG.md:
## [2.0.1] - YYYY-MM-DD

### Added
- New feature X
- Command Y support

### Changed
- Improved Z performance

### Fixed
- Bug in command A
  3. Create release branch:
git checkout -b release/v2.0.1
  4. Build and test locally:
go test ./...
go build
  5. Create GitLab release:
  • Tag version is created automatically
  • Changelog from CHANGELOG.md is included automatically
  • CI pipeline automatically:
    • Runs all tests
    • Builds binaries for all supported platforms
    • Creates release artifacts
    • Uploads binaries to the release

Deployment Checklist

  • All tests passing locally (go test ./...)
  • Documentation updated
  • CHANGELOG.md updated
  • Version tagged using deploy.sh
  • GitLab CI/CD Pipeline completed successfully:
    • Binaries built successfully
    • Release artifacts generated
  • Release created and verified in GitLab
  • Generated binaries tested on sample installation

Rollback Procedure

If issues are found:

  1. Execute rollback using deploy.sh:
./deploy.sh rollback [version]  # Version is optional

This automatically executes the following steps:

  • Deletes the specified tag (or the current tag if no version is specified) locally and remotely
  • Reverts to the previous version
  • Creates a new hotfix branch if desired

Examples:

./deploy.sh rollback          # Removes the most recent tag
./deploy.sh rollback v2.1.0   # Removes specific version v2.1.0
./deploy.sh rollback v2.0.0-alpha.1  # Removes a specific alpha version

If manual rollback is necessary:

git tag -d v2.0.1
git push origin :refs/tags/v2.0.1
git checkout -b hotfix/2.0.2

4 -

4.1 - Go Testing Guide

Before the CLI can be used in production, it is necessary to test it. This page describes how to test the CLI.

What should be tested in this project?

Given that this CLI is the entry point for the user to interact with Divekit, it is essential to test all commands. Currently, there is only one command patch, but all commands should be tested with the following aspects in mind:

  • Command Syntax: Verify that the command syntax is correct
  • Command Execution: Ensure that executing the command produces the expected behavior or output
  • Options and Arguments: Test each option and argument individually to ensure they are processed correctly and test various combinations of options and arguments
  • Error Handling: Test how the command handles incorrect syntax, invalid options, or missing arguments

Additionally, testing the utility functions is necessary, as they are used throughout the entire project. For that the following aspects should be considered:

  • Code Paths: Every possible path through the code should be tested, which should include “happy paths” (expected input and output) as well as “edge cases” (unexpected inputs and conditions).
  • Error Conditions: Check that the code handles error conditions correctly. For example, if a function is supposed to handle an array of items, what happens when it’s given an empty array? What about an array with only one item, or an array with the maximum number of items?

How should something be tested?

Commands should be tested with integration tests since they interact with the entire project. Integration tests are utilized to verify that all components of this project work together as expected in order to test the mentioned aspects.

To detect early bugs, utility functions should be tested with unit tests. Unit tests are used to verify the behavior of specific functionalities in isolation. They ensure that individual units of code produce the correct and expected output for various inputs.

How are tests written in Go?

Prerequisites

It’s worth mentioning that the following packages are utilized in this project for testing code.

The testing package

The standard library provides the testing package, which is required to support testing in Go. It offers different types from the testing library [1, pp. 37-38]:

  • testing.T: To interact with the test runner, all tests must use this type. It contains a method for declaring failing tests, skipping tests, and running tests in parallel.

  • testing.B: Similar to the test runner, this type is a benchmark runner. It shares the same methods for failing tests, skipping tests, and running benchmarks concurrently. Benchmarks are generally used to determine the performance of written code (see the short benchmark sketch after this list).

  • testing.F: This type generates a randomized seed for the testing target and collaborates with the testing.T type to provide test-running functionality. Fuzz tests are unique tests that generate random inputs to discover edge cases and identify bugs in written code.

  • testing.M: This type allows for additional setup or teardown before or after tests are executed.
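
As referenced above, a minimal benchmark could look like this. It reuses the Divide function introduced in the unit test section below; the benchmark itself is only an illustration and not part of the project:

package main

import "testing"

// BenchmarkDivide is a minimal example of a benchmark using testing.B.
// Run it with: go test -bench=.
func BenchmarkDivide(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Divide(5, 2)
	}
}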

The testify toolkit

The testify toolkit provides several packages to work with assertions, mock objects and testing suites [4]. Primarily, the assertion package is used in this project for writing assertions more easily.

Test signature

To write unit or integration tests in Go, it is necessary to construct test functions following a particular signature:

func TestName(t *testing.T) {
// implementation
}

According to [1, p. 40], this test signature highlights the following requirements:

  • Exported functions with names starting with “Test” are considered tests.
  • Test names can have an additional suffix that specifies what the test is covering. The suffix must also begin with a capital letter. In this case, “Name” is the specified suffix.
  • Tests are required to accept a single parameter of the *testing.T type.
  • Tests should not include a return type.

Unit tests

Unit tests are small, fast tests that verify the behavior of specific functionalities in isolation. They ensure that individual units of code produce the correct and expected output for various inputs.

To illustrate unit tests, a new file named divide.go is generated with the following code:

package main

func Divide(a, b int) float64 {
	return float64(a) / float64(b)
}

By convention, tests are located in the same package as the function being tested. It’s important that all test files end with the _test.go suffix so they are detected by the test runner.

Accordingly, divide_test.go is also created within the main package:

package main

import (
	"github.com/stretchr/testify/assert"
	"testing"
)

func TestDivide(t *testing.T) {
	// Arrange
	should, a, b := 2.5, 5, 2
	// Act
	is := Divide(a, b)
	// Assert
	assert.Equal(t, should, is, "Got %v, want %v", is, should)
}

Writing unit or integration tests in the Arrange-Act-Assert (AAA) pattern is a common practice. This pattern establishes a standard for writing and reading tests, reducing the cognitive load for both new and existing team members and enhancing the maintainability of the code base [1, p. 14].

In this instance, the test is formulated as follows:

  • Arrange: All preconditions and inputs get set up.

  • Act: The Act step executes the actions outlined in the test scenario, with the specific actions depending on the type of test. In this instance, it calls the Divide function and utilizes the inputs from the Arrange step.

  • Assert: During this step, the precondition from the Arrange step is compared with the output. If the output does not match the precondition, the test is considered failed, and an error message is displayed.

It’s worth noting that the Act and Assert steps can be iterated as many times as needed, proving beneficial, particularly in the context of table-driven tests.

Table-driven tests for unit and integration tests

To cover all test cases it is required to call Act and Assert multiple times. It would be possible to write one test per case, but this would lead to a lot of duplication, reducing the readability. An alternative approach is to invoke the same test function several times. However, in case of a test failure, pinpointing the exact point of failure may pose a challenge [2]. Instead, in the table-driven approach, preconditions and inputs are structured as a table in the Arrange step.

As a consequence divide_test.go gets adjusted in the following steps [1, pp. 104-109]:

Step 1 - Create a structure for test cases

In the first step a custom type is declared within the test function. As an alternative the structure could be declared outside the scope of the test function. The purpose of this structure is to hold the inputs and expected preconditions of the test case.

The test cases for the previously mentioned Divide function could look like this:

package main

import (
	"math"
	"testing"
)

func TestDivide(t *testing.T) {
	// Arrange
	testCases := []struct {
		name     string  // test case name
		dividend int     // input
		divisor  int     // input
		quotient float64 // expected
	}{
		{"Regular division", 5, 2, 2.5},
		{"Divide with negative numbers", 5, -2, -2.5},
		{"Divide by 0", 5, 0, math.Inf(1)},
	}
}

The struct type wraps name, dividend, divisor and quotient. name describes the purpose of a test case and can be used to identify a test case, in case an error occurs.

Step 2 - Executing each test and assert it

Each test case from the table will be executed as a subtest. To achieve this, the testCases are iterated over and each testCase is executed in a separate goroutine [3] with t.Run(). The purpose of this is to individually fail tests without concerns about disrupting other tests.

Within t.Run(), the Act and Assert steps get performed:

package main

import (
	"github.com/stretchr/testify/assert"
	"math"
	"testing"
)

func TestDivide(t *testing.T) {
	// Arrange
	testCases := []struct {
		name     string  // test case name
		dividend int     // input
		divisor  int     // input
		quotient float64 // expected
	}{
		{"Regular division", 5, 2, 2.5},
		{"Divide with negative numbers", 5, -2, -2.5},
		{"Divide by 0", 5, 0, math.Inf(1)},
	}

	for _, testCase := range testCases {
		t.Run(testCase.name, func(t *testing.T) {
			// Act
			quotient := Divide(testCase.dividend, testCase.divisor)
			// Assert
			assert.Equal(t, testCase.quotient, quotient)
		})
	}
}

Setup and teardown

Setup and teardown before and after a test

Setup and teardown are used to prepare the environment for tests and clean up after tests have been executed. In Go the type testing.M from the testing package fulfills this purpose and is used as a parameter for the TestMain function, which controls the setup and teardown of tests.

To use this function, it must be included within the package alongside the tests, as the scope for functions is limited to the package in which it is defined. This implies that each package can only have one TestMain function; consequently, it is called only when a test is executed within the package [5].

The following example illustrates how it works [1, p. 51]:

package main

import (
	"log"
	"os"
	"testing"
)

func TestMain(m *testing.M) {
	// setup statements
	setup()

	// run the tests
	e := m.Run()

	// cleanup statements
	teardown()

	// report the exit code
	os.Exit(e)
}

func setup() {
	log.Println("Setting up.")
}
func teardown() {
	log.Println("Tearing down.")
}

TestMain runs before any tests are executed and defines the setup and teardown functions. The Run method from testing.M is used to invoke the tests and returns an exit code that is used to report the success or failure of the tests.

Setup and teardown before and after each test

In order to tear down after each test, the t.Cleanup function provided by the testing package can be used [2]. Since there is no dedicated mechanism for setup before each test, the setup function is simply called at the start of a test.

This example shows how this can be used:

package main

import "testing"

func TestWithSetupAndCleanup(t *testing.T) {
	setup()

	t.Cleanup(func() {
		// cleanup logic
	})

	// more test code here
}

Write integration tests

Integration tests are used to verify the interaction between different components of a system. However, the mentioned principles for writing unit tests also apply to integration tests. The only difference is that integration tests involve a greater amount of code, as they encompass multiple components.

How to run tests?

To run tests from the CLI, the go test command is used, which is part of the Go toolchain [6]. The list shows some examples of how to run tests:

  • To run a specific test, the -run flag can be used. For example, to run the TestDivide test from the divide_test.go file, the following command can be used: go test -run TestDivide. Note that the argument for -run is a regular expression, so it is possible to run multiple tests at once.

  • To run all tests in a package, run go test <packageName>. Note that the package name should include a relative path if the package is not in the working directory.

  • To run all tests in a project, run go test ./... from the project root. The ./... argument is a wildcard matching all subdirectories; therefore, it is crucial for the working directory to be set to the root of the project to recursively run all tests.

Additionally, tests can be run from the IDE. For example, in GoLand, the IDE will automatically detect tests and provide a gutter icon to run them [7].

How is the command patch tested?

Prerequisites

Before patch can be tested, it is necessary to do the following:

  1. Replace the placeholders in the file .env.example and rename it to .env. If you have no api token, you can generate one here.
  2. Run the script setup.ps1 as administrator. This script will install all necessary dependencies and initialize the ARS, Repo-Editor, and Test-Origin repositories.

Test data

To test patch, it was necessary to use a test origin repository as test data. In this context the test origin repository is a repository that contains all the necessary files and configurations from ST1 to test different scenarios.

Additionally, a test group was created to test if the Repo-Editor-repository actually pushes the generated files to remote repositories. Currently, the test group contains the following repositories:

coderepos:
    ST1_Test_group_8063661e-3603-4b84-b780-aa5ff1c3fe7d
    ST1_Test_group_86bd537d-9995-4c92-a6f4-bec97eeb7c67
    ST1_Test_group_8754b8cb-5bc6-4593-9cb8-7c84df266f59

testrepos:
    ST1_Test_tests_group_446e3369-ed35-473e-b825-9cc0aecd6ba3
    ST1_Test_tests_group_9672285a-67b0-4f2e-830c-72925ba8c76e

Structure of a test case

patch is tested with a table-driven test, which is located in the file patch_test.go.

The following example shows the structure of a test case:

package patch

func TestPatch(t *testing.T) {
	testCases := []struct {
		name           string
		arguments      PatchArguments  // input
		generatedFiles []GeneratedFile // expected
		error          error           // expected
	}{
		{
			"example test case",
			PatchArguments{
				dryRun:       true | false,
				logLevel:     "[empty] | info | debug | warning | error",
				originRepo:   "path_to_test_origin_repo",
				home:         "[empty] | path_to_repositories",
				distribution: "[empty] | code | test",
				patchFiles:   []string{"patch_file_name"},
			},
			[]GeneratedFile{
				{
					RepoName:    "repository_name",
					RelFilePath: "path_to_the_generated_file",
					Distribution: Code | Test,
					Include:     []string{"should_be_found_in_the_generated_file"},
					Exclude:     []string{"should_not_be_found_in_the_generated_file"},
				},
			},
			error: nil | errorType,
		},
	}

	// [run test cases]
}

The name field is the name of the test case and is used to identify the test case in case of an error.

The struct PatchArguments contains all the necessary arguments to run the patch command:

  • dryRun: If true, generated files will not be pushed to a remote repository.
  • logLevel: The log level of the command.
  • originRepo: The path to the test origin repository.
  • home: The path to the divekit repositories.
  • distribution: The distribution to patch.
  • patchFiles: The patch files to apply.

The struct GeneratedFile is the expected result of the patch command and contains the following properties:

  • RepoName: The name of the generated repository.
  • RelFilePath: The relative file path of the generated file.
  • Distribution: The distribution of the generated file.
  • Include: Keywords that should be found in the generated file.
  • Exclude: Keywords that should not be found in the generated file.

The error field is the expected error of the patch command. It can be nil when no error is expected or contain a specific error type if an error is expected.

Process of a test case

The following code snippet shows how test cases are processed:

package patch

func TestPatch(t *testing.T) {
	// [define test cases]

	for _, testCase := range testCases {
		t.Run(testCase.name, func(t *testing.T) {
			generatedFiles := testCase.generatedFiles
			dryRunFlag := testCase.arguments.dryRun
			distributionFlag := testCase.arguments.distribution

			deleteFilesFromRepositories(t, generatedFiles, dryRunFlag) // step 1
			_, err := executePatch(testCase.arguments)                 // step 2

			checkErrorType(t, testCase.error, err) // step 3
			if err == nil {
				matchGeneratedFiles(t, generatedFiles, distributionFlag) // step 4
				checkFileContent(t, generatedFiles)                      // step 5
				checkPushedFiles(t, generatedFiles, dryRunFlag)          // step 6
			}
		})
	}
}

Each test case runs the following sequence of steps:

  1. deleteFilesFromRepositories deletes the specified files from their respective repositories. Prior to testing, it is necessary to delete these files to ensure that they are actually pushed to the repositories, given that they are initially included in the repositories.

  2. executePatch executes the patch command with the given arguments and returns the output and the error.

  3. checkErrorType checks if the expected error type matches with the actual error type.

  4. matchGeneratedFiles checks if the found file paths match with the expected files and throws an error when there are any differences.

  5. checkFileContent checks if the content of the files is correct.

  6. checkPushedFiles checks if the generated files have been pushed correctly to the corresponding repositories.

References

[1] A. Simion, Test-Driven Development in Go, Packt Publishing Ltd, 2023.

[2] “Comprehensive Guide to Testing in Go | The GoLand Blog," The JetBrains Blog (accessed Jan. 29, 2024).

[3] “Goroutines in Golang - Golang Docs," (accessed Jan. 29, 2024).

[4] “Using the Testify toolkit | GoLand," GoLand Help. (accessed Jan. 29, 2024).

[5] “Why use TestMain for testing in Go?" (accessed Jan. 29, 2024).

[6] “Go Toolchain - Go Wiki” (accessed Jan. 29, 2024).

[7] “Run tests | GoLand," GoLand Help. (accessed Jan. 29, 2024).

4.2 - Testrepo

In the test repo, various functionalities of the student’s source code can be tested. This page describes the various functionalities with simple examples.

The documentation is not yet written. Feel free to add it yourself ;)

Testing Package structure

ExampleFile

static final String PACKAGE_PREFIX = "thkoeln.divekit.archilab.";

@Test
public void testPackageStructure() {
    try {
        Class.forName(PACKAGE_PREFIX + "domainprimitives.StorageCapacity");
        Class.forName(PACKAGE_PREFIX + "notebook.application.NotebookDto");
        Class.forName(PACKAGE_PREFIX + "notebook.application.NotebookController");
        Class.forName(PACKAGE_PREFIX + "notebook.domain.Notebook");
        // using individualization and the variableExtensionConfig.json this could be simplified to
        // Class.forName("$entityPackage$.domain.$entityClass$");
        // ==> Attention: If used, the test can't be tested in the origin repo itself
    } catch (ClassNotFoundException e) {
        Assertions.fail("At least one of your entities is not in the right package, or has a wrong name. Please check package structure and spelling!");
    }
}

Testing REST Controller

ExampleFile

@Autowired
private MockMvc mockMvc;

@Test
public void notFoundTest() throws Exception {
    mockMvc.perform(get("/notFound")
        .accept(MediaType.APPLICATION_JSON))
        .andDo(print())
        .andExpect(status().isNotFound());
}

@Transactional
@Test
public void getPrimeNumberTest() throws Exception {
    final Integer expectedPrimeNumber = 13;
    mockMvc.perform(get("/primeNumber")
        .accept(MediaType.APPLICATION_JSON))
        .andDo(print())
        .andExpect(status().isOk())
        .andExpect(jsonPath("$", Matchers.is(expectedPrimeNumber))).andReturn();
}

Testing …