EuroArgoDev Software Evaluator is a web-based tool that checks a public GitHub repository against the EuroArgoDev software guidelines. It combines manual answers with automated GitHub API checks to assign a maturity level and suggest improvements.
- Check compliance with EuroArgoDev software guidelines
- Assign a maturity badge (Novice, Beginner, Intermediate, Advanced, Expert)
- Provide actionable feedback to reach a chosen target level
- Keep the tool easy to run locally and hostable on GitHub Pages
- Manual and automatic criteria sourced from `src/data/guidelines_v2.json`
- Target level selection filters criteria and caps the displayed maturity
- Grouped manual questions with evidence fields
- Automatic checks via GitHub REST API (Octokit)
- Progress indicator during auto tests
- Downloadable evaluation report (JSON) and re-upload flow that reuses manual answers
| Layer | Technology |
|---|---|
| Frontend Framework | React |
| Build Tool | Vite |
| API Integration | Octokit (GitHub REST API client) |
| Styling | Vanilla CSS (per-component styles + shared variables) |
| Hosting | GitHub Pages |
| Version Control | Git & GitHub |
Clone and install:

```bash
git clone https://github.com/euroargodev/Software-Evaluator.git
cd Software-Evaluator
npm install
```

Run locally:

```bash
npm run dev
```

Build:

```bash
npm run build
```

Automatic checks call the GitHub API. Without a token you are limited to GitHub's low unauthenticated rate limit; set a personal access token for smoother runs.
Create a `.env` file at the project root:

```
VITE_GH_DEPLOY_TOKEN=your_personal_access_token_here
```
Also add the same secret to GitHub Actions if you deploy from CI (Settings → Secrets and variables → Actions).
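As a rough sketch of how a token-authenticated check can be wired up (shown here with plain `fetch` rather than the Octokit client the project actually uses; `githubHeaders` and `hasLicense` are hypothetical helper names, not the tool's API):

```javascript
// Build GitHub REST API request headers, attaching the token when present.
// Vite exposes VITE_-prefixed variables via import.meta.env in app code;
// here the token is passed in explicitly so the sketch is self-contained.
function githubHeaders(token) {
  const headers = { Accept: "application/vnd.github+json" };
  if (token) headers.Authorization = `Bearer ${token}`;
  return headers;
}

// Example automatic check: does the repository expose a detectable LICENSE?
// GitHub's /license endpoint returns 200 when one is found, 404 otherwise.
async function hasLicense(owner, repo, token) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/license`,
    { headers: githubHeaders(token) }
  );
  return res.status === 200;
}
```

Authenticated requests get a much higher rate limit than anonymous ones, which is why longer evaluation runs benefit from the token.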
src/
├── App.jsx # View switcher (Home / Results)
├── App.css
├── components/ # Form, grouped manual board, manual cards, target selector
├── data/
│ ├── guidelines_v2.json # Source of criteria (manual + auto)
│ └── scripts/generateNewGuidelines.js
├── logic/ # Evaluation, GitHub client, auto tests
├── pages/ # Home and Results pages
├── styles/ # Global styles and color variables
└── main.jsx # Entry point
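For orientation, a criterion entry in `guidelines_v2.json` might look roughly like the fragment below. The field names and values are purely illustrative, not the file's actual schema:

```json
{
  "id": "doc-readme",
  "type": "auto",
  "level": "Beginner",
  "weight": 2,
  "description": "Repository has a README describing the software"
}
```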
- User selects a target level and pastes a GitHub repository URL.
- Manual criteria (filtered by level) are answered and evidence is captured.
- Automatic criteria run via Octokit against the repository.
- Scores are weighted by level and combined into a maturity badge.
- Results page shows badge, stats, grouped recommendations, and lets the user download the evaluation JSON.
- On the next visit the user can upload the JSON to restore manual answers; only automatic checks run again.
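The weighting-and-badge step above can be sketched as follows. This is an illustrative sketch only: the real logic lives in `src/logic/`, the badge names come from this README, and the thresholds and function names are assumptions:

```javascript
// Badge ladder as listed in this README (lowest to highest).
const BADGES = ["Novice", "Beginner", "Intermediate", "Advanced", "Expert"];

// Combine weighted criterion results into a score in [0, 1].
// Each result is assumed to carry a numeric weight and a passed flag.
function weightedScore(results) {
  const total = results.reduce((sum, r) => sum + r.weight, 0);
  const earned = results.reduce((sum, r) => sum + (r.passed ? r.weight : 0), 0);
  return total === 0 ? 0 : earned / total;
}

// Map the score to a badge, capped at the user's chosen target level.
function badgeFor(score, targetIndex = BADGES.length - 1) {
  const index = Math.min(Math.floor(score * BADGES.length), BADGES.length - 1);
  return BADGES[Math.min(index, targetIndex)];
}
```

Capping by `targetIndex` mirrors the target-level selection described earlier: the displayed maturity never exceeds the level the user asked to be evaluated against.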
Pull requests are welcome. For major changes, open an issue to discuss what you plan to add or modify.
MIT (see LICENSE).