27 Commits

Author SHA1 Message Date
8dac04b9b1 docs: Mark task #116 as done (note under the address)
The feature that displays the note in the first card of
passage_form_dialog.dart was already implemented (lines 703-719).

The display includes:
- Icons.note icon
- Italic text
- Greyed-out style (0.7 opacity)
- Only shown when the note is not empty

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 17:55:34 +01:00
495ba046ec docs: Mark task #76 and the Hive purge bug as done
Task #76: Admin access restricted to web only - Done 26/01
Bug: Hive boxes not purged on login - Fixed 26/01

Fixed today:
- #15: New member not synchronized (security + sync + API)
- #76: Admin access restricted to web only
- Bug: Automatic cleanup of the Hive boxes on login
- Optimization: operation_id kept in the session (avoids SQL queries)
2026-01-26 17:42:33 +01:00
957386f78c docs: Mark task #15 as done
Task #15: New member not synchronized - Done 26/01

Solutions implemented:
- Security: password removed from the API response
- Synchronization: ope_users entry auto-created when a member is created
- API returns id, ope_user_id and username
- Flutter retrieves the data and saves it correctly in Hive
- Optimization: operation_id stored in the session (avoids an SQL query)
- SQL fallback when operation_id is missing from the session
2026-01-26 17:29:56 +01:00
9a185b15f3 fix: Use the username and ope_user_id from the API response
- Use the username returned by the API instead of the local one
- Retrieve ope_user_id from the API response
- Add debug logs to trace the values
- Fix: the username now shows up in the member table after creation
2026-01-26 17:18:51 +01:00
6fd02079c1 feat: Add an SQL fallback when operation_id is missing from the session
- If Session::getOperationId() returns null, fall back to an SQL query
- Warning log to spot cases where the session is out of date
- Useful when the user has not logged in recently
- Guarantees that the active operation is always resolved
2026-01-26 17:06:00 +01:00
d0697b1e01 feat: Store operation_id in the session as an optimization
- operation_id added in Session::login()
- Session::getOperationId() and Session::setOperationId() added
- LoginController updates operation_id in the session once it is resolved
- UserController uses Session::getOperationId() instead of an SQL query
- Optimization: avoids a users+operations SQL join on every member creation
2026-01-26 17:01:46 +01:00
0687900564 fix: Resolve the active operation from the operations table
- Fixes the SQL error 'Unknown column fk_operation in users'
- The active operation is read from operations.chk_active = 1
- Join with users to filter by the creating admin's entity
- Query: SELECT o.id FROM operations o INNER JOIN users u ON u.fk_entite = o.fk_entite WHERE u.id = ? AND o.chk_active = 1
2026-01-26 16:57:08 +01:00
c24a3afe6a feat: Automatically create ope_users when a member is created
TASK #15 PROBLEM:
When an admin created a new member, only users.id was created.
No ope_users entry was created automatically.
Result: the new member did not show up in Flutter because it was
not synchronized with the active operation.

SOLUTION IMPLEMENTED:
1. Resolve the creating admin's active operation (users.fk_operation)
2. Automatically create an ope_users entry when there is an active operation
3. Return ope_user_id in the API response (in addition to users.id)

NEW API RESPONSE:
{
  "status": "success",
  "message": "Utilisateur créé avec succès",
  "id": "10023668",           // users.id (central table)
  "ope_user_id": "12345",     // ope_users.id (operational table)
  "username": "pr.350-renn731"
}

BEHAVIOR:
- If the admin has an active operation → ope_users is created automatically
- If there is no active operation → ope_user_id is null (member not assigned)

LOGS:
- INFO log when the assignment succeeds
- WARNING log when there is no active operation

Work on task #15 (New member not synchronized)
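A hedged sketch of exercising the updated endpoint from a shell; the base URL, auth header and request-body fields are placeholders, only the response shape above comes from this commit:

```bash
TOKEN="..."                        # admin session/bearer token (placeholder)
API="https://api.example.invalid"  # API base URL (placeholder)
# Create a member and read back the three fields the API now returns.
curl -s -X POST "$API/api/users" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"firstname":"Pierre","name":"Dupont"}' | jq '{id, ope_user_id, username}'
```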

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 16:42:55 +01:00
6eefa218d8 security: Remove the password from the POST /api/users response
SECURITY ISSUE:
The password was returned in the JSON response when a user was created
(when it was generated automatically).

RISKS:
- Exposure in proxy / load-balancer logs
- Visible in the browser DevTools
- May be logged client-side on errors
- Stays in memory / request history

SOLUTION:
- The 'password' field is removed entirely from the response
- The password is ALREADY sent by email (line 525)
- The admin only receives: id + username

RESPONSE BEFORE:
{
  "status": "success",
  "message": "Utilisateur créé avec succès",
  "id": "10023668",
  "username": "pr.350-renn731",
  "password": "MPar<2a8^2&VnLE"  // LEAK
}

RESPONSE AFTER:
{
  "status": "success",
  "message": "Utilisateur créé avec succès",
  "id": "10023668",
  "username": "pr.350-renn731"  // SAFE
}

Work on task #15 (New member not synchronized)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 16:30:03 +01:00
e3d9433442 docs: Mark task #30 as complete (26/01)
Selected members at the top of the list - done

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 16:20:30 +01:00
35e9ddbed5 fix: Replace NavigationHelper with NavigationConfig in map_page.dart
NavigationHelper was removed during refactoring #74.
NavigationConfig is used instead.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 16:06:51 +01:00
3daf5a204a docs: Mark task #74 as complete (26/01)
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 15:28:10 +01:00
1cdb4ec58c refactor: Simplify DashboardLayout/AppScaffold (task #74)
Centralized and simplified navigation architecture:

ADDED:
- navigation_config.dart: centralized navigation configuration
  * All destinations (admin/user)
  * Navigation logic (index → route)
  * Reverse resolution (route → index)
  * Titles and utilities

- backgrounds/dots_painter.dart: decorative dots painter
  * Extracted from AppScaffold and AdminScaffold
  * Configurable (opacity, density, seed)
  * Reusable

- backgrounds/gradient_background.dart: gradient background
  * Handles the admin (red) / user (green) colors
  * Option to show/hide the dots
  * Standalone widget

SIMPLIFIED:
- app_scaffold.dart: 426 → 192 lines (-55%)
  * Uses NavigationConfig instead of NavigationHelper
  * Uses GradientBackground instead of duplicated code
  * Local DotsPainter removed

- dashboard_layout.dart: 140 → 77 lines (-45%)
  * Excessive validation removed (try/catch, checks)
  * Leaner, more readable code

REMOVED:
- admin_scaffold.dart: deleted (207 lines)
  * Obsolete since the unification with AppScaffold
  * Duplicated AppScaffold code
  * AdminNavigationHelper merged into NavigationConfig

RESULTS:
- Before: 773 lines (AppScaffold + AdminScaffold + DashboardLayout)
- After: 623 lines (everything included)
- Net reduction: -150 lines (-19%)
- Clearer, more maintainable architecture
- No code duplication
- Navigation centralized in a single place

Resolves task #74 in PLANNING-2026-Q1.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 15:27:54 +01:00
7345cf805e fix: Fix the version format YY.MM.DDNN (3 parts instead of 4)
- The VERSION file now stores: 26.01.2604 (3 parts)
- Instead of: 26.01.26.04 (4 parts - invalid semver)
- Regex adjusted to parse the new format: ^YY.MM.DDNN$
- Date-change detection compares YY, MM and DD separately
- Build number stays YYMMDDNN (26012604)
- Comments updated to reflect the YY.MM.DDNN format

Fixes: "Could not parse 26.01.26.04+26012604"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:35:09 +01:00
eef1fc8d32 fix: Semver format YY.MM.DDNN for Dart/Flutter compatibility
- Version format fixed for pubspec.yaml: YY.MM.DDNN
- The VERSION file keeps the readable format: 26.01.26.03
- pubspec.yaml receives the semver format: 26.01.2603+26012603
- DD and NN are concatenated into the 3rd part: 26+03 = 2603
- Full build number: 26012603

Fixes the error: Could not parse '26.01.26.03+26012603'
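A minimal sketch of that conversion (an assumed helper, not the project's actual deploy script):

```bash
# Convert a readable VERSION value "YY.MM.DD.NN" into the semver form
# "YY.MM.DDNN+YYMMDDNN" expected by pubspec.yaml.
VERSION_RAW="26.01.26.03"
IFS='.' read -r YY MM DD NN <<< "$VERSION_RAW"
SEMVER="${YY}.${MM}.${DD}${NN}"     # 26.01.2603
BUILD="${YY}${MM}${DD}${NN}"        # 26012603
echo "version: ${SEMVER}+${BUILD}"  # → version: 26.01.2603+26012603
```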

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:30:40 +01:00
097335193e fix: Add the missing echo_success function
- echo_success() added, printing a green ✓ symbol
- Used by the automatic versioning system
- Fixes the error: "echo_success: command not found"
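One possible implementation matching this description (the exact body in the deploy script may differ):

```bash
# Print a success message prefixed with a green check mark.
GREEN='\033[0;32m'
NC='\033[0m' # no color
echo_success() {
    echo -e "${GREEN}✓ $1${NC}"
}
echo_success "Version bumped to 26.01.2604"
```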

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:22:36 +01:00
cf1e54d8d0 docs: Update AppInfoService comments for the YY.MM.DD.NN format
- Version format clarified: YY.MM.DD.NN
- Comment: auto-incremented on every DEV deployment
- Build number format: YYMMDDNN (no dots)
- Full version format: vYY.MM.DD.NN+YYMMDDNN

The full version is automatically displayed in:
- splash_page.dart (loading screen)
- login_page.dart (login)
- register_page.dart (registration)
- dashboard_app_bar.dart (dashboard)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:15:37 +01:00
d6c4c6d228 feat: Automatic YY.MM.DD.NN versioning for DEV
- Fully automatic version numbering
- Format: YY.MM.DD.NN (e.g. 26.01.27.01)
- Automatic detection of today's date
- Automatic build-number increment (.01 → .02 → .03...)
- Automatic reset to .01 when the date changes
- Compatible with the old format (automatic conversion)

Logic (see the sketch below the example):
1. Read the system date: date +%y.%m.%d
2. If the date differs from VERSION → YY.MM.DD.01
3. If the date is the same → increment the last number
4. Write to VERSION and pubspec.yaml

Example:
- 26/01 build 1 → 26.01.26.01
- 26/01 build 2 → 26.01.26.02
- 27/01 build 1 → 26.01.27.01 (automatic reset)
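A minimal sketch of that logic, assuming the VERSION file holds "YY.MM.DD.NN" (the real deploy script may differ):

```bash
# Bump the DEV version: increment NN on the same day, reset to 01 on a new day.
TODAY=$(date +%y.%m.%d)                         # e.g. 26.01.27
CURRENT=$(cat VERSION 2>/dev/null || echo "")
CUR_DATE=${CURRENT%.*}                          # YY.MM.DD part
CUR_NN=${CURRENT##*.}                           # NN part
if [[ "$CUR_DATE" == "$TODAY" ]]; then
    NEW_NN=$(printf "%02d" $((10#$CUR_NN + 1))) # same day → increment
else
    NEW_NN="01"                                 # new day → reset
fi
echo "${TODAY}.${NEW_NN}" > VERSION
echo "New version: ${TODAY}.${NEW_NN}"
```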

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:14:54 +01:00
b3e03ef6e6 feat: Dynamic, linked member/sector filters (#42)
- Cross-filtering via the user_sector box (UserSectorModel)
- If a sector is selected → members are filtered (that sector only)
- If a member is selected → sectors are filtered (that member's sectors only)
- Relation: UserSectorModel.opeUserId ↔ UserSectorModel.fkSector
- UserSectorModel import added
- Sector dropdown simplified (direct list, no more map)

Behavior:
1. No filter → all members and all sectors
2. Sector selected → reduced member list
3. Member selected → reduced sector list
4. Both selected → most restricted view

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:00:28 +01:00
9c837f8adb feat: Reset-filters button in the admin history (#42)
- IconButton added with the clear (X) icon
- Visible only when at least one filter is active
- Resets: text search + member + sector
- Sets both selects back to "Tous"
- Style: light grey background, 12px padding
- Tooltip: "Réinitialiser les filtres"

Shown when:
isAdmin && (search OR member OR sector is non-empty)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:51:18 +01:00
16b30b0e9e feat: Input font weight w600 + dark theme removed
- Input font weight increased: w500 → w600 (semi-bold)
- bodyLarge w600 in _getTextTheme (static)
- bodyLarge w600 in getResponsiveTextTheme (responsive)
- Unused dark theme removed entirely (~95 lines)
- backgroundDarkColor and textDarkColor constants removed
- Applied to every TextFormField/TextField

Improves the readability of input fields.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:41:22 +01:00
c1c6c55cbe feat: Member display format with sector (#42)
- Member dropdown format: "FirstName name (sectName)"
- Handles the cases where firstName or name is empty
- sectName shown in parentheses when available
- Fallback: "Membre #opeUserId" when no name is available

Examples:
- "Pierre Dupont (Secteur A)"
- "Pierre (Secteur B)"
- "Dupont (Secteur C)"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:29:17 +01:00
3c3a9b90aa fix: Use membre.opeUserId for the member filter (#42)
- Fix: uses membre.opeUserId (not membre.id)
- Lists every member with a non-null opeUserId
- Filters passages by passage.fkUser == membre.opeUserId
- Removes unnecessary debugPrint calls
- Display: membre.name or membre.firstName

Relation: MembreModel.opeUserId == PassageModel.fkUser

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:25:42 +01:00
6952417147 fix: Use the members Hive box for the member filter (#42)
- Member filter fixed: uses membreRepository.getMembresBox()
- Reads members from the Hive box (ope_users)
- Only keeps members that have passages (memberIdsInPassages)
- Display: member.name or member.firstName
- Alphabetical sort by name

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:12:57 +01:00
a392305820 feat: Add member/sector filters to the admin history (#42)
- Two filter dropdowns added to history_page.dart (admin only)
- Member filter (fkUser): dynamic list built from the passages
- Sector filter (fkSector): dynamic list built from the passages
- Default value: "Tous" for each filter
- Dropdowns sorted alphabetically
- Planning updated: #42 validated (26/01)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 10:53:07 +01:00
5b6808db25 feat: Version 3.6.3 - IGN map, compass mode, Flutter analyze fixes
New features:
- #215 Compass mode + IGN/satellite map (field mode)
- #53 Maximum zoom level to prevent over-zooming
- #14 Fix for the F5 logout bug
- #204 Bright color design
- #205 Simplified user screens

Flutter analyze fixes:
- Warnings removed in room.g.dart, chat_service.dart, api_service.dart
- 0 errors, 0 warnings, 30 infos (style suggestions)

Other:
- IGN Plan and IGN Ortho tiles integrated (geopf.fr)
- flutter_compass for Android/iOS
- Store assets reorganized

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 17:46:03 +01:00
232940b1eb feat: Version 3.6.2 - Fixes for tasks #17-20
- #17: Improved sector management and statistics
- #18: Optimized API services and logging
- #19: Flutter widget and repository fixes
- #20: Fix passage creation - automatic detection of ope_users.id vs users.id

web/ directory removed (migrated to the Flutter app)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 14:11:15 +01:00
3205 changed files with 87149 additions and 12571 deletions

0
.gitignore vendored Normal file → Executable file

0
CHANGELOG-v3.1.6.md Normal file → Executable file

CLAUDE.md

@@ -10,6 +10,11 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
- Web: `cd web && npm run dev` - run Svelte dev server
- Web build: `cd web && npm run build` - build web app for production
## Post-modification checks (OBLIGATOIRE)
After modifying any code file, run the appropriate linter:
- Dart/Flutter: `cd app && flutter analyze [modified_files]`
- PHP: `php -l [modified_file]` (syntax check)
## Code Style Guidelines
- Flutter/Dart: Follow Flutter lint rules in analysis_options.yaml
- Naming: camelCase for variables/methods, PascalCase for classes/enums

0
Capture.png Normal file → Executable file

153
HOWTO-PROKOV.md Executable file

@@ -0,0 +1,153 @@
# Prokov - Task management
## Overview
Prokov is the project and task management tool used to track progress on all 2026 projects.
**URL**: https://prokov.unikoffice.com
**API**: https://prokov.unikoffice.com/api/
## Claude account
Claude Code can interact directly with the Prokov API.
| Parameter | Value |
|-----------|--------|
| Email | pierre@d6mail.fr |
| Password | d66,Pierre |
| Entity | 1 |
| Role | owner |
## Projects
| ID | Project | Parent | Description |
|----|--------|--------|-------------|
| 1 | Prokov | - | Task manager |
| 2 | Sogoms | - | Auto-generated Go API |
| 4 | Geosector | - | Amicales Pompiers application |
| 14 | Geosector-App | 4 | Flutter app |
| 15 | Geosector-API | 4 | Backend API |
| 16 | Geosector-Web | 4 | Website |
| 5 | Cleo | - | - |
| 6 | Serveurs | - | Infrastructure |
| 8 | UnikOffice | - | - |
| 21 | 2026 | - | Micro-services platform |
| 22 | 2026-Go | 21 | Go modules (Thierry) |
| 23 | 2026-Flutter | 21 | Flutter app (Pierre) |
| 24 | 2026-Infra | 21 | Infrastructure (shared) |
## Statuses
| ID | Name | Active |
|----|-----|-------|
| 1 | Backlog | Yes |
| 2 | À faire | Yes |
| 3 | En cours | Yes |
| 4 | À tester | Yes |
| 5 | Livré | Yes |
| 6 | Terminé | No |
| 7 | Archivé | No |
## Using Prokov with Claude Code
### Read a project's tasks
> "Show me the tasks in the 2026 project"
Claude fetches the tasks through the API.
### Create a task
> "Create a task 'Implement mod-cpu' in 2026-Go with priority 3"
### Update a status
> "Move task #170 to the 'En cours' status"
### Mark as done
> "Mark task #170 as done"
## API Endpoints
### Authentication
```bash
# Login (returns the JWT token)
curl -s -X POST "https://prokov.unikoffice.com/api/auth/login" \
-H "Content-Type: application/json" \
-d '{"email":"pierre@d6mail.fr","password":"d66,Pierre"}'
```
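A hypothetical follow-up: capture the JWT and reuse it for an authenticated call (the `token` field name in the login response is an assumption, not confirmed by this document):

```bash
TOKEN=$(curl -s -X POST "https://prokov.unikoffice.com/api/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"email":"pierre@d6mail.fr","password":"d66,Pierre"}' | jq -r '.token')
curl -s "https://prokov.unikoffice.com/api/tasks?project_id=21" \
  -H "Authorization: Bearer $TOKEN"
```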
### Projects
```bash
# List projects
curl -s "https://prokov.unikoffice.com/api/projects" \
-H "Authorization: Bearer $TOKEN"
# Create a project
curl -s -X POST "https://prokov.unikoffice.com/api/projects" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"name":"Mon Projet","description":"...","color":"#2563eb"}'
# Create a sub-project
curl -s -X POST "https://prokov.unikoffice.com/api/projects" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"name":"Sous-Projet","parent_id":21}'
```
### Tasks
```bash
# Tasks in a project
curl -s "https://prokov.unikoffice.com/api/tasks?project_id=21" \
-H "Authorization: Bearer $TOKEN"
# Create a task
curl -s -X POST "https://prokov.unikoffice.com/api/tasks" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"title":"Ma tâche","project_id":22,"status_id":2,"priority":3}'
# Update a task
curl -s -X PUT "https://prokov.unikoffice.com/api/tasks/170" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"status_id":3}'
```
### Statuses
```bash
# List statuses
curl -s "https://prokov.unikoffice.com/api/statuses" \
-H "Authorization: Bearer $TOKEN"
```
## Git workflow (to be implemented)
The post-commit hook will detect `#ID` references in commit messages and automatically move those tasks to "À tester", as sketched below.
```bash
git commit -m "feat: new feature #170 #171"
# → Tasks 170 and 171 move to status 4 (À tester)
```
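A minimal sketch of that hook, assuming a valid admin token is available in `$TOKEN`; error handling and deduplication are left out:

```bash
#!/bin/bash
# .git/hooks/post-commit (sketch): move every task referenced as #ID in the
# last commit message to status 4 ("À tester").
MSG=$(git log -1 --pretty=%B)
for ID in $(grep -oE '#[0-9]+' <<< "$MSG" | tr -d '#'); do
  curl -s -X PUT "https://prokov.unikoffice.com/api/tasks/$ID" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"status_id":4}'
done
```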
## 2026 project layout
```
/home/pierre/dev/2026/
├── prokov/ # ID 1 - Task manager
├── sogoms/ # ID 2 - Go API
├── geosector/ # ID 4 - Geospatial app
│ ├── app/ # ID 14
│ ├── api/ # ID 15
│ └── web/ # ID 16
├── resalice/ # Migrating to Sogoms
├── monipocket/ # To be folded into 2026
├── unikoffice/ # ID 8
└── cleo/ # ID 5
```

0
MONITORING.md Normal file → Executable file

0
PLANNING-STRIPE-ADMIN.md Normal file → Executable file

2
VERSION Normal file → Executable file

@@ -1 +1 @@
3.5.2
26.01.2607

Binary file not shown.


@@ -1,651 +0,0 @@
#!/bin/bash
set -uo pipefail
# Note: Removed -e to allow script to continue on errors
# Errors are handled explicitly with ERROR_COUNT
# Parse command line arguments
ONLY_DB=false
if [[ "${1:-}" == "-onlydb" ]]; then
ONLY_DB=true
echo "Mode: Database backup only"
fi
# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="$SCRIPT_DIR/d6back.yaml"
LOG_DIR="$SCRIPT_DIR/logs"
mkdir -p "$LOG_DIR"
LOG_FILE="$LOG_DIR/d6back-$(date +%Y%m%d).log"
ERROR_COUNT=0
RECAP_FILE="/tmp/backup_recap_$$.txt"
# Lock file to prevent concurrent executions
LOCK_FILE="/var/lock/d6back.lock"
exec 200>"$LOCK_FILE"
if ! flock -n 200; then
echo "ERROR: Another backup is already running" >&2
exit 1
fi
trap 'flock -u 200' EXIT
# Clean old log files (keep only last 10)
find "$LOG_DIR" -maxdepth 1 -name "d6back-*.log" -type f 2>/dev/null | sort -r | tail -n +11 | xargs -r rm -f || true
# Check dependencies - COMMENTED OUT
# for cmd in yq ssh tar openssl; do
# if ! command -v "$cmd" &> /dev/null; then
# echo "ERROR: $cmd is required but not installed" | tee -a "$LOG_FILE"
# exit 1
# fi
# done
# Load config
DIR_BACKUP=$(yq '.global.dir_backup' "$CONFIG_FILE" | tr -d '"')
ENC_KEY_PATH=$(yq '.global.enc_key' "$CONFIG_FILE" | tr -d '"')
BACKUP_SERVER=$(yq '.global.backup_server // "BACKUP"' "$CONFIG_FILE" | tr -d '"')
EMAIL_TO=$(yq '.global.email_to // "support@unikoffice.com"' "$CONFIG_FILE" | tr -d '"')
KEEP_DIRS=$(yq '.global.keep_dirs' "$CONFIG_FILE" | tr -d '"')
KEEP_DB=$(yq '.global.keep_db' "$CONFIG_FILE" | tr -d '"')
# Load encryption key
if [[ ! -f "$ENC_KEY_PATH" ]]; then
echo "ERROR: Encryption key not found: $ENC_KEY_PATH" | tee -a "$LOG_FILE"
exit 1
fi
ENC_KEY=$(cat "$ENC_KEY_PATH")
echo "=== Backup Started $(date) ===" | tee -a "$LOG_FILE"
echo "Backup directory: $DIR_BACKUP" | tee -a "$LOG_FILE"
# Check available disk space
DISK_USAGE=$(df "$DIR_BACKUP" | tail -1 | awk '{print $5}' | sed 's/%//')
DISK_FREE=$((100 - DISK_USAGE))
if [[ $DISK_FREE -lt 20 ]]; then
echo "WARNING: Low disk space! Only ${DISK_FREE}% free on backup partition" | tee -a "$LOG_FILE"
# Send warning email
echo "Sending DISK SPACE WARNING email to $EMAIL_TO (${DISK_FREE}% free)" | tee -a "$LOG_FILE"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Backup${BACKUP_SERVER} WARNING - Low disk space (${DISK_FREE}% free)"
echo ""
echo "WARNING: Low disk space on $(hostname)"
echo ""
echo "Backup directory: $DIR_BACKUP"
echo "Disk usage: ${DISK_USAGE}%"
echo "Free space: ${DISK_FREE}%"
echo ""
echo "The backup will continue but please free up some space soon."
echo ""
echo "Date: $(date '+%d.%m.%Y %H:%M')"
} | msmtp "$EMAIL_TO"
echo "DISK SPACE WARNING email sent successfully to $EMAIL_TO" | tee -a "$LOG_FILE"
else
echo "WARNING: msmtp not found - DISK WARNING email NOT sent" | tee -a "$LOG_FILE"
fi
else
echo "Disk space OK: ${DISK_FREE}% free" | tee -a "$LOG_FILE"
fi
# Initialize recap file
echo "BACKUP REPORT - $(hostname) - $(date '+%d.%m.%Y %H')h" > "$RECAP_FILE"
echo "========================================" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
# Function to format size in MB with thousand separator
format_size_mb() {
local file="$1"
if [[ -f "$file" ]]; then
local size_kb=$(du -k "$file" | cut -f1)
local size_mb=$((size_kb / 1024))
# Add thousand separator with printf and sed
printf "%d" "$size_mb" | sed ':a;s/\B[0-9]\{3\}\>/\.&/;ta'
else
echo "0"
fi
}
# Function to calculate age in days
get_age_days() {
local file="$1"
local now=$(date +%s)
local file_time=$(stat -c %Y "$file" 2>/dev/null || echo 0)
echo $(( (now - file_time) / 86400 ))
}
# Function to get week number of year for a file
get_week_year() {
local file="$1"
local file_time=$(stat -c %Y "$file" 2>/dev/null || echo 0)
date -d "@$file_time" +"%Y-%W"
}
# Function to cleanup old backups according to retention policy
cleanup_old_backups() {
local DELETED_COUNT=0
local KEPT_COUNT=0
echo "" | tee -a "$LOG_FILE"
echo "=== Starting Backup Retention Cleanup ===" | tee -a "$LOG_FILE"
# Parse retention periods
local KEEP_DIRS_DAYS=${KEEP_DIRS%d} # Remove 'd' suffix
# Parse database retention (5d,3w,15m)
IFS=',' read -r KEEP_DB_DAILY KEEP_DB_WEEKLY KEEP_DB_MONTHLY <<< "$KEEP_DB"
local KEEP_DB_DAILY_DAYS=${KEEP_DB_DAILY%d}
local KEEP_DB_WEEKLY_WEEKS=${KEEP_DB_WEEKLY%w}
local KEEP_DB_MONTHLY_MONTHS=${KEEP_DB_MONTHLY%m}
# Convert to days
local KEEP_DB_WEEKLY_DAYS=$((KEEP_DB_WEEKLY_WEEKS * 7))
local KEEP_DB_MONTHLY_DAYS=$((KEEP_DB_MONTHLY_MONTHS * 30))
echo "Retention policy: dirs=${KEEP_DIRS_DAYS}d, db=${KEEP_DB_DAILY_DAYS}d/${KEEP_DB_WEEKLY_WEEKS}w/${KEEP_DB_MONTHLY_MONTHS}m" | tee -a "$LOG_FILE"
# Process each host directory
for host_dir in "$DIR_BACKUP"/*; do
if [[ ! -d "$host_dir" ]]; then
continue
fi
local host_name=$(basename "$host_dir")
echo " Cleaning host: $host_name" | tee -a "$LOG_FILE"
# Clean directory backups (*.tar.gz but not *.sql.gz.enc)
while IFS= read -r -d '' file; do
if [[ $(basename "$file") == *".sql.gz.enc" ]]; then
continue # Skip SQL files
fi
local age_days=$(get_age_days "$file")
if [[ $age_days -gt $KEEP_DIRS_DAYS ]]; then
rm -f "$file"
echo " Deleted: $(basename "$file") (${age_days}d > ${KEEP_DIRS_DAYS}d)" | tee -a "$LOG_FILE"
((DELETED_COUNT++))
else
((KEPT_COUNT++))
fi
done < <(find "$host_dir" -name "*.tar.gz" -type f -print0 2>/dev/null)
# Clean database backups with retention policy
declare -A db_files
while IFS= read -r -d '' file; do
local filename=$(basename "$file")
local db_name=${filename%%_*}
if [[ -z "${db_files[$db_name]:-}" ]]; then
db_files[$db_name]="$file"
else
db_files[$db_name]+=$'\n'"$file"
fi
done < <(find "$host_dir" -name "*.sql.gz.enc" -type f -print0 2>/dev/null)
# Process each database
for db_name in "${!db_files[@]}"; do
# Sort files by age (newest first)
mapfile -t files < <(echo "${db_files[$db_name]}" | while IFS= read -r f; do
echo "$f"
done | xargs -I {} stat -c "%Y {}" {} 2>/dev/null | sort -rn | cut -d' ' -f2-)
# Track which files to keep
declare -A keep_daily
declare -A keep_weekly
for file in "${files[@]}"; do
local age_days=$(get_age_days "$file")
if [[ $age_days -le $KEEP_DB_DAILY_DAYS ]]; then
# Keep all files within daily retention
((KEPT_COUNT++))
elif [[ $age_days -le $KEEP_DB_WEEKLY_DAYS ]]; then
# Weekly retention: keep one per day
local file_date=$(date -d "@$(stat -c %Y "$file")" +"%Y-%m-%d")
if [[ -z "${keep_daily[$file_date]:-}" ]]; then
keep_daily[$file_date]="$file"
((KEPT_COUNT++))
else
rm -f "$file"
((DELETED_COUNT++))
fi
elif [[ $age_days -le $KEEP_DB_MONTHLY_DAYS ]]; then
# Monthly retention: keep one per week
local week_year=$(get_week_year "$file")
if [[ -z "${keep_weekly[$week_year]:-}" ]]; then
keep_weekly[$week_year]="$file"
((KEPT_COUNT++))
else
rm -f "$file"
((DELETED_COUNT++))
fi
else
# Beyond retention period
rm -f "$file"
echo " Deleted: $(basename "$file") (${age_days}d > ${KEEP_DB_MONTHLY_DAYS}d)" | tee -a "$LOG_FILE"
((DELETED_COUNT++))
fi
done
unset keep_daily keep_weekly
done
unset db_files
done
echo "Cleanup completed: ${DELETED_COUNT} deleted, ${KEPT_COUNT} kept" | tee -a "$LOG_FILE"
# Add cleanup summary to recap file
echo "" >> "$RECAP_FILE"
echo "CLEANUP SUMMARY:" >> "$RECAP_FILE"
echo " Files deleted: $DELETED_COUNT" >> "$RECAP_FILE"
echo " Files kept: $KEPT_COUNT" >> "$RECAP_FILE"
}
# Function to backup a single database (must be defined before use)
backup_database() {
local database="$1"
local timestamp="$(date +%Y%m%d_%H)"
local backup_file="$backup_dir/sql/${database}_${timestamp}.sql.gz.enc"
echo " Backing up database: $database" | tee -a "$LOG_FILE"
if [[ "$ssh_user" != "root" ]]; then
CMD_PREFIX="sudo"
else
CMD_PREFIX=""
fi
# Execute backup with encryption
# First test MySQL connection to get clear error messages (|| true to continue on error)
MYSQL_TEST=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb --defaults-extra-file=/tmp/d6back.cnf -e \"SELECT 1\" 2>&1
rm -f /tmp/d6back.cnf'" 2>/dev/null || true)
if ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb-dump --defaults-extra-file=/tmp/d6back.cnf --single-transaction --lock-tables=false --add-drop-table --create-options --databases $database 2>/dev/null | sed -e \"/^CREATE DATABASE/s/\\\`$database\\\`/\\\`${database}_${timestamp}\\\`/\" -e \"/^USE/s/\\\`$database\\\`/\\\`${database}_${timestamp}\\\`/\" | gzip
rm -f /tmp/d6back.cnf'" | \
openssl enc -aes-256-cbc -salt -pass pass:"$ENC_KEY" -pbkdf2 > "$backup_file" 2>/dev/null; then
# Validate backup file size (encrypted SQL should be > 100 bytes)
if [[ -f "$backup_file" ]]; then
file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo 0)
if [[ $file_size -lt 100 ]]; then
# Analyze MySQL connection test results
if [[ "$MYSQL_TEST" == *"Access denied"* ]]; then
echo " ERROR: MySQL authentication failed for $database on $host_name/$container_name" | tee -a "$LOG_FILE"
echo " User: $db_user@$db_host - Check password in configuration" | tee -a "$LOG_FILE"
elif [[ "$MYSQL_TEST" == *"Unknown database"* ]]; then
echo " ERROR: Database '$database' does not exist on $host_name/$container_name" | tee -a "$LOG_FILE"
elif [[ "$MYSQL_TEST" == *"Can't connect"* ]]; then
echo " ERROR: Cannot connect to MySQL server at $db_host in $container_name" | tee -a "$LOG_FILE"
else
echo " ERROR: Backup file too small (${file_size} bytes): $database on $host_name/$container_name" | tee -a "$LOG_FILE"
fi
((ERROR_COUNT++))
rm -f "$backup_file"
else
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved (encrypted): $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " SQL: $(basename "$backup_file") - ${size_mb} Mo" >> "$RECAP_FILE"
# Test backup integrity
if ! openssl enc -aes-256-cbc -d -pass pass:"$ENC_KEY" -pbkdf2 -in "$backup_file" | gunzip -t 2>/dev/null; then
echo " ERROR: Backup integrity check failed for $database" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
fi
else
echo " ERROR: Backup file not created: $database" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
# Analyze MySQL connection test for failed backup
if [[ "$MYSQL_TEST" == *"Access denied"* ]]; then
echo " ERROR: MySQL authentication failed for $database on $host_name/$container_name" | tee -a "$LOG_FILE"
echo " User: $db_user@$db_host - Check password in configuration" | tee -a "$LOG_FILE"
elif [[ "$MYSQL_TEST" == *"Unknown database"* ]]; then
echo " ERROR: Database '$database' does not exist on $host_name/$container_name" | tee -a "$LOG_FILE"
elif [[ "$MYSQL_TEST" == *"Can't connect"* ]]; then
echo " ERROR: Cannot connect to MySQL server at $db_host in $container_name" | tee -a "$LOG_FILE"
else
echo " ERROR: Failed to backup database $database on $host_name/$container_name" | tee -a "$LOG_FILE"
fi
((ERROR_COUNT++))
rm -f "$backup_file"
fi
}
# Process each host
host_count=$(yq '.hosts | length' "$CONFIG_FILE")
for ((i=0; i<$host_count; i++)); do
host_name=$(yq ".hosts[$i].name" "$CONFIG_FILE" | tr -d '"')
host_ip=$(yq ".hosts[$i].ip" "$CONFIG_FILE" | tr -d '"')
ssh_user=$(yq ".hosts[$i].user" "$CONFIG_FILE" | tr -d '"')
ssh_key=$(yq ".hosts[$i].key" "$CONFIG_FILE" | tr -d '"')
ssh_port=$(yq ".hosts[$i].port // 22" "$CONFIG_FILE" | tr -d '"')
echo "Processing host: $host_name ($host_ip)" | tee -a "$LOG_FILE"
echo "" >> "$RECAP_FILE"
echo "HOST: $host_name ($host_ip)" >> "$RECAP_FILE"
echo "----------------------------" >> "$RECAP_FILE"
# Test SSH connection
if ! ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 -o StrictHostKeyChecking=no "$ssh_user@$host_ip" "true" 2>/dev/null; then
echo " ERROR: Cannot connect to $host_name ($host_ip)" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
continue
fi
# Process containers
container_count=$(yq ".hosts[$i].containers | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
for ((c=0; c<$container_count; c++)); do
container_name=$(yq ".hosts[$i].containers[$c].name" "$CONFIG_FILE" | tr -d '"')
echo " Processing container: $container_name" | tee -a "$LOG_FILE"
# Add container to recap
echo "" >> "$RECAP_FILE"
echo " Container: $container_name" >> "$RECAP_FILE"
# Create backup directories
backup_dir="$DIR_BACKUP/$host_name/$container_name"
mkdir -p "$backup_dir"
mkdir -p "$backup_dir/sql"
# Backup directories (skip if -onlydb mode)
if [[ "$ONLY_DB" == "false" ]]; then
dir_count=$(yq ".hosts[$i].containers[$c].dirs | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
for ((d=0; d<$dir_count; d++)); do
dir_path=$(yq ".hosts[$i].containers[$c].dirs[$d]" "$CONFIG_FILE" | sed 's/^"\|"$//g')
# Use sudo if not root
if [[ "$ssh_user" != "root" ]]; then
CMD_PREFIX="sudo"
else
CMD_PREFIX=""
fi
# Special handling for /var/www - backup each subdirectory separately
if [[ "$dir_path" == "/var/www" ]]; then
echo " Backing up subdirectories of $dir_path" | tee -a "$LOG_FILE"
# Get list of subdirectories
subdirs=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- find /var/www -maxdepth 1 -type d ! -path /var/www" 2>/dev/null || echo "")
for subdir in $subdirs; do
subdir_name=$(basename "$subdir" | tr '/' '_')
backup_file="$backup_dir/www_${subdir_name}_$(date +%Y%m%d_%H).tar.gz"
echo " Backing up: $subdir" | tee -a "$LOG_FILE"
if ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- tar czf - $subdir 2>/dev/null" > "$backup_file"; then
# Validate backup file size (tar.gz should be > 1KB)
if [[ -f "$backup_file" ]]; then
file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo 0)
if [[ $file_size -lt 1024 ]]; then
echo " WARNING: Backup file very small (${file_size} bytes): $subdir" | tee -a "$LOG_FILE"
# Keep the file but note it's small
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved (small): $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " DIR: $(basename "$backup_file") - ${size_mb} Mo (WARNING: small)" >> "$RECAP_FILE"
else
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved: $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " DIR: $(basename "$backup_file") - ${size_mb} Mo" >> "$RECAP_FILE"
fi
# Test tar integrity
if ! tar tzf "$backup_file" >/dev/null 2>&1; then
echo " ERROR: Tar integrity check failed" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
echo " ERROR: Backup file not created: $subdir" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
echo " ERROR: Failed to backup $subdir" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
rm -f "$backup_file"
fi
done
else
# Normal backup for other directories
dir_name=$(basename "$dir_path" | tr '/' '_')
backup_file="$backup_dir/${dir_name}_$(date +%Y%m%d_%H).tar.gz"
echo " Backing up: $dir_path" | tee -a "$LOG_FILE"
if ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- tar czf - $dir_path 2>/dev/null" > "$backup_file"; then
# Validate backup file size (tar.gz should be > 1KB)
if [[ -f "$backup_file" ]]; then
file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo 0)
if [[ $file_size -lt 1024 ]]; then
echo " WARNING: Backup file very small (${file_size} bytes): $dir_path" | tee -a "$LOG_FILE"
# Keep the file but note it's small
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved (small): $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " DIR: $(basename "$backup_file") - ${size_mb} Mo (WARNING: small)" >> "$RECAP_FILE"
else
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved: $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " DIR: $(basename "$backup_file") - ${size_mb} Mo" >> "$RECAP_FILE"
fi
# Test tar integrity
if ! tar tzf "$backup_file" >/dev/null 2>&1; then
echo " ERROR: Tar integrity check failed" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
echo " ERROR: Backup file not created: $dir_path" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
echo " ERROR: Failed to backup $dir_path" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
rm -f "$backup_file"
fi
fi
done
fi # End of directory backup section
# Backup databases
db_user=$(yq ".hosts[$i].containers[$c].db_user" "$CONFIG_FILE" 2>/dev/null | tr -d '"')
db_pass=$(yq ".hosts[$i].containers[$c].db_pass" "$CONFIG_FILE" 2>/dev/null | tr -d '"')
db_host=$(yq ".hosts[$i].containers[$c].db_host // \"localhost\"" "$CONFIG_FILE" 2>/dev/null | tr -d '"')
# Check if we're in onlydb mode
if [[ "$ONLY_DB" == "true" ]]; then
# Use onlydb list if it exists
onlydb_count=$(yq ".hosts[$i].containers[$c].onlydb | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
if [[ "$onlydb_count" != "0" ]] && [[ "$onlydb_count" != "null" ]]; then
db_count="$onlydb_count"
use_onlydb=true
else
# No onlydb list, skip this container in onlydb mode
continue
fi
else
# Normal mode - use databases list
db_count=$(yq ".hosts[$i].containers[$c].databases | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
use_onlydb=false
fi
if [[ -n "$db_user" ]] && [[ -n "$db_pass" ]] && [[ "$db_count" != "0" ]]; then
for ((db=0; db<$db_count; db++)); do
if [[ "$use_onlydb" == "true" ]]; then
db_name=$(yq ".hosts[$i].containers[$c].onlydb[$db]" "$CONFIG_FILE" | tr -d '"')
else
db_name=$(yq ".hosts[$i].containers[$c].databases[$db]" "$CONFIG_FILE" | tr -d '"')
fi
if [[ "$db_name" == "ALL" ]]; then
echo " Fetching all databases..." | tee -a "$LOG_FILE"
# Get database list
if [[ "$ssh_user" != "root" ]]; then
db_list=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"sudo incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb --defaults-extra-file=/tmp/d6back.cnf -e \"SHOW DATABASES;\" 2>/dev/null
rm -f /tmp/d6back.cnf'" | \
grep -Ev '^(Database|information_schema|performance_schema|mysql|sys)$' || echo "")
else
db_list=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb --defaults-extra-file=/tmp/d6back.cnf -e \"SHOW DATABASES;\" 2>/dev/null
rm -f /tmp/d6back.cnf'" | \
grep -Ev '^(Database|information_schema|performance_schema|mysql|sys)$' || echo "")
fi
# Backup each database
for single_db in $db_list; do
backup_database "$single_db"
done
else
backup_database "$db_name"
fi
done
fi
done
done
echo "=== Backup Completed $(date) ===" | tee -a "$LOG_FILE"
# Cleanup old backups according to retention policy
cleanup_old_backups
# Show summary
total_size=$(du -sh "$DIR_BACKUP" 2>/dev/null | cut -f1)
echo "Total backup size: $total_size" | tee -a "$LOG_FILE"
# Add summary to recap
echo "" >> "$RECAP_FILE"
echo "========================================" >> "$RECAP_FILE"
# Add size details per host/container
echo "BACKUP SIZES:" >> "$RECAP_FILE"
for host_dir in "$DIR_BACKUP"/*; do
if [[ -d "$host_dir" ]]; then
host_name=$(basename "$host_dir")
host_size=$(du -sh "$host_dir" 2>/dev/null | cut -f1)
echo "" >> "$RECAP_FILE"
echo " $host_name: $host_size" >> "$RECAP_FILE"
# Size per container
for container_dir in "$host_dir"/*; do
if [[ -d "$container_dir" ]]; then
container_name=$(basename "$container_dir")
container_size=$(du -sh "$container_dir" 2>/dev/null | cut -f1)
echo " - $container_name: $container_size" >> "$RECAP_FILE"
fi
done
fi
done
echo "" >> "$RECAP_FILE"
echo "TOTAL SIZE: $total_size" >> "$RECAP_FILE"
echo "COMPLETED: $(date '+%d.%m.%Y %H:%M')" >> "$RECAP_FILE"
# Prepare email subject with date format
DATE_SUBJECT=$(date '+%d.%m.%Y %H')
# Send recap email
if [[ $ERROR_COUNT -gt 0 ]]; then
echo "Total errors: $ERROR_COUNT" | tee -a "$LOG_FILE"
# Add errors to recap
echo "" >> "$RECAP_FILE"
echo "ERRORS DETECTED: $ERROR_COUNT" >> "$RECAP_FILE"
echo "----------------------------" >> "$RECAP_FILE"
grep -i "ERROR" "$LOG_FILE" >> "$RECAP_FILE"
# Send email with ERROR in subject
echo "Sending ERROR email to $EMAIL_TO (Errors found: $ERROR_COUNT)" | tee -a "$LOG_FILE"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Backup${BACKUP_SERVER} ERROR $DATE_SUBJECT"
echo ""
cat "$RECAP_FILE"
} | msmtp "$EMAIL_TO"
echo "ERROR email sent successfully to $EMAIL_TO" | tee -a "$LOG_FILE"
else
echo "WARNING: msmtp not found - ERROR email NOT sent" | tee -a "$LOG_FILE"
fi
else
echo "Backup completed successfully with no errors" | tee -a "$LOG_FILE"
# Send success recap email
echo "Sending SUCCESS recap email to $EMAIL_TO" | tee -a "$LOG_FILE"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Backup${BACKUP_SERVER} $DATE_SUBJECT"
echo ""
cat "$RECAP_FILE"
} | msmtp "$EMAIL_TO"
echo "SUCCESS recap email sent successfully to $EMAIL_TO" | tee -a "$LOG_FILE"
else
echo "WARNING: msmtp not found - SUCCESS recap email NOT sent" | tee -a "$LOG_FILE"
fi
fi
# Clean up recap file
rm -f "$RECAP_FILE"
# Exit with error code if there were errors
if [[ $ERROR_COUNT -gt 0 ]]; then
exit 1
fi


@@ -1,112 +0,0 @@
# Configuration for MariaDB and directories backup
# Backup structure: $dir_backup/$hostname/$containername/ for dirs
# $dir_backup/$hostname/$containername/sql/ for databases
# Global parameters
global:
backup_server: PM7 # Backup server name (PM7, PM1, etc.)
email_to: support@unikoffice.com # Notification email
dir_backup: /var/pierre/back # Base backup directory
enc_key: /home/pierre/.key_enc # Encryption key for SQL backups
keep_dirs: 7d # Keep directory backups for 7 days
keep_db: 5d,3w,15m # 5 full days, 3 weeks (1/day), 15 months (1/week)
# Hosts configuration
hosts:
- name: IN2
ip: 145.239.9.105
user: debian
key: /home/pierre/.ssh/backup_key
port: 22
dirs:
- /etc/nginx
containers:
- name: nx4
db_user: root
db_pass: MyDebServer,90b
db_host: localhost
dirs:
- /etc/nginx
- /var/www
databases:
- ALL # Backup all databases
onlydb: # Used only with -onlydb parameter (optional)
- turing
- name: IN3
ip: 195.154.80.116
user: pierre
key: /home/pierre/.ssh/backup_key
port: 22
dirs:
- /etc/nginx
containers:
- name: nx4
db_user: root
db_pass: MyAlpLocal,90b
db_host: localhost
dirs:
- /etc/nginx
- /var/www
databases:
- ALL # Backup all databases
onlydb: # Used only with -onlydb parameter (optional)
- geosector
- name: rca-geo
dirs:
- /etc/nginx
- /var/www
- name: dva-res
db_user: root
db_pass: MyAlpineDb.90b
db_host: localhost
dirs:
- /etc/nginx
- /var/www
databases:
- ALL
onlydb:
- resalice
- name: dva-front
dirs:
- /etc/nginx
- /var/www
- name: maria3
db_user: root
db_pass: MyAlpLocal,90b
db_host: localhost
dirs:
- /etc/my.cnf.d
- /var/osm
- /var/log
databases:
- ALL
onlydb:
- cleo
- rca_geo
- name: IN4
ip: 51.159.7.190
user: pierre
key: /home/pierre/.ssh/backup_key
port: 22
dirs:
- /etc/nginx
containers:
- name: maria4
db_user: root
db_pass: MyAlpLocal,90b
db_host: localhost
dirs:
- /etc/my.cnf.d
- /var/osm
- /var/log
databases:
- ALL
onlydb:
- cleo
- pra_geo


@@ -1,118 +0,0 @@
#!/bin/bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
CONFIG_FILE="backpm7.yaml"
# Check if file argument is provided
if [ $# -eq 0 ]; then
echo -e "${RED}Error: No input file specified${NC}"
echo "Usage: $0 <database.sql.gz.enc>"
echo "Example: $0 wordpress_20250905_14.sql.gz.enc"
exit 1
fi
INPUT_FILE="$1"
# Check if input file exists
if [ ! -f "$INPUT_FILE" ]; then
echo -e "${RED}Error: File not found: $INPUT_FILE${NC}"
exit 1
fi
# Function to load encryption key from config
load_key_from_config() {
if [ ! -f "$CONFIG_FILE" ]; then
echo -e "${YELLOW}Warning: $CONFIG_FILE not found${NC}"
return 1
fi
# Check for yq
if ! command -v yq &> /dev/null; then
echo -e "${RED}Error: yq is required to read config file${NC}"
echo "Install with: sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 && sudo chmod +x /usr/local/bin/yq"
return 1
fi
local key_path=$(yq '.global.enc_key' "$CONFIG_FILE" | tr -d '"')
if [ -z "$key_path" ]; then
echo -e "${RED}Error: enc_key not found in $CONFIG_FILE${NC}"
return 1
fi
if [ ! -f "$key_path" ]; then
echo -e "${RED}Error: Encryption key file not found: $key_path${NC}"
return 1
fi
ENC_KEY=$(cat "$key_path")
echo -e "${GREEN}Encryption key loaded from: $key_path${NC}"
return 0
}
# Check file type early - accept both old and new naming
if [[ "$INPUT_FILE" != *.sql.gz.enc ]] && [[ "$INPUT_FILE" != *.sql.tar.gz.enc ]]; then
echo -e "${RED}Error: File must be a .sql.gz.enc or .sql.tar.gz.enc file${NC}"
echo "This tool only decrypts SQL backup files created by backpm7.sh"
exit 1
fi
# Get encryption key from config
if ! load_key_from_config; then
echo -e "${RED}Error: Cannot load encryption key${NC}"
echo "Make sure $CONFIG_FILE exists and contains enc_key path"
exit 1
fi
# Process SQL backup file
echo -e "${BLUE}Decrypting SQL backup: $INPUT_FILE${NC}"
# Determine output file - extract just the filename and put in current directory
BASENAME=$(basename "$INPUT_FILE")
if [[ "$BASENAME" == *.sql.tar.gz.enc ]]; then
OUTPUT_FILE="${BASENAME%.sql.tar.gz.enc}.sql"
else
OUTPUT_FILE="${BASENAME%.sql.gz.enc}.sql"
fi
# Decrypt and decompress in one command
echo "Decrypting to: $OUTPUT_FILE"
# Decrypt and decompress in one pipeline
if openssl enc -aes-256-cbc -d -salt -pass pass:"$ENC_KEY" -pbkdf2 -in "$INPUT_FILE" | gunzip > "$OUTPUT_FILE" 2>/dev/null; then
# Get file size
size=$(du -h "$OUTPUT_FILE" | cut -f1)
echo -e "${GREEN}✓ Successfully decrypted: $OUTPUT_FILE ($size)${NC}"
# Show first few lines of SQL
echo -e "${BLUE}First 5 lines of SQL:${NC}"
head -n 5 "$OUTPUT_FILE"
else
echo -e "${RED}✗ Decryption failed${NC}"
echo "Possible causes:"
echo " - Wrong encryption key"
echo " - Corrupted file"
echo " - File was encrypted differently"
# Try to help debug
echo -e "\n${YELLOW}Debug info:${NC}"
echo "File size: $(du -h "$INPUT_FILE" | cut -f1)"
echo "First bytes (should start with 'Salted__'):"
hexdump -C "$INPUT_FILE" | head -n 1
# Let's also check what key we're using (first 10 chars)
echo "Key begins with: ${ENC_KEY:0:10}..."
exit 1
fi
echo -e "${GREEN}Operation completed successfully${NC}"


@@ -1,248 +0,0 @@
#!/bin/bash
#
# sync_geosector.sh - Syncs the geosector backups from PM7 to maria3 (IN3) and maria4 (IN4)
#
# This script:
# 1. Finds the latest encrypted geosector backup on PM7
# 2. Decrypts and decompresses it locally
# 3. Transfers and imports it into IN3/maria3/geosector
# 4. Transfers and imports it into IN4/maria4/geosector
#
# Installation: /var/pierre/bat/sync_geosector.sh
# Usage: ./sync_geosector.sh [--force] [--date YYYYMMDD_HH]
#
set -uo pipefail
# Note: Removed -e to allow script to continue on sync errors
# Errors are handled explicitly with ERROR_COUNT
# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="$SCRIPT_DIR/d6back.yaml"
BACKUP_DIR="/var/pierre/back/IN3/nx4/sql"
ENC_KEY_FILE="/home/pierre/.key_enc"
SSH_KEY="/home/pierre/.ssh/backup_key"
TEMP_DIR="/tmp/geosector_sync"
LOG_FILE="/var/pierre/bat/logs/sync_geosector.log"
RECAP_FILE="/tmp/sync_geosector_recap_$$.txt"
# Load email config from d6back.yaml
if [[ -f "$CONFIG_FILE" ]]; then
EMAIL_TO=$(yq '.global.email_to // "support@unikoffice.com"' "$CONFIG_FILE" | tr -d '"')
BACKUP_SERVER=$(yq '.global.backup_server // "BACKUP"' "$CONFIG_FILE" | tr -d '"')
else
EMAIL_TO="support@unikoffice.com"
BACKUP_SERVER="BACKUP"
fi
# Target servers
IN3_HOST="195.154.80.116"
IN3_USER="pierre"
IN3_CONTAINER="maria3"
IN4_HOST="51.159.7.190"
IN4_USER="pierre"
IN4_CONTAINER="maria4"
# Credentials MariaDB
DB_USER="root"
IN3_DB_PASS="MyAlpLocal,90b" # maria3
IN4_DB_PASS="MyAlpLocal,90b" # maria4
DB_NAME="geosector"
# Utility functions
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}
error() {
log "ERROR: $*"
exit 1
}
cleanup() {
if [[ -d "$TEMP_DIR" ]]; then
log "Nettoyage de $TEMP_DIR"
rm -rf "$TEMP_DIR"
fi
rm -f "$RECAP_FILE"
}
trap cleanup EXIT
# Read the encryption key
if [[ ! -f "$ENC_KEY_FILE" ]]; then
error "Clé de chiffrement non trouvée: $ENC_KEY_FILE"
fi
ENC_KEY=$(cat "$ENC_KEY_FILE")
# Parse command-line arguments
FORCE=0
SPECIFIC_DATE=""
while [[ $# -gt 0 ]]; do
case $1 in
--force)
FORCE=1
shift
;;
--date)
SPECIFIC_DATE="$2"
shift 2
;;
*)
echo "Usage: $0 [--force] [--date YYYYMMDD_HH]"
exit 1
;;
esac
done
# Find the backup file
if [[ -n "$SPECIFIC_DATE" ]]; then
BACKUP_FILE="$BACKUP_DIR/geosector_${SPECIFIC_DATE}.sql.gz.enc"
if [[ ! -f "$BACKUP_FILE" ]]; then
error "Backup non trouvé: $BACKUP_FILE"
fi
else
# Look for the most recent one
BACKUP_FILE=$(find "$BACKUP_DIR" -name "geosector_*.sql.gz.enc" -type f -printf '%T@ %p\n' | sort -rn | head -1 | cut -d' ' -f2-)
if [[ -z "$BACKUP_FILE" ]]; then
error "Aucun backup geosector trouvé dans $BACKUP_DIR"
fi
fi
BACKUP_BASENAME=$(basename "$BACKUP_FILE")
log "Backup sélectionné: $BACKUP_BASENAME"
# Initialize the recap file
echo "SYNC GEOSECTOR REPORT - $(hostname) - $(date '+%d.%m.%Y %H')h" > "$RECAP_FILE"
echo "========================================" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
echo "Backup source: $BACKUP_BASENAME" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
# Create the temporary directory
mkdir -p "$TEMP_DIR"
DECRYPTED_FILE="$TEMP_DIR/geosector.sql"
# Step 1: decrypt and decompress
log "Déchiffrement et décompression du backup..."
if ! openssl enc -aes-256-cbc -d -pass pass:"$ENC_KEY" -pbkdf2 -in "$BACKUP_FILE" | gunzip > "$DECRYPTED_FILE"; then
error "Échec du déchiffrement/décompression"
fi
FILE_SIZE=$(du -h "$DECRYPTED_FILE" | cut -f1)
log "Fichier SQL déchiffré: $FILE_SIZE"
echo "Decrypted SQL size: $FILE_SIZE" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
# Error counter
ERROR_COUNT=0
# Function to sync to one target server
sync_to_server() {
local HOST=$1
local USER=$2
local CONTAINER=$3
local DB_PASS=$4
local SERVER_NAME=$5
log "=== Synchronisation vers $SERVER_NAME ($HOST) ==="
echo "TARGET: $SERVER_NAME ($HOST/$CONTAINER)" >> "$RECAP_FILE"
# Test the SSH connection
if ! ssh -i "$SSH_KEY" -o ConnectTimeout=10 "$USER@$HOST" "echo 'SSH OK'" &>/dev/null; then
log "ERROR: Impossible de se connecter à $HOST via SSH"
echo " ✗ SSH connection FAILED" >> "$RECAP_FILE"
((ERROR_COUNT++))
return 1
fi
# Import into MariaDB
log "Import dans $SERVER_NAME/$CONTAINER/geosector..."
# Drop and recreate the database on the remote server
if ! ssh -i "$SSH_KEY" "$USER@$HOST" "incus exec $CONTAINER --project default -- mariadb -u root -p'$DB_PASS' -e 'DROP DATABASE IF EXISTS $DB_NAME; CREATE DATABASE $DB_NAME CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;'"; then
log "ERROR: Échec de la création de la base sur $SERVER_NAME"
echo " ✗ Database creation FAILED" >> "$RECAP_FILE"
((ERROR_COUNT++))
return 1
fi
# Filter and import the SQL (strip the timestamped CREATE DATABASE and USE statements)
log "Filtrage et import du SQL..."
if ! sed -e '/^CREATE DATABASE.*geosector_[0-9]/d' \
-e '/^USE.*geosector_[0-9]/d' \
"$DECRYPTED_FILE" | \
ssh -i "$SSH_KEY" "$USER@$HOST" "incus exec $CONTAINER --project default -- mariadb -u root -p'$DB_PASS' $DB_NAME"; then
log "ERROR: Échec de l'import sur $SERVER_NAME"
echo " ✗ SQL import FAILED" >> "$RECAP_FILE"
((ERROR_COUNT++))
return 1
fi
log "$SERVER_NAME: Import réussi"
echo " ✓ Import SUCCESS" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
}
# Sync to IN3/maria3
sync_to_server "$IN3_HOST" "$IN3_USER" "$IN3_CONTAINER" "$IN3_DB_PASS" "IN3/maria3"
# Sync to IN4/maria4
sync_to_server "$IN4_HOST" "$IN4_USER" "$IN4_CONTAINER" "$IN4_DB_PASS" "IN4/maria4"
# Finalize the recap
echo "========================================" >> "$RECAP_FILE"
echo "COMPLETED: $(date '+%d.%m.%Y %H:%M')" >> "$RECAP_FILE"
# Prepare the email subject with the date
DATE_SUBJECT=$(date '+%d.%m.%Y %H')
# Send the recap email
if [[ $ERROR_COUNT -gt 0 ]]; then
log "Total errors: $ERROR_COUNT"
# Append the errors to the recap
echo "" >> "$RECAP_FILE"
echo "ERRORS DETECTED: $ERROR_COUNT" >> "$RECAP_FILE"
echo "----------------------------" >> "$RECAP_FILE"
grep -i "ERROR" "$LOG_FILE" | tail -20 >> "$RECAP_FILE"
# Send an email with ERROR in the subject
log "Sending ERROR email to $EMAIL_TO (Errors found: $ERROR_COUNT)"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Sync${BACKUP_SERVER} ERROR $DATE_SUBJECT"
echo ""
cat "$RECAP_FILE"
} | msmtp "$EMAIL_TO"
log "ERROR email sent successfully to $EMAIL_TO"
else
log "WARNING: msmtp not found - ERROR email NOT sent"
fi
log "=== Synchronisation terminée avec des erreurs ==="
exit 1
else
log "=== Synchronisation terminée avec succès ==="
log "Les bases geosector sur maria3 et maria4 sont à jour avec le backup $BACKUP_BASENAME"
# Send the success email
log "Sending SUCCESS recap email to $EMAIL_TO"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Sync${BACKUP_SERVER} $DATE_SUBJECT"
echo ""
cat "$RECAP_FILE"
} | msmtp "$EMAIL_TO"
log "SUCCESS recap email sent successfully to $EMAIL_TO"
else
log "WARNING: msmtp not found - SUCCESS recap email NOT sent"
fi
exit 0
fi

80
api/TODO-API.md Normal file → Executable file

@@ -1225,6 +1225,86 @@ php scripts/php/migrate_from_backup.php \
---
#### 7. Events statistics for the Flutter admin
**Requested on:** 22/12/2025
**Goal:** Let Flutter admins browse the Events logs with daily, weekly and monthly stats, plus drill-down into the detail.
**Chosen architecture:** Stats pre-aggregated in SQL + JSONL detail on demand
**Why this approach:**
- Avoids parsing the JSONL files on every Flutter request
- Minimal transfer (~1-10 KB per request)
- Weekly/monthly figures computed on the fly from `daily` (no extra tables)
- Paginated detail only on demand
**Phase 1: Database** ✅ (22/12/2025)
- [x] Create the `event_stats_daily` table
  - Columns: `stat_date`, `entity_id`, `event_type`, `count`, `sum_amount`, `unique_users`, `metadata`
  - Indexes: `(entity_id, stat_date)`, unique `(stat_date, entity_id, event_type)`
- [x] SQL creation script: `scripts/sql/create_event_stats_daily.sql`
**Phase 2: Aggregation CRON** ✅ (22/12/2025)
- [x] Create `scripts/cron/aggregate_event_stats.php`
  - Parses the JSONL file for D-1 (or a date passed as a parameter)
  - Aggregates by entity_id and event_type
  - INSERT/UPDATE into `event_stats_daily`
  - Computes `unique_users` (COUNT DISTINCT on user_id)
  - Computes `sum_amount` for passages
  - Stores JSON metadata (top 5 sectors, frequent errors, etc.)
- [x] Added to the crontab: runs at 01:00 every night (via deploy-api.sh)
- [x] Catch-up script: `php aggregate_event_stats.php --from=2025-01-01 --to=2025-12-21`
**Phase 3: EventStatsService** ✅ (22/12/2025)
- [x] Create `src/Services/EventStatsService.php`
  - `getSummary(?int $entityId, ?string $date)`: today's stats
  - `getDaily(?int $entityId, string $from, string $to, array $eventTypes)`: daily stats
  - `getWeekly(?int $entityId, string $from, string $to, array $eventTypes)`: computed from daily
  - `getMonthly(?int $entityId, int $year, array $eventTypes)`: computed from daily
  - `getDetails(?int $entityId, string $date, ?string $eventType, int $limit, int $offset)`: paginated JSONL read
  - `getEventTypes()`: list of available event types
  - `hasStatsForDate(string $date)`: checks whether stats exist for a date
**Phase 4: Controller and routes** ✅ (22/12/2025)
- [x] Create `src/Controllers/EventStatsController.php`
  - `summary()`: GET /api/events/stats/summary?date=
  - `daily()`: GET /api/events/stats/daily?from=&to=&events=
  - `weekly()`: GET /api/events/stats/weekly?from=&to=&events=
  - `monthly()`: GET /api/events/stats/monthly?year=&events=
  - `details()`: GET /api/events/stats/details?date=&event=&limit=&offset=
  - `types()`: GET /api/events/stats/types
- [x] Add the routes in `Router.php` (see the curl sketch after this list)
- [x] Permission check: entity admin (role_id = 2) or super-admin (role_id = 1)
- [x] Super-admin: can see every entity (entity_id = NULL or ?entity_id=X)
**Phase 5 : Optimisations** ✅ (22/12/2025)
- [x] Compression gzip sur les réponses JSON (si >1KB et client supporte)
- [x] Header `ETag` sur /summary et /daily (cache 5 min, 304 Not Modified)
- [x] Filtrage des champs sensibles dans /details (IP tronquée, user_agent supprimé)
- [x] Limite max 100 events par requête /details
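Esquisse indicative (noms hypothétiques) du couple ETag + compression gzip côté PHP ; la compression peut aussi être déléguée à nginx :
```php
<?php
// Esquisse : ETag + 304 sur une réponse de stats (elles ne changent qu'une fois par jour).
// Hypothèse : $payload est le tableau de stats déjà calculé.
function sendCachedJson(array $payload, int $maxAge = 300): void
{
    $json = json_encode($payload, JSON_UNESCAPED_UNICODE);
    $etag = '"' . md5($json) . '"';

    header('ETag: ' . $etag);
    header('Cache-Control: private, max-age=' . $maxAge);

    // 304 Not Modified si le client possède déjà cette version
    if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? '') === $etag) {
        http_response_code(304);
        return;
    }

    header('Content-Type: application/json; charset=utf-8');

    // gzip si la réponse dépasse 1 KB et que le client l'accepte
    if (strlen($json) > 1024 && strpos($_SERVER['HTTP_ACCEPT_ENCODING'] ?? '', 'gzip') !== false) {
        header('Content-Encoding: gzip');
        $json = gzencode($json, 6);
    }

    echo $json;
}
```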
**Phase 6 : Tests et documentation**
- [ ] Tests unitaires EventStatsService
- [ ] Tests endpoints avec différents rôles
- [ ] Documentation Postman/Swagger des endpoints
- [ ] Mise à jour TECHBOOK.md avec exemples de réponses JSON
**Estimation :** 2-3 jours de développement
**Dépendances :**
- EventLogService déjà en place ✅
- Fichiers JSONL générés quotidiennement ✅
---
### 🟢 PRIORITÉ BASSE
#### 7. Amélioration de la suppression des utilisateurs

0
api/alter_table_geometry.sql Normal file → Executable file
View File

0
api/config/nginx/pra-geo-http-only.conf Normal file → Executable file
View File

0
api/config/nginx/pra-geo-production.conf Normal file → Executable file
View File

0
api/config/whitelist_ip_cache.txt Normal file → Executable file
View File

0
api/create_table_x_departements_contours.sql Normal file → Executable file
View File

0
api/data/README.md Normal file → Executable file
View File

View File

@@ -36,7 +36,7 @@ FINAL_OWNER_LOGS="nobody"
FINAL_GROUP_LOGS="nginx"
# Configuration de sauvegarde
BACKUP_DIR="/data/backup/geosector/api"
BACKUP_DIR="/home/pierre/samba/back/geosector/api"
# Couleurs pour les messages
GREEN='\033[0;32m'
@@ -179,6 +179,14 @@ if [ "$SOURCE_TYPE" = "local_code" ]; then
--exclude='*.swp' \
--exclude='*.swo' \
--exclude='*~' \
--exclude='docs/*.geojson' \
--exclude='docs/*.sql' \
--exclude='docs/*.pdf' \
--exclude='composer.phar' \
--exclude='scripts/migration*' \
--exclude='scripts/php' \
--exclude='CLAUDE.md' \
--exclude='TODO-API.md' \
-czf "${ARCHIVE_PATH}" . 2>/dev/null || echo_error "Failed to create archive"
echo_info "Archive created: ${ARCHIVE_PATH}"
@@ -198,6 +206,16 @@ elif [ "$SOURCE_TYPE" = "remote_container" ]; then
--exclude='uploads' \
--exclude='sessions' \
--exclude='opendata' \
--exclude='docs/*.geojson' \
--exclude='docs/*.sql' \
--exclude='docs/*.pdf' \
--exclude='composer.phar' \
--exclude='scripts/migration*' \
--exclude='scripts/php' \
--exclude='CLAUDE.md' \
--exclude='TODO-API.md' \
--exclude='*.tar.gz' \
--exclude='vendor' \
-czf /tmp/${ARCHIVE_NAME} -C ${API_PATH} .
" || echo_error "Failed to create archive on remote"
@@ -288,11 +306,11 @@ if [ "$DEST_HOST" != "local" ]; then
incus exec ${DEST_CONTAINER} -- find ${API_PATH} -type d -exec chmod 755 {} \; &&
incus exec ${DEST_CONTAINER} -- find ${API_PATH} -type f -exec chmod 644 {} \; &&
# Permissions spéciales pour logs
# Permissions spéciales pour logs (PHP-FPM tourne sous nobody)
incus exec ${DEST_CONTAINER} -- mkdir -p ${API_PATH}/logs/events &&
incus exec ${DEST_CONTAINER} -- chown -R ${FINAL_OWNER_LOGS}:${FINAL_GROUP} ${API_PATH}/logs &&
incus exec ${DEST_CONTAINER} -- find ${API_PATH}/logs -type d -exec chmod 750 {} \; &&
incus exec ${DEST_CONTAINER} -- find ${API_PATH}/logs -type f -exec chmod 640 {} \; &&
incus exec ${DEST_CONTAINER} -- find ${API_PATH}/logs -type d -exec chmod 775 {} \; &&
incus exec ${DEST_CONTAINER} -- find ${API_PATH}/logs -type f -exec chmod 664 {} \; &&
# Permissions spéciales pour uploads
incus exec ${DEST_CONTAINER} -- mkdir -p ${API_PATH}/uploads &&
@@ -342,8 +360,8 @@ if [ "$DEST_HOST" != "local" ]; then
# GEOSECTOR API - Security data cleanup (daily at 2am)
0 2 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/cleanup_security_data.php >> /var/www/geosector/api/logs/cleanup_security.log 2>&1
# GEOSECTOR API - Stripe devices update (weekly Sunday at 3am)
0 3 * * 0 /usr/bin/php /var/www/geosector/api/scripts/cron/update_stripe_devices.php >> /var/www/geosector/api/logs/stripe_devices.log 2>&1
# GEOSECTOR API - Event stats aggregation (daily at 1am)
0 1 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/aggregate_event_stats.php >> /var/www/geosector/api/logs/aggregate_stats.log 2>&1
EOF
# Installer le nouveau crontab
@@ -380,4 +398,4 @@ fi
echo_info "Deployment completed at: $(date)"
# Journaliser le déploiement
echo "$(date '+%Y-%m-%d %H:%M:%S') - API deployed to ${ENV_NAME} (${DEST_CONTAINER}) - Archive: ${ARCHIVE_NAME}" >> ~/.geo_deploy_history
echo "$(date '+%Y-%m-%d %H:%M:%S') - API deployed to ${ENV_NAME} (${DEST_CONTAINER}) - Archive: ${ARCHIVE_NAME}" >> ~/.geo_deploy_history

0
api/docs/API-SECURITY.md Normal file → Executable file
View File

0
api/docs/CHAT_MODULE.md Normal file → Executable file
View File

0
api/docs/CHK_USER_DELETE_PASS_INFO.md Normal file → Executable file
View File

0
api/docs/DELETE_PASSAGE_PERMISSIONS.md Normal file → Executable file
View File

0
api/docs/EVENTS-LOG.md Normal file → Executable file
View File

0
api/docs/FIX_USER_CREATION_400_ERRORS.md Normal file → Executable file
View File

0
api/docs/GESTION-SECTORS.md Normal file → Executable file
View File

0
api/docs/INSTALL_FPDF.md Normal file → Executable file
View File

0
api/docs/PLANNING-STRIPE-API.md Normal file → Executable file
View File

0
api/docs/PREPA_PROD.md Normal file → Executable file
View File

0
api/docs/SETUP_EMAIL_QUEUE_CRON.md Normal file → Executable file
View File

0
api/docs/STRIPE-BACKEND-MIGRATION.md Normal file → Executable file
View File

0
api/docs/STRIPE-TAP-TO-PAY-FLOW.md Normal file → Executable file
View File

0
api/docs/STRIPE-TAP-TO-PAY-REQUIREMENTS.md Normal file → Executable file
View File

0
api/docs/STRIPE_VERIF.md Normal file → Executable file
View File

View File

@@ -89,7 +89,75 @@ PUT /api/users/123 // users.id
1. **Reçus fiscaux** : PDF auto (<5KB) pour dons, envoi email queue
2. **Logos entités** : Upload PNG/JPG, redimensionnement 250x250px, base64
3. **Migration** : Endpoints REST par entité (9 phases)
4. **CRONs** : Email queue (*/5), cleanup sécurité (2h), Stripe devices (dim 3h)
4. **CRONs** : Email queue (*/5), cleanup sécurité (2h)
## 📊 Statistiques Events (Admin Flutter)
### Architecture
**Principe** : Stats pré-agrégées en SQL + détail JSONL à la demande
| Source | Usage | Performance |
|--------|-------|-------------|
| Table `event_stats_daily` | Dashboard, graphiques, tendances | Instantané (~1ms) |
| Fichiers JSONL | Détail événements (clic sur stat) | Paginé (~50-100ms) |
### Flux de données
1. **EventLogService** écrit les événements dans `/logs/events/YYYY-MM-DD.jsonl`
2. **CRON nightly** agrège J-1 dans `event_stats_daily`
3. **API** sert les stats agrégées (SQL) ou le détail paginé (JSONL)
4. **Flutter Admin** affiche dashboard avec drill-down
### Table d'agrégation
**`event_stats_daily`** : Une ligne par (date, entité, type d'événement)
| Colonne | Description |
|---------|-------------|
| `stat_date` | Date des stats |
| `entity_id` | Entité (NULL = global super-admin) |
| `event_type` | Type événement (login_success, passage_created, etc.) |
| `count` | Nombre d'occurrences |
| `sum_amount` | Somme montants (passages) |
| `unique_users` | Utilisateurs distincts |
| `metadata` | JSON agrégé (top secteurs, erreurs fréquentes, etc.) |
### Endpoints API
| Endpoint | Période | Source | Taille réponse |
|----------|---------|--------|----------------|
| `GET /events/stats/summary` | Jour courant | SQL | ~1 KB |
| `GET /events/stats/daily` | Plage dates | SQL | ~5 KB |
| `GET /events/stats/weekly` | Calculé depuis daily | SQL | ~2 KB |
| `GET /events/stats/monthly` | Calculé depuis daily | SQL | ~1 KB |
| `GET /events/stats/details` | Détail paginé | JSONL | ~10 KB |
### Optimisations transfert Flutter
- **Pagination** : 50 events max par requête détail
- **Champs filtrés** : Pas d'IP ni user_agent complet dans les réponses
- **Compression gzip** : -70% sur JSON
- **Cache HTTP** : ETag sur stats (changent 1x/jour)
- **Calcul hebdo/mensuel** : À la volée depuis `daily` (pas de tables supplémentaires)
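Esquisse indicative d'une lecture paginée du JSONL avec filtrage des champs sensibles (noms supposés, la logique réelle vivant dans `EventStatsService::getDetails()`) :
```php
<?php
// Esquisse : lecture paginée d'un fichier JSONL + filtrage des champs sensibles.
// Hypothèses : $dir pointe vers logs/events ; plafond de 100 events par requête.
function readEventDetails(string $dir, string $date, ?string $eventType, int $limit = 50, int $offset = 0): array
{
    $limit   = min($limit, 100);
    $file    = $dir . '/' . $date . '.jsonl';
    $out     = [];
    $skipped = 0;

    if (!is_readable($file)) {
        return $out;
    }

    foreach (new SplFileObject($file) as $line) {
        $event = json_decode(trim((string) $line), true);
        if (!$event || ($eventType && ($event['event'] ?? '') !== $eventType)) {
            continue;
        }
        if ($skipped++ < $offset) {
            continue; // pagination : on saute les events déjà servis
        }

        // Filtrage des champs sensibles avant envoi à Flutter
        unset($event['user_agent']);
        if (isset($event['ip'])) {
            $event['ip'] = preg_replace('/\.\d+$/', '.x', $event['ip']); // IP tronquée (IPv4)
        }

        $out[] = $event;
        if (count($out) >= $limit) {
            break;
        }
    }

    return $out;
}
```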
### Types d'événements agrégés
| Catégorie | Events |
|-----------|--------|
| **Auth** | login_success, login_failed, logout |
| **Passages** | passage_created, passage_updated, passage_deleted |
| **Secteurs** | sector_created, sector_updated, sector_deleted |
| **Users** | user_created, user_updated, user_deleted |
| **Entités** | entity_created, entity_updated, entity_deleted |
| **Opérations** | operation_created, operation_updated, operation_deleted |
| **Stripe** | stripe_payment_created, stripe_payment_success, stripe_payment_failed, stripe_payment_cancelled, stripe_terminal_error |
### Accès et sécurité
- **Rôle requis** : Admin entité (role_id = 2) ou Super-admin (role_id = 1)
- **Isolation** : Admin voit uniquement les stats de son entité
- **Super-admin** : Accès global (entity_id = NULL dans requêtes)
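Esquisse indicative du contrôle d'accès (clés de session supposées, à adapter au code réel d'EventStatsController) :
```php
<?php
// Esquisse : isolation par entité (noms hypothétiques).
// role_id 1 = super-admin, 2 = admin entité (cf. ci-dessus).
function resolveEntityScope(array $session, ?int $requestedEntityId): ?int
{
    $roleId = (int) ($session['role_id'] ?? 0);

    if ($roleId === 1) {
        // Super-admin : NULL = stats globales, ou une entité précise via ?entity_id=X
        return $requestedEntityId;
    }

    if ($roleId === 2) {
        // Admin entité : toujours restreint à sa propre entité, quel que soit le paramètre reçu
        return (int) $session['entity_id'];
    }

    throw new RuntimeException('Accès refusé : rôle insuffisant (403)');
}
```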
## 🚀 Déploiement
@@ -172,4 +240,4 @@ DELETE FROM operations WHERE id = 850;
---
**Mis à jour : 26 Octobre 2025**
**Mis à jour : 22 Décembre 2025**

0
api/docs/UPLOAD-MIGRATION-RECAP.md Normal file → Executable file
View File

0
api/docs/UPLOAD-REORGANIZATION.md Normal file → Executable file
View File

0
api/docs/USERNAME_VALIDATION_CHANGES.md Normal file → Executable file
View File

0
api/docs/_logo_recu.png Normal file → Executable file
View File

0
api/docs/_recu_template.pdf Normal file → Executable file
View File

0
api/docs/contour-des-departements.geojson Normal file → Executable file
View File

0
api/docs/create_table_user_devices.sql Normal file → Executable file
View File

0
api/docs/departements_limitrophes.md Normal file → Executable file
View File

0
api/docs/logrotate_email_queue.conf Normal file → Executable file
View File

0
api/docs/nouvelles-routes-session-refresh.txt Normal file → Executable file
View File

0
api/docs/recu_13718.pdf Normal file → Executable file
View File

0
api/docs/recu_19500582.pdf Normal file → Executable file
View File

0
api/docs/recu_19500586.pdf Normal file → Executable file
View File

0
api/docs/recu_537254062.pdf Normal file → Executable file
View File

0
api/docs/recu_972506460.pdf Normal file → Executable file
View File

0
api/docs/traite_batiments.sql Normal file → Executable file
View File

0
api/docs/x_departements_contours.sql Normal file → Executable file
View File

0
api/docs/x_departements_contours_corrected.sql Normal file → Executable file
View File

0
api/docs/x_departements_contours_fixed.sql Normal file → Executable file
View File

View File

@@ -157,12 +157,21 @@ register_shutdown_function(function() use ($requestUri, $requestMethod) {
// Alerter sur les erreurs 500
if ($statusCode >= 500) {
$error = error_get_last();
$errorMessage = $error['message'] ?? null;
// Si pas d'erreur PHP, c'est probablement une exception capturée
// Le détail de l'erreur sera dans les logs applicatifs
if ($errorMessage === null) {
$errorMessage = 'Exception capturée (voir logs/app.log pour détails)';
}
AlertService::trigger('HTTP_500', [
'endpoint' => $requestUri,
'method' => $requestMethod,
'error_message' => $error['message'] ?? 'Unknown error',
'error_file' => $error['file'] ?? 'Unknown',
'error_message' => $errorMessage,
'error_file' => $error['file'] ?? 'N/A',
'error_line' => $error['line'] ?? 0,
'stack_trace' => 'Consulter logs/app.log pour le stack trace complet',
'message' => "Erreur serveur 500 sur $requestUri"
], 'ERROR');
}

0
api/migration_add_departements_contours.sql Normal file → Executable file
View File

0
api/migration_add_sectors_adresses.sql Normal file → Executable file
View File

0
api/migrations/add_dept_limitrophes.sql Normal file → Executable file
View File

0
api/migrations/integrate_contours_to_departements.sql Normal file → Executable file
View File

0
api/migrations/update_all_dept_limitrophes.sql Normal file → Executable file
View File

1
api/ralph Submodule

Submodule api/ralph added at 098579b5a1

0
api/scripts/CORRECTIONS_MIGRATE.md Normal file → Executable file
View File

0
api/scripts/MIGRATION_PATCH_INSTRUCTIONS.md Normal file → Executable file
View File

0
api/scripts/README-migration.md Normal file → Executable file
View File

0
api/scripts/check_geometry_validity.sql Normal file → Executable file
View File

0
api/scripts/config/update_php_fpm_settings.sh Normal file → Executable file
View File

0
api/scripts/create_addresses_users.sql Normal file → Executable file
View File

0
api/scripts/create_addresses_users_by_env.sql Normal file → Executable file
View File

32
api/scripts/cron/CRON.md Normal file → Executable file
View File

@@ -94,30 +94,7 @@ Ce dossier contient les scripts automatisés de maintenance et de traitement pou
---
### 5. `update_stripe_devices.php`
**Fonction** : Met à jour la liste des appareils Android certifiés pour Tap to Pay
**Caractéristiques** :
- Liste de 95+ devices intégrée
- Ajoute les nouveaux appareils certifiés
- Met à jour les versions Android minimales
- Désactive les appareils obsolètes
- Notification email si changements importants
- Possibilité de personnaliser via `/data/stripe_certified_devices.json`
**Fréquence recommandée** : Hebdomadaire le dimanche à 3h
**Ligne crontab** :
```bash
0 3 * * 0 /usr/bin/php /var/www/geosector/api/scripts/cron/update_stripe_devices.php >> /var/www/geosector/api/logs/stripe_devices.log 2>&1
```
---
### 6. `sync_databases.php`
### 5. `sync_databases.php`
**Fonction** : Synchronise les bases de données entre environnements
@@ -175,9 +152,6 @@ crontab -e
# Rotation des logs événements (mensuel le 1er à 3h)
0 3 1 * * /usr/bin/php /var/www/geosector/api/scripts/cron/rotate_event_logs.php >> /var/www/geosector/api/logs/rotation_events.log 2>&1
# Mise à jour des devices Stripe (hebdomadaire dimanche à 3h)
0 3 * * 0 /usr/bin/php /var/www/geosector/api/scripts/cron/update_stripe_devices.php >> /var/www/geosector/api/logs/stripe_devices.log 2>&1
```
### 4. Vérifier que les CRONs sont actifs
@@ -203,7 +177,6 @@ Tous les logs CRON sont stockés dans `/var/www/geosector/api/logs/` :
- `cleanup_security.log` : Nettoyage des données de sécurité
- `cleanup_logs.log` : Nettoyage des anciens fichiers logs
- `rotation_events.log` : Rotation des logs événements JSONL
- `stripe_devices.log` : Mise à jour des devices Tap to Pay
### Vérification de l'exécution
@@ -216,9 +189,6 @@ tail -n 50 /var/www/geosector/api/logs/cleanup_logs.log
# Voir les dernières rotations des logs événements
tail -n 50 /var/www/geosector/api/logs/rotation_events.log
# Voir les dernières mises à jour Stripe
tail -n 50 /var/www/geosector/api/logs/stripe_devices.log
```
---

View File

@@ -0,0 +1,456 @@
#!/usr/bin/env php
<?php
/**
* Script CRON pour agrégation des statistiques d'événements
*
* Parse les fichiers JSONL et agrège les données dans event_stats_daily
*
* Usage:
* php aggregate_event_stats.php # Agrège J-1
* php aggregate_event_stats.php --date=2025-12-20 # Agrège une date spécifique
* php aggregate_event_stats.php --from=2025-12-01 --to=2025-12-21 # Rattrapage plage
*
* À exécuter quotidiennement via crontab (1h du matin) :
* 0 1 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/aggregate_event_stats.php >> /var/www/geosector/api/logs/aggregate_stats.log 2>&1
*/
declare(strict_types=1);
// Configuration
define('LOCK_FILE', '/tmp/aggregate_event_stats.lock');
define('EVENT_LOG_DIR', __DIR__ . '/../../logs/events');
// Empêcher l'exécution multiple simultanée
if (file_exists(LOCK_FILE)) {
$lockTime = filemtime(LOCK_FILE);
if (time() - $lockTime > 3600) {
unlink(LOCK_FILE);
} else {
die("[" . date('Y-m-d H:i:s') . "] Le processus est déjà en cours d'exécution\n");
}
}
file_put_contents(LOCK_FILE, (string) getmypid());
register_shutdown_function(function () {
if (file_exists(LOCK_FILE)) {
unlink(LOCK_FILE);
}
});
// Simuler l'environnement web pour AppConfig en CLI
if (php_sapi_name() === 'cli') {
$hostname = gethostname();
if (strpos($hostname, 'pra') !== false) {
$_SERVER['SERVER_NAME'] = 'app3.geosector.fr';
} elseif (strpos($hostname, 'rca') !== false) {
$_SERVER['SERVER_NAME'] = 'rapp.geosector.fr';
} else {
$_SERVER['SERVER_NAME'] = 'dapp.geosector.fr';
}
$_SERVER['HTTP_HOST'] = $_SERVER['HTTP_HOST'] ?? $_SERVER['SERVER_NAME'];
$_SERVER['REMOTE_ADDR'] = $_SERVER['REMOTE_ADDR'] ?? '127.0.0.1';
if (!function_exists('getallheaders')) {
function getallheaders()
{
return [];
}
}
}
// Chargement de l'environnement
require_once __DIR__ . '/../../vendor/autoload.php';
require_once __DIR__ . '/../../src/Config/AppConfig.php';
require_once __DIR__ . '/../../src/Core/Database.php';
require_once __DIR__ . '/../../src/Services/LogService.php';
use App\Services\LogService;
/**
* Parse les arguments CLI
*/
function parseArgs(array $argv): array
{
$args = [
'date' => null,
'from' => null,
'to' => null,
];
foreach ($argv as $arg) {
if (strpos($arg, '--date=') === 0) {
$args['date'] = substr($arg, 7);
} elseif (strpos($arg, '--from=') === 0) {
$args['from'] = substr($arg, 7);
} elseif (strpos($arg, '--to=') === 0) {
$args['to'] = substr($arg, 5);
}
}
return $args;
}
/**
* Génère la liste des dates à traiter
*/
function getDatesToProcess(array $args): array
{
$dates = [];
if ($args['date']) {
$dates[] = $args['date'];
} elseif ($args['from'] && $args['to']) {
$current = new DateTime($args['from']);
$end = new DateTime($args['to']);
while ($current <= $end) {
$dates[] = $current->format('Y-m-d');
$current->modify('+1 day');
}
} else {
// Par défaut : J-1
$dates[] = date('Y-m-d', strtotime('-1 day'));
}
return $dates;
}
/**
* Parse un fichier JSONL et retourne les événements
*/
function parseJsonlFile(string $filePath): array
{
$events = [];
if (!file_exists($filePath)) {
return $events;
}
$handle = fopen($filePath, 'r');
if (!$handle) {
return $events;
}
while (($line = fgets($handle)) !== false) {
$line = trim($line);
if (empty($line)) {
continue;
}
$event = json_decode($line, true);
if ($event && isset($event['event'])) {
$events[] = $event;
}
}
fclose($handle);
return $events;
}
/**
* Agrège les événements par entity_id et event_type
*/
function aggregateEvents(array $events): array
{
$stats = [];
foreach ($events as $event) {
$entityId = $event['entity_id'] ?? null;
$eventType = $event['event'] ?? 'unknown';
$userId = $event['user_id'] ?? null;
// Clé d'agrégation : entity_id peut être NULL (stats globales)
$key = ($entityId ?? 'NULL') . '|' . $eventType;
if (!isset($stats[$key])) {
$stats[$key] = [
'entity_id' => $entityId,
'event_type' => $eventType,
'count' => 0,
'sum_amount' => 0.0,
'user_ids' => [],
'metadata' => [],
];
}
$stats[$key]['count']++;
// Collecter les user_ids pour unique_users
if ($userId !== null) {
$stats[$key]['user_ids'][$userId] = true;
}
// Somme des montants pour les passages et paiements Stripe
if (in_array($eventType, ['passage_created', 'passage_updated'])) {
$amount = $event['amount'] ?? 0;
$stats[$key]['sum_amount'] += (float) $amount;
} elseif (in_array($eventType, ['stripe_payment_success', 'stripe_payment_created'])) {
// Montant en centimes -> euros
$amount = ($event['amount'] ?? 0) / 100;
$stats[$key]['sum_amount'] += (float) $amount;
}
// Collecter metadata spécifiques
collectMetadata($stats[$key], $event);
}
// Convertir user_ids en count
foreach ($stats as &$stat) {
$stat['unique_users'] = count($stat['user_ids']);
unset($stat['user_ids']);
// Finaliser les metadata
$stat['metadata'] = finalizeMetadata($stat['metadata'], $stat['event_type']);
}
return $stats;
}
/**
* Collecte les métadonnées spécifiques par type d'événement
*/
function collectMetadata(array &$stat, array $event): void
{
$eventType = $event['event'] ?? '';
switch ($eventType) {
case 'login_failed':
$reason = $event['reason'] ?? 'unknown';
$stat['metadata']['reasons'][$reason] = ($stat['metadata']['reasons'][$reason] ?? 0) + 1;
break;
case 'passage_created':
$sectorId = $event['sector_id'] ?? null;
if ($sectorId) {
$stat['metadata']['sectors'][$sectorId] = ($stat['metadata']['sectors'][$sectorId] ?? 0) + 1;
}
$paymentType = $event['payment_type'] ?? 'unknown';
$stat['metadata']['payment_types'][$paymentType] = ($stat['metadata']['payment_types'][$paymentType] ?? 0) + 1;
break;
case 'stripe_payment_failed':
$errorCode = $event['error_code'] ?? 'unknown';
$stat['metadata']['error_codes'][$errorCode] = ($stat['metadata']['error_codes'][$errorCode] ?? 0) + 1;
break;
case 'stripe_terminal_error':
$errorCode = $event['error_code'] ?? 'unknown';
$stat['metadata']['error_codes'][$errorCode] = ($stat['metadata']['error_codes'][$errorCode] ?? 0) + 1;
break;
}
}
/**
* Finalise les métadonnées (top 5, tri, etc.)
*/
function finalizeMetadata(array $metadata, string $eventType): ?array
{
if (empty($metadata)) {
return null;
}
$result = [];
// Top 5 secteurs
if (isset($metadata['sectors'])) {
arsort($metadata['sectors']);
$result['top_sectors'] = array_slice($metadata['sectors'], 0, 5, true);
}
// Raisons d'échec login
if (isset($metadata['reasons'])) {
arsort($metadata['reasons']);
$result['failure_reasons'] = $metadata['reasons'];
}
// Types de paiement
if (isset($metadata['payment_types'])) {
arsort($metadata['payment_types']);
$result['payment_types'] = $metadata['payment_types'];
}
// Codes d'erreur
if (isset($metadata['error_codes'])) {
arsort($metadata['error_codes']);
$result['error_codes'] = $metadata['error_codes'];
}
return empty($result) ? null : $result;
}
/**
* Insère ou met à jour les stats dans la base de données
*/
function upsertStats(PDO $db, string $date, array $stats): int
{
$upsertedCount = 0;
$sql = "
INSERT INTO event_stats_daily
(stat_date, entity_id, event_type, count, sum_amount, unique_users, metadata)
VALUES
(:stat_date, :entity_id, :event_type, :count, :sum_amount, :unique_users, :metadata)
ON DUPLICATE KEY UPDATE
count = VALUES(count),
sum_amount = VALUES(sum_amount),
unique_users = VALUES(unique_users),
metadata = VALUES(metadata),
updated_at = CURRENT_TIMESTAMP
";
$stmt = $db->prepare($sql);
foreach ($stats as $stat) {
try {
$stmt->execute([
'stat_date' => $date,
'entity_id' => $stat['entity_id'],
'event_type' => $stat['event_type'],
'count' => $stat['count'],
'sum_amount' => $stat['sum_amount'],
'unique_users' => $stat['unique_users'],
'metadata' => $stat['metadata'] ? json_encode($stat['metadata'], JSON_UNESCAPED_UNICODE) : null,
]);
$upsertedCount++;
} catch (PDOException $e) {
echo " ERREUR insertion {$stat['event_type']}: " . $e->getMessage() . "\n";
}
}
return $upsertedCount;
}
/**
* Génère également les stats globales (entity_id = NULL)
*/
function generateGlobalStats(array $statsByEntity): array
{
$globalStats = [];
foreach ($statsByEntity as $stat) {
$eventType = $stat['event_type'];
if (!isset($globalStats[$eventType])) {
$globalStats[$eventType] = [
'entity_id' => null,
'event_type' => $eventType,
'count' => 0,
'sum_amount' => 0.0,
'unique_users' => 0,
'metadata' => null,
];
}
$globalStats[$eventType]['count'] += $stat['count'];
$globalStats[$eventType]['sum_amount'] += $stat['sum_amount'];
$globalStats[$eventType]['unique_users'] += $stat['unique_users'];
}
return array_values($globalStats);
}
// ============================================================
// MAIN
// ============================================================
try {
echo "[" . date('Y-m-d H:i:s') . "] Démarrage de l'agrégation des statistiques\n";
// Initialisation
$appConfig = AppConfig::getInstance();
$config = $appConfig->getFullConfig();
$environment = $appConfig->getEnvironment();
echo "Environnement: {$environment}\n";
Database::init($config['database']);
$db = Database::getInstance();
// Parser les arguments
$args = parseArgs($argv);
$dates = getDatesToProcess($args);
echo "Dates à traiter: " . implode(', ', $dates) . "\n\n";
$totalStats = 0;
$totalEvents = 0;
foreach ($dates as $date) {
$jsonlFile = EVENT_LOG_DIR . '/' . $date . '.jsonl';
echo "--- Traitement de {$date} ---\n";
if (!file_exists($jsonlFile)) {
echo " Fichier non trouvé: {$jsonlFile}\n";
continue;
}
$fileSize = filesize($jsonlFile);
echo " Fichier: " . basename($jsonlFile) . " (" . number_format($fileSize / 1024, 2) . " KB)\n";
// Parser le fichier
$events = parseJsonlFile($jsonlFile);
$eventCount = count($events);
echo " Événements parsés: {$eventCount}\n";
if ($eventCount === 0) {
echo " Aucun événement à agréger\n";
continue;
}
$totalEvents += $eventCount;
// Agréger par entity/event_type
$stats = aggregateEvents($events);
echo " Agrégations par entité: " . count($stats) . "\n";
// Générer les stats globales
$globalStats = generateGlobalStats($stats);
echo " Agrégations globales: " . count($globalStats) . "\n";
// Fusionner stats entités + globales
$allStats = array_merge(array_values($stats), $globalStats);
// Insérer en base
$upserted = upsertStats($db, $date, $allStats);
echo " Stats insérées/mises à jour: {$upserted}\n";
$totalStats += $upserted;
}
// Résumé
echo "\n=== RÉSUMÉ ===\n";
echo "Dates traitées: " . count($dates) . "\n";
echo "Événements traités: {$totalEvents}\n";
echo "Stats agrégées: {$totalStats}\n";
// Log
LogService::log('Agrégation des statistiques terminée', [
'level' => 'info',
'script' => 'aggregate_event_stats.php',
'environment' => $environment,
'dates_count' => count($dates),
'events_count' => $totalEvents,
'stats_count' => $totalStats,
]);
echo "\n[" . date('Y-m-d H:i:s') . "] Agrégation terminée avec succès\n";
} catch (Exception $e) {
$errorMsg = 'Erreur lors de l\'agrégation: ' . $e->getMessage();
LogService::log($errorMsg, [
'level' => 'error',
'script' => 'aggregate_event_stats.php',
'trace' => $e->getTraceAsString(),
]);
echo "\n❌ ERREUR: {$errorMsg}\n";
echo "Stack trace:\n" . $e->getTraceAsString() . "\n";
exit(1);
}
exit(0);

0
api/scripts/cron/cleanup_security_data.php Normal file → Executable file
View File

Some files were not shown because too many files have changed in this diff.