26 Commits
v3.6.2 ... main

Author SHA1 Message Date
8dac04b9b1 docs: Mark task #116 as done (Remark under address)
The remark display in the first card of passage_form_dialog.dart
was already implemented (lines 703-719).

The display includes:
- Icons.note icon
- Italic text
- Greyed-out style (0.7 opacity)
- Only shown when the remark is non-empty

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 17:55:34 +01:00
495ba046ec docs: Mark task #76 and the Hive purge bug as done
Task #76: Admin access restricted to web only - Done 26/01
Bug: Hive boxes not purged on login - Fixed 26/01

Fixed today:
- #15: New member not synchronized (security + sync + API)
- #76: Admin access restricted to web only
- Bug: Automatic cleanup of Hive boxes on login
- Optimization: operation_id stored in session (avoids SQL queries)
2026-01-26 17:42:33 +01:00
957386f78c docs: Mark task #15 as done
Task #15: New member not synchronized - Done 26/01

Solutions implemented:
- Security: Password removed from the API response
- Synchronization: Auto-creation of ope_users when a member is created
- API returns id, ope_user_id and username
- Flutter retrieves and correctly saves the data in Hive
- Optimization: operation_id stored in session (avoids an SQL query)
- SQL fallback when operation_id is missing from the session
2026-01-26 17:29:56 +01:00
9a185b15f3 fix: Retrieve username and ope_user_id from the API response
- Use the username returned by the API instead of the local one
- Retrieve ope_user_id from the API response
- Add debug logs to trace the values
- Fix: The username now appears in the member table after creation
2026-01-26 17:18:51 +01:00
6fd02079c1 feat: Add SQL fallback when operation_id is missing from the session
- If Session::getOperationId() returns null, fall back to an SQL query
- Warning log to identify cases where the session is stale
- Useful when the user has not logged in recently
- Guarantees the active operation is always retrieved
2026-01-26 17:06:00 +01:00
d0697b1e01 feat: Store operation_id in the session as an optimization
- operation_id added in Session::login()
- Session::getOperationId() and Session::setOperationId() added
- LoginController updates operation_id in the session after retrieval
- UserController uses Session::getOperationId() instead of an SQL query
- Optimization: avoids a users+operations SQL join on every member creation
2026-01-26 17:01:46 +01:00
0687900564 fix: Retrieve the active operation from the operations table
- Fixes the SQL error 'Unknown column fk_operation in users'
- The active operation is read from operations.chk_active = 1
- Join with users to filter by the creating admin's entity
- Query: SELECT o.id FROM operations o INNER JOIN users u ON u.fk_entite = o.fk_entite WHERE u.id = ? AND o.chk_active = 1
2026-01-26 16:57:08 +01:00
c24a3afe6a feat: Automatically create ope_users when a member is created
PROBLEM (TASK #15):
When an admin created a new member, only users.id was created;
no ope_users entry was created automatically.
Result: the new member did not appear in Flutter because it
was not synchronized with the active operation.

SOLUTION IMPLEMENTED:
1. Retrieve the creating admin's active operation (users.fk_operation)
2. Automatically create an ope_users entry when there is an active operation
3. Return ope_user_id in the API response (in addition to users.id)

NEW API RESPONSE:
{
  "status": "success",
  "message": "Utilisateur créé avec succès",
  "id": "10023668",           // users.id (central table)
  "ope_user_id": "12345",     // ope_users.id (operational table)
  "username": "pr.350-renn731"
}

BEHAVIOR:
- If the admin has an active operation → ope_users is created automatically
- If there is no active operation → ope_user_id is null (member not assigned)

LOGS:
- INFO log when the assignment succeeds
- WARNING log when there is no active operation

Work on task #15 (New member not synchronized)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 16:42:55 +01:00
6eefa218d8 security: Remove the password from the POST /api/users response
SECURITY ISSUE:
The password was returned in the JSON response when a user was
created (when it was generated automatically).

RISKS:
- Exposure in proxy/load-balancer logs
- Visible in browser DevTools
- May be logged client-side on error
- Remains in memory/request history

SOLUTION:
- The 'password' field is removed entirely from the response
- The password is ALREADY sent by email (line 525)
- The admin only receives: id + username

RESPONSE BEFORE:
{
  "status": "success",
  "message": "Utilisateur créé avec succès",
  "id": "10023668",
  "username": "pr.350-renn731",
  "password": "MPar<2a8^2&VnLE"  // SECURITY FLAW
}

RESPONSE AFTER:
{
  "status": "success",
  "message": "Utilisateur créé avec succès",
  "id": "10023668",
  "username": "pr.350-renn731"  // SECURE
}

Work on task #15 (New member not synchronized)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 16:30:03 +01:00
e3d9433442 docs: Mark task #30 as completed (26/01)
Selected members at the top of the list - done

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 16:20:30 +01:00
35e9ddbed5 fix: Replace NavigationHelper with NavigationConfig in map_page.dart
NavigationHelper was removed during refactoring #74.
NavigationConfig is used instead.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 16:06:51 +01:00
3daf5a204a docs: Mark task #74 as completed (26/01)
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 15:28:10 +01:00
1cdb4ec58c refactor: Simplify DashboardLayout/AppScaffold (task #74)
Centralized and simplified navigation architecture:

ADDED:
- navigation_config.dart: centralized navigation configuration
  * All destinations (admin/user)
  * Navigation logic (index → route)
  * Reverse resolution (route → index)
  * Titles and utilities

- backgrounds/dots_painter.dart: decorative dots painter
  * Extracted from AppScaffold and AdminScaffold
  * Configurable (opacity, density, seed)
  * Reusable

- backgrounds/gradient_background.dart: gradient background
  * Handles the admin (red) / user (green) colors
  * Option to show/hide the dots
  * Standalone widget

SIMPLIFIED:
- app_scaffold.dart: 426 → 192 lines (-55%)
  * Uses NavigationConfig instead of NavigationHelper
  * Uses GradientBackground instead of duplicated code
  * Local DotsPainter removed

- dashboard_layout.dart: 140 → 77 lines (-45%)
  * Excessive validations removed (try/catch, checks)
  * Leaner, more readable code

REMOVED:
- admin_scaffold.dart: deleted (207 lines)
  * Obsolete since the unification with AppScaffold
  * Code duplicated with AppScaffold
  * AdminNavigationHelper merged into NavigationConfig

RESULTS:
- Before: 773 lines (AppScaffold + AdminScaffold + DashboardLayout)
- After: 623 lines (everything included)
- Net reduction: -150 lines (-19%)
- Clearer, more maintainable architecture
- No code duplication
- Navigation centralized in a single place

Resolves task #74 in PLANNING-2026-Q1.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 15:27:54 +01:00
7345cf805e fix: Correct version format YY.MM.DDNN (3 parts instead of 4)
- The VERSION file now stores: 26.01.2604 (3 parts)
- Instead of: 26.01.26.04 (4 parts - invalid for semver)
- Regex adjusted to parse the new format: ^YY.MM.DDNN$
- Date-change detection compares YY, MM and DD separately
- The build number remains YYMMDDNN (26012604)
- Comments updated to reflect the YY.MM.DDNN format

Resolves: "Could not parse 26.01.26.04+26012604"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:35:09 +01:00
eef1fc8d32 fix: Semver format YY.MM.DDNN for Dart/Flutter compatibility
- Version format corrected for pubspec.yaml: YY.MM.DDNN
- The VERSION file keeps the readable format: 26.01.26.03
- pubspec.yaml receives the semver format: 26.01.2603+26012603
- DD+NN concatenated for the 3rd part: 26+03 = 2603
- Full build number: 26012603

Resolves error: Could not parse '26.01.26.03+26012603'
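The DD+NN concatenation described above can be sketched in shell. This is an illustration only; the sample version string and variable names are assumptions, not taken from the project's actual script:

```shell
#!/usr/bin/env bash
# Convert the readable 4-part version (YY.MM.DD.NN) into the
# 3-part semver + build number that pubspec.yaml accepts.
version="26.01.26.03"                       # sample VERSION file content
IFS='.' read -r yy mm dd nn <<< "$version"  # split the 4 parts
semver="${yy}.${mm}.${dd}${nn}"             # concat DD+NN -> 26.01.2603
build="${yy}${mm}${dd}${nn}"                # build number -> 26012603
echo "${semver}+${build}"                   # prints 26.01.2603+26012603
```

Dart's pubspec version field accepts `major.minor.patch+build`, which is why the four dotted parts must be collapsed to three.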

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:30:40 +01:00
097335193e fix: Add missing echo_success function
- echo_success() added with a green ✓ symbol
- Used by the automatic versioning system
- Fixes error: "echo_success: command not found"
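A minimal helper of this kind can be written with ANSI color codes. This is a sketch of the idea; the project's actual implementation may differ:

```shell
#!/usr/bin/env bash
# Print a message prefixed with a green check mark.
echo_success() {
    local green='\033[0;32m' reset='\033[0m'
    echo -e "${green}✓${reset} $1"
}

echo_success "VERSION updated"
```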

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:22:36 +01:00
cf1e54d8d0 docs: Update AppInfoService comments for the YY.MM.DD.NN format
- Version format clarified: YY.MM.DD.NN
- Comment: auto-incremented on each DEV deployment
- Build number format: YYMMDDNN (no dots)
- Full version format: vYY.MM.DD.NN+YYMMDDNN

The full version is automatically displayed in:
- splash_page.dart (loading screen)
- login_page.dart (login)
- register_page.dart (registration)
- dashboard_app_bar.dart (dashboard)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:15:37 +01:00
d6c4c6d228 feat: Automatic YY.MM.DD.NN versioning for DEV
- Fully automatic version numbering system
- Format: YY.MM.DD.NN (e.g. 26.01.27.01)
- Automatic detection of today's date
- Auto-increment of the build number (.01 → .02 → .03...)
- Auto-reset to .01 when the date changes
- Compatible with the old format (automatic conversion)

Logic:
1. Read the system date: date +%y.%m.%d
2. If the date differs from VERSION → YY.MM.DD.01
3. If same date → increment the last number
4. Write to VERSION and pubspec.yaml

Example:
- 26/01 build 1 → 26.01.26.01
- 26/01 build 2 → 26.01.26.02
- 27/01 build 1 → 26.01.27.01 (auto reset)
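The four steps above can be sketched as follows. The VERSION path is an assumption and the pubspec.yaml update is left out; this is not the project's actual script:

```shell
#!/usr/bin/env bash
# Sketch of the auto-increment logic: same date -> bump last number,
# new date -> reset to .01.
VERSION_FILE="VERSION"
today=$(date +%y.%m.%d)                     # step 1: system date, e.g. 26.01.27
current=$(cat "$VERSION_FILE" 2>/dev/null || echo "")
if [[ "$current" == "$today".* ]]; then
    nn="${current##*.}"                     # same date: take the last number...
    nn=$(printf '%02d' $((10#$nn + 1)))     # ...and increment it (step 3)
else
    nn="01"                                 # date changed: reset (step 2)
fi
echo "${today}.${nn}" > "$VERSION_FILE"     # step 4: write VERSION
cat "$VERSION_FILE"
```

Note the `10#$nn` base prefix: without it, bash would treat `08` and `09` as invalid octal literals.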

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:14:54 +01:00
b3e03ef6e6 feat: Dynamic, linked member/sector filters (#42)
- Cross-filtering via the user_sector box (UserSectorModel)
- If a sector is selected → members are filtered (that sector only)
- If a member is selected → sectors are filtered (that member's sectors only)
- Relation: UserSectorModel.opeUserId ↔ UserSectorModel.fkSector
- UserSectorModel import added
- Sector dropdown simplified (direct list, no more map)

Behavior:
1. No filter → all members and all sectors
2. Sector chosen → reduced member list
3. Member chosen → reduced sector list
4. Both chosen → most restricted view

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 12:00:28 +01:00
9c837f8adb feat: Reset-filters button in the admin history (#42)
- IconButton added with a clear (X) icon
- Visible only when at least one filter is active
- Resets: text search + member + sector
- Sets both selects back to "All"
- Style: light grey background, 12px padding
- Tooltip: "Reset filters"

Display condition:
isAdmin && (search OR member OR sector non-empty)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:51:18 +01:00
16b30b0e9e feat: Input font weight w600 + dark theme removed
- Input font weight increased: w500 → w600 (semi-bold)
- bodyLarge w600 in _getTextTheme (static)
- bodyLarge w600 in getResponsiveTextTheme (responsive)
- Unused dark theme removed entirely (~95 lines)
- backgroundDarkColor and textDarkColor constants removed
- Applied to all TextFormField/TextField widgets

Improves the readability of input fields.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:41:22 +01:00
c1c6c55cbe feat: Member display format with sector (#42)
- Member dropdown format: "FirstName name (sectName)"
- Handles cases where firstName or name is empty
- sectName shown in parentheses when available
- Fallback: "Membre #opeUserId" when there is no name at all

Examples:
- "Pierre Dupont (Secteur A)"
- "Pierre (Secteur B)"
- "Dupont (Secteur C)"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:29:17 +01:00
3c3a9b90aa fix: Use membre.opeUserId for the member filter (#42)
- Fix: uses membre.opeUserId (not membre.id)
- Lists all members with a non-null opeUserId
- Filters passages by passage.fkUser == membre.opeUserId
- Removes unnecessary debugPrint calls
- Display: membre.name or membre.firstName

Relation: MembreModel.opeUserId == PassageModel.fkUser

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:25:42 +01:00
6952417147 fix: Use the Hive members box for the member filter (#42)
- Member filter fixed: uses membreRepository.getMembresBox()
- Members are read from the Hive box (ope_users)
- Only members with passages are kept (memberIdsInPassages)
- Display: member.name or member.firstName
- Alphabetical sort by name

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 11:12:57 +01:00
a392305820 feat: Add member/sector filters in the admin history (#42)
- Two filter dropdowns added in history_page.dart (admin only)
- Member filter (fkUser): dynamic list built from passages
- Sector filter (fkSector): dynamic list built from passages
- Default value: "All" for each filter
- Dropdowns sorted alphabetically
- Planning updated: #42 validated (26/01)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-26 10:53:07 +01:00
5b6808db25 feat: Version 3.6.3 - IGN map, compass mode, Flutter analyze fixes
New features:
- #215 Compass mode + IGN/satellite map (field mode)
- #53 Maximum zoom limit to prevent over-zooming
- #14 F5 logout bug fixed
- #204 Flashy color design
- #205 Simplified user screens

Flutter analyze fixes:
- Warnings removed in room.g.dart, chat_service.dart, api_service.dart
- 0 errors, 0 warnings, 30 infos (style suggestions)

Other:
- IGN Plan and IGN Ortho tiles integrated (geopf.fr)
- flutter_compass for Android/iOS
- Store assets reorganized

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 17:46:03 +01:00
3067 changed files with 79037 additions and 4976 deletions

.gitignore vendored Normal file → Executable file

CHANGELOG-v3.1.6.md Normal file → Executable file

@@ -10,6 +10,11 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
- Web: `cd web && npm run dev` - run Svelte dev server
- Web build: `cd web && npm run build` - build web app for production
## Post-modification checks (OBLIGATOIRE)
After modifying any code file, run the appropriate linter:
- Dart/Flutter: `cd app && flutter analyze [modified_files]`
- PHP: `php -l [modified_file]` (syntax check)
## Code Style Guidelines
- Flutter/Dart: Follow Flutter lint rules in analysis_options.yaml
- Naming: camelCase for variables/methods, PascalCase for classes/enums

Capture.png Normal file → Executable file (binary, 255 KiB)

HOWTO-PROKOV.md Normal file → Executable file

MONITORING.md Normal file → Executable file

PLANNING-STRIPE-ADMIN.md Normal file → Executable file

VERSION Normal file → Executable file

@@ -1 +1 @@
-3.6.2
+26.01.2607

Binary file not shown.

@@ -1,651 +0,0 @@
#!/bin/bash
set -uo pipefail
# Note: Removed -e to allow script to continue on errors
# Errors are handled explicitly with ERROR_COUNT
# Parse command line arguments
ONLY_DB=false
if [[ "${1:-}" == "-onlydb" ]]; then
ONLY_DB=true
echo "Mode: Database backup only"
fi
# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="$SCRIPT_DIR/d6back.yaml"
LOG_DIR="$SCRIPT_DIR/logs"
mkdir -p "$LOG_DIR"
LOG_FILE="$LOG_DIR/d6back-$(date +%Y%m%d).log"
ERROR_COUNT=0
RECAP_FILE="/tmp/backup_recap_$$.txt"
# Lock file to prevent concurrent executions
LOCK_FILE="/var/lock/d6back.lock"
exec 200>"$LOCK_FILE"
if ! flock -n 200; then
echo "ERROR: Another backup is already running" >&2
exit 1
fi
trap 'flock -u 200' EXIT
# Clean old log files (keep only last 10)
find "$LOG_DIR" -maxdepth 1 -name "d6back-*.log" -type f 2>/dev/null | sort -r | tail -n +11 | xargs -r rm -f || true
# Check dependencies - COMMENTED OUT
# for cmd in yq ssh tar openssl; do
# if ! command -v "$cmd" &> /dev/null; then
# echo "ERROR: $cmd is required but not installed" | tee -a "$LOG_FILE"
# exit 1
# fi
# done
# Load config
DIR_BACKUP=$(yq '.global.dir_backup' "$CONFIG_FILE" | tr -d '"')
ENC_KEY_PATH=$(yq '.global.enc_key' "$CONFIG_FILE" | tr -d '"')
BACKUP_SERVER=$(yq '.global.backup_server // "BACKUP"' "$CONFIG_FILE" | tr -d '"')
EMAIL_TO=$(yq '.global.email_to // "support@unikoffice.com"' "$CONFIG_FILE" | tr -d '"')
KEEP_DIRS=$(yq '.global.keep_dirs' "$CONFIG_FILE" | tr -d '"')
KEEP_DB=$(yq '.global.keep_db' "$CONFIG_FILE" | tr -d '"')
# Load encryption key
if [[ ! -f "$ENC_KEY_PATH" ]]; then
echo "ERROR: Encryption key not found: $ENC_KEY_PATH" | tee -a "$LOG_FILE"
exit 1
fi
ENC_KEY=$(cat "$ENC_KEY_PATH")
echo "=== Backup Started $(date) ===" | tee -a "$LOG_FILE"
echo "Backup directory: $DIR_BACKUP" | tee -a "$LOG_FILE"
# Check available disk space
DISK_USAGE=$(df "$DIR_BACKUP" | tail -1 | awk '{print $5}' | sed 's/%//')
DISK_FREE=$((100 - DISK_USAGE))
if [[ $DISK_FREE -lt 20 ]]; then
echo "WARNING: Low disk space! Only ${DISK_FREE}% free on backup partition" | tee -a "$LOG_FILE"
# Send warning email
echo "Sending DISK SPACE WARNING email to $EMAIL_TO (${DISK_FREE}% free)" | tee -a "$LOG_FILE"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Backup${BACKUP_SERVER} WARNING - Low disk space (${DISK_FREE}% free)"
echo ""
echo "WARNING: Low disk space on $(hostname)"
echo ""
echo "Backup directory: $DIR_BACKUP"
echo "Disk usage: ${DISK_USAGE}%"
echo "Free space: ${DISK_FREE}%"
echo ""
echo "The backup will continue but please free up some space soon."
echo ""
echo "Date: $(date '+%d.%m.%Y %H:%M')"
} | msmtp "$EMAIL_TO"
echo "DISK SPACE WARNING email sent successfully to $EMAIL_TO" | tee -a "$LOG_FILE"
else
echo "WARNING: msmtp not found - DISK WARNING email NOT sent" | tee -a "$LOG_FILE"
fi
else
echo "Disk space OK: ${DISK_FREE}% free" | tee -a "$LOG_FILE"
fi
# Initialize recap file
echo "BACKUP REPORT - $(hostname) - $(date '+%d.%m.%Y %H')h" > "$RECAP_FILE"
echo "========================================" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
# Function to format size in MB with thousand separator
format_size_mb() {
local file="$1"
if [[ -f "$file" ]]; then
local size_kb=$(du -k "$file" | cut -f1)
local size_mb=$((size_kb / 1024))
# Add thousand separator with printf and sed
printf "%d" "$size_mb" | sed ':a;s/\B[0-9]\{3\}\>/\.&/;ta'
else
echo "0"
fi
}
# Function to calculate age in days
get_age_days() {
local file="$1"
local now=$(date +%s)
local file_time=$(stat -c %Y "$file" 2>/dev/null || echo 0)
echo $(( (now - file_time) / 86400 ))
}
# Function to get week number of year for a file
get_week_year() {
local file="$1"
local file_time=$(stat -c %Y "$file" 2>/dev/null || echo 0)
date -d "@$file_time" +"%Y-%W"
}
# Function to cleanup old backups according to retention policy
cleanup_old_backups() {
local DELETED_COUNT=0
local KEPT_COUNT=0
echo "" | tee -a "$LOG_FILE"
echo "=== Starting Backup Retention Cleanup ===" | tee -a "$LOG_FILE"
# Parse retention periods
local KEEP_DIRS_DAYS=${KEEP_DIRS%d} # Remove 'd' suffix
# Parse database retention (5d,3w,15m)
IFS=',' read -r KEEP_DB_DAILY KEEP_DB_WEEKLY KEEP_DB_MONTHLY <<< "$KEEP_DB"
local KEEP_DB_DAILY_DAYS=${KEEP_DB_DAILY%d}
local KEEP_DB_WEEKLY_WEEKS=${KEEP_DB_WEEKLY%w}
local KEEP_DB_MONTHLY_MONTHS=${KEEP_DB_MONTHLY%m}
# Convert to days
local KEEP_DB_WEEKLY_DAYS=$((KEEP_DB_WEEKLY_WEEKS * 7))
local KEEP_DB_MONTHLY_DAYS=$((KEEP_DB_MONTHLY_MONTHS * 30))
echo "Retention policy: dirs=${KEEP_DIRS_DAYS}d, db=${KEEP_DB_DAILY_DAYS}d/${KEEP_DB_WEEKLY_WEEKS}w/${KEEP_DB_MONTHLY_MONTHS}m" | tee -a "$LOG_FILE"
# Process each host directory
for host_dir in "$DIR_BACKUP"/*; do
if [[ ! -d "$host_dir" ]]; then
continue
fi
local host_name=$(basename "$host_dir")
echo " Cleaning host: $host_name" | tee -a "$LOG_FILE"
# Clean directory backups (*.tar.gz but not *.sql.gz.enc)
while IFS= read -r -d '' file; do
if [[ $(basename "$file") == *".sql.gz.enc" ]]; then
continue # Skip SQL files
fi
local age_days=$(get_age_days "$file")
if [[ $age_days -gt $KEEP_DIRS_DAYS ]]; then
rm -f "$file"
echo " Deleted: $(basename "$file") (${age_days}d > ${KEEP_DIRS_DAYS}d)" | tee -a "$LOG_FILE"
((DELETED_COUNT++))
else
((KEPT_COUNT++))
fi
done < <(find "$host_dir" -name "*.tar.gz" -type f -print0 2>/dev/null)
# Clean database backups with retention policy
declare -A db_files
while IFS= read -r -d '' file; do
local filename=$(basename "$file")
local db_name=${filename%%_*}
if [[ -z "${db_files[$db_name]:-}" ]]; then
db_files[$db_name]="$file"
else
db_files[$db_name]+=$'\n'"$file"
fi
done < <(find "$host_dir" -name "*.sql.gz.enc" -type f -print0 2>/dev/null)
# Process each database
for db_name in "${!db_files[@]}"; do
# Sort files by age (newest first)
mapfile -t files < <(echo "${db_files[$db_name]}" | while IFS= read -r f; do
echo "$f"
done | xargs -I {} stat -c "%Y {}" {} 2>/dev/null | sort -rn | cut -d' ' -f2-)
# Track which files to keep
declare -A keep_daily
declare -A keep_weekly
for file in "${files[@]}"; do
local age_days=$(get_age_days "$file")
if [[ $age_days -le $KEEP_DB_DAILY_DAYS ]]; then
# Keep all files within daily retention
((KEPT_COUNT++))
elif [[ $age_days -le $KEEP_DB_WEEKLY_DAYS ]]; then
# Weekly retention: keep one per day
local file_date=$(date -d "@$(stat -c %Y "$file")" +"%Y-%m-%d")
if [[ -z "${keep_daily[$file_date]:-}" ]]; then
keep_daily[$file_date]="$file"
((KEPT_COUNT++))
else
rm -f "$file"
((DELETED_COUNT++))
fi
elif [[ $age_days -le $KEEP_DB_MONTHLY_DAYS ]]; then
# Monthly retention: keep one per week
local week_year=$(get_week_year "$file")
if [[ -z "${keep_weekly[$week_year]:-}" ]]; then
keep_weekly[$week_year]="$file"
((KEPT_COUNT++))
else
rm -f "$file"
((DELETED_COUNT++))
fi
else
# Beyond retention period
rm -f "$file"
echo " Deleted: $(basename "$file") (${age_days}d > ${KEEP_DB_MONTHLY_DAYS}d)" | tee -a "$LOG_FILE"
((DELETED_COUNT++))
fi
done
unset keep_daily keep_weekly
done
unset db_files
done
echo "Cleanup completed: ${DELETED_COUNT} deleted, ${KEPT_COUNT} kept" | tee -a "$LOG_FILE"
# Add cleanup summary to recap file
echo "" >> "$RECAP_FILE"
echo "CLEANUP SUMMARY:" >> "$RECAP_FILE"
echo " Files deleted: $DELETED_COUNT" >> "$RECAP_FILE"
echo " Files kept: $KEPT_COUNT" >> "$RECAP_FILE"
}
# Function to backup a single database (must be defined before use)
backup_database() {
local database="$1"
local timestamp="$(date +%Y%m%d_%H)"
local backup_file="$backup_dir/sql/${database}_${timestamp}.sql.gz.enc"
echo " Backing up database: $database" | tee -a "$LOG_FILE"
if [[ "$ssh_user" != "root" ]]; then
CMD_PREFIX="sudo"
else
CMD_PREFIX=""
fi
# Execute backup with encryption
# First test MySQL connection to get clear error messages (|| true to continue on error)
MYSQL_TEST=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb --defaults-extra-file=/tmp/d6back.cnf -e \"SELECT 1\" 2>&1
rm -f /tmp/d6back.cnf'" 2>/dev/null || true)
if ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb-dump --defaults-extra-file=/tmp/d6back.cnf --single-transaction --lock-tables=false --add-drop-table --create-options --databases $database 2>/dev/null | sed -e \"/^CREATE DATABASE/s/\\\`$database\\\`/\\\`${database}_${timestamp}\\\`/\" -e \"/^USE/s/\\\`$database\\\`/\\\`${database}_${timestamp}\\\`/\" | gzip
rm -f /tmp/d6back.cnf'" | \
openssl enc -aes-256-cbc -salt -pass pass:"$ENC_KEY" -pbkdf2 > "$backup_file" 2>/dev/null; then
# Validate backup file size (encrypted SQL should be > 100 bytes)
if [[ -f "$backup_file" ]]; then
file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo 0)
if [[ $file_size -lt 100 ]]; then
# Analyze MySQL connection test results
if [[ "$MYSQL_TEST" == *"Access denied"* ]]; then
echo " ERROR: MySQL authentication failed for $database on $host_name/$container_name" | tee -a "$LOG_FILE"
echo " User: $db_user@$db_host - Check password in configuration" | tee -a "$LOG_FILE"
elif [[ "$MYSQL_TEST" == *"Unknown database"* ]]; then
echo " ERROR: Database '$database' does not exist on $host_name/$container_name" | tee -a "$LOG_FILE"
elif [[ "$MYSQL_TEST" == *"Can't connect"* ]]; then
echo " ERROR: Cannot connect to MySQL server at $db_host in $container_name" | tee -a "$LOG_FILE"
else
echo " ERROR: Backup file too small (${file_size} bytes): $database on $host_name/$container_name" | tee -a "$LOG_FILE"
fi
((ERROR_COUNT++))
rm -f "$backup_file"
else
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved (encrypted): $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " SQL: $(basename "$backup_file") - ${size_mb} Mo" >> "$RECAP_FILE"
# Test backup integrity
if ! openssl enc -aes-256-cbc -d -pass pass:"$ENC_KEY" -pbkdf2 -in "$backup_file" | gunzip -t 2>/dev/null; then
echo " ERROR: Backup integrity check failed for $database" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
fi
else
echo " ERROR: Backup file not created: $database" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
# Analyze MySQL connection test for failed backup
if [[ "$MYSQL_TEST" == *"Access denied"* ]]; then
echo " ERROR: MySQL authentication failed for $database on $host_name/$container_name" | tee -a "$LOG_FILE"
echo " User: $db_user@$db_host - Check password in configuration" | tee -a "$LOG_FILE"
elif [[ "$MYSQL_TEST" == *"Unknown database"* ]]; then
echo " ERROR: Database '$database' does not exist on $host_name/$container_name" | tee -a "$LOG_FILE"
elif [[ "$MYSQL_TEST" == *"Can't connect"* ]]; then
echo " ERROR: Cannot connect to MySQL server at $db_host in $container_name" | tee -a "$LOG_FILE"
else
echo " ERROR: Failed to backup database $database on $host_name/$container_name" | tee -a "$LOG_FILE"
fi
((ERROR_COUNT++))
rm -f "$backup_file"
fi
}
# Process each host
host_count=$(yq '.hosts | length' "$CONFIG_FILE")
for ((i=0; i<$host_count; i++)); do
host_name=$(yq ".hosts[$i].name" "$CONFIG_FILE" | tr -d '"')
host_ip=$(yq ".hosts[$i].ip" "$CONFIG_FILE" | tr -d '"')
ssh_user=$(yq ".hosts[$i].user" "$CONFIG_FILE" | tr -d '"')
ssh_key=$(yq ".hosts[$i].key" "$CONFIG_FILE" | tr -d '"')
ssh_port=$(yq ".hosts[$i].port // 22" "$CONFIG_FILE" | tr -d '"')
echo "Processing host: $host_name ($host_ip)" | tee -a "$LOG_FILE"
echo "" >> "$RECAP_FILE"
echo "HOST: $host_name ($host_ip)" >> "$RECAP_FILE"
echo "----------------------------" >> "$RECAP_FILE"
# Test SSH connection
if ! ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 -o StrictHostKeyChecking=no "$ssh_user@$host_ip" "true" 2>/dev/null; then
echo " ERROR: Cannot connect to $host_name ($host_ip)" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
continue
fi
# Process containers
container_count=$(yq ".hosts[$i].containers | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
for ((c=0; c<$container_count; c++)); do
container_name=$(yq ".hosts[$i].containers[$c].name" "$CONFIG_FILE" | tr -d '"')
echo " Processing container: $container_name" | tee -a "$LOG_FILE"
# Add container to recap
echo "" >> "$RECAP_FILE"
echo " Container: $container_name" >> "$RECAP_FILE"
# Create backup directories
backup_dir="$DIR_BACKUP/$host_name/$container_name"
mkdir -p "$backup_dir"
mkdir -p "$backup_dir/sql"
# Backup directories (skip if -onlydb mode)
if [[ "$ONLY_DB" == "false" ]]; then
dir_count=$(yq ".hosts[$i].containers[$c].dirs | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
for ((d=0; d<$dir_count; d++)); do
dir_path=$(yq ".hosts[$i].containers[$c].dirs[$d]" "$CONFIG_FILE" | sed 's/^"\|"$//g')
# Use sudo if not root
if [[ "$ssh_user" != "root" ]]; then
CMD_PREFIX="sudo"
else
CMD_PREFIX=""
fi
# Special handling for /var/www - backup each subdirectory separately
if [[ "$dir_path" == "/var/www" ]]; then
echo " Backing up subdirectories of $dir_path" | tee -a "$LOG_FILE"
# Get list of subdirectories
subdirs=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- find /var/www -maxdepth 1 -type d ! -path /var/www" 2>/dev/null || echo "")
for subdir in $subdirs; do
subdir_name=$(basename "$subdir" | tr '/' '_')
backup_file="$backup_dir/www_${subdir_name}_$(date +%Y%m%d_%H).tar.gz"
echo " Backing up: $subdir" | tee -a "$LOG_FILE"
if ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- tar czf - $subdir 2>/dev/null" > "$backup_file"; then
# Validate backup file size (tar.gz should be > 1KB)
if [[ -f "$backup_file" ]]; then
file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo 0)
if [[ $file_size -lt 1024 ]]; then
echo " WARNING: Backup file very small (${file_size} bytes): $subdir" | tee -a "$LOG_FILE"
# Keep the file but note it's small
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved (small): $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " DIR: $(basename "$backup_file") - ${size_mb} Mo (WARNING: small)" >> "$RECAP_FILE"
else
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved: $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " DIR: $(basename "$backup_file") - ${size_mb} Mo" >> "$RECAP_FILE"
fi
# Test tar integrity
if ! tar tzf "$backup_file" >/dev/null 2>&1; then
echo " ERROR: Tar integrity check failed" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
echo " ERROR: Backup file not created: $subdir" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
echo " ERROR: Failed to backup $subdir" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
rm -f "$backup_file"
fi
done
else
# Normal backup for other directories
dir_name=$(basename "$dir_path" | tr '/' '_')
backup_file="$backup_dir/${dir_name}_$(date +%Y%m%d_%H).tar.gz"
echo " Backing up: $dir_path" | tee -a "$LOG_FILE"
if ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"$CMD_PREFIX incus exec $container_name -- tar czf - $dir_path 2>/dev/null" > "$backup_file"; then
# Validate backup file size (tar.gz should be > 1KB)
if [[ -f "$backup_file" ]]; then
file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo 0)
if [[ $file_size -lt 1024 ]]; then
echo " WARNING: Backup file very small (${file_size} bytes): $dir_path" | tee -a "$LOG_FILE"
# Keep the file but note it's small
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved (small): $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " DIR: $(basename "$backup_file") - ${size_mb} Mo (WARNING: small)" >> "$RECAP_FILE"
else
size=$(du -h "$backup_file" | cut -f1)
size_mb=$(format_size_mb "$backup_file")
echo " ✓ Saved: $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
echo " DIR: $(basename "$backup_file") - ${size_mb} Mo" >> "$RECAP_FILE"
fi
# Test tar integrity
if ! tar tzf "$backup_file" >/dev/null 2>&1; then
echo " ERROR: Tar integrity check failed" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
echo " ERROR: Backup file not created: $dir_path" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
fi
else
echo " ERROR: Failed to backup $dir_path" | tee -a "$LOG_FILE"
((ERROR_COUNT++))
rm -f "$backup_file"
fi
fi
done
fi # End of directory backup section
# Backup databases
db_user=$(yq ".hosts[$i].containers[$c].db_user" "$CONFIG_FILE" 2>/dev/null | tr -d '"')
db_pass=$(yq ".hosts[$i].containers[$c].db_pass" "$CONFIG_FILE" 2>/dev/null | tr -d '"')
db_host=$(yq ".hosts[$i].containers[$c].db_host // \"localhost\"" "$CONFIG_FILE" 2>/dev/null | tr -d '"')
# Check if we're in onlydb mode
if [[ "$ONLY_DB" == "true" ]]; then
# Use onlydb list if it exists
onlydb_count=$(yq ".hosts[$i].containers[$c].onlydb | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
if [[ "$onlydb_count" != "0" ]] && [[ "$onlydb_count" != "null" ]]; then
db_count="$onlydb_count"
use_onlydb=true
else
# No onlydb list, skip this container in onlydb mode
continue
fi
else
# Normal mode - use databases list
db_count=$(yq ".hosts[$i].containers[$c].databases | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
use_onlydb=false
fi
if [[ -n "$db_user" ]] && [[ -n "$db_pass" ]] && [[ "$db_count" != "0" ]]; then
for ((db=0; db<$db_count; db++)); do
if [[ "$use_onlydb" == "true" ]]; then
db_name=$(yq ".hosts[$i].containers[$c].onlydb[$db]" "$CONFIG_FILE" | tr -d '"')
else
db_name=$(yq ".hosts[$i].containers[$c].databases[$db]" "$CONFIG_FILE" | tr -d '"')
fi
if [[ "$db_name" == "ALL" ]]; then
echo " Fetching all databases..." | tee -a "$LOG_FILE"
# Get database list
if [[ "$ssh_user" != "root" ]]; then
db_list=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"sudo incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb --defaults-extra-file=/tmp/d6back.cnf -e \"SHOW DATABASES;\" 2>/dev/null
rm -f /tmp/d6back.cnf'" | \
grep -Ev '^(Database|information_schema|performance_schema|mysql|sys)$' || echo "")
else
db_list=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
"incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb --defaults-extra-file=/tmp/d6back.cnf -e \"SHOW DATABASES;\" 2>/dev/null
rm -f /tmp/d6back.cnf'" | \
grep -Ev '^(Database|information_schema|performance_schema|mysql|sys)$' || echo "")
fi
# Backup each database
for single_db in $db_list; do
backup_database "$single_db"
done
else
backup_database "$db_name"
fi
done
fi
done
done
echo "=== Backup Completed $(date) ===" | tee -a "$LOG_FILE"
# Cleanup old backups according to retention policy
cleanup_old_backups
# Show summary
total_size=$(du -sh "$DIR_BACKUP" 2>/dev/null | cut -f1)
echo "Total backup size: $total_size" | tee -a "$LOG_FILE"
# Add summary to recap
echo "" >> "$RECAP_FILE"
echo "========================================" >> "$RECAP_FILE"
# Add size details per host/container
echo "BACKUP SIZES:" >> "$RECAP_FILE"
for host_dir in "$DIR_BACKUP"/*; do
if [[ -d "$host_dir" ]]; then
host_name=$(basename "$host_dir")
host_size=$(du -sh "$host_dir" 2>/dev/null | cut -f1)
echo "" >> "$RECAP_FILE"
echo " $host_name: $host_size" >> "$RECAP_FILE"
# Size per container
for container_dir in "$host_dir"/*; do
if [[ -d "$container_dir" ]]; then
container_name=$(basename "$container_dir")
container_size=$(du -sh "$container_dir" 2>/dev/null | cut -f1)
echo " - $container_name: $container_size" >> "$RECAP_FILE"
fi
done
fi
done
echo "" >> "$RECAP_FILE"
echo "TOTAL SIZE: $total_size" >> "$RECAP_FILE"
echo "COMPLETED: $(date '+%d.%m.%Y %H:%M')" >> "$RECAP_FILE"
# Prepare email subject with date format
DATE_SUBJECT=$(date '+%d.%m.%Y %H')
# Send recap email
if [[ $ERROR_COUNT -gt 0 ]]; then
echo "Total errors: $ERROR_COUNT" | tee -a "$LOG_FILE"
# Add errors to recap
echo "" >> "$RECAP_FILE"
echo "ERRORS DETECTED: $ERROR_COUNT" >> "$RECAP_FILE"
echo "----------------------------" >> "$RECAP_FILE"
grep -i "ERROR" "$LOG_FILE" >> "$RECAP_FILE"
# Send email with ERROR in subject
echo "Sending ERROR email to $EMAIL_TO (Errors found: $ERROR_COUNT)" | tee -a "$LOG_FILE"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Backup${BACKUP_SERVER} ERROR $DATE_SUBJECT"
echo ""
cat "$RECAP_FILE"
} | msmtp "$EMAIL_TO"
echo "ERROR email sent successfully to $EMAIL_TO" | tee -a "$LOG_FILE"
else
echo "WARNING: msmtp not found - ERROR email NOT sent" | tee -a "$LOG_FILE"
fi
else
echo "Backup completed successfully with no errors" | tee -a "$LOG_FILE"
# Send success recap email
echo "Sending SUCCESS recap email to $EMAIL_TO" | tee -a "$LOG_FILE"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Backup${BACKUP_SERVER} $DATE_SUBJECT"
echo ""
cat "$RECAP_FILE"
} | msmtp "$EMAIL_TO"
echo "SUCCESS recap email sent successfully to $EMAIL_TO" | tee -a "$LOG_FILE"
else
echo "WARNING: msmtp not found - SUCCESS recap email NOT sent" | tee -a "$LOG_FILE"
fi
fi
# Clean up recap file
rm -f "$RECAP_FILE"
# Exit with error code if there were errors
if [[ $ERROR_COUNT -gt 0 ]]; then
exit 1
fi
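The `backup_database` function called above is not shown in this diff, but the decrypt tool further down expects its output to be `dump | gzip | openssl enc`. A hypothetical sketch of that encrypt side (the key, file names, and dump contents here are illustrative, not the script's real values), with a round-trip check against the exact decrypt pipeline:

```shell
# Hypothetical encrypt side matching the decrypt pipeline used by the tooling:
# gzip, then AES-256-CBC with PBKDF2 key derivation.
ENC_KEY="example-key"                      # in production: ENC_KEY=$(cat "$enc_key")
printf 'CREATE TABLE t (id INT);\n' > /tmp/demo.sql

# Encrypt: compress first, then encrypt the compressed stream
gzip -c /tmp/demo.sql | \
  openssl enc -aes-256-cbc -salt -pass pass:"$ENC_KEY" -pbkdf2 \
  > /tmp/demo.sql.gz.enc

# Round trip with the same openssl/gunzip pipeline the decrypt tool uses
openssl enc -aes-256-cbc -d -salt -pass pass:"$ENC_KEY" -pbkdf2 \
  -in /tmp/demo.sql.gz.enc | gunzip > /tmp/demo_out.sql
cmp -s /tmp/demo.sql /tmp/demo_out.sql && echo "round-trip OK"
```

Compressing before encrypting matters: encrypted data is effectively random and would not compress afterwards.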


@@ -1,112 +0,0 @@
# Configuration for MariaDB and directories backup
# Backup structure: $dir_backup/$hostname/$containername/ for dirs
# $dir_backup/$hostname/$containername/sql/ for databases
# Global parameters
global:
  backup_server: PM7               # Backup server name (PM7, PM1, etc.)
  email_to: support@unikoffice.com # Notification email
  dir_backup: /var/pierre/back     # Base backup directory
  enc_key: /home/pierre/.key_enc   # Encryption key for SQL backups
  keep_dirs: 7d                    # Keep dirs for 7 days
  keep_db: 5d,3w,15m               # 5 full days, 3 weeks (1/day), 15 months (1/week)
# Hosts configuration
hosts:
  - name: IN2
    ip: 145.239.9.105
    user: debian
    key: /home/pierre/.ssh/backup_key
    port: 22
    dirs:
      - /etc/nginx
    containers:
      - name: nx4
        db_user: root
        db_pass: MyDebServer,90b
        db_host: localhost
        dirs:
          - /etc/nginx
          - /var/www
        databases:
          - ALL # Backup all databases
        onlydb: # Used only with -onlydb parameter (optional)
          - turing
  - name: IN3
    ip: 195.154.80.116
    user: pierre
    key: /home/pierre/.ssh/backup_key
    port: 22
    dirs:
      - /etc/nginx
    containers:
      - name: nx4
        db_user: root
        db_pass: MyAlpLocal,90b
        db_host: localhost
        dirs:
          - /etc/nginx
          - /var/www
        databases:
          - ALL # Backup all databases
        onlydb: # Used only with -onlydb parameter (optional)
          - geosector
      - name: rca-geo
        dirs:
          - /etc/nginx
          - /var/www
      - name: dva-res
        db_user: root
        db_pass: MyAlpineDb.90b
        db_host: localhost
        dirs:
          - /etc/nginx
          - /var/www
        databases:
          - ALL
        onlydb:
          - resalice
      - name: dva-front
        dirs:
          - /etc/nginx
          - /var/www
      - name: maria3
        db_user: root
        db_pass: MyAlpLocal,90b
        db_host: localhost
        dirs:
          - /etc/my.cnf.d
          - /var/osm
          - /var/log
        databases:
          - ALL
        onlydb:
          - cleo
          - rca_geo
  - name: IN4
    ip: 51.159.7.190
    user: pierre
    key: /home/pierre/.ssh/backup_key
    port: 22
    dirs:
      - /etc/nginx
    containers:
      - name: maria4
        db_user: root
        db_pass: MyAlpLocal,90b
        db_host: localhost
        dirs:
          - /etc/my.cnf.d
          - /var/osm
          - /var/log
        databases:
          - ALL
        onlydb:
          - cleo
          - pra_geo
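The `keep_db: 5d,3w,15m` retention spec is consumed by `cleanup_old_backups`, whose implementation is not shown in this diff. A minimal hypothetical sketch of how such a comma-separated spec could be split into its three tiers (variable names are illustrative):

```shell
# Hypothetical parser for a "5d,3w,15m" retention spec:
# 5 full daily backups, then one per day for 3 weeks, then one per week for 15 months.
KEEP_DB="5d,3w,15m"
IFS=',' read -r full daily weekly <<< "$KEEP_DB"
echo "keep all backups for ${full%d} days"
echo "keep one per day for ${daily%w} weeks"
echo "keep one per week for ${weekly%m} months"
```

The `${var%suffix}` expansions strip the trailing unit letter, leaving the bare numbers for date arithmetic.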


@@ -1,118 +0,0 @@
#!/bin/bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
CONFIG_FILE="backpm7.yaml"
# Check if file argument is provided
if [ $# -eq 0 ]; then
echo -e "${RED}Error: No input file specified${NC}"
echo "Usage: $0 <database.sql.gz.enc>"
echo "Example: $0 wordpress_20250905_14.sql.gz.enc"
exit 1
fi
INPUT_FILE="$1"
# Check if input file exists
if [ ! -f "$INPUT_FILE" ]; then
echo -e "${RED}Error: File not found: $INPUT_FILE${NC}"
exit 1
fi
# Function to load encryption key from config
load_key_from_config() {
if [ ! -f "$CONFIG_FILE" ]; then
echo -e "${YELLOW}Warning: $CONFIG_FILE not found${NC}"
return 1
fi
# Check for yq
if ! command -v yq &> /dev/null; then
echo -e "${RED}Error: yq is required to read config file${NC}"
echo "Install with: sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 && sudo chmod +x /usr/local/bin/yq"
return 1
fi
local key_path=$(yq '.global.enc_key' "$CONFIG_FILE" | tr -d '"')
if [ -z "$key_path" ]; then
echo -e "${RED}Error: enc_key not found in $CONFIG_FILE${NC}"
return 1
fi
if [ ! -f "$key_path" ]; then
echo -e "${RED}Error: Encryption key file not found: $key_path${NC}"
return 1
fi
ENC_KEY=$(cat "$key_path")
echo -e "${GREEN}Encryption key loaded from: $key_path${NC}"
return 0
}
# Check file type early - accept both old and new naming
if [[ "$INPUT_FILE" != *.sql.gz.enc ]] && [[ "$INPUT_FILE" != *.sql.tar.gz.enc ]]; then
echo -e "${RED}Error: File must be a .sql.gz.enc or .sql.tar.gz.enc file${NC}"
echo "This tool only decrypts SQL backup files created by backpm7.sh"
exit 1
fi
# Get encryption key from config
if ! load_key_from_config; then
echo -e "${RED}Error: Cannot load encryption key${NC}"
echo "Make sure $CONFIG_FILE exists and contains enc_key path"
exit 1
fi
# Process SQL backup file
echo -e "${BLUE}Decrypting SQL backup: $INPUT_FILE${NC}"
# Determine output file - extract just the filename and put in current directory
BASENAME=$(basename "$INPUT_FILE")
if [[ "$BASENAME" == *.sql.tar.gz.enc ]]; then
OUTPUT_FILE="${BASENAME%.sql.tar.gz.enc}.sql"
else
OUTPUT_FILE="${BASENAME%.sql.gz.enc}.sql"
fi
# Decrypt and decompress in one pipeline
echo "Decrypting to: $OUTPUT_FILE"
if openssl enc -aes-256-cbc -d -salt -pass pass:"$ENC_KEY" -pbkdf2 -in "$INPUT_FILE" | gunzip > "$OUTPUT_FILE" 2>/dev/null; then
# Get file size
size=$(du -h "$OUTPUT_FILE" | cut -f1)
echo -e "${GREEN}✓ Successfully decrypted: $OUTPUT_FILE ($size)${NC}"
# Show first few lines of SQL
echo -e "${BLUE}First 5 lines of SQL:${NC}"
head -n 5 "$OUTPUT_FILE"
else
echo -e "${RED}✗ Decryption failed${NC}"
echo "Possible causes:"
echo " - Wrong encryption key"
echo " - Corrupted file"
echo " - File was encrypted differently"
# Try to help debug
echo -e "\n${YELLOW}Debug info:${NC}"
echo "File size: $(du -h "$INPUT_FILE" | cut -f1)"
echo "First bytes (should start with 'Salted__'):"
hexdump -C "$INPUT_FILE" | head -n 1
# Also show which key is in use (first 10 chars)
echo "Key begins with: ${ENC_KEY:0:10}..."
exit 1
fi
echo -e "${GREEN}Operation completed successfully${NC}"


@@ -1,248 +0,0 @@
#!/bin/bash
#
# sync_geosector.sh - Syncs geosector backups from PM7 to maria3 (IN3) and maria4 (IN4)
#
# This script:
# 1. Finds the latest encrypted geosector backup on PM7
# 2. Decrypts and decompresses it locally
# 3. Transfers and imports it into IN3/maria3/geosector
# 4. Transfers and imports it into IN4/maria4/geosector
#
# Installation: /var/pierre/bat/sync_geosector.sh
# Usage: ./sync_geosector.sh [--force] [--date YYYYMMDD_HH]
#
set -uo pipefail
# Note: Removed -e to allow script to continue on sync errors
# Errors are handled explicitly with ERROR_COUNT
# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="$SCRIPT_DIR/d6back.yaml"
BACKUP_DIR="/var/pierre/back/IN3/nx4/sql"
ENC_KEY_FILE="/home/pierre/.key_enc"
SSH_KEY="/home/pierre/.ssh/backup_key"
TEMP_DIR="/tmp/geosector_sync"
LOG_FILE="/var/pierre/bat/logs/sync_geosector.log"
RECAP_FILE="/tmp/sync_geosector_recap_$$.txt"
# Load email config from d6back.yaml
if [[ -f "$CONFIG_FILE" ]]; then
EMAIL_TO=$(yq '.global.email_to // "support@unikoffice.com"' "$CONFIG_FILE" | tr -d '"')
BACKUP_SERVER=$(yq '.global.backup_server // "BACKUP"' "$CONFIG_FILE" | tr -d '"')
else
EMAIL_TO="support@unikoffice.com"
BACKUP_SERVER="BACKUP"
fi
# Target servers
IN3_HOST="195.154.80.116"
IN3_USER="pierre"
IN3_CONTAINER="maria3"
IN4_HOST="51.159.7.190"
IN4_USER="pierre"
IN4_CONTAINER="maria4"
# MariaDB credentials
DB_USER="root"
IN3_DB_PASS="MyAlpLocal,90b" # maria3
IN4_DB_PASS="MyAlpLocal,90b" # maria4
DB_NAME="geosector"
# Utility functions
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}
error() {
log "ERROR: $*"
exit 1
}
cleanup() {
if [[ -d "$TEMP_DIR" ]]; then
log "Nettoyage de $TEMP_DIR"
rm -rf "$TEMP_DIR"
fi
rm -f "$RECAP_FILE"
}
trap cleanup EXIT
# Read the encryption key
if [[ ! -f "$ENC_KEY_FILE" ]]; then
error "Encryption key not found: $ENC_KEY_FILE"
fi
ENC_KEY=$(cat "$ENC_KEY_FILE")
# Parse arguments
FORCE=0
SPECIFIC_DATE=""
while [[ $# -gt 0 ]]; do
case $1 in
--force)
FORCE=1
shift
;;
--date)
SPECIFIC_DATE="$2"
shift 2
;;
*)
echo "Usage: $0 [--force] [--date YYYYMMDD_HH]"
exit 1
;;
esac
done
# Find the backup file
if [[ -n "$SPECIFIC_DATE" ]]; then
BACKUP_FILE="$BACKUP_DIR/geosector_${SPECIFIC_DATE}.sql.gz.enc"
if [[ ! -f "$BACKUP_FILE" ]]; then
error "Backup not found: $BACKUP_FILE"
fi
else
# Find the most recent one
BACKUP_FILE=$(find "$BACKUP_DIR" -name "geosector_*.sql.gz.enc" -type f -printf '%T@ %p\n' | sort -rn | head -1 | cut -d' ' -f2-)
if [[ -z "$BACKUP_FILE" ]]; then
error "No geosector backup found in $BACKUP_DIR"
fi
fi
BACKUP_BASENAME=$(basename "$BACKUP_FILE")
log "Selected backup: $BACKUP_BASENAME"
# Initialize the recap file
echo "SYNC GEOSECTOR REPORT - $(hostname) - $(date '+%d.%m.%Y %H')h" > "$RECAP_FILE"
echo "========================================" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
echo "Backup source: $BACKUP_BASENAME" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
# Create the temporary directory
mkdir -p "$TEMP_DIR"
DECRYPTED_FILE="$TEMP_DIR/geosector.sql"
# Step 1: decrypt and decompress
log "Decrypting and decompressing the backup..."
if ! openssl enc -aes-256-cbc -d -pass pass:"$ENC_KEY" -pbkdf2 -in "$BACKUP_FILE" | gunzip > "$DECRYPTED_FILE"; then
error "Decryption/decompression failed"
fi
FILE_SIZE=$(du -h "$DECRYPTED_FILE" | cut -f1)
log "Fichier SQL déchiffré: $FILE_SIZE"
echo "Decrypted SQL size: $FILE_SIZE" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
# Error counter
ERROR_COUNT=0
# Function to sync to one target server
sync_to_server() {
local HOST=$1
local USER=$2
local CONTAINER=$3
local DB_PASS=$4
local SERVER_NAME=$5
log "=== Synchronisation vers $SERVER_NAME ($HOST) ==="
echo "TARGET: $SERVER_NAME ($HOST/$CONTAINER)" >> "$RECAP_FILE"
# Test de connexion SSH
if ! ssh -i "$SSH_KEY" -o ConnectTimeout=10 "$USER@$HOST" "echo 'SSH OK'" &>/dev/null; then
log "ERROR: Impossible de se connecter à $HOST via SSH"
echo " ✗ SSH connection FAILED" >> "$RECAP_FILE"
((ERROR_COUNT++))
return 1
fi
# Import dans MariaDB
log "Import dans $SERVER_NAME/$CONTAINER/geosector..."
# Drop et recréer la base sur le serveur distant
if ! ssh -i "$SSH_KEY" "$USER@$HOST" "incus exec $CONTAINER --project default -- mariadb -u root -p'$DB_PASS' -e 'DROP DATABASE IF EXISTS $DB_NAME; CREATE DATABASE $DB_NAME CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;'"; then
log "ERROR: Échec de la création de la base sur $SERVER_NAME"
echo " ✗ Database creation FAILED" >> "$RECAP_FILE"
((ERROR_COUNT++))
return 1
fi
# Filter and import the SQL (strip timestamped CREATE DATABASE and USE statements)
log "Filtering and importing SQL..."
if ! sed -e '/^CREATE DATABASE.*geosector_[0-9]/d' \
-e '/^USE.*geosector_[0-9]/d' \
"$DECRYPTED_FILE" | \
ssh -i "$SSH_KEY" "$USER@$HOST" "incus exec $CONTAINER --project default -- mariadb -u root -p'$DB_PASS' $DB_NAME"; then
log "ERROR: Échec de l'import sur $SERVER_NAME"
echo " ✗ SQL import FAILED" >> "$RECAP_FILE"
((ERROR_COUNT++))
return 1
fi
log "$SERVER_NAME: Import réussi"
echo " ✓ Import SUCCESS" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
}
# Sync to IN3/maria3
sync_to_server "$IN3_HOST" "$IN3_USER" "$IN3_CONTAINER" "$IN3_DB_PASS" "IN3/maria3"
# Sync to IN4/maria4
sync_to_server "$IN4_HOST" "$IN4_USER" "$IN4_CONTAINER" "$IN4_DB_PASS" "IN4/maria4"
# Finalize the recap
echo "========================================" >> "$RECAP_FILE"
echo "COMPLETED: $(date '+%d.%m.%Y %H:%M')" >> "$RECAP_FILE"
# Prepare the email subject with date
DATE_SUBJECT=$(date '+%d.%m.%Y %H')
# Send the recap email
if [[ $ERROR_COUNT -gt 0 ]]; then
log "Total errors: $ERROR_COUNT"
# Append errors to the recap
echo "" >> "$RECAP_FILE"
echo "ERRORS DETECTED: $ERROR_COUNT" >> "$RECAP_FILE"
echo "----------------------------" >> "$RECAP_FILE"
grep -i "ERROR" "$LOG_FILE" | tail -20 >> "$RECAP_FILE"
# Send email with ERROR in the subject
log "Sending ERROR email to $EMAIL_TO (Errors found: $ERROR_COUNT)"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Sync${BACKUP_SERVER} ERROR $DATE_SUBJECT"
echo ""
cat "$RECAP_FILE"
} | msmtp "$EMAIL_TO"
log "ERROR email sent successfully to $EMAIL_TO"
else
log "WARNING: msmtp not found - ERROR email NOT sent"
fi
log "=== Synchronisation terminée avec des erreurs ==="
exit 1
else
log "=== Synchronisation terminée avec succès ==="
log "Les bases geosector sur maria3 et maria4 sont à jour avec le backup $BACKUP_BASENAME"
# Envoyer email de succès
log "Sending SUCCESS recap email to $EMAIL_TO"
if command -v msmtp &> /dev/null; then
{
echo "To: $EMAIL_TO"
echo "Subject: Sync${BACKUP_SERVER} $DATE_SUBJECT"
echo ""
cat "$RECAP_FILE"
} | msmtp "$EMAIL_TO"
log "SUCCESS recap email sent successfully to $EMAIL_TO"
else
log "WARNING: msmtp not found - SUCCESS recap email NOT sent"
fi
exit 0
fi
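The sed filter in `sync_to_server` drops any timestamped `CREATE DATABASE`/`USE` statements from the dump so the data lands in the freshly created `geosector` database rather than a dated one. A standalone demonstration (file name and dump contents are illustrative):

```shell
# Demonstrate the filter: timestamped CREATE DATABASE / USE lines are dropped,
# ordinary statements pass through untouched.
cat > /tmp/dump_demo.sql <<'EOF'
CREATE DATABASE geosector_20250101;
USE geosector_20250101;
CREATE TABLE t (id INT);
EOF
sed -e '/^CREATE DATABASE.*geosector_[0-9]/d' \
    -e '/^USE.*geosector_[0-9]/d' /tmp/dump_demo.sql
# prints only: CREATE TABLE t (id INT);
```

The `[0-9]` in both patterns anchors the match to timestamped names, so a plain `USE geosector;` line would survive the filter.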

Mode change only, Normal file → Executable file (no content changes):
- api/TODO-API.md
- api/alter_table_geometry.sql
- api/config/nginx/pra-geo-http-only.conf
- api/config/nginx/pra-geo-production.conf
- api/config/whitelist_ip_cache.txt
- api/create_table_x_departements_contours.sql
- api/data/README.md


@@ -179,6 +179,14 @@ if [ "$SOURCE_TYPE" = "local_code" ]; then
--exclude='*.swp' \
--exclude='*.swo' \
--exclude='*~' \
--exclude='docs/*.geojson' \
--exclude='docs/*.sql' \
--exclude='docs/*.pdf' \
--exclude='composer.phar' \
--exclude='scripts/migration*' \
--exclude='scripts/php' \
--exclude='CLAUDE.md' \
--exclude='TODO-API.md' \
-czf "${ARCHIVE_PATH}" . 2>/dev/null || echo_error "Failed to create archive"
echo_info "Archive created: ${ARCHIVE_PATH}"
@@ -198,6 +206,16 @@ elif [ "$SOURCE_TYPE" = "remote_container" ]; then
--exclude='uploads' \
--exclude='sessions' \
--exclude='opendata' \
--exclude='docs/*.geojson' \
--exclude='docs/*.sql' \
--exclude='docs/*.pdf' \
--exclude='composer.phar' \
--exclude='scripts/migration*' \
--exclude='scripts/php' \
--exclude='CLAUDE.md' \
--exclude='TODO-API.md' \
--exclude='*.tar.gz' \
--exclude='vendor' \
-czf /tmp/${ARCHIVE_NAME} -C ${API_PATH} .
" || echo_error "Failed to create archive on remote"

Mode change only, Normal file → Executable file (no content changes):
- api/docs/API-SECURITY.md
- api/docs/CHAT_MODULE.md
- api/docs/CHK_USER_DELETE_PASS_INFO.md
- api/docs/DELETE_PASSAGE_PERMISSIONS.md
- api/docs/EVENTS-LOG.md
- api/docs/FIX_USER_CREATION_400_ERRORS.md
- api/docs/GESTION-SECTORS.md
- api/docs/INSTALL_FPDF.md
- api/docs/PLANNING-STRIPE-API.md
- api/docs/PREPA_PROD.md
- api/docs/SETUP_EMAIL_QUEUE_CRON.md
- api/docs/STRIPE-BACKEND-MIGRATION.md
- api/docs/STRIPE-TAP-TO-PAY-FLOW.md
- api/docs/STRIPE-TAP-TO-PAY-REQUIREMENTS.md
- api/docs/STRIPE_VERIF.md
- api/docs/UPLOAD-MIGRATION-RECAP.md
- api/docs/UPLOAD-REORGANIZATION.md
- api/docs/USERNAME_VALIDATION_CHANGES.md
- api/docs/_logo_recu.png (99 KiB, image unchanged)
- api/docs/_recu_template.pdf
- api/docs/contour-des-departements.geojson
- api/docs/create_table_user_devices.sql
- api/docs/departements_limitrophes.md
- api/docs/logrotate_email_queue.conf
- api/docs/nouvelles-routes-session-refresh.txt
- api/docs/recu_13718.pdf
- api/docs/recu_19500582.pdf
- api/docs/recu_19500586.pdf
- api/docs/recu_537254062.pdf
- api/docs/recu_972506460.pdf
- api/docs/traite_batiments.sql
- api/docs/x_departements_contours.sql
- api/docs/x_departements_contours_corrected.sql
- api/docs/x_departements_contours_fixed.sql
- api/migration_add_departements_contours.sql
- api/migration_add_sectors_adresses.sql
- api/migrations/add_dept_limitrophes.sql
- api/migrations/integrate_contours_to_departements.sql
- api/migrations/update_all_dept_limitrophes.sql

Submodule api/ralph added at 098579b5a1

Mode change only, Normal file → Executable file (no content changes):
- api/scripts/CORRECTIONS_MIGRATE.md
- api/scripts/MIGRATION_PATCH_INSTRUCTIONS.md
- api/scripts/README-migration.md
- api/scripts/check_geometry_validity.sql
- api/scripts/config/update_php_fpm_settings.sh
- api/scripts/create_addresses_users.sql
- api/scripts/create_addresses_users_by_env.sql
- api/scripts/cron/CRON.md
- api/scripts/cron/cleanup_security_data.php
- api/scripts/cron/process_email_queue_with_daily_log.sh
- api/scripts/cron/rotate_event_logs.php
- api/scripts/fix_geometry_for_spatial_index.sql

Some files were not shown because too many files have changed in this diff.