Compare commits: v3.5.2...92a69c978a (26 commits)

92a69c978a, 234c4eb2cc, fe19a56983, dadd0b69ca, a548ef8890, f597c9aeb5, 604294af96, 41a4505b4b, 6c8853e553, 4c2e809a35, 890da22329, 5ab03751e1, c1f23c4345, 5e255ebf5e, 206c76c7db, 599b9fcda0, 6a609fb467, 7763d02fae, b9672a6228, 4244b961fd, f3f1a9c5e8, 15a0f2d2be, 86a9a35594, e5ab857913, 2aa2706179, 41f1db1169
6  .gitignore  (vendored)
@@ -20,12 +20,6 @@ web/storage/*.key
 web/public/storage
 web/public/hot
 
-# API
-api/sessions/
-api/logs/
-api/uploads/
-api/vendor/
-
 # Général
 *.DS_Store
 *.log
608  MONITORING.md  (deleted)
@@ -1,608 +0,0 @@
# Monitoring Guide - Incus Container (PHP Web Application + MariaDB)

## Overview

This guide describes the essential metrics to monitor for an Incus container hosting a PHP web application with a mobile API and a MariaDB database.

---
## 1. System Resources

### CPU
**Why?** Identify load spikes and resource-hungry processes.

```bash
# Overall CPU usage of the container
# ("CPU utilisé" is the French-locale label in incus output; adjust to your locale)
incus info nx4 | grep "CPU utilisé"

# Per-process detail (inside the container)
top -bn1 | head -20
htop
```

**Metrics to watch:**
- Load average (should stay below the number of CPUs)
- % CPU per process (MariaDB, PHP-FPM, nginx)

**Alert thresholds:**
- ⚠️ Warning: load average > 70% of the CPU count
- 🚨 Critical: load average > 150% of the CPU count for >5 min
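The load-versus-CPU-count check above can be scripted; a minimal sketch, where the 70%/150% cutoffs mirror the thresholds above and the output labels are illustrative:

```bash
#!/bin/sh
# Compare the 1-minute load average to the CPU count
# (warning above 70%, critical above 150%, as in the thresholds above).
CPUS=$(nproc)
LOAD=$(awk '{print $1}' /proc/loadavg)

awk -v load="$LOAD" -v cpus="$CPUS" 'BEGIN {
    pct = (load / cpus) * 100
    printf "Load %.2f on %d CPU(s): %.0f%% of capacity\n", load, cpus, pct
    if (pct > 150)      print "CRITICAL"
    else if (pct > 70)  print "WARNING"
    else                print "OK"
}'
```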
---
### RAM
**Why?** Avoid OOM (Out of Memory) kills of your processes.

```bash
# Global view from the host
# ("Mémoire" is the French-locale label in incus output; adjust to your locale)
incus info nx4 | grep "Mémoire"

# Detail inside the container
free -h
ps aux --sort=-%mem | head -10
```

**Metrics to watch:**
- RAM used / total
- Swap used (should stay minimal)
- Top memory consumers

**Alert thresholds:**
- ⚠️ Warning: RAM > 85%
- 🚨 Critical: RAM > 95% or swap > 1GB
---
### Disk I/O
**Why?** MariaDB is very sensitive to slow disks.

```bash
# On the host
iostat -x 2 5

# Inside the container
iostat -x 1 3
iotop -oa
```

**Metrics to watch:**
- Disk latency (await)
- IOPS (r/s, w/s)
- Disk utilization (%)

**Alert thresholds:**
- ⚠️ Warning: await > 50ms
- 🚨 Critical: await > 200ms or %util > 90%
---
### Disk Space
**Why?** MariaDB can no longer write once the disk is full.

```bash
df -h
du -sh /var/lib/mysql
```

**Alert thresholds:**
- ⚠️ Warning: > 80% used
- 🚨 Critical: > 90% used
---
## 2. PHP-FPM (Web Application)

### Worker Pool
**Why?** Cause #1 of timeouts and service outages (your case!)

```bash
# Number of active workers (excluding the grep process itself)
ps aux | grep "php-fpm: pool www" | grep -v grep | wc -l

# Pool configuration
grep "^pm" /etc/php/8.3/fpm/pool.d/www.conf

# Alert logs
tail -f /var/log/php8.3-fpm.log | grep "max_children"
```

**Critical metrics:**
- Active workers vs `pm.max_children`
- "max_children reached" warnings
- Slow requests (>2s)

**Alert thresholds:**
- ⚠️ Warning: active workers > 80% of max_children
- 🚨 Critical: a "max_children reached" warning appears

**Recommended configuration for your case:**
```ini
pm = dynamic
pm.max_children = 50-100        ; pick one value based on available RAM
pm.start_servers = 10-20
pm.min_spare_servers = 5-10
pm.max_spare_servers = 20-35
pm.max_requests = 500
```
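A common way to pick a concrete `pm.max_children` value is to divide the RAM you reserve for PHP by the average worker footprint. A minimal sketch of that heuristic (not an official formula; `PHP_RAM_MB` and the `php-fpm8.3` process name are assumptions to adjust):

```bash
#!/bin/sh
# Rough sizing helper for pm.max_children (heuristic, not an official formula):
#   max_children ≈ RAM dedicated to PHP-FPM / average worker RSS
PHP_RAM_MB=4096   # assumption: RAM you are willing to dedicate to PHP-FPM

# Average resident memory of the current php-fpm workers, in MB
# (process name "php-fpm8.3" is an assumption; check with `ps aux`)
AVG_RSS_MB=$(ps -o rss= -C php-fpm8.3 | awk '{sum+=$1; n++} END {if (n) printf "%d", sum/n/1024; else print 0}')

if [ "$AVG_RSS_MB" -gt 0 ]; then
    echo "Average worker RSS: ${AVG_RSS_MB} MB"
    echo "Suggested pm.max_children: $(( PHP_RAM_MB / AVG_RSS_MB ))"
else
    echo "No php-fpm8.3 process found"
fi
```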
---
### PHP Response Time
**Why?** Slow scripts mean blocked workers.

```bash
# Enable the PHP-FPM slow log
# In /etc/php/8.3/fpm/pool.d/www.conf:
slowlog = /var/log/php8.3-fpm-slow.log
request_slowlog_timeout = 3s

# Watch the slow requests
tail -f /var/log/php8.3-fpm-slow.log
```

**Alert thresholds:**
- ⚠️ Warning: requests > 3s
- 🚨 Critical: requests > 10s or frequent timeouts
---
## 3. MariaDB / MySQL

### Connections
**Why?** Too many connections and new ones get refused.

```bash
mysql -e "SHOW STATUS LIKE 'Threads_connected';"
mysql -e "SHOW STATUS LIKE 'Max_used_connections';"
mysql -e "SHOW VARIABLES LIKE 'max_connections';"
mysql -e "SHOW FULL PROCESSLIST;"
```

**Critical metrics:**
- Active connections vs max_connections
- Waiting / blocked connections
- Long-running queries (>5s)

**Alert thresholds:**
- ⚠️ Warning: connections > 80% of max_connections
- 🚨 Critical: connections = max_connections

**Recommended config:**
```ini
max_connections = 200-500        ; pick one value based on your traffic
```
---
### Slow Queries
**Why?** Slow queries mean blocked PHP workers.

```bash
# Enable the slow query log
mysql -e "SET GLOBAL slow_query_log = 'ON';"
mysql -e "SET GLOBAL long_query_time = 2;"
mysql -e "SET GLOBAL log_queries_not_using_indexes = 'ON';"

# Watch the slow queries
tail -f /var/lib/mysql/slow-query.log
# or
mysqldumpslow /var/lib/mysql/slow-query.log
```

**Alert thresholds:**
- ⚠️ Warning: queries > 2s
- 🚨 Critical: queries > 10s or >50 slow queries/min
---
### InnoDB Buffer Pool
**Why?** If it is too small, many reads hit the disk (slow).

```bash
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
mysql -e "SHOW STATUS LIKE 'Innodb_buffer_pool_read%';"
```

**Critical metrics:**
- Buffer pool hit ratio (should be >99%)
- Read requests vs reads from disk

**Recommended config:**
- `innodb_buffer_pool_size` = 70-80% of the RAM dedicated to MySQL
- For 20GB of data: `innodb_buffer_pool_size = 16G`
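The hit ratio is not printed directly by MariaDB; it can be derived from the two counters shown above. A minimal sketch, where the sample numbers are placeholders to be replaced with the values from `SHOW STATUS`:

```bash
#!/bin/sh
# Buffer pool hit ratio =
#   (1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests) * 100
# Placeholder values; in practice take them from:
#   mysql -e "SHOW STATUS LIKE 'Innodb_buffer_pool_read%';"
READ_REQUESTS=1000000   # logical read requests (served from memory or disk)
DISK_READS=5000         # reads that had to go to disk

HIT_RATIO=$(awk -v r="$READ_REQUESTS" -v d="$DISK_READS" 'BEGIN { printf "%.2f", (1 - d / r) * 100 }')
echo "Buffer pool hit ratio: ${HIT_RATIO}%  (target: >99%)"
```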
---
### Deadlocks and Locks
**Why?** They can cause timeouts.

```bash
mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A 50 "LATEST DETECTED DEADLOCK"
mysql -e "SHOW OPEN TABLES WHERE In_use > 0;"
```

**Alert thresholds:**
- ⚠️ Warning: >1 deadlock/hour
- 🚨 Critical: tables locked >30s
---
## 4. Nginx / Web Server

### Connections and Errors
**Why?** Identify 502/504 responses (backend timeouts).

```bash
# Active connections
netstat -an | grep :80 | wc -l
netstat -an | grep :443 | wc -l

# Recent errors
tail -100 /var/log/nginx/error.log | grep -E "502|504|timeout"
tail -100 /var/log/nginx/access.log | grep -E " 502 | 504 "
```

**Critical metrics:**
- 502 Bad Gateway errors (PHP-FPM down)
- 504 Gateway Timeout errors (PHP-FPM too slow)
- Active connections

**Alert thresholds:**
- ⚠️ Warning: >5 502/504 errors in 5 min
- 🚨 Critical: >20 502/504 errors in 5 min
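To put a number against the thresholds above, the 502/504 responses can be counted from the access log. A minimal sketch, assuming nginx's default combined log format (status code as the 9th whitespace-separated field) and the default Debian log path:

```bash
#!/bin/sh
# Count 502/504 responses in the last 1000 access-log lines.
# Assumes the default combined log format (status code = field 9).
ERRORS=$(tail -1000 /var/log/nginx/access.log 2>/dev/null \
    | awk '$9 == 502 || $9 == 504' | wc -l)
echo "502/504 in the last 1000 requests: $ERRORS"
```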
---
### Worker Connections
**Why?** This caps the number of simultaneous connections.

```bash
# nginx config
grep worker_connections /etc/nginx/nginx.conf
ps aux | grep nginx | wc -l
```

**Recommended config:**
```nginx
worker_processes auto;
worker_connections 2048;
```
---
## 5. Network

### Bandwidth
**Why?** Identify traffic spikes.

```bash
# From the host
incus info nx4 | grep -A 10 "eth0:"

# Inside the container
iftop -i eth0
vnstat -i eth0
```

**Metrics:**
- Bytes received/sent
- Packets received/sent
- Network errors
---
### TCP Connections
**Why?** Too many TIME_WAIT connections signal a problem.

```bash
netstat -an | grep -E "ESTABLISHED|TIME_WAIT|CLOSE_WAIT" | wc -l
ss -s
```

**Alert thresholds:**
- ⚠️ Warning: >1000 TIME_WAIT connections
- 🚨 Critical: >5000 TIME_WAIT connections
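A sketch of a TIME_WAIT check using `ss` state filters (modern replacement for `netstat`), applying the thresholds above:

```bash
#!/bin/sh
# Count TIME_WAIT sockets and apply the thresholds above
# (>1000 warning, >5000 critical).
TW=$(ss -tan state time-wait 2>/dev/null | tail -n +2 | wc -l)
echo "TIME_WAIT sockets: $TW"
if [ "$TW" -gt 5000 ]; then
    echo "CRITICAL"
elif [ "$TW" -gt 1000 ]; then
    echo "WARNING"
fi
```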
---
## 6. Logs and Events

### System Logs
```bash
# Debian container
journalctl -u nginx -n 100
journalctl -u php8.3-fpm -n 100
journalctl -u mariadb -n 100

# Alpine container (no systemd)
tail -100 /var/log/messages
```

### Application Logs
```bash
# PHP errors
tail -100 /var/log/php8.3-fpm.log
tail -100 /var/www/html/logs/*.log

# Nginx
tail -100 /var/log/nginx/error.log
tail -100 /var/log/nginx/access.log
```
---
## 7. Automated Monitoring Scripts

### Global Monitoring Script

Create `/root/monitor.sh`:

```bash
#!/bin/bash

LOG_FILE="/var/log/system-monitor.log"
ALERT_FILE="/var/log/alerts.log"

echo "=== Monitoring $(date) ===" >> $LOG_FILE

# 1. CPU
LOAD=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}')
echo "Load: $LOAD" >> $LOG_FILE

# 2. RAM
RAM_PERCENT=$(free | awk '/Mem:/ {printf "%.1f", ($3/$2)*100}')
echo "RAM: ${RAM_PERCENT}%" >> $LOG_FILE
if (( $(echo "$RAM_PERCENT > 85" | bc -l) )); then
    echo "[WARNING] RAM > 85%: ${RAM_PERCENT}%" >> $ALERT_FILE
fi

# 3. PHP-FPM workers (exclude the grep process itself)
PHP_WORKERS=$(ps aux | grep "php-fpm: pool www" | grep -v grep | wc -l)
PHP_MAX=$(grep "^pm.max_children" /etc/php/8.3/fpm/pool.d/www.conf | awk '{print $3}')
echo "PHP Workers: $PHP_WORKERS / $PHP_MAX" >> $LOG_FILE
# Integer 80% threshold (bc integer division keeps [ -gt ] happy)
if [ "$PHP_WORKERS" -gt "$(echo "$PHP_MAX * 8 / 10" | bc)" ]; then
    echo "[WARNING] PHP Workers > 80%: $PHP_WORKERS / $PHP_MAX" >> $ALERT_FILE
fi

# 4. MySQL connections
MYSQL_CONN=$(mysql -e "SHOW STATUS LIKE 'Threads_connected';" | awk 'NR==2 {print $2}')
MYSQL_MAX=$(mysql -e "SHOW VARIABLES LIKE 'max_connections';" | awk 'NR==2 {print $2}')
echo "MySQL Connections: $MYSQL_CONN / $MYSQL_MAX" >> $LOG_FILE
if [ "$MYSQL_CONN" -gt "$(echo "$MYSQL_MAX * 8 / 10" | bc)" ]; then
    echo "[WARNING] MySQL Connections > 80%: $MYSQL_CONN / $MYSQL_MAX" >> $ALERT_FILE
fi

# 5. Disk
DISK_PERCENT=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
echo "Disk: ${DISK_PERCENT}%" >> $LOG_FILE
if [ "$DISK_PERCENT" -gt 80 ]; then
    echo "[WARNING] Disk > 80%: ${DISK_PERCENT}%" >> $ALERT_FILE
fi

# 6. nginx errors (count within the last 100 lines, not the whole file)
NGINX_ERRORS=$(tail -100 /var/log/nginx/error.log | grep -c "error")
if [ "$NGINX_ERRORS" -gt 10 ]; then
    echo "[WARNING] Nginx errors: $NGINX_ERRORS in last 100 lines" >> $ALERT_FILE
fi

echo "" >> $LOG_FILE
```

**Installation:**
```bash
chmod +x /root/monitor.sh

# Run every 5 minutes
(crontab -l 2>/dev/null; echo "*/5 * * * * /root/monitor.sh") | crontab -
```
---
### PHP-FPM-Specific Monitoring Script

Create `/root/monitor-php-fpm.sh`:

```bash
#!/bin/bash

LOG="/var/log/php-fpm-monitor.log"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')

# Count the workers (excluding the grep process itself)
WORKERS=$(ps aux | grep 'php-fpm: pool www' | grep -v grep | wc -l)
MAX_CHILDREN=$(grep '^pm.max_children' /etc/php/8.3/fpm/pool.d/www.conf | awk '{print $3}')
PERCENT=$(echo "scale=1; ($WORKERS / $MAX_CHILDREN) * 100" | bc)

# Log format: timestamp,workers,max,percentage
echo "$TIMESTAMP,$WORKERS,$MAX_CHILDREN,$PERCENT%" >> $LOG

# Alert above 80%
if (( $(echo "$PERCENT > 80" | bc -l) )); then
    echo "[$TIMESTAMP] [ALERT] PHP Workers > 80%: $WORKERS / $MAX_CHILDREN ($PERCENT%)" >> /var/log/alerts.log
fi

# Check whether max_children was reached in the recent logs
if tail -10 /var/log/php8.3-fpm.log | grep -q "max_children"; then
    echo "[$TIMESTAMP] [CRITICAL] MAX_CHILDREN REACHED!" >> /var/log/alerts.log
    tail -3 /var/log/php8.3-fpm.log >> /var/log/alerts.log
fi
```

**Installation:**
```bash
chmod +x /root/monitor-php-fpm.sh

# Run every minute
(crontab -l 2>/dev/null; echo "* * * * * /root/monitor-php-fpm.sh") | crontab -
```

**Visualizing the data:**
```bash
# Show the last 60 minutes
tail -60 /var/log/php-fpm-monitor.log

# Plot the trend (requires gnuplot)
echo 'set datafile separator ","; plot "/var/log/php-fpm-monitor.log" using 2 with lines title "Workers"' | gnuplot -p

# Quick statistics
echo "Max workers last hour: $(tail -60 /var/log/php-fpm-monitor.log | cut -d',' -f2 | sort -n | tail -1)"
echo "Min workers last hour: $(tail -60 /var/log/php-fpm-monitor.log | cut -d',' -f2 | sort -n | head -1)"
echo "Avg workers last hour: $(tail -60 /var/log/php-fpm-monitor.log | cut -d',' -f2 | awk '{sum+=$1} END {print sum/NR}')"

# Recent alerts
tail -20 /var/log/alerts.log | grep "PHP Workers"
```

**Automatic log rotation:**
Create `/etc/logrotate.d/php-fpm-monitor`:
```
/var/log/php-fpm-monitor.log {
    daily
    rotate 30
    compress
    missingok
    notifempty
}
```
---
## 8. Automated Monitoring Solutions

### Option 1: Netdata (Recommended)
**Pros:** simple install, web UI, automatic service detection.

```bash
# Installation
bash <(curl -Ss https://get.netdata.cloud/kickstart.sh) --dont-wait

# Served at http://IP:19999
# Auto-detects: PHP-FPM, MariaDB, Nginx, system resources
```

**Auto-detected metrics:**
- ✅ CPU, RAM, I/O, network
- ✅ PHP-FPM (workers, slow requests)
- ✅ MariaDB (connections, queries, locks)
- ✅ Nginx (connections, errors)
---
### Option 2: Prometheus + Grafana
**Pros:** production-grade, long-term history, advanced alerting.

```bash
# Expose the Incus metrics
incus config set core.metrics_address :8443

# Install Prometheus + Grafana
# (more involved; see the official docs)
```
---
### Option 3: Scripts + Simple Monitoring

If you prefer to stay lightweight, monitor manually with:

```bash
# Real-time dashboard
watch -n 2 'echo "=== RESOURCES ===";
free -h | head -2;
echo "";
echo "=== PHP-FPM ===";
ps aux | grep "php-fpm: pool" | grep -v grep | wc -l;
echo "";
echo "=== MySQL ===";
mysql -e "SHOW STATUS LIKE \"Threads_connected\";" 2>/dev/null;
echo "";
echo "=== NGINX ===";
netstat -an | grep :80 | wc -l'
```
---
## 9. Diagnostic Checklist When Problems Occur

### Outages / Timeouts
1. ✅ **Check PHP-FPM**: `tail /var/log/php8.3-fpm.log | grep max_children`
2. ✅ **Check MySQL**: `mysql -e "SHOW PROCESSLIST;"`
3. ✅ **Check nginx**: `tail /var/log/nginx/error.log | grep -E "502|504"`
4. ✅ **Check RAM**: `free -h`
5. ✅ **Check I/O**: `iostat -x 1 5`

### Slowness
1. ✅ **MySQL slow queries**: `tail /var/lib/mysql/slow-query.log`
2. ✅ **Slow PHP**: `tail /var/log/php8.3-fpm-slow.log`
3. ✅ **CPU**: `top -bn1 | head -20`
4. ✅ **Disk I/O**: `iotop -oa`

### Crashes / 502 Errors
1. ✅ **PHP-FPM status**: `systemctl status php8.3-fpm`
2. ✅ **OOM Killer**: `dmesg | grep -i "out of memory"`
3. ✅ **PHP logs**: `tail -100 /var/log/php8.3-fpm.log`
---
## 10. Recommended Optimizations

### PHP-FPM
```ini
# /etc/php/8.3/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 50-100
pm.start_servers = 10-20
pm.min_spare_servers = 5-10
pm.max_spare_servers = 20-35
pm.max_requests = 500
request_slowlog_timeout = 3s
```

### MariaDB
```ini
# /etc/mysql/mariadb.conf.d/50-server.cnf
max_connections = 200-500
innodb_buffer_pool_size = 16G    # 70-80% of the RAM dedicated to MySQL
slow_query_log = 1
long_query_time = 2
log_queries_not_using_indexes = 1
```

### Nginx
```nginx
# /etc/nginx/nginx.conf (worker_connections goes in the events {} block)
worker_processes auto;
worker_connections 2048;
keepalive_timeout 30;
```
---
## Critical Thresholds Summary

| Metric | Warning | Critical |
|----------|---------|----------|
| **RAM** | >85% | >95% |
| **CPU Load** | >70% of CPU count | >150% of CPU count |
| **Disk I/O await** | >50ms | >200ms |
| **Disk Space** | >80% | >90% |
| **PHP Workers** | >80% of max | max_children reached |
| **MySQL Connections** | >80% of max | = max_connections |
| **Slow Queries** | >2s | >10s or >50/min |
| **Nginx 502/504** | >5 in 5 min | >20 in 5 min |
---
## Contacts and Escalation

In case of a critical incident:
1. Check the logs (`/var/log/alerts.log`)
2. Identify the bottleneck (CPU/RAM/PHP/MySQL)
3. Apply the appropriate fix
4. Document the incident
---
**Created:** 2025-10-18
**Last updated:** 2025-10-18
**Version:** 1.0
@@ -7,41 +7,41 @@
 ## 📅 MONDAY 25/08 - Preparation (4h)
 
 ### ✅ Stripe Platform account
-- [x] Create a Stripe account at https://dashboard.stripe.com/register
+- [ ] Create a Stripe account at https://dashboard.stripe.com/register
-- [x] Fill in the company information (SIRET, address, etc.)
+- [ ] Fill in the company information (SIRET, address, etc.)
-- [x] Check the confirmation email
+- [ ] Check the confirmation email
-- [x] Enable 2FA authentication
+- [ ] Enable 2FA authentication
 
 ### ✅ Product activation
-- [x] Enable Stripe Connect in Dashboard → Products
+- [ ] Enable Stripe Connect in Dashboard → Products
-- [x] Choose the "Express accounts" type for the amicales
+- [ ] Choose the "Express accounts" type for the amicales
-- [x] Enable Stripe Terminal in the Dashboard
+- [ ] Enable Stripe Terminal in the Dashboard
-- [x] Request "Tap to Pay on iPhone" access via the support form
+- [ ] Request "Tap to Pay on iPhone" access via the support form
 
 ### ✅ Initial configuration
-- [x] Set the platform fees (DECISION: 0% platform commission - 100% for the amicales)
+- [ ] Set the platform fees (suggestion: 2.5% or a fixed €0.50)
-- [x] Configure the payout settings (J+2 recommended)
+- [ ] Configure the payout settings (J+2 recommended)
-- [x] Add logo and branding for the Stripe pages
+- [ ] Add logo and branding for the Stripe pages
 
 ---
 
 ## 📅 TUESDAY 26/08 - Environment setup (2h)
 
 ### ✅ API keys and webhooks
-- [x] Retrieve the TEST keys (pk_test_... and sk_test_...)
+- [ ] Retrieve the TEST keys (pk_test_... and sk_test_...)
-- [x] Create the webhook endpoint: https://dapp.geosector.fr/api/stripe/webhook
+- [ ] Create the webhook endpoint: https://votreapi.com/webhooks/stripe
-- [x] Select the webhook events:
+- [ ] Select the webhook events:
   - `account.updated`
   - `account.application.authorized`
   - `payment_intent.succeeded`
   - `payment_intent.payment_failed`
   - `charge.dispute.created`
-- [x] Note the webhook signing secret (whsec_...)
+- [ ] Note the webhook signing secret (whsec_...)
 
 ### ✅ Documentation for the amicales
-- [x] Prepare the email template for the amicales
+- [ ] Prepare the email template for the amicales
-- [x] Create the "Activer les paiements CB" (enable card payments) PDF guide
+- [ ] Create the "Activer les paiements CB" (enable card payments) PDF guide
-- [x] List the required documents:
+- [ ] List the required documents:
   - Association bylaws
   - Bank details (RIB) with IBAN/BIC
   - ID of the person in charge
@@ -52,33 +52,33 @@
 ## 📅 WEDNESDAY 27/08 - Pilot amicale (3h)
 
 ### ✅ Onboarding the first amicale
-- [x] Contact the pilot amicale (Amicale ID: 5)
+- [ ] Contact the pilot amicale
-- [x] Create the Connect Express account via the API
+- [ ] Create the Connect Express account via the API or the Dashboard
-- [x] Send the onboarding link to the amicale
+- [ ] Send the onboarding link to the amicale
-- [x] Track progress in Dashboard → Connect → Accounts
+- [ ] Track progress in Dashboard → Connect → Accounts
-- [x] Verify the "Charges enabled" status
+- [ ] Verify the "Charges enabled" status
 
 ### ✅ Amicale account configuration
-- [x] Verify the bank details (IBAN)
+- [ ] Verify the bank details (IBAN)
-- [x] Configure email notifications
+- [ ] Configure email notifications
-- [x] Test a verification micro-transfer
+- [ ] Test a verification micro-transfer
-- [x] Note the account ID: acct_1S2YfNP63A07c33Y
+- [ ] Note the account ID: acct_...
 
 ---
 
 ## 📅 THURSDAY 28/08 - Payment tests (2h)
 
 ### ✅ Test Terminal configuration
-- [x] Create a test "Location" in Dashboard → Terminal (Location ID: tml_GLJ21w7KCYX4Wj)
+- [ ] Create a test "Location" in Dashboard → Terminal
-- [x] Generate a virtual test reader for the Simulator
+- [ ] Generate a virtual test reader for the Simulator
-- [x] Configure the test amounts (€10, €20, €30)
+- [ ] Configure the test amounts (€10, €20, €30)
 
 ### ✅ Test cards
-- [x] Prepare the list of test cards:
+- [ ] Prepare the list of test cards:
   - 4242 4242 4242 4242: success
   - 4000 0000 0000 9995: declined
   - 4000 0025 0000 3155: authentication required
-- [x] Document the test process for developers
+- [ ] Document the test process for developers
 
 ---
@@ -201,49 +201,4 @@ STRIPE_PLATFORM_ACCOUNT_ID=acct_...
 
 ---
 
-## 🎯 ACHIEVEMENTS REVIEW (01/09/2024)
-
-### ✅ KEY ACHIEVEMENTS
-1. **Full Stripe Connect integration**
-   - Working PHP 8.3 API with all endpoints
-   - Flutter interface for managing Stripe within the amicale
-   - Webhooks configured and tested
-2. **Pilot amicale account operational**
-   - Amicale ID: 5 with Stripe account: acct_1S2YfNP63A07c33Y
-   - Terminal Location created: tml_GLJ21w7KCYX4Wj
-   - Stripe onboarding completed successfully
-3. **0% platform commission configuration**
-   - 100% of payments go to the amicales
-   - Only standard Stripe fees apply (~1.4% + €0.25)
-   - UI updated to reflect this policy
-4. **Major technical fixes**
-   - Data decryption issues resolved
-   - nginx 502 errors fixed (debug logs removed)
-   - Database and API fully functional
-
-### 🔧 PROBLEMS SOLVED
-- **Error 500**: "Database not found" → Fixed
-- **Error 400**: "Invalid email address" → Fixed (decryption added)
-- **Error 502**: "upstream sent too big header" → Fixed (logs removed)
-- **Platform commission**: removed as requested (0%)
-- **UI messaging**: corrected to reflect "100% pour votre amicale"
-
-### 📊 WORKING APIs
-- ✅ POST /api/stripe/accounts - Account creation
-- ✅ GET /api/stripe/accounts/:id/status - Account status
-- ✅ POST /api/stripe/accounts/:id/onboarding-link - Onboarding link
-- ✅ POST /api/stripe/locations - Terminal location creation
-- ✅ POST /api/stripe/webhook - Event reception
-
-### 🎯 NEXT STEPS
-1. Real payment tests with Terminal
-2. Deployment to the staging environment
-3. Training of the pilot amicales
-4. Monitoring of the first payments
-
----
-
-*Document created 24/08/2024 - Last updated: 01/09/2024*
+*Document created 24/08/2024 - To be kept up to date daily*
(28 binary files changed; not shown — including __MACOSX/geo-app-20251014/ios/Pods/._Pods.xcodeproj, generated)
115  api/.vscode/settings.json  (vendored; Executable file → Normal file)
@@ -1,116 +1,4 @@
 {
-    "window.zoomLevel": 1, // Allows zooming in, handy when presenting
-
-    // Appearance
-    // -- Editor
-    "workbench.startupEditor": "none", // No busy welcome page
-    "editor.minimap.enabled": true, // Show the minimap
-    "editor.minimap.showSlider": "always",
-    "editor.minimap.size": "fill",
-    "editor.minimap.scale": 2,
-    "editor.tokenColorCustomizations": {
-        "textMateRules": [
-            {
-                "scope": ["storage.type.function", "storage.type.class"],
-                "settings": {
-                    "fontStyle": "bold",
-                    "foreground": "#4B9CD3"
-                }
-            }
-        ]
-    },
-    "editor.minimap.renderCharacters": true,
-    "editor.minimap.maxColumn": 120,
-    "breadcrumbs.enabled": false,
-    // -- Tabs
-    "workbench.editor.wrapTabs": true, // Show all the tabs
-    "workbench.editor.tabSizing": "shrink",
-    "workbench.editor.pinnedTabSizing": "compact",
-    "workbench.editor.enablePreview": false, // A single click opens the file
-
-    // -- Sidebar
-    "workbench.tree.indent": 15, // Wider indent for clarity in the sidebar
-    "workbench.tree.renderIndentGuides": "always",
-    // -- Code
-    "editor.occurrencesHighlight": "singleFile", // Highlight occurrences of a variable
-    "editor.renderWhitespace": "trailing", // Do not leave trailing whitespace
-    "editor.renderControlCharacters": true, // Show control characters
-    // Theme
-    "editor.fontFamily": "'JetBrains Mono', 'Fira Code', 'Operator Mono Lig', monospace",
-    "editor.fontLigatures": false,
-    "editor.fontSize": 13,
-    "editor.lineHeight": 22,
-    "editor.guides.bracketPairs": "active",
-
-    // Ergonomics
-    "editor.wordWrap": "off",
-    "editor.rulers": [],
-    "editor.suggest.insertMode": "replace", // Autocompletion replaces the current word
-    "editor.acceptSuggestionOnCommitCharacter": false, // Avoid accepting a suggestion on '.' for example
-    "editor.formatOnSave": true,
-    "editor.formatOnPaste": true,
-    "editor.linkedEditing": true, // Editing an HTML tag also updates its closing tag
-    "editor.tabSize": 2,
-    "editor.unicodeHighlight.nonBasicASCII": false,
-
-    "[php]": {
-        "editor.defaultFormatter": "bmewburn.vscode-intelephense-client",
-        "editor.formatOnSave": true,
-        "editor.formatOnPaste": true
-    },
-    "intelephense.format.braces": "k&r",
-    "intelephense.format.enable": true,
-
-    "[javascript]": {
-        "editor.defaultFormatter": "esbenp.prettier-vscode",
-        "editor.formatOnSave": true,
-        "editor.formatOnPaste": true
-    },
-    "prettier.printWidth": 360,
|
|
||||||
"prettier.semi": true,
|
|
||||||
"prettier.singleQuote": true,
|
|
||||||
"prettier.tabWidth": 2,
|
|
||||||
"prettier.trailingComma": "es5",
|
|
||||||
|
|
||||||
"explorer.autoReveal": false,
|
|
||||||
"explorer.confirmDragAndDrop": false,
|
|
||||||
"emmet.triggerExpansionOnTab": true,
|
|
||||||
"emmet.includeLanguages": {
|
|
||||||
"javascript": "javascriptreact"
|
|
||||||
},
|
|
||||||
"problems.decorations.enabled": true,
|
|
||||||
"explorer.decorations.colors": true,
|
|
||||||
"explorer.decorations.badges": true,
|
|
||||||
"php.validate.enable": true,
|
|
||||||
"php.suggest.basic": false,
|
|
||||||
"dart.analysisExcludedFolders": [],
|
|
||||||
"dart.enableSdkFormatter": true,
|
|
||||||
|
|
||||||
// Fichiers
|
|
||||||
"files.defaultLanguage": "markdown",
|
|
||||||
"files.autoSaveWorkspaceFilesOnly": true,
|
|
||||||
"files.exclude": {
|
|
||||||
"**/.idea": true
|
|
||||||
},
|
|
||||||
// Languages
|
|
||||||
"javascript.preferences.importModuleSpecifierEnding": "js",
|
|
||||||
"typescript.preferences.importModuleSpecifierEnding": "js",
|
|
||||||
|
|
||||||
// Extensions
|
|
||||||
"tailwindCSS.experimental.configFile": "web/tailwind.config.js",
|
|
||||||
"editor.quickSuggestions": {
|
|
||||||
"strings": true
|
|
||||||
},
|
|
||||||
|
|
||||||
"[svelte]": {
|
|
||||||
"editor.defaultFormatter": "svelte.svelte-vscode",
|
|
||||||
"editor.formatOnSave": true
|
|
||||||
},
|
|
||||||
"prettier.documentSelectors": ["**/*.svelte"],
|
|
||||||
"svelte.plugin.svelte.diagnostics.enable": false,
|
|
||||||
|
|
||||||
"js/ts.implicitProjectConfig.checkJs": false,
|
|
||||||
"svelte.enable-ts-plugin": false,
|
|
||||||
"workbench.colorCustomizations": {
|
"workbench.colorCustomizations": {
|
||||||
"activityBar.activeBackground": "#fa1b49",
|
"activityBar.activeBackground": "#fa1b49",
|
||||||
"activityBar.background": "#fa1b49",
|
"activityBar.background": "#fa1b49",
|
||||||
@@ -130,6 +18,5 @@
|
|||||||
"titleBar.inactiveBackground": "#dd053199",
|
"titleBar.inactiveBackground": "#dd053199",
|
||||||
"titleBar.inactiveForeground": "#e7e7e799"
|
"titleBar.inactiveForeground": "#e7e7e799"
|
||||||
},
|
},
|
||||||
"peacock.color": "#dd0531",
|
"peacock.color": "#dd0531"
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -14,17 +14,10 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co

 ## Build Commands
 - Install dependencies: `composer install` - install PHP dependencies
 - Update dependencies: `composer update` - update PHP dependencies to latest versions
-- Deploy to DEV: `./deploy-api.sh` - deploy local code to dva-geo on IN3 (195.154.80.116)
-- Deploy to REC: `./deploy-api.sh rca` - deploy from dva-geo to rca-geo on IN3
-- Deploy to PROD: `./deploy-api.sh pra` - deploy from rca-geo (IN3) to pra-geo (IN4)
+- Deploy to REC: `./livre-api.sh rec` - deploy from DVA to RECETTE environment
+- Deploy to PROD: `./livre-api.sh prod` - deploy from RECETTE to PRODUCTION environment
 - Export operations: `php export_operation.php` - export operations data

-## Development Environment
-- **DEV Container**: dva-geo on IN3 server (195.154.80.116)
-- **DEV API URL Public**: https://dapp.geosector.fr/api/
-- **DEV API URL Internal**: http://13.23.33.43/api/
-- **Access**: Via Incus container on IN3 server
-
 ## Code Architecture
 This is a PHP 8.3 API without framework, using a custom MVC-like architecture:
@@ -1,651 +0,0 @@
#!/bin/bash

set -uo pipefail
# Note: Removed -e to allow script to continue on errors
# Errors are handled explicitly with ERROR_COUNT

# Parse command line arguments
ONLY_DB=false
if [[ "${1:-}" == "-onlydb" ]]; then
    ONLY_DB=true
    echo "Mode: Database backup only"
fi

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="$SCRIPT_DIR/d6back.yaml"
LOG_DIR="$SCRIPT_DIR/logs"
mkdir -p "$LOG_DIR"
LOG_FILE="$LOG_DIR/d6back-$(date +%Y%m%d).log"
ERROR_COUNT=0
RECAP_FILE="/tmp/backup_recap_$$.txt"

# Lock file to prevent concurrent executions
LOCK_FILE="/var/lock/d6back.lock"
exec 200>"$LOCK_FILE"
if ! flock -n 200; then
    echo "ERROR: Another backup is already running" >&2
    exit 1
fi
trap 'flock -u 200' EXIT

# Clean old log files (keep only last 10)
find "$LOG_DIR" -maxdepth 1 -name "d6back-*.log" -type f 2>/dev/null | sort -r | tail -n +11 | xargs -r rm -f || true

# Check dependencies - COMMENTED OUT
# for cmd in yq ssh tar openssl; do
#     if ! command -v "$cmd" &> /dev/null; then
#         echo "ERROR: $cmd is required but not installed" | tee -a "$LOG_FILE"
#         exit 1
#     fi
# done

# Load config
DIR_BACKUP=$(yq '.global.dir_backup' "$CONFIG_FILE" | tr -d '"')
ENC_KEY_PATH=$(yq '.global.enc_key' "$CONFIG_FILE" | tr -d '"')
BACKUP_SERVER=$(yq '.global.backup_server // "BACKUP"' "$CONFIG_FILE" | tr -d '"')
EMAIL_TO=$(yq '.global.email_to // "support@unikoffice.com"' "$CONFIG_FILE" | tr -d '"')
KEEP_DIRS=$(yq '.global.keep_dirs' "$CONFIG_FILE" | tr -d '"')
KEEP_DB=$(yq '.global.keep_db' "$CONFIG_FILE" | tr -d '"')

# Load encryption key
if [[ ! -f "$ENC_KEY_PATH" ]]; then
    echo "ERROR: Encryption key not found: $ENC_KEY_PATH" | tee -a "$LOG_FILE"
    exit 1
fi
ENC_KEY=$(cat "$ENC_KEY_PATH")

echo "=== Backup Started $(date) ===" | tee -a "$LOG_FILE"
echo "Backup directory: $DIR_BACKUP" | tee -a "$LOG_FILE"

# Check available disk space
DISK_USAGE=$(df "$DIR_BACKUP" | tail -1 | awk '{print $5}' | sed 's/%//')
DISK_FREE=$((100 - DISK_USAGE))

if [[ $DISK_FREE -lt 20 ]]; then
    echo "WARNING: Low disk space! Only ${DISK_FREE}% free on backup partition" | tee -a "$LOG_FILE"

    # Send warning email
    echo "Sending DISK SPACE WARNING email to $EMAIL_TO (${DISK_FREE}% free)" | tee -a "$LOG_FILE"
    if command -v msmtp &> /dev/null; then
        {
            echo "To: $EMAIL_TO"
            echo "Subject: Backup${BACKUP_SERVER} WARNING - Low disk space (${DISK_FREE}% free)"
            echo ""
            echo "WARNING: Low disk space on $(hostname)"
            echo ""
            echo "Backup directory: $DIR_BACKUP"
            echo "Disk usage: ${DISK_USAGE}%"
            echo "Free space: ${DISK_FREE}%"
            echo ""
            echo "The backup will continue but please free up some space soon."
            echo ""
            echo "Date: $(date '+%d.%m.%Y %H:%M')"
        } | msmtp "$EMAIL_TO"
        echo "DISK SPACE WARNING email sent successfully to $EMAIL_TO" | tee -a "$LOG_FILE"
    else
        echo "WARNING: msmtp not found - DISK WARNING email NOT sent" | tee -a "$LOG_FILE"
    fi
else
    echo "Disk space OK: ${DISK_FREE}% free" | tee -a "$LOG_FILE"
fi

# Initialize recap file
echo "BACKUP REPORT - $(hostname) - $(date '+%d.%m.%Y %H')h" > "$RECAP_FILE"
echo "========================================" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"

# Function to format size in MB with thousand separator
format_size_mb() {
    local file="$1"
    if [[ -f "$file" ]]; then
        local size_kb=$(du -k "$file" | cut -f1)
        local size_mb=$((size_kb / 1024))
        # Add thousand separator with printf and sed
        printf "%d" "$size_mb" | sed ':a;s/\B[0-9]\{3\}\>/\.&/;ta'
    else
        echo "0"
    fi
}

# Function to calculate age in days
get_age_days() {
    local file="$1"
    local now=$(date +%s)
    local file_time=$(stat -c %Y "$file" 2>/dev/null || echo 0)
    echo $(( (now - file_time) / 86400 ))
}

# Function to get week number of year for a file
get_week_year() {
    local file="$1"
    local file_time=$(stat -c %Y "$file" 2>/dev/null || echo 0)
    date -d "@$file_time" +"%Y-%W"
}

# Function to cleanup old backups according to retention policy
cleanup_old_backups() {
    local DELETED_COUNT=0
    local KEPT_COUNT=0

    echo "" | tee -a "$LOG_FILE"
    echo "=== Starting Backup Retention Cleanup ===" | tee -a "$LOG_FILE"

    # Parse retention periods
    local KEEP_DIRS_DAYS=${KEEP_DIRS%d}  # Remove 'd' suffix

    # Parse database retention (5d,3w,15m)
    IFS=',' read -r KEEP_DB_DAILY KEEP_DB_WEEKLY KEEP_DB_MONTHLY <<< "$KEEP_DB"
    local KEEP_DB_DAILY_DAYS=${KEEP_DB_DAILY%d}
    local KEEP_DB_WEEKLY_WEEKS=${KEEP_DB_WEEKLY%w}
    local KEEP_DB_MONTHLY_MONTHS=${KEEP_DB_MONTHLY%m}

    # Convert to days
    local KEEP_DB_WEEKLY_DAYS=$((KEEP_DB_WEEKLY_WEEKS * 7))
    local KEEP_DB_MONTHLY_DAYS=$((KEEP_DB_MONTHLY_MONTHS * 30))

    echo "Retention policy: dirs=${KEEP_DIRS_DAYS}d, db=${KEEP_DB_DAILY_DAYS}d/${KEEP_DB_WEEKLY_WEEKS}w/${KEEP_DB_MONTHLY_MONTHS}m" | tee -a "$LOG_FILE"

    # Process each host directory
    for host_dir in "$DIR_BACKUP"/*; do
        if [[ ! -d "$host_dir" ]]; then
            continue
        fi

        local host_name=$(basename "$host_dir")
        echo "  Cleaning host: $host_name" | tee -a "$LOG_FILE"

        # Clean directory backups (*.tar.gz but not *.sql.gz.enc)
        while IFS= read -r -d '' file; do
            if [[ $(basename "$file") == *".sql.gz.enc" ]]; then
                continue  # Skip SQL files
            fi

            local age_days=$(get_age_days "$file")

            if [[ $age_days -gt $KEEP_DIRS_DAYS ]]; then
                rm -f "$file"
                echo "    Deleted: $(basename "$file") (${age_days}d > ${KEEP_DIRS_DAYS}d)" | tee -a "$LOG_FILE"
                ((DELETED_COUNT++))
            else
                ((KEPT_COUNT++))
            fi
        done < <(find "$host_dir" -name "*.tar.gz" -type f -print0 2>/dev/null)

        # Clean database backups with retention policy
        declare -A db_files

        while IFS= read -r -d '' file; do
            local filename=$(basename "$file")
            local db_name=${filename%%_*}

            if [[ -z "${db_files[$db_name]:-}" ]]; then
                db_files[$db_name]="$file"
            else
                db_files[$db_name]+=$'\n'"$file"
            fi
        done < <(find "$host_dir" -name "*.sql.gz.enc" -type f -print0 2>/dev/null)

        # Process each database
        for db_name in "${!db_files[@]}"; do
            # Sort files by age (newest first)
            mapfile -t files < <(echo "${db_files[$db_name]}" | while IFS= read -r f; do
                echo "$f"
            done | xargs -I {} stat -c "%Y {}" {} 2>/dev/null | sort -rn | cut -d' ' -f2-)

            # Track which files to keep
            declare -A keep_daily
            declare -A keep_weekly

            for file in "${files[@]}"; do
                local age_days=$(get_age_days "$file")

                if [[ $age_days -le $KEEP_DB_DAILY_DAYS ]]; then
                    # Keep all files within daily retention
                    ((KEPT_COUNT++))

                elif [[ $age_days -le $KEEP_DB_WEEKLY_DAYS ]]; then
                    # Weekly retention: keep one per day
                    local file_date=$(date -d "@$(stat -c %Y "$file")" +"%Y-%m-%d")

                    if [[ -z "${keep_daily[$file_date]:-}" ]]; then
                        keep_daily[$file_date]="$file"
                        ((KEPT_COUNT++))
                    else
                        rm -f "$file"
                        ((DELETED_COUNT++))
                    fi

                elif [[ $age_days -le $KEEP_DB_MONTHLY_DAYS ]]; then
                    # Monthly retention: keep one per week
                    local week_year=$(get_week_year "$file")

                    if [[ -z "${keep_weekly[$week_year]:-}" ]]; then
                        keep_weekly[$week_year]="$file"
                        ((KEPT_COUNT++))
                    else
                        rm -f "$file"
                        ((DELETED_COUNT++))
                    fi

                else
                    # Beyond retention period
                    rm -f "$file"
                    echo "    Deleted: $(basename "$file") (${age_days}d > ${KEEP_DB_MONTHLY_DAYS}d)" | tee -a "$LOG_FILE"
                    ((DELETED_COUNT++))
                fi
            done

            unset keep_daily keep_weekly
        done

        unset db_files
    done

    echo "Cleanup completed: ${DELETED_COUNT} deleted, ${KEPT_COUNT} kept" | tee -a "$LOG_FILE"

    # Add cleanup summary to recap file
    echo "" >> "$RECAP_FILE"
    echo "CLEANUP SUMMARY:" >> "$RECAP_FILE"
    echo "  Files deleted: $DELETED_COUNT" >> "$RECAP_FILE"
    echo "  Files kept: $KEPT_COUNT" >> "$RECAP_FILE"
}

# Function to backup a single database (must be defined before use)
backup_database() {
    local database="$1"
    local timestamp="$(date +%Y%m%d_%H)"
    local backup_file="$backup_dir/sql/${database}_${timestamp}.sql.gz.enc"

    echo "    Backing up database: $database" | tee -a "$LOG_FILE"

    if [[ "$ssh_user" != "root" ]]; then
        CMD_PREFIX="sudo"
    else
        CMD_PREFIX=""
    fi

    # Execute backup with encryption
    # First test MySQL connection to get clear error messages (|| true to continue on error)
    MYSQL_TEST=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
        "$CMD_PREFIX incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb --defaults-extra-file=/tmp/d6back.cnf -e \"SELECT 1\" 2>&1
rm -f /tmp/d6back.cnf'" 2>/dev/null || true)

    if ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
        "$CMD_PREFIX incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb-dump --defaults-extra-file=/tmp/d6back.cnf --single-transaction --lock-tables=false --add-drop-table --create-options --databases $database 2>/dev/null | sed -e \"/^CREATE DATABASE/s/\\\`$database\\\`/\\\`${database}_${timestamp}\\\`/\" -e \"/^USE/s/\\\`$database\\\`/\\\`${database}_${timestamp}\\\`/\" | gzip
rm -f /tmp/d6back.cnf'" | \
        openssl enc -aes-256-cbc -salt -pass pass:"$ENC_KEY" -pbkdf2 > "$backup_file" 2>/dev/null; then

        # Validate backup file size (encrypted SQL should be > 100 bytes)
        if [[ -f "$backup_file" ]]; then
            file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo 0)
            if [[ $file_size -lt 100 ]]; then
                # Analyze MySQL connection test results
                if [[ "$MYSQL_TEST" == *"Access denied"* ]]; then
                    echo "    ERROR: MySQL authentication failed for $database on $host_name/$container_name" | tee -a "$LOG_FILE"
                    echo "    User: $db_user@$db_host - Check password in configuration" | tee -a "$LOG_FILE"
                elif [[ "$MYSQL_TEST" == *"Unknown database"* ]]; then
                    echo "    ERROR: Database '$database' does not exist on $host_name/$container_name" | tee -a "$LOG_FILE"
                elif [[ "$MYSQL_TEST" == *"Can't connect"* ]]; then
                    echo "    ERROR: Cannot connect to MySQL server at $db_host in $container_name" | tee -a "$LOG_FILE"
                else
                    echo "    ERROR: Backup file too small (${file_size} bytes): $database on $host_name/$container_name" | tee -a "$LOG_FILE"
                fi

                ((ERROR_COUNT++))
                rm -f "$backup_file"
            else
                size=$(du -h "$backup_file" | cut -f1)
                size_mb=$(format_size_mb "$backup_file")
                echo "    ✓ Saved (encrypted): $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
                echo "    SQL: $(basename "$backup_file") - ${size_mb} Mo" >> "$RECAP_FILE"

                # Test backup integrity
                if ! openssl enc -aes-256-cbc -d -pass pass:"$ENC_KEY" -pbkdf2 -in "$backup_file" | gunzip -t 2>/dev/null; then
                    echo "    ERROR: Backup integrity check failed for $database" | tee -a "$LOG_FILE"
                    ((ERROR_COUNT++))
                fi
            fi
        else
            echo "    ERROR: Backup file not created: $database" | tee -a "$LOG_FILE"
            ((ERROR_COUNT++))
        fi
    else
        # Analyze MySQL connection test for failed backup
        if [[ "$MYSQL_TEST" == *"Access denied"* ]]; then
            echo "    ERROR: MySQL authentication failed for $database on $host_name/$container_name" | tee -a "$LOG_FILE"
            echo "    User: $db_user@$db_host - Check password in configuration" | tee -a "$LOG_FILE"
        elif [[ "$MYSQL_TEST" == *"Unknown database"* ]]; then
            echo "    ERROR: Database '$database' does not exist on $host_name/$container_name" | tee -a "$LOG_FILE"
        elif [[ "$MYSQL_TEST" == *"Can't connect"* ]]; then
            echo "    ERROR: Cannot connect to MySQL server at $db_host in $container_name" | tee -a "$LOG_FILE"
        else
            echo "    ERROR: Failed to backup database $database on $host_name/$container_name" | tee -a "$LOG_FILE"
        fi

        ((ERROR_COUNT++))
        rm -f "$backup_file"
    fi
}

# Process each host
host_count=$(yq '.hosts | length' "$CONFIG_FILE")

for ((i=0; i<$host_count; i++)); do
    host_name=$(yq ".hosts[$i].name" "$CONFIG_FILE" | tr -d '"')
    host_ip=$(yq ".hosts[$i].ip" "$CONFIG_FILE" | tr -d '"')
    ssh_user=$(yq ".hosts[$i].user" "$CONFIG_FILE" | tr -d '"')
    ssh_key=$(yq ".hosts[$i].key" "$CONFIG_FILE" | tr -d '"')
    ssh_port=$(yq ".hosts[$i].port // 22" "$CONFIG_FILE" | tr -d '"')

    echo "Processing host: $host_name ($host_ip)" | tee -a "$LOG_FILE"
    echo "" >> "$RECAP_FILE"
    echo "HOST: $host_name ($host_ip)" >> "$RECAP_FILE"
    echo "----------------------------" >> "$RECAP_FILE"

    # Test SSH connection
    if ! ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 -o StrictHostKeyChecking=no "$ssh_user@$host_ip" "true" 2>/dev/null; then
        echo "  ERROR: Cannot connect to $host_name ($host_ip)" | tee -a "$LOG_FILE"
        ((ERROR_COUNT++))
        continue
    fi

    # Process containers
    container_count=$(yq ".hosts[$i].containers | length" "$CONFIG_FILE" 2>/dev/null || echo "0")

    for ((c=0; c<$container_count; c++)); do
        container_name=$(yq ".hosts[$i].containers[$c].name" "$CONFIG_FILE" | tr -d '"')

        echo "  Processing container: $container_name" | tee -a "$LOG_FILE"

        # Add container to recap
        echo "" >> "$RECAP_FILE"
        echo "  Container: $container_name" >> "$RECAP_FILE"

        # Create backup directories
        backup_dir="$DIR_BACKUP/$host_name/$container_name"
        mkdir -p "$backup_dir"
        mkdir -p "$backup_dir/sql"

        # Backup directories (skip if -onlydb mode)
        if [[ "$ONLY_DB" == "false" ]]; then
            dir_count=$(yq ".hosts[$i].containers[$c].dirs | length" "$CONFIG_FILE" 2>/dev/null || echo "0")

            for ((d=0; d<$dir_count; d++)); do
                dir_path=$(yq ".hosts[$i].containers[$c].dirs[$d]" "$CONFIG_FILE" | sed 's/^"\|"$//g')

                # Use sudo if not root
                if [[ "$ssh_user" != "root" ]]; then
                    CMD_PREFIX="sudo"
                else
                    CMD_PREFIX=""
                fi

                # Special handling for /var/www - backup each subdirectory separately
                if [[ "$dir_path" == "/var/www" ]]; then
                    echo "    Backing up subdirectories of $dir_path" | tee -a "$LOG_FILE"

                    # Get list of subdirectories
                    subdirs=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
                        "$CMD_PREFIX incus exec $container_name -- find /var/www -maxdepth 1 -type d ! -path /var/www" 2>/dev/null || echo "")

                    for subdir in $subdirs; do
                        subdir_name=$(basename "$subdir" | tr '/' '_')
                        backup_file="$backup_dir/www_${subdir_name}_$(date +%Y%m%d_%H).tar.gz"

                        echo "    Backing up: $subdir" | tee -a "$LOG_FILE"

                        if ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
                            "$CMD_PREFIX incus exec $container_name -- tar czf - $subdir 2>/dev/null" > "$backup_file"; then

                            # Validate backup file size (tar.gz should be > 1KB)
                            if [[ -f "$backup_file" ]]; then
                                file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo 0)
                                if [[ $file_size -lt 1024 ]]; then
                                    echo "    WARNING: Backup file very small (${file_size} bytes): $subdir" | tee -a "$LOG_FILE"
                                    # Keep the file but note it's small
                                    size=$(du -h "$backup_file" | cut -f1)
                                    size_mb=$(format_size_mb "$backup_file")
                                    echo "    ✓ Saved (small): $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
                                    echo "    DIR: $(basename "$backup_file") - ${size_mb} Mo (WARNING: small)" >> "$RECAP_FILE"
                                else
                                    size=$(du -h "$backup_file" | cut -f1)
                                    size_mb=$(format_size_mb "$backup_file")
                                    echo "    ✓ Saved: $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
                                    echo "    DIR: $(basename "$backup_file") - ${size_mb} Mo" >> "$RECAP_FILE"
                                fi

                                # Test tar integrity
                                if ! tar tzf "$backup_file" >/dev/null 2>&1; then
                                    echo "    ERROR: Tar integrity check failed" | tee -a "$LOG_FILE"
                                    ((ERROR_COUNT++))
                                fi
                            else
                                echo "    ERROR: Backup file not created: $subdir" | tee -a "$LOG_FILE"
                                ((ERROR_COUNT++))
                            fi
                        else
                            echo "    ERROR: Failed to backup $subdir" | tee -a "$LOG_FILE"
                            ((ERROR_COUNT++))
                            rm -f "$backup_file"
                        fi
                    done
                else
                    # Normal backup for other directories
                    dir_name=$(basename "$dir_path" | tr '/' '_')
                    backup_file="$backup_dir/${dir_name}_$(date +%Y%m%d_%H).tar.gz"

                    echo "    Backing up: $dir_path" | tee -a "$LOG_FILE"

                    if ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
                        "$CMD_PREFIX incus exec $container_name -- tar czf - $dir_path 2>/dev/null" > "$backup_file"; then

                        # Validate backup file size (tar.gz should be > 1KB)
                        if [[ -f "$backup_file" ]]; then
                            file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo 0)
                            if [[ $file_size -lt 1024 ]]; then
                                echo "    WARNING: Backup file very small (${file_size} bytes): $dir_path" | tee -a "$LOG_FILE"
                                # Keep the file but note it's small
                                size=$(du -h "$backup_file" | cut -f1)
                                size_mb=$(format_size_mb "$backup_file")
                                echo "    ✓ Saved (small): $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
                                echo "    DIR: $(basename "$backup_file") - ${size_mb} Mo (WARNING: small)" >> "$RECAP_FILE"
                            else
                                size=$(du -h "$backup_file" | cut -f1)
                                size_mb=$(format_size_mb "$backup_file")
                                echo "    ✓ Saved: $(basename "$backup_file") ($size)" | tee -a "$LOG_FILE"
                                echo "    DIR: $(basename "$backup_file") - ${size_mb} Mo" >> "$RECAP_FILE"
                            fi

                            # Test tar integrity
                            if ! tar tzf "$backup_file" >/dev/null 2>&1; then
                                echo "    ERROR: Tar integrity check failed" | tee -a "$LOG_FILE"
                                ((ERROR_COUNT++))
                            fi
                        else
                            echo "    ERROR: Backup file not created: $dir_path" | tee -a "$LOG_FILE"
                            ((ERROR_COUNT++))
                        fi
                    else
                        echo "    ERROR: Failed to backup $dir_path" | tee -a "$LOG_FILE"
                        ((ERROR_COUNT++))
                        rm -f "$backup_file"
                    fi
                fi
            done
        fi  # End of directory backup section

        # Backup databases
        db_user=$(yq ".hosts[$i].containers[$c].db_user" "$CONFIG_FILE" 2>/dev/null | tr -d '"')
        db_pass=$(yq ".hosts[$i].containers[$c].db_pass" "$CONFIG_FILE" 2>/dev/null | tr -d '"')
        db_host=$(yq ".hosts[$i].containers[$c].db_host // \"localhost\"" "$CONFIG_FILE" 2>/dev/null | tr -d '"')

        # Check if we're in onlydb mode
        if [[ "$ONLY_DB" == "true" ]]; then
            # Use onlydb list if it exists
            onlydb_count=$(yq ".hosts[$i].containers[$c].onlydb | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
            if [[ "$onlydb_count" != "0" ]] && [[ "$onlydb_count" != "null" ]]; then
                db_count="$onlydb_count"
                use_onlydb=true
            else
                # No onlydb list, skip this container in onlydb mode
                continue
            fi
        else
            # Normal mode - use databases list
            db_count=$(yq ".hosts[$i].containers[$c].databases | length" "$CONFIG_FILE" 2>/dev/null || echo "0")
            use_onlydb=false
        fi

        if [[ -n "$db_user" ]] && [[ -n "$db_pass" ]] && [[ "$db_count" != "0" ]]; then
            for ((db=0; db<$db_count; db++)); do
                if [[ "$use_onlydb" == "true" ]]; then
                    db_name=$(yq ".hosts[$i].containers[$c].onlydb[$db]" "$CONFIG_FILE" | tr -d '"')
                else
                    db_name=$(yq ".hosts[$i].containers[$c].databases[$db]" "$CONFIG_FILE" | tr -d '"')
                fi

                if [[ "$db_name" == "ALL" ]]; then
                    echo "    Fetching all databases..." | tee -a "$LOG_FILE"

                    # Get database list
                    if [[ "$ssh_user" != "root" ]]; then
                        db_list=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
                            "sudo incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb --defaults-extra-file=/tmp/d6back.cnf -e \"SHOW DATABASES;\" 2>/dev/null
rm -f /tmp/d6back.cnf'" | \
                            grep -Ev '^(Database|information_schema|performance_schema|mysql|sys)$' || echo "")
                    else
                        db_list=$(ssh -i "$ssh_key" -p "$ssh_port" -o ConnectTimeout=20 "$ssh_user@$host_ip" \
                            "incus exec $container_name -- bash -c 'cat > /tmp/d6back.cnf << EOF
[client]
user=$db_user
password=$db_pass
host=$db_host
EOF
chmod 600 /tmp/d6back.cnf
mariadb --defaults-extra-file=/tmp/d6back.cnf -e \"SHOW DATABASES;\" 2>/dev/null
rm -f /tmp/d6back.cnf'" | \
                            grep -Ev '^(Database|information_schema|performance_schema|mysql|sys)$' || echo "")
                    fi

                    # Backup each database
                    for single_db in $db_list; do
                        backup_database "$single_db"
                    done
                else
                    backup_database "$db_name"
                fi
            done
        fi
    done
done

echo "=== Backup Completed $(date) ===" | tee -a "$LOG_FILE"

# Cleanup old backups according to retention policy
cleanup_old_backups

# Show summary
total_size=$(du -sh "$DIR_BACKUP" 2>/dev/null | cut -f1)
echo "Total backup size: $total_size" | tee -a "$LOG_FILE"

# Add summary to recap
echo "" >> "$RECAP_FILE"
echo "========================================" >> "$RECAP_FILE"

# Add size details per host/container
echo "BACKUP SIZES:" >> "$RECAP_FILE"
for host_dir in "$DIR_BACKUP"/*; do
    if [[ -d "$host_dir" ]]; then
        host_name=$(basename "$host_dir")
        host_size=$(du -sh "$host_dir" 2>/dev/null | cut -f1)
        echo "" >> "$RECAP_FILE"
        echo "  $host_name: $host_size" >> "$RECAP_FILE"

        # Size per container
        for container_dir in "$host_dir"/*; do
            if [[ -d "$container_dir" ]]; then
                container_name=$(basename "$container_dir")
                container_size=$(du -sh "$container_dir" 2>/dev/null | cut -f1)
                echo "    - $container_name: $container_size" >> "$RECAP_FILE"
            fi
        done
    fi
done

echo "" >> "$RECAP_FILE"
echo "TOTAL SIZE: $total_size" >> "$RECAP_FILE"
echo "COMPLETED: $(date '+%d.%m.%Y %H:%M')" >> "$RECAP_FILE"

# Prepare email subject with date format
DATE_SUBJECT=$(date '+%d.%m.%Y %H')

# Send recap email
if [[ $ERROR_COUNT -gt 0 ]]; then
    echo "Total errors: $ERROR_COUNT" | tee -a "$LOG_FILE"

    # Add errors to recap
    echo "" >> "$RECAP_FILE"
    echo "ERRORS DETECTED: $ERROR_COUNT" >> "$RECAP_FILE"
    echo "----------------------------" >> "$RECAP_FILE"
    grep -i "ERROR" "$LOG_FILE" >> "$RECAP_FILE"

    # Send email with ERROR in subject
    echo "Sending ERROR email to $EMAIL_TO (Errors found: $ERROR_COUNT)" | tee -a "$LOG_FILE"
    if command -v msmtp &> /dev/null; then
        {
            echo "To: $EMAIL_TO"
            echo "Subject: Backup${BACKUP_SERVER} ERROR $DATE_SUBJECT"
            echo ""
            cat "$RECAP_FILE"
        } | msmtp "$EMAIL_TO"
        echo "ERROR email sent successfully to $EMAIL_TO" | tee -a "$LOG_FILE"
    else
        echo "WARNING: msmtp not found - ERROR email NOT sent" | tee -a "$LOG_FILE"
    fi
else
    echo "Backup completed successfully with no errors" | tee -a "$LOG_FILE"

    # Send success recap email
    echo "Sending SUCCESS recap email to $EMAIL_TO" | tee -a "$LOG_FILE"
    if command -v msmtp &> /dev/null; then
        {
            echo "To: $EMAIL_TO"
            echo "Subject: Backup${BACKUP_SERVER} $DATE_SUBJECT"
            echo ""
            cat "$RECAP_FILE"
} | msmtp "$EMAIL_TO"
|
|
||||||
echo "SUCCESS recap email sent successfully to $EMAIL_TO" | tee -a "$LOG_FILE"
|
|
||||||
else
|
|
||||||
echo "WARNING: msmtp not found - SUCCESS recap email NOT sent" | tee -a "$LOG_FILE"
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Clean up recap file
|
|
||||||
rm -f "$RECAP_FILE"
|
|
||||||
|
|
||||||
# Exit with error code if there were errors
|
|
||||||
if [[ $ERROR_COUNT -gt 0 ]]; then
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
@@ -1,112 +0,0 @@
# Configuration for MariaDB and directories backup
# Backup structure: $dir_backup/$hostname/$containername/     for dirs
#                   $dir_backup/$hostname/$containername/sql/ for databases

# Global parameters
global:
  backup_server: PM7                # Backup server name (PM7, PM1, etc.)
  email_to: support@unikoffice.com  # Notification email
  dir_backup: /var/pierre/back      # Base backup directory
  enc_key: /home/pierre/.key_enc    # Encryption key for SQL backups
  keep_dirs: 7d                     # Keep 7 days for dirs
  keep_db: 5d,3w,15m                # 5 full days, 3 weeks (1/day), 15 months (1/week)

# Hosts configuration
hosts:
  - name: IN2
    ip: 145.239.9.105
    user: debian
    key: /home/pierre/.ssh/backup_key
    port: 22
    dirs:
      - /etc/nginx
    containers:
      - name: nx4
        db_user: root
        db_pass: MyDebServer,90b
        db_host: localhost
        dirs:
          - /etc/nginx
          - /var/www
        databases:
          - ALL   # Backup all databases
        onlydb:   # Used only with -onlydb parameter (optional)
          - turing

  - name: IN3
    ip: 195.154.80.116
    user: pierre
    key: /home/pierre/.ssh/backup_key
    port: 22
    dirs:
      - /etc/nginx
    containers:
      - name: nx4
        db_user: root
        db_pass: MyAlpLocal,90b
        db_host: localhost
        dirs:
          - /etc/nginx
          - /var/www
        databases:
          - ALL   # Backup all databases
        onlydb:   # Used only with -onlydb parameter (optional)
          - geosector

      - name: rca-geo
        dirs:
          - /etc/nginx
          - /var/www

      - name: dva-res
        db_user: root
        db_pass: MyAlpineDb.90b
        db_host: localhost
        dirs:
          - /etc/nginx
          - /var/www
        databases:
          - ALL
        onlydb:
          - resalice

      - name: dva-front
        dirs:
          - /etc/nginx
          - /var/www

      - name: maria3
        db_user: root
        db_pass: MyAlpLocal,90b
        db_host: localhost
        dirs:
          - /etc/my.cnf.d
          - /var/osm
          - /var/log
        databases:
          - ALL
        onlydb:
          - cleo
          - rca_geo

  - name: IN4
    ip: 51.159.7.190
    user: pierre
    key: /home/pierre/.ssh/backup_key
    port: 22
    dirs:
      - /etc/nginx
    containers:
      - name: maria4
        db_user: root
        db_pass: MyAlpLocal,90b
        db_host: localhost
        dirs:
          - /etc/my.cnf.d
          - /var/osm
          - /var/log
        databases:
          - ALL
        onlydb:
          - cleo
          - pra_geo
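The `keep_db` retention string above (`5d,3w,15m`) encodes three tiers: daily, weekly, and monthly keep counts. A minimal POSIX-shell sketch of how such a value could be split into its parts (the variable names here are illustrative, not taken from the backup scripts):

```shell
# Split a keep_db value like "5d,3w,15m" into its three retention tiers.
KEEP_DB="5d,3w,15m"
daily=${KEEP_DB%%,*}     # "5d"  - first comma-separated field
rest=${KEEP_DB#*,}       # "3w,15m"
weekly=${rest%%,*}       # "3w"
monthly=${rest#*,}       # "15m"
# Strip the unit suffixes to get bare numbers
echo "daily=${daily%d} weekly=${weekly%w} monthly=${monthly%m}"
```

This prints `daily=5 weekly=3 monthly=15`; the real scripts may parse the value differently.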
@@ -1,118 +0,0 @@
#!/bin/bash

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Configuration
CONFIG_FILE="backpm7.yaml"

# Check if file argument is provided
if [ $# -eq 0 ]; then
    echo -e "${RED}Error: No input file specified${NC}"
    echo "Usage: $0 <database.sql.gz.enc>"
    echo "Example: $0 wordpress_20250905_14.sql.gz.enc"
    exit 1
fi

INPUT_FILE="$1"

# Check if input file exists
if [ ! -f "$INPUT_FILE" ]; then
    echo -e "${RED}Error: File not found: $INPUT_FILE${NC}"
    exit 1
fi

# Function to load encryption key from config
load_key_from_config() {
    if [ ! -f "$CONFIG_FILE" ]; then
        echo -e "${YELLOW}Warning: $CONFIG_FILE not found${NC}"
        return 1
    fi

    # Check for yq
    if ! command -v yq &> /dev/null; then
        echo -e "${RED}Error: yq is required to read config file${NC}"
        echo "Install with: sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 && sudo chmod +x /usr/local/bin/yq"
        return 1
    fi

    local key_path=$(yq '.global.enc_key' "$CONFIG_FILE" | tr -d '"')

    if [ -z "$key_path" ]; then
        echo -e "${RED}Error: enc_key not found in $CONFIG_FILE${NC}"
        return 1
    fi

    if [ ! -f "$key_path" ]; then
        echo -e "${RED}Error: Encryption key file not found: $key_path${NC}"
        return 1
    fi

    ENC_KEY=$(cat "$key_path")
    echo -e "${GREEN}Encryption key loaded from: $key_path${NC}"
    return 0
}

# Check file type early - accept both old and new naming
if [[ "$INPUT_FILE" != *.sql.gz.enc ]] && [[ "$INPUT_FILE" != *.sql.tar.gz.enc ]]; then
    echo -e "${RED}Error: File must be a .sql.gz.enc or .sql.tar.gz.enc file${NC}"
    echo "This tool only decrypts SQL backup files created by backpm7.sh"
    exit 1
fi

# Get encryption key from config
if ! load_key_from_config; then
    echo -e "${RED}Error: Cannot load encryption key${NC}"
    echo "Make sure $CONFIG_FILE exists and contains enc_key path"
    exit 1
fi

# Process SQL backup file
echo -e "${BLUE}Decrypting SQL backup: $INPUT_FILE${NC}"

# Determine output file - extract just the filename and put in current directory
BASENAME=$(basename "$INPUT_FILE")
if [[ "$BASENAME" == *.sql.tar.gz.enc ]]; then
    OUTPUT_FILE="${BASENAME%.sql.tar.gz.enc}.sql"
else
    OUTPUT_FILE="${BASENAME%.sql.gz.enc}.sql"
fi

# Decrypt and decompress in one command
echo "Decrypting to: $OUTPUT_FILE"

# Decrypt and decompress in one pipeline
if openssl enc -aes-256-cbc -d -salt -pass pass:"$ENC_KEY" -pbkdf2 -in "$INPUT_FILE" | gunzip > "$OUTPUT_FILE" 2>/dev/null; then
    # Get file size
    size=$(du -h "$OUTPUT_FILE" | cut -f1)
    echo -e "${GREEN}✓ Successfully decrypted: $OUTPUT_FILE ($size)${NC}"

    # Show first few lines of SQL
    echo -e "${BLUE}First 5 lines of SQL:${NC}"
    head -n 5 "$OUTPUT_FILE"
else
    echo -e "${RED}✗ Decryption failed${NC}"
    echo "Possible causes:"
    echo "  - Wrong encryption key"
    echo "  - Corrupted file"
    echo "  - File was encrypted differently"

    # Try to help debug
    echo -e "\n${YELLOW}Debug info:${NC}"
    echo "File size: $(du -h "$INPUT_FILE" | cut -f1)"
    echo "First bytes (should start with 'Salted__'):"
    hexdump -C "$INPUT_FILE" | head -n 1

    # Let's also check what key we're using (first 10 chars)
    echo "Key begins with: ${ENC_KEY:0:10}..."

    exit 1
fi

echo -e "${GREEN}Operation completed successfully${NC}"
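The decrypt pipeline above is the inverse of what the backup script writes. A self-contained round-trip sketch, using a throwaway key and file (not the production key or paths), shows the two pipelines are symmetric:

```shell
# Encrypt+compress the way the backup side does, then decrypt+decompress the
# way this tool does, and verify the round trip is lossless.
KEY="demo-key-not-production"
printf 'SELECT 1;\n' > /tmp/demo.sql
gzip -c /tmp/demo.sql | openssl enc -aes-256-cbc -salt -pass pass:"$KEY" -pbkdf2 \
    -out /tmp/demo.sql.gz.enc
# Inverse pipeline: decrypt, then gunzip
openssl enc -aes-256-cbc -d -pass pass:"$KEY" -pbkdf2 -in /tmp/demo.sql.gz.enc | gunzip
```

Note that `-pbkdf2` must be used on both sides; mixing key-derivation modes is one way to hit the "File was encrypted differently" failure branch above.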
@@ -1,248 +0,0 @@
#!/bin/bash
#
# sync_geosector.sh - Syncs geosector backups from PM7 to maria3 (IN3) and maria4 (IN4)
#
# This script:
#   1. Finds the latest encrypted geosector backup on PM7
#   2. Decrypts and decompresses it locally
#   3. Transfers and imports it into IN3/maria3/geosector
#   4. Transfers and imports it into IN4/maria4/geosector
#
# Installation: /var/pierre/bat/sync_geosector.sh
# Usage: ./sync_geosector.sh [--force] [--date YYYYMMDD_HH]
#

set -uo pipefail
# Note: Removed -e to allow script to continue on sync errors
# Errors are handled explicitly with ERROR_COUNT

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="$SCRIPT_DIR/d6back.yaml"
BACKUP_DIR="/var/pierre/back/IN3/nx4/sql"
ENC_KEY_FILE="/home/pierre/.key_enc"
SSH_KEY="/home/pierre/.ssh/backup_key"
TEMP_DIR="/tmp/geosector_sync"
LOG_FILE="/var/pierre/bat/logs/sync_geosector.log"
RECAP_FILE="/tmp/sync_geosector_recap_$$.txt"

# Load email config from d6back.yaml
if [[ -f "$CONFIG_FILE" ]]; then
    EMAIL_TO=$(yq '.global.email_to // "support@unikoffice.com"' "$CONFIG_FILE" | tr -d '"')
    BACKUP_SERVER=$(yq '.global.backup_server // "BACKUP"' "$CONFIG_FILE" | tr -d '"')
else
    EMAIL_TO="support@unikoffice.com"
    BACKUP_SERVER="BACKUP"
fi

# Target servers
IN3_HOST="195.154.80.116"
IN3_USER="pierre"
IN3_CONTAINER="maria3"

IN4_HOST="51.159.7.190"
IN4_USER="pierre"
IN4_CONTAINER="maria4"

# MariaDB credentials
DB_USER="root"
IN3_DB_PASS="MyAlpLocal,90b"  # maria3
IN4_DB_PASS="MyAlpLocal,90b"  # maria4
DB_NAME="geosector"

# Utility functions
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

error() {
    log "ERROR: $*"
    exit 1
}

cleanup() {
    if [[ -d "$TEMP_DIR" ]]; then
        log "Cleaning up $TEMP_DIR"
        rm -rf "$TEMP_DIR"
    fi
    rm -f "$RECAP_FILE"
}

trap cleanup EXIT

# Read the encryption key
if [[ ! -f "$ENC_KEY_FILE" ]]; then
    error "Encryption key not found: $ENC_KEY_FILE"
fi
ENC_KEY=$(cat "$ENC_KEY_FILE")

# Argument parsing
FORCE=0
SPECIFIC_DATE=""

while [[ $# -gt 0 ]]; do
    case $1 in
        --force)
            FORCE=1
            shift
            ;;
        --date)
            SPECIFIC_DATE="$2"
            shift 2
            ;;
        *)
            echo "Usage: $0 [--force] [--date YYYYMMDD_HH]"
            exit 1
            ;;
    esac
done

# Find the backup file
if [[ -n "$SPECIFIC_DATE" ]]; then
    BACKUP_FILE="$BACKUP_DIR/geosector_${SPECIFIC_DATE}.sql.gz.enc"
    if [[ ! -f "$BACKUP_FILE" ]]; then
        error "Backup not found: $BACKUP_FILE"
    fi
else
    # Look for the most recent one
    BACKUP_FILE=$(find "$BACKUP_DIR" -name "geosector_*.sql.gz.enc" -type f -printf '%T@ %p\n' | sort -rn | head -1 | cut -d' ' -f2-)
    if [[ -z "$BACKUP_FILE" ]]; then
        error "No geosector backup found in $BACKUP_DIR"
    fi
fi

BACKUP_BASENAME=$(basename "$BACKUP_FILE")
log "Selected backup: $BACKUP_BASENAME"

# Initialize the recap file
echo "SYNC GEOSECTOR REPORT - $(hostname) - $(date '+%d.%m.%Y %H')h" > "$RECAP_FILE"
echo "========================================" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"
echo "Backup source: $BACKUP_BASENAME" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"

# Create the temporary directory
mkdir -p "$TEMP_DIR"
DECRYPTED_FILE="$TEMP_DIR/geosector.sql"

# Step 1: Decrypt and decompress
log "Decrypting and decompressing the backup..."
if ! openssl enc -aes-256-cbc -d -pass pass:"$ENC_KEY" -pbkdf2 -in "$BACKUP_FILE" | gunzip > "$DECRYPTED_FILE"; then
    error "Decryption/decompression failed"
fi

FILE_SIZE=$(du -h "$DECRYPTED_FILE" | cut -f1)
log "Decrypted SQL file: $FILE_SIZE"

echo "Decrypted SQL size: $FILE_SIZE" >> "$RECAP_FILE"
echo "" >> "$RECAP_FILE"

# Error counter
ERROR_COUNT=0

# Function to sync to one server
sync_to_server() {
    local HOST=$1
    local USER=$2
    local CONTAINER=$3
    local DB_PASS=$4
    local SERVER_NAME=$5

    log "=== Syncing to $SERVER_NAME ($HOST) ==="
    echo "TARGET: $SERVER_NAME ($HOST/$CONTAINER)" >> "$RECAP_FILE"

    # SSH connection test
    if ! ssh -i "$SSH_KEY" -o ConnectTimeout=10 "$USER@$HOST" "echo 'SSH OK'" &>/dev/null; then
        log "ERROR: Unable to connect to $HOST via SSH"
        echo "  ✗ SSH connection FAILED" >> "$RECAP_FILE"
        ((ERROR_COUNT++))
        return 1
    fi

    # Import into MariaDB
    log "Importing into $SERVER_NAME/$CONTAINER/geosector..."

    # Drop and recreate the database on the remote server
    if ! ssh -i "$SSH_KEY" "$USER@$HOST" "incus exec $CONTAINER --project default -- mariadb -u root -p'$DB_PASS' -e 'DROP DATABASE IF EXISTS $DB_NAME; CREATE DATABASE $DB_NAME CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;'"; then
        log "ERROR: Database creation failed on $SERVER_NAME"
        echo "  ✗ Database creation FAILED" >> "$RECAP_FILE"
        ((ERROR_COUNT++))
        return 1
    fi

    # Filter and import the SQL (drop timestamped CREATE DATABASE and USE statements)
    log "Filtering and importing the SQL..."
    if ! sed -e '/^CREATE DATABASE.*geosector_[0-9]/d' \
             -e '/^USE.*geosector_[0-9]/d' \
             "$DECRYPTED_FILE" | \
         ssh -i "$SSH_KEY" "$USER@$HOST" "incus exec $CONTAINER --project default -- mariadb -u root -p'$DB_PASS' $DB_NAME"; then
        log "ERROR: Import failed on $SERVER_NAME"
        echo "  ✗ SQL import FAILED" >> "$RECAP_FILE"
        ((ERROR_COUNT++))
        return 1
    fi

    log "$SERVER_NAME: Import succeeded"
    echo "  ✓ Import SUCCESS" >> "$RECAP_FILE"
    echo "" >> "$RECAP_FILE"
}

# Sync to IN3/maria3
sync_to_server "$IN3_HOST" "$IN3_USER" "$IN3_CONTAINER" "$IN3_DB_PASS" "IN3/maria3"

# Sync to IN4/maria4
sync_to_server "$IN4_HOST" "$IN4_USER" "$IN4_CONTAINER" "$IN4_DB_PASS" "IN4/maria4"

# Finalize the recap
echo "========================================" >> "$RECAP_FILE"
echo "COMPLETED: $(date '+%d.%m.%Y %H:%M')" >> "$RECAP_FILE"

# Prepare email subject with date
DATE_SUBJECT=$(date '+%d.%m.%Y %H')

# Send the recap email
if [[ $ERROR_COUNT -gt 0 ]]; then
    log "Total errors: $ERROR_COUNT"

    # Add errors to the recap
    echo "" >> "$RECAP_FILE"
    echo "ERRORS DETECTED: $ERROR_COUNT" >> "$RECAP_FILE"
    echo "----------------------------" >> "$RECAP_FILE"
    grep -i "ERROR" "$LOG_FILE" | tail -20 >> "$RECAP_FILE"

    # Send email with ERROR in the subject
    log "Sending ERROR email to $EMAIL_TO (Errors found: $ERROR_COUNT)"
    if command -v msmtp &> /dev/null; then
        {
            echo "To: $EMAIL_TO"
            echo "Subject: Sync${BACKUP_SERVER} ERROR $DATE_SUBJECT"
            echo ""
            cat "$RECAP_FILE"
        } | msmtp "$EMAIL_TO"
        log "ERROR email sent successfully to $EMAIL_TO"
    else
        log "WARNING: msmtp not found - ERROR email NOT sent"
    fi

    log "=== Synchronization finished with errors ==="
    exit 1
else
    log "=== Synchronization finished successfully ==="
    log "The geosector databases on maria3 and maria4 are up to date with backup $BACKUP_BASENAME"

    # Send success email
    log "Sending SUCCESS recap email to $EMAIL_TO"
    if command -v msmtp &> /dev/null; then
        {
            echo "To: $EMAIL_TO"
            echo "Subject: Sync${BACKUP_SERVER} $DATE_SUBJECT"
            echo ""
            cat "$RECAP_FILE"
        } | msmtp "$EMAIL_TO"
        log "SUCCESS recap email sent successfully to $EMAIL_TO"
    else
        log "WARNING: msmtp not found - SUCCESS recap email NOT sent"
    fi

    exit 0
fi
2594	api/TODO-API.md
File diff suppressed because it is too large
@@ -8,20 +8,13 @@
         "ext-openssl": "*",
         "ext-pdo": "*",
         "phpmailer/phpmailer": "^6.8",
-        "phpoffice/phpspreadsheet": "^5.0",
+        "phpoffice/phpspreadsheet": "^2.0",
         "setasign/fpdf": "^1.8",
-        "setasign/fpdi": "^2.6",
         "stripe/stripe-php": "^17.6"
     },
     "autoload": {
-        "psr-4": {
-            "App\\": "src/"
-        },
         "classmap": [
-            "src/Core/",
-            "src/Config/",
-            "src/Utils/",
-            "src/Controllers/LogController.php"
+            "src/"
         ]
     },
     "config": {
124	api/composer.lock (generated)
@@ -4,7 +4,7 @@
         "Read more about it at https://getcomposer.org/doc/01-basic-usage.md#installing-dependencies",
         "This file is @generated automatically"
     ],
-    "content-hash": "936a7e1a35fde56354a4dea02b309267",
+    "content-hash": "155893f9be89bceda3639efbf19b14d1",
     "packages": [
         {
             "name": "composer/pcre",
@@ -87,22 +87,22 @@
         },
         {
             "name": "maennchen/zipstream-php",
-            "version": "3.2.0",
+            "version": "3.1.2",
             "source": {
                 "type": "git",
                 "url": "https://github.com/maennchen/ZipStream-PHP.git",
-                "reference": "9712d8fa4cdf9240380b01eb4be55ad8dcf71416"
+                "reference": "aeadcf5c412332eb426c0f9b4485f6accba2a99f"
             },
             "dist": {
                 "type": "zip",
-                "url": "https://api.github.com/repos/maennchen/ZipStream-PHP/zipball/9712d8fa4cdf9240380b01eb4be55ad8dcf71416",
-                "reference": "9712d8fa4cdf9240380b01eb4be55ad8dcf71416",
+                "url": "https://api.github.com/repos/maennchen/ZipStream-PHP/zipball/aeadcf5c412332eb426c0f9b4485f6accba2a99f",
+                "reference": "aeadcf5c412332eb426c0f9b4485f6accba2a99f",
                 "shasum": ""
             },
             "require": {
                 "ext-mbstring": "*",
                 "ext-zlib": "*",
-                "php-64bit": "^8.3"
+                "php-64bit": "^8.2"
             },
             "require-dev": {
                 "brianium/paratest": "^7.7",
@@ -111,7 +111,7 @@
                 "guzzlehttp/guzzle": "^7.5",
                 "mikey179/vfsstream": "^1.6",
                 "php-coveralls/php-coveralls": "^2.5",
-                "phpunit/phpunit": "^12.0",
+                "phpunit/phpunit": "^11.0",
                 "vimeo/psalm": "^6.0"
             },
             "suggest": {
@@ -153,7 +153,7 @@
             ],
             "support": {
                 "issues": "https://github.com/maennchen/ZipStream-PHP/issues",
-                "source": "https://github.com/maennchen/ZipStream-PHP/tree/3.2.0"
+                "source": "https://github.com/maennchen/ZipStream-PHP/tree/3.1.2"
             },
             "funding": [
                 {
@@ -161,7 +161,7 @@
                     "type": "github"
                 }
             ],
-            "time": "2025-07-17T11:15:13+00:00"
+            "time": "2025-01-27T12:07:53+00:00"
        },
        {
            "name": "markbaker/complex",
@@ -272,16 +272,16 @@
        },
        {
            "name": "phpmailer/phpmailer",
-            "version": "v6.11.1",
+            "version": "v6.10.0",
            "source": {
                "type": "git",
                "url": "https://github.com/PHPMailer/PHPMailer.git",
-                "reference": "d9e3b36b47f04b497a0164c5a20f92acb4593284"
+                "reference": "bf74d75a1fde6beaa34a0ddae2ec5fce0f72a144"
            },
            "dist": {
                "type": "zip",
-                "url": "https://api.github.com/repos/PHPMailer/PHPMailer/zipball/d9e3b36b47f04b497a0164c5a20f92acb4593284",
-                "reference": "d9e3b36b47f04b497a0164c5a20f92acb4593284",
+                "url": "https://api.github.com/repos/PHPMailer/PHPMailer/zipball/bf74d75a1fde6beaa34a0ddae2ec5fce0f72a144",
+                "reference": "bf74d75a1fde6beaa34a0ddae2ec5fce0f72a144",
                "shasum": ""
            },
            "require": {
@@ -302,7 +302,6 @@
            },
            "suggest": {
                "decomplexity/SendOauth2": "Adapter for using XOAUTH2 authentication",
-                "ext-imap": "Needed to support advanced email address parsing according to RFC822",
                "ext-mbstring": "Needed to send email in multibyte encoding charset or decode encoded addresses",
                "ext-openssl": "Needed for secure SMTP sending and DKIM signing",
                "greew/oauth2-azure-provider": "Needed for Microsoft Azure XOAUTH2 authentication",
@@ -342,7 +341,7 @@
            "description": "PHPMailer is a full-featured email creation and transfer class for PHP",
            "support": {
                "issues": "https://github.com/PHPMailer/PHPMailer/issues",
-                "source": "https://github.com/PHPMailer/PHPMailer/tree/v6.11.1"
+                "source": "https://github.com/PHPMailer/PHPMailer/tree/v6.10.0"
            },
            "funding": [
                {
@@ -350,24 +349,24 @@
                    "type": "github"
                }
            ],
-            "time": "2025-09-30T11:54:53+00:00"
+            "time": "2025-04-24T15:19:31+00:00"
        },
        {
            "name": "phpoffice/phpspreadsheet",
-            "version": "5.1.0",
+            "version": "2.3.8",
            "source": {
                "type": "git",
                "url": "https://github.com/PHPOffice/PhpSpreadsheet.git",
-                "reference": "fd26e45a814e94ae2aad0df757d9d1739c4bf2e0"
+                "reference": "7a700683743bf1c4a21837c84b266916f1aa7d25"
            },
            "dist": {
                "type": "zip",
-                "url": "https://api.github.com/repos/PHPOffice/PhpSpreadsheet/zipball/fd26e45a814e94ae2aad0df757d9d1739c4bf2e0",
-                "reference": "fd26e45a814e94ae2aad0df757d9d1739c4bf2e0",
+                "url": "https://api.github.com/repos/PHPOffice/PhpSpreadsheet/zipball/7a700683743bf1c4a21837c84b266916f1aa7d25",
+                "reference": "7a700683743bf1c4a21837c84b266916f1aa7d25",
                "shasum": ""
            },
            "require": {
-                "composer/pcre": "^1||^2||^3",
+                "composer/pcre": "^1 || ^2 || ^3",
                "ext-ctype": "*",
                "ext-dom": "*",
                "ext-fileinfo": "*",
@@ -396,10 +395,9 @@
                "mitoteam/jpgraph": "^10.3",
                "mpdf/mpdf": "^8.1.1",
                "phpcompatibility/php-compatibility": "^9.3",
-                "phpstan/phpstan": "^1.1 || ^2.0",
-                "phpstan/phpstan-deprecation-rules": "^1.0 || ^2.0",
-                "phpstan/phpstan-phpunit": "^1.0 || ^2.0",
-                "phpunit/phpunit": "^10.5",
+                "phpstan/phpstan": "^1.1",
+                "phpstan/phpstan-phpunit": "^1.0",
+                "phpunit/phpunit": "^9.6 || ^10.5",
                "squizlabs/php_codesniffer": "^3.7",
                "tecnickcom/tcpdf": "^6.5"
            },
@@ -454,9 +452,9 @@
            ],
            "support": {
                "issues": "https://github.com/PHPOffice/PhpSpreadsheet/issues",
-                "source": "https://github.com/PHPOffice/PhpSpreadsheet/tree/5.1.0"
+                "source": "https://github.com/PHPOffice/PhpSpreadsheet/tree/2.3.8"
            },
-            "time": "2025-09-04T05:34:49+00:00"
+            "time": "2025-02-08T03:01:45+00:00"
        },
        {
            "name": "psr/http-client",
@@ -715,78 +713,6 @@
            },
            "time": "2023-06-26T14:44:25+00:00"
        },
-        {
-            "name": "setasign/fpdi",
-            "version": "v2.6.4",
-            "source": {
-                "type": "git",
-                "url": "https://github.com/Setasign/FPDI.git",
-                "reference": "4b53852fde2734ec6a07e458a085db627c60eada"
-            },
-            "dist": {
-                "type": "zip",
-                "url": "https://api.github.com/repos/Setasign/FPDI/zipball/4b53852fde2734ec6a07e458a085db627c60eada",
-                "reference": "4b53852fde2734ec6a07e458a085db627c60eada",
-                "shasum": ""
-            },
-            "require": {
-                "ext-zlib": "*",
-                "php": "^7.1 || ^8.0"
-            },
-            "conflict": {
-                "setasign/tfpdf": "<1.31"
-            },
-            "require-dev": {
-                "phpunit/phpunit": "^7",
-                "setasign/fpdf": "~1.8.6",
-                "setasign/tfpdf": "~1.33",
-                "squizlabs/php_codesniffer": "^3.5",
-                "tecnickcom/tcpdf": "^6.8"
-            },
-            "suggest": {
-                "setasign/fpdf": "FPDI will extend this class but as it is also possible to use TCPDF or tFPDF as an alternative. There's no fixed dependency configured."
-            },
-            "type": "library",
-            "autoload": {
-                "psr-4": {
-                    "setasign\\Fpdi\\": "src/"
-                }
-            },
-            "notification-url": "https://packagist.org/downloads/",
-            "license": [
-                "MIT"
-            ],
-            "authors": [
-                {
-                    "name": "Jan Slabon",
-                    "email": "jan.slabon@setasign.com",
-                    "homepage": "https://www.setasign.com"
-                },
-                {
-                    "name": "Maximilian Kresse",
-                    "email": "maximilian.kresse@setasign.com",
-                    "homepage": "https://www.setasign.com"
-                }
-            ],
-            "description": "FPDI is a collection of PHP classes facilitating developers to read pages from existing PDF documents and use them as templates in FPDF. Because it is also possible to use FPDI with TCPDF, there are no fixed dependencies defined. Please see suggestions for packages which evaluates the dependencies automatically.",
-            "homepage": "https://www.setasign.com/fpdi",
-            "keywords": [
-                "fpdf",
-                "fpdi",
-                "pdf"
-            ],
-            "support": {
-                "issues": "https://github.com/Setasign/FPDI/issues",
-                "source": "https://github.com/Setasign/FPDI/tree/v2.6.4"
-            },
-            "funding": [
-                {
-                    "url": "https://tidelift.com/funding/github/packagist/setasign/fpdi",
-                    "type": "tidelift"
-                }
-            ],
-            "time": "2025-08-05T09:57:14+00:00"
-        },
        {
            "name": "stripe/stripe-php",
            "version": "v17.6.0",
@@ -1,200 +0,0 @@
# =============================================================================
# NGINX PRODUCTION configuration for pra-geo (IN4)
# Date: 2025-10-07
# Environment: PRODUCTION
# Server: pra-geo container (13.23.34.43)
# Port: 80 only (HTTP)
# SSL/HTTPS: handled by the NGINX reverse proxy on the IN4 host
# =============================================================================

# Main site (static web)
server {
    listen 80;
    server_name geosector.fr;

    root /var/www/geosector/web;
    index index.html;

    # PRODUCTION logs
    access_log /var/log/nginx/geosector-web_access.log combined;
    error_log /var/log/nginx/geosector-web_error.log warn;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Static assets with aggressive caching
    location ~* \.(jpg|jpeg|png|gif|ico|svg|webp|css|js|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Protect sensitive files
    location ~ /\.(?!well-known) {
        deny all;
        access_log off;
        log_not_found off;
    }
}

# =============================================================================
# FLUTTER APPLICATION + PHP API
# =============================================================================

server {
    listen 80;
    server_name app3.geosector.fr;

    # PRODUCTION logs
    access_log /var/log/nginx/pra-app_access.log combined;
    error_log /var/log/nginx/pra-app_error.log warn;

    # Recover the real client IP from the reverse proxy
    set_real_ip_from 13.23.34.0/24;   # Incus network
    set_real_ip_from 51.159.7.190;    # IN4 public IP
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    # Maximum upload size (logos, exports, etc.)
    client_max_body_size 10M;
    client_body_buffer_size 128k;

    # Timeouts tuned for PRODUCTION
    client_body_timeout 30s;
    client_header_timeout 30s;
    send_timeout 60s;

    # =============================================================================
    # FLUTTER APPLICATION (static content)
    # =============================================================================
    location / {
        root /var/www/geosector/app;
        index index.html;
        try_files $uri $uri/ /index.html;

        # Smart caching for PRODUCTION
        # HTML: no cache (so deployments take effect)
        location ~* \.html$ {
            expires -1;
            add_header Cache-Control "no-cache, no-store, must-revalidate";
        }

        # Hashed Flutter assets (JS, CSS, fonts): aggressive caching
        location ~* \.(js|css|woff|woff2|ttf|eot)$ {
            expires 1y;
            add_header Cache-Control "public, immutable";
            access_log off;
        }

        # Images: long-lived cache
        location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ {
            expires 30d;
            add_header Cache-Control "public";
            access_log off;
        }
    }

    # =============================================================================
    # PHP API (RESTful)
    # =============================================================================
    location /api/ {
        root /var/www/geosector;

        # CORS - the IN4 reverse proxy already adds the CORS headers;
        # add them here for internal requests if needed

        # API caching: none (dynamic data)
        add_header Cache-Control "no-store, no-cache, must-revalidate" always;
        add_header Pragma "no-cache" always;
        add_header Expires "0" always;

        # Rewrite to index.php
        try_files $uri $uri/ /api/index.php$is_args$args;

        # PHP handling
        location ~ ^/api/(.+\.php)$ {
            root /var/www/geosector;

            # FastCGI PHP-FPM
            fastcgi_pass unix:/run/php-fpm83/php-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;

            # PRODUCTION environment variables
            fastcgi_param APP_ENV "production";
            fastcgi_param SERVER_NAME "app3.geosector.fr";

            # Headers passed to PHP (coming from the reverse proxy)
            fastcgi_param HTTP_X_REAL_IP $http_x_real_ip;
            fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;
            fastcgi_param HTTP_X_FORWARDED_PROTO $http_x_forwarded_proto;
            fastcgi_param HTTPS $http_x_forwarded_proto;

            # Timeouts for long operations (sync, exports)
            fastcgi_read_timeout 300;
            fastcgi_send_timeout 300;
            fastcgi_connect_timeout 60;

            # Tuned buffers
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 16k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
        }
    }

    # =============================================================================
    # UPLOADS AND MEDIA
    # =============================================================================
    location /api/uploads/ {
        alias /var/www/geosector/api/uploads/;

        # Cache for uploaded media
        expires 7d;
        add_header Cache-Control "public";

        # Security: prevent script execution
        location ~ \.(php|phtml|php3|php4|php5|phps)$ {
            deny all;
        }
    }

    # =============================================================================
    # SECURITY
    # =============================================================================

    # Block access to sensitive files
    location ~ /\.(?!well-known) {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Block access to configuration files
    location ~* \.(env|sql|bak|backup|swp|config|conf|ini|log)$ {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Reject invalid request methods
    if ($request_method !~ ^(GET|HEAD|POST|PUT|DELETE|PATCH|OPTIONS)$) {
        return 405;
    }

    # =============================================================================
    # MONITORING
    # =============================================================================

    # Health-check endpoint (internal access only)
    location = /nginx-health {
        access_log off;
        allow 127.0.0.1;
        allow 13.23.34.0/24;   # Incus internal network
        deny all;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }
}
@@ -1,290 +0,0 @@
# =============================================================================
# NGINX PRODUCTION configuration for pra-geo (IN4)
# Date: 2025-10-07
# Environment: PRODUCTION
# Server: IN4 (51.159.7.190)
# =============================================================================

# Main site (redirect to www or app)
server {
    listen 80;
    server_name geosector.fr;

    # Permanent redirect to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name geosector.fr;

    # SSL certificates
    ssl_certificate /etc/letsencrypt/live/geosector.fr/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/geosector.fr/privkey.pem;

    # Tuned SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_stapling on;
    ssl_stapling_verify on;

    root /var/www/geosector/web;
    index index.html;

    # PRODUCTION logs
    access_log /var/log/nginx/geosector-web_access.log combined;
    error_log /var/log/nginx/geosector-web_error.log warn;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Static assets with aggressive caching
    location ~* \.(jpg|jpeg|png|gif|ico|svg|webp|css|js|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Protect sensitive files
    location ~ /\.(?!well-known) {
        deny all;
        access_log off;
        log_not_found off;
    }
}

# =============================================================================
# FLUTTER APPLICATION + PHP API
# =============================================================================

# HTTP → HTTPS redirect
server {
    listen 80;
    server_name app3.geosector.fr;

    # Allow Let's Encrypt validation
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
        allow all;
    }

    # Permanent redirect to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name app3.geosector.fr;

    # SSL certificates
    ssl_certificate /etc/letsencrypt/live/app3.geosector.fr/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app3.geosector.fr/privkey.pem;

    # Tuned SSL configuration (same as above)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_stapling on;
    ssl_stapling_verify on;

    # PRODUCTION logs
    access_log /var/log/nginx/pra-app_access.log combined;
    error_log /var/log/nginx/pra-app_error.log warn;

    # Global security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Maximum upload size (logos, exports, etc.)
    client_max_body_size 10M;
    client_body_buffer_size 128k;

    # Timeouts tuned for PRODUCTION
    client_body_timeout 30s;
    client_header_timeout 30s;
    send_timeout 60s;

    # =============================================================================
    # FLUTTER APPLICATION (static content)
    # =============================================================================
    location / {
        root /var/www/geosector/app;
        index index.html;
        try_files $uri $uri/ /index.html;

        # Smart caching for PRODUCTION
        # HTML: no cache (so deployments take effect)
        location ~* \.html$ {
            expires -1;
            add_header Cache-Control "no-cache, no-store, must-revalidate";
        }

        # Hashed Flutter assets (JS, CSS, fonts): aggressive caching
        location ~* \.(js|css|woff|woff2|ttf|eot)$ {
            expires 1y;
            add_header Cache-Control "public, immutable";
            access_log off;
        }

        # Images: long-lived cache
        location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ {
            expires 30d;
            add_header Cache-Control "public";
            access_log off;
        }
    }

    # =============================================================================
    # PHP API (RESTful)
    # =============================================================================
    location /api/ {
        root /var/www/geosector;

        # CORS - allow-list of authorized origins in PRODUCTION
        set $cors_origin "";

        # Only allow the production domains
        if ($http_origin ~* ^https://(app\.geosector\.fr|geosector\.fr)$) {
            set $cors_origin $http_origin;
        }

        # Handle preflight requests (OPTIONS)
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' $cors_origin always;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, PATCH, OPTIONS' always;
            add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization' always;
            add_header 'Access-Control-Allow-Credentials' 'true' always;
            add_header 'Access-Control-Max-Age' 86400;
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }

        # CORS headers for normal requests
        add_header 'Access-Control-Allow-Origin' $cors_origin always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, PATCH, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization' always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;

        # API caching: none (dynamic data)
        add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate" always;
        add_header Pragma "no-cache" always;
        add_header Expires "0" always;

        # API-specific security headers
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Frame-Options "DENY" always;

        # Rewrite to index.php
        try_files $uri $uri/ /api/index.php$is_args$args;

        # PHP handling
        location ~ ^/api/(.+\.php)$ {
            root /var/www/geosector;

            # FastCGI PHP-FPM
            fastcgi_pass unix:/run/php-fpm83/php-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;

            # PRODUCTION environment variables
            fastcgi_param APP_ENV "production";
            fastcgi_param SERVER_NAME "app3.geosector.fr";

            # Headers passed to PHP
            fastcgi_param HTTP_X_REAL_IP $remote_addr;
            fastcgi_param HTTP_X_FORWARDED_FOR $proxy_add_x_forwarded_for;
            fastcgi_param HTTP_X_FORWARDED_PROTO $scheme;

            # Timeouts for long operations (sync, exports)
            fastcgi_read_timeout 300;
            fastcgi_send_timeout 300;
            fastcgi_connect_timeout 60;

            # Tuned buffers
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 16k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;

            # CORS headers for PHP responses
            add_header 'Access-Control-Allow-Origin' $cors_origin always;
            add_header 'Access-Control-Allow-Credentials' 'true' always;
        }
    }

    # =============================================================================
    # UPLOADS AND MEDIA
    # =============================================================================
    location /api/uploads/ {
        alias /var/www/geosector/api/uploads/;

        # Cache for uploaded media
        expires 7d;
        add_header Cache-Control "public";

        # Security: prevent script execution
        location ~ \.(php|phtml|php3|php4|php5|phps)$ {
            deny all;
        }
    }

    # =============================================================================
    # SECURITY
    # =============================================================================

    # Block access to sensitive files
    location ~ /\.(?!well-known) {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Block access to configuration files
    location ~* \.(env|sql|bak|backup|swp|config|conf|ini|log)$ {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Block malicious user agents
    if ($http_user_agent ~* (bot|crawler|spider|scraper|wget|curl)) {
        return 403;
    }

    # Reject invalid request methods
    if ($request_method !~ ^(GET|HEAD|POST|PUT|DELETE|PATCH|OPTIONS)$) {
        return 405;
    }

    # =============================================================================
    # MONITORING
    # =============================================================================

    # Health-check endpoint (local access only)
    location = /nginx-health {
        access_log off;
        allow 127.0.0.1;
        allow 13.23.34.0/24;   # Incus internal network
        deny all;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }
}
@@ -1 +0,0 @@
{"ip":"169.155.255.55","timestamp":1758618220,"retrieved_at":"2025-09-23 09:03:41"}
@@ -1,30 +0,0 @@
# data directory

This directory holds reference data for the API.

## Files

- `stripe_certified_devices.json` (optional): custom list of Stripe Tap to Pay certified devices

## stripe_certified_devices.json format

To add devices on top of the built-in list, create a `stripe_certified_devices.json` file with the following format:

```json
[
    {
        "manufacturer": "Samsung",
        "model": "Galaxy A55",
        "model_identifier": "SM-A556B",
        "min_android_version": 14
    },
    {
        "manufacturer": "Fairphone",
        "model": "Fairphone 5",
        "model_identifier": "FP5",
        "min_android_version": 13
    }
]
```

Devices listed in this file are appended to the built-in list in the CRON script.
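The merge behavior described in that README (custom file appended to the built-in list) can be sketched as follows. This is a minimal illustration, not the actual CRON script: the function name, the built-in entries, and the de-duplication by `model_identifier` are assumptions.

```python
import json

# Illustrative subset of the built-in list; the real list lives in the CRON script.
BUILTIN_DEVICES = [
    {"manufacturer": "Samsung", "model": "Galaxy S23",
     "model_identifier": "SM-S911B", "min_android_version": 13},
]

def merge_certified_devices(custom_json: str) -> list:
    """Append devices from stripe_certified_devices.json to the built-in
    list, skipping entries whose model_identifier is already known."""
    merged = list(BUILTIN_DEVICES)
    known = {d["model_identifier"] for d in merged}
    for device in json.loads(custom_json):
        if device["model_identifier"] not in known:
            merged.append(device)
            known.add(device["model_identifier"])
    return merged

if __name__ == "__main__":
    custom = json.dumps([
        {"manufacturer": "Fairphone", "model": "Fairphone 5",
         "model_identifier": "FP5", "min_android_version": 13},
    ])
    print([d["model_identifier"] for d in merge_certified_devices(custom)])
```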
@@ -1,42 +1,27 @@
|
|||||||
#!/bin/bash
|
#!/bin/bash
|
||||||
|
|
||||||
# Script de déploiement unifié pour GEOSECTOR API
|
# Script de déploiement pour GEOSECTOR API
|
||||||
# Version: 4.0 (Janvier 2025)
|
# Version: 3.0 (10 mai 2025)
|
||||||
# Auteur: Pierre (avec l'aide de Claude)
|
# Auteur: Pierre (avec l'aide de Claude)
|
||||||
#
|
|
||||||
# Usage:
|
|
||||||
# ./deploy-api.sh # Déploiement local DEV (code → container geo)
|
|
||||||
# ./deploy-api.sh rca # Livraison RECETTE (container geo → rca-geo)
|
|
||||||
# ./deploy-api.sh pra # Livraison PRODUCTION (rca-geo → pra-geo)
|
|
||||||
|
|
||||||
set -euo pipefail
|
set -euo pipefail
|
||||||
|
|
||||||
# =====================================
|
|
||||||
# Configuration générale
|
|
||||||
# =====================================
|
|
||||||
|
|
||||||
# Paramètre optionnel pour l'environnement cible
|
|
||||||
TARGET_ENV=${1:-dev}
|
|
||||||
|
|
||||||
# Configuration SSH
|
|
||||||
HOST_KEY="/home/pierre/.ssh/id_rsa_mbpi"
|
|
||||||
HOST_PORT="22"
|
|
||||||
HOST_USER="root"
|
|
||||||
|
|
||||||
# Configuration des serveurs
|
# Configuration des serveurs
|
||||||
RCA_HOST="195.154.80.116" # IN3 - Serveur de recette
|
JUMP_USER="root"
|
||||||
PRA_HOST="51.159.7.190" # IN4 - Serveur de production
|
JUMP_HOST="195.154.80.116"
|
||||||
|
JUMP_PORT="22"
|
||||||
|
JUMP_KEY="/home/pierre/.ssh/id_rsa_mbpi"
|
||||||
|
|
||||||
# Configuration Incus
|
# Paramètres du container Incus
|
||||||
INCUS_PROJECT="default"
|
INCUS_PROJECT=default
|
||||||
API_PATH="/var/www/geosector/api"
|
INCUS_CONTAINER=dva-geo
|
||||||
|
CONTAINER_USER=root
|
||||||
|
|
||||||
|
# Paramètres de déploiement
|
||||||
|
FINAL_PATH="/var/www/geosector/api"
|
||||||
FINAL_OWNER="nginx"
|
FINAL_OWNER="nginx"
|
||||||
FINAL_GROUP="nginx"
|
FINAL_GROUP="nginx"
|
||||||
FINAL_OWNER_LOGS="nobody"
|
FINAL_OWNER_LOGS="nobody"
|
||||||
FINAL_GROUP_LOGS="nginx"
|
|
||||||
|
|
||||||
# Configuration de sauvegarde
|
|
||||||
BACKUP_DIR="/data/backup/geosector/api"
|
|
||||||
|
|
||||||
# Couleurs pour les messages
|
# Couleurs pour les messages
|
||||||
GREEN='\033[0;32m'
|
GREEN='\033[0;32m'
|
||||||
@@ -45,339 +30,134 @@ YELLOW='\033[0;33m'
|
|||||||
BLUE='\033[0;34m'
|
BLUE='\033[0;34m'
|
||||||
NC='\033[0m' # No Color
|
NC='\033[0m' # No Color
|
||||||
|
|
||||||
# =====================================
|
run_in_container() {
|
||||||
# Fonctions utilitaires
|
echo "-> Running: $*"
|
||||||
# =====================================
|
incus exec "${INCUS_CONTAINER}" -- "$@" || {
|
||||||
|
echo "❌ Failed to run: $*"
|
||||||
|
exit 1
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
# Fonction pour afficher les messages d'étape
|
||||||
echo_step() {
|
echo_step() {
|
||||||
echo -e "${GREEN}==>${NC} $1"
|
echo -e "${GREEN}==>${NC} $1"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
# Fonction pour afficher les informations
|
||||||
echo_info() {
|
echo_info() {
|
||||||
echo -e "${BLUE}Info:${NC} $1"
|
echo -e "${BLUE}Info:${NC} $1"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
# Fonction pour afficher les avertissements
|
||||||
echo_warning() {
|
echo_warning() {
|
||||||
echo -e "${YELLOW}Warning:${NC} $1"
|
echo -e "${YELLOW}Warning:${NC} $1"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
# Fonction pour afficher les erreurs
|
||||||
echo_error() {
|
echo_error() {
|
||||||
echo -e "${RED}Error:${NC} $1"
|
echo -e "${RED}Error:${NC} $1"
|
||||||
exit 1
|
exit 1
|
||||||
}
|
}
|
||||||
|
|
||||||
# Fonction pour nettoyer les anciens backups
|
# Vérification de l'environnement
|
||||||
cleanup_old_backups() {
|
echo_step "Verifying environment..."
|
||||||
local prefix=""
|
|
||||||
case $TARGET_ENV in
|
|
||||||
"dev") prefix="api-dev-" ;;
|
|
||||||
"rca") prefix="api-rca-" ;;
|
|
||||||
"pra") prefix="api-pra-" ;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
echo_info "Cleaning old backups (keeping last 5)..."
|
# Vérification des fichiers requis
|
||||||
ls -t "${BACKUP_DIR}"/${prefix}*.tar.gz 2>/dev/null | tail -n +6 | xargs -r rm -f && {
|
if [ ! -f "src/Config/AppConfig.php" ]; then
|
||||||
REMAINING_BACKUPS=$(ls "${BACKUP_DIR}"/${prefix}*.tar.gz 2>/dev/null | wc -l)
|
echo_error "Configuration file missing"
|
||||||
echo_info "Kept ${REMAINING_BACKUPS} backup(s) for ${TARGET_ENV}"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
# =====================================
|
|
||||||
# Détermination de la configuration selon l'environnement
|
|
||||||
# =====================================
|
|
||||||
|
|
||||||
case $TARGET_ENV in
|
|
||||||
"dev")
|
|
||||||
echo_step "Configuring for DEV deployment on IN3"
|
|
||||||
SOURCE_TYPE="local_code"
|
|
||||||
DEST_CONTAINER="dva-geo"
|
|
||||||
DEST_HOST="${RCA_HOST}" # IN3 pour le DEV aussi
|
|
||||||
ENV_NAME="DEVELOPMENT"
|
|
||||||
;;
|
|
||||||
"rca")
|
|
||||||
echo_step "Configuring for RECETTE delivery"
|
|
||||||
SOURCE_TYPE="remote_container"
|
|
||||||
SOURCE_CONTAINER="dva-geo"
|
|
||||||
SOURCE_HOST="${RCA_HOST}"
|
|
||||||
DEST_CONTAINER="rca-geo"
|
|
||||||
DEST_HOST="${RCA_HOST}"
|
|
||||||
ENV_NAME="RECETTE"
|
|
||||||
;;
|
|
||||||
"pra")
|
|
||||||
echo_step "Configuring for PRODUCTION delivery"
|
|
||||||
SOURCE_TYPE="remote_container"
|
|
||||||
SOURCE_HOST="${RCA_HOST}"
|
|
||||||
SOURCE_CONTAINER="rca-geo"
|
|
||||||
DEST_CONTAINER="pra-geo"
|
|
||||||
DEST_HOST="${PRA_HOST}"
|
|
||||||
ENV_NAME="PRODUCTION"
|
|
||||||
;;
|
|
||||||
*)
|
|
||||||
echo_error "Unknown environment: $TARGET_ENV. Use 'dev', 'rca' or 'pra'"
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
echo_info "Deployment flow: ${ENV_NAME}"
|
|
||||||
|
|
||||||
# =====================================
|
|
||||||
# Création de l'archive selon la source
|
|
||||||
# =====================================
|
|
||||||
|
|
||||||
# Créer le dossier de backup s'il n'existe pas
|
|
||||||
if [ ! -d "${BACKUP_DIR}" ]; then
|
|
||||||
echo_info "Creating backup directory ${BACKUP_DIR}..."
|
|
||||||
mkdir -p "${BACKUP_DIR}" || echo_error "Failed to create backup directory"
|
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Horodatage format YYYYMMDDHH
|
if [ ! -f "composer.json" ] || [ ! -f "composer.lock" ]; then
|
||||||
TIMESTAMP=$(date +%Y%m%d%H)
|
echo_error "Composer files missing"
|
||||||
|
|
||||||
# Nom de l'archive selon l'environnement
|
|
||||||
case $TARGET_ENV in
|
|
||||||
"dev")
|
|
||||||
ARCHIVE_NAME="api-dev-${TIMESTAMP}.tar.gz"
|
|
||||||
;;
|
|
||||||
"rca")
|
|
||||||
ARCHIVE_NAME="api-rca-${TIMESTAMP}.tar.gz"
|
|
||||||
;;
|
|
||||||
"pra")
|
|
||||||
ARCHIVE_NAME="api-pra-${TIMESTAMP}.tar.gz"
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
ARCHIVE_PATH="${BACKUP_DIR}/${ARCHIVE_NAME}"
|
|
||||||
|
|
||||||
if [ "$SOURCE_TYPE" = "local_code" ]; then
|
|
||||||
# DEV: Créer une archive depuis le code local
|
|
||||||
echo_step "Creating archive from local code..."
|
|
||||||
|
|
||||||
# Vérification des fichiers requis
|
|
||||||
if [ ! -f "src/Config/AppConfig.php" ]; then
|
|
||||||
echo_error "Configuration file missing"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ ! -f "composer.json" ] || [ ! -f "composer.lock" ]; then
|
|
||||||
echo_error "Composer files missing"
|
|
||||||
fi
|
|
||||||
|
|
||||||
tar --exclude='.git' \
|
|
||||||
--exclude='.gitignore' \
|
|
||||||
--exclude='.vscode' \
|
|
||||||
--exclude='logs' \
|
|
||||||
--exclude='sessions' \
|
|
||||||
--exclude='opendata' \
|
|
||||||
--exclude='*.template' \
|
|
||||||
--exclude='*.sh' \
|
|
||||||
--exclude='.env' \
|
|
||||||
--exclude='.env_marker' \
|
|
||||||
--exclude='*.log' \
|
|
||||||
--exclude='.DS_Store' \
|
|
||||||
--exclude='README.md' \
|
|
||||||
--exclude="*.tar.gz" \
|
|
||||||
--exclude='node_modules' \
|
|
||||||
--exclude='vendor' \
|
|
||||||
--exclude='*.swp' \
|
|
||||||
--exclude='*.swo' \
|
|
||||||
--exclude='*~' \
|
|
||||||
-czf "${ARCHIVE_PATH}" . 2>/dev/null || echo_error "Failed to create archive"
|
|
||||||
|
|
||||||
echo_info "Archive created: ${ARCHIVE_PATH}"
|
|
||||||
echo_info "Archive size: $(du -h "${ARCHIVE_PATH}" | cut -f1)"
|
|
||||||
|
|
||||||
# Cette section n'est plus utilisée car RCA utilise maintenant remote_container
|
|
||||||
|
|
||||||
elif [ "$SOURCE_TYPE" = "remote_container" ]; then
|
|
||||||
# RCA et PRA: Créer une archive depuis un container distant
|
|
||||||
echo_step "Creating archive from remote container ${SOURCE_CONTAINER} on ${SOURCE_HOST}..."
|
|
||||||
|
|
||||||
# Créer l'archive sur le serveur source
|
|
||||||
ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST} "
|
|
||||||
incus project switch ${INCUS_PROJECT} &&
|
|
||||||
incus exec ${SOURCE_CONTAINER} -- tar \
|
|
||||||
--exclude='logs' \
|
|
||||||
--exclude='uploads' \
|
|
||||||
--exclude='sessions' \
|
|
||||||
--exclude='opendata' \
|
|
||||||
-czf /tmp/${ARCHIVE_NAME} -C ${API_PATH} .
|
|
||||||
" || echo_error "Failed to create archive on remote"
|
|
||||||
|
|
||||||
# Extraire l'archive du container vers l'hôte
|
|
||||||
ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST} "
|
|
||||||
incus file pull ${SOURCE_CONTAINER}/tmp/${ARCHIVE_NAME} /tmp/${ARCHIVE_NAME} &&
|
|
||||||
incus exec ${SOURCE_CONTAINER} -- rm -f /tmp/${ARCHIVE_NAME}
|
|
||||||
" || echo_error "Failed to extract archive from remote container"
|
|
||||||
|
|
||||||
# Copier l'archive vers la machine locale
|
|
||||||
scp -i ${HOST_KEY} -P ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST}:/tmp/${ARCHIVE_NAME} ${ARCHIVE_PATH} || echo_error "Failed to copy archive locally"
|
|
||||||
|
|
||||||
echo_info "Archive saved: ${ARCHIVE_PATH}"
|
|
||||||
echo_info "Archive size: $(du -h "${ARCHIVE_PATH}" | cut -f1)"
|
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Nettoyer les anciens backups
|
# Étape 0: Définir le nom de l'archive
|
||||||
cleanup_old_backups
|
ARCHIVE_NAME="api-deploy-$(date +%s).tar.gz"
|
||||||
|
TEMP_ARCHIVE="/tmp/${ARCHIVE_NAME}"
|
||||||
|
echo_info "Archive name will be: $ARCHIVE_NAME"
|
||||||
|
|
||||||
 # =====================================
-# Deployment according to the destination
+# Step 1: Create the project archive
 # =====================================
+echo_step "Creating project archive..."
+tar --exclude='.git' \
+    --exclude='.gitignore' \
+    --exclude='.vscode' \
+    --exclude='logs' \
+    --exclude='*.template' \
+    --exclude='*.sh' \
+    --exclude='.env' \
+    --exclude='*.log' \
+    --exclude='.DS_Store' \
+    --exclude='README.md' \
+    --exclude="*.tar.gz" \
+    --exclude='node_modules' \
+    --exclude='vendor' \
+    --exclude='*.swp' \
+    --exclude='*.swo' \
+    --exclude='*~' \
+    --warning=no-file-changed \
+    --no-xattrs \
+    -czf "${TEMP_ARCHIVE}" . || echo_error "Failed to create archive"
+
+# Check the archive size
+ARCHIVE_SIZE=$(du -h "${TEMP_ARCHIVE}" | cut -f1)
+
-# All deployments are now done on remote containers
-if [ "$DEST_HOST" != "local" ]; then
-    # Deployment to a remote container (DEV, RCA or PRA)
-    echo_step "Deploying to remote container ${DEST_CONTAINER} on ${DEST_HOST}..."
-
-    # Create a backup on the destination server
-    BACKUP_TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
-    REMOTE_BACKUP_DIR="${API_PATH}_backup_${BACKUP_TIMESTAMP}"
-
-    echo_info "Creating backup on destination..."
-    ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${DEST_HOST} "
-        incus project switch ${INCUS_PROJECT} &&
-        incus exec ${DEST_CONTAINER} -- test -d ${API_PATH} &&
-        incus exec ${DEST_CONTAINER} -- cp -r ${API_PATH} ${REMOTE_BACKUP_DIR} &&
-        echo 'Backup created: ${REMOTE_BACKUP_DIR}'
-    " || echo_warning "No existing installation to backup"
-
-    # Transfer the archive to the destination server
-    echo_info "Transferring archive to ${DEST_HOST}..."
-
-    if [ "$SOURCE_TYPE" = "local_code" ]; then
-        # DEV: copy from the local machine to IN3
-        scp -i ${HOST_KEY} -P ${HOST_PORT} ${ARCHIVE_PATH} ${HOST_USER}@${DEST_HOST}:/tmp/${ARCHIVE_NAME} || echo_error "Failed to copy archive to destination"
-    elif [ "$SOURCE_TYPE" = "remote_container" ] && [ "$SOURCE_HOST" = "$DEST_HOST" ]; then
-        # RCA: same server (IN3), no transfer needed, the archive is already there
-        echo_info "Archive already on destination server (same host)"
-    else
-        # PRA: the archive is already on the local machine (copied from IN3);
-        # transfer it now to IN4
-        echo_info "Transferring archive from local to IN4..."
-        scp -i ${HOST_KEY} -P ${HOST_PORT} ${ARCHIVE_PATH} ${HOST_USER}@${DEST_HOST}:/tmp/${ARCHIVE_NAME} || echo_error "Failed to copy archive to IN4"
-
-        # Clean up on the source server IN3
-        ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST} "rm -f /tmp/${ARCHIVE_NAME}" || echo_warning "Could not clean source server"
-    fi
-
-    # Deploy to the destination container
-    echo_info "Extracting on destination container..."
-
-    # Determine the environment name for the marker
-    case $TARGET_ENV in
-        "dev") ENV_MARKER="development" ;;
-        "rca") ENV_MARKER="recette" ;;
-        "pra") ENV_MARKER="production" ;;
-    esac
+# Step 2: Copy the archive to the jump server
+echo_step "Copying archive to jump server..."
+echo_info "Archive size: $ARCHIVE_SIZE"
+scp -i "${JUMP_KEY}" -P "${JUMP_PORT}" "${TEMP_ARCHIVE}" "${JUMP_USER}@${JUMP_HOST}:/tmp/${ARCHIVE_NAME}" || echo_error "Failed to copy archive to jump server"
+SSH_JUMP_CMD="ssh -i ${JUMP_KEY} -p ${JUMP_PORT} ${JUMP_USER}@${JUMP_HOST}"
-    ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${DEST_HOST} "
-        set -euo pipefail
-
-        # Push the archive into the container
-        incus project switch ${INCUS_PROJECT} &&
-        incus file push /tmp/${ARCHIVE_NAME} ${DEST_CONTAINER}/tmp/${ARCHIVE_NAME} &&
-
-        # Clean selectively (preserve logs, uploads and sessions)
-        incus exec ${DEST_CONTAINER} -- find ${API_PATH} -mindepth 1 -maxdepth 1 ! -name 'uploads' ! -name 'logs' ! -name 'sessions' -exec rm -rf {} \; 2>/dev/null || true &&
-
-        # Extract the archive
-        incus exec ${DEST_CONTAINER} -- tar -xzf /tmp/${ARCHIVE_NAME} -C ${API_PATH}/ &&
-
-        # Create the environment marker for CLI detection
-        incus exec ${DEST_CONTAINER} -- bash -c 'echo \"${ENV_MARKER}\" > ${API_PATH}/.env_marker' &&
-
-        # Permissions
-        incus exec ${DEST_CONTAINER} -- chown -R ${FINAL_OWNER}:${FINAL_GROUP} ${API_PATH} &&
-        incus exec ${DEST_CONTAINER} -- find ${API_PATH} -type d -exec chmod 755 {} \; &&
-        incus exec ${DEST_CONTAINER} -- find ${API_PATH} -type f -exec chmod 644 {} \; &&
-
-        # Special permissions for the logs directory (so PHP-FPM running as the nobody user can write to it)
-        incus exec ${DEST_CONTAINER} -- mkdir -p ${API_PATH}/logs/events &&
-        incus exec ${DEST_CONTAINER} -- chown -R ${FINAL_OWNER_LOGS}:${FINAL_GROUP} ${API_PATH}/logs &&
-        incus exec ${DEST_CONTAINER} -- find ${API_PATH}/logs -type d -exec chmod 750 {} \; &&
-        incus exec ${DEST_CONTAINER} -- find ${API_PATH}/logs -type f -exec chmod 640 {} \; &&
-
-        # Special permissions for uploads
-        incus exec ${DEST_CONTAINER} -- mkdir -p ${API_PATH}/uploads &&
-        incus exec ${DEST_CONTAINER} -- chown -R ${FINAL_OWNER_LOGS}:${FINAL_GROUP} ${API_PATH}/uploads &&
-        incus exec ${DEST_CONTAINER} -- find ${API_PATH}/uploads -type d -exec chmod 750 {} \; &&
-        incus exec ${DEST_CONTAINER} -- find ${API_PATH}/uploads -type f -exec chmod 640 {} \; &&
-
-        # Special permissions for sessions
-        incus exec ${DEST_CONTAINER} -- mkdir -p ${API_PATH}/sessions &&
-        incus exec ${DEST_CONTAINER} -- chown -R ${FINAL_OWNER_LOGS}:${FINAL_GROUP} ${API_PATH}/sessions &&
-        incus exec ${DEST_CONTAINER} -- chmod 700 ${API_PATH}/sessions &&
-
-        # Composer (strict install - failure is blocking)
-        incus exec ${DEST_CONTAINER} -- bash -c 'cd ${API_PATH} && composer install --no-dev --optimize-autoloader' || { echo 'ERROR: Composer install failed'; exit 1; } &&
-
-        # Cleanup
-        incus exec ${DEST_CONTAINER} -- rm -f /tmp/${ARCHIVE_NAME} &&
-        rm -f /tmp/${ARCHIVE_NAME}
-    " || echo_error "Deployment failed on destination"
-
-    echo_info "Remote backup saved: ${REMOTE_BACKUP_DIR} on ${DEST_CONTAINER}"
+# Step 3: Run the commands on the jump server to deploy into the Incus container
+echo_step "Deploying to Incus container..."
+$SSH_JUMP_CMD "
+    set -euo pipefail
+
+    echo '✅ Switching to the Incus project...'
+    incus project switch ${INCUS_PROJECT} || exit 1
+
+    echo '📦 Pushing the archive into the container...'
+    incus file push /tmp/${ARCHIVE_NAME} ${INCUS_CONTAINER}/tmp/${ARCHIVE_NAME} || exit 1
+
+    echo '📁 Preparing the final directory...'
+    incus exec ${INCUS_CONTAINER} -- mkdir -p ${FINAL_PATH} || exit 1
+    incus exec ${INCUS_CONTAINER} -- rm -rf ${FINAL_PATH}/* || exit 1
+    incus exec ${INCUS_CONTAINER} -- tar -xzf /tmp/${ARCHIVE_NAME} -C ${FINAL_PATH}/ || exit 1
+
+    echo '🔧 Setting permissions...'
+    incus exec ${INCUS_CONTAINER} -- mkdir -p ${FINAL_PATH}/logs || exit 1
+    incus exec ${INCUS_CONTAINER} -- chown -R ${FINAL_OWNER}:${FINAL_GROUP} ${FINAL_PATH} || exit 1
+    incus exec ${INCUS_CONTAINER} -- find ${FINAL_PATH} -type d -exec chmod 755 {} \; || exit 1
+    incus exec ${INCUS_CONTAINER} -- find ${FINAL_PATH} -type f -exec chmod 644 {} \; || exit 1
+
+    # Special permissions for logs
+    incus exec ${INCUS_CONTAINER} -- chown -R ${FINAL_OWNER}:${FINAL_OWNER_LOGS} ${FINAL_PATH}/logs || exit 1
+    incus exec ${INCUS_CONTAINER} -- chmod -R 775 ${FINAL_PATH}/logs || exit 1
+    incus exec ${INCUS_CONTAINER} -- find ${FINAL_PATH}/logs -type f -exec chmod 664 {} \; || exit 1
+
+    echo '📁 Creating the uploads directories...'
+    incus exec ${INCUS_CONTAINER} -- mkdir -p ${FINAL_PATH}/uploads || exit 1
+    incus exec ${INCUS_CONTAINER} -- chown -R ${FINAL_OWNER}:${FINAL_OWNER_LOGS} ${FINAL_PATH}/uploads || exit 1
+    incus exec ${INCUS_CONTAINER} -- chmod -R 775 ${FINAL_PATH}/uploads || exit 1
+    incus exec ${INCUS_CONTAINER} -- find ${FINAL_PATH}/uploads -type f -exec chmod 664 {} \; || exit 1
+
+    echo '📦 Updating Composer dependencies...'
+    incus exec ${INCUS_CONTAINER} -- bash -c 'cd ${FINAL_PATH} && composer update --no-dev --optimize-autoloader' || {
+        echo '⚠️ Composer unavailable or failed, continuing without updating dependencies'
+    }
+
+    echo '🧹 Cleaning up...'
+    incus exec ${INCUS_CONTAINER} -- rm -f /tmp/${ARCHIVE_NAME} || exit 1
+    rm -f /tmp/${ARCHIVE_NAME} || exit 1
+"
-    # Clean up old backups on the remote container
-    echo_info "Cleaning old backup directories on ${DEST_CONTAINER}..."
-    ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${DEST_HOST} "
-        incus exec ${DEST_CONTAINER} -- bash -c 'rm -rf ${API_PATH}_backup_*'
-    " && echo_info "Old backups cleaned" || echo_warning "Could not clean old backups"
-
-    # =====================================
-    # CRON task configuration
-    # =====================================
-    echo_step "Configuring CRON tasks..."
-    ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${DEST_HOST} "
-    incus exec ${DEST_CONTAINER} -- bash <<'EOFCRON'
-# Save the existing cron entries (excluding geosector)
-crontab -l 2>/dev/null | grep -v 'geosector/api/scripts/cron' > /tmp/crontab_backup || true
-
-# Build the new crontab with the CRON tasks for the API
-cat /tmp/crontab_backup > /tmp/new_crontab
-cat >> /tmp/new_crontab <<'EOF'
-
-# GEOSECTOR API - Email queue processing (every 5 minutes)
-*/5 * * * * /usr/bin/php /var/www/geosector/api/scripts/cron/process_email_queue.php >> /var/www/geosector/api/logs/email_queue.log 2>&1
-
-# GEOSECTOR API - Security data cleanup (daily at 2am)
-0 2 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/cleanup_security_data.php >> /var/www/geosector/api/logs/cleanup_security.log 2>&1
-
-# GEOSECTOR API - Stripe devices update (weekly Sunday at 3am)
-0 3 * * 0 /usr/bin/php /var/www/geosector/api/scripts/cron/update_stripe_devices.php >> /var/www/geosector/api/logs/stripe_devices.log 2>&1
-EOF
-
-# Install the new crontab
-crontab /tmp/new_crontab
-
-# Clean up
-rm -f /tmp/crontab_backup /tmp/new_crontab
-
-# Show the installed cron entries
-echo 'CRON tasks installed:'
-crontab -l | grep geosector
-EOFCRON
-    " && echo_info "CRON tasks configured successfully" || echo_warning "CRON configuration failed"
-fi
-
-# The archive stays in the backup directory, no cleanup needed
-echo_info "Archive preserved in backup directory: ${ARCHIVE_PATH}"
+# Local cleanup
+rm -f "${TEMP_ARCHIVE}"

 # =====================================
 # Final summary
 # =====================================
-echo_step "Deployment completed successfully!"
-echo_info "Environment: ${ENV_NAME}"
-
-if [ "$TARGET_ENV" = "dev" ]; then
-    echo_info "Deployed from local code to container ${DEST_CONTAINER} on IN3 (${DEST_HOST})"
-elif [ "$TARGET_ENV" = "rca" ]; then
-    echo_info "Delivered from ${SOURCE_CONTAINER} to ${DEST_CONTAINER} on ${DEST_HOST}"
-elif [ "$TARGET_ENV" = "pra" ]; then
-    echo_info "Delivered from ${SOURCE_CONTAINER} on ${SOURCE_HOST} to ${DEST_CONTAINER} on ${DEST_HOST}"
-fi
+echo_step "Deployment completed successfully."
+echo_info "Your API has been updated on the container."

 echo_info "Deployment completed at: $(date)"

 # Log the deployment
-echo "$(date '+%Y-%m-%d %H:%M:%S') - API deployed to ${ENV_NAME} (${DEST_CONTAINER}) - Archive: ${ARCHIVE_NAME}" >> ~/.geo_deploy_history
+echo "$(date '+%Y-%m-%d %H:%M:%S') - API deployed to ${JUMP_HOST}:${INCUS_CONTAINER}" >> ~/.geo_deploy_history
@@ -1,495 +0,0 @@
# JSONL event log system

## 📋 Overview

An audit trail for business events, used for statistics and auditing, stored as JSONL (JSON Lines) files with no impact on the main database.

**Created:** 26 October 2025
**Retention:** 15 months
**Format:** JSONL (one line = one JSON event)

## 🎯 Goals
### Traced events

**Authentication**
- Successful logins (user_id, entity_id, platform, IP)
- Failed attempts (username, reason, IP, attempt count)

**Business CRUD**
- **Passages**: creation, update, deletion
- **Sectors**: creation, update, deletion
- **Members**: creation, update, deletion
- **Entities**: creation, update, deletion

### Use cases

**1. Entity admin**
- Stats for their entity: logins, passages, sectors over 1 day/week/month
- Activity of the entity's members

**2. Super-admin**
- Global stats: all passages modified over 2 weeks
- Events across all entities over a given period
- Anomaly detection
## 📁 Storage architecture

### Directory layout

```
/logs/events/
├── 2025-10-26.jsonl        # Current day's file (append writes)
├── 2025-10-25.jsonl
├── 2025-10-24.jsonl
├── 2025-09-30.jsonl
├── 2025-09-29.jsonl.gz     # Auto-compressed after 30 days
└── archive/
    ├── 2025-09.jsonl.gz    # Monthly archive
    ├── 2025-08.jsonl.gz
    └── 2024-07.jsonl.gz    # Auto-deleted after 15 months
```

### File lifecycle

| Age | State | Estimated size | Access |
|-----|-------|----------------|--------|
| 0-30 days | uncompressed `.jsonl` | 1-10 MB/day | Fast direct reads |
| 30 days-15 months | compressed `.jsonl.gz` | ~100 KB/day | On-the-fly decompression |
| > 15 months | Deleted automatically | - | - |

### Rotation and retention

**Monthly CRON**: `scripts/cron/rotate_event_logs.php`
- **Schedule**: 1st of the month at 3:00
- **Actions**:
  1. Compress `.jsonl` files older than 30 days into `.jsonl.gz`
  2. Delete `.jsonl.gz` files older than 15 months
  3. Log the rotation summary

**Manual command**:
```bash
php scripts/cron/rotate_event_logs.php
```
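The two rotation actions above map onto plain `find` invocations. A runnable sketch, assuming a relative `logs/events` layout (the real job is `rotate_event_logs.php`; the 455-day figure approximates 15 months):

```shell
# Sketch of the rotation logic in plain shell (illustrative, not the real CRON job).
EVENTS_DIR="logs/events"
mkdir -p "$EVENTS_DIR"   # ensure the directory exists so the sketch runs standalone

# 1. Compress daily files older than 30 days (gzip keeps the original mtime)
find "$EVENTS_DIR" -maxdepth 1 -name '*.jsonl' -mtime +30 -exec gzip {} \;

# 2. Delete compressed files older than 15 months (~455 days)
find "$EVENTS_DIR" -name '*.jsonl.gz' -mtime +455 -delete
```

Because `gzip` preserves the original modification time, the age test in step 2 keeps counting from the day the events were written, not from the day of compression.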
## 📊 Event format

### Common structure

All events share these fields:

```json
{
  "timestamp": "2025-10-26T14:32:15Z",  // ISO 8601 UTC
  "event": "event_name",                // Event type
  "user_id": 123,                       // User ID (if authenticated)
  "entity_id": 5,                       // Entity ID (if applicable)
  "ip": "192.168.1.100",                // Client IP
  "platform": "ios|android|web",        // Platform
  "app_version": "3.3.6"                // App version (mobile only)
}
```
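Appending an event in this format is a single write to the current day's file. A minimal sketch (the field values are illustrative, not produced by the real `EventLogService`):

```shell
# Append one event in the common format to today's JSONL file.
mkdir -p logs/events
FILE="logs/events/$(date -u +%F).jsonl"
printf '{"timestamp":"%s","event":"login_success","user_id":123,"entity_id":5,"ip":"192.168.1.100","platform":"web"}\n' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$FILE"
```

One event per line is what makes the later `grep`/`jq` style analysis cheap: no JSON array to parse, just line-oriented filtering.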
### Authentication events

#### Successful login
```jsonl
{"timestamp":"2025-10-26T14:32:15Z","event":"login_success","user_id":123,"entity_id":5,"platform":"ios","app_version":"3.3.6","ip":"192.168.1.100","username":"user123"}
```

#### Failed login
```jsonl
{"timestamp":"2025-10-26T14:35:22Z","event":"login_failed","username":"test","reason":"invalid_password","ip":"192.168.1.101","attempt":3,"platform":"web"}
```

**Possible reasons**: `invalid_password`, `user_not_found`, `account_inactive`, `blocked_ip`

#### Logout
```jsonl
{"timestamp":"2025-10-26T16:45:00Z","event":"logout","user_id":123,"entity_id":5,"platform":"android","session_duration":7800}
```

### Passage events

#### Creation
```jsonl
{"timestamp":"2025-10-26T14:40:10Z","event":"passage_created","passage_id":45678,"user_id":123,"entity_id":5,"operation_id":789,"sector_id":12,"amount":50.00,"payment_type":"cash","platform":"android"}
```

#### Update
```jsonl
{"timestamp":"2025-10-26T14:42:05Z","event":"passage_updated","passage_id":45678,"user_id":123,"entity_id":5,"changes":{"amount":{"old":50.00,"new":75.00},"payment_type":{"old":"cash","new":"stripe"}},"platform":"ios"}
```

#### Deletion
```jsonl
{"timestamp":"2025-10-26T14:45:30Z","event":"passage_deleted","passage_id":45678,"user_id":123,"entity_id":5,"operation_id":789,"deleted_by":123,"soft_delete":true,"platform":"web"}
```

### Sector events

#### Creation
```jsonl
{"timestamp":"2025-10-26T15:10:00Z","event":"sector_created","sector_id":456,"operation_id":789,"entity_id":5,"user_id":123,"sector_name":"Secteur A","platform":"web"}
```

#### Update
```jsonl
{"timestamp":"2025-10-26T15:12:00Z","event":"sector_updated","sector_id":456,"operation_id":789,"entity_id":5,"user_id":123,"changes":{"sector_name":{"old":"Secteur A","new":"Secteur Alpha"}},"platform":"web"}
```

#### Deletion
```jsonl
{"timestamp":"2025-10-26T15:15:00Z","event":"sector_deleted","sector_id":456,"operation_id":789,"entity_id":5,"user_id":123,"deleted_by":123,"soft_delete":true,"platform":"web"}
```

### Member (user) events

#### Creation
```jsonl
{"timestamp":"2025-10-26T15:20:00Z","event":"user_created","new_user_id":789,"entity_id":5,"created_by":123,"role_id":1,"username":"newuser","platform":"web"}
```

#### Update
```jsonl
{"timestamp":"2025-10-26T15:25:00Z","event":"user_updated","user_id":789,"entity_id":5,"updated_by":123,"changes":{"role_id":{"old":1,"new":2},"encrypted_phone":true},"platform":"web"}
```

**Note**: Encrypted fields are flagged with a boolean `true`, without exposing the values

#### Deletion
```jsonl
{"timestamp":"2025-10-26T15:30:00Z","event":"user_deleted","user_id":789,"entity_id":5,"deleted_by":123,"soft_delete":true,"platform":"web"}
```

### Entity events

#### Creation
```jsonl
{"timestamp":"2025-10-26T15:35:00Z","event":"entity_created","entity_id":25,"created_by":1,"entity_type_id":1,"postal_code":"75001","platform":"web"}
```

#### Update
```jsonl
{"timestamp":"2025-10-26T15:40:00Z","event":"entity_updated","entity_id":25,"user_id":123,"updated_by":123,"changes":{"encrypted_name":true,"encrypted_email":true,"chk_stripe":{"old":0,"new":1}},"platform":"web"}
```

#### Deletion (rare)
```jsonl
{"timestamp":"2025-10-26T15:45:00Z","event":"entity_deleted","entity_id":25,"deleted_by":1,"soft_delete":true,"reason":"duplicate","platform":"web"}
```

### Operation events

#### Creation
```jsonl
{"timestamp":"2025-10-26T16:00:00Z","event":"operation_created","operation_id":999,"entity_id":5,"created_by":123,"date_start":"2025-11-01","date_end":"2025-11-30","platform":"web"}
```

#### Update
```jsonl
{"timestamp":"2025-10-26T16:05:00Z","event":"operation_updated","operation_id":999,"entity_id":5,"updated_by":123,"changes":{"date_end":{"old":"2025-11-30","new":"2025-12-15"},"chk_active":{"old":0,"new":1}},"platform":"web"}
```

#### Deletion
```jsonl
{"timestamp":"2025-10-26T16:10:00Z","event":"operation_deleted","operation_id":999,"entity_id":5,"deleted_by":123,"soft_delete":true,"platform":"web"}
```
## 🛠️ Implementation

### EventLogService.php service

**Location**: `src/Services/EventLogService.php`

**Public methods**:
```php
EventLogService::logLoginSuccess($userId, $entityId, $username)
EventLogService::logLoginFailed($username, $reason, $attempt)
EventLogService::logLogout($userId, $entityId, $sessionDuration)

EventLogService::logPassageCreated($passageId, $operationId, $sectorId, $amount, $paymentType)
EventLogService::logPassageUpdated($passageId, $changes)
EventLogService::logPassageDeleted($passageId, $operationId, $softDelete)

EventLogService::logSectorCreated($sectorId, $operationId, $sectorName)
EventLogService::logSectorUpdated($sectorId, $operationId, $changes)
EventLogService::logSectorDeleted($sectorId, $operationId, $softDelete)

EventLogService::logUserCreated($newUserId, $entityId, $roleId, $username)
EventLogService::logUserUpdated($userId, $changes)
EventLogService::logUserDeleted($userId, $softDelete)

EventLogService::logEntityCreated($entityId, $entityTypeId, $postalCode)
EventLogService::logEntityUpdated($entityId, $changes)
EventLogService::logEntityDeleted($entityId, $reason)

EventLogService::logOperationCreated($operationId, $dateStart, $dateEnd)
EventLogService::logOperationUpdated($operationId, $changes)
EventLogService::logOperationDeleted($operationId, $softDelete)
```

**Automatic enrichment**:
- `timestamp`: generated automatically (UTC)
- `user_id`, `entity_id`: taken from `Session`
- `ip`: obtained via `ClientDetector`
- `platform`: detected via `ClientDetector` (ios/android/web)
- `app_version`: extracted from the User-Agent on mobile
### Integration in the controllers

**Example in PassageController**:
```php
public function createPassage(Request $request, Response $response): void {
    // ... validation and creation ...

    $passageId = $db->lastInsertId();

    // Log the event
    EventLogService::logPassageCreated(
        $passageId,
        $data['fk_operation'],
        $data['fk_sector'],
        $data['montant'],
        $data['fk_type_reglement']
    );

    // ... rest of the code ...
}
```
### Analysis scripts

#### 1. Entity stats

**File**: `scripts/stats/entity_stats.php`

**Usage**:
```bash
# Stats for entity 5 over the last 7 days
php scripts/stats/entity_stats.php --entity-id=5 --days=7

# Stats for entity 5 between two dates
php scripts/stats/entity_stats.php --entity-id=5 --from=2025-10-01 --to=2025-10-26

# JSON result
{
  "entity_id": 5,
  "period": {"from": "2025-10-20", "to": "2025-10-26"},
  "stats": {
    "logins": {"success": 45, "failed": 2},
    "passages": {"created": 120, "updated": 15, "deleted": 3},
    "sectors": {"created": 2, "updated": 8, "deleted": 0},
    "users": {"created": 1, "updated": 5, "deleted": 0}
  },
  "top_users": [
    {"user_id": 123, "actions": 85},
    {"user_id": 456, "actions": 42}
  ]
}
```
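Because each event sits on its own line, counters like the ones in this JSON result can be approximated with plain `grep -c` over the daily files. A sketch on a two-line fixture (the real aggregation lives in `entity_stats.php`; the file path and fixture are illustrative):

```shell
# Build a tiny two-event fixture, then count events by type with grep -c.
mkdir -p logs/events
FILE="logs/events/2025-10-26.jsonl"
printf '%s\n' \
  '{"timestamp":"2025-10-26T14:32:15Z","event":"login_success","user_id":123,"entity_id":5}' \
  '{"timestamp":"2025-10-26T14:40:10Z","event":"passage_created","passage_id":45678,"entity_id":5}' > "$FILE"

created=$(grep -c '"event":"passage_created"' "$FILE")
logins=$(grep -c '"event":"login_success"' "$FILE")
echo "passages created: $created, successful logins: $logins"
# → passages created: 1, successful logins: 1
```

Filtering on `"entity_id":5` first (another `grep`) scopes the same counts to a single entity, which is exactly the entity-admin use case.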
#### 2. Global super-admin stats

**File**: `scripts/stats/global_stats.php`

**Usage**:
```bash
# All passages modified over 2 weeks
php scripts/stats/global_stats.php --event=passage_updated --days=14

# All failed logins for the month
php scripts/stats/global_stats.php --event=login_failed --month=2025-10

# JSON result
{
  "event": "passage_updated",
  "period": {"from": "2025-10-13", "to": "2025-10-26"},
  "total_events": 342,
  "by_entity": [
    {"entity_id": 5, "count": 120},
    {"entity_id": 12, "count": 85},
    {"entity_id": 18, "count": 67}
  ],
  "by_day": {
    "2025-10-26": 45,
    "2025-10-25": 38,
    "2025-10-24": 52
  }
}
```

#### 3. CSV export for external analysis

**File**: `scripts/stats/export_events_csv.php`

**Usage**:
```bash
# Export all logins of the month to CSV
php scripts/stats/export_events_csv.php \
    --event=login_success \
    --month=2025-10 \
    --output=/tmp/logins_october.csv
```
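Since the JSONL lines are machine-generated with a fixed key order, even a `sed` expression can pull selected fields into CSV columns. A rough stand-in for what `export_events_csv.php` does, on an illustrative fixture (a real exporter should parse JSON properly rather than rely on key order):

```shell
# Extract timestamp and event columns from one day's file into a CSV.
mkdir -p logs/events
FILE="logs/events/2025-10-26.jsonl"
printf '%s\n' \
  '{"timestamp":"2025-10-26T14:32:15Z","event":"login_success","user_id":123}' \
  '{"timestamp":"2025-10-26T16:45:00Z","event":"logout","user_id":123}' > "$FILE"

{
  echo 'timestamp,event'
  sed -E 's/.*"timestamp":"([^"]*)".*"event":"([^"]*)".*/\1,\2/' "$FILE"
} > events.csv
cat events.csv
```

The header line plus one row per event gives a file that spreadsheet tools ingest directly.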
### Rotation CRON

**File**: `scripts/cron/rotate_event_logs.php`

**Crontab configuration**:
```cron
# Event log rotation - 1st of the month at 3am
0 3 1 * * cd /var/www/geosector/api && php scripts/cron/rotate_event_logs.php
```

**Actions**:
1. Compress files > 30 days old: `gzip logs/events/2025-09-*.jsonl`
2. Delete archives > 15 months old: `rm logs/events/*-2024-06-*.jsonl.gz`
3. Log a summary to `logs/rotation.log`
## 📈 Performance and volume

### Estimates

**Average daily volume** (for 50 active entities):
- 500 logins/day = 500 lines
- 2000 passages created/updated = 2000 lines
- 100 other events = 100 lines
- **Total: ~2600 events/day**

**File size**:
- 1 event ≈ 200-400 bytes of JSON
- 2600 events ≈ 0.8-1 MB/day uncompressed
- gzip compression: ~10:1 ratio → **~100 KB/day compressed**

**15-month retention**:
- Uncompressed (30 days): 30 MB
- Compressed (14.5 months): 45 MB
- **Total storage: ~75 MB** for 15 months

### Read performance

**Single-file read**: < 50 ms to analyze 1 day (2600 events)

**7-day window**:
- 7 files × 1 MB = 7 MB to read
- Filtering with `jq` or PHP: ~200-300 ms

**2-week window (super-admin)**:
- 14 files × 1 MB = 14 MB to read
- Filtering on event type: ~500 ms

**Reading a compressed archive**:
- On-the-fly decompression: +100-200 ms
- Total: ~700-800 ms for 1 compressed month
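The on-the-fly decompression mentioned above never needs the archive unpacked to disk: `zcat` streams the `.jsonl.gz` straight into the same line filters used on the daily files. A sketch on an illustrative one-line archive:

```shell
# Stream a compressed monthly archive through the usual grep filter.
mkdir -p logs/events/archive
printf '%s\n' '{"event":"login_failed","entity_id":5}' > logs/events/archive/2025-09.jsonl
gzip -f logs/events/archive/2025-09.jsonl

failed=$(zcat logs/events/archive/2025-09.jsonl.gz | grep -c '"event":"login_failed"')
echo "failed logins in archive: $failed"
# → failed logins in archive: 1
```

This is why the archive read only costs the extra 100-200 ms of decompression: the analysis pipeline itself is unchanged.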
## 🔒 Security and privacy

### Sensitive data

**❌ Never logged in clear text**:
- Passwords
- Encrypted content (names, emails, phone numbers, IBAN)
- Authentication tokens

**✅ Logged**:
- IDs (user_id, entity_id, passage_id, etc.)
- Financial amounts
- Dates and timestamps
- Types of changes (boolean flag for encrypted fields)

### Encrypted-field example
```json
{
  "event": "user_updated",
  "changes": {
    "encrypted_name": true,           // Flags the change without the value
    "encrypted_email": true,
    "role_id": {"old": 1, "new": 2}   // Non-sensitive field = values OK
  }
}
```

### Access permissions

**Log files**:
- Owner: `nginx:nginx`
- Permissions: `0640` (read/write for nginx, nothing for anyone else)
- `/logs/events/` directory: `0750`

**Analysis scripts**:
- Execution: root or nginx only
- No direct access through API endpoints (for now)
## 🚀 Roadmap and future work

### Phase 1 - MVP (current) ✅
- [x] Daily JSONL architecture
- [x] EventLogService.php service
- [x] Integration in the controllers (LoginController, PassageController, UserController, SectorController, OperationController, EntiteController)
- [ ] 15-month rotation CRON
- [ ] Basic analysis scripts

### Phase 2 - Dashboards (Q1 2026)
- [ ] API endpoints: `GET /api/stats/entity/{id}`, `GET /api/stats/global`
- [ ] Admin web interface: login and passage charts
- [ ] Advanced filters (period, platform, user)

### Phase 3 - Alerts (Q2 2026)
- [ ] Anomaly detection (spikes in failed logins)
- [ ] Email alerts to super-admins
- [ ] Configurable thresholds per entity

### Phase 4 - TimescaleDB migration (if needed)
- [ ] Volume assessment: if > 50k events/day
- [ ] JSONL import → TimescaleDB
- [ ] Hybrid retention: 90 days in TimescaleDB, JSONL archives

## 📝 Implementation status

**Date: 28 October 2025**

### ✅ Done
- `EventLogService.php` service created with all the logging methods
- Full integration in the 6 main controllers:
  - **LoginController**: successful/failed login, logout
  - **PassageController**: passage creation, update, deletion
  - **UserController**: user creation, update, deletion
  - **SectorController**: sector creation, update, deletion
  - **OperationController**: operation creation, update, deletion
  - **EntiteController**: entity creation, update
- Automatic enrichment: UTC timestamp, user_id, entity_id, IP, platform, app_version
- Security: sensitive fields logged as booleans only (no encrypted values)
- The `deploy-api.sh` deployment script automatically creates `/logs/events/` with 0750 permissions

### 🔄 Pending
- Analysis scripts (`entity_stats.php`, `global_stats.php`, `export_events_csv.php`)
- 15-month rotation CRON (`rotate_event_logs.php`)
- Tests in the DEV environment

## 📝 Deployment checklist

### DEV environment (dva-geo)
- [x] Create the `/logs/events/` directory (0750 permissions) - integrated into deploy-api.sh
- [x] Deploy `EventLogService.php`
- [ ] Deploy the stats and rotation scripts
- [ ] Configure the rotation CRON
- [ ] Tests: generate events manually
- [ ] Validate the JSONL format and rotation

### STAGING environment (rca-geo)
- [ ] Deploy from validated DEV
- [ ] Load tests: 10k events/day
- [ ] Validate analysis script performance
- [ ] Validate automatic compression and deletion

### PRODUCTION environment (pra-geo)
- [ ] Deploy from validated STAGING
- [ ] Volume monitoring
- [ ] Daily backups of `/logs/events/` (via the general CRON)

---

**Last updated:** 28 October 2025
**Version:** 1.1
**Status:** ✅ Service implemented and integrated - analysis scripts still to be developed
- Contains all the application tables
- Tables involved: `ope_sectors`, `sectors_adresses`, `ope_pass`, `ope_users_sectors`, `x_departements_contours`

2. **Addresses database** (in the maria3/maria4 containers)
   - **DVA**: maria3 (13.23.33.4) - `adresses` database
     - User: `adr_geo_user` / `d66,AdrGeoDev.User`
   - **RCA**: maria3 (13.23.33.4) - `adresses` database
     - User: `adr_geo_user` / `d66,AdrGeoRec.User`
   - **PROD**: maria4 (13.23.33.4) - `adresses` database
     - User: `adr_geo_user` / `d66,AdrGeoPrd.User`
   - One table per department: `cp22`, `cp23`, etc.

3. **Buildings database** (in the maria3/maria4 containers)
   - **DVA**: maria3 (13.23.33.4) - `batiments` database
     - User: `adr_geo_user` / `d66,AdrGeoDev.User`
   - **RCA**: maria3 (13.23.33.4) - `batiments` database
     - User: `adr_geo_user` / `d66,AdrGeoRec.User`
   - **PROD**: maria4 (13.23.33.4) - `batiments` database
     - User: `adr_geo_user` / `d66,AdrGeoPrd.User`
   - One table per department: `bat22`, `bat23`, etc.
   - Main columns: `batiment_groupe_id`, `cle_interop_adr`, `nb_log`, `nb_niveau`, `residence`, `altitude_sol_mean`
   - Link to addresses: `bat{dept}.cle_interop_adr = cp{dept}.id`

### Configuration

In `src/Config/AppConfig.php`:

```php
// DEVELOPMENT
'addresses_database' => [
    'host' => '13.23.33.4', // maria3 container on IN3
    'name' => 'adresses',
    'username' => 'adr_geo_user',
    'password' => 'd66,AdrGeoDev.User',
],

// RECETTE (staging)
'addresses_database' => [
    'host' => '13.23.33.4', // maria3 container on IN3
    'name' => 'adresses',
    'username' => 'adr_geo_user',
    'password' => 'd66,AdrGeoRec.User',
],

// PRODUCTION
'addresses_database' => [
    'host' => '13.23.33.4', // maria4 container on IN4
    'name' => 'adresses',
    'username' => 'adr_geo_user',
    'password' => 'd66,AdrGeoPrd.User',
],

// DEVELOPMENT - Buildings
'buildings_database' => [
    'host' => '13.23.33.4', // maria3 container on IN3
    'name' => 'batiments',
    'username' => 'adr_geo_user',
    'password' => 'd66,AdrGeoDev.User',
],

// RECETTE - Buildings
'buildings_database' => [
    'host' => '13.23.33.4', // maria3 container on IN3
    'name' => 'batiments',
    'username' => 'adr_geo_user',
    'password' => 'd66,AdrGeoRec.User',
],

// PRODUCTION - Buildings
'buildings_database' => [
    'host' => '13.23.33.4', // maria4 container on IN4
    'name' => 'batiments',
    'username' => 'adr_geo_user',
    'password' => 'd66,AdrGeoPrd.User',
],
```

## Managing department boundaries

Checks the department boundaries of sectors:

```php
class DepartmentBoundaryService {
    // Checks whether a sector is contained within a department
    public function checkSectorInDepartment(array $sectorCoordinates, string $departmentCode): array

    // Lists all the departments touched by a sector
    public function getDepartmentsForSector(array $sectorCoordinates): array
}
```

### BuildingService

Enriches addresses with building data:

```php
namespace App\Services;

class BuildingService {
    // Enriches a list of addresses with building metadata
    public function enrichAddresses(array $addresses): array
}
```

**How it works**:
- Connects to the external `batiments` database
- Queries the per-department `bat{dept}` tables
- JOIN on `bat{dept}.cle_interop_adr = cp{dept}.id`
- Adds the metadata: `fk_batiment`, `fk_habitat`, `nb_niveau`, `nb_log`, `residence`, `alt_sol`
- Fallback: `fk_habitat=1` (single-family house) when no building is found

**Returned data**:
```php
[
    'id' => 'cp22.123456',
    'numero' => '10',
    'voie' => 'Rue Victor Hugo',
    'code_postal' => '22000',
    'commune' => 'Saint-Brieuc',
    'latitude' => 48.5149,
    'longitude' => -2.7658,
    // Enriched building data:
    'fk_batiment' => 'BAT_123456',       // null for a house
    'fk_habitat' => 2,                   // 1=single-family, 2=multi-unit
    'nb_niveau' => 4,                    // null for a house
    'nb_log' => 12,                      // null for a house
    'residence' => 'Résidence Les Pins', // '' for a house
    'alt_sol' => 25.5                    // null for a house
]
```
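The per-department lookup behind `enrichAddresses()` can be sketched as follows. This is a hedged illustration only: the PDO wiring, the `fk_habitat` derivation (`nb_log > 1` → multi-unit), and the `id` format `cp{dept}.{address_id}` are assumptions based on the schema notes above, not the actual implementation.

```php
<?php
// Illustrative sketch of BuildingService::enrichAddresses() (assumed details).
function enrichAddresses(PDO $buildingsDb, array $addresses): array
{
    foreach ($addresses as &$addr) {
        // 'cp22.123456' → department table suffix '22', address id '123456'
        [$table, $id] = explode('.', $addr['id'], 2);
        $dept = substr($table, 2);

        $stmt = $buildingsDb->prepare(
            "SELECT batiment_groupe_id, nb_log, nb_niveau, residence, altitude_sol_mean
             FROM bat{$dept} WHERE cle_interop_adr = :id"
        );
        $stmt->execute([':id' => $id]);
        $bat = $stmt->fetch(PDO::FETCH_ASSOC);

        if ($bat) {
            $addr += [
                'fk_batiment' => $bat['batiment_groupe_id'],
                'fk_habitat'  => ($bat['nb_log'] ?? 1) > 1 ? 2 : 1, // assumption
                'nb_niveau'   => $bat['nb_niveau'],
                'nb_log'      => $bat['nb_log'],
                'residence'   => $bat['residence'] ?? '',
                'alt_sol'     => $bat['altitude_sol_mean'],
            ];
        } else {
            // Documented fallback: treat as a single-family house
            $addr += ['fk_batiment' => null, 'fk_habitat' => 1, 'nb_niveau' => null,
                      'nb_log' => null, 'residence' => '', 'alt_sol' => null];
        }
    }
    return $addresses;
}
```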

## Sector creation process

### 1. Payload structure

- Finds passages with `fk_sector = 0` inside the polygon
- Updates their `fk_sector` to the new sector
- Excludes passages that already have an `fk_adresse`
7. **Retrieval** of the addresses via `AddressService::getAddressesInPolygon()`
8. **Enrichment** with building data via `AddressService::enrichAddressesWithBuildings()`
9. **Storage** of the addresses in `sectors_adresses`, including the building columns:
   - `fk_batiment`, `fk_habitat`, `nb_niveau`, `nb_log`, `residence`, `alt_sol`
10. **Creation** of the passages in `ope_pass`:
    - **Single-family houses** (fk_habitat=1): 1 passage per address
    - **Apartment buildings** (fk_habitat=2): nb_log passages per address (1 per unit)
    - Added fields: `residence`, `appt` (numbered 1 to nb_log), `fk_habitat`
    - Assigned to the first user in the list
    - With all the required foreign keys (entity, operation, sector, user)
    - Complete address data
11. **Commit** of the transaction, or **rollback** on error
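The fan-out rule of step 10 boils down to a single count per enriched address; a minimal sketch (hypothetical helper name):

```php
<?php
// Number of ope_pass rows to create for one enriched address (step 10 above):
// multi-unit building → one passage per dwelling; house → a single passage.
function passagesToCreate(array $address): int
{
    return ($address['fk_habitat'] === 2 && ($address['nb_log'] ?? 0) > 0)
        ? (int) $address['nb_log']
        : 1;
}
```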

## Sector update process

### 1. UPDATE payload structure

```json
{
  "libelle": "Secteur Centre-Ville Modifié",
  "color": "#00FF00",
  "sector": "48.117266/-1.6777926#48.118500/-1.6750000#...",
  "users": [12, 34],
  "chk_adresses_change": 1
}
```

### 2. The chk_adresses_change parameter

**Values**:
- `0`: do not recompute the addresses and passages (simple update)
- `1`: recompute the addresses and passages (default)

**Use cases**:

#### chk_adresses_change = 0
Quick update without touching addresses/passages:
- ✅ Update the label
- ✅ Update the color
- ✅ Update the polygon coordinates (visual only)
- ✅ Update the assigned members
- ❌ No recomputation of the addresses in sectors_adresses
- ❌ No passage updates (orphaned, created, deleted)
- ❌ **Response without passages_sector** (empty array)

**Purpose**: lets admins quickly fix a label or color, or slightly adjust the visual perimeter, without triggering a full recomputation that can take several seconds.

**API response**:
```json
{
  "status": "success",
  "message": "Secteur modifié avec succès",
  "sector": { "id": 123, "libelle": "...", "color": "...", "sector": "..." },
  "passages_sector": [], // empty because chk_adresses_change = 0
  "passages_orphaned": 0,
  "passages_deleted": 0,
  "passages_updated": 0,
  "passages_created": 0,
  "passages_total": 0,
  "users_sectors": [...]
}
```

#### chk_adresses_change = 1 (default)
Full update with recomputation:
- ✅ Update the label/color/polygon
- ✅ Update the members
- ✅ Delete and recreate sectors_adresses
- ✅ Apply the building-management rules
- ✅ Orphan the passages outside the perimeter
- ✅ Create new passages for new addresses

### 3. API response for CREATE

### sectors_adresses
- `fk_sector`: link to the sector
- `fk_adresse`: ID of the address in the external database
- `numero`, `rue`, `rue_bis`, `cp`, `ville`
- `gps_lat`, `gps_lng`
- **Building columns**:
  - `fk_batiment`: building ID (VARCHAR 50, null for a house)
  - `fk_habitat`: 1=single-family, 2=multi-unit (TINYINT UNSIGNED)
  - `nb_niveau`: number of floors (INT, null)
  - `nb_log`: number of dwellings (INT, null)
  - `residence`: residence/condominium name (VARCHAR 75)
  - `alt_sol`: ground altitude in metres (DECIMAL 10,2, null)

### ope_pass (passages)
- `fk_operation`, `fk_sector`, `fk_user`, `fk_adresse`
- `numero`, `rue`, `rue_bis`, `ville`
- `gps_lat`, `gps_lng`
- **Building columns**:
  - `residence`: residence name (VARCHAR 75)
  - `appt`: apartment number (VARCHAR 10, free-form input)
  - `niveau`: floor (VARCHAR 10, free-form input)
  - `fk_habitat`: 1=single-family, 2=multi-unit (TINYINT UNSIGNED)
- `fk_type`: passage type (2=to do, other values for done/refused)
- `encrypted_name`, `encrypted_email`, `encrypted_phone`: encrypted data
- `created_at`, `fk_user_creat`, `chk_active`

### ope_users_sectors
- `fk_sector`: link to the sector
- `created_at`, `fk_user_creat`, `chk_active`

## Building-management rules during UPDATE

### General principle

When a sector is updated, the system applies targeted logic to manage the passages according to the housing type (house/building) and the number of dwellings.

### Unique identification key

**All passages** are identified by the key: `numero|rue|rue_bis|ville`

This key deliberately excludes `residence` and `appt`, since those fields are **free-form input** entered by the user.

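A minimal sketch of this key construction (the helper name is hypothetical):

```php
<?php
// Builds the unique passage key described above; residence and appt are
// intentionally excluded because they are free-form user input.
function passageKey(array $p): string
{
    return implode('|', [
        trim($p['numero'] ?? ''),
        trim($p['rue'] ?? ''),
        trim($p['rue_bis'] ?? ''),
        trim($p['ville'] ?? ''),
    ]);
}
```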
### Case 1: single-family house (fk_habitat=1)

#### With 0 existing passages:
```
→ INSERT 1 new passage
  - fk_habitat = 1
  - residence = ''
  - appt = ''
```

#### With 1+ existing passages:
```
→ UPDATE the first passage
  - fk_habitat = 1
  - residence = ''
→ The other passages remain UNTOUCHED
  (they may correspond to several manually entered residents)
```

### Case 2: apartment building (fk_habitat=2)

#### Step 1: systematic UPDATE
```
→ UPDATE ALL the existing passages at this address
  - fk_habitat = 2
  - residence = sectors_adresses.residence (if non-empty)
```

#### Step 2a: if nb_existing < nb_log (e.g. 3 passages, nb_log=6)
```
→ INSERT (nb_log - nb_existing) new passages
  - fk_habitat = 2
  - residence = sectors_adresses.residence
  - appt = '' (no predefined number)
  - fk_type = 2 (to do)

Result: 6 passages in total (3 kept + 3 created)
```

#### Step 2b: if nb_existing > nb_log (e.g. 10 passages, nb_log=6)
```
→ DELETE at most (nb_existing - nb_log) passages
  Deletion conditions:
  - fk_type = 2 (to do)
  - AND encrypted_name empty (not visited)
  - Sorted by created_at ASC (oldest first)

Result: between 6 and 10 passages (depending on how many are visited)
```

### Key points

✅ **User data is preserved**:
- `appt` and `niveau` are **NEVER modified** (free-form input is kept)
- Visited passages (encrypted_name filled in) are **NEVER deleted**

✅ **Conditional update**:
- `residence` is updated **only when non-empty** in sectors_adresses
- This preserves a manual entry when the buildings database lacks the information

✅ **Transitions are handled**:
- An address can switch from house (fk_habitat=1) to building (fk_habitat=2) or vice versa
- The logic adapts automatically to the new housing type

✅ **GPS unification**:
- **All passages at the same address share the same GPS coordinates** (gps_lat, gps_lng)
- These coordinates come from `sectors_adresses` (enriched from the external `adresses` database)
- The rule applies on **creation** and on **update** with `chk_adresses_change=1`
- It guarantees geographic consistency for all passages of the same building

### Worked example

**Initial situation**:
- Address: "10 rue Victor Hugo, 22000 Saint-Brieuc"
- 8 existing passages (3 of them visited)
- nb_log drops from 8 to 5

**Actions**:
1. UPDATE the 8 passages → fk_habitat=2, residence="Les Chênes"
2. Attempt to delete (8-5) = 3 passages
3. Search for passages with fk_type=2 AND empty encrypted_name
4. Suppose 5 unvisited passages are found
5. Delete the 3 oldest unvisited ones
6. **Result**: 5 passages remain (3 visited + 2 unvisited)

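The Case 2 reconciliation above can be sketched in one function. This is an illustrative, in-memory version only (the real implementation works in SQL against `ope_pass`; all names here are hypothetical):

```php
<?php
// Sketch of the Case 2 (fk_habitat=2) reconciliation rule described above.
function reconcileBuildingPassages(array $passages, int $nbLog, string $residence): array
{
    // Step 1: every existing passage becomes fk_habitat=2; residence is only
    // overwritten when the buildings database provided a non-empty value.
    foreach ($passages as &$p) {
        $p['fk_habitat'] = 2;
        if ($residence !== '') {
            $p['residence'] = $residence;
        }
    }
    unset($p);

    $diff = $nbLog - count($passages);
    if ($diff > 0) {
        // Step 2a: create the missing passages ("to do", no predefined appt)
        for ($i = 0; $i < $diff; $i++) {
            $passages[] = ['fk_habitat' => 2, 'residence' => $residence,
                           'appt' => '', 'fk_type' => 2, 'encrypted_name' => ''];
        }
    } elseif ($diff < 0) {
        // Step 2b: delete at most -$diff unvisited "to do" passages, oldest first
        usort($passages, fn($a, $b) => ($a['created_at'] ?? '') <=> ($b['created_at'] ?? ''));
        $toDelete = -$diff;
        $passages = array_values(array_filter($passages, function ($p) use (&$toDelete) {
            $deletable = ($p['fk_type'] ?? 0) === 2 && ($p['encrypted_name'] ?? '') === '';
            if ($deletable && $toDelete > 0) { $toDelete--; return false; }
            return true;
        }));
    }
    return $passages;
}
```

Visited passages (non-empty `encrypted_name`) never match the deletable filter, so the result can legitimately stay above `nb_log`, exactly as in the worked example.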
## Logs and monitoring

The system generates detailed logs for:

---

# STRIPE PLANNING - PHP BACKEND DEVELOPER
## PHP 8.3 API - Stripe Tap to Pay integration (mobile only)
### Period: 25/08/2024 - 05/09/2024
### Updated: January 2025 - architecture simplification

---

```bash
composer require stripe/stripe-php
```

#### ✅ Environment configuration
- [x] Stripe configuration created in `AppConfig.php` with TEST keys
- [x] Configuration variables added:
  ```php
  'stripe' => [
      'public_key_test' => 'pk_test_51QwoVN00pblGEgsXkf8qlXm...',
      'secret_key_test' => 'sk_test_51QwoVN00pblGEgsXnvqi8qf...',
      'webhook_secret_test' => 'whsec_test_...',
      'api_version' => '2024-06-20',
      'application_fee_percent' => 0, // DECISION: 0% commission
      'mode' => 'test'
  ]
  ```
- [x] `StripeService.php` singleton service created
- [x] Session-based API authentication configured

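The `StripeService.php` singleton could look like the following sketch; the `AppConfig::get()` accessor and property names are assumptions, only the `stripe-php` `StripeClient` construction is standard:

```php
<?php
// Hedged sketch of the StripeService singleton mentioned above.
class StripeService
{
    private static ?StripeService $instance = null;
    private \Stripe\StripeClient $client;

    private function __construct()
    {
        $cfg = AppConfig::get('stripe'); // assumed accessor
        $this->client = new \Stripe\StripeClient([
            'api_key'        => $cfg['secret_key_test'],
            'stripe_version' => $cfg['api_version'],
        ]);
    }

    public static function getInstance(): self
    {
        return self::$instance ??= new self();
    }

    public function client(): \Stripe\StripeClient
    {
        return $this->client;
    }
}
```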
#### ✅ Database
```sql
-- Change to the existing ope_pass table (JANUARY 2025)
ALTER TABLE `ope_pass`
    DROP COLUMN IF EXISTS `is_striped`,
    ADD COLUMN `stripe_payment_id` VARCHAR(50) DEFAULT NULL COMMENT 'Stripe PaymentIntent ID (pi_xxx)',
    ADD INDEX `idx_stripe_payment` (`stripe_payment_id`);

-- Tables to create (simplified)
CREATE TABLE stripe_accounts (
    id INT PRIMARY KEY AUTO_INCREMENT,
    amicale_id INT NOT NULL,
    -- ...
    FOREIGN KEY (amicale_id) REFERENCES amicales(id)
);

-- NOTE: payment_intents table REMOVED - stripe_payment_id in ope_pass is used directly
-- NOTE: terminal_readers table REMOVED - Tap to Pay only, no external terminals

CREATE TABLE android_certified_devices (
    id INT PRIMARY KEY AUTO_INCREMENT,
    -- ...
);
```

### 🌆 Afternoon (4h)

#### ✅ Connect endpoints - Onboarding (DONE)
```php
// POST /api/stripe/accounts - IMPLEMENTED
public function createAccount() {
    $amicale = Amicale::find($amicaleId);

    $account = \Stripe\Account::create([
        // ...
    ]);
}
```

#### ✅ Tap to Pay configuration
```php
// POST /api/stripe/tap-to-pay/init
public function initTapToPay(Request $request) {
    $userId = Session::getUserId();
    $entityId = Session::getEntityId();

    // Check that the entity has a Stripe account
    $account = $this->getStripeAccount($entityId);

    return [
        'stripe_account_id' => $account->stripe_account_id,
        'tap_to_pay_enabled' => true
    ];
}
```

### 🌆 Afternoon (4h)

#### ✅ Device compatibility check
```php
// POST /api/stripe/devices/check-tap-to-pay
public function checkTapToPayCapability(Request $request) {
    $platform = $request->input('platform');
    $model = $request->input('device_model');
    $osVersion = $request->input('os_version');

    if ($platform === 'iOS') {
        // iPhone XS and later, with iOS 16.4+
        $supported = $this->checkiOSCompatibility($model, $osVersion);
    } else {
        // Android devices certified for France
        $supported = $this->checkAndroidCertification($model);
    }

    return [
        'tap_to_pay_supported' => $supported,
        'message' => $supported ?
            'Tap to Pay disponible' :
            'Appareil non compatible'
    ];
}
```

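`checkiOSCompatibility()` could be shaped like the sketch below. The version threshold comes from the comment above (iPhone XS and later, iOS 16.4+); the model-identifier heuristic (`iPhone11,2` = iPhone XS) is an assumption, not Stripe's official compatibility matrix:

```php
<?php
// Hypothetical shape of checkiOSCompatibility() (illustrative only).
function checkiOSCompatibility(string $model, string $osVersion): bool
{
    // Tap to Pay on iPhone requires iOS 16.4 or later
    if (version_compare($osVersion, '16.4', '<')) {
        return false;
    }
    // iPhone11,2 is the iPhone XS; higher major identifiers are newer models
    if (preg_match('/^iPhone(\d+),/', $model, $m)) {
        return (int) $m[1] >= 11;
    }
    return false;
}
```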

### 🌅 Morning (4h)

#### ✅ PaymentIntent creation linked to the passage
```php
// POST /api/payments/create-intent
public function createPaymentIntent(Request $request) {
    $validated = $request->validate([
        'amount' => 'required|integer|min:100', // in cents
        'passage_id' => 'required|integer',     // ID of the ope_pass passage
        'entity_id' => 'required|integer',
    ]);

    $userId = Session::getUserId();
    $entity = $this->getEntity($validated['entity_id']);

    // 0% commission (client decision)
    $applicationFee = 0;

    $paymentIntent = \Stripe\PaymentIntent::create([
        'amount' => $validated['amount'],
        'currency' => 'eur',
        'capture_method' => 'automatic',
        'application_fee_amount' => $applicationFee,
        'transfer_data' => [
            'destination' => $entity->stripe_account_id,
        ],
        'metadata' => [
            'passage_id' => $validated['passage_id'],
            'user_id' => $userId,
            'entity_id' => $entity->id,
            'year' => date('Y'),
        ],
    ]);

    // Direct update in ope_pass
    $this->db->prepare("
        UPDATE ope_pass
        SET stripe_payment_id = :stripe_id,
            date_modified = NOW()
        WHERE id = :passage_id
    ")->execute([
        ':stripe_id' => $paymentIntent->id,
        ':passage_id' => $validated['passage_id']
    ]);

    return [
        'client_secret' => $paymentIntent->client_secret,
        'payment_intent_id' => $paymentIntent->id,
        // ...
    ];
}
```
```php
// POST /api/payments/{id}/capture
public function capturePayment($paymentIntentId) {
    // Fetch the passage from ope_pass
    $stmt = $this->db->prepare("
        SELECT id, stripe_payment_id, montant
        FROM ope_pass
        WHERE stripe_payment_id = :stripe_id
    ");
    $stmt->execute([':stripe_id' => $paymentIntentId]);
    $passage = $stmt->fetch();

    $paymentIntent = \Stripe\PaymentIntent::retrieve($paymentIntentId);

    if ($paymentIntent->status === 'requires_capture') {
        $paymentIntent->capture();
    }

    // Update the status in ope_pass if needed
    if ($paymentIntent->status === 'succeeded' && $passage) {
        $this->db->prepare("
            UPDATE ope_pass
            SET date_stripe_validated = NOW()
            WHERE id = :passage_id
        ")->execute([':passage_id' => $passage['id']]);

        // Send the receipt email if configured
        $this->sendReceipt($passage['id']);
    }

    return $paymentIntent;
}

// GET /api/passages/{id}/stripe-status
public function getPassageStripeStatus($passageId) {
    $stmt = $this->db->prepare("
        SELECT stripe_payment_id, montant, date_creat
        FROM ope_pass
        WHERE id = :id
    ");
    $stmt->execute([':id' => $passageId]);
    $passage = $stmt->fetch();

    if (!$passage['stripe_payment_id']) {
        return ['status' => 'no_stripe_payment'];
    }

    // Fetch the status from Stripe
    $paymentIntent = \Stripe\PaymentIntent::retrieve($passage['stripe_payment_id']);

    return [
        'stripe_payment_id' => $passage['stripe_payment_id'],
        'status' => $paymentIntent->status,
        'amount' => $paymentIntent->amount,
        'currency' => $paymentIntent->currency,
        'created_at' => $passage['date_creat']
    ];
}
```
@@ -635,184 +619,4 @@ Log::channel('stripe')->info('Payment created', [

---

## 🎯 API DEVELOPMENT REVIEW (01/09/2024)

### ✅ IMPLEMENTED ENDPOINTS (TAP TO PAY ONLY)

#### **Stripe Connect - Accounts**
- **POST /api/stripe/accounts** ✅
  - Creates a Stripe Express account for an amicale
  - Handles decryption of encrypted fields (encrypted_email, encrypted_name)
  - Supports already-existing accounts

- **GET /api/stripe/accounts/:entityId/status** ✅
  - Returns the full account status
  - Checks charges_enabled and payouts_enabled
  - JSON response with detailed information

- **POST /api/stripe/accounts/:accountId/onboarding-link** ✅
  - Generates Stripe onboarding links
  - Configured return URLs
  - Error and timeout handling

#### **Configuration and Utilities**
- **GET /api/stripe/config** ✅
  - Public Stripe configuration
  - Publishable keys and client parameters
  - Adapted per environment

- **POST /api/stripe/webhook** ✅
  - Receives Stripe events
  - Verifies webhook signatures
  - Processes Connect events

### 🔧 TECHNICAL FIXES COMPLETED

#### **StripeController.php**
- Fixed `Database::getInstance()` → `$this->db`
- Fixed `$db->prepare()` → `$this->db->prepare()`
- Removed the `details_submitted` column from the SQL UPDATE
- Added proper exit statements after JSON responses
- Commented out Logger class calls (class not found)

#### **StripeService.php**
- Added the proper Stripe SDK imports (`use Stripe\Account`)
- Fixed `Account::retrieve()` → `$this->stripe->accounts->retrieve()`
- **CRUCIAL**: added decryption support for encrypted fields:
```php
$nom = !empty($entite['encrypted_name']) ?
    \ApiService::decryptData($entite['encrypted_name']) : '';
$email = !empty($entite['encrypted_email']) ?
    \ApiService::decryptSearchableData($entite['encrypted_email']) : null;
```
- Fixed the address mapping (adresse1, adresse2 vs adresse)
- **Removed the commission calculation - set to 0%**

#### **Router.php**
- Commented out the excessive debug logging that caused nginx 502 errors:
```php
// error_log("Recherche de route pour: méthode=$method, uri=$uri");
// error_log("Test pattern: $pattern contre uri: $uri");
```

#### **AppConfig.php**
- Set `application_fee_percent` to 0 (was 2.5)
- Set `application_fee_minimum` to 0 (was 50)
- **Policy**: 100% of payments go to the amicales

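With `application_fee_percent` and `application_fee_minimum` both at 0, the computed fee is always zero. A sketch of that fee rule (Python for illustration; the function and its defaults are our assumptions, not the project's actual AppConfig code), kept here in case a commission is ever reintroduced:

```python
def application_fee(amount_cents: int, percent: float = 0.0, minimum_cents: int = 0) -> int:
    """Fee = percent of the amount, floored at a minimum; both default to 0."""
    fee = round(amount_cents * percent / 100)
    return max(fee, minimum_cents)

current = application_fee(2500)          # 0 under the current 0% policy
former = application_fee(2000, 2.5, 0)   # fee under the former 2.5% rule
```

Expressing the policy as two parameters (rather than hard-coding 0) keeps the 100%-to-amicales rule a configuration decision instead of a code change.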
### 📊 TESTS AND VALIDATION

#### **Passing tests**
1. **POST /api/stripe/accounts** → 200 OK (account created: acct_1S2YfNP63A07c33Y)
2. **GET /api/stripe/accounts/5/status** → 200 OK (charges_enabled: true)
3. **POST /api/stripe/locations** → 200 OK (Location: tml_GLJ21w7KCYX4Wj)
4. **POST /api/stripe/accounts/.../onboarding-link** → 200 OK (link generated)
5. **Stripe onboarding** → completed successfully by the user

#### **Resolved errors**
- ❌ 500 "Class App\Controllers\Database not found" → ✅ Fixed
- ❌ 400 "Invalid email address: " → ✅ Fixed (decryption added)
- ❌ 502 "upstream sent too big header" → ✅ Fixed (logs removed)
- ❌ SQL "Column not found: details_submitted" → ✅ Fixed

### 🚀 TECHNICAL ARCHITECTURE

#### **Implemented services**
- **StripeService**: singleton for Stripe API interactions
- **StripeController**: REST endpoints with session handling
- **StripeWebhookController**: webhook event handler
- **ApiService**: decryption of encrypted fields

#### **Security**
- Stripe webhook signature validation
- Session-based authentication for private APIs
- Public endpoints: webhook only
- No secret keys stored in the database

#### **Database (UPDATED JANUARY 2025)**
- **`ope_pass` table changed**: `stripe_payment_id` VARCHAR(50) replaces `is_striped`
- **`payment_intents` table dropped**: integrated directly into `ope_pass`
- Uses existing tables (entites)
- encrypted_email and encrypted_name fields supported
- Automatic decryption before sending data to Stripe

### 🎯 NEXT API STEPS
1. **Real payment tests** with PaymentIntents
2. **Statistics endpoints** for the amicale dashboards
3. **Production webhooks** with live keys
4. **Monitoring and logging** of transactions
5. **Rate limiting** on sensitive endpoints

---

## 📱 SIMPLIFIED TAP TO PAY FLOW (January 2025)

### Architecture
```
Flutter App (Tap to Pay) ↔ PHP API ↔ Stripe API
```

### Step 1: Create the PaymentIntent
**Flutter → API**
```json
POST /api/stripe/payments/create-intent
{
  "amount": 1500,
  "passage_id": 123,
  "entity_id": 5
}
```

**API → Stripe → database**
```php
// 1. Create the PaymentIntent
$paymentIntent = Stripe\PaymentIntent::create([...]);

// 2. Save it on the passage:
//    UPDATE ope_pass SET stripe_payment_id = 'pi_xxx' WHERE id = 123;
```

**API → Flutter**
```json
{
  "client_secret": "pi_xxx_secret_yyy",
  "payment_intent_id": "pi_xxx"
}
```

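Step 1 boils down to building the `PaymentIntent` parameters before calling Stripe. A sketch of that builder (Python for illustration; the function name and the exact field set are our assumptions, not the project's code):

```python
def build_tap_to_pay_intent_params(amount_cents: int, passage_id: int) -> dict:
    """Parameters for a card-present (Tap to Pay) PaymentIntent."""
    if amount_cents <= 0:
        raise ValueError("amount must be a positive number of cents")
    return {
        "amount": amount_cents,
        "currency": "eur",
        "payment_method_types": ["card_present"],
        # Metadata lets the webhook trace the intent back to the passage.
        "metadata": {"passage_id": str(passage_id)},
    }

params = build_tap_to_pay_intent_params(1500, 123)
```

Carrying the `passage_id` in metadata as well as in `ope_pass.stripe_payment_id` gives the webhook two independent ways to reconcile a payment with its passage.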
### Step 2: Collect the payment (Flutter)
- The Flutter app uses the Stripe Terminal SDK
- The phone itself becomes the payment terminal (Tap to Pay)
- The client_secret is used to collect the payment

### Step 3: Confirmation (webhook)
**Stripe → API**
- Event: `payment_intent.succeeded`
- Updates the status in the database

### Required tables
- ✅ `ope_pass.stripe_payment_id` - links a passage to its payment
- ✅ `stripe_accounts` - Connect accounts of the amicales
- ✅ `android_certified_devices` - device compatibility check
- ❌ ~~`stripe_payment_intents`~~ - dropped
- ❌ ~~`terminal_readers`~~ - no external terminals

### Essential endpoints
1. `POST /api/stripe/payments/create-intent` - create a PaymentIntent
2. `POST /api/stripe/devices/check-tap-to-pay` - check device compatibility
3. `POST /api/stripe/webhook` - receive confirmations
4. `GET /api/passages/{id}/stripe-status` - check payment status

---

## 📝 CHANGELOG

### January 2025 - Database refactoring
- **Dropped** the `payment_intents` table (not needed)
- **Migration**: `is_striped` → `stripe_payment_id` VARCHAR(50) in `ope_pass`
- **Simplification**: direct PaymentIntent ↔ passage association
- **Benefit**: direct traceability without an intermediate table

---

*Document created on 24/08/2024 - Last updated: 09/01/2025*

@@ -1,464 +0,0 @@

# 🔧 Stripe Backend Migration - Option A (all-in-one)

## 📋 Goal

Optimize Stripe Connect account creation into **a single request** from Flutter that creates:
1. The Stripe Connect account
2. The Terminal Location
3. The onboarding link

---

## 🗄️ 1. Database change

### **Add the `stripe_location_id` column**

```sql
ALTER TABLE amicales
ADD COLUMN stripe_location_id VARCHAR(255) NULL
AFTER stripe_id;
```

**Verification**:
```sql
DESCRIBE amicales;
```

Should show:
```
+-------------------+--------------+------+-----+---------+-------+
| Field             | Type         | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| stripe_id         | varchar(255) | YES  |     | NULL    |       |
| stripe_location_id| varchar(255) | YES  |     | NULL    |       |
+-------------------+--------------+------+-----+---------+-------+
```

---

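Re-running the ALTER on an environment that already has the column fails, so a guard can inspect the `DESCRIBE` output first. A sketch of that check (Python for illustration; rows are modeled as dicts like those a DB driver would return, and the helper name is ours):

```python
def has_column(describe_rows: list, column: str) -> bool:
    """True if a DESCRIBE result already contains the given column."""
    return any(row.get("Field") == column for row in describe_rows)

# Rows as they would come back from DESCRIBE amicales after the migration
rows = [
    {"Field": "stripe_id", "Type": "varchar(255)"},
    {"Field": "stripe_location_id", "Type": "varchar(255)"},
]
```

Running the ALTER only when `has_column(rows, "stripe_location_id")` is false makes the migration safe to replay.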
## 🔧 2. Changes to the `POST /stripe/accounts` endpoint

### **File**: `app/Http/Controllers/StripeController.php` (or similar)

### **Method**: `createAccount()` or `store()`

### **Proposed code**:

```php
<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;
use App\Models\Amicale;

/**
 * Create a Stripe Connect account with its Terminal Location and onboarding link
 *
 * @param Request $request
 * @return \Illuminate\Http\JsonResponse
 */
public function createStripeAccount(Request $request)
{
    $request->validate([
        'fk_entite' => 'required|integer|exists:amicales,id',
        'return_url' => 'required|string|url',
        'refresh_url' => 'required|string|url',
    ]);

    $fkEntite = $request->fk_entite;
    $amicale = Amicale::findOrFail($fkEntite);

    // Check whether an account already exists
    if (!empty($amicale->stripe_id)) {
        return $this->handleExistingAccount($amicale, $request);
    }

    DB::beginTransaction();

    try {
        // Configure the Stripe key (per environment)
        \Stripe\Stripe::setApiKey(config('services.stripe.secret'));

        // 1️⃣ Create the Stripe Connect Express account
        $account = \Stripe\Account::create([
            'type' => 'express',
            'country' => 'FR',
            'email' => $amicale->email,
            'business_type' => 'non_profit', // or 'company' depending on the case
            'business_profile' => [
                'name' => $amicale->name,
                'url' => config('app.url'),
            ],
            'capabilities' => [
                'card_payments' => ['requested' => true],
                'transfers' => ['requested' => true],
            ],
        ]);

        \Log::info('Stripe account created', [
            'amicale_id' => $amicale->id,
            'account_id' => $account->id,
        ]);

        // 2️⃣ Create the Terminal Location for Tap to Pay
        $location = \Stripe\Terminal\Location::create([
            'display_name' => $amicale->name,
            'address' => [
                'line1' => $amicale->adresse1 ?: 'Non renseigné',
                'line2' => $amicale->adresse2,
                'city' => $amicale->ville ?: 'Non renseigné',
                'postal_code' => $amicale->code_postal ?: '00000',
                'country' => 'FR',
            ],
        ], [
            'stripe_account' => $account->id, // ← Important: Connect account
        ]);

        \Log::info('Stripe Terminal Location created', [
            'amicale_id' => $amicale->id,
            'location_id' => $location->id,
        ]);

        // 3️⃣ Create the onboarding link
        $accountLink = \Stripe\AccountLink::create([
            'account' => $account->id,
            'refresh_url' => $request->refresh_url,
            'return_url' => $request->return_url,
            'type' => 'account_onboarding',
        ]);

        \Log::info('Stripe onboarding link created', [
            'amicale_id' => $amicale->id,
            'account_id' => $account->id,
        ]);

        // 4️⃣ Save EVERYTHING in the database
        $amicale->stripe_id = $account->id;
        $amicale->stripe_location_id = $location->id;
        $amicale->chk_stripe = true;
        $amicale->save();

        DB::commit();

        // 5️⃣ Return ALL the information
        return response()->json([
            'success' => true,
            'account_id' => $account->id,
            'location_id' => $location->id,
            'onboarding_url' => $accountLink->url,
            'charges_enabled' => $account->charges_enabled,
            'payouts_enabled' => $account->payouts_enabled,
            'existing' => false,
            'message' => 'Compte Stripe Connect créé avec succès',
        ], 201);

    } catch (\Stripe\Exception\ApiErrorException $e) {
        DB::rollBack();

        \Log::error('Stripe API error', [
            'amicale_id' => $amicale->id,
            'error' => $e->getMessage(),
            'type' => get_class($e),
        ]);

        return response()->json([
            'success' => false,
            'message' => 'Erreur Stripe : ' . $e->getMessage(),
        ], 500);

    } catch (\Exception $e) {
        DB::rollBack();

        \Log::error('Stripe account creation failed', [
            'amicale_id' => $amicale->id,
            'error' => $e->getMessage(),
            'trace' => $e->getTraceAsString(),
        ]);

        return response()->json([
            'success' => false,
            'message' => 'Erreur lors de la création du compte Stripe',
        ], 500);
    }
}

/**
 * Handle the case of an existing Stripe account
 */
private function handleExistingAccount(Amicale $amicale, Request $request)
{
    try {
        \Stripe\Stripe::setApiKey(config('services.stripe.secret'));

        // Retrieve the existing account's details
        $account = \Stripe\Account::retrieve($amicale->stripe_id);

        // If there is no location_id yet, create it now
        if (empty($amicale->stripe_location_id)) {
            $location = \Stripe\Terminal\Location::create([
                'display_name' => $amicale->name,
                'address' => [
                    'line1' => $amicale->adresse1 ?: 'Non renseigné',
                    'city' => $amicale->ville ?: 'Non renseigné',
                    'postal_code' => $amicale->code_postal ?: '00000',
                    'country' => 'FR',
                ],
            ], [
                'stripe_account' => $amicale->stripe_id,
            ]);

            $amicale->stripe_location_id = $location->id;
            $amicale->save();

            \Log::info('Location created for existing account', [
                'amicale_id' => $amicale->id,
                'location_id' => $location->id,
            ]);
        }

        // If the account is already fully configured
        if ($account->charges_enabled && $account->payouts_enabled) {
            return response()->json([
                'success' => true,
                'account_id' => $amicale->stripe_id,
                'location_id' => $amicale->stripe_location_id,
                'onboarding_url' => null,
                'charges_enabled' => true,
                'payouts_enabled' => true,
                'existing' => true,
                'message' => 'Compte Stripe déjà configuré et actif',
            ]);
        }

        // Existing account with incomplete setup: generate a new onboarding link
        $accountLink = \Stripe\AccountLink::create([
            'account' => $amicale->stripe_id,
            'refresh_url' => $request->refresh_url,
            'return_url' => $request->return_url,
            'type' => 'account_onboarding',
        ]);

        return response()->json([
            'success' => true,
            'account_id' => $amicale->stripe_id,
            'location_id' => $amicale->stripe_location_id,
            'onboarding_url' => $accountLink->url,
            'charges_enabled' => $account->charges_enabled,
            'payouts_enabled' => $account->payouts_enabled,
            'existing' => true,
            'message' => 'Compte existant, configuration à finaliser',
        ]);

    } catch (\Exception $e) {
        \Log::error('Error handling existing account', [
            'amicale_id' => $amicale->id,
            'error' => $e->getMessage(),
        ]);

        return response()->json([
            'success' => false,
            'message' => 'Erreur lors de la vérification du compte existant',
        ], 500);
    }
}
```

---

## 📡 3. Changes to the `GET /stripe/accounts/{id}/status` endpoint

Add `location_id` to the response:

```php
public function checkAccountStatus($amicaleId)
{
    $amicale = Amicale::findOrFail($amicaleId);

    if (empty($amicale->stripe_id)) {
        return response()->json([
            'has_account' => false,
            'account_id' => null,
            'location_id' => null,
            'charges_enabled' => false,
            'payouts_enabled' => false,
            'onboarding_completed' => false,
        ]);
    }

    try {
        \Stripe\Stripe::setApiKey(config('services.stripe.secret'));
        $account = \Stripe\Account::retrieve($amicale->stripe_id);

        return response()->json([
            'has_account' => true,
            'account_id' => $amicale->stripe_id,
            'location_id' => $amicale->stripe_location_id, // ← Added
            'charges_enabled' => $account->charges_enabled,
            'payouts_enabled' => $account->payouts_enabled,
            'onboarding_completed' => $account->details_submitted,
        ]);

    } catch (\Exception $e) {
        return response()->json([
            'has_account' => false,
            'error' => $e->getMessage(),
        ], 500);
    }
}
```

---

## 🗑️ 4. Endpoint to REMOVE (no longer needed)

### **❌ `POST /stripe/locations`**

This endpoint is no longer needed because the Location is now created automatically by `POST /stripe/accounts`.

**Option 1**: remove it completely
**Option 2**: keep it temporarily for backward compatibility (if used elsewhere)

---

## 📝 5. Eloquent model change

### **File**: `app/Models/Amicale.php`

Add the `stripe_location_id` field:

```php
protected $fillable = [
    // ... other fields
    'stripe_id',
    'stripe_location_id', // ← Added
    'chk_stripe',
];

protected $casts = [
    'chk_stripe' => 'boolean',
];
```

---

## ✅ 6. Tests to run

### **Test 1: New amicale**
```bash
curl -X POST http://localhost/api/stripe/accounts \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {token}" \
  -d '{
    "fk_entite": 123,
    "return_url": "https://app.geosector.fr/stripe/success",
    "refresh_url": "https://app.geosector.fr/stripe/refresh"
  }'
```

**Expected response**:
```json
{
  "success": true,
  "account_id": "acct_xxxxxxxxxxxxx",
  "location_id": "tml_xxxxxxxxxxxxx",
  "onboarding_url": "https://connect.stripe.com/setup/...",
  "charges_enabled": false,
  "payouts_enabled": false,
  "existing": false,
  "message": "Compte Stripe Connect créé avec succès"
}
```

### **Test 2: Amicale with an existing account**
```bash
curl -X POST http://localhost/api/stripe/accounts \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {token}" \
  -d '{
    "fk_entite": 456,
    "return_url": "https://app.geosector.fr/stripe/success",
    "refresh_url": "https://app.geosector.fr/stripe/refresh"
  }'
```

**Expected response**:
```json
{
  "success": true,
  "account_id": "acct_xxxxxxxxxxxxx",
  "location_id": "tml_xxxxxxxxxxxxx",
  "onboarding_url": null,
  "charges_enabled": true,
  "payouts_enabled": true,
  "existing": true,
  "message": "Compte Stripe déjà configuré et actif"
}
```

### **Test 3: Check the database**
```sql
SELECT id, name, stripe_id, stripe_location_id, chk_stripe
FROM amicales
WHERE id = 123;
```

**Expected result**:
```
+-----+------------------+-------------------+-------------------+------------+
| id  | name             | stripe_id         | stripe_location_id| chk_stripe |
+-----+------------------+-------------------+-------------------+------------+
| 123 | Pompiers Paris15 | acct_xxxxxxxxxxxxx| tml_xxxxxxxxxxxxx | 1          |
+-----+------------------+-------------------+-------------------+------------+
```

---

## 🚀 7. Deployment

### **Steps**:
1. ✅ Apply the SQL migration
2. ✅ Deploy the modified backend code
3. ✅ Test with Postman/curl
4. ✅ Deploy the modified Flutter code
5. ✅ Test the full flow from the app

---

## 📊 Before/after comparison

| Aspect | Before | After |
|--------|--------|-------|
| **Flutter → Backend API calls** | 3 | 1 |
| **Backend → Stripe calls** | 3 | 3 (but atomic) |
| **Total latency** | ~3-5 s | ~1-2 s |
| **Error handling** | Complex | Simplified with a transaction |
| **Atomicity** | ❌ No | ✅ Yes (DB transaction) |
| **Location ID saved** | ❌ No | ✅ Yes |

---

## 🎯 Benefits

1. ✅ **Performance**: latency divided by 2-3
2. ✅ **Reliability**: the DB transaction guarantees consistency
3. ✅ **Simplicity**: simpler Flutter code
4. ✅ **Maintenance**: less code to maintain
5. ✅ **Traceability**: centralized logs on the backend
6. ✅ **Tap to Pay ready**: `location_id` available immediately

---

## ⚠️ Points of attention

1. **Rollback**: if the transaction fails, nothing is saved locally - the desired behavior (note that objects already created on Stripe's side are not deleted by the DB rollback)
2. **Logs**: log every step carefully for debugging
3. **Stripe Connect limits**: respect the Stripe rate limits
4. **Tests**: test with Stripe test accounts first

---

## 📚 Resources

- [Stripe Connect Express Accounts](https://stripe.com/docs/connect/express-accounts)
- [Stripe Terminal Locations](https://stripe.com/docs/terminal/fleet/locations)
- [Stripe Account Links](https://stripe.com/docs/connect/account-links)

@@ -1,343 +0,0 @@

# Stripe Tap to Pay payment flow

## Overview

This document describes the complete flow for Stripe Tap to Pay payments in the GeoSector application, from Stripe Connect account creation to the final payment.

---

## 🏢 PREREQUISITE: Creating an amicale's Stripe Connect account

Before Stripe payments can be used, each amicale must create its Stripe Connect account.

### 📋 Account creation flow

#### 1. Initiated from the admin web application

**Endpoint:** `POST /api/stripe/accounts/create`

**Request:**
```json
{
  "amicale_id": 45,
  "type": "express",
  "country": "FR",
  "email": "contact@amicale-pompiers-paris.fr",
  "business_profile": {
    "name": "Amicale des Pompiers de Paris",
    "product_description": "Vente de calendriers des pompiers",
    "mcc": "8398",
    "url": "https://www.amicale-pompiers-paris.fr"
  }
}
```

`type` is the Stripe Connect account type; MCC 8398 is the merchant category code for civic and social organizations.

#### 2. Stripe account creation

**API actions:**
1. Call the Stripe API to create an Express account
2. Generate a personalized onboarding link
3. Save to the database

**Response:**
```json
{
  "success": true,
  "stripe_account_id": "acct_1O3ABC456DEF789",
  "onboarding_url": "https://connect.stripe.com/express/oauth/authorize?...",
  "status": "pending"
}
```

#### 3. Stripe onboarding process

**User actions (amicale officer):**
1. Click the onboarding link
2. Log in to, or create, a Stripe account
3. Enter the legal information:
   - **Entity**: association loi 1901
   - The amicale's **SIRET**
   - **Bank details (RIB)** for payouts
   - The legal representative's **ID document**
4. Accept the terms of service

#### 4. Verification and activation

**Stripe webhook → API:**
```json
POST /api/stripe/webhooks
{
  "type": "account.updated",
  "data": {
    "object": {
      "id": "acct_1O3ABC456DEF789",
      "charges_enabled": true,
      "payouts_enabled": true,
      "details_submitted": true
    }
  }
}
```

**API actions:**
1. Update the status in the database
2. Send a notification email to the amicale
3. Enable the payment features

#### 5. Database structure

**`stripe_accounts` table:**
```sql
CREATE TABLE `stripe_accounts` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `fk_entite` int(10) unsigned NOT NULL,
  `stripe_account_id` varchar(50) NOT NULL,
  `account_type` enum('express','standard','custom') DEFAULT 'express',
  `charges_enabled` tinyint(1) DEFAULT 0,
  `payouts_enabled` tinyint(1) DEFAULT 0,
  `details_submitted` tinyint(1) DEFAULT 0,
  `country` varchar(2) DEFAULT 'FR',
  `default_currency` varchar(3) DEFAULT 'eur',
  `business_name` varchar(255) DEFAULT NULL,
  `support_email` varchar(255) DEFAULT NULL,
  `onboarding_completed_at` timestamp NULL DEFAULT NULL,
  `created_at` timestamp NOT NULL DEFAULT current_timestamp(),
  `updated_at` timestamp NULL DEFAULT NULL ON UPDATE current_timestamp(),
  PRIMARY KEY (`id`),
  UNIQUE KEY `stripe_account_id` (`stripe_account_id`),
  KEY `fk_entite` (`fk_entite`),
  CONSTRAINT `stripe_accounts_ibfk_1` FOREIGN KEY (`fk_entite`) REFERENCES `entites` (`id`)
);
```

### 🔐 Security and validation

#### Prerequisites for creating an account:
- ✅ User is an administrator of the amicale
- ✅ Amicale is active with a validated status
- ✅ Verified contact email
- ✅ Complete legal information (SIRET, address)

#### Validation before payments:
- ✅ `charges_enabled = 1` (can receive payments)
- ✅ `payouts_enabled = 1` (can receive payouts)
- ✅ `details_submitted = 1` (onboarding completed)

### 📊 Stripe account states

| State | Description | Possible actions |
|-------|-------------|------------------|
| `pending` | Account created, onboarding in progress | Complete the onboarding |
| `restricted` | Missing information | Provide the missing documents |
| `restricted_soon` | Verification in progress | Wait for Stripe validation |
| `active` | Account operational | Receive payments ✅ |
| `rejected` | Account rejected by Stripe | Contact support |

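The three pre-payment validation flags collapse into a single readiness check. A sketch (Python for illustration; the function is ours, operating on a `stripe_accounts` row modeled as a dict):

```python
def can_accept_payments(account: dict) -> bool:
    """An account may take payments only when all three flags are set."""
    return bool(
        account.get("charges_enabled")
        and account.get("payouts_enabled")
        and account.get("details_submitted")
    )

ready = can_accept_payments(
    {"charges_enabled": 1, "payouts_enabled": 1, "details_submitted": 1}
)
```

Centralizing the rule in one helper keeps the payment endpoints and the webhook handler from each re-implementing (and possibly diverging on) the same three-flag condition.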
### 🚨 Error handling

#### Common errors at creation time:
- **400**: missing or invalid data
- **409**: a Stripe account already exists for this amicale
- **403**: user not authorized

#### Errors during onboarding:
- Missing or invalid documents
- Incorrect bank details
- Activity not allowed by Stripe

### 📞 Support and resolution

#### For the amicales:
1. **Support email**: support@geosector.fr
2. **Documentation**: onboarding guides available
3. **Phone support**: available during business hours

#### For developers:
1. **Stripe Dashboard**: access to accounts and statuses
2. **API logs**: full traceability of operations
3. **Webhook monitoring**: tracking of Stripe events

---

## 🚨 IMPORTANT: New flow (v2)

**The passage is ALWAYS created/updated FIRST** so that a real ID exists, and THEN the PaymentIntent is created with that ID.

## Detailed flow

### 1. Save the passage FIRST

The application first creates or updates the passage to obtain a real ID:

```
POST /api/passages/create    // New passage
PUT  /api/passages/456       // Update of an existing passage
```

**Response with the real ID:**
```json
{
  "status": "success",
  "passage_id": 456
}
```

### 2. Create the PaymentIntent WITH the real ID

Only then is the PaymentIntent created, using the real `passage_id`:

```
POST /api/stripe/payments/create-intent
```

```json
{
  "amount": 2500,
  "passage_id": 456,
  "payment_method_types": ["card_present"],
  "location_id": "tml_xxx",
  "amicale_id": 45,
  "member_id": 67,
  "stripe_account": "acct_xxx"
}
```

`amount` is in cents (2500 = €25), `passage_id` must be the real passage ID (never 0), `payment_method_types: ["card_present"]` selects Tap to Pay, and `location_id` is the Terminal reader location.

#### Réponse
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"status": "success",
|
|
||||||
"data": {
|
|
||||||
"client_secret": "pi_3QaXYZ_secret_xyz",
|
|
||||||
"payment_intent_id": "pi_3QaXYZ123ABC456",
|
|
||||||
"amount": 2500,
|
|
||||||
"currency": "eur",
|
|
||||||
"passage_id": 789, // 0 pour nouveau passage
|
|
||||||
"type": "tap_to_pay"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Process the Tap to Pay payment

The application uses the Stripe Terminal SDK with the `client_secret` to collect the payment over NFC:

```dart
// Flutter: use the client_secret returned by the API
final paymentResult = await stripe.collectPaymentMethod(
  clientSecret: response['client_secret'],
  // ... Tap to Pay configuration
);
```

### 4. Update the passage with stripe_payment_id

After a successful payment, the app updates the passage with the `stripe_payment_id`:

```json
PUT /api/passages/456
{
  "stripe_payment_id": "pi_3QaXYZ123ABC456",  // ← LINK TO STRIPE
  // ... other fields if needed
}
```
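Taken together, steps 1 to 4 can be sketched as a single client-side sequence. This is a minimal illustration only: the `api` helper and its `post`/`put` methods are assumptions made for the sketch, not the real app code, and the Stripe Terminal SDK call itself is elided.

```javascript
// Sketch of the v2 client flow (hypothetical `api` HTTP helper, not the real app code).
async function tapToPayFlow(api, passageData, amountCents) {
  // Step 1: save the passage FIRST to obtain a real ID.
  const saved = await api.post('/api/passages/create', passageData);
  const passageId = saved.passage_id;                 // real ID, never 0

  // Step 2: create the PaymentIntent with that real ID.
  const intent = await api.post('/api/stripe/payments/create-intent', {
    amount: amountCents,
    passage_id: passageId,
    payment_method_types: ['card_present'],
  });

  // Step 3: hand intent.data.client_secret to the Stripe Terminal SDK here (elided).

  // Step 4: after a successful payment, link the passage to Stripe.
  await api.put(`/api/passages/${passageId}`, {
    stripe_payment_id: intent.data.payment_intent_id,
  });
  return { passageId, paymentIntentId: intent.data.payment_intent_id };
}
```

Creating the passage first means a failed payment leaves a traceable unpaid passage rather than an orphaned PaymentIntent.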
## Key points of the new flow

### ✅ Advantages

1. **Passage always exists**: The passage always exists with a real ID before the payment
2. **Guaranteed traceability**: The `passage_id` stored in Stripe is always valid
3. **Robust error handling**: If the payment fails, the passage already exists
4. **Data consistency**: No "orphan" passage and no payment without a passage

### ❌ No longer supported

1. **passage_id=0**: Never used in `/create-intent` anymore
2. **operation_id**: No longer needed, since the passage already exists
3. **Conditional creation**: The passage is always created beforehand
## Sequence diagram (New Flow v2)

```
┌─────────┐    ┌─────────┐    ┌────────┐    ┌──────────┐
│   App   │    │   API   │    │ Stripe │    │ ope_pass │
└────┬────┘    └────┬────┘    └────┬───┘    └────┬─────┘
     │              │              │             │
     │ 1. CREATE/UPDATE passage    │             │
     ├─────────────>│              │             │
     │              ├──────────────┼────────────>│
     │              │              │ INSERT/UPDATE
     │              │              │             │
     │ 2. passage_id: 456 (real)   │             │
     │<─────────────┤              │             │
     │              │              │             │
     │ 3. create-intent (id=456)   │             │
     ├─────────────>│              │             │
     │              │              │             │
     │              │ 4. Create PI │             │
     │              ├─────────────>│             │
     │              │              │             │
     │              │ 5. PI created│             │
     │              │<─────────────┤             │
     │              │              │             │
     │              │ 6. UPDATE    │             │
     │              ├──────────────┼────────────>│
     │              │ stripe_payment_id = pi_xxx │
     │              │              │             │
     │ 7. client_secret + pi_id    │             │
     │<─────────────┤              │             │
     │              │              │             │
     │ 8. Tap to Pay (with SDK)    │             │
     ├──────────────┼─────────────>│             │
     │              │              │             │
     │ 9. Payment OK│              │             │
     │<─────────────┼──────────────┤             │
     │              │              │             │
     │ 10. UPDATE passage (optional)             │
     ├─────────────>│              │             │
     │              ├──────────────┼────────────>│
     │              │ confirm stripe_payment_id  │
     │              │              │             │
     │ 11. Success  │              │             │
     │<─────────────┤              │             │
     │              │              │             │
```
## Important points (New Flow v2)

1. **Passage created first**: The passage is ALWAYS created/updated BEFORE the PaymentIntent
2. **Real ID required**: The `passage_id` can never be 0 in `/create-intent`
3. **Automatic Stripe link**: The `stripe_payment_id` is set automatically when the PaymentIntent is created
4. **Idempotence**: A passage can have only one `stripe_payment_id`
5. **Strict validation**: The amount, ownership and existence of the passage are verified

## Possible errors

- **400**:
  - `passage_id` missing or ≤ 0
  - Invalid amount (< €1 or > €999)
  - Passage already paid through Stripe
  - Amount does not match the passage
- **401**: Not authenticated
- **403**: Passage not authorized (wrong user)
- **404**: Passage not found

## Database migration

The `stripe_payment_id VARCHAR(50)` column was added with:

```sql
ALTER TABLE `ope_pass` ADD COLUMN `stripe_payment_id` VARCHAR(50) DEFAULT NULL;
ALTER TABLE `ope_pass` ADD INDEX `idx_stripe_payment` (`stripe_payment_id`);
```

## Environments

- **DEV**: dva-geo on IN3 - database updated ✅
- **REC**: rca-geo on IN3 - database updated ✅
- **PROD**: pra-geo on IN4 - to be updated
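The validation rules above (real `passage_id`, amount bounds, ownership, idempotence, amount match) can be sketched as a single guard function. This is an illustrative sketch, not the actual API code: the `fk_user` and `amount_cents` field names are assumptions, and 401 (authentication) is assumed to be handled before this point.

```javascript
// Sketch of the create-intent validation rules (hypothetical helper).
// Returns the HTTP status code the API would respond with.
function validateCreateIntent(req, passage, currentUserId) {
  if (!req.passage_id || req.passage_id <= 0) return 400;  // real ID required, never 0
  if (req.amount < 100 || req.amount > 99900) return 400;  // €1..€999, in cents
  if (!passage) return 404;                                // passage not found
  if (passage.fk_user !== currentUserId) return 403;       // wrong user
  if (passage.stripe_payment_id) return 400;               // already paid (idempotence)
  if (passage.amount_cents !== req.amount) return 400;     // amount mismatch
  return 200;
}
```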
@@ -1,197 +0,0 @@
# Stripe Tap to Pay - Official requirements

> Document based on the official Stripe documentation - Last checked: 29 September 2025

## 📱 iOS - Tap to Pay on iPhone

### Minimum configuration

| Component | Requirement | Notes |
|-----------|------------|--------|
| **Device** | iPhone XS or newer | iPhone XS, XR, 11, 12, 13, 14, 15, 16 |
| **iOS** | iOS 16.4 or newer | For full PIN support |
| **SDK** | Terminal iOS SDK 2.23.0+ | Version 3.6.0+ for Interac (Canada) |
| **Entitlement** | Apple Tap to Pay | To be requested on Apple Developer |

### Features by iOS version

- **iOS 16.0-16.3**: Basic Tap to Pay (no PIN)
- **iOS 16.4+**: Full PIN support for all cards
- **Beta versions**: NOT SUPPORTED

### Supported payment methods

- ✅ Contactless cards: Visa, Mastercard, American Express
- ✅ NFC wallets: Apple Pay, Google Pay, Samsung Pay
- ✅ Discover (USA only)
- ✅ Interac (Canada only, SDK 3.6.0+)
- ✅ eftpos (Australia only)

### Important limitations

- ❌ iPad not supported (no NFC)
- ❌ Puerto Rico not available
- ❌ Beta iOS versions not supported
## 🤖 Android - Tap to Pay

### Minimum configuration

| Component | Requirement | Notes |
|-----------|------------|--------|
| **Android** | Android 11 or newer | API level 30+ |
| **NFC** | Working NFC sensor | Required |
| **Processor** | ARM | x86 not supported |
| **Security** | Non-rooted device | Locked bootloader |
| **Services** | Google Mobile Services | GMS required |
| **Keystore** | Built-in hardware keystore | For security |
| **OS** | Unmodified manufacturer OS | No custom ROM |

### Devices certified in France (non-exhaustive list)

#### Samsung
- Galaxy S21, S21+, S21 Ultra, S21 FE (Android 11+)
- Galaxy S22, S22+, S22 Ultra (Android 12+)
- Galaxy S23, S23+, S23 Ultra, S23 FE (Android 13+)
- Galaxy S24, S24+, S24 Ultra (Android 14+)
- Galaxy Z Fold 3, 4, 5, 6
- Galaxy Z Flip 3, 4, 5, 6
- Galaxy Note 20, Note 20 Ultra
- Galaxy A54, A73 (high-end)

#### Google Pixel
- Pixel 6, 6 Pro, 6a (Android 12+)
- Pixel 7, 7 Pro, 7a (Android 13+)
- Pixel 8, 8 Pro, 8a (Android 14+)
- Pixel 9, 9 Pro, 9 Pro XL (Android 14+)
- Pixel Fold (Android 13+)
- Pixel Tablet (Android 13+)

#### OnePlus
- OnePlus 9, 9 Pro (Android 11+)
- OnePlus 10 Pro, 10T (Android 12+)
- OnePlus 11, 11R (Android 13+)
- OnePlus 12, 12R (Android 14+)
- OnePlus Open (Android 13+)

#### Xiaomi
- Mi 11, 11 Ultra (Android 11+)
- Xiaomi 12, 12 Pro, 12T Pro (Android 12+)
- Xiaomi 13, 13 Pro, 13T Pro (Android 13+)
- Xiaomi 14, 14 Pro, 14 Ultra (Android 14+)

#### Other brands
- OPPO Find X3/X5/X6 Pro, Find N2/N3
- Realme GT 2 Pro, GT 3, GT 5 Pro
- Honor Magic5/6 Pro, 90
- ASUS Zenfone 9/10, ROG Phone 7
- Nothing Phone (1), (2), (2a)
## 🌍 Availability by country

### Europe
- ✅ France: Available
- ✅ United Kingdom: Available
- ✅ Germany: Available
- ✅ Netherlands: Available
- ✅ Ireland: Available
- ✅ Italy: Available (recent)
- ✅ Spain: Available (recent)

### Americas
- ✅ United States: Available (+ Discover)
- ✅ Canada: Available (+ Interac)
- ❌ Puerto Rico: Not available
- ❌ Mexico: Not available

### Asia-Pacific
- ✅ Australia: Available (+ eftpos)
- ✅ New Zealand: Available
- ✅ Singapore: Available
- ✅ Japan: Available (recent)
## 🔧 Technical integration

### SDK requirements

```javascript
// iOS
pod 'StripeTerminal', '~> 2.23.0'  // Minimum for Tap to Pay
pod 'StripeTerminal', '~> 3.6.0'   // For Interac support

// Android
implementation 'com.stripe:stripeterminal-taptopay:3.7.1'
implementation 'com.stripe:stripeterminal-core:3.7.1'

// React Native
"@stripe/stripe-terminal-react-native": "^0.0.1-beta.17"

// Flutter
stripe_terminal: ^3.2.0
```

### Required capabilities

#### iOS Info.plist
```xml
<key>NSBluetoothAlwaysUsageDescription</key>
<string>Bluetooth is required for Tap to Pay</string>
<key>NFCReaderUsageDescription</key>
<string>NFC is required to read cards</string>
<key>com.apple.developer.proximity-reader</key>
<true/>
```

#### Android Manifest
```xml
<uses-permission android:name="android.permission.NFC" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-feature android:name="android.hardware.nfc" android:required="true" />
```

## 📊 Technical limits

| Limit | Value | Notes |
|--------|--------|-------|
| **Min amount** | €1 / $1 | Depending on currency |
| **Max amount** | Varies by country | France: €50 without PIN, unlimited with PIN |
| **Transaction timeout** | 60 seconds | After the card is presented |
| **NFC distance** | 4 cm max | Optimal distance |
| **PIN attempts** | 3 max | Then the card is blocked |
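As a worked example of amount handling against the minimum in the table above, here is a hedged sketch of euro-to-cent conversion with bounds checking. The €999 upper bound is the app-side cap used by this repository's create-intent endpoint, an assumption carried over from the payment-flow document, not a Stripe limit.

```javascript
// Sketch: convert a euro amount to integer cents and check it against
// the €1 minimum and the app-side €999 cap (illustrative, not SDK code).
function toValidatedCents(euros) {
  const cents = Math.round(euros * 100);
  if (cents < 100) throw new RangeError('amount below the €1 minimum');
  if (cents > 99900) throw new RangeError('amount above the €999 app cap');
  return cents;
}
```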
## 🔐 Security

### Certifications
- PCI-DSS Level 1
- EMV Contactless Level 1
- Apple ProximityReader Framework
- Google SafetyNet Attestation

### Sensitive data
- Card data NEVER passes through the device
- End-to-end tokenization by Stripe
- No local storage of card data
- PIN encrypted directly to Stripe

## 📚 Official resources

- [Stripe Terminal documentation](https://docs.stripe.com/terminal)
- [Tap to Pay on iPhone - Apple Developer](https://developer.apple.com/tap-to-pay/)
- [iOS integration guide](https://docs.stripe.com/terminal/payments/setup-reader/tap-to-pay?platform=ios)
- [Android integration guide](https://docs.stripe.com/terminal/payments/setup-reader/tap-to-pay?platform=android)
- [Terminal iOS SDK](https://github.com/stripe/stripe-terminal-ios)
- [Terminal Android SDK](https://github.com/stripe/stripe-terminal-android)

## 🔄 Version history

| Date | iOS version | Change |
|------|-------------|------------|
| Sept 2022 | iOS 16.0 | Initial Tap to Pay launch |
| Mar 2023 | iOS 16.4 | PIN support added |
| Sept 2023 | iOS 17.0 | Performance improvements |
| Sept 2024 | iOS 18.0 | Extended international support |

---

*Document maintained by the GeoSector team - Last updated: 29/09/2025*
File diff suppressed because one or more lines are too long
File diff suppressed because it is too large
@@ -1,53 +0,0 @@
-- Table storing device information for users
CREATE TABLE IF NOT EXISTS `user_devices` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `fk_user` int(10) unsigned NOT NULL COMMENT 'Reference to the users table',

  -- General device information
  `platform` varchar(20) NOT NULL COMMENT 'Platform: iOS, Android, etc.',
  `device_model` varchar(100) DEFAULT NULL COMMENT 'Device model (e.g. iPhone13,2)',
  `device_name` varchar(255) DEFAULT NULL COMMENT 'Custom device name',
  `device_manufacturer` varchar(100) DEFAULT NULL COMMENT 'Manufacturer (Apple, Samsung, etc.)',
  `device_identifier` varchar(100) DEFAULT NULL COMMENT 'Unique device identifier',

  -- Network information (IPv4 only)
  `device_ip_local` varchar(15) DEFAULT NULL COMMENT 'Local IPv4 address',
  `device_ip_public` varchar(15) DEFAULT NULL COMMENT 'Public IPv4 address',
  `device_wifi_name` varchar(255) DEFAULT NULL COMMENT 'WiFi network name (SSID)',
  `device_wifi_bssid` varchar(17) DEFAULT NULL COMMENT 'Access point BSSID (format XX:XX:XX:XX:XX:XX)',

  -- Capabilities and OS version
  `ios_version` varchar(20) DEFAULT NULL COMMENT 'iOS/Android OS version',
  `device_nfc_capable` tinyint(1) DEFAULT NULL COMMENT 'NFC support (1=yes, 0=no)',
  `device_supports_tap_to_pay` tinyint(1) DEFAULT NULL COMMENT 'Tap to Pay support (1=yes, 0=no)',

  -- Battery state
  `battery_level` tinyint(3) unsigned DEFAULT NULL COMMENT 'Battery level as a percentage (0-100)',
  `battery_charging` tinyint(1) DEFAULT NULL COMMENT 'Charging (1=yes, 0=no)',
  `battery_state` varchar(20) DEFAULT NULL COMMENT 'Battery state (charging, discharging, full)',

  -- Application versions
  `app_version` varchar(20) DEFAULT NULL COMMENT 'Application version (e.g. 3.2.8)',
  `app_build` varchar(20) DEFAULT NULL COMMENT 'Build number (e.g. 328)',

  -- Timestamps
  `last_device_info_check` timestamp NULL DEFAULT NULL COMMENT 'Last app-side device info check',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Record creation date',
  `updated_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Last modification date',

  PRIMARY KEY (`id`),
  KEY `idx_fk_user` (`fk_user`) COMMENT 'Index for lookup by user',
  KEY `idx_updated_at` (`updated_at`) COMMENT 'Index for sorting by update date',
  KEY `idx_last_check` (`last_device_info_check`) COMMENT 'Index for lookup by last check',
  UNIQUE KEY `unique_user_device` (`fk_user`, `device_identifier`) COMMENT 'One record per device/user',

  CONSTRAINT `fk_user_devices_user` FOREIGN KEY (`fk_user`)
    REFERENCES `users` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='User device information';
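As an illustration of how the mobile app might shape a row for this table, here is a minimal sketch. The input field names and the clamping rules are assumptions made for the example; only the output keys follow the schema above.

```javascript
// Sketch: build a user_devices payload from raw device info (illustrative only).
function buildDevicePayload(info) {
  return {
    platform: String(info.platform).slice(0, 20),              // varchar(20)
    device_model: info.model ? String(info.model).slice(0, 100) : null,
    battery_level: info.batteryLevel == null
      ? null
      : Math.min(100, Math.max(0, Math.round(info.batteryLevel))), // 0-100 per schema
    battery_charging: info.charging ? 1 : 0,                   // tinyint(1)
    device_nfc_capable: info.nfc ? 1 : 0,                      // tinyint(1)
  };
}
```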
619
api/docs/geosector_app.sql
Executable file
@@ -0,0 +1,619 @@
-- Create the geo_app database if it does not exist
DROP DATABASE IF EXISTS `geo_app`;
CREATE DATABASE IF NOT EXISTS `geo_app` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

-- Create the user and grant privileges
CREATE USER IF NOT EXISTS 'geo_app_user'@'localhost' IDENTIFIED BY 'QO:96df*?k{4W6m';
GRANT SELECT, INSERT, UPDATE, DELETE ON `geo_app`.* TO 'geo_app_user'@'localhost';
FLUSH PRIVILEGES;

USE geo_app;

--
-- Table structure for table `email_counter`
--

DROP TABLE IF EXISTS `email_counter`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `email_counter` (
  `id` int unsigned NOT NULL DEFAULT '1',
  `hour_start` timestamp NULL DEFAULT NULL,
  `count` int unsigned DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `x_devises`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `x_devises` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `code` varchar(3) DEFAULT NULL,
  `symbole` varchar(6) DEFAULT NULL,
  `libelle` varchar(45) DEFAULT NULL,
  `chk_active` tinyint(1) unsigned DEFAULT '1',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Table structure for table `x_entites_types`
--

DROP TABLE IF EXISTS `x_entites_types`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `x_entites_types` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `libelle` varchar(45) DEFAULT NULL,
  `chk_active` tinyint(1) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Table structure for table `x_types_passages`
--

DROP TABLE IF EXISTS `x_types_passages`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `x_types_passages` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `libelle` varchar(10) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `color_button` varchar(15) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `color_mark` varchar(15) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `color_table` varchar(15) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `chk_active` tinyint(1) unsigned NOT NULL DEFAULT '1',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Table structure for table `x_types_reglements`
--

DROP TABLE IF EXISTS `x_types_reglements`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `x_types_reglements` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `libelle` varchar(45) DEFAULT NULL,
  `chk_active` tinyint(1) unsigned DEFAULT '1',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Table structure for table `x_users_roles`
--

DROP TABLE IF EXISTS `x_users_roles`;

CREATE TABLE `x_users_roles` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `libelle` varchar(45) DEFAULT NULL,
  `chk_active` tinyint(1) unsigned DEFAULT '1',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='The different user roles';
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `x_users_titres`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `x_users_titres` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `libelle` varchar(45) DEFAULT NULL,
  `chk_active` tinyint(1) unsigned DEFAULT '1',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='The different user titles';
DROP TABLE IF EXISTS `x_pays`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `x_pays` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `code` varchar(3) DEFAULT NULL,
  `fk_continent` int unsigned DEFAULT NULL,
  `fk_devise` int unsigned DEFAULT '1',
  `libelle` varchar(45) DEFAULT NULL,
  `chk_active` tinyint(1) unsigned DEFAULT '1',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`),
  CONSTRAINT `x_pays_ibfk_1` FOREIGN KEY (`fk_devise`) REFERENCES `x_devises` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci COMMENT='Country table with country codes';
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `x_regions`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `x_regions` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_pays` int unsigned DEFAULT '1',
  `libelle` varchar(45) DEFAULT NULL,
  `libelle_long` varchar(45) DEFAULT NULL,
  `table_osm` varchar(45) DEFAULT NULL,
  `departements` varchar(45) DEFAULT NULL,
  `chk_active` tinyint(1) unsigned DEFAULT '1',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`),
  CONSTRAINT `x_regions_ibfk_1` FOREIGN KEY (`fk_pays`) REFERENCES `x_pays` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=29 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `x_departements`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `x_departements` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `code` varchar(3) DEFAULT NULL,
  `fk_region` int unsigned DEFAULT '1',
  `libelle` varchar(45) DEFAULT NULL,
  `chk_active` tinyint(1) unsigned DEFAULT '1',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`),
  CONSTRAINT `x_departements_ibfk_1` FOREIGN KEY (`fk_region`) REFERENCES `x_regions` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=105 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
DROP TABLE IF EXISTS `entites`;
|
||||||
|
/*!40101 SET @saved_cs_client = @@character_set_client */;
|
||||||
|
/*!50503 SET character_set_client = utf8mb4 */;
|
||||||
|
CREATE TABLE `entites` (
|
||||||
|
`id` int unsigned NOT NULL AUTO_INCREMENT,
|
||||||
|
`encrypted_name` varchar(255) DEFAULT NULL,
|
||||||
|
`adresse1` varchar(45) DEFAULT '',
|
||||||
|
`adresse2` varchar(45) DEFAULT '',
|
||||||
|
`code_postal` varchar(5) DEFAULT '',
|
||||||
|
`ville` varchar(45) DEFAULT '',
|
||||||
|
`fk_region` int unsigned DEFAULT NULL,
|
||||||
|
`fk_type` int unsigned DEFAULT '1',
|
||||||
|
`encrypted_phone` varchar(128) DEFAULT '',
|
||||||
|
`encrypted_mobile` varchar(128) DEFAULT '',
|
||||||
|
`encrypted_email` varchar(255) DEFAULT '',
|
||||||
|
`gps_lat` varchar(20) NOT NULL DEFAULT '',
|
||||||
|
`gps_lng` varchar(20) NOT NULL DEFAULT '',
|
||||||
|
`encrypted_stripe_id` varchar(255) DEFAULT '',
|
||||||
|
`encrypted_iban` varchar(255) DEFAULT '',
|
||||||
|
`encrypted_bic` varchar(128) DEFAULT '',
|
||||||
|
`chk_demo` tinyint(1) unsigned DEFAULT '1',
|
||||||
|
`chk_mdp_manuel` tinyint(1) unsigned NOT NULL DEFAULT '1' COMMENT 'Gestion des mots de passe manuelle O/N',
|
||||||
|
`chk_copie_mail_recu` tinyint(1) unsigned NOT NULL DEFAULT '0',
|
||||||
|
`chk_accept_sms` tinyint(1) unsigned NOT NULL DEFAULT '0',
|
||||||
|
`created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
|
||||||
|
`fk_user_creat` int unsigned DEFAULT NULL,
|
||||||
|
`updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP COMMENT 'Date de modification',
|
||||||
|
`fk_user_modif` int unsigned DEFAULT NULL,
|
||||||
|
`chk_active` tinyint(1) unsigned DEFAULT '1',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
CONSTRAINT `entites_ibfk_1` FOREIGN KEY (`fk_region`) REFERENCES `x_regions` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE,
|
||||||
|
CONSTRAINT `entites_ibfk_2` FOREIGN KEY (`fk_type`) REFERENCES `x_entites_types` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
|
||||||
|
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
|
||||||
|
/*!40101 SET character_set_client = @saved_cs_client */;
|
||||||
|
|
||||||
|
DROP TABLE IF EXISTS `x_villes`;
|
||||||
|
/*!40101 SET @saved_cs_client = @@character_set_client */;
|
||||||
|
/*!50503 SET character_set_client = utf8mb4 */;
|
||||||
|
CREATE TABLE `x_villes` (
|
||||||
|
`id` int unsigned NOT NULL AUTO_INCREMENT,
|
||||||
|
`fk_departement` int unsigned DEFAULT '1',
|
||||||
|
`libelle` varchar(65) DEFAULT NULL,
|
||||||
|
`cp` varchar(5) DEFAULT NULL,
|
||||||
|
`code_insee` varchar(5) DEFAULT NULL,
|
||||||
|
`departement` varchar(65) DEFAULT NULL,
|
||||||
|
`chk_active` tinyint(1) unsigned DEFAULT '1',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
UNIQUE KEY `id_UNIQUE` (`id`),
|
||||||
|
CONSTRAINT `x_villes_ibfk_1` FOREIGN KEY (`fk_departement`) REFERENCES `x_departements` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
|
||||||
|
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
|
||||||
|
/*!40101 SET character_set_client = @saved_cs_client */;
|
||||||
|
|
||||||
|
DROP TABLE IF EXISTS `users`;
|
||||||
|
/*!40101 SET @saved_cs_client = @@character_set_client */;
|
||||||
|
/*!50503 SET character_set_client = utf8mb4 */;
|
||||||
|
CREATE TABLE `users` (
|
||||||
|
`id` int unsigned NOT NULL AUTO_INCREMENT,
|
||||||
|
`fk_entite` int unsigned DEFAULT '1',
|
||||||
|
`fk_role` int unsigned DEFAULT '1',
|
||||||
|
`fk_titre` int unsigned DEFAULT '1',
|
||||||
|
`encrypted_name` varchar(255) DEFAULT NULL,
|
||||||
|
`first_name` varchar(45) DEFAULT NULL,
|
||||||
|
`sect_name` varchar(60) DEFAULT '',
|
||||||
|
`encrypted_user_name` varchar(128) DEFAULT '',
|
||||||
|
`user_pass_hash` varchar(60) DEFAULT NULL,
|
||||||
|
`encrypted_phone` varchar(128) DEFAULT NULL,
|
||||||
|
`encrypted_mobile` varchar(128) DEFAULT NULL,
|
||||||
|
`encrypted_email` varchar(255) DEFAULT '',
|
||||||
|
`chk_alert_email` tinyint(1) unsigned DEFAULT '1',
|
||||||
|
`chk_suivi` tinyint(1) unsigned DEFAULT '0',
|
||||||
|
`date_naissance` date DEFAULT NULL,
|
||||||
|
`date_embauche` date DEFAULT NULL,
|
||||||
|
`created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
|
||||||
|
`fk_user_creat` int unsigned DEFAULT NULL,
|
||||||
|
`updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP COMMENT 'Date de modification',
|
||||||
|
`fk_user_modif` int unsigned DEFAULT NULL,
|
||||||
|
`chk_active` tinyint(1) unsigned DEFAULT '1',
|
||||||
|
PRIMARY KEY (`id`),
|
||||||
|
KEY `fk_entite` (`fk_entite`),
|
||||||
|
KEY `username` (`encrypted_user_name`),
|
||||||
|
CONSTRAINT `users_ibfk_1` FOREIGN KEY (`fk_entite`) REFERENCES `entites` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE,
|
||||||
|
CONSTRAINT `users_ibfk_2` FOREIGN KEY (`fk_role`) REFERENCES `x_users_roles` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE,
|
||||||
|
CONSTRAINT `users_ibfk_3` FOREIGN KEY (`fk_titre`) REFERENCES `x_users_titres` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
|
||||||
|
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
|
||||||
|
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `operations`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `operations` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_entite` int unsigned NOT NULL DEFAULT '1',
  `libelle` varchar(75) NOT NULL DEFAULT '',
  `date_deb` date NOT NULL DEFAULT '0000-00-00',
  `date_fin` date NOT NULL DEFAULT '0000-00-00',
  `chk_distinct_sectors` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  `fk_user_creat` int unsigned NOT NULL DEFAULT '0',
  `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP COMMENT 'Date de modification',
  `fk_user_modif` int unsigned NOT NULL DEFAULT '0',
  `chk_active` tinyint(1) unsigned NOT NULL DEFAULT '1',
  PRIMARY KEY (`id`),
  KEY `fk_entite` (`fk_entite`),
  KEY `date_deb` (`date_deb`),
  CONSTRAINT `operations_ibfk_1` FOREIGN KEY (`fk_entite`) REFERENCES `entites` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `ope_sectors`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `ope_sectors` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_operation` int unsigned NOT NULL DEFAULT '0',
  `fk_old_sector` int unsigned NOT NULL DEFAULT '0',
  `libelle` varchar(75) NOT NULL DEFAULT '',
  `sector` text NOT NULL DEFAULT '',
  `color` varchar(7) NOT NULL DEFAULT '#4B77BE',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  `fk_user_creat` int unsigned NOT NULL DEFAULT '0',
  `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP COMMENT 'Date de modification',
  `fk_user_modif` int unsigned NOT NULL DEFAULT '0',
  `chk_active` tinyint(1) unsigned NOT NULL DEFAULT '1',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id` (`id`),
  KEY `fk_operation` (`fk_operation`),
  CONSTRAINT `ope_sectors_ibfk_1` FOREIGN KEY (`fk_operation`) REFERENCES `operations` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

DROP TABLE IF EXISTS `ope_users`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `ope_users` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_operation` int unsigned NOT NULL DEFAULT '0',
  `fk_user` int unsigned NOT NULL DEFAULT '0',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  `fk_user_creat` int unsigned DEFAULT NULL,
  `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP COMMENT 'Date de modification',
  `fk_user_modif` int unsigned DEFAULT NULL,
  `chk_active` tinyint(1) unsigned NOT NULL DEFAULT '1',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`),
  CONSTRAINT `ope_users_ibfk_1` FOREIGN KEY (`fk_operation`) REFERENCES `operations` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE,
  CONSTRAINT `ope_users_ibfk_2` FOREIGN KEY (`fk_user`) REFERENCES `users` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `email_queue`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `email_queue` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_pass` int unsigned NOT NULL DEFAULT '0',
  `to_email` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `subject` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `body` text COLLATE utf8mb4_unicode_ci,
  `headers` text COLLATE utf8mb4_unicode_ci,
  `created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `status` enum('pending','sent','failed') COLLATE utf8mb4_unicode_ci DEFAULT 'pending',
  `attempts` int unsigned DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `ope_users_sectors`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `ope_users_sectors` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_operation` int unsigned NOT NULL DEFAULT '0',
  `fk_user` int unsigned NOT NULL DEFAULT '0',
  `fk_sector` int unsigned NOT NULL DEFAULT '0',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  `fk_user_creat` int unsigned NOT NULL DEFAULT '0',
  `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP COMMENT 'Date de modification',
  `fk_user_modif` int unsigned DEFAULT NULL,
  `chk_active` tinyint(1) unsigned DEFAULT '1',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id` (`id`),
  KEY `fk_operation` (`fk_operation`),
  KEY `fk_user` (`fk_user`),
  KEY `fk_sector` (`fk_sector`),
  CONSTRAINT `ope_users_sectors_ibfk_1` FOREIGN KEY (`fk_operation`) REFERENCES `operations` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE,
  CONSTRAINT `ope_users_sectors_ibfk_2` FOREIGN KEY (`fk_user`) REFERENCES `users` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE,
  CONSTRAINT `ope_users_sectors_ibfk_3` FOREIGN KEY (`fk_sector`) REFERENCES `ope_sectors` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `ope_users_suivis`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `ope_users_suivis` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_operation` int unsigned NOT NULL DEFAULT '0',
  `fk_user` int unsigned NOT NULL DEFAULT '0',
  `date_suivi` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date du suivi',
  `gps_lat` varchar(20) NOT NULL DEFAULT '',
  `gps_lng` varchar(20) NOT NULL DEFAULT '',
  `vitesse` varchar(20) NOT NULL DEFAULT '',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  `fk_user_creat` int unsigned NOT NULL DEFAULT '0',
  `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP COMMENT 'Date de modification',
  `fk_user_modif` int unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `sectors_adresses`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `sectors_adresses` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_adresse` varchar(25) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL COMMENT 'adresses.cp??.id',
  `osm_id` int unsigned NOT NULL DEFAULT '0',
  `fk_sector` int unsigned NOT NULL DEFAULT '0',
  `osm_name` varchar(50) NOT NULL DEFAULT '',
  `numero` varchar(5) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `rue_bis` varchar(5) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `rue` varchar(60) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `cp` varchar(5) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `ville` varchar(60) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `gps_lat` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `gps_lng` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `osm_date_creat` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP COMMENT 'Date de modification',
  PRIMARY KEY (`id`),
  KEY `sectors_adresses_fk_sector_index` (`fk_sector`),
  KEY `sectors_adresses_numero_index` (`numero`),
  KEY `sectors_adresses_rue_index` (`rue`),
  KEY `sectors_adresses_ville_index` (`ville`),
  CONSTRAINT `sectors_adresses_ibfk_1` FOREIGN KEY (`fk_sector`) REFERENCES `ope_sectors` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

DROP TABLE IF EXISTS `ope_pass`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `ope_pass` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_operation` int unsigned NOT NULL DEFAULT '0',
  `fk_sector` int unsigned DEFAULT '0',
  `fk_user` int unsigned NOT NULL DEFAULT '0',
  `fk_adresse` varchar(25) DEFAULT '' COMMENT 'adresses.cp??.id',
  `passed_at` timestamp NULL DEFAULT NULL COMMENT 'Date du passage',
  `fk_type` int unsigned DEFAULT '0',
  `numero` varchar(10) NOT NULL DEFAULT '',
  `rue` varchar(75) NOT NULL DEFAULT '',
  `rue_bis` varchar(1) NOT NULL DEFAULT '',
  `ville` varchar(75) NOT NULL DEFAULT '',
  `fk_habitat` int unsigned DEFAULT '1',
  `appt` varchar(5) DEFAULT '',
  `niveau` varchar(5) DEFAULT '',
  `residence` varchar(75) DEFAULT '',
  `gps_lat` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `gps_lng` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `encrypted_name` varchar(255) NOT NULL DEFAULT '',
  `montant` decimal(7,2) NOT NULL DEFAULT '0.00',
  `fk_type_reglement` int unsigned DEFAULT '1',
  `remarque` text DEFAULT '',
  `encrypted_email` varchar(255) DEFAULT '',
  `nom_recu` varchar(50) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `date_recu` timestamp NULL DEFAULT NULL COMMENT 'Date de réception',
  `date_creat_recu` timestamp NULL DEFAULT NULL COMMENT 'Date de création du reçu',
  `date_sent_recu` timestamp NULL DEFAULT NULL COMMENT 'Date envoi du reçu',
  `email_erreur` varchar(30) DEFAULT '',
  `chk_email_sent` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `encrypted_phone` varchar(128) NOT NULL DEFAULT '',
  `chk_striped` tinyint(1) unsigned DEFAULT '0',
  `docremis` tinyint(1) unsigned DEFAULT '0',
  `date_repasser` timestamp NULL DEFAULT NULL COMMENT 'Date prévue pour repasser',
  `nb_passages` int DEFAULT '1' COMMENT 'Nb passages pour les a repasser',
  `chk_gps_maj` tinyint(1) unsigned DEFAULT '0',
  `chk_map_create` tinyint(1) unsigned DEFAULT '0',
  `chk_mobile` tinyint(1) unsigned DEFAULT '0',
  `chk_synchro` tinyint(1) unsigned DEFAULT '1' COMMENT 'chk synchro entre web et appli',
  `chk_api_adresse` tinyint(1) unsigned DEFAULT '0',
  `chk_maj_adresse` tinyint(1) unsigned DEFAULT '0',
  `anomalie` tinyint(1) unsigned DEFAULT '0',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  `fk_user_creat` int unsigned DEFAULT NULL,
  `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP COMMENT 'Date de modification',
  `fk_user_modif` int unsigned DEFAULT NULL,
  `chk_active` tinyint(1) unsigned NOT NULL DEFAULT '1',
  PRIMARY KEY (`id`),
  KEY `fk_operation` (`fk_operation`),
  KEY `fk_sector` (`fk_sector`),
  KEY `fk_user` (`fk_user`),
  KEY `fk_type` (`fk_type`),
  KEY `fk_type_reglement` (`fk_type_reglement`),
  KEY `email` (`encrypted_email`),
  CONSTRAINT `ope_pass_ibfk_1` FOREIGN KEY (`fk_operation`) REFERENCES `operations` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE,
  CONSTRAINT `ope_pass_ibfk_2` FOREIGN KEY (`fk_sector`) REFERENCES `ope_sectors` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE,
  CONSTRAINT `ope_pass_ibfk_3` FOREIGN KEY (`fk_user`) REFERENCES `users` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE,
  CONSTRAINT `ope_pass_ibfk_4` FOREIGN KEY (`fk_type_reglement`) REFERENCES `x_types_reglements` (`id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `ope_pass_histo`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `ope_pass_histo` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `fk_pass` int unsigned NOT NULL DEFAULT '0',
  `date_histo` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date historique',
  `sujet` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `remarque` varchar(250) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `ope_pass_histo_fk_pass_IDX` (`fk_pass`) USING BTREE,
  KEY `ope_pass_histo_date_histo_IDX` (`date_histo`) USING BTREE,
  CONSTRAINT `ope_pass_histo_ibfk_1` FOREIGN KEY (`fk_pass`) REFERENCES `ope_pass` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `medias`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `medias` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `support` varchar(45) NOT NULL DEFAULT '',
  `support_id` int unsigned NOT NULL DEFAULT '0',
  `fichier` varchar(250) NOT NULL DEFAULT '',
  `description` varchar(100) NOT NULL DEFAULT '',
  `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `fk_user_creat` int unsigned NOT NULL DEFAULT '0',
  `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
  `fk_user_modif` int unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

-- Tables for the chat system
DROP TABLE IF EXISTS `chat_rooms`;
-- Chat rooms table
CREATE TABLE chat_rooms (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL,
  type ENUM('privee', 'groupe', 'liste_diffusion') NOT NULL,
  date_creation timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  fk_user INT UNSIGNED NOT NULL,
  fk_entite INT UNSIGNED,
  statut ENUM('active', 'archive') NOT NULL DEFAULT 'active',
  description TEXT,
  INDEX idx_user (fk_user),
  INDEX idx_entite (fk_entite)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

DROP TABLE IF EXISTS `chat_participants`;
-- Chat room participants table
CREATE TABLE chat_participants (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  id_room INT UNSIGNED NOT NULL,
  id_user INT UNSIGNED NOT NULL,
  role ENUM('administrateur', 'participant', 'en_lecture_seule') NOT NULL DEFAULT 'participant',
  date_ajout timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date ajout',
  notification_activee BOOLEAN NOT NULL DEFAULT TRUE,
  INDEX idx_room (id_room),
  INDEX idx_user (id_user),
  CONSTRAINT uc_room_user UNIQUE (id_room, id_user),
  FOREIGN KEY (id_room) REFERENCES chat_rooms(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

DROP TABLE IF EXISTS `chat_messages`;
-- Messages table
CREATE TABLE chat_messages (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  fk_room INT UNSIGNED NOT NULL,
  fk_user INT UNSIGNED NOT NULL,
  content TEXT,
  date_sent timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date envoi',
  type ENUM('texte', 'media', 'systeme') NOT NULL DEFAULT 'texte',
  statut ENUM('envoye', 'livre', 'lu') NOT NULL DEFAULT 'envoye',
  INDEX idx_room (fk_room),
  INDEX idx_user (fk_user),
  INDEX idx_date (date_sent),
  FOREIGN KEY (fk_room) REFERENCES chat_rooms(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

DROP TABLE IF EXISTS `chat_listes_diffusion`;
-- Broadcast lists table
CREATE TABLE chat_listes_diffusion (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  fk_room INT UNSIGNED NOT NULL,
  name VARCHAR(100) NOT NULL,
  description TEXT,
  fk_user INT UNSIGNED NOT NULL,
  date_creation timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  INDEX idx_room (fk_room),
  INDEX idx_user (fk_user),
  FOREIGN KEY (fk_room) REFERENCES chat_rooms(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

DROP TABLE IF EXISTS `chat_read_messages`;
-- Message read-tracking table
CREATE TABLE chat_read_messages (
  id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  fk_message INT UNSIGNED NOT NULL,
  fk_user INT UNSIGNED NOT NULL,
  date_read timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de lecture',
  INDEX idx_message (fk_message),
  INDEX idx_user (fk_user),
  CONSTRAINT uc_message_user UNIQUE (fk_message, fk_user),
  FOREIGN KEY (fk_message) REFERENCES chat_messages(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

DROP TABLE IF EXISTS `chat_notifications`;
-- Notifications table
CREATE TABLE chat_notifications (
  id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  fk_user INT UNSIGNED NOT NULL,
  fk_message INT UNSIGNED,
  fk_room INT UNSIGNED,
  type VARCHAR(50) NOT NULL,
  contenu TEXT,
  date_creation timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Date de création',
  date_lecture timestamp NULL DEFAULT NULL COMMENT 'Date de lecture',
  statut ENUM('non_lue', 'lue') NOT NULL DEFAULT 'non_lue',
  INDEX idx_user (fk_user),
  INDEX idx_message (fk_message),
  INDEX idx_room (fk_room),
  FOREIGN KEY (fk_message) REFERENCES chat_messages(id) ON DELETE SET NULL,
  FOREIGN KEY (fk_room) REFERENCES chat_rooms(id) ON DELETE SET NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

DROP TABLE IF EXISTS `z_params`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `z_params` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `libelle` varchar(35) NOT NULL DEFAULT '',
  `valeur` varchar(255) NOT NULL DEFAULT '',
  `aide` varchar(150) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;

DROP TABLE IF EXISTS `z_sessions`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `z_sessions` (
  `sid` text NOT NULL,
  `fk_user` int NOT NULL,
  `role` varchar(10) DEFAULT NULL,
  `date_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `ip` varchar(50) NOT NULL,
  `browser` varchar(150) NOT NULL,
  `data` mediumtext
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
1. Route /session/refresh/all

Method: POST
Authentication: required (via session_id in headers or cookies)

Required headers:
Authorization: Bearer {session_id}
// or
Cookie: session_id={session_id}

Expected response:
{
  "status": "success",
  "message": "Session refreshed",
  "user": {
    // Same data as the login response
    "id": 123,
    "email": "user@example.com",
    "name": "John Doe",
    "fk_role": 2,
    "fk_entite": 1,
    // ...
  },
  "amicale": {
    // Amicale data
    "id": 1,
    "name": "Amicale Pompiers",
    // ...
  },
  "operations": [...],
  "sectors": [...],
  "passages": [...],
  "membres": [...],
  "session_id": "current_session_id",
  "session_expiry": "2024-01-20T10:00:00Z"
}

Suggested PHP code:
// routes/session.php
Route::post('/session/refresh/all', function(Request $request) {
    $user = Auth::user();
    if (!$user) {
        return response()->json(['status' => 'error', 'message' => 'Not authenticated'], 401);
    }

    // Return the same data as a normal login
    return response()->json([
        'status' => 'success',
        'user' => $user->toArray(),
        'amicale' => $user->amicale,
        'operations' => Operation::where('fk_entite', $user->fk_entite)->get(),
        'sectors' => Sector::where('fk_entite', $user->fk_entite)->get(),
        'passages' => Passage::where('fk_entite', $user->fk_entite)->get(),
        'membres' => Membre::where('fk_entite', $user->fk_entite)->get(),
        'session_id' => session()->getId(),
        'session_expiry' => now()->addDays(7)->toIso8601String()
    ]);
});

2. Route /session/refresh/partial

Method: POST
Authentication: required

Required body:
{
  "last_sync": "2024-01-19T10:00:00Z"
}

Expected response:
{
  "status": "success",
  "message": "Partial refresh completed",
  "sectors": [
    // Only the sectors modified after last_sync
    {
      "id": 45,
      "name": "Secteur A",
      "updated_at": "2024-01-19T15:00:00Z",
      // ...
    }
  ],
  "passages": [
    // Only the passages modified after last_sync
    {
      "id": 789,
      "fk_sector": 45,
      "updated_at": "2024-01-19T14:30:00Z",
      // ...
    }
  ],
  "operations": [...], // If modified
  "membres": [...]     // If modified
}

Suggested PHP code:
// routes/session.php
Route::post('/session/refresh/partial', function(Request $request) {
    $user = Auth::user();
    if (!$user) {
        return response()->json(['status' => 'error', 'message' => 'Not authenticated'], 401);
    }

    $lastSync = Carbon::parse($request->input('last_sync'));

    // Return only the data modified after last_sync
    $response = [
        'status' => 'success',
        'message' => 'Partial refresh completed'
    ];

    // Modified sectors
    $sectors = Sector::where('fk_entite', $user->fk_entite)
        ->where('updated_at', '>', $lastSync)
        ->get();
    if ($sectors->count() > 0) {
        $response['sectors'] = $sectors;
    }

    // Modified passages
    $passages = Passage::where('fk_entite', $user->fk_entite)
        ->where('updated_at', '>', $lastSync)
        ->get();
    if ($passages->count() > 0) {
        $response['passages'] = $passages;
    }

    // Modified operations
    $operations = Operation::where('fk_entite', $user->fk_entite)
        ->where('updated_at', '>', $lastSync)
        ->get();
    if ($operations->count() > 0) {
        $response['operations'] = $operations;
    }

    // Modified membres
    $membres = Membre::where('fk_entite', $user->fk_entite)
        ->where('updated_at', '>', $lastSync)
        ->get();
    if ($membres->count() > 0) {
        $response['membres'] = $membres;
    }

    return response()->json($response);
});

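A partial response only returns rows with `updated_at` later than `last_sync`, so the mobile client has to merge them into its local cache keyed by `id`. A minimal sketch of that merge logic, shown here in Python (the function name, the list-of-dicts cache shape, and the `deleted_ids` argument are illustrative, not part of the API):

```python
def merge_partial(local_cache, partial, deleted_ids=()):
    """Merge a partial-refresh payload into a local cache, keyed by id.

    local_cache: list of dicts (rows already stored on the device)
    partial:     list of dicts (rows with updated_at > last_sync)
    deleted_ids: ids removed server-side (an optional "deleted" field)
    """
    by_id = {row["id"]: row for row in local_cache}
    for row in partial:
        by_id[row["id"]] = row   # insert new rows, overwrite stale ones
    for rid in deleted_ids:
        by_id.pop(rid, None)     # drop rows deleted on the server
    return list(by_id.values())
```

The same merge would run once per collection (sectors, passages, operations, membres) when the corresponding key is present in the response.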
Key points for the API:

1. Session check: both routes must verify that the session_id is valid and not expired
2. Timestamps: make sure every table has an updated_at column that is maintained automatically
3. Deletion handling: for the partial refresh, you could add a field listing deleted items:
{
  "deleted": {
    "sectors": [12, 34], // IDs of deleted sectors
    "passages": [567, 890]
  }
}

4. Optimization: to avoid overloading the server, limit the partial refresh to the last 24-48 hours at most
5. Error handling:
{
  "status": "error",
  "message": "Session expired",
  "code": "SESSION_EXPIRED"
}

The Flutter app expects these response formats and will automatically use the partial refresh when the last sync is less than 24 hours old; otherwise it performs a full refresh.
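The client-side choice between the two endpoints (partial refresh when the last sync is under 24 hours old, full refresh otherwise) can be sketched as follows; the 24-hour threshold is the one stated above, while the function and constant names are illustrative:

```python
from datetime import datetime, timedelta, timezone

FULL_REFRESH_AFTER = timedelta(hours=24)  # threshold described in the doc

def choose_refresh(last_sync, now=None):
    """Return the refresh endpoint the client should call."""
    now = now or datetime.now(timezone.utc)
    if now - last_sync < FULL_REFRESH_AFTER:
        return "/session/refresh/partial"
    return "/session/refresh/all"
```

Passing `now` explicitly keeps the decision testable; in the app it would default to the current time.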
USE batiments;

-- Temp table for FFO (nb_niveau, nb_log)
DROP TABLE IF EXISTS tmp_ffo_999;
CREATE TABLE tmp_ffo_999 (
  batiment_groupe_id VARCHAR(50),
  code_departement_insee VARCHAR(5),
  nb_niveau INT,
  annee_construction INT,
  usage_niveau_1_txt VARCHAR(100),
  mat_mur_txt VARCHAR(100),
  mat_toit_txt VARCHAR(100),
  nb_log INT,
  KEY (batiment_groupe_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

LOAD DATA LOCAL INFILE '/var/osm/csv/batiment_groupe_ffo_bat.csv'
INTO TABLE tmp_ffo_999
CHARACTER SET 'UTF8mb4'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;

-- Temp table for addresses (BAN link)
DROP TABLE IF EXISTS tmp_adr_999;
CREATE TABLE tmp_adr_999 (
  wkt TEXT,
  batiment_groupe_id VARCHAR(50),
  cle_interop_adr VARCHAR(50),
  code_departement_insee VARCHAR(5),
  classe VARCHAR(50),
  lien_valide TINYINT,
  origine VARCHAR(50),
  KEY (batiment_groupe_id),
  KEY (cle_interop_adr)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

LOAD DATA LOCAL INFILE '/var/osm/csv/rel_batiment_groupe_adresse.csv'
INTO TABLE tmp_adr_999
CHARACTER SET 'UTF8mb4'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;

-- Temp table for RNC (co-ownerships)
DROP TABLE IF EXISTS tmp_rnc_999;
CREATE TABLE tmp_rnc_999 (
  batiment_groupe_id VARCHAR(50),
  code_departement_insee VARCHAR(5),
  numero_immat_principal VARCHAR(50),
  periode_construction_max VARCHAR(50),
  l_annee_construction VARCHAR(100),
  nb_lot_garpark INT,
  nb_lot_tot INT,
  nb_log INT,
  nb_lot_tertiaire INT,
  l_nom_copro VARCHAR(200),
  l_siret VARCHAR(50),
  copro_dans_pvd TINYINT,
  KEY (batiment_groupe_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

LOAD DATA LOCAL INFILE '/var/osm/csv/batiment_groupe_rnc.csv'
INTO TABLE tmp_rnc_999
CHARACTER SET 'UTF8mb4'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;

-- Temp table for BDTOPO (altitude)
DROP TABLE IF EXISTS tmp_topo_999;
CREATE TABLE tmp_topo_999 (
  batiment_groupe_id VARCHAR(50),
  code_departement_insee VARCHAR(5),
  l_nature VARCHAR(200),
  l_usage_1 VARCHAR(200),
  l_usage_2 VARCHAR(200),
  l_etat VARCHAR(100),
  hauteur_mean DECIMAL(10,2),
  max_hauteur DECIMAL(10,2),
  altitude_sol_mean DECIMAL(10,2),
  KEY (batiment_groupe_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

LOAD DATA LOCAL INFILE '/var/osm/csv/batiment_groupe_bdtopo_bat.csv'
INTO TABLE tmp_topo_999
CHARACTER SET 'UTF8mb4'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;

-- Temp table for main usage
DROP TABLE IF EXISTS tmp_usage_999;
CREATE TABLE tmp_usage_999 (
  batiment_groupe_id VARCHAR(50),
  code_departement_insee VARCHAR(5),
  usage_principal_bdnb_open VARCHAR(100),
  KEY (batiment_groupe_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

LOAD DATA LOCAL INFILE '/var/osm/csv/batiment_groupe_synthese_propriete_usage.csv'
INTO TABLE tmp_usage_999
CHARACTER SET 'UTF8mb4'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;

-- Temp table for Enedis DLE (electricity meters)
DROP TABLE IF EXISTS tmp_dle_999;
CREATE TABLE tmp_dle_999 (
  batiment_groupe_id VARCHAR(50),
  code_departement_insee VARCHAR(5),
  millesime VARCHAR(10),
  nb_pdl_res INT,
  nb_pdl_pro INT,
  nb_pdl_tot INT,
  conso_res DECIMAL(12,2),
  conso_pro DECIMAL(12,2),
  conso_tot DECIMAL(12,2),
  conso_res_par_pdl DECIMAL(12,2),
  conso_pro_par_pdl DECIMAL(12,2),
  conso_tot_par_pdl DECIMAL(12,2),
  KEY (batiment_groupe_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

LOAD DATA LOCAL INFILE '/var/osm/csv/batiment_groupe_dle_elec_multimillesime.csv'
INTO TABLE tmp_dle_999
CHARACTER SET 'UTF8mb4'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;

-- Création de la table finale avec jointure et filtre
|
|
||||||
DROP TABLE IF EXISTS bat999;
|
|
||||||
CREATE TABLE bat999 (
|
|
||||||
batiment_groupe_id VARCHAR(50) PRIMARY KEY,
|
|
||||||
code_departement_insee VARCHAR(5),
|
|
||||||
cle_interop_adr VARCHAR(50),
|
|
||||||
nb_niveau INT,
|
|
||||||
nb_log INT,
|
|
||||||
nb_pdl_tot INT,
|
|
||||||
annee_construction INT,
|
|
||||||
residence VARCHAR(200),
|
|
||||||
usage_principal VARCHAR(100),
|
|
||||||
altitude_sol_mean DECIMAL(10,2),
|
|
||||||
gps_lat DECIMAL(10,7),
|
|
||||||
gps_lng DECIMAL(10,7),
|
|
||||||
KEY (cle_interop_adr),
|
|
||||||
KEY (usage_principal),
|
|
||||||
KEY (nb_log)
|
|
||||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
|
|
||||||
|
|
||||||
INSERT INTO bat999
|
|
||||||
SELECT
|
|
||||||
f.batiment_groupe_id,
|
|
||||||
f.code_departement_insee,
|
|
||||||
a.cle_interop_adr,
|
|
||||||
f.nb_niveau,
|
|
||||||
f.nb_log,
|
|
||||||
d.nb_pdl_tot,
|
|
||||||
f.annee_construction,
|
|
||||||
REPLACE(REPLACE(REPLACE(REPLACE(r.l_nom_copro, '[', ''), ']', ''), '"', ''), ' ', ' ') as residence,
|
|
||||||
u.usage_principal_bdnb_open as usage_principal,
|
|
||||||
t.altitude_sol_mean,
|
|
||||||
NULL as gps_lat,
|
|
||||||
NULL as gps_lng
|
|
||||||
FROM tmp_ffo_999 f
|
|
||||||
INNER JOIN tmp_adr_999 a ON f.batiment_groupe_id = a.batiment_groupe_id AND a.lien_valide = 1
|
|
||||||
LEFT JOIN tmp_rnc_999 r ON f.batiment_groupe_id = r.batiment_groupe_id
|
|
||||||
LEFT JOIN tmp_topo_999 t ON f.batiment_groupe_id = t.batiment_groupe_id
|
|
||||||
LEFT JOIN tmp_usage_999 u ON f.batiment_groupe_id = u.batiment_groupe_id
|
|
||||||
LEFT JOIN tmp_dle_999 d ON f.batiment_groupe_id = d.batiment_groupe_id
|
|
||||||
WHERE u.usage_principal_bdnb_open IN ('Résidentiel individuel', 'Résidentiel collectif', 'Secondaire', 'Tertiaire')
|
|
||||||
AND f.nb_log > 1
|
|
||||||
AND a.cle_interop_adr IS NOT NULL
|
|
||||||
GROUP BY f.batiment_groupe_id;
|
|
||||||
|
|
||||||
-- Mise à jour des coordonnées GPS depuis la base adresses
|
|
||||||
UPDATE bat999 b
|
|
||||||
JOIN adresses.cp999 a ON b.cle_interop_adr = a.id
|
|
||||||
SET b.gps_lat = a.gps_lat, b.gps_lng = a.gps_lng
|
|
||||||
WHERE b.cle_interop_adr IS NOT NULL;
|
|
||||||
|
|
||||||
-- Nettoyage des tables temporaires
|
|
||||||
DROP TABLE IF EXISTS tmp_ffo_999;
|
|
||||||
DROP TABLE IF EXISTS tmp_adr_999;
|
|
||||||
DROP TABLE IF EXISTS tmp_rnc_999;
|
|
||||||
DROP TABLE IF EXISTS tmp_topo_999;
|
|
||||||
DROP TABLE IF EXISTS tmp_usage_999;
|
|
||||||
DROP TABLE IF EXISTS tmp_dle_999;
|
|
||||||
|
|
||||||
-- Historique
|
|
||||||
INSERT INTO _histo SET date_import=NOW(), dept='999', nb_batiments=(SELECT COUNT(*) FROM bat999);
|
|
||||||
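`LOAD DATA LOCAL INFILE ... IGNORE 1 LINES` imports whatever the file contains without complaint, so it can help to sanity-check a CSV's shape before loading. A minimal sketch (the sample file and its three columns are hypothetical, not the real BDNB export):

```shell
#!/bin/sh
# Sketch: check header column count and data-row count before a LOAD DATA run.
csv=/tmp/batiment_groupe_rnc_sample.csv
printf 'batiment_groupe_id,code_departement_insee,l_nom_copro\nbg_1,75,"Res A"\n' > "$csv"
# The header line is what IGNORE 1 LINES will skip.
cols=$(head -1 "$csv" | awk -F',' '{print NF}')
rows=$(($(wc -l < "$csv") - 1))
echo "columns=$cols data_rows=$rows"
```

Comparing `cols` against the target table's column count catches truncated or re-ordered exports before they silently load as NULLs.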
@@ -42,8 +42,6 @@ require_once __DIR__ . '/src/Controllers/ChatController.php';
 require_once __DIR__ . '/src/Controllers/SecurityController.php';
 require_once __DIR__ . '/src/Controllers/StripeController.php';
 require_once __DIR__ . '/src/Controllers/StripeWebhookController.php';
-require_once __DIR__ . '/src/Controllers/MigrationController.php';
-require_once __DIR__ . '/src/Controllers/HealthController.php';

 // Initialiser la configuration
 $appConfig = AppConfig::getInstance();
167	api/livre-api.sh	Executable file
@@ -0,0 +1,167 @@
#!/bin/bash

# Check arguments
if [ $# -ne 1 ]; then
    echo "Usage: $0 <environment>"
    echo "  rca : deliver from DVA (dva-geo) to RECETTE (rca-geo)"
    echo "  pra : deliver from RECETTE (rca-geo) to PRODUCTION (pra-geo)"
    echo ""
    echo "Examples:"
    echo "  $0 rca   # DVA → RECETTE"
    echo "  $0 pra   # RECETTE → PRODUCTION"
    exit 1
fi

HOST_IP="195.154.80.116"
HOST_USER=root
HOST_KEY=/home/pierre/.ssh/id_rsa_mbpi
HOST_PORT=22

# Environment mapping
ENVIRONMENT=$1
case $ENVIRONMENT in
    "rca")
        SOURCE_CONTAINER="dva-geo"
        DEST_CONTAINER="rca-geo"
        ENV_NAME="RECETTE"
        ;;
    "pra")
        SOURCE_CONTAINER="rca-geo"
        DEST_CONTAINER="pra-geo"
        ENV_NAME="PRODUCTION"
        ;;
    *)
        echo "❌ Unknown environment '$ENVIRONMENT'"
        echo "Use 'rca' for RECETTE or 'pra' for PRODUCTION"
        exit 1
        ;;
esac
API_PATH="/var/www/geosector/api"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
BACKUP_DIR="${API_PATH}_backup_${TIMESTAMP}"
PROJECT="default"

echo "🔄 Delivering to $ENV_NAME: $SOURCE_CONTAINER → $DEST_CONTAINER (project: $PROJECT)"

# Check that the containers exist
echo "🔍 Checking containers..."
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus info $SOURCE_CONTAINER --project $PROJECT" > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "❌ Error: source container $SOURCE_CONTAINER does not exist or is not reachable in project $PROJECT"
    exit 1
fi

ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus info $DEST_CONTAINER --project $PROJECT" > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "❌ Error: destination container $DEST_CONTAINER does not exist or is not reachable in project $PROJECT"
    exit 1
fi

# Back up the destination directory before replacing it
echo "📦 Creating a backup on $DEST_CONTAINER..."
# Check whether the API directory exists
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- test -d $API_PATH"
if [ $? -eq 0 ]; then
    # The directory exists, create a backup
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- cp -r $API_PATH $BACKUP_DIR"
    echo "✅ Backup created in $BACKUP_DIR"
else
    echo "⚠️ The API directory does not exist on the destination"
fi

# Copy the API directory between containers
echo "📋 Copying files..."

# Selective cleanup: remove only the code, not the data (logs and uploads)
echo "🧹 Selective cleanup (preserving logs and uploads)..."
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- find $API_PATH -mindepth 1 -maxdepth 1 ! -name 'uploads' ! -name 'logs' -exec rm -rf {} \;"

# Copy directly from the source container to the destination container (excluding logs and uploads)
echo "📤 Transferring the code (excluding logs and uploads)..."
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $SOURCE_CONTAINER --project $PROJECT -- tar -cf - -C $API_PATH --exclude='uploads' --exclude='logs' . | incus exec $DEST_CONTAINER --project $PROJECT -- tar -xf - -C $API_PATH"
if [ $? -ne 0 ]; then
    echo "❌ Error during the transfer between containers"
    echo "⚠️ Attempting to restore the backup..."
    # Check whether the backup exists
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- test -d $BACKUP_DIR"
    if [ $? -eq 0 ]; then
        # The backup exists, restore it
        ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- rm -rf $API_PATH"
        ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- cp -r $BACKUP_DIR $API_PATH"
        echo "✅ Restore succeeded"
    else
        echo "❌ Restore failed"
    fi
    exit 1
fi

echo "✅ Code transferred successfully (logs and uploads preserved)"

# Set owner and permissions on the files
echo "👤 Applying ownership and permissions to all files..."

# Set the owner for all files
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- chown -R nginx:nginx $API_PATH"

# Base permissions for directories (755)
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- find $API_PATH -type d -exec chmod 755 {} \;"

# Permissions for files (644)
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- find $API_PATH -type f -exec chmod 644 {} \;"

# Specific permissions for the logs directory (so PHP-FPM's nobody user can write to it)
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- test -d $API_PATH/logs"
if [ $? -eq 0 ]; then
    # Change the logs directory group to nobody (the PHP-FPM user)
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- chown -R nginx:nobody $API_PATH/logs"
    # 775 on the directory tree
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- chmod -R 775 $API_PATH/logs"
    # 664 on the files
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- find $API_PATH/logs -type f -exec chmod 664 {} \;"
    echo "✅ Specific rights applied to the logs directory (nginx:nobody, permissions 775/664)"
else
    echo "⚠️ The logs directory does not exist"
fi

# Check and fix the uploads directory permissions if it exists
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- test -d $API_PATH/uploads"
if [ $? -eq 0 ]; then
    # Make sure uploads has the right permissions
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- chown -R nginx:nobody $API_PATH/uploads"
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- chmod -R 775 $API_PATH/uploads"
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- find $API_PATH/uploads -type f -exec chmod 664 {} \;"
    echo "✅ Rights checked for the uploads directory (nginx:nobody, permissions 775/664)"
else
    # Create the uploads directory if it does not exist
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- mkdir -p $API_PATH/uploads"
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- chown -R nginx:nobody $API_PATH/uploads"
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- chmod -R 775 $API_PATH/uploads"
    ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- find $API_PATH/uploads -type f -exec chmod 664 {} \;"
    echo "✅ uploads directory created with the right permissions (nginx:nobody, permissions 775/664)"
fi

echo "✅ Ownership and permissions applied successfully"

# Update Composer dependencies
echo "📦 Updating Composer dependencies on $DEST_CONTAINER..."
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- bash -c 'cd $API_PATH && composer update --no-dev --optimize-autoloader'" > /dev/null 2>&1
if [ $? -eq 0 ]; then
    echo "✅ Composer dependencies updated successfully"
else
    echo "⚠️ Composer unavailable or failed, continuing without updating dependencies"
fi

# Verify the copy
echo "✅ Verifying the copy..."
ssh -i $HOST_KEY -p $HOST_PORT $HOST_USER@$HOST_IP "incus exec $DEST_CONTAINER --project $PROJECT -- test -d $API_PATH"
if [ $? -eq 0 ]; then
    echo "✅ Copy succeeded"
else
    echo "❌ Error: the API directory was not copied correctly"
fi

echo "✅ Delivery to $ENV_NAME completed successfully!"
echo "📤 Source: $SOURCE_CONTAINER → Destination: $DEST_CONTAINER"
echo "📁 Backup created: $BACKUP_DIR on $DEST_CONTAINER"
echo "🔒 Data preserved: logs/ and uploads/ untouched"
echo "👤 Permissions: nginx:nginx (755/644) + logs (nginx:nobody 775/664)"
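The transfer step in the script above pipes a tar stream from one `incus exec` session into another while excluding the data directories. The same exclude-and-pipe pattern can be tried locally with two temporary directories (a sketch, not the real containers):

```shell
#!/bin/sh
# Sketch: replicate the code-only tar pipe between two local directories.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/src" "$src/logs" "$src/uploads"
echo '<?php' > "$src/src/index.php"
echo old-entry > "$src/logs/app.log"
# Create the archive on one side, extract on the other, skipping the data dirs.
tar -cf - -C "$src" --exclude='uploads' --exclude='logs' . | tar -xf - -C "$dst"
ls "$dst"
```

Because extraction happens directly into the existing `$API_PATH`, anything the selective cleanup kept (here `logs/` and `uploads/`) survives the deployment untouched.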
@@ -1,290 +0,0 @@
# 🔧 CRITICAL FIXES - migrate_from_backup.php

## ❌ ERRORS DETECTED

### 1. **migrateUsers** (line 456)
```sql
-- ERROR
u.nom, u.prenom, u.nom_sect, u.username, u.password, u.phone, u.mobile

-- FIX (actual column names in geosector.users)
u.libelle, u.prenom, u.nom_tournee, u.username, u.userpass, u.telephone, u.mobile
```

### 2. **migrateOpePass** (line 1043)
```sql
-- ERROR
p.passed_at, p.libelle, p.email, p.phone

-- FIX (actual column names in geosector.ope_pass)
p.date_eve AS passed_at, p.libelle AS encrypted_name, p.email, p.phone
```

### 3. **migrateSectorsAdresses** (line 777)
```sql
-- ERROR
sa.osm_id, sa.osm_name, sa.osm_date_creat

-- FIX (these columns do NOT exist in geosector.sectors_adresses)
-- They must be set to 0 or NULL in the target
0 AS osm_id, '' AS osm_name, NULL AS osm_date_creat
```

### 4. **migrateOpeUsersSectors** (line 955)
```sql
-- ERROR
ous.date_creat, ous.fk_user_creat, ous.date_modif, ous.fk_user_modif

-- FIX (geosector.ope_users_sectors does NOT have these columns)
NULL AS created_at, NULL AS fk_user_creat, NULL AS updated_at, NULL AS fk_user_modif
```

### 5. **migrateMedias** (to verify)
```sql
-- Potential ERROR
m.support_rowid

-- FIX
m.support_rowid AS support_id
```

### 6. **migrateOperations** (NOT NULL error)
```sql
-- PROBLEM: Column 'fk_user_modif' cannot be null
-- FIX: use 0 instead of NULL
'fk_user_modif' => $row['fk_user_modif'] ?? 0
```

---
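The SOURCE column renames from items 1-2 can be collected into a single lookup, sketched here in shell purely as an illustration (the real script does the mapping in its PHP SELECT statements):

```shell
#!/bin/sh
# Map a wrong column name used by the script to the real geosector column.
real_column() {
    case $1 in
        nom)       echo libelle ;;
        nom_sect)  echo nom_tournee ;;
        password)  echo userpass ;;
        phone)     echo telephone ;;
        passed_at) echo date_eve ;;
        *)         echo "$1" ;;   # already correct
    esac
}
real_column password
real_column prenom
```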
## ✅ QUICK FIX

Create a `HOTFIX_migrate.sql` script for a fast correction:

```sql
-- Allow NULL on the problematic columns
ALTER TABLE operations MODIFY COLUMN fk_user_modif INT(10) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE ope_sectors MODIFY COLUMN fk_user_modif INT(10) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE ope_users MODIFY COLUMN fk_user_creat INT(10) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE ope_users MODIFY COLUMN fk_user_modif INT(10) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE ope_users_sectors MODIFY COLUMN fk_user_creat INT(10) UNSIGNED NULL DEFAULT NULL;
ALTER TABLE ope_users_sectors MODIFY COLUMN fk_user_modif INT(10) UNSIGNED NULL DEFAULT NULL;
```

OR systematically use `0` instead of `NULL` in the PHP script.

---
## 📋 STATUS OF THE FIXES (10/10/2025)

1. ✅ **migrateEntites** - FIXED (cp, tel1, tel2, demo)
2. ✅ **migrateUsers** - FIXED (libelle, nom_tournee, telephone, userpass, alert_email) - lines 455-537
3. ✅ **migrateOperations** - FIXED (fk_user_modif ?? 0, fk_user_creat ?? 0) - lines 614-625
4. ✅ **migrateOpeSectors** - FIXED (fk_user_modif ?? 0, fk_user_creat ?? 0) - lines 727-738
5. ✅ **migrateSectorsAdresses** - FIXED (osm_id=0, osm_name='', osm_date_creat=null, created_at/updated_at=null) - lines 776-855
6. ✅ **migrateOpeUsers** - FIXED (checks that the user exists in TARGET before inserting) - lines 960-1020
7. ✅ **migrateOpeUsersSectors** - FIXED (date_creat, fk_user_creat, date_modif, fk_user_modif = null + user check) - lines 1054-1135
8. ✅ **migrateOpePass** - FIXED (date_eve, libelle, recu + fk_type_reglement forced to 4 when invalid + user check) - lines 1215-1330
9. ✅ **migrateMedias** - FIXED (support_rowid, type_fichier, hauteur/largeur) - lines 1281-1343
10. ✅ **countTargetRows()** - FIXED (table-specific SQL queries with correct JOINs) - lines 303-355

---

## ✅ FIXES APPLIED

**All of the errors above have been fixed in `migrate_from_backup.php`.**

The fixes include:
- Using the real SOURCE column names (`geosector-structure.sql`)
- Handling columns missing from SOURCE with default values
- Using `?? 0` instead of `?? null` for NOT NULL foreign keys
- Removing nonexistent columns from the SELECT queries

**WARNING**: The TARGET column names have NOT been checked against `geo_app_structure.sql`.
The script may be using the wrong TARGET names (to be checked against `migrate_users.php` and the other reference `migrate_*.php` scripts).

---
## 🔧 RECENT FIXES (current session)

### 10. **FK users check** (lines 1008-1015, 1117-1125, 1257-1266)
**Problem**: FK constraint violations because some `fk_user` values reference users missing from TARGET.

**Solution**: check for existence before inserting:
```php
// Check that fk_user exists in the TARGET users table
$checkUser = $this->targetDb->prepare("SELECT id FROM users WHERE id = ?");
$checkUser->execute([$row['fk_user']]);
if (!$checkUser->fetch()) {
    $this->log("  ⚠ Record {$row['rowid']}: user {$row['fk_user']} not found, skipped", 'WARNING');
    continue;
}
```

**Applied to**:
- `migrateOpeUsers()` - line 1008
- `migrateOpeUsersSectors()` - line 1117
- `migrateOpePass()` - line 1257

**Result**: records with invalid FKs are skipped with a WARNING instead of causing a fatal error.

### 11. **countTargetRows() - table-specific SQL queries** (lines 303-355)
**Problem**: SQL errors because the tables do not all share the same columns/relations:
- `Unknown column 'fk_entite' in 'WHERE'` for `entites`
- `Unknown column 't.fk_operation' in 'ON'` for `operations`, `ope_pass_histo`, `medias`

**Solution**: per-table SQL queries:
```php
// For entites: no FK, just the ID
if ($tableName === 'entites') {
    $sql = "SELECT COUNT(*) as count FROM $tableName WHERE id = :entity_id";
}
// For operations: direct FK to entites
else if ($tableName === 'operations') {
    $sql = "SELECT COUNT(*) as count FROM $tableName WHERE fk_entite = :entity_id";
}
// For sectors_adresses: JOIN via ope_sectors -> operations
else if ($tableName === 'sectors_adresses') {
    $sql = "SELECT COUNT(*) as count FROM $tableName sa
            INNER JOIN ope_sectors s ON sa.fk_sector = s.id
            INNER JOIN operations o ON s.fk_operation = o.id
            WHERE o.fk_entite = :entity_id";
}
// For tables with a direct fk_operation
else if (in_array($tableName, ['ope_sectors', 'ope_users', 'ope_users_sectors', 'ope_pass', 'ope_pass_histo', 'medias'])) {
    $sql = "SELECT COUNT(*) as count FROM $tableName t
            INNER JOIN operations o ON t.fk_operation = o.id
            WHERE o.fk_entite = :entity_id";
}
```

**Result**: accurate, error-free TARGET counts for every table.

### 12. **fk_type_reglement validation** (lines 1237-1241)
**Problem**: FK violations because some `fk_type_reglement` values reference IDs that do not exist in `x_types_reglements` (valid IDs: 1, 2, 3).

**Solution**: force the value to 4 ("-") when invalid (as in `migrate_ope_pass.php`):
```php
// Check and correct the payment type
$fkTypeReglement = $row['fk_type_reglement'] ?? 1;
if (!in_array($fkTypeReglement, [1, 2, 3])) {
    $fkTypeReglement = 4; // Force to 4 ("-") when different from 1, 2 or 3
}
```

**Result**: all `ope_pass` rows migrate without FK violations on `fk_type_reglement`.
### 13. **Limit to the 3 latest operations** (lines 646-647) ⚠️ IMPORTANT
**Problem**: ALL operations were migrated instead of only the 3 latest.

**Solution**: add `ORDER BY rowid DESC LIMIT 3` to the query:
```php
// Only migrate the 3 latest (most recent) operations
$sql .= " ORDER BY rowid DESC LIMIT 3";
```

**Result**: only the 3 most recent operations (by rowid DESC) are migrated per entity.
**Impact**: drastically reduces the migrated data volume, along with every linked table (ope_sectors, ope_users, ope_users_sectors, ope_pass, medias, sectors_adresses).
### 14. **Option to delete before migrating** (lines 127-200, 1692, 1722, 1776) ⭐ NEW FEATURE
**Need**: allow deleting an entity's existing data in TARGET before migrating, to start from a clean slate.

**Solution**: add a `--delete-before` parameter:

**Bash script** (lines 174-183):
```bash
# Ask whether to delete the entity's data before migrating
echo -ne "${YELLOW}3️⃣ Delete this entity's existing data in the TARGET before migrating? (y/N): ${NC}"
read -r DELETE_BEFORE
DELETE_FLAG=""
if [[ $DELETE_BEFORE =~ ^[Yy]$ ]]; then
    echo -e "${GREEN}✓${NC} The data will be deleted before migration"
    DELETE_FLAG="--delete-before"
fi
```

**PHP script** - `deleteEntityData()` method (lines 127-200):
```php
private function deleteEntityData($entityId) {
    // Delete in reverse order to respect the FKs
    $deletionOrder = [
        'medias', 'ope_pass_histo', 'ope_pass', 'ope_users_sectors',
        'ope_users', 'sectors_adresses', 'ope_sectors', 'operations', 'users'
    ];

    foreach ($deletionOrder as $table) {
        // Delete via a JOIN with operations to respect the FKs
        DELETE t FROM $table t
        INNER JOIN operations o ON t.fk_operation = o.id
        WHERE o.fk_entite = ?
    }
}
```

**Result**:
- In interactive mode, the user can choose to delete existing data before migrating
- Clean deletion in reverse FK order (no constraint errors)
- The entity itself is NOT deleted (it may have other linked data)
- Transaction with rollback on error

**Usage**:
```bash
# Interactive
./scripts/migrate_batch.sh
# Choose option d) then answer 'y' to the deletion question

# Direct
php migrate_from_backup.php --source-db=geosector_20251008 --mode=entity --entity-id=5 --delete-before
```

---
## 📊 TEST MIGRATION RESULTS (entity #5)

Latest run with all fixes applied:
- ✅ **Entites**: 1 SOURCE → 1 TARGET
- ✅ **Users**: 21 SOURCE → 21 TARGET (100%)
- ✅ **Operations**: 4 SOURCE → 4 TARGET (100%)
- ✅ **Ope_sectors**: 64 SOURCE → 64 TARGET (100%)
- ⚠️ **Sectors_adresses**: 1975 SOURCE → 1040 TARGET (difference of -935, to investigate)
- ✅ **Ope_users**: 20 migrated (0 errors after the FK check)
- ✅ **Ope_users_sectors**: 20 migrated (0 errors after the FK check)
- ⚠️ **Ope_pass**: 466 errors (missing users - expected behaviour with FK validation)
- ✅ **Medias**: migration succeeded
### 15. **Adding UNIQUE constraints to prevent duplicates** (10/10/2025) ⭐ MISSING CONSTRAINTS
**Problem**: the `ope_users` and `ope_users_sectors` tables had NO UNIQUE constraint on their FK combinations, allowing massive duplication.

**Diagnosis**:
- Table `ope_users`: 186+ duplicates for the same (fk_operation, fk_user) pair
- Table `ope_users_sectors`: risk of duplicates on (fk_operation, fk_user, fk_sector)
- `ON DUPLICATE KEY UPDATE` had no effect because no UNIQUE constraint existed

**Solution**: a new script, `scripts/sql/add_unique_constraints_ope_tables.sql`, which:
1. Removes the existing duplicates (keeps the first occurrence, deletes the rest)
2. Adds `UNIQUE KEY idx_operation_user (fk_operation, fk_user)` on `ope_users`
3. Adds `UNIQUE KEY idx_operation_user_sector (fk_operation, fk_user, fk_sector)` on `ope_users_sectors`
4. Verifies the constraints and counts any remaining duplicates
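Steps 1-2 of that script could look roughly like the DDL below, generated here into a file for inspection. This is a sketch only: the real script's contents are not shown in this diff, and the assumption that each table has an auto-increment `id` usable to pick which duplicate to keep is mine, not the document's.

```shell
#!/bin/sh
# Sketch: dedupe ope_users, then add the UNIQUE key so upserts start working.
ddl_file=$(mktemp)
cat > "$ddl_file" <<'SQL'
-- 1. Drop duplicates, keeping the lowest id per (fk_operation, fk_user)
DELETE t1 FROM ope_users t1
JOIN ope_users t2
  ON t1.fk_operation = t2.fk_operation
 AND t1.fk_user      = t2.fk_user
 AND t1.id > t2.id;
-- 2. The UNIQUE constraint then makes ON DUPLICATE KEY UPDATE effective
ALTER TABLE ope_users
  ADD UNIQUE KEY idx_operation_user (fk_operation, fk_user);
SQL
# Would be applied with something like: mysql -u root -p pra_geo < "$ddl_file"
grep -c 'UNIQUE KEY' "$ddl_file"
```

The self-join delete must run first: `ADD UNIQUE KEY` fails outright while duplicate pairs are still present.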
**Files changed**:
- `scripts/sql/add_unique_constraints_ope_tables.sql` - SQL script adding the constraints
- `scripts/php/geo_app_structure.sql` - documentation of the target structure with the constraints

**To run BEFORE the next migration**:
```bash
mysql -u root -p pra_geo < scripts/sql/add_unique_constraints_ope_tables.sql
```

**Then re-migrate the entity**:
```bash
php migrate_from_backup.php --source-db=geosector_20251008 --mode=entity --entity-id=5 --delete-before
```

---
**Next steps**:
1. ✅ Run the SQL script to add the UNIQUE constraints
2. ✅ Re-migrate entity #5 with `--delete-before` to confirm the absence of duplicates
3. Investigate the -935 difference on `sectors_adresses`
4. Analyse the 466 errors on `ope_pass` (probably references to users from other entities)
5. Test on another entity to validate that the fixes are stable
@@ -1,350 +0,0 @@
# Instructions for modifying the migration scripts

## Changes to make

### 1. migrate_from_backup.php

#### A. Replace lines 31-50 (DB configuration)

**OLD**:
```php
private $sourceDbName;
private $targetDbName;
private $sourceDb;
private $targetDb;
private $mode;
private $entityId;
private $logFile;
private $deleteBefore;

// MariaDB configuration (maria4 on IN4)
// pra-geo connects to maria4 via the container's IP
private const DB_HOST = '13.23.33.4'; // maria4 on IN4
private const DB_PORT = 3306;
private const DB_USER = 'pra_geo_user';
private const DB_PASS = 'd2jAAGGWi8fxFrWgXjOA';

// For the source (backup) database, we use pra_geo_user (with SELECT on geosector_*)
// The root user is not reachable from pra-geo (13.23.33.22)
private const DB_USER_ROOT = 'pra_geo_user';
private const DB_PASS_ROOT = 'd2jAAGGWi8fxFrWgXjOA';
```

**NEW**:
```php
private $sourceDbName;
private $targetDbName;
private $sourceDb;
private $targetDb;
private $mode;
private $entityId;
private $logFile;
private $deleteBefore;
private $env;

// Multi-environment configuration
private const ENVIRONMENTS = [
    'rca' => [
        'host' => '13.23.33.3', // maria3 on IN3
        'port' => 3306,
        'user' => 'rca_geo_user',
        'pass' => 'UPf3C0cQ805LypyM71iW',
        'target_db' => 'rca_geo',
        'source_db' => 'geosector' // Database synchronised by PM7
    ],
    'pra' => [
        'host' => '13.23.33.4', // maria4 on IN4
        'port' => 3306,
        'user' => 'pra_geo_user',
        'pass' => 'd2jAAGGWi8fxFrWgXjOA',
        'target_db' => 'pra_geo',
        'source_db' => 'geosector' // Database synchronised by PM7
    ]
];
```
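The routing that the NEW `ENVIRONMENTS` map encodes (environment name → host, user, target database) can be sketched in shell, using the values from the block above; this is an illustration of the mapping, not part of the PHP script:

```shell
#!/bin/sh
# Resolve an environment name to its DB endpoint, mirroring ENVIRONMENTS above.
env=rca
case $env in
    rca) host=13.23.33.3; user=rca_geo_user; target_db=rca_geo ;;
    pra) host=13.23.33.4; user=pra_geo_user; target_db=pra_geo ;;
    *)   echo "Invalid environment: $env. Use 'rca' or 'pra'" >&2; exit 1 ;;
esac
echo "$env -> host=$host db=$target_db"
```

Centralising the mapping this way is what lets the constructor below take a single `$env` argument instead of explicit source/target database names.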
#### B. Modifier le constructeur (ligne 67)
|
|
||||||
|
|
||||||
**ANCIEN** :
|
|
||||||
```php
public function __construct($sourceDbName, $targetDbName, $mode = 'global', $entityId = null, $logFile = null, $deleteBefore = false) {
    $this->sourceDbName = $sourceDbName;
    $this->targetDbName = $targetDbName;
    $this->mode = $mode;
    $this->entityId = $entityId;
    $this->logFile = $logFile ?? '/var/back/migration_' . date('Ymd_His') . '.log';
    $this->deleteBefore = $deleteBefore;

    $this->log("=== Migration depuis backup PM7 ===");
    $this->log("Source: {$sourceDbName}");
    $this->log("Cible: {$targetDbName}");
    $this->log("Mode: {$mode}");
```

**NEW**:

```php
public function __construct($env, $mode = 'global', $entityId = null, $logFile = null, $deleteBefore = false) {
    // Validation de l'environnement
    if (!isset(self::ENVIRONMENTS[$env])) {
        throw new Exception("Invalid environment: $env. Use 'rca' or 'pra'");
    }

    $this->env = $env;
    $config = self::ENVIRONMENTS[$env];
    $this->sourceDbName = $config['source_db'];
    $this->targetDbName = $config['target_db'];
    $this->mode = $mode;
    $this->entityId = $entityId;
    $this->logFile = $logFile ?? '/var/back/migration_' . date('Ymd_His') . '.log';
    $this->deleteBefore = $deleteBefore;

    $this->log("=== Migration depuis backup PM7 ===");
    $this->log("Environment: {$env}");
    $this->log("Source: {$this->sourceDbName} → Cible: {$this->targetDbName}");
    $this->log("Mode: {$mode}");
```

#### C. Update connect() (lines 90-112)

**Replace all the constants**:

- `self::DB_HOST` → `self::ENVIRONMENTS[$this->env]['host']`
- `self::DB_PORT` → `self::ENVIRONMENTS[$this->env]['port']`
- `self::DB_USER_ROOT` → `self::ENVIRONMENTS[$this->env]['user']`
- `self::DB_PASS_ROOT` → `self::ENVIRONMENTS[$this->env]['pass']`
- `self::DB_USER` → `self::ENVIRONMENTS[$this->env]['user']`
- `self::DB_PASS` → `self::ENVIRONMENTS[$this->env]['pass']`

#### D. Update parseArguments() (near the end of the file)

**OLD**:

```php
$args = [
    'source-db' => null,
    'target-db' => 'pra_geo',
    'mode' => 'global',
    'entity-id' => null,
    'log' => null,
    'delete-before' => true,
    'help' => false
];
```

**NEW**:

```php
$args = [
    'env' => 'rca', // Défaut: recette
    'mode' => 'global',
    'entity-id' => null,
    'log' => null,
    'delete-before' => true,
    'help' => false
];
```

#### E. Update showHelp()

**OLD**:

```php
--source-db=NAME   Nom de la base source (backup restauré, ex: geosector_20251007) [REQUIS]
--target-db=NAME   Nom de la base cible (défaut: pra_geo)
```

**NEW**:

```php
--env=ENV          Environment: 'rca' (recette) ou 'pra' (production) [défaut: rca]
```

**OLD** (examples):

```php
php migrate_from_backup.php --source-db=geosector_20251007 --target-db=pra_geo --mode=global
```

**NEW**:

```php
php migrate_from_backup.php --env=pra --mode=global
php migrate_from_backup.php --env=rca --mode=entity --entity-id=45
```

#### F. Update argument validation

**OLD**:

```php
if (!$args['source-db']) {
    echo "Erreur: --source-db est requis\n\n";
    showHelp();
    exit(1);
}
```

**NEW**:

```php
if (!in_array($args['env'], ['rca', 'pra'])) {
    echo "Erreur: --env doit être 'rca' ou 'pra'\n\n";
    showHelp();
    exit(1);
}
```
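
For a quick sanity check outside PHP, the same whitelist logic can be sketched in shell; this is only an illustration, and the `check_env` helper name is hypothetical:

```shell
# Hypothetical shell sketch of the --env whitelist check
check_env() {
    case "$1" in
        rca|pra) echo "env ok: $1"; return 0 ;;
        *) echo "Erreur: --env doit être 'rca' ou 'pra'" >&2; return 1 ;;
    esac
}

check_env pra   # prints "env ok: pra"
```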

#### G. Update the BackupMigration instantiation

**OLD**:

```php
$migration = new BackupMigration(
    $args['source-db'],
    $args['target-db'],
    $args['mode'],
    $args['entity-id'],
    $args['log'],
    (bool)$args['delete-before']
);
```

**NEW**:

```php
$migration = new BackupMigration(
    $args['env'],
    $args['mode'],
    $args['entity-id'],
    $args['log'],
    (bool)$args['delete-before']
);
```

---

### 2. migrate_batch.sh

#### A. Add automatic environment detection (after line 22)

**ADD**:
```bash
# Détection automatique de l'environnement
if [ -f "/etc/hostname" ]; then
    CONTAINER_NAME=$(cat /etc/hostname)
    case $CONTAINER_NAME in
        rca-geo)
            ENV="rca"
            ;;
        pra-geo)
            ENV="pra"
            ;;
        *)
            ENV="rca" # Défaut
            ;;
    esac
else
    ENV="rca" # Défaut
fi
```

#### B. Replace lines 26-27

**OLD**:

```bash
SOURCE_DB="geosector_20251013_13"
TARGET_DB="pra_geo"
```

**NEW**:

```bash
# SOURCE_DB et TARGET_DB ne sont plus utilisés
# Ils sont déduits de --env dans migrate_from_backup.php
```

#### C. Add an --env option to the argument parsing (line 68)

**ADD before `--interactive|-i)`**:

```bash
--env)
    ENV="$2"
    shift 2
    ;;
```

#### D. Update the PHP calls - lines 200-206

**OLD**:

```bash
php "$MIGRATION_SCRIPT" \
    --source-db="$SOURCE_DB" \
    --target-db="$TARGET_DB" \
    --mode=entity \
    --entity-id="$SPECIFIC_ENTITY_ID" \
    --log="$ENTITY_LOG" \
    $DELETE_FLAG
```

**NEW**:

```bash
php "$MIGRATION_SCRIPT" \
    --env="$ENV" \
    --mode=entity \
    --entity-id="$SPECIFIC_ENTITY_ID" \
    --log="$ENTITY_LOG" \
    $DELETE_FLAG
```

#### E. Update the PHP calls - lines 374-379

**OLD**:

```bash
php "$MIGRATION_SCRIPT" \
    --source-db="$SOURCE_DB" \
    --target-db="$TARGET_DB" \
    --mode=entity \
    --entity-id="$ENTITY_ID" \
    --log="$ENTITY_LOG" > /tmp/migration_output_$$.txt 2>&1
```

**NEW**:

```bash
php "$MIGRATION_SCRIPT" \
    --env="$ENV" \
    --mode=entity \
    --entity-id="$ENTITY_ID" \
    --log="$ENTITY_LOG" > /tmp/migration_output_$$.txt 2>&1
```

#### F. Update the log messages (lines 289-291)

**OLD**:

```bash
log "📅 Date: $(date '+%Y-%m-%d %H:%M:%S')"
log "📁 Source: $SOURCE_DB"
log "📁 Cible: $TARGET_DB"
```

**NEW**:

```bash
log "📅 Date: $(date '+%Y-%m-%d %H:%M:%S')"
log "🌍 Environment: $ENV"
log "📁 Source: geosector → Target: (déduit de \$ENV)"
```

---

## New usage

### On rca-geo (IN3)

```bash
# Automatic detection
./migrate_batch.sh

# Or explicitly
./migrate_batch.sh --env=rca

# Direct PHP migration
php php/migrate_from_backup.php --env=rca --mode=entity --entity-id=45
```

### On pra-geo (IN4)

```bash
# Automatic detection
./migrate_batch.sh

# Or explicitly
./migrate_batch.sh --env=pra

# Direct PHP migration
php php/migrate_from_backup.php --env=pra --mode=entity --entity-id=45
```
File diff suppressed because it is too large
@@ -1,165 +0,0 @@
#!/bin/bash

##############################################################################
# Script de mise à jour des paramètres PHP-FPM pour GeoSector
#
# Usage:
#   ./update_php_fpm_settings.sh dev   # Pour DVA
#   ./update_php_fpm_settings.sh rec   # Pour RCA
#   ./update_php_fpm_settings.sh prod  # Pour PRA
##############################################################################

set -e

# Couleurs
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Déterminer l'environnement
ENV=${1:-dev}

case $ENV in
    dev)
        CONTAINER="dva-geo"
        TIMEOUT=180
        MAX_REQUESTS=1000
        MEMORY_LIMIT=512M
        ;;
    rec)
        CONTAINER="rca-geo"
        TIMEOUT=120
        MAX_REQUESTS=2000
        MEMORY_LIMIT=256M
        ;;
    prod)
        CONTAINER="pra-geo"
        TIMEOUT=120
        MAX_REQUESTS=2000
        MEMORY_LIMIT=256M
        ;;
    *)
        echo -e "${RED}Erreur: Environnement invalide '$ENV'${NC}"
        echo "Usage: $0 [dev|rec|prod]"
        exit 1
        ;;
esac

echo -e "${GREEN}=== Mise à jour PHP-FPM pour $ENV ($CONTAINER) ===${NC}"
echo ""

# Vérifier que le container existe
if ! incus list | grep -q "$CONTAINER"; then
    echo -e "${RED}Erreur: Container $CONTAINER non trouvé${NC}"
    exit 1
fi

# Trouver le fichier de configuration
echo "Recherche du fichier de configuration PHP-FPM..."
POOL_FILE=$(incus exec $CONTAINER -- find /etc/php* -name "www.conf" 2>/dev/null | grep fpm/pool | head -1)

if [ -z "$POOL_FILE" ]; then
    echo -e "${RED}Erreur: Fichier pool PHP-FPM non trouvé${NC}"
    exit 1
fi

echo -e "${GREEN}✓ Fichier trouvé: $POOL_FILE${NC}"
echo ""

# Sauvegarder le fichier original
BACKUP_FILE="${POOL_FILE}.backup.$(date +%Y%m%d_%H%M%S)"
echo "Création d'une sauvegarde..."
incus exec $CONTAINER -- cp "$POOL_FILE" "$BACKUP_FILE"
echo -e "${GREEN}✓ Sauvegarde créée: $BACKUP_FILE${NC}"
echo ""

# Afficher les valeurs actuelles
echo "Valeurs actuelles:"
incus exec $CONTAINER -- grep -E "^(request_terminate_timeout|pm.max_requests|memory_limit)" "$POOL_FILE" || echo "  (non définies)"
echo ""

# Créer un fichier temporaire avec les nouvelles valeurs
TMP_FILE="/tmp/php_fpm_update_$$.conf"

cat > $TMP_FILE << EOF
; === Configuration GeoSector - Modifié le $(date +%Y-%m-%d) ===

; Timeout des requêtes
request_terminate_timeout = ${TIMEOUT}s

; Nombre max de requêtes avant recyclage du worker
pm.max_requests = ${MAX_REQUESTS}

; Limite mémoire PHP
php_admin_value[memory_limit] = ${MEMORY_LIMIT}

; Log des requêtes lentes
slowlog = /var/log/php8.3-fpm-slow.log
request_slowlog_timeout = 10s
EOF

echo "Nouvelles valeurs à appliquer:"
cat $TMP_FILE
echo ""

# Demander confirmation
read -p "Appliquer ces modifications ? (y/N) " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "Annulé."
    rm $TMP_FILE
    exit 0
fi

# Supprimer les anciennes valeurs si présentes
echo "Suppression des anciennes valeurs..."
incus exec $CONTAINER -- sed -i '/^request_terminate_timeout/d' "$POOL_FILE"
incus exec $CONTAINER -- sed -i '/^pm.max_requests/d' "$POOL_FILE"
incus exec $CONTAINER -- sed -i '/^php_admin_value\[memory_limit\]/d' "$POOL_FILE"
incus exec $CONTAINER -- sed -i '/^slowlog/d' "$POOL_FILE"
incus exec $CONTAINER -- sed -i '/^request_slowlog_timeout/d' "$POOL_FILE"

# Ajouter les nouvelles valeurs à la fin du fichier
echo "Ajout des nouvelles valeurs..."
incus file push $TMP_FILE $CONTAINER/tmp/php_fpm_settings.conf
incus exec $CONTAINER -- bash -c "cat /tmp/php_fpm_settings.conf >> $POOL_FILE"
incus exec $CONTAINER -- rm /tmp/php_fpm_settings.conf

rm $TMP_FILE

echo -e "${GREEN}✓ Configuration mise à jour${NC}"
echo ""

# Tester la configuration
echo "Test de la configuration PHP-FPM..."
if incus exec $CONTAINER -- php-fpm8.3 -t; then
    echo -e "${GREEN}✓ Configuration valide${NC}"
else
    echo -e "${RED}✗ Configuration invalide !${NC}"
    echo "Restauration de la sauvegarde..."
    incus exec $CONTAINER -- cp "$BACKUP_FILE" "$POOL_FILE"
    exit 1
fi

echo ""
echo "Redémarrage de PHP-FPM..."
incus exec $CONTAINER -- rc-service php-fpm8.3 restart

if [ $? -eq 0 ]; then
    echo -e "${GREEN}✓ PHP-FPM redémarré avec succès${NC}"
else
    echo -e "${RED}✗ Erreur lors du redémarrage${NC}"
    echo "Restauration de la sauvegarde..."
    incus exec $CONTAINER -- cp "$BACKUP_FILE" "$POOL_FILE"
    incus exec $CONTAINER -- rc-service php-fpm8.3 restart
    exit 1
fi

echo ""
echo -e "${GREEN}=== Mise à jour terminée avec succès ===${NC}"
echo ""
echo "Vérification des nouvelles valeurs:"
incus exec $CONTAINER -- grep -E "^(request_terminate_timeout|pm.max_requests|php_admin_value\[memory_limit\])" "$POOL_FILE"
echo ""
echo "Sauvegarde disponible: $BACKUP_FILE"
@@ -1,273 +0,0 @@
# CRON task documentation - Geosector API

This folder contains the automated maintenance and processing scripts for the Geosector API.

## Available scripts

### 1. `process_email_queue.php`

**Purpose**: processes the pending email queue (tax receipts, notifications)

**Characteristics**:

- Processes at most 50 emails per run
- At most 3 attempts per email
- Lock file to prevent concurrent runs
- Automatic cleanup of sent emails older than 30 days

**Recommended frequency**: every 5 minutes

**Crontab line**:

```bash
*/5 * * * * /usr/bin/php /var/www/geosector/api/scripts/cron/process_email_queue.php >> /var/www/geosector/api/logs/email_queue.log 2>&1
```

---

### 2. `cleanup_security_data.php`

**Purpose**: purges obsolete security data according to the retention policy

**Data cleaned**:

- Performance metrics: 30 days
- Failed login attempts: 7 days
- Resolved alerts: 90 days
- Expired IPs: unblocked immediately

**Recommended frequency**: daily at 2 a.m.

**Crontab line**:

```bash
0 2 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/cleanup_security_data.php >> /var/www/geosector/api/logs/cleanup_security.log 2>&1
```

---

### 3. `cleanup_logs.php`

**Purpose**: deletes log files older than 10 days

**Characteristics**:

- Targets every `*.log` file in `/api/logs/`
- Excludes the `/logs/events/` folder (15-month retention)
- Retention: 10 days
- Detailed logging of deleted files and freed space
- Lock file to prevent concurrent runs

**Recommended frequency**: daily at 3 a.m.

**Crontab line**:

```bash
0 3 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/cleanup_logs.php >> /var/www/geosector/api/logs/cleanup_logs.log 2>&1
```
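
As a rough illustration of what `cleanup_logs.php` does, the same retention rule can be expressed with `find`; this one-liner is a sketch, not the script itself, and the path is only meaningful on the containers:

```shell
# Sketch: delete *.log files older than 10 days
# (-maxdepth 1 means the events/ subfolder is never descended into)
LOG_DIR=/var/www/geosector/api/logs
find "$LOG_DIR" -maxdepth 1 -name '*.log' -mtime +10 -print -delete
```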

---

### 4. `rotate_event_logs.php`

**Purpose**: rotates the JSONL event logs (EventLogService system)

**Retention policy (15 months)**:

- 0-15 months: `.jsonl` files kept (uncompressed so the API can read them)
- > 15 months: automatic deletion

**Characteristics**:

- Deletes files older than 15 months
- No compression (files stay readable by the API)
- Detailed logging of deletions
- Lock file to prevent concurrent runs

**Recommended frequency**: monthly, on the 1st at 3 a.m.

**Crontab line**:

```bash
0 3 1 * * /usr/bin/php /var/www/geosector/api/scripts/cron/rotate_event_logs.php >> /var/www/geosector/api/logs/rotation_events.log 2>&1
```
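
The 15-month cutoff can be approximated in shell as well; treating 15 months as roughly 456 days is an assumption for this sketch, not how the PHP script necessarily computes its cutoff:

```shell
# Sketch: 15 months ≈ 456 days; delete .jsonl event logs older than that
EVENT_DIR=/var/www/geosector/api/logs/events
find "$EVENT_DIR" -name '*.jsonl' -mtime +456 -print -delete
```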

---

### 5. `update_stripe_devices.php`

**Purpose**: updates the list of Android devices certified for Tap to Pay

**Characteristics**:

- Built-in list of 95+ devices
- Adds newly certified devices
- Updates minimum Android versions
- Disables obsolete devices
- Email notification on significant changes
- Can be customised via `/data/stripe_certified_devices.json`

**Recommended frequency**: weekly, Sunday at 3 a.m.

**Crontab line**:

```bash
0 3 * * 0 /usr/bin/php /var/www/geosector/api/scripts/cron/update_stripe_devices.php >> /var/www/geosector/api/logs/stripe_devices.log 2>&1
```

---

### 6. `sync_databases.php`

**Purpose**: synchronises databases between environments

**Note**: this script targets one specific use case. Check that it is actually needed before enabling it.

**Recommended frequency**: to be defined as needed

**Crontab line**:

```bash
# To be configured as needed
# 0 4 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/sync_databases.php >> /var/www/geosector/api/logs/sync_databases.log 2>&1
```

---

## Installation on the Incus containers

### 1. Deploy the scripts to each environment

```bash
# DEV (dva-geo on IN3)
./deploy-api.sh

# STAGING (rca-geo on IN3)
./deploy-api.sh rca

# PRODUCTION (pra-geo on IN4)
./deploy-api.sh pra
```

### 2. Configure the crontab on each container

```bash
# Connect to the container
incus exec dva-geo -- sh  # or rca-geo, pra-geo

# Edit the crontab
crontab -e

# Add the lines below (adjust the paths if needed)
```

### 3. Recommended full configuration

```bash
# Email queue processing (every 5 minutes)
*/5 * * * * /usr/bin/php /var/www/geosector/api/scripts/cron/process_email_queue.php >> /var/www/geosector/api/logs/email_queue.log 2>&1

# Security data cleanup (daily at 2 a.m.)
0 2 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/cleanup_security_data.php >> /var/www/geosector/api/logs/cleanup_security.log 2>&1

# Old log cleanup (daily at 3 a.m.)
0 3 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/cleanup_logs.php >> /var/www/geosector/api/logs/cleanup_logs.log 2>&1

# Event log rotation (monthly, 1st at 3 a.m.)
0 3 1 * * /usr/bin/php /var/www/geosector/api/scripts/cron/rotate_event_logs.php >> /var/www/geosector/api/logs/rotation_events.log 2>&1

# Stripe device update (weekly, Sunday at 3 a.m.)
0 3 * * 0 /usr/bin/php /var/www/geosector/api/scripts/cron/update_stripe_devices.php >> /var/www/geosector/api/logs/stripe_devices.log 2>&1
```

### 4. Check that the CRONs are active

```bash
# List the configured CRONs
crontab -l

# Check the logs to confirm they run
tail -f /var/www/geosector/api/logs/email_queue.log
tail -f /var/www/geosector/api/logs/cleanup_logs.log
```

---

## Monitoring

### Log locations

All CRON logs are stored in `/var/www/geosector/api/logs/`:

- `email_queue.log`: email queue processing
- `cleanup_security.log`: security data cleanup
- `cleanup_logs.log`: old log file cleanup
- `rotation_events.log`: JSONL event log rotation
- `stripe_devices.log`: Tap to Pay device updates

### Checking execution

```bash
# Latest email processor runs
tail -n 50 /var/www/geosector/api/logs/email_queue.log

# Latest log cleanups
tail -n 50 /var/www/geosector/api/logs/cleanup_logs.log

# Latest event log rotations
tail -n 50 /var/www/geosector/api/logs/rotation_events.log

# Latest Stripe updates
tail -n 50 /var/www/geosector/api/logs/stripe_devices.log
```

---

## Important notes

1. **Environment detection**: every script detects its environment automatically via `gethostname()`:

   - `pra-geo` → production (app3.geosector.fr)
   - `rca-geo` → staging (rapp.geosector.fr)
   - `dva-geo` → development (dapp.geosector.fr)

2. **Lock files**: critical scripts use lock files in `/tmp/` to prevent concurrent runs

3. **Permissions**: the scripts must be executable (`chmod +x script.php`)

4. **Logs**: every script logs through `LogService` for full traceability
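
The hostname-based detection can be sketched in shell for reference; the `hostname_to_domain` helper is hypothetical, since the scripts themselves do this in PHP with `gethostname()`:

```shell
# Hypothetical helper mirroring the gethostname() mapping above
hostname_to_domain() {
    case "$1" in
        pra-*) echo "app3.geosector.fr" ;;
        rca-*) echo "rapp.geosector.fr" ;;
        *)     echo "dapp.geosector.fr" ;;  # dva-geo and anything else
    esac
}

hostname_to_domain pra-geo   # → app3.geosector.fr
```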

---

## Troubleshooting

### The CRON does not run

```bash
# Check that the cron service is running
rc-service crond status  # Alpine Linux

# Restart the service if needed
rc-service crond restart
```

### Permission errors

```bash
# Check the script permissions
ls -l /var/www/geosector/api/scripts/cron/

# Make executable if needed
chmod +x /var/www/geosector/api/scripts/cron/*.php

# Check the logs folder permissions
ls -ld /var/www/geosector/api/logs/
```

### Stuck lock file

```bash
# If a script seems stuck, remove its lock file
rm /tmp/process_email_queue.lock
rm /tmp/cleanup_logs.lock
```
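
Deleting locks unconditionally can interrupt a run that is still in progress; the PHP scripts only discard locks older than 30 minutes, and that safer behaviour can be mimicked with `find`. This is a sketch, assuming the lock names listed above:

```shell
# Only remove locks whose mtime is older than 30 minutes
# (matches the staleness threshold used by the PHP scripts)
for lock in /tmp/process_email_queue.lock /tmp/cleanup_logs.lock; do
    [ -e "$lock" ] && find "$lock" -mmin +30 -delete
done
```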
@@ -1,165 +0,0 @@
#!/usr/bin/env php
<?php

/**
 * Script CRON pour nettoyer les anciens fichiers de logs
 * Supprime les fichiers .log de plus de 10 jours dans le dossier /logs/
 *
 * À exécuter quotidiennement via crontab :
 * 0 3 * * * /usr/bin/php /var/www/geosector/api/scripts/cron/cleanup_logs.php >> /var/www/geosector/api/logs/cleanup_logs.log 2>&1
 */

declare(strict_types=1);

// Configuration
define('LOG_RETENTION_DAYS', 10);
define('LOCK_FILE', '/tmp/cleanup_logs.lock');

// Empêcher l'exécution multiple simultanée
if (file_exists(LOCK_FILE)) {
    $lockTime = filemtime(LOCK_FILE);
    // Si le lock a plus de 30 minutes, on le supprime (processus probablement bloqué)
    if (time() - $lockTime > 1800) {
        unlink(LOCK_FILE);
    } else {
        die("Le processus est déjà en cours d'exécution\n");
    }
}

// Créer le fichier de lock
file_put_contents(LOCK_FILE, getmypid());

// Enregistrer un handler pour supprimer le lock en cas d'arrêt
register_shutdown_function(function() {
    if (file_exists(LOCK_FILE)) {
        unlink(LOCK_FILE);
    }
});

// Simuler l'environnement web pour AppConfig en CLI
if (php_sapi_name() === 'cli') {
    // Détecter l'environnement basé sur le hostname
    $hostname = gethostname();
    if (strpos($hostname, 'pra') !== false) {
        $_SERVER['SERVER_NAME'] = 'app3.geosector.fr';
    } elseif (strpos($hostname, 'rca') !== false) {
        $_SERVER['SERVER_NAME'] = 'rapp.geosector.fr';
    } else {
        $_SERVER['SERVER_NAME'] = 'dapp.geosector.fr'; // DVA par défaut
    }

    $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_HOST'] ?? $_SERVER['SERVER_NAME'];
    $_SERVER['REMOTE_ADDR'] = $_SERVER['REMOTE_ADDR'] ?? '127.0.0.1';

    // Définir getallheaders si elle n'existe pas (CLI)
    if (!function_exists('getallheaders')) {
        function getallheaders() {
            return [];
        }
    }
}

// Chargement de l'environnement
require_once __DIR__ . '/../../vendor/autoload.php';
require_once __DIR__ . '/../../src/Config/AppConfig.php';
require_once __DIR__ . '/../../src/Services/LogService.php';

try {
    // Initialisation de la configuration
    $appConfig = AppConfig::getInstance();
    $environment = $appConfig->getEnvironment();

    // Définir le chemin du dossier logs
    $logDir = __DIR__ . '/../../logs';

    if (!is_dir($logDir)) {
        echo "Le dossier de logs n'existe pas : {$logDir}\n";
        exit(0);
    }

    // Date limite (10 jours en arrière)
    $cutoffDate = time() - (LOG_RETENTION_DAYS * 24 * 60 * 60);

    // Lister tous les fichiers .log (exclure le dossier events/)
    $logFiles = glob($logDir . '/*.log');

    // Exclure explicitement les logs du sous-dossier events/
    $logFiles = array_filter($logFiles, function($file) {
        return strpos($file, '/events/') === false;
    });

    if (empty($logFiles)) {
        echo "Aucun fichier .log trouvé dans {$logDir}\n";
        exit(0);
    }

    $deletedCount = 0;
    $deletedSize = 0;
    $deletedFiles = [];

    foreach ($logFiles as $file) {
        $fileTime = filemtime($file);

        // Vérifier si le fichier est plus vieux que la date limite
        if ($fileTime < $cutoffDate) {
            $fileSize = filesize($file);
            $fileName = basename($file);

            if (unlink($file)) {
                $deletedCount++;
                $deletedSize += $fileSize;
                $deletedFiles[] = $fileName;
                echo "Supprimé : {$fileName} (" . number_format($fileSize / 1024, 2) . " KB)\n";
            } else {
                echo "ERREUR : Impossible de supprimer {$fileName}\n";
            }
        }
    }

    // Logger le résumé
    if ($deletedCount > 0) {
        $message = sprintf(
            "Nettoyage des logs terminé - %d fichier(s) supprimé(s) - %.2f MB libérés",
            $deletedCount,
            $deletedSize / (1024 * 1024)
        );

        LogService::log($message, [
            'level' => 'info',
            'script' => 'cleanup_logs.php',
            'environment' => $environment,
            'deleted_count' => $deletedCount,
            'deleted_size_mb' => round($deletedSize / (1024 * 1024), 2),
            'deleted_files' => $deletedFiles
        ]);

        echo "\n" . $message . "\n";
    } else {
        echo "Aucun fichier à supprimer (tous les logs ont moins de " . LOG_RETENTION_DAYS . " jours)\n";
    }

} catch (Exception $e) {
    $errorMsg = 'Erreur lors du nettoyage des logs : ' . $e->getMessage();

    LogService::log($errorMsg, [
        'level' => 'error',
        'script' => 'cleanup_logs.php',
        'trace' => $e->getTraceAsString()
    ]);

    echo $errorMsg . "\n";

    // Supprimer le lock en cas d'erreur
    if (file_exists(LOCK_FILE)) {
        unlink(LOCK_FILE);
    }

    exit(1);
}

// Supprimer le lock
if (file_exists(LOCK_FILE)) {
    unlink(LOCK_FILE);
}

exit(0);
@@ -41,14 +41,14 @@ register_shutdown_function(function() {
 if (php_sapi_name() === 'cli') {
     // Détecter l'environnement basé sur le hostname ou un paramètre
     $hostname = gethostname();
-    if (strpos($hostname, 'pra') !== false) {
+    if (strpos($hostname, 'prod') !== false) {
-        $_SERVER['SERVER_NAME'] = 'app3.geosector.fr';
+        $_SERVER['SERVER_NAME'] = 'app.geosector.fr';
-    } elseif (strpos($hostname, 'rca') !== false) {
+    } elseif (strpos($hostname, 'rec') !== false || strpos($hostname, 'rapp') !== false) {
         $_SERVER['SERVER_NAME'] = 'rapp.geosector.fr';
     } else {
         $_SERVER['SERVER_NAME'] = 'dapp.geosector.fr'; // DVA par défaut
     }

     $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_HOST'] ?? $_SERVER['SERVER_NAME'];
     $_SERVER['REMOTE_ADDR'] = $_SERVER['REMOTE_ADDR'] ?? '127.0.0.1';

@@ -69,7 +69,6 @@ require_once __DIR__ . '/../../src/Services/LogService.php';
 use PHPMailer\PHPMailer\PHPMailer;
 use PHPMailer\PHPMailer\SMTP;
 use PHPMailer\PHPMailer\Exception;
-use App\Services\LogService;

 try {
     // Initialisation de la configuration
@@ -1,169 +0,0 @@
#!/usr/bin/env php
<?php

/**
 * Script CRON pour rotation des logs d'événements JSONL
 *
 * Politique de rétention : 15 mois
 * - 0-15 mois : fichiers .jsonl conservés (non compressés pour accès API)
 * - > 15 mois : suppression
 *
 * À exécuter mensuellement via crontab (1er du mois à 3h) :
 * 0 3 1 * * /usr/bin/php /var/www/geosector/api/scripts/cron/rotate_event_logs.php >> /var/www/geosector/api/logs/rotation_events.log 2>&1
 */

declare(strict_types=1);

// Configuration
define('RETENTION_MONTHS', 15); // Conserver 15 mois
define('LOCK_FILE', '/tmp/rotate_event_logs.lock');

// Empêcher l'exécution multiple simultanée
if (file_exists(LOCK_FILE)) {
    $lockTime = filemtime(LOCK_FILE);
    // Si le lock a plus de 2 heures, on le supprime (processus probablement bloqué)
    if (time() - $lockTime > 7200) {
        unlink(LOCK_FILE);
    } else {
        die("Le processus est déjà en cours d'exécution\n");
    }
}

// Créer le fichier de lock
file_put_contents(LOCK_FILE, getmypid());

// Enregistrer un handler pour supprimer le lock en cas d'arrêt
register_shutdown_function(function() {
    if (file_exists(LOCK_FILE)) {
        unlink(LOCK_FILE);
    }
});

// Simuler l'environnement web pour AppConfig en CLI
if (php_sapi_name() === 'cli') {
    // Détecter l'environnement basé sur le hostname
    $hostname = gethostname();
    if (strpos($hostname, 'pra') !== false) {
        $_SERVER['SERVER_NAME'] = 'app3.geosector.fr';
    } elseif (strpos($hostname, 'rca') !== false) {
        $_SERVER['SERVER_NAME'] = 'rapp.geosector.fr';
    } else {
        $_SERVER['SERVER_NAME'] = 'dapp.geosector.fr'; // DVA par défaut
    }

    $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_HOST'] ?? $_SERVER['SERVER_NAME'];
    $_SERVER['REMOTE_ADDR'] = $_SERVER['REMOTE_ADDR'] ?? '127.0.0.1';

    // Définir getallheaders si elle n'existe pas (CLI)
    if (!function_exists('getallheaders')) {
        function getallheaders() {
            return [];
        }
    }
}

// Chargement de l'environnement
require_once __DIR__ . '/../../vendor/autoload.php';
require_once __DIR__ . '/../../src/Config/AppConfig.php';
require_once __DIR__ . '/../../src/Services/LogService.php';

try {
    // Initialisation de la configuration
    $appConfig = AppConfig::getInstance();
    $environment = $appConfig->getEnvironment();

    // Définir le chemin du dossier des logs événements
    $eventLogDir = __DIR__ . '/../../logs/events';

    if (!is_dir($eventLogDir)) {
        echo "Le dossier de logs événements n'existe pas : {$eventLogDir}\n";
        exit(0);
    }

    // Date limite de suppression
    $deletionDate = strtotime('-' . RETENTION_MONTHS . ' months');

    // Lister tous les fichiers .jsonl
    $jsonlFiles = glob($eventLogDir . '/*.jsonl');

    if (empty($jsonlFiles)) {
        echo "Aucun fichier .jsonl trouvé dans {$eventLogDir}\n";
        exit(0);
    }

    $deletedCount = 0;
    $deletedSize = 0;
    $deletedFiles = [];

    // ========================================
    // Suppression des fichiers > 15 mois
    // ========================================
    foreach ($jsonlFiles as $file) {
        $fileTime = filemtime($file);

        // Vérifier si le fichier est plus vieux que la date de rétention
        if ($fileTime < $deletionDate) {
            $fileSize = filesize($file);
            $fileName = basename($file);

            if (unlink($file)) {
                $deletedCount++;
                $deletedSize += $fileSize;
                $deletedFiles[] = $fileName;
                echo "Supprimé : {$fileName} (> " . RETENTION_MONTHS . " mois, " .
                     number_format($fileSize / 1024, 2) . " KB)\n";
            } else {
                echo "ERREUR : Impossible de supprimer {$fileName}\n";
            }
        }
    }

    // ========================================
    // RÉSUMÉ ET LOGGING
    // ========================================
    if ($deletedCount > 0) {
        $message = sprintf(
            "Rotation des logs événements terminée - %d fichier(s) supprimé(s) - %.2f MB libérés",
            $deletedCount,
            $deletedSize / (1024 * 1024)
        );

        LogService::log($message, [
            'level' => 'info',
            'script' => 'rotate_event_logs.php',
            'environment' => $environment,
            'deleted_count' => $deletedCount,
            'deleted_size_mb' => round($deletedSize / (1024 * 1024), 2),
            'deleted_files' => $deletedFiles
        ]);

        echo "\n" . $message . "\n";
    } else {
        echo "Aucune rotation nécessaire - Tous les fichiers .jsonl ont moins de " . RETENTION_MONTHS . " mois\n";
    }

} catch (Exception $e) {
    $errorMsg = 'Erreur lors de la rotation des logs événements : ' . $e->getMessage();

    LogService::log($errorMsg, [
        'level' => 'error',
        'script' => 'rotate_event_logs.php',
        'trace' => $e->getTraceAsString()
    ]);

    echo $errorMsg . "\n";

    // Supprimer le lock en cas d'erreur
    if (file_exists(LOCK_FILE)) {
        unlink(LOCK_FILE);
    }

    exit(1);
}

// Supprimer le lock
if (file_exists(LOCK_FILE)) {
    unlink(LOCK_FILE);
}

exit(0);
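The retention logic in the deleted script reduces to comparing each file's mtime with a cutoff computed by `strtotime('-15 months')`; a minimal sketch of that check, assuming a hypothetical `isExpired()` helper that is not part of the original script:

```php
<?php
declare(strict_types=1);

/**
 * Indique si un fichier de log dépasse la durée de rétention.
 *
 * @param int      $fileTime mtime du fichier (timestamp Unix)
 * @param int      $months   durée de rétention en mois
 * @param int|null $now      horloge injectable pour les tests
 */
function isExpired(int $fileTime, int $months, ?int $now = null): bool
{
    $now = $now ?? time();
    // Date limite : $months mois avant $now
    $cutoff = strtotime("-{$months} months", $now);
    return $fileTime < $cutoff;
}
```

Injecting the clock (`$now`) keeps the cutoff deterministic under test, which the inline `strtotime()` call in the script does not allow.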
186
api/scripts/cron/test_email_queue.php
Executable file
@@ -0,0 +1,186 @@
#!/usr/bin/env php
<?php

/**
 * Script de test pour vérifier le processeur de queue d'emails
 * Affiche les emails en attente sans les envoyer
 */

declare(strict_types=1);

// Simuler l'environnement web pour AppConfig en CLI
if (php_sapi_name() === 'cli') {
    $_SERVER['SERVER_NAME'] = $_SERVER['SERVER_NAME'] ?? 'dapp.geosector.fr'; // DVA par défaut
    $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_HOST'] ?? $_SERVER['SERVER_NAME'];
    $_SERVER['REMOTE_ADDR'] = $_SERVER['REMOTE_ADDR'] ?? '127.0.0.1';

    // Définir getallheaders si elle n'existe pas (CLI)
    if (!function_exists('getallheaders')) {
        function getallheaders() {
            return [];
        }
    }
}

require_once __DIR__ . '/../../src/Core/Database.php';
require_once __DIR__ . '/../../src/Config/AppConfig.php';

try {
    // Initialiser la configuration
    $appConfig = AppConfig::getInstance();
    $dbConfig = $appConfig->getDatabaseConfig();

    // Initialiser la base de données avec la configuration
    Database::init($dbConfig);
    $db = Database::getInstance();

    echo "=== TEST DE LA QUEUE D'EMAILS ===\n\n";

    // Statistiques générales
    $stmt = $db->query('
        SELECT
            status,
            COUNT(*) as count,
            MIN(created_at) as oldest,
            MAX(created_at) as newest
        FROM email_queue
        GROUP BY status
    ');

    $stats = $stmt->fetchAll(PDO::FETCH_ASSOC);

    echo "STATISTIQUES:\n";
    echo "-------------\n";
    foreach ($stats as $stat) {
        echo sprintf(
            "Status: %s - Nombre: %d (Plus ancien: %s, Plus récent: %s)\n",
            $stat['status'],
            $stat['count'],
            $stat['oldest'] ?? 'N/A',
            $stat['newest'] ?? 'N/A'
        );
    }

    echo "\n";

    // Emails en attente
    $stmt = $db->prepare('
        SELECT
            eq.id,
            eq.fk_pass,
            eq.to_email,
            eq.subject,
            eq.created_at,
            eq.attempts,
            eq.status,
            p.fk_type,
            p.montant,
            p.nom_recu
        FROM email_queue eq
        LEFT JOIN ope_pass p ON eq.fk_pass = p.id
        WHERE eq.status = ?
        ORDER BY eq.created_at DESC
        LIMIT 10
    ');

    $stmt->execute(['pending']);
    $pendingEmails = $stmt->fetchAll(PDO::FETCH_ASSOC);

    if (empty($pendingEmails)) {
        echo "Aucun email en attente.\n";
    } else {
        echo "EMAILS EN ATTENTE (10 plus récents):\n";
        echo "------------------------------------\n";
        foreach ($pendingEmails as $email) {
            echo sprintf(
                "ID: %d | Passage: %d | Destinataire: %s\n",
                $email['id'],
                $email['fk_pass'],
                $email['to_email']
            );
            echo sprintf(
                " Sujet: %s\n",
                $email['subject']
            );
            echo sprintf(
                " Créé le: %s | Tentatives: %d\n",
                $email['created_at'],
                $email['attempts']
            );
            if ($email['fk_pass'] > 0) {
                echo sprintf(
                    " Passage - Type: %s | Montant: %.2f€ | Reçu: %s\n",
                    $email['fk_type'] == 1 ? 'DON' : 'Autre',
                    $email['montant'] ?? 0,
                    $email['nom_recu'] ?? 'Non généré'
                );
            }
            echo "---\n";
        }
    }

    // Emails échoués
    $stmt = $db->prepare('
        SELECT
            id,
            fk_pass,
            to_email,
            subject,
            created_at,
            attempts,
            error_message
        FROM email_queue
        WHERE status = ?
        ORDER BY created_at DESC
        LIMIT 5
    ');

    $stmt->execute(['failed']);
    $failedEmails = $stmt->fetchAll(PDO::FETCH_ASSOC);

    if (!empty($failedEmails)) {
        echo "\nEMAILS ÉCHOUÉS (5 plus récents):\n";
        echo "--------------------------------\n";
        foreach ($failedEmails as $email) {
            echo sprintf(
                "ID: %d | Passage: %d | Destinataire: %s\n",
                $email['id'],
                $email['fk_pass'],
                $email['to_email']
            );
            echo sprintf(
                " Sujet: %s\n",
                $email['subject']
            );
            echo sprintf(
                " Tentatives: %d | Erreur: %s\n",
                $email['attempts'],
                $email['error_message'] ?? 'Non spécifiée'
            );
            echo "---\n";
        }
    }

    // Vérifier la configuration SMTP
    echo "\nCONFIGURATION SMTP:\n";
    echo "-------------------\n";

    $smtpConfig = $appConfig->getSmtpConfig();
    $emailConfig = $appConfig->getEmailConfig();

    echo "Host: " . ($smtpConfig['host'] ?? 'Non configuré') . "\n";
    echo "Port: " . ($smtpConfig['port'] ?? 'Non configuré') . "\n";
    echo "Username: " . ($smtpConfig['user'] ?? 'Non configuré') . "\n";
    echo "Password: " . (isset($smtpConfig['pass']) ? '***' : 'Non configuré') . "\n";
    echo "Encryption: " . ($smtpConfig['secure'] ?? 'Non configuré') . "\n";
    echo "From Email: " . ($emailConfig['from'] ?? 'Non configuré') . "\n";
    echo "Contact Email: " . ($emailConfig['contact'] ?? 'Non configuré') . "\n";

    echo "\n=== FIN DU TEST ===\n";

} catch (Exception $e) {
    echo "ERREUR: " . $e->getMessage() . "\n";
    exit(1);
}

exit(0);
@@ -1,444 +0,0 @@
#!/usr/bin/env php
<?php

/**
 * Script CRON pour mettre à jour la liste des appareils certifiés Stripe Tap to Pay
 *
 * Ce script récupère et met à jour la liste des appareils Android certifiés
 * pour Tap to Pay en France dans la table stripe_android_certified_devices
 *
 * À exécuter hebdomadairement via crontab :
 * Exemple: 0 3 * * 0 /usr/bin/php /path/to/api/scripts/cron/update_stripe_devices.php
 */

declare(strict_types=1);

// Configuration
define('LOCK_FILE', '/tmp/update_stripe_devices.lock');
define('DEVICES_JSON_URL', 'https://raw.githubusercontent.com/stripe/stripe-terminal-android/master/tap-to-pay/certified-devices.json');
define('DEVICES_LOCAL_FILE', __DIR__ . '/../../data/stripe_certified_devices.json');

// Empêcher l'exécution multiple simultanée
if (file_exists(LOCK_FILE)) {
    $lockTime = filemtime(LOCK_FILE);
    if (time() - $lockTime > 3600) { // Lock de plus d'1 heure = processus bloqué
        unlink(LOCK_FILE);
    } else {
        die("[" . date('Y-m-d H:i:s') . "] Le processus est déjà en cours d'exécution\n");
    }
}

// Créer le fichier de lock
file_put_contents(LOCK_FILE, getmypid());

// Enregistrer un handler pour supprimer le lock en cas d'arrêt
register_shutdown_function(function() {
    if (file_exists(LOCK_FILE)) {
        unlink(LOCK_FILE);
    }
});

// Simuler l'environnement web pour AppConfig en CLI
if (php_sapi_name() === 'cli') {
    $hostname = gethostname();
    if (strpos($hostname, 'prod') !== false || strpos($hostname, 'pra') !== false) {
        $_SERVER['SERVER_NAME'] = 'app3.geosector.fr';
    } elseif (strpos($hostname, 'rec') !== false || strpos($hostname, 'rca') !== false) {
        $_SERVER['SERVER_NAME'] = 'rapp.geosector.fr';
    } else {
        $_SERVER['SERVER_NAME'] = 'dapp.geosector.fr'; // DVA par défaut
    }
    $_SERVER['REQUEST_URI'] = '/cron/update_stripe_devices';
    $_SERVER['REQUEST_METHOD'] = 'CLI';
    $_SERVER['HTTP_HOST'] = $_SERVER['SERVER_NAME'];
    $_SERVER['REMOTE_ADDR'] = '127.0.0.1';

    // Définir getallheaders si elle n'existe pas (CLI)
    if (!function_exists('getallheaders')) {
        function getallheaders() {
            return [];
        }
    }
}

// Charger l'environnement
require_once dirname(dirname(__DIR__)) . '/vendor/autoload.php';
require_once dirname(dirname(__DIR__)) . '/src/Config/AppConfig.php';
require_once dirname(dirname(__DIR__)) . '/src/Core/Database.php';
require_once dirname(dirname(__DIR__)) . '/src/Services/LogService.php';

use App\Services\LogService;

try {
    echo "[" . date('Y-m-d H:i:s') . "] Début de la mise à jour des devices Stripe certifiés\n";

    // Initialiser la configuration et la base de données
    $appConfig = AppConfig::getInstance();
    $dbConfig = $appConfig->getDatabaseConfig();
    Database::init($dbConfig);
    $db = Database::getInstance();

    // Logger le début
    LogService::log("Début de la mise à jour des devices Stripe certifiés", [
        'source' => 'cron',
        'script' => 'update_stripe_devices.php'
    ]);

    // Étape 1: Récupérer la liste des devices
    $devicesData = fetchCertifiedDevices();

    if (empty($devicesData)) {
        echo "[" . date('Y-m-d H:i:s') . "] Aucune donnée de devices récupérée\n";
        LogService::log("Aucune donnée de devices récupérée", ['level' => 'warning']);
        exit(1);
    }

    // Étape 2: Traiter et mettre à jour la base de données
    $stats = updateDatabase($db, $devicesData);

    // Étape 3: Logger les résultats
    $message = sprintf(
        "Mise à jour terminée : %d ajoutés, %d modifiés, %d désactivés, %d inchangés",
        $stats['added'],
        $stats['updated'],
        $stats['disabled'],
        $stats['unchanged']
    );

    echo "[" . date('Y-m-d H:i:s') . "] $message\n";

    LogService::log($message, [
        'source' => 'cron',
        'stats' => $stats
    ]);

    // Étape 4: Envoyer une notification si changements significatifs
    if ($stats['added'] > 0 || $stats['disabled'] > 0) {
        sendNotification($stats);
    }

    echo "[" . date('Y-m-d H:i:s') . "] Mise à jour terminée avec succès\n";

} catch (Exception $e) {
    $errorMsg = "Erreur lors de la mise à jour des devices: " . $e->getMessage();
    echo "[" . date('Y-m-d H:i:s') . "] $errorMsg\n";
    LogService::log($errorMsg, [
        'level' => 'error',
        'trace' => $e->getTraceAsString()
    ]);
    exit(1);
}

/**
 * Récupère la liste des devices certifiés
 * Essaie d'abord depuis une URL externe, puis depuis un fichier local en fallback
 */
function fetchCertifiedDevices(): array {
    // Liste maintenue manuellement des devices certifiés en France
    // Source: Documentation Stripe Terminal et tests confirmés
    $frenchCertifiedDevices = [
        // Samsung Galaxy S Series
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S21', 'model_identifier' => 'SM-G991B', 'min_android_version' => 11],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S21+', 'model_identifier' => 'SM-G996B', 'min_android_version' => 11],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S21 Ultra', 'model_identifier' => 'SM-G998B', 'min_android_version' => 11],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S21 FE', 'model_identifier' => 'SM-G990B', 'min_android_version' => 11],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S22', 'model_identifier' => 'SM-S901B', 'min_android_version' => 12],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S22+', 'model_identifier' => 'SM-S906B', 'min_android_version' => 12],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S22 Ultra', 'model_identifier' => 'SM-S908B', 'min_android_version' => 12],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S23', 'model_identifier' => 'SM-S911B', 'min_android_version' => 13],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S23+', 'model_identifier' => 'SM-S916B', 'min_android_version' => 13],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S23 Ultra', 'model_identifier' => 'SM-S918B', 'min_android_version' => 13],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S23 FE', 'model_identifier' => 'SM-S711B', 'min_android_version' => 13],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S24', 'model_identifier' => 'SM-S921B', 'min_android_version' => 14],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S24+', 'model_identifier' => 'SM-S926B', 'min_android_version' => 14],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy S24 Ultra', 'model_identifier' => 'SM-S928B', 'min_android_version' => 14],

        // Samsung Galaxy Note
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Note 20', 'model_identifier' => 'SM-N980F', 'min_android_version' => 10],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Note 20 Ultra', 'model_identifier' => 'SM-N986B', 'min_android_version' => 10],

        // Samsung Galaxy Z Fold
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Z Fold3', 'model_identifier' => 'SM-F926B', 'min_android_version' => 11],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Z Fold4', 'model_identifier' => 'SM-F936B', 'min_android_version' => 12],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Z Fold5', 'model_identifier' => 'SM-F946B', 'min_android_version' => 13],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Z Fold6', 'model_identifier' => 'SM-F956B', 'min_android_version' => 14],

        // Samsung Galaxy Z Flip
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Z Flip3', 'model_identifier' => 'SM-F711B', 'min_android_version' => 11],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Z Flip4', 'model_identifier' => 'SM-F721B', 'min_android_version' => 12],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Z Flip5', 'model_identifier' => 'SM-F731B', 'min_android_version' => 13],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy Z Flip6', 'model_identifier' => 'SM-F741B', 'min_android_version' => 14],

        // Samsung Galaxy A Series (haut de gamme)
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy A54', 'model_identifier' => 'SM-A546B', 'min_android_version' => 13],
        ['manufacturer' => 'Samsung', 'model' => 'Galaxy A73', 'model_identifier' => 'SM-A736B', 'min_android_version' => 12],

        // Google Pixel
        ['manufacturer' => 'Google', 'model' => 'Pixel 6', 'model_identifier' => 'oriole', 'min_android_version' => 12],
        ['manufacturer' => 'Google', 'model' => 'Pixel 6 Pro', 'model_identifier' => 'raven', 'min_android_version' => 12],
        ['manufacturer' => 'Google', 'model' => 'Pixel 6a', 'model_identifier' => 'bluejay', 'min_android_version' => 12],
        ['manufacturer' => 'Google', 'model' => 'Pixel 7', 'model_identifier' => 'panther', 'min_android_version' => 13],
        ['manufacturer' => 'Google', 'model' => 'Pixel 7 Pro', 'model_identifier' => 'cheetah', 'min_android_version' => 13],
        ['manufacturer' => 'Google', 'model' => 'Pixel 7a', 'model_identifier' => 'lynx', 'min_android_version' => 13],
        ['manufacturer' => 'Google', 'model' => 'Pixel 8', 'model_identifier' => 'shiba', 'min_android_version' => 14],
        ['manufacturer' => 'Google', 'model' => 'Pixel 8 Pro', 'model_identifier' => 'husky', 'min_android_version' => 14],
        ['manufacturer' => 'Google', 'model' => 'Pixel 8a', 'model_identifier' => 'akita', 'min_android_version' => 14],
        ['manufacturer' => 'Google', 'model' => 'Pixel 9', 'model_identifier' => 'tokay', 'min_android_version' => 14],
        ['manufacturer' => 'Google', 'model' => 'Pixel 9 Pro', 'model_identifier' => 'caiman', 'min_android_version' => 14],
        ['manufacturer' => 'Google', 'model' => 'Pixel 9 Pro XL', 'model_identifier' => 'komodo', 'min_android_version' => 14],
        ['manufacturer' => 'Google', 'model' => 'Pixel Fold', 'model_identifier' => 'felix', 'min_android_version' => 13],
        ['manufacturer' => 'Google', 'model' => 'Pixel Tablet', 'model_identifier' => 'tangorpro', 'min_android_version' => 13],

        // OnePlus
        ['manufacturer' => 'OnePlus', 'model' => '9', 'model_identifier' => 'LE2113', 'min_android_version' => 11],
        ['manufacturer' => 'OnePlus', 'model' => '9 Pro', 'model_identifier' => 'LE2123', 'min_android_version' => 11],
        ['manufacturer' => 'OnePlus', 'model' => '10 Pro', 'model_identifier' => 'NE2213', 'min_android_version' => 12],
        ['manufacturer' => 'OnePlus', 'model' => '10T', 'model_identifier' => 'CPH2413', 'min_android_version' => 12],
        ['manufacturer' => 'OnePlus', 'model' => '11', 'model_identifier' => 'CPH2449', 'min_android_version' => 13],
        ['manufacturer' => 'OnePlus', 'model' => '11R', 'model_identifier' => 'CPH2487', 'min_android_version' => 13],
        ['manufacturer' => 'OnePlus', 'model' => '12', 'model_identifier' => 'CPH2581', 'min_android_version' => 14],
        ['manufacturer' => 'OnePlus', 'model' => '12R', 'model_identifier' => 'CPH2585', 'min_android_version' => 14],
        ['manufacturer' => 'OnePlus', 'model' => 'Open', 'model_identifier' => 'CPH2551', 'min_android_version' => 13],

        // Xiaomi
        ['manufacturer' => 'Xiaomi', 'model' => 'Mi 11', 'model_identifier' => 'M2011K2G', 'min_android_version' => 11],
        ['manufacturer' => 'Xiaomi', 'model' => 'Mi 11 Ultra', 'model_identifier' => 'M2102K1G', 'min_android_version' => 11],
        ['manufacturer' => 'Xiaomi', 'model' => '12', 'model_identifier' => '2201123G', 'min_android_version' => 12],
        ['manufacturer' => 'Xiaomi', 'model' => '12 Pro', 'model_identifier' => '2201122G', 'min_android_version' => 12],
        ['manufacturer' => 'Xiaomi', 'model' => '12T Pro', 'model_identifier' => '2207122MC', 'min_android_version' => 12],
        ['manufacturer' => 'Xiaomi', 'model' => '13', 'model_identifier' => '2211133G', 'min_android_version' => 13],
        ['manufacturer' => 'Xiaomi', 'model' => '13 Pro', 'model_identifier' => '2210132G', 'min_android_version' => 13],
        ['manufacturer' => 'Xiaomi', 'model' => '13T Pro', 'model_identifier' => '23078PND5G', 'min_android_version' => 13],
        ['manufacturer' => 'Xiaomi', 'model' => '14', 'model_identifier' => '23127PN0CG', 'min_android_version' => 14],
        ['manufacturer' => 'Xiaomi', 'model' => '14 Pro', 'model_identifier' => '23116PN5BG', 'min_android_version' => 14],
        ['manufacturer' => 'Xiaomi', 'model' => '14 Ultra', 'model_identifier' => '24030PN60G', 'min_android_version' => 14],

        // OPPO
        ['manufacturer' => 'OPPO', 'model' => 'Find X3 Pro', 'model_identifier' => 'CPH2173', 'min_android_version' => 11],
        ['manufacturer' => 'OPPO', 'model' => 'Find X5 Pro', 'model_identifier' => 'CPH2305', 'min_android_version' => 12],
        ['manufacturer' => 'OPPO', 'model' => 'Find X6 Pro', 'model_identifier' => 'CPH2449', 'min_android_version' => 13],
        ['manufacturer' => 'OPPO', 'model' => 'Find N2', 'model_identifier' => 'CPH2399', 'min_android_version' => 13],
        ['manufacturer' => 'OPPO', 'model' => 'Find N3', 'model_identifier' => 'CPH2499', 'min_android_version' => 13],

        // Realme
        ['manufacturer' => 'Realme', 'model' => 'GT 2 Pro', 'model_identifier' => 'RMX3301', 'min_android_version' => 12],
        ['manufacturer' => 'Realme', 'model' => 'GT 3', 'model_identifier' => 'RMX3709', 'min_android_version' => 13],
        ['manufacturer' => 'Realme', 'model' => 'GT 5 Pro', 'model_identifier' => 'RMX3888', 'min_android_version' => 14],

        // Honor
        ['manufacturer' => 'Honor', 'model' => 'Magic5 Pro', 'model_identifier' => 'PGT-N19', 'min_android_version' => 13],
        ['manufacturer' => 'Honor', 'model' => 'Magic6 Pro', 'model_identifier' => 'BVL-N49', 'min_android_version' => 14],
        ['manufacturer' => 'Honor', 'model' => '90', 'model_identifier' => 'REA-NX9', 'min_android_version' => 13],

        // ASUS
        ['manufacturer' => 'ASUS', 'model' => 'Zenfone 9', 'model_identifier' => 'AI2202', 'min_android_version' => 12],
        ['manufacturer' => 'ASUS', 'model' => 'Zenfone 10', 'model_identifier' => 'AI2302', 'min_android_version' => 13],
        ['manufacturer' => 'ASUS', 'model' => 'ROG Phone 7', 'model_identifier' => 'AI2205', 'min_android_version' => 13],

        // Nothing
        ['manufacturer' => 'Nothing', 'model' => 'Phone (1)', 'model_identifier' => 'A063', 'min_android_version' => 12],
        ['manufacturer' => 'Nothing', 'model' => 'Phone (2)', 'model_identifier' => 'A065', 'min_android_version' => 13],
        ['manufacturer' => 'Nothing', 'model' => 'Phone (2a)', 'model_identifier' => 'A142', 'min_android_version' => 14],
    ];

    // Essayer de charger depuis un fichier JSON local si présent
    if (file_exists(DEVICES_LOCAL_FILE)) {
        $localData = json_decode(file_get_contents(DEVICES_LOCAL_FILE), true);
        if (!empty($localData)) {
            echo "[" . date('Y-m-d H:i:s') . "] Données chargées depuis le fichier local\n";
            return array_merge($frenchCertifiedDevices, $localData);
        }
    }

    echo "[" . date('Y-m-d H:i:s') . "] Utilisation de la liste intégrée des devices certifiés\n";
    return $frenchCertifiedDevices;
}

/**
 * Met à jour la base de données avec les nouvelles données
 */
function updateDatabase($db, array $devices): array {
    $stats = [
        'added' => 0,
        'updated' => 0,
        'disabled' => 0,
        'unchanged' => 0,
        'total' => 0
    ];

    // Récupérer tous les devices existants
    $stmt = $db->prepare("SELECT * FROM stripe_android_certified_devices WHERE country = 'FR'");
    $stmt->execute();
    $existingDevices = [];
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        $key = $row['manufacturer'] . '|' . $row['model'] . '|' . $row['model_identifier'];
        $existingDevices[$key] = $row;
    }

    // Marquer tous les devices pour tracking
    $processedKeys = [];

    // Traiter chaque device de la nouvelle liste
    foreach ($devices as $device) {
        $key = $device['manufacturer'] . '|' . $device['model'] . '|' . $device['model_identifier'];
        $processedKeys[$key] = true;

        if (isset($existingDevices[$key])) {
            // Le device existe, vérifier s'il faut le mettre à jour
            $existing = $existingDevices[$key];

            // Vérifier si des champs ont changé
            $needsUpdate = false;
            if ($existing['min_android_version'] != $device['min_android_version']) {
                $needsUpdate = true;
            }
            if ($existing['tap_to_pay_certified'] != 1) {
                $needsUpdate = true;
            }

            if ($needsUpdate) {
                $stmt = $db->prepare("
                    UPDATE stripe_android_certified_devices
                    SET min_android_version = :min_version,
                        tap_to_pay_certified = 1,
                        last_verified = NOW(),
                        updated_at = NOW()
                    WHERE manufacturer = :manufacturer
                      AND model = :model
                      AND model_identifier = :model_identifier
                      AND country = 'FR'
                ");
                $stmt->execute([
                    'min_version' => $device['min_android_version'],
                    'manufacturer' => $device['manufacturer'],
                    'model' => $device['model'],
                    'model_identifier' => $device['model_identifier']
                ]);
                $stats['updated']++;

                LogService::log("Device mis à jour", [
                    'device' => $device['manufacturer'] . ' ' . $device['model']
                ]);
            } else {
                // Juste mettre à jour last_verified
                $stmt = $db->prepare("
                    UPDATE stripe_android_certified_devices
                    SET last_verified = NOW()
                    WHERE manufacturer = :manufacturer
                      AND model = :model
                      AND model_identifier = :model_identifier
                      AND country = 'FR'
                ");
                $stmt->execute([
                    'manufacturer' => $device['manufacturer'],
                    'model' => $device['model'],
                    'model_identifier' => $device['model_identifier']
                ]);
                $stats['unchanged']++;
            }
        } else {
            // Nouveau device, l'ajouter
            $stmt = $db->prepare("
                INSERT INTO stripe_android_certified_devices
                    (manufacturer, model, model_identifier, tap_to_pay_certified,
                     certification_date, min_android_version, country, notes, last_verified)
                VALUES
                    (:manufacturer, :model, :model_identifier, 1,
                     NOW(), :min_version, 'FR', 'Ajouté automatiquement via CRON', NOW())
            ");
            $stmt->execute([
|
|
||||||
'manufacturer' => $device['manufacturer'],
|
|
||||||
'model' => $device['model'],
|
|
||||||
'model_identifier' => $device['model_identifier'],
|
|
||||||
'min_version' => $device['min_android_version']
|
|
||||||
]);
|
|
||||||
$stats['added']++;
|
|
||||||
|
|
||||||
LogService::log("Nouveau device ajouté", [
|
|
||||||
'device' => $device['manufacturer'] . ' ' . $device['model']
|
|
||||||
]);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Désactiver les devices qui ne sont plus dans la liste
|
|
||||||
foreach ($existingDevices as $key => $existing) {
|
|
||||||
if (!isset($processedKeys[$key]) && $existing['tap_to_pay_certified'] == 1) {
|
|
||||||
$stmt = $db->prepare("
|
|
||||||
UPDATE stripe_android_certified_devices
|
|
||||||
SET tap_to_pay_certified = 0,
|
|
||||||
notes = CONCAT(IFNULL(notes, ''), ' | Désactivé le ', NOW(), ' (non présent dans la mise à jour)'),
|
|
||||||
updated_at = NOW()
|
|
||||||
WHERE id = :id
|
|
||||||
");
|
|
||||||
$stmt->execute(['id' => $existing['id']]);
|
|
||||||
$stats['disabled']++;
|
|
||||||
|
|
||||||
LogService::log("Device désactivé", [
|
|
||||||
'device' => $existing['manufacturer'] . ' ' . $existing['model'],
|
|
||||||
'reason' => 'Non présent dans la liste mise à jour'
|
|
||||||
]);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
$stats['total'] = count($devices);
|
|
||||||
return $stats;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Envoie une notification email aux administrateurs si changements importants
|
|
||||||
*/
|
|
||||||
function sendNotification(array $stats): void {
|
|
||||||
try {
|
|
||||||
// Récupérer la configuration
|
|
||||||
$appConfig = AppConfig::getInstance();
|
|
||||||
$emailConfig = $appConfig->getEmailConfig();
|
|
||||||
|
|
||||||
if (empty($emailConfig['admin_email'])) {
|
|
||||||
return; // Pas d'email admin configuré
|
|
||||||
}
|
|
||||||
|
|
||||||
$db = Database::getInstance();
|
|
||||||
|
|
||||||
// Préparer le contenu de l'email
|
|
||||||
$subject = "Mise à jour des devices Stripe Tap to Pay";
|
|
||||||
$body = "Bonjour,\n\n";
|
|
||||||
$body .= "La mise à jour automatique de la liste des appareils certifiés Stripe Tap to Pay a été effectuée.\n\n";
|
|
||||||
$body .= "Résumé des changements :\n";
|
|
||||||
$body .= "- Nouveaux appareils ajoutés : " . $stats['added'] . "\n";
|
|
||||||
$body .= "- Appareils mis à jour : " . $stats['updated'] . "\n";
|
|
||||||
$body .= "- Appareils désactivés : " . $stats['disabled'] . "\n";
|
|
||||||
$body .= "- Appareils inchangés : " . $stats['unchanged'] . "\n";
|
|
||||||
$body .= "- Total d'appareils traités : " . $stats['total'] . "\n\n";
|
|
||||||
|
|
||||||
if ($stats['added'] > 0) {
|
|
||||||
$body .= "Les nouveaux appareils ont été automatiquement ajoutés à la base de données.\n";
|
|
||||||
}
|
|
||||||
|
|
||||||
if ($stats['disabled'] > 0) {
|
|
||||||
$body .= "Certains appareils ont été désactivés car ils ne sont plus certifiés.\n";
|
|
||||||
}
|
|
||||||
|
|
||||||
$body .= "\nConsultez les logs pour plus de détails.\n";
|
|
||||||
$body .= "\nCordialement,\nLe système GeoSector";
|
|
||||||
|
|
||||||
// Insérer dans la queue d'emails
|
|
||||||
$stmt = $db->prepare("
|
|
||||||
INSERT INTO email_queue
|
|
||||||
(to_email, subject, body, status, created_at, attempts)
|
|
||||||
VALUES
|
|
||||||
(:to_email, :subject, :body, 'pending', NOW(), 0)
|
|
||||||
");
|
|
||||||
|
|
||||||
$stmt->execute([
|
|
||||||
'to_email' => $emailConfig['admin_email'],
|
|
||||||
'subject' => $subject,
|
|
||||||
'body' => $body
|
|
||||||
]);
|
|
||||||
|
|
||||||
echo "[" . date('Y-m-d H:i:s') . "] Notification ajoutée à la queue d'emails\n";
|
|
||||||
|
|
||||||
} catch (Exception $e) {
|
|
||||||
// Ne pas faire échouer le script si l'email ne peut pas être envoyé
|
|
||||||
echo "[" . date('Y-m-d H:i:s') . "] Impossible d'envoyer la notification: " . $e->getMessage() . "\n";
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -1,467 +0,0 @@
#!/bin/bash

###############################################################################
# Batch migration script for entities from geosector_20251008
#
# Usage: ./migrate_batch.sh [options]
#
# Options:
#   --start N       Start from entity number N (default: 1)
#   --limit N       Migrate only N entities (default: all)
#   --dry-run       Simulate without executing
#   --continue      Keep going after an error (default: stop)
#   --interactive   Interactive mode (default when no option is given)
#
# Examples:
#   ./migrate_batch.sh --start 10 --limit 5
#   ./migrate_batch.sh --continue
#   ./migrate_batch.sh --interactive
###############################################################################

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
JSON_FILE="${SCRIPT_DIR}/migrations_entites.json"
LOG_DIR="/var/www/geosector/api/logs/migrations"
MIGRATION_SCRIPT="${SCRIPT_DIR}/php/migrate_from_backup.php"
SOURCE_DB="geosector_20251013_13"
TARGET_DB="pra_geo"

# Default parameters
START_INDEX=1
LIMIT=0
DRY_RUN=0
CONTINUE_ON_ERROR=0
INTERACTIVE_MODE=0
SPECIFIC_ENTITY_ID=""
SPECIFIC_CP=""

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Save the argument count before parsing
INITIAL_ARGS=$#

# Argument parsing
while [[ $# -gt 0 ]]; do
    case $1 in
        --start)
            START_INDEX="$2"
            shift 2
            ;;
        --limit)
            LIMIT="$2"
            shift 2
            ;;
        --dry-run)
            DRY_RUN=1
            shift
            ;;
        --continue)
            CONTINUE_ON_ERROR=1
            shift
            ;;
        --interactive|-i)
            INTERACTIVE_MODE=1
            shift
            ;;
        --help)
            grep "^#" "$0" | grep -v "^#!/bin/bash" | sed 's/^# //'
            exit 0
            ;;
        *)
            echo "Option inconnue: $1"
            echo "Utilisez --help pour l'aide"
            exit 1
            ;;
    esac
done

# Enable interactive mode when no arguments were provided
if [ $INITIAL_ARGS -eq 0 ]; then
    INTERACTIVE_MODE=1
fi

# Preliminary checks
if [ ! -f "$JSON_FILE" ]; then
    echo -e "${RED}❌ Fichier JSON introuvable: $JSON_FILE${NC}"
    exit 1
fi

if [ ! -f "$MIGRATION_SCRIPT" ]; then
    echo -e "${RED}❌ Script de migration introuvable: $MIGRATION_SCRIPT${NC}"
    exit 1
fi

# Create the log directory
mkdir -p "$LOG_DIR"

# Log files
BATCH_LOG="${LOG_DIR}/batch_$(date +%Y%m%d_%H%M%S).log"
SUCCESS_LOG="${LOG_DIR}/success.log"
ERROR_LOG="${LOG_DIR}/errors.log"

# INTERACTIVE MODE
if [ $INTERACTIVE_MODE -eq 1 ]; then
    echo ""
    echo -e "${CYAN}═══════════════════════════════════════════════════════════${NC}"
    echo -e "${CYAN} 🔧 Mode interactif - Migration d'entités GeoSector${NC}"
    echo -e "${CYAN}═══════════════════════════════════════════════════════════${NC}"
    echo ""

    # Question 1: global or targeted migration?
    echo -e "${YELLOW}1️⃣ Type de migration :${NC}"
    echo -e "   ${CYAN}a)${NC} Migration globale (toutes les entités)"
    echo -e "   ${CYAN}b)${NC} Migration par lot (plage d'entités)"
    echo -e "   ${CYAN}c)${NC} Migration par code postal"
    echo -e "   ${CYAN}d)${NC} Migration d'une entité spécifique (ID)"
    echo ""
    echo -ne "${YELLOW}Votre choix (a/b/c/d) : ${NC}"
    read -r MIGRATION_TYPE
    echo ""

    case $MIGRATION_TYPE in
        a|A)
            # Global migration: keep the defaults
            START_INDEX=1
            LIMIT=0
            echo -e "${GREEN}✓${NC} Migration globale sélectionnée"
            ;;
        b|B)
            # Batch migration
            echo -e "${YELLOW}2️⃣ Configuration du lot :${NC}"
            echo -ne "   Première entité (index, défaut=1) : "
            read -r USER_START
            if [ -n "$USER_START" ]; then
                START_INDEX=$USER_START
            fi

            echo -ne "   Limite (nombre d'entités, défaut=toutes) : "
            read -r USER_LIMIT
            if [ -n "$USER_LIMIT" ]; then
                LIMIT=$USER_LIMIT
            fi
            echo ""
            echo -e "${GREEN}✓${NC} Migration par lot : de l'index $START_INDEX, limite de $LIMIT entités"
            ;;
        c|C)
            # Migration by postal code
            echo -ne "${YELLOW}2️⃣ Code postal à migrer : ${NC}"
            read -r SPECIFIC_CP
            echo ""
            if [ -z "$SPECIFIC_CP" ]; then
                echo -e "${RED}❌ Code postal requis${NC}"
                exit 1
            fi
            echo -e "${GREEN}✓${NC} Migration pour le code postal : $SPECIFIC_CP"
            ;;
        d|D)
            # Migration of a single entity: bypasses the JSON entirely
            echo -ne "${YELLOW}2️⃣ ID de l'entité à migrer : ${NC}"
            read -r SPECIFIC_ENTITY_ID
            echo ""
            if [ -z "$SPECIFIC_ENTITY_ID" ]; then
                echo -e "${RED}❌ ID d'entité requis${NC}"
                exit 1
            fi
            echo -e "${GREEN}✓${NC} Migration de l'entité ID : $SPECIFIC_ENTITY_ID"
            echo ""

            # Ask whether to delete this entity's data in the TARGET before migrating
            echo -ne "${YELLOW}3️⃣ Supprimer les données existantes de cette entité dans la TARGET avant migration ? (y/N): ${NC}"
            read -r DELETE_BEFORE
            DELETE_FLAG=""
            if [[ $DELETE_BEFORE =~ ^[Yy]$ ]]; then
                echo -e "${GREEN}✓${NC} Les données seront supprimées avant migration"
                DELETE_FLAG="--delete-before"
            else
                echo -e "${BLUE}ℹ${NC} Les données seront conservées (ON DUPLICATE KEY UPDATE)"
            fi
            echo ""

            # Confirm the migration
            echo -ne "${YELLOW}⚠️ Confirmer la migration de l'entité #${SPECIFIC_ENTITY_ID} ? (y/N): ${NC}"
            read -r CONFIRM
            if [[ ! $CONFIRM =~ ^[Yy]$ ]]; then
                echo -e "${RED}❌ Migration annulée${NC}"
                exit 0
            fi

            # Run the migration directly without going through the JSON
            ENTITY_LOG="${LOG_DIR}/entity_${SPECIFIC_ENTITY_ID}_$(date +%Y%m%d_%H%M%S).log"

            echo ""
            echo -e "${BLUE}⏳ Migration de l'entité #${SPECIFIC_ENTITY_ID} en cours...${NC}"

            php "$MIGRATION_SCRIPT" \
                --source-db="$SOURCE_DB" \
                --target-db="$TARGET_DB" \
                --mode=entity \
                --entity-id="$SPECIFIC_ENTITY_ID" \
                --log="$ENTITY_LOG" \
                $DELETE_FLAG

            EXIT_CODE=$?

            if [ $EXIT_CODE -eq 0 ]; then
                echo -e "${GREEN}✅ Entité #${SPECIFIC_ENTITY_ID} migrée avec succès${NC}"
                echo -e "${BLUE}📋 Log détaillé : $ENTITY_LOG${NC}"
            else
                echo -e "${RED}❌ Erreur lors de la migration de l'entité #${SPECIFIC_ENTITY_ID}${NC}"
                echo -e "${RED}📋 Voir le log : $ENTITY_LOG${NC}"
                exit 1
            fi

            exit 0
            ;;
        *)
            echo -e "${RED}❌ Choix invalide${NC}"
            exit 1
            ;;
    esac

    echo ""
fi

# Utility functions
log() {
    echo -e "$1" | tee -a "$BATCH_LOG"
}

log_success() {
    echo "$1" >> "$SUCCESS_LOG"
    log "${GREEN}✓${NC} $1"
}

log_error() {
    echo "$1" >> "$ERROR_LOG"
    log "${RED}✗${NC} $1"
}

# Extract the entity_id values from the JSON (works without jq)
get_entity_ids() {
    if [ -n "$SPECIFIC_ENTITY_ID" ]; then
        # Specific entity by ID: match exactly "entity_id" : ID,
        grep "\"entity_id\" : ${SPECIFIC_ENTITY_ID}," "$JSON_FILE" | sed 's/.*: \([0-9]*\).*/\1/'
    elif [ -n "$SPECIFIC_CP" ]; then
        # Entities by postal code
        grep -B 2 "\"code_postal\" : \"$SPECIFIC_CP\"" "$JSON_FILE" | grep '"entity_id"' | sed 's/.*: \([0-9]*\).*/\1/'
    else
        # All entities
        grep '"entity_id"' "$JSON_FILE" | sed 's/.*: \([0-9]*\).*/\1/'
    fi
}

# Count the total number of entities
TOTAL_ENTITIES=$(get_entity_ids | wc -l)

# Make sure some entities were found
if [ $TOTAL_ENTITIES -eq 0 ]; then
    if [ -n "$SPECIFIC_ENTITY_ID" ]; then
        echo -e "${RED}❌ Entité #${SPECIFIC_ENTITY_ID} introuvable dans le fichier JSON${NC}"
    elif [ -n "$SPECIFIC_CP" ]; then
        echo -e "${RED}❌ Aucune entité trouvée pour le code postal ${SPECIFIC_CP}${NC}"
    else
        echo -e "${RED}❌ Aucune entité trouvée${NC}"
    fi
    exit 1
fi

# Compute how many entities to migrate
if [ $LIMIT -gt 0 ]; then
    END_INDEX=$((START_INDEX + LIMIT - 1))
    if [ $END_INDEX -gt $TOTAL_ENTITIES ]; then
        END_INDEX=$TOTAL_ENTITIES
    fi
else
    END_INDEX=$TOTAL_ENTITIES
fi

# Startup banner
echo ""
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
log "${BLUE}          Migration en batch des entités GeoSector${NC}"
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
log "📅 Date: $(date '+%Y-%m-%d %H:%M:%S')"
log "📁 Source: $SOURCE_DB"
log "📁 Cible: $TARGET_DB"

# Show information depending on the mode
if [ -n "$SPECIFIC_ENTITY_ID" ]; then
    log "🎯 Mode: Migration d'une entité spécifique"
    log "📊 Entité ID: $SPECIFIC_ENTITY_ID"
elif [ -n "$SPECIFIC_CP" ]; then
    log "🎯 Mode: Migration par code postal"
    log "📮 Code postal: $SPECIFIC_CP"
    log "📊 Entités trouvées: $TOTAL_ENTITIES"
else
    TOTAL_AVAILABLE=$(grep '"entity_id"' "$JSON_FILE" | wc -l)
    log "📊 Total entités disponibles: $TOTAL_AVAILABLE"
    log "🎯 Entités à migrer: $START_INDEX à $END_INDEX"
fi

if [ $DRY_RUN -eq 1 ]; then
    log "${YELLOW}🔍 Mode DRY-RUN (simulation)${NC}"
fi
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
echo ""

# User confirmation
if [ $DRY_RUN -eq 0 ]; then
    echo -ne "${YELLOW}⚠️ Confirmer la migration ? (y/N): ${NC}"
    read -r CONFIRM
    if [[ ! $CONFIRM =~ ^[Yy]$ ]]; then
        log "❌ Migration annulée par l'utilisateur"
        exit 0
    fi
    echo ""
fi

# Counters
SUCCESS_COUNT=0
ERROR_COUNT=0
SKIPPED_COUNT=0
CURRENT_INDEX=0

# Migration start
START_TIME=$(date +%s)

# Read the entity_ids and migrate. Process substitution (instead of piping
# get_entity_ids into the loop) keeps the counters in the current shell, so
# the final summary and the `exit 1` on error behave as expected.
while read -r ENTITY_ID; do
    CURRENT_INDEX=$((CURRENT_INDEX + 1))

    # Filter by index
    if [ $CURRENT_INDEX -lt $START_INDEX ]; then
        continue
    fi

    if [ $CURRENT_INDEX -gt $END_INDEX ]; then
        break
    fi

    # Fetch the entity details from the JSON (exact match including the comma)
    ENTITY_INFO=$(grep -A 8 "\"entity_id\" : ${ENTITY_ID}," "$JSON_FILE")
    ENTITY_NAME=$(echo "$ENTITY_INFO" | grep '"nom"' | sed 's/.*: "\(.*\)".*/\1/')
    ENTITY_CP=$(echo "$ENTITY_INFO" | grep '"code_postal"' | sed 's/.*: "\(.*\)".*/\1/')
    NB_USERS=$(echo "$ENTITY_INFO" | grep '"nb_users"' | sed 's/.*: \([0-9]*\).*/\1/')
    NB_PASSAGES=$(echo "$ENTITY_INFO" | grep '"nb_passages"' | sed 's/.*: \([0-9]*\).*/\1/')

    # Show progress
    PROGRESS=$((CURRENT_INDEX - START_INDEX + 1))
    TOTAL=$((END_INDEX - START_INDEX + 1))
    PERCENT=$((PROGRESS * 100 / TOTAL))

    log ""
    log "${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    log "${BLUE}[$PROGRESS/$TOTAL - ${PERCENT}%]${NC} Entité #${ENTITY_ID}: ${ENTITY_NAME} (${ENTITY_CP})"
    log "   👥 Users: ${NB_USERS} | 📍 Passages: ${NB_PASSAGES}"

    # Dry-run mode
    if [ $DRY_RUN -eq 1 ]; then
        log "${YELLOW}   🔍 [DRY-RUN] Simulation de la migration${NC}"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        continue
    fi

    # Run the migration
    ENTITY_LOG="${LOG_DIR}/entity_${ENTITY_ID}_$(date +%Y%m%d_%H%M%S).log"

    log "   ⏳ Migration en cours..."
    php "$MIGRATION_SCRIPT" \
        --source-db="$SOURCE_DB" \
        --target-db="$TARGET_DB" \
        --mode=entity \
        --entity-id="$ENTITY_ID" \
        --log="$ENTITY_LOG" > /tmp/migration_output_$$.txt 2>&1

    EXIT_CODE=$?

    if [ $EXIT_CODE -eq 0 ]; then
        # Success
        SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
        log_success "Entité #${ENTITY_ID} (${ENTITY_NAME}) migrée avec succès"

        # Show a detailed summary from the log
        if [ -f "$ENTITY_LOG" ]; then
            # Look for the line carrying the #STATS# markers
            STATS_LINE=$(grep "#STATS#" "$ENTITY_LOG" 2>/dev/null)

            if [ -n "$STATS_LINE" ]; then
                # Extract each counter
                OPE=$(echo "$STATS_LINE" | grep -oE 'OPE:[0-9]+' | cut -d: -f2)
                USERS=$(echo "$STATS_LINE" | grep -oE 'USER:[0-9]+' | cut -d: -f2)
                SECTORS=$(echo "$STATS_LINE" | grep -oE 'SECTOR:[0-9]+' | cut -d: -f2)
                PASSAGES=$(echo "$STATS_LINE" | grep -oE 'PASS:[0-9]+' | cut -d: -f2)

                # Fall back to defaults when extraction fails
                OPE=${OPE:-0}
                USERS=${USERS:-0}
                SECTORS=${SECTORS:-0}
                PASSAGES=${PASSAGES:-0}

                log "   📊 ope: ${OPE} | users: ${USERS} | sectors: ${SECTORS} | passages: ${PASSAGES}"
            else
                log "   📊 Statistiques non disponibles"
            fi
        fi
    else
        # Error
        ERROR_COUNT=$((ERROR_COUNT + 1))
        log_error "Entité #${ENTITY_ID} (${ENTITY_NAME}) - Erreur code $EXIT_CODE"

        # Show the last lines of the error output
        if [ -f "/tmp/migration_output_$$.txt" ]; then
            log "${RED}   📋 Dernières erreurs:${NC}"
            tail -5 "/tmp/migration_output_$$.txt" | sed 's/^/      /' | tee -a "$BATCH_LOG"
        fi

        # Stop or continue?
        if [ $CONTINUE_ON_ERROR -eq 0 ]; then
            log ""
            log "${RED}❌ Migration interrompue suite à une erreur${NC}"
            log "   Utilisez --continue pour continuer malgré les erreurs"
            exit 1
        fi
    fi

    # Cleanup
    rm -f "/tmp/migration_output_$$.txt"

    # Pause between migrations (to avoid overloading the server)
    sleep 1
done < <(get_entity_ids)

# Migration finished
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
HOURS=$((DURATION / 3600))
MINUTES=$(((DURATION % 3600) / 60))
SECONDS=$((DURATION % 60))

# Final summary
log ""
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
log "${BLUE}                  Résumé de la migration${NC}"
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
log "✅ Succès: ${GREEN}${SUCCESS_COUNT}${NC}"
log "❌ Erreurs: ${RED}${ERROR_COUNT}${NC}"
log "⏭️ Ignorées: ${YELLOW}${SKIPPED_COUNT}${NC}"
log "⏱️ Durée: ${HOURS}h ${MINUTES}m ${SECONDS}s"
log ""
log "📋 Logs détaillés:"
log "   - Batch: $BATCH_LOG"
log "   - Succès: $SUCCESS_LOG"
log "   - Erreurs: $ERROR_LOG"
log "   - Individuels: $LOG_DIR/entity_*.log"
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"

# Exit code
if [ $ERROR_COUNT -gt 0 ]; then
    exit 1
else
    exit 0
fi
@@ -1,248 +0,0 @@
#!/bin/bash

# Script that migrates the databases to the MariaDB containers
# Date: January 2025
# Author: Pierre (with Claude's help)
#
# This script migrates the databases from the application containers
# to the dedicated MariaDB containers (maria3 on IN3, maria4 on IN4)

set -euo pipefail

# SSH configuration
HOST_KEY="/home/pierre/.ssh/id_rsa_mbpi"
HOST_PORT="22"
HOST_USER="root"

# Servers
RCA_HOST="195.154.80.116"   # IN3
PRA_HOST="51.159.7.190"     # IN4

# MariaDB configuration
MARIA_ROOT_PASS="MyAlpLocal,90b"   # root password for maria3 and maria4

# Colors
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[0;33m'
BLUE='\033[0;34m'
NC='\033[0m'

echo_step() {
    echo -e "${GREEN}==>${NC} $1"
}

echo_info() {
    echo -e "${BLUE}Info:${NC} $1"
}

echo_warning() {
    echo -e "${YELLOW}Warning:${NC} $1"
}

echo_error() {
    echo -e "${RED}Error:${NC} $1"
    exit 1
}

# Create a database and its user
create_database_and_user() {
    local HOST=$1
    local CONTAINER=$2
    local DB_NAME=$3
    local DB_USER=$4
    local DB_PASS=$5
    local SOURCE_CONTAINER=$6

    echo_step "Creating database ${DB_NAME} in ${CONTAINER} on ${HOST}..."

    # SQL commands that create the database and the user
    SQL_COMMANDS="
        CREATE DATABASE IF NOT EXISTS ${DB_NAME} CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
        CREATE USER IF NOT EXISTS '${DB_USER}'@'%' IDENTIFIED BY '${DB_PASS}';
        GRANT ALL PRIVILEGES ON ${DB_NAME}.* TO '${DB_USER}'@'%';
        FLUSH PRIVILEGES;
    "

    if [ "$HOST" = "local" ]; then
        # Local execution (currently unused)
        incus exec ${CONTAINER} -- mysql -u root -p${MARIA_ROOT_PASS} -e "${SQL_COMMANDS}"
    else
        # Remote server
        ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${HOST} \
            "incus exec ${CONTAINER} -- mysql -u root -p${MARIA_ROOT_PASS} -e \"${SQL_COMMANDS}\""
    fi

    echo_info "Database ${DB_NAME} and user ${DB_USER} created"
}

# Migrate the data
migrate_data() {
    local HOST=$1
    local SOURCE_CONTAINER=$2
    local TARGET_CONTAINER=$3
    local SOURCE_DB=$4
    local TARGET_DB=$5
    local TARGET_USER=$6
    local TARGET_PASS=$7

    echo_step "Migrating data from ${SOURCE_CONTAINER}/${SOURCE_DB} to ${TARGET_CONTAINER}/${TARGET_DB}..."

    TIMESTAMP=$(date +%Y%m%d_%H%M%S)
    DUMP_FILE="/tmp/${SOURCE_DB}_dump_${TIMESTAMP}.sql"

    # Dump the database from the source container
    echo_info "Creating database dump..."

    # Figure out whether the source container needs a root password:
    # the app containers (dva-geo, rca-geo, pra-geo) probably have no root password
    ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${HOST} \
        "incus exec ${SOURCE_CONTAINER} -- mysqldump --single-transaction --routines --triggers ${SOURCE_DB} > ${DUMP_FILE} 2>/dev/null || \
         incus exec ${SOURCE_CONTAINER} -- mysqldump -u root -p${MARIA_ROOT_PASS} --single-transaction --routines --triggers ${SOURCE_DB} > ${DUMP_FILE}"

    # Import into the target container
    echo_info "Importing data into ${TARGET_CONTAINER}..."
    ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${HOST} \
        "cat ${DUMP_FILE} | incus exec ${TARGET_CONTAINER} -- mysql -u ${TARGET_USER} -p${TARGET_PASS} ${TARGET_DB}"

    # Clean up
    ssh -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${HOST} "rm -f ${DUMP_FILE}"

    echo_info "Migration completed for ${TARGET_DB}"
}

# Migrate the data between two different servers (for PRODUCTION)
migrate_data_cross_server() {
    local SOURCE_HOST=$1
    local SOURCE_CONTAINER=$2
    local TARGET_HOST=$3
    local TARGET_CONTAINER=$4
    local SOURCE_DB=$5
    local TARGET_DB=$6
    local TARGET_USER=$7
    local TARGET_PASS=$8

    echo_step "Migrating data from ${SOURCE_HOST}/${SOURCE_CONTAINER}/${SOURCE_DB} to ${TARGET_HOST}/${TARGET_CONTAINER}/${TARGET_DB}..."

    echo_info "Using WireGuard VPN tunnel (IN3 → IN4)..."

    # Option 1: direct streaming over the VPN with agent forwarding
    echo_info "Streaming database directly through VPN tunnel..."
    echo_warning "Note: This requires SSH agent forwarding (ssh -A) when connecting to IN3"

    # -A enables agent forwarding to IN3.
    # Uses the 'in4' alias defined in /root/.ssh/config on IN3.
    # The ssh command is the `if` condition so that, under `set -e`, a
    # streaming failure falls through to the fallback instead of aborting.
    if ssh -A -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST} "
        # Dump from maria3 with the root password and pipe straight to IN4 over the VPN
        incus exec ${SOURCE_CONTAINER} -- mysqldump -u root -p${MARIA_ROOT_PASS} --single-transaction --routines --triggers ${SOURCE_DB} | \
            ssh in4 'incus exec ${TARGET_CONTAINER} -- mysql -u ${TARGET_USER} -p${TARGET_PASS} ${TARGET_DB}'
    "; then
        echo_info "Direct VPN streaming migration completed successfully!"
    else
        echo_warning "VPN streaming failed, falling back to file transfer method..."

        # Option 2: fallback with temporary files when streaming fails
        TIMESTAMP=$(date +%Y%m%d_%H%M%S)
        DUMP_FILE="/tmp/${SOURCE_DB}_dump_${TIMESTAMP}.sql"

        # Create the dump on IN3
        echo_info "Creating database dump..."
        ssh -A -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST} \
            "incus exec ${SOURCE_CONTAINER} -- mysqldump -u root -p${MARIA_ROOT_PASS} --single-transaction --routines --triggers ${SOURCE_DB} > ${DUMP_FILE}"

        # Transfer over the VPN from IN3 to IN4 (uses the 'in4' alias)
        echo_info "Transferring dump file through VPN..."
        ssh -A -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST} \
            "scp ${DUMP_FILE} in4:${DUMP_FILE}"

        # Import on IN4 (uses the 'in4' alias)
        echo_info "Importing data on IN4..."
        ssh -A -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST} \
            "ssh in4 'cat ${DUMP_FILE} | incus exec ${TARGET_CONTAINER} -- mysql -u ${TARGET_USER} -p${TARGET_PASS} ${TARGET_DB}'"

        # Clean up
        ssh -A -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST} "rm -f ${DUMP_FILE}"
        ssh -A -i ${HOST_KEY} -p ${HOST_PORT} ${HOST_USER}@${SOURCE_HOST} \
            "ssh in4 'rm -f ${DUMP_FILE}'"
    fi

    echo_info "Cross-server migration completed for ${TARGET_DB}"
}

# Selection menu
echo_step "Database Migration to MariaDB Containers"
echo ""
echo "Select environment to migrate:"
echo "1) DEV  - dva-geo → maria3/dva_geo"
echo "2) RCA  - rca-geo → maria3/rca_geo"
echo "3) PROD - rca_geo (IN3/maria3) → maria4/pra_geo (copy from RECETTE)"
echo "4) ALL  - Migrate all environments"
echo ""
read -p "Your choice [1-4]: " choice

case $choice in
    1)
        echo_step "Migrating DEV environment..."
        create_database_and_user "${RCA_HOST}" "maria3" "dva_geo" "dva_geo_user" "CBq9tKHj6PGPZuTmAHV7" "dva-geo"
        migrate_data "${RCA_HOST}" "dva-geo" "maria3" "geo_app" "dva_geo" "dva_geo_user" "CBq9tKHj6PGPZuTmAHV7"
        echo_step "DEV migration completed!"
        ;;
    2)
        echo_step "Migrating RECETTE environment..."
        create_database_and_user "${RCA_HOST}" "maria3" "rca_geo" "rca_geo_user" "UPf3C0cQ805LypyM71iW" "rca-geo"
        migrate_data "${RCA_HOST}" "rca-geo" "maria3" "geo_app" "rca_geo" "rca_geo_user" "UPf3C0cQ805LypyM71iW"
        echo_step "RECETTE migration completed!"
        ;;
    3)
        echo_step "Migrating PRODUCTION environment (copying from RECETTE)..."
        echo_warning "Note: PRODUCTION will be duplicated from rca_geo on IN3/maria3"

        # Create the database and the user on IN4/maria4
        create_database_and_user "${PRA_HOST}" "maria4" "pra_geo" "pra_geo_user" "d2jAAGGWi8fxFrWgXjOA" "pra-geo"

        # Copy the data from rca_geo (IN3/maria3) to pra_geo (IN4/maria4)
        migrate_data_cross_server "${RCA_HOST}" "maria3" "${PRA_HOST}" "maria4" "rca_geo" "pra_geo" "pra_geo_user" "d2jAAGGWi8fxFrWgXjOA"

        echo_step "PRODUCTION migration completed (duplicated from RECETTE)!"
        ;;
    4)
        echo_step "Migrating ALL environments..."

        echo_info "Starting DEV migration..."
        create_database_and_user "${RCA_HOST}" "maria3" "dva_geo" "dva_geo_user" "CBq9tKHj6PGPZuTmAHV7" "dva-geo"
        migrate_data "${RCA_HOST}" "dva-geo" "maria3" "geo_app" "dva_geo" "dva_geo_user" "CBq9tKHj6PGPZuTmAHV7"

        echo_info "Starting RECETTE migration..."
        create_database_and_user "${RCA_HOST}" "maria3" "rca_geo" "rca_geo_user" "UPf3C0cQ805LypyM71iW" "rca-geo"
        migrate_data "${RCA_HOST}" "rca-geo" "maria3" "geo_app" "rca_geo" "rca_geo_user" "UPf3C0cQ805LypyM71iW"

        echo_info "Starting PRODUCTION migration (copying from RECETTE)..."
        echo_warning "Note: PRODUCTION will be duplicated from rca_geo on IN3/maria3"
        create_database_and_user "${PRA_HOST}" "maria4" "pra_geo" "pra_geo_user" "d2jAAGGWi8fxFrWgXjOA" "pra-geo"
|
|
||||||
migrate_data_cross_server "${RCA_HOST}" "maria3" "${PRA_HOST}" "maria4" "rca_geo" "pra_geo" "pra_geo_user" "d2jAAGGWi8fxFrWgXjOA"
|
|
||||||
|
|
||||||
echo_step "All migrations completed!"
|
|
||||||
;;
|
|
||||||
*)
|
|
||||||
echo_error "Invalid choice"
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
echo ""
|
|
||||||
echo_step "Migration Summary:"
|
|
||||||
echo ""
|
|
||||||
echo "┌─────────────┬──────────────┬──────────────┬─────────────┬──────────────────────┐"
|
|
||||||
echo "│ Environment │ Source │ Target │ Database │ User │"
|
|
||||||
echo "├─────────────┼──────────────┼──────────────┼─────────────┼──────────────────────┤"
|
|
||||||
echo "│ DEV │ dva-geo │ maria3 (IN3) │ dva_geo │ dva_geo_user │"
|
|
||||||
echo "│ RECETTE │ rca-geo │ maria3 (IN3) │ rca_geo │ rca_geo_user │"
|
|
||||||
echo "│ PRODUCTION │ pra-geo │ maria4 (IN4) │ pra_geo │ pra_geo_user │"
|
|
||||||
echo "└─────────────┴──────────────┴──────────────┴─────────────┴──────────────────────┘"
|
|
||||||
echo ""
|
|
||||||
echo_warning "Remember to:"
|
|
||||||
echo " 1. Test database connectivity from application containers"
|
|
||||||
echo " 2. Deploy the updated AppConfig.php"
|
|
||||||
echo " 3. Monitor application logs after migration"
|
|
||||||
echo " 4. Keep old databases for rollback if needed"
|
|
||||||
@@ -1,410 +0,0 @@
# Migration v2 - Modular architecture

## Overview

This new architecture simplifies the migration by using:
- **Fixed source**: `geosector` (synchronized twice a day by PM7 from nx4)
- **Multi-environment**: `--env=dva` (development), `--env=rca` (staging) or `--env=pra` (production)
- **Auto-detection**: the environment is detected automatically from the server
- **Reusable classes**: configuration, logger, connection

## Modular structure

```
migration2/
├── README.md                        # This file
├── logs/                            # Migration logs (auto-created)
│   └── .gitignore
├── php/
│   ├── migrate_from_backup.php      # Main orchestrator script
│   └── lib/
│       ├── DatabaseConfig.php       # Multi-environment configuration
│       ├── MigrationLogger.php      # Log handling
│       ├── DatabaseConnection.php   # PDO connections
│       ├── OperationMigrator.php    # Operation migration
│       ├── UserMigrator.php         # ope_users migration
│       ├── SectorMigrator.php       # Sector migration
│       └── PassageMigrator.php      # Passage migration
```

**Modular architecture**: each data type has its own specialized migrator, orchestrated by the main script.

## ⚠️ IMPORTANT WARNING

**By default, the script DELETES all of the entity's data in the target database before migrating.**

This includes:
- ✅ All of the entity's operations
- ✅ All of the entity's users
- ✅ All sectors and passages
- ✅ All associated media
- ℹ️ The entity itself is kept (only its related data is deleted)

To **disable** deletion and keep the existing data:
```bash
php php/migrate_from_backup.php --mode=entity --entity-id=45 --delete-before=false
```

⚠️ **Warning**: without prior deletion, duplicates may appear if the data already exists.

---

## Usage

### Migrating a specific entity

#### On dva-geo (IN3)
```bash
# Auto-detect the environment (recommended)
php php/migrate_from_backup.php --mode=entity --entity-id=45

# Or with an explicit environment
php php/migrate_from_backup.php --env=dva --mode=entity --entity-id=45
```

#### On rca-geo (IN3)
```bash
# Auto-detect the environment (recommended)
php php/migrate_from_backup.php --mode=entity --entity-id=45

# Or with an explicit environment
php php/migrate_from_backup.php --env=rca --mode=entity --entity-id=45
```

#### On pra-geo (IN4)
```bash
# Auto-detect the environment (recommended)
php php/migrate_from_backup.php --mode=entity --entity-id=45

# Or with an explicit environment
php php/migrate_from_backup.php --env=pra --mode=entity --entity-id=45
```

### Global migration (all entities)

```bash
# On dva-geo, rca-geo or pra-geo
php php/migrate_from_backup.php --mode=global
```

### Available options

```bash
--env=ENV          # 'dva' (development), 'rca' (staging) or 'pra' (production)
                   # Default: auto-detected from the hostname
--mode=MODE        # 'global' or 'entity' (default: global)
--entity-id=ID     # ID of the entity to migrate (required when mode=entity)
--log=PATH         # Custom log file
                   # Default: logs/migration_YYYYMMDD_HHMMSS.log
--delete-before    # Delete existing data before migrating (default: true)
--help             # Show the full help
```

### Usage examples

```bash
# STANDARD migration (deletes existing data first - recommended)
php php/migrate_from_backup.php --mode=entity --entity-id=45

# Migration WITHOUT deletion (add/update only - risk of duplicates)
php php/migrate_from_backup.php --mode=entity --entity-id=45 --delete-before=false

# Migration with a custom log file
php php/migrate_from_backup.php --mode=entity --entity-id=45 --log=/custom/path/entity_45.log

# Show the full help
php php/migrate_from_backup.php --help
```

## Differences from the previous version

| Aspect | Old | New |
|--------|-----|-----|
| **Source** | `--source-db=geosector_YYYYMMDD_HH` | Always `geosector` (fixed) |
| **Target** | `--target-db=pra_geo` | Derived from `--env` or auto-detected (dva_geo, rca_geo, pra_geo) |
| **Config** | Hardcoded constants | Configurable classes |
| **Environment** | Manual | Auto-detected from the hostname (dva-geo, rca-geo, pra-geo) |
| **Arguments** | 2 DB arguments required | A single optional `--env` |
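The hostname-based auto-detection can be sketched as a simple lookup. This is only an illustration: the real logic lives in `DatabaseConfig.php`, and the `detect_env` helper name and its error behaviour are assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of hostname → environment mapping.
# The actual implementation is in DatabaseConfig.php.
detect_env() {
    case "$1" in
        dva-geo) echo "dva" ;;
        rca-geo) echo "rca" ;;
        pra-geo) echo "pra" ;;
        *)
            echo "env must be 'dva', 'rca' or 'pra'" >&2
            return 1
            ;;
    esac
}

detect_env dva-geo   # prints: dva
detect_env pra-geo   # prints: pra
```

In practice you would call `detect_env "$(hostname)"` and fall back to the explicit `--env` option when it fails.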
## Benefits

✅ **Simpler**: no need to specify database names
✅ **Safer**: fewer input mistakes
✅ **More flexible**: works on dva-geo, rca-geo and pra-geo without changes
✅ **More maintainable**: configuration centralized in DatabaseConfig
✅ **Better logs**: separators and levels (info/warning/error/success)

## Deployment

### Copy to dva-geo (IN3)
```bash
scp -r migration2 root@195.154.80.116:/tmp/
ssh root@195.154.80.116 "incus file push -r /tmp/migration2 dva-geo/var/www/geosector/api/scripts/"
```

### Copy to rca-geo (IN3)
```bash
scp -r migration2 root@195.154.80.116:/tmp/
ssh root@195.154.80.116 "incus file push -r /tmp/migration2 rca-geo/var/www/geosector/api/scripts/"
```

### Copy to pra-geo (IN4)
```bash
scp -r migration2 root@51.159.7.190:/tmp/
ssh root@51.159.7.190 "incus file push -r /tmp/migration2 pra-geo/var/www/geosector/api/scripts/"
```

### Test after deployment

```bash
# Connect to the container
incus exec dva-geo -- bash   # or rca-geo, or pra-geo

# Test with one entity
cd /var/www/geosector/api/scripts/migration2
php php/migrate_from_backup.php --mode=entity --entity-id=45
```

## Logs

By default, logs are written to:
```
scripts/migration2/logs/migration_[MODE]_YYYYMMDD_HHMMSS.log
```

**Automatic naming by mode:**
- Global migration: `migration_global_20251021_143045.log`
- Single-entity migration: `migration_entite_45_20251021_143045.log`
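The default name can be reproduced with a small helper. This is a sketch only: the real naming is done inside `MigrationLogger.php`, and the `log_name` helper is an assumption.

```shell
# Sketch: rebuild the default log file name from the mode.
# Hypothetical helper; the real naming lives in MigrationLogger.php.
log_name() {
    mode="$1"; entity_id="$2"; stamp="${3:-$(date +%Y%m%d_%H%M%S)}"
    if [ "$mode" = "entity" ]; then
        echo "logs/migration_entite_${entity_id}_${stamp}.log"
    else
        echo "logs/migration_global_${stamp}.log"
    fi
}

log_name entity 45 20251021_143045   # logs/migration_entite_45_20251021_143045.log
log_name global "" 20251021_143045   # logs/migration_global_20251021_143045.log
```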
Log format:
- `[INFO]`: general information
- `[SUCCESS]`: successful operations
- `[WARNING]`: warnings
- `[ERROR]`: errors

The `logs/` directory is created automatically when needed.

**Note:** you can always specify a custom log file with the `--log=PATH` option.

## Migration summary

At the end of each migration, a **detailed summary** is automatically printed and written to the log file.

### Summary format

```
========================================
📊 MIGRATION SUMMARY
========================================
Entity: Entity name (ID: XX)
Date: YYYY-MM-DD HH:MM:SS

Operations migrated: 3

Operation #1: "Adhésions 2024" (ID: 850)
├─ Users: 12
├─ Sectors: 5
├─ Total passages: 245
└─ Breakdown by sector:
   ├─ Centre-ville (ID: 5400)
   │  ├─ Assigned users: 3
   │  └─ Passages: 67
   ├─ Quartier Est (ID: 5401)
   │  ├─ Assigned users: 5
   │  └─ Passages: 98
   └─ Nord (ID: 5402)
      ├─ Assigned users: 4
      └─ Passages: 80

Operation #2: "Collecte Printemps" (ID: 851)
├─ Users: 8
├─ Sectors: 3
├─ Total passages: 156
└─ Breakdown by sector:
   [...]

========================================
```

### Information provided

For each migration, the summary includes:

**At the entity level:**
- Entity name and ID
- Migration date and time
- Total number of operations migrated

**For each operation:**
- Name and new ID
- Number of users migrated
- Number of sectors migrated
- Total number of passages migrated

**For each sector:**
- Name and new ID
- Number of users assigned to the sector
- Number of passages recorded in the sector

### Using the summary

This summary lets you:
- ✅ Quickly verify that all data was migrated
- ✅ Compare against the source data for validation
- ✅ Spot anomalies (empty sectors, missing passages)
- ✅ Document precisely what was migrated
- ✅ Keep an audit trail of migrations

The summary is written both:
- **to the screen** (stdout) in real time
- **to the log file** for archival

## Troubleshooting

### Error "env doit être 'dva', 'rca' ou 'pra'"

Auto-detection failed ("env must be 'dva', 'rca' or 'pra'"). Specify the environment manually:
```bash
php php/migrate_from_backup.php --env=dva --mode=entity --entity-id=45
```

### Connection error

Check that you are in the right container (dva-geo, rca-geo or pra-geo).

### Duplicated data after migration

If you see duplicates, the migration was run with `--delete-before=false` on data that already existed.

**Solution**: rerun the migration with deletion enabled (the default):
```bash
php php/migrate_from_backup.php --mode=entity --entity-id=45
```

### Checking what will be deleted before migrating

See the "Deletion order" section below for exactly which tables are affected.

### Logs not created

Check the permissions of the `logs/` directory:
```bash
ls -la scripts/migration2/logs/
```

## Technical details

### Hierarchical migration architecture

Migration works per **operation**, with a full hierarchy:

```
For each operation of the entity:
  migrateOperation($oldOperationId)
    ├── Create operation
    ├── Migrate ope_users (DISTINCT from ope_users_sectors)
    │   └── Map oldUserId → newOpeUserId
    ├── For each DISTINCT sector of the operation:
    │   └── migrateSector($oldOperationId, $newOperationId, $oldSectorId)
    │       ├── Create ope_sectors
    │       ├── Map "opId_sectId" → newOpeSectorId
    │       ├── Migrate sectors_adresses (fk_sector = newOpeSectorId)
    │       ├── Migrate ope_users_sectors (with user + sector mappings)
    │       ├── Migrate ope_pass (with user + sector mappings)
    │       │   └── For each passage:
    │       │       └── migratePassageHisto($oldPassId, $newPassId)
    │       └── Migrate passage media
    └── Migrate operation media
```

### Data reorganization: a concrete example

#### Context: the 2024 membership collection operation

**Old organization** (geosector database - shared):
- 1 operation "Adhésions 2024" with ID 450
- 3 assigned users: Jean (ID 100), Marie (ID 101), Paul (ID 102)
- 2 sectors used: Centre-ville (ID 1004) and Quartier Est (ID 1005)
- Jean works on Centre-ville; Marie and Paul on Quartier Est

In the old database:
- The 3 users exist only ONCE, in the central `users` table
- The 2 sectors exist only ONCE, in the central `sectors` table
- User/sector links live in `ope_users_sectors`
- Passages reference users directly (IDs 100, 101, 102)

**New organization** (rca_geo/pra_geo database - isolated per operation):

After migration, **EACH operation becomes self-contained**:
- The "Adhésions 2024" operation gets a new ID (for example 850)
- The 3 users are **duplicated** into `ope_users` with new IDs:
  - Jean → ope_users.id = 2500 (with fk_user = 100 and fk_operation = 850)
  - Marie → ope_users.id = 2501 (with fk_user = 101 and fk_operation = 850)
  - Paul → ope_users.id = 2502 (with fk_user = 102 and fk_operation = 850)
- The 2 sectors are **duplicated** into `ope_sectors`:
  - Centre-ville → ope_sectors.id = 5400 (with fk_operation = 850)
  - Quartier Est → ope_sectors.id = 5401 (with fk_operation = 850)
- All passages are updated to reference the NEW IDs (2500, 2501, 2502)

**Why this duplication?**

✅ **Complete isolation**: if the operation is deleted, everything goes with it (sectors, users, passages)
✅ **Performance**: no complex joins across operations
✅ **History**: the operation's data stays frozen in time
✅ **Simplicity**: each operation is independent

**Impact for a user working on 3 different operations:**
- They exist only once in the central `users` table (ID 100)
- They exist 3 times in `ope_users` (one record per operation)
- Each `ope_users` record keeps the reference to `users.id = 100`

This architecture makes it possible to **close** an operation completely without affecting the others.
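The duplication described above essentially amounts to `INSERT ... SELECT` statements keyed on the new operation ID. A minimal sketch that only generates the SQL as text; the column names (`fk_user`, `fk_sector`, `fk_operation` on `ope_users_sectors`) are assumptions based on the example above, and the real statements are built in UserMigrator.php and SectorMigrator.php:

```shell
# Sketch: generate the duplication SQL for one operation.
# Assumed schema; the real statements live in the PHP migrators.
gen_dup_sql() {
    old_op="$1"; new_op="$2"
    cat <<SQL
INSERT INTO ope_users (fk_user, fk_operation)
SELECT DISTINCT fk_user, ${new_op}
FROM ope_users_sectors
WHERE fk_operation = ${old_op};

INSERT INTO ope_sectors (fk_sector, fk_operation)
SELECT DISTINCT fk_sector, ${new_op}
FROM ope_users_sectors
WHERE fk_operation = ${old_op};
SQL
}

gen_dup_sql 450 850
```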
### Selecting the operations to migrate

For each entity, **at most 3 operations** are migrated:
1. **1 active operation** (`active = 1`)
2. **The 2 most recent inactive operations** (`active = 0`) with at least **10 completed passages** (`fk_type = 1`)
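This rule can be expressed as a single query. A hedged sketch that only builds the SQL text; the `fk_entite` column name and the ordering by `id` are assumptions, and the actual query lives in OperationMigrator.php:

```shell
# Sketch: "1 active + 2 most recent inactive with >= 10 passages" selection.
# Assumed column names; the real query is in OperationMigrator.php.
build_selection_sql() {
    entity_id="$1"
    cat <<SQL
(SELECT id FROM operations
 WHERE fk_entite = ${entity_id} AND active = 1)
UNION
(SELECT o.id FROM operations o
 WHERE o.fk_entite = ${entity_id} AND o.active = 0
   AND (SELECT COUNT(*) FROM ope_pass p
        WHERE p.fk_operation = o.id AND p.fk_type = 1) >= 10
 ORDER BY o.id DESC
 LIMIT 2);
SQL
}

build_selection_sql 45
```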
### Deletion order (when --delete-before=true)

Data is deleted in this order to respect the foreign key constraints:

1. `medias` - Media linked to the entity or its operations
2. `ope_pass_histo` - Passage history
3. `ope_pass` - Passages
4. `ope_users_sectors` - User/sector assignments
5. `ope_users` - Operation users
6. `sectors_adresses` - Sector addresses
7. `ope_sectors` - Operation sectors
8. `operations` - Operations
9. `users` - Entity users

⚠️ **The entity itself** (`entites`) **is never deleted**.
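To preview what `--delete-before` touches, the ordered list can be turned into a deletion plan. A sketch only: the WHERE clauses are placeholders, and the real filters are implemented by the migrators:

```shell
# Sketch: print DELETE statements in the FK-safe order listed above.
# Placeholder WHERE clauses; the real filters live in the migrators.
print_delete_plan() {
    entity_id="$1"
    for table in medias ope_pass_histo ope_pass ope_users_sectors \
                 ope_users sectors_adresses ope_sectors operations users; do
        echo "DELETE FROM ${table} WHERE /* rows belonging to entity ${entity_id} */;"
    done
}

print_delete_plan 45
```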
### Reference tables not migrated

The following tables are **not** migrated because they are already populated in the target:
- `x_*` - Reference tables (sectors, addresses, etc.)

## Important notes

1. **Centralized configuration**: DB connection settings come from `AppConfig.php` - no duplication
2. **Encryption**: ApiService is still used for passwords
3. **Business logic**: unchanged (migrateEntites, migrateUsers, etc.)
4. **Mappings**: sectors and addresses are still mapped automatically
5. **Backup**: a backup of the old script is kept in `migrate_from_backup.php.backup`
6. **Deletion by default**: enabled to avoid duplicates and guarantee a clean migration

## Status

**Modular architecture v2**:
- ✅ DatabaseConfig.php - Multi-environment configuration
- ✅ MigrationLogger.php - Log handling
- ✅ DatabaseConnection.php - PDO connections
- ✅ OperationMigrator.php - Hierarchical operation migration
- ✅ UserMigrator.php - Per-operation user migration
- ✅ SectorMigrator.php - Per-operation sector migration
- ✅ PassageMigrator.php - Passage and history migration
- ✅ migrate_from_backup.php - Main orchestrator script
- ⏳ Tests on rca-geo
- ⏳ Tests on pra-geo

## Support

If you run into problems, check the detailed logs or contact the technical team.

File diff suppressed because it is too large
File diff suppressed because it is too large
1
api/scripts/migration2/logs/.gitignore
vendored
@@ -1 +0,0 @@
*.log
@@ -1,467 +0,0 @@
#!/bin/bash

###############################################################################
# Batch migration script for entities from geosector_20251008
#
# Usage: ./migrate_batch.sh [options]
#
# Options:
#   --start N       Start from entity N (default: 1)
#   --limit N       Migrate only N entities (default: all)
#   --dry-run       Simulate without executing
#   --continue      Continue after an error (default: stop)
#   --interactive   Interactive mode (default when no option is given)
#
# Examples:
#   ./migrate_batch.sh --start 10 --limit 5
#   ./migrate_batch.sh --continue
#   ./migrate_batch.sh --interactive
###############################################################################

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
JSON_FILE="${SCRIPT_DIR}/migrations_entites.json"
LOG_DIR="/var/www/geosector/api/logs/migrations"
MIGRATION_SCRIPT="${SCRIPT_DIR}/php/migrate_from_backup.php"
SOURCE_DB="geosector_20251013_13"
TARGET_DB="pra_geo"

# Default parameters
START_INDEX=1
LIMIT=0
DRY_RUN=0
CONTINUE_ON_ERROR=0
INTERACTIVE_MODE=0
SPECIFIC_ENTITY_ID=""
SPECIFIC_CP=""

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Save the argument count before parsing
INITIAL_ARGS=$#

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        --start)
            START_INDEX="$2"
            shift 2
            ;;
        --limit)
            LIMIT="$2"
            shift 2
            ;;
        --dry-run)
            DRY_RUN=1
            shift
            ;;
        --continue)
            CONTINUE_ON_ERROR=1
            shift
            ;;
        --interactive|-i)
            INTERACTIVE_MODE=1
            shift
            ;;
        --help)
            grep "^#" "$0" | grep -v "^#!/bin/bash" | sed 's/^# //'
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            echo "Use --help for usage"
            exit 1
            ;;
    esac
done

# Enable interactive mode when no arguments were given
if [ $INITIAL_ARGS -eq 0 ]; then
    INTERACTIVE_MODE=1
fi

# Preliminary checks
if [ ! -f "$JSON_FILE" ]; then
    echo -e "${RED}❌ JSON file not found: $JSON_FILE${NC}"
    exit 1
fi

if [ ! -f "$MIGRATION_SCRIPT" ]; then
    echo -e "${RED}❌ Migration script not found: $MIGRATION_SCRIPT${NC}"
    exit 1
fi

# Create the log directory
mkdir -p "$LOG_DIR"

# Log files
BATCH_LOG="${LOG_DIR}/batch_$(date +%Y%m%d_%H%M%S).log"
SUCCESS_LOG="${LOG_DIR}/success.log"
ERROR_LOG="${LOG_DIR}/errors.log"

# INTERACTIVE MODE
if [ $INTERACTIVE_MODE -eq 1 ]; then
    echo ""
    echo -e "${CYAN}═══════════════════════════════════════════════════════════${NC}"
    echo -e "${CYAN}   🔧 Interactive mode - GeoSector entity migration${NC}"
    echo -e "${CYAN}═══════════════════════════════════════════════════════════${NC}"
    echo ""

    # Question 1: global or targeted migration?
    echo -e "${YELLOW}1️⃣  Migration type:${NC}"
    echo -e "   ${CYAN}a)${NC} Global migration (all entities)"
    echo -e "   ${CYAN}b)${NC} Batch migration (range of entities)"
    echo -e "   ${CYAN}c)${NC} Migration by postal code"
    echo -e "   ${CYAN}d)${NC} Migration of a specific entity (ID)"
    echo ""
    echo -ne "${YELLOW}Your choice (a/b/c/d): ${NC}"
    read -r MIGRATION_TYPE
    echo ""

    case $MIGRATION_TYPE in
        a|A)
            # Global migration - keep the defaults
            START_INDEX=1
            LIMIT=0
            echo -e "${GREEN}✓${NC} Global migration selected"
            ;;
        b|B)
            # Batch migration
            echo -e "${YELLOW}2️⃣  Batch configuration:${NC}"
            echo -ne "   First entity (index, default=1): "
            read -r USER_START
            if [ -n "$USER_START" ]; then
                START_INDEX=$USER_START
            fi

            echo -ne "   Limit (number of entities, default=all): "
            read -r USER_LIMIT
            if [ -n "$USER_LIMIT" ]; then
                LIMIT=$USER_LIMIT
            fi
            echo ""
            echo -e "${GREEN}✓${NC} Batch migration: from index $START_INDEX, limited to $LIMIT entities"
            ;;
        c|C)
            # Migration by postal code
            echo -ne "${YELLOW}2️⃣  Postal code to migrate: ${NC}"
            read -r SPECIFIC_CP
            echo ""
            if [ -z "$SPECIFIC_CP" ]; then
                echo -e "${RED}❌ Postal code required${NC}"
                exit 1
            fi
            echo -e "${GREEN}✓${NC} Migration for postal code: $SPECIFIC_CP"
            ;;
        d|D)
            # Migration of a specific entity - bypasses the JSON entirely
            echo -ne "${YELLOW}2️⃣  ID of the entity to migrate: ${NC}"
            read -r SPECIFIC_ENTITY_ID
            echo ""
            if [ -z "$SPECIFIC_ENTITY_ID" ]; then
                echo -e "${RED}❌ Entity ID required${NC}"
                exit 1
            fi
            echo -e "${GREEN}✓${NC} Migrating entity ID: $SPECIFIC_ENTITY_ID"
            echo ""

            # Ask whether to delete the entity's data before migrating
            echo -ne "${YELLOW}3️⃣  Delete this entity's existing data in the TARGET before migrating? (y/N): ${NC}"
            read -r DELETE_BEFORE
            DELETE_FLAG=""
            if [[ $DELETE_BEFORE =~ ^[Yy]$ ]]; then
                echo -e "${GREEN}✓${NC} Data will be deleted before migration"
                DELETE_FLAG="--delete-before"
            else
                echo -e "${BLUE}ℹ${NC} Data will be kept (ON DUPLICATE KEY UPDATE)"
            fi
            echo ""

            # Confirm the migration
            echo -ne "${YELLOW}⚠️  Confirm migration of entity #${SPECIFIC_ENTITY_ID}? (y/N): ${NC}"
            read -r CONFIRM
            if [[ ! $CONFIRM =~ ^[Yy]$ ]]; then
                echo -e "${RED}❌ Migration cancelled${NC}"
                exit 0
            fi

            # Run the migration directly, without going through the JSON
            ENTITY_LOG="${LOG_DIR}/entity_${SPECIFIC_ENTITY_ID}_$(date +%Y%m%d_%H%M%S).log"

            echo ""
            echo -e "${BLUE}⏳ Migrating entity #${SPECIFIC_ENTITY_ID}...${NC}"

            php "$MIGRATION_SCRIPT" \
                --source-db="$SOURCE_DB" \
                --target-db="$TARGET_DB" \
                --mode=entity \
                --entity-id="$SPECIFIC_ENTITY_ID" \
                --log="$ENTITY_LOG" \
                $DELETE_FLAG

            EXIT_CODE=$?

            if [ $EXIT_CODE -eq 0 ]; then
                echo -e "${GREEN}✅ Entity #${SPECIFIC_ENTITY_ID} migrated successfully${NC}"
                echo -e "${BLUE}📋 Detailed log: $ENTITY_LOG${NC}"
            else
                echo -e "${RED}❌ Error while migrating entity #${SPECIFIC_ENTITY_ID}${NC}"
                echo -e "${RED}📋 See the log: $ENTITY_LOG${NC}"
                exit 1
            fi

            exit 0
            ;;
        *)
            echo -e "${RED}❌ Invalid choice${NC}"
            exit 1
            ;;
    esac

    echo ""
fi

# Utility functions
log() {
    echo -e "$1" | tee -a "$BATCH_LOG"
}

log_success() {
    echo "$1" >> "$SUCCESS_LOG"
    log "${GREEN}✓${NC} $1"
}

log_error() {
    echo "$1" >> "$ERROR_LOG"
    log "${RED}✗${NC} $1"
}

# Extract the entity_id values from the JSON (works without jq)
get_entity_ids() {
    if [ -n "$SPECIFIC_ENTITY_ID" ]; then
        # Specific entity by ID - match exactly "entity_id" : ID,
        grep "\"entity_id\" : ${SPECIFIC_ENTITY_ID}," "$JSON_FILE" | sed 's/.*: \([0-9]*\).*/\1/'
    elif [ -n "$SPECIFIC_CP" ]; then
        # Entities by postal code
        grep -B 2 "\"code_postal\" : \"$SPECIFIC_CP\"" "$JSON_FILE" | grep '"entity_id"' | sed 's/.*: \([0-9]*\).*/\1/'
    else
        # All entities
        grep '"entity_id"' "$JSON_FILE" | sed 's/.*: \([0-9]*\).*/\1/'
    fi
}
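# NOTE (editor's sketch): when jq is available in the container, the same
# extraction is more robust than grep/sed. The JSON shape assumed here (an
# array of objects with "entity_id" and "code_postal" keys) is inferred from
# the grep patterns above; this helper is an illustration and is not used by
# the rest of the script.
get_entity_ids_jq() {
    json_file="$1"; entity_id="$2"; cp="$3"
    if [ -n "$entity_id" ]; then
        jq -r --argjson id "$entity_id" '.[] | select(.entity_id == $id) | .entity_id' "$json_file"
    elif [ -n "$cp" ]; then
        jq -r --arg cp "$cp" '.[] | select(.code_postal == $cp) | .entity_id' "$json_file"
    else
        jq -r '.[].entity_id' "$json_file"
    fi
}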
# Count the total number of entities
TOTAL_ENTITIES=$(get_entity_ids | wc -l)

# Check whether any entities were found
if [ $TOTAL_ENTITIES -eq 0 ]; then
    if [ -n "$SPECIFIC_ENTITY_ID" ]; then
        echo -e "${RED}❌ Entity #${SPECIFIC_ENTITY_ID} not found in the JSON file${NC}"
    elif [ -n "$SPECIFIC_CP" ]; then
        echo -e "${RED}❌ No entity found for postal code ${SPECIFIC_CP}${NC}"
    else
        echo -e "${RED}❌ No entity found${NC}"
    fi
    exit 1
fi

# Compute the number of entities to migrate
if [ $LIMIT -gt 0 ]; then
    END_INDEX=$((START_INDEX + LIMIT - 1))
    if [ $END_INDEX -gt $TOTAL_ENTITIES ]; then
        END_INDEX=$TOTAL_ENTITIES
    fi
else
    END_INDEX=$TOTAL_ENTITIES
fi

# Start banner
echo ""
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
log "${BLUE}   Batch migration of GeoSector entities${NC}"
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
log "📅 Date: $(date '+%Y-%m-%d %H:%M:%S')"
log "📁 Source: $SOURCE_DB"
log "📁 Target: $TARGET_DB"

# Show information depending on the mode
if [ -n "$SPECIFIC_ENTITY_ID" ]; then
    log "🎯 Mode: single-entity migration"
    log "📊 Entity ID: $SPECIFIC_ENTITY_ID"
elif [ -n "$SPECIFIC_CP" ]; then
    log "🎯 Mode: migration by postal code"
    log "📮 Postal code: $SPECIFIC_CP"
    log "📊 Entities found: $TOTAL_ENTITIES"
else
    TOTAL_AVAILABLE=$(grep '"entity_id"' "$JSON_FILE" | wc -l)
    log "📊 Total entities available: $TOTAL_AVAILABLE"
    log "🎯 Entities to migrate: $START_INDEX to $END_INDEX"
fi

if [ $DRY_RUN -eq 1 ]; then
    log "${YELLOW}🔍 DRY-RUN mode (simulation)${NC}"
fi
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
echo ""

# User confirmation
if [ $DRY_RUN -eq 0 ]; then
    echo -ne "${YELLOW}⚠️  Confirm the migration? (y/N): ${NC}"
    read -r CONFIRM
    if [[ ! $CONFIRM =~ ^[Yy]$ ]]; then
        log "❌ Migration cancelled by the user"
        exit 0
    fi
echo ""
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Compteurs
|
|
||||||
SUCCESS_COUNT=0
|
|
||||||
ERROR_COUNT=0
|
|
||||||
SKIPPED_COUNT=0
|
|
||||||
CURRENT_INDEX=0
|
|
||||||
|
|
||||||
# Début de la migration
|
|
||||||
START_TIME=$(date +%s)
|
|
||||||
|
|
||||||
# Lire les entity_id et migrer
|
|
||||||
get_entity_ids | while read -r ENTITY_ID; do
|
|
||||||
CURRENT_INDEX=$((CURRENT_INDEX + 1))
|
|
||||||
|
|
||||||
# Filtrer par index
|
|
||||||
if [ $CURRENT_INDEX -lt $START_INDEX ]; then
|
|
||||||
continue
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ $CURRENT_INDEX -gt $END_INDEX ]; then
|
|
||||||
break
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Récupérer les détails de l'entité depuis le JSON (match exact avec la virgule)
|
|
||||||
ENTITY_INFO=$(grep -A 8 "\"entity_id\" : ${ENTITY_ID}," "$JSON_FILE")
|
|
||||||
ENTITY_NAME=$(echo "$ENTITY_INFO" | grep '"nom"' | sed 's/.*: "\(.*\)".*/\1/')
|
|
||||||
ENTITY_CP=$(echo "$ENTITY_INFO" | grep '"code_postal"' | sed 's/.*: "\(.*\)".*/\1/')
|
|
||||||
NB_USERS=$(echo "$ENTITY_INFO" | grep '"nb_users"' | sed 's/.*: \([0-9]*\).*/\1/')
|
|
||||||
NB_PASSAGES=$(echo "$ENTITY_INFO" | grep '"nb_passages"' | sed 's/.*: \([0-9]*\).*/\1/')
|
|
||||||
|
|
||||||
# Afficher la progression
|
|
||||||
PROGRESS=$((CURRENT_INDEX - START_INDEX + 1))
|
|
||||||
TOTAL=$((END_INDEX - START_INDEX + 1))
|
|
||||||
PERCENT=$((PROGRESS * 100 / TOTAL))
|
|
||||||
|
|
||||||
log ""
|
|
||||||
log "${BLUE}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
|
|
||||||
log "${BLUE}[$PROGRESS/$TOTAL - ${PERCENT}%]${NC} Entité #${ENTITY_ID}: ${ENTITY_NAME} (${ENTITY_CP})"
|
|
||||||
log " 👥 Users: ${NB_USERS} | 📍 Passages: ${NB_PASSAGES}"
|
|
||||||
|
|
||||||
# Mode dry-run
|
|
||||||
if [ $DRY_RUN -eq 1 ]; then
|
|
||||||
log "${YELLOW} 🔍 [DRY-RUN] Simulation de la migration${NC}"
|
|
||||||
SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
|
|
||||||
continue
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Exécuter la migration
|
|
||||||
ENTITY_LOG="${LOG_DIR}/entity_${ENTITY_ID}_$(date +%Y%m%d_%H%M%S).log"
|
|
||||||
|
|
||||||
log " ⏳ Migration en cours..."
|
|
||||||
php "$MIGRATION_SCRIPT" \
|
|
||||||
--source-db="$SOURCE_DB" \
|
|
||||||
--target-db="$TARGET_DB" \
|
|
||||||
--mode=entity \
|
|
||||||
--entity-id="$ENTITY_ID" \
|
|
||||||
--log="$ENTITY_LOG" > /tmp/migration_output_$$.txt 2>&1
|
|
||||||
|
|
||||||
EXIT_CODE=$?
|
|
||||||
|
|
||||||
if [ $EXIT_CODE -eq 0 ]; then
|
|
||||||
# Succès
|
|
||||||
SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
|
|
||||||
log_success "Entité #${ENTITY_ID} (${ENTITY_NAME}) migrée avec succès"
|
|
||||||
|
|
||||||
# Afficher un résumé du log avec détails
|
|
||||||
if [ -f "$ENTITY_LOG" ]; then
|
|
||||||
# Chercher la ligne avec les marqueurs #STATS#
|
|
||||||
STATS_LINE=$(grep "#STATS#" "$ENTITY_LOG" 2>/dev/null)
|
|
||||||
|
|
||||||
if [ -n "$STATS_LINE" ]; then
|
|
||||||
# Extraire chaque compteur
|
|
||||||
OPE=$(echo "$STATS_LINE" | grep -oE 'OPE:[0-9]+' | cut -d: -f2)
|
|
||||||
USERS=$(echo "$STATS_LINE" | grep -oE 'USER:[0-9]+' | cut -d: -f2)
|
|
||||||
SECTORS=$(echo "$STATS_LINE" | grep -oE 'SECTOR:[0-9]+' | cut -d: -f2)
|
|
||||||
PASSAGES=$(echo "$STATS_LINE" | grep -oE 'PASS:[0-9]+' | cut -d: -f2)
|
|
||||||
|
|
||||||
# Valeurs par défaut si extraction échoue
|
|
||||||
OPE=${OPE:-0}
|
|
||||||
USERS=${USERS:-0}
|
|
||||||
SECTORS=${SECTORS:-0}
|
|
||||||
PASSAGES=${PASSAGES:-0}
|
|
||||||
|
|
||||||
log " 📊 ope: ${OPE} | users: ${USERS} | sectors: ${SECTORS} | passages: ${PASSAGES}"
|
|
||||||
else
|
|
||||||
log " 📊 Statistiques non disponibles"
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
else
|
|
||||||
# Erreur
|
|
||||||
ERROR_COUNT=$((ERROR_COUNT + 1))
|
|
||||||
log_error "Entité #${ENTITY_ID} (${ENTITY_NAME}) - Erreur code $EXIT_CODE"
|
|
||||||
|
|
||||||
# Afficher les dernières lignes du log d'erreur
|
|
||||||
if [ -f "/tmp/migration_output_$$.txt" ]; then
|
|
||||||
log "${RED} 📋 Dernières erreurs:${NC}"
|
|
||||||
tail -5 "/tmp/migration_output_$$.txt" | sed 's/^/ /' | tee -a "$BATCH_LOG"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Arrêter ou continuer ?
|
|
||||||
if [ $CONTINUE_ON_ERROR -eq 0 ]; then
|
|
||||||
log ""
|
|
||||||
log "${RED}❌ Migration interrompue suite à une erreur${NC}"
|
|
||||||
log " Utilisez --continue pour continuer malgré les erreurs"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Nettoyage
|
|
||||||
rm -f "/tmp/migration_output_$$.txt"
|
|
||||||
|
|
||||||
# Pause entre les migrations (pour éviter de surcharger)
|
|
||||||
sleep 1
|
|
||||||
done
|
|
||||||
|
|
||||||
# Fin de la migration
|
|
||||||
END_TIME=$(date +%s)
|
|
||||||
DURATION=$((END_TIME - START_TIME))
|
|
||||||
HOURS=$((DURATION / 3600))
|
|
||||||
MINUTES=$(((DURATION % 3600) / 60))
|
|
||||||
SECONDS=$((DURATION % 60))
|
|
||||||
|
|
||||||
# Résumé final
|
|
||||||
log ""
|
|
||||||
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
|
|
||||||
log "${BLUE} Résumé de la migration${NC}"
|
|
||||||
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
|
|
||||||
log "✅ Succès: ${GREEN}${SUCCESS_COUNT}${NC}"
|
|
||||||
log "❌ Erreurs: ${RED}${ERROR_COUNT}${NC}"
|
|
||||||
log "⏭️ Ignorées: ${YELLOW}${SKIPPED_COUNT}${NC}"
|
|
||||||
log "⏱️ Durée: ${HOURS}h ${MINUTES}m ${SECONDS}s"
|
|
||||||
log ""
|
|
||||||
log "📋 Logs détaillés:"
|
|
||||||
log " - Batch: $BATCH_LOG"
|
|
||||||
log " - Succès: $SUCCESS_LOG"
|
|
||||||
log " - Erreurs: $ERROR_LOG"
|
|
||||||
log " - Individuels: $LOG_DIR/entity_*.log"
|
|
||||||
log "${BLUE}═══════════════════════════════════════════════════════════${NC}"
|
|
||||||
|
|
||||||
# Code de sortie
|
|
||||||
if [ $ERROR_COUNT -gt 0 ]; then
|
|
||||||
exit 1
|
|
||||||
else
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
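The `grep -oE`/`cut` counter extraction used above can be exercised on its own. The `#STATS#` line below is illustrative (made-up values in the `#STATS# KEY:VAL` format, not real migration output):

```shell
# Illustrative #STATS# line; the counter values are made up.
STATS_LINE='#STATS# OPE:2 USER:15 SECTOR:4 PASS:380'

# Same technique as the batch script: grab "KEY:digits",
# then keep the part after the colon.
OPE=$(echo "$STATS_LINE" | grep -oE 'OPE:[0-9]+' | cut -d: -f2)
USERS=$(echo "$STATS_LINE" | grep -oE 'USER:[0-9]+' | cut -d: -f2)
SECTORS=$(echo "$STATS_LINE" | grep -oE 'SECTOR:[0-9]+' | cut -d: -f2)
PASSAGES=$(echo "$STATS_LINE" | grep -oE 'PASS:[0-9]+' | cut -d: -f2)

echo "ope=$OPE users=$USERS sectors=$SECTORS passages=$PASSAGES"
```

This prints `ope=2 users=15 sectors=4 passages=380`; an absent key simply yields an empty variable, which the script then defaults to 0 with `${VAR:-0}`.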
@@ -1,176 +0,0 @@
<?php

/**
 * Abstract base class for all migrators
 *
 * Provides the common methods for migrating the data of one table
 */
abstract class DataMigrator
{
    protected $connection;
    protected $logger;
    protected $sourceDb;
    protected $targetDb;

    /**
     * Constructor
     *
     * @param DatabaseConnection $connection Connection to the databases
     * @param MigrationLogger $logger Logger
     */
    public function __construct(DatabaseConnection $connection, MigrationLogger $logger)
    {
        $this->connection = $connection;
        $this->logger = $logger;
        $this->sourceDb = $connection->getSourceDb();
        $this->targetDb = $connection->getTargetDb();
    }

    /**
     * Main migration method (implemented by each concrete migrator)
     *
     * @param int|null $entityId ID of the entity to migrate (null = all)
     * @param bool $deleteBefore Delete existing data before migrating
     * @return array ['success' => int, 'errors' => int]
     */
    abstract public function migrate(?int $entityId = null, bool $deleteBefore = false): array;

    /**
     * Returns the name of the table handled by this migrator
     */
    abstract public function getTableName(): string;

    /**
     * Deletes an entity's data in the target database
     * Override when the deletion logic is table-specific
     *
     * @param int $entityId Entity ID
     * @return int Number of rows deleted
     */
    protected function deleteEntityData(int $entityId): int
    {
        $table = $this->getTableName();

        try {
            // Default: simple deletion keyed on fk_entite
            $stmt = $this->targetDb->prepare("DELETE FROM $table WHERE fk_entite = ?");
            $stmt->execute([$entityId]);
            $deleted = $stmt->rowCount();

            if ($deleted > 0) {
                $this->logger->debug("   Supprimé $deleted ligne(s) de $table pour entité #$entityId");
            }

            return $deleted;

        } catch (PDOException $e) {
            $this->logger->warning("   Erreur suppression $table: " . $e->getMessage());
            return 0;
        }
    }

    /**
     * Counts the rows in the source
     *
     * @param int|null $entityId Entity ID (null = all)
     * @return int Row count
     */
    protected function countSourceRows(?int $entityId = null): int
    {
        return $this->connection->countSourceRows($this->getTableName(), $entityId);
    }

    /**
     * Counts the rows in the target
     *
     * @param int|null $entityId Entity ID (null = all)
     * @return int Row count
     */
    protected function countTargetRows(?int $entityId = null): int
    {
        return $this->connection->countTargetRows($this->getTableName(), $entityId);
    }

    /**
     * Logs the start of a table migration
     */
    protected function logStart(?int $entityId = null): void
    {
        $table = $this->getTableName();
        $entityStr = $entityId ? " pour entité #$entityId" : " (toutes les entités)";
        $this->logger->info("🔄 Migration de $table{$entityStr}...");
    }

    /**
     * Logs the end of the migration with statistics
     *
     * @param int $success Number of successes
     * @param int $errors Number of errors
     * @param int|null $entityId Entity ID
     */
    protected function logEnd(int $success, int $errors, ?int $entityId = null): void
    {
        $table = $this->getTableName();
        $sourceCount = $this->countSourceRows($entityId);
        $targetCount = $this->countTargetRows($entityId);
        $diff = $targetCount - $sourceCount;
        $diffStr = $diff >= 0 ? "+$diff" : "$diff";

        if ($errors > 0) {
            $this->logger->warning("   ⚠️  $table: $success succès, $errors erreurs");
        } else {
            $this->logger->success("   ✓ $table: $success enregistrement(s) migré(s)");
        }

        $this->logger->info("   📊 SOURCE: $sourceCount → CIBLE: $targetCount (différence: $diffStr)");
    }

    /**
     * Runs an INSERT query with ON DUPLICATE KEY UPDATE
     *
     * @param string $insertSql Insert SQL
     * @param array $data Data to insert
     * @return bool True on success
     */
    protected function insertOrUpdate(string $insertSql, array $data): bool
    {
        try {
            $stmt = $this->targetDb->prepare($insertSql);
            $stmt->execute($data);
            return true;
        } catch (PDOException $e) {
            $this->logger->debug("   Erreur INSERT: " . $e->getMessage());
            return false;
        }
    }

    /**
     * Starts a transaction on the target
     */
    protected function beginTransaction(): void
    {
        if (!$this->targetDb->inTransaction()) {
            $this->targetDb->beginTransaction();
        }
    }

    /**
     * Commits the transaction
     */
    protected function commit(): void
    {
        if ($this->targetDb->inTransaction()) {
            $this->targetDb->commit();
        }
    }

    /**
     * Rolls back the transaction
     */
    protected function rollback(): void
    {
        if ($this->targetDb->inTransaction()) {
            $this->targetDb->rollBack();
        }
    }
}
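A concrete migrator only has to implement `migrate()` and `getTableName()`; the transaction, logging, deletion, and upsert helpers come from the base class. The sketch below is a hypothetical example of that contract (the `users` column list is illustrative, not the real schema):

```php
<?php

/**
 * Hypothetical migrator illustrating the DataMigrator contract.
 * Table/column names here are illustrative only.
 */
class UsersMigrator extends DataMigrator
{
    public function getTableName(): string
    {
        return 'users';
    }

    public function migrate(?int $entityId = null, bool $deleteBefore = false): array
    {
        $this->logStart($entityId);
        $success = 0;
        $errors = 0;

        if ($deleteBefore && $entityId !== null) {
            $this->deleteEntityData($entityId);
        }

        // Read from the source, optionally filtered by entity
        $sql = "SELECT * FROM users" . ($entityId !== null ? " WHERE fk_entite = ?" : "");
        $stmt = $this->sourceDb->prepare($sql);
        $stmt->execute($entityId !== null ? [$entityId] : []);

        $this->beginTransaction();
        foreach ($stmt as $row) {
            // Upsert into the target via the base-class helper
            $insert = "INSERT INTO users (id, fk_entite, nom) VALUES (?, ?, ?)
                       ON DUPLICATE KEY UPDATE nom = VALUES(nom)";
            if ($this->insertOrUpdate($insert, [$row['id'], $row['fk_entite'], $row['nom']])) {
                $success++;
            } else {
                $errors++;
            }
        }
        $this->commit();

        $this->logEnd($success, $errors, $entityId);
        return ['success' => $success, 'errors' => $errors];
    }
}
```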
@@ -1,192 +0,0 @@
<?php

/**
 * Configuration of the migration environments
 *
 * Uses AppConfig to retrieve the DB configuration
 * Source: geosector (synchronized by PM7)
 * Targets: dva_geo (IN3/maria3), rca_geo (IN3/maria3) or pra_geo (IN4/maria4)
 */
class DatabaseConfig
{
    private const ENV_MAPPING = [
        'dva' => [
            'name' => 'DÉVELOPPEMENT',
            'hostname' => 'dapp.geosector.fr',
            'source_db' => 'geosector',
            'target_db' => 'dva_geo'
        ],
        'rca' => [
            'name' => 'RECETTE',
            'hostname' => 'rapp.geosector.fr',
            'source_db' => 'geosector',
            'target_db' => 'rca_geo'
        ],
        'pra' => [
            'name' => 'PRODUCTION',
            'hostname' => 'app3.geosector.fr',
            'source_db' => 'geosector',
            'target_db' => 'pra_geo'
        ]
    ];

    private $env;
    private $config;
    private $appConfig;

    /**
     * Constructor
     *
     * @param string $env Environment: 'dva', 'rca' or 'pra'
     * @throws Exception If the environment is invalid
     */
    public function __construct(string $env)
    {
        if (!isset(self::ENV_MAPPING[$env])) {
            throw new Exception("Invalid environment: $env. Use 'dva', 'rca' or 'pra'");
        }

        $this->env = $env;

        // Load AppConfig (go up 4 levels: lib -> php -> migration2 -> scripts -> api)
        $appConfigPath = dirname(__DIR__, 4) . '/src/Config/AppConfig.php';
        if (!file_exists($appConfigPath)) {
            throw new Exception("AppConfig not found at: $appConfigPath");
        }
        require_once $appConfigPath;

        // Fake the host for AppConfig when running from the CLI
        $hostname = self::ENV_MAPPING[$env]['hostname'];
        $_SERVER['SERVER_NAME'] = $hostname;
        $_SERVER['HTTP_HOST'] = $hostname;

        $this->appConfig = AppConfig::getInstance();

        // Retrieve the DB configuration from AppConfig
        $dbConfig = $this->appConfig->getDatabaseConfig();

        if (!$dbConfig || !isset($dbConfig['host'])) {
            throw new Exception("Database configuration not found for hostname: $hostname");
        }

        // Build the configuration used by the migration
        $this->config = [
            'name' => self::ENV_MAPPING[$env]['name'],
            'host' => $dbConfig['host'],
            'port' => $dbConfig['port'] ?? 3306,
            'user' => $dbConfig['username'],
            'pass' => $dbConfig['password'],
            'source_db' => self::ENV_MAPPING[$env]['source_db'],
            'target_db' => self::ENV_MAPPING[$env]['target_db']
        ];
    }

    /**
     * Returns the current environment
     */
    public function getEnv(): string
    {
        return $this->env;
    }

    /**
     * Returns the full name of the environment
     */
    public function getEnvName(): string
    {
        return $this->config['name'];
    }

    /**
     * Returns the database host
     */
    public function getHost(): string
    {
        return $this->config['host'];
    }

    /**
     * Returns the database port
     */
    public function getPort(): int
    {
        return $this->config['port'];
    }

    /**
     * Returns the database user
     */
    public function getUser(): string
    {
        return $this->config['user'];
    }

    /**
     * Returns the database password
     */
    public function getPassword(): string
    {
        return $this->config['pass'];
    }

    /**
     * Returns the name of the source database
     */
    public function getSourceDb(): string
    {
        return $this->config['source_db'];
    }

    /**
     * Returns the name of the target database
     */
    public function getTargetDb(): string
    {
        return $this->config['target_db'];
    }

    /**
     * Returns the whole configuration
     */
    public function getConfig(): array
    {
        return $this->config;
    }

    /**
     * Automatically detects the environment from the hostname
     *
     * @return string 'dva', 'rca' or 'pra' (default: 'dva')
     */
    public static function autoDetect(): string
    {
        $hostname = gethostname();

        switch ($hostname) {
            case 'dva-geo':
                return 'dva';
            case 'rca-geo':
                return 'rca';
            case 'pra-geo':
                return 'pra';
            default:
                return 'dva'; // Default
        }
    }

    /**
     * Checks whether an environment exists
     */
    public static function exists(string $env): bool
    {
        return isset(self::ENV_MAPPING[$env]);
    }

    /**
     * Returns the list of available environments
     */
    public static function getAvailableEnvironments(): array
    {
        return array_keys(self::ENV_MAPPING);
    }
}
@@ -1,201 +0,0 @@
<?php

/**
 * PDO connection handling
 *
 * Creates and maintains the connections to the source and target databases
 */
class DatabaseConnection
{
    private $config;
    private $logger;
    private $sourceDb;
    private $targetDb;

    /**
     * Constructor
     *
     * @param DatabaseConfig $config Environment configuration
     * @param MigrationLogger $logger Logger for the messages
     */
    public function __construct(DatabaseConfig $config, MigrationLogger $logger)
    {
        $this->config = $config;
        $this->logger = $logger;
    }

    /**
     * Opens the connections to the source and target databases
     *
     * @return bool True on success
     */
    public function connect(): bool
    {
        try {
            // Connect to the source database
            $this->connectSource();

            // Connect to the target database
            $this->connectTarget();

            // Check the MariaDB versions
            $this->checkVersions();

            return true;

        } catch (PDOException $e) {
            $this->logger->error("Erreur de connexion: " . $e->getMessage());
            return false;
        }
    }

    /**
     * Connects to the source database
     */
    private function connectSource(): void
    {
        $dsn = sprintf(
            'mysql:host=%s;port=%d;dbname=%s;charset=utf8mb4',
            $this->config->getHost(),
            $this->config->getPort(),
            $this->config->getSourceDb()
        );

        $this->sourceDb = new PDO($dsn, $this->config->getUser(), $this->config->getPassword(), [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
            PDO::ATTR_TIMEOUT => 600,
            PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8mb4"
        ]);

        $this->logger->success("✓ Connexion SOURCE: {$this->config->getSourceDb()} sur {$this->config->getHost()}");
    }

    /**
     * Connects to the target database
     */
    private function connectTarget(): void
    {
        $dsn = sprintf(
            'mysql:host=%s;port=%d;dbname=%s;charset=utf8mb4',
            $this->config->getHost(),
            $this->config->getPort(),
            $this->config->getTargetDb()
        );

        $this->targetDb = new PDO($dsn, $this->config->getUser(), $this->config->getPassword(), [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
            PDO::ATTR_TIMEOUT => 600,
            PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8mb4"
        ]);

        $this->logger->success("✓ Connexion CIBLE: {$this->config->getTargetDb()} sur {$this->config->getHost()}");
    }

    /**
     * Checks and displays the MariaDB versions
     */
    private function checkVersions(): void
    {
        $sourceVersion = $this->sourceDb->query("SELECT VERSION()")->fetchColumn();
        $targetVersion = $this->targetDb->query("SELECT VERSION()")->fetchColumn();

        $this->logger->info("  Version SOURCE: $sourceVersion");
        $this->logger->info("  Version CIBLE: $targetVersion");
    }

    /**
     * Returns the connection to the source database
     */
    public function getSourceDb(): PDO
    {
        if (!$this->sourceDb) {
            throw new Exception("Source database not connected. Call connect() first.");
        }
        return $this->sourceDb;
    }

    /**
     * Returns the connection to the target database
     */
    public function getTargetDb(): PDO
    {
        if (!$this->targetDb) {
            throw new Exception("Target database not connected. Call connect() first.");
        }
        return $this->targetDb;
    }

    /**
     * Counts the rows of a table in the source
     *
     * @param string $table Table name
     * @param int|null $entityId Filter by fk_entite (optional)
     * @return int Row count
     */
    public function countSourceRows(string $table, ?int $entityId = null): int
    {
        $sql = "SELECT COUNT(*) FROM $table";

        if ($entityId !== null) {
            // Tables with a direct fk_entite column
            if (in_array($table, ['users', 'operations', 'entites'])) {
                $sql .= " WHERE fk_entite = :entity_id";
            }
            // Tables linked through operations
            elseif (in_array($table, ['ope_sectors', 'ope_users', 'ope_pass'])) {
                $sql .= " WHERE fk_operation IN (SELECT id FROM operations WHERE fk_entite = :entity_id)";
            }
        }

        $stmt = $this->sourceDb->prepare($sql);
        if ($entityId !== null) {
            $stmt->execute(['entity_id' => $entityId]);
        } else {
            $stmt->execute();
        }

        return (int) $stmt->fetchColumn();
    }

    /**
     * Counts the rows of a table in the target
     *
     * @param string $table Table name
     * @param int|null $entityId Filter by fk_entite (optional)
     * @return int Row count
     */
    public function countTargetRows(string $table, ?int $entityId = null): int
    {
        $sql = "SELECT COUNT(*) FROM $table";

        if ($entityId !== null) {
            if (in_array($table, ['users', 'operations', 'entites'])) {
                $sql .= " WHERE fk_entite = :entity_id";
            }
            elseif (in_array($table, ['ope_sectors', 'ope_users', 'ope_pass'])) {
                $sql .= " WHERE fk_operation IN (SELECT id FROM operations WHERE fk_entite = :entity_id)";
            }
        }

        $stmt = $this->targetDb->prepare($sql);
        if ($entityId !== null) {
            $stmt->execute(['entity_id' => $entityId]);
        } else {
            $stmt->execute();
        }

        return (int) $stmt->fetchColumn();
    }

    /**
     * Closes the connections
     */
    public function close(): void
    {
        $this->sourceDb = null;
        $this->targetDb = null;
        $this->logger->info("Connexions fermées");
    }
}
@@ -1,219 +0,0 @@
<?php

/**
 * Migration log handling
 *
 * Writes to a file and prints to the screen with timestamps
 */
class MigrationLogger
{
    private $logFile;
    private $verbose;

    /**
     * Constructor
     *
     * @param string|null $logFile Path of the log file (null = auto-generated)
     * @param bool $verbose Also print the logs to the screen
     */
    public function __construct(?string $logFile = null, bool $verbose = true)
    {
        // Default log directory (migration2/logs/)
        $defaultLogDir = dirname(__DIR__, 2) . '/logs';
        $this->logFile = $logFile ?? $defaultLogDir . '/migration_' . date('Ymd_His') . '.log';
        $this->verbose = $verbose;

        // Create the parent directory if needed
        $dir = dirname($this->logFile);
        if (!is_dir($dir)) {
            mkdir($dir, 0755, true);
        }

        // Check that the log file location is writable
        if (!is_writable(dirname($this->logFile))) {
            throw new Exception("Log directory is not writable: " . dirname($this->logFile));
        }
    }

    /**
     * Logs a message at INFO level
     */
    public function info(string $message): void
    {
        $this->log($message, 'INFO');
    }

    /**
     * Logs a message at SUCCESS level
     */
    public function success(string $message): void
    {
        $this->log($message, 'SUCCESS');
    }

    /**
     * Logs a message at WARNING level
     */
    public function warning(string $message): void
    {
        $this->log($message, 'WARNING');
    }

    /**
     * Logs a message at ERROR level
     */
    public function error(string $message): void
    {
        $this->log($message, 'ERROR');
    }

    /**
     * Logs a message at DEBUG level
     */
    public function debug(string $message): void
    {
        $this->log($message, 'DEBUG');
    }

    /**
     * Logs a separator line
     */
    public function separator(): void
    {
        $this->log(str_repeat('=', 80), 'INFO');
    }

    /**
     * Generic log method
     *
     * @param string $message Message to log
     * @param string $level Level: INFO, SUCCESS, WARNING, ERROR, DEBUG
     */
    private function log(string $message, string $level = 'INFO'): void
    {
        $timestamp = date('Y-m-d H:i:s');
        $logLine = "[{$timestamp}] [{$level}] {$message}\n";

        // Write to the file
        file_put_contents($this->logFile, $logLine, FILE_APPEND);

        // Print to the screen when verbose
        if ($this->verbose) {
            $this->printColored($message, $level);
        }
    }

    /**
     * Prints a message colored according to its level
     */
    private function printColored(string $message, string $level): void
    {
        $colors = [
            'INFO'    => "\033[0;37m", // White
            'SUCCESS' => "\033[0;32m", // Green
            'WARNING' => "\033[0;33m", // Yellow
            'ERROR'   => "\033[0;31m", // Red
            'DEBUG'   => "\033[0;36m"  // Cyan
        ];

        $reset = "\033[0m";
        $color = $colors[$level] ?? $colors['INFO'];

        echo $color . $message . $reset . "\n";
    }

    /**
     * Returns the path of the log file
     */
    public function getLogFile(): string
    {
        return $this->logFile;
    }

    /**
     * Logs migration statistics
     *
     * @param array $stats Associative array [table => count]
     */
    public function logStats(array $stats): void
    {
        $this->separator();
        $this->info("📊 Statistiques de migration:");

        foreach ($stats as $table => $count) {
            $this->info("  - {$table}: {$count} enregistrement(s)");
        }

        $this->separator();
    }

    /**
     * Logs a special line meant for automatic parsing
     * Format: #STATS# KEY1:VAL1 KEY2:VAL2 ...
     */
    public function logParsableStats(array $stats): void
    {
        $pairs = [];
        foreach ($stats as $key => $value) {
            $pairs[] = strtoupper($key) . ':' . $value;
        }

        $line = '#STATS# ' . implode(' ', $pairs);
        $this->log($line, 'INFO');
    }

    /**
     * Prints and logs a full migration summary
     *
     * @param array $summary Hierarchical statistics array
     */
    public function logMigrationSummary(array $summary): void
    {
        $this->separator();
        $this->separator();
        $this->info("📊 RÉCAPITULATIF DE LA MIGRATION");
        $this->separator();

        // Entity
        if (isset($summary['entity'])) {
            $this->info("Entité: {$summary['entity']['name']} (ID: {$summary['entity']['id']})");
        }
        $this->info("Date: " . date('Y-m-d H:i:s'));
        $this->info("");

        // Total number of operations
        $totalOperations = count($summary['operations'] ?? []);
        $this->success("Opérations migrées: {$totalOperations}");
        $this->info("");

        // Per-operation detail
        $operationNum = 1;
        foreach ($summary['operations'] ?? [] as $operation) {
            $this->info("Opération #{$operationNum}: \"{$operation['name']}\" (ID: {$operation['id']})");
            $this->info("  ├─ Utilisateurs: {$operation['users']}");
            $this->info("  ├─ Secteurs: {$operation['sectors']}");
            $this->info("  ├─ Passages totaux: {$operation['total_passages']}");

            if (!empty($operation['sectors_detail'])) {
                $this->info("  └─ Détail par secteur:");

                $sectorCount = count($operation['sectors_detail']);
                $sectorNum = 0;
                foreach ($operation['sectors_detail'] as $sector) {
                    $sectorNum++;
                    $isLast = ($sectorNum === $sectorCount);
                    $prefix = $isLast ? "     └─" : "     ├─";

                    $this->info("{$prefix} {$sector['name']} (ID: {$sector['id']})");
                    $this->info("   " . ($isLast ? " " : "│") . "   ├─ Utilisateurs affectés: {$sector['users']}");
                    $this->info("   " . ($isLast ? " " : "│") . "   └─ Passages: {$sector['passages']}");
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
$this->info("");
|
|
||||||
$operationNum++;
|
|
||||||
}
|
|
||||||
|
|
||||||
$this->separator();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -1,312 +0,0 @@
<?php

/**
 * Migration of complete operations
 *
 * Orchestrates the migration of an operation with all of its users,
 * sectors, passages, and media. Uses UserMigrator and SectorMigrator.
 */
class OperationMigrator
{
    private PDO $sourceDb;
    private PDO $targetDb;
    private MigrationLogger $logger;
    private UserMigrator $userMigrator;
    private SectorMigrator $sectorMigrator;

    /**
     * Constructor
     *
     * @param PDO $sourceDb Source connection
     * @param PDO $targetDb Target connection
     * @param MigrationLogger $logger Logger
     * @param UserMigrator $userMigrator User migrator
     * @param SectorMigrator $sectorMigrator Sector migrator
     */
    public function __construct(
        PDO $sourceDb,
        PDO $targetDb,
        MigrationLogger $logger,
        UserMigrator $userMigrator,
        SectorMigrator $sectorMigrator
    ) {
        $this->sourceDb = $sourceDb;
        $this->targetDb = $targetDb;
        $this->logger = $logger;
        $this->userMigrator = $userMigrator;
        $this->sectorMigrator = $sectorMigrator;
    }

    /**
     * Retrieves the operations to migrate for an entity:
     * - the 1 active operation
     * - the last 2 inactive operations with at least 10 completed passages
     *
     * @param int $entityId Entity ID
     * @return array List of operation IDs to migrate
     */
    public function getOperationsToMigrate(int $entityId): array
    {
        $operationIds = [];

        // 1. Fetch the active operation (for verification)
        $stmt = $this->sourceDb->prepare("
            SELECT rowid
            FROM operations
            WHERE fk_entite = :entity_id AND active = 1
            LIMIT 1
        ");
        $stmt->execute([':entity_id' => $entityId]);
        $activeOp = $stmt->fetch(PDO::FETCH_COLUMN);

        // 2. Fetch the last 2 inactive operations with >= 10 completed passages
        // ORDER BY DESC to get the most recent ones, then reverse
        $stmt = $this->sourceDb->prepare("
            SELECT o.rowid, COUNT(p.rowid) as nb_passages
            FROM operations o
            LEFT JOIN ope_pass p ON p.fk_operation = o.rowid AND p.fk_type = 1
            WHERE o.fk_entite = :entity_id
              AND o.active = 0
              " . ($activeOp ? "AND o.rowid != :active_id" : "") . "
            GROUP BY o.rowid
            HAVING nb_passages >= 10
            ORDER BY o.rowid DESC
            LIMIT 2
        ");

        $params = [':entity_id' => $entityId];
        if ($activeOp) {
            $params[':active_id'] = $activeOp;
        }

        $stmt->execute($params);
        $inactiveOps = $stmt->fetchAll(PDO::FETCH_ASSOC);

        // Reverse to restore chronological order (oldest → most recent)
        $inactiveOps = array_reverse($inactiveOps);

        foreach ($inactiveOps as $op) {
            $operationIds[] = $op['rowid'];
            $this->logger->info("✓ Opération inactive trouvée: {$op['rowid']} ({$op['nb_passages']} passages)");
        }

        // 3. Add the active operation LAST
        if ($activeOp) {
            $operationIds[] = $activeOp;
            $this->logger->info("✓ Opération active trouvée: {$activeOp}");
        }

        $this->logger->info("📊 Total: " . count($operationIds) . " opération(s) à migrer");

        return $operationIds;
    }

    /**
     * Migrates a complete operation with all of its users and sectors
     *
     * @param int $oldOperationId Operation ID in the old database
     * @return array|null Statistics array, or null on error
     */
    public function migrateOperation(int $oldOperationId): ?array
    {
        $this->logger->separator();
        $this->logger->info("🔄 Migration de l'opération ID: {$oldOperationId}");

        try {
            // 1. Fetch the source operation
            $stmt = $this->sourceDb->prepare("
                SELECT * FROM operations
                WHERE rowid = :id
            ");
            $stmt->execute([':id' => $oldOperationId]);
            $operation = $stmt->fetch(PDO::FETCH_ASSOC);

            if (!$operation) {
                $this->logger->warning("Opération {$oldOperationId} non trouvée");
                return null;
            }

            // 2. Create the operation in the new database
            $newOperationId = $this->createOperation($operation);

            if (!$newOperationId) {
                return null;
            }

            $this->logger->success("✓ Opération créée avec ID: {$newOperationId}");

            // 3. Migrate the operation's users
            // For an active operation: all active users of the entity
            // For an inactive operation: only those present in ope_users_sectors
            $entityId = (int)$operation['fk_entite'];
            $isActiveOperation = (int)$operation['active'] === 1;

            $userResult = $this->userMigrator->migrateOperationUsers(
                $oldOperationId,
                $newOperationId,
                $entityId,
                $isActiveOperation
            );
            $userMapping = $userResult['mapping'];
            $usersCount = $userResult['count'];

            if (empty($userMapping)) {
                $this->logger->warning("Aucun utilisateur migré, abandon de l'opération {$oldOperationId}");
                return null;
            }

            // 4. Fetch the operation's DISTINCT sectors
            $stmt = $this->sourceDb->prepare("
                SELECT DISTINCT fk_sector
                FROM ope_users_sectors
                WHERE fk_operation = :operation_id AND active = 1
            ");
            $stmt->execute([':operation_id' => $oldOperationId]);
            $sectors = $stmt->fetchAll(PDO::FETCH_COLUMN);

            $this->logger->info("📍 " . count($sectors) . " secteur(s) distinct(s) à migrer");

            // 5. Migrate each sector and collect its stats
            $sectorsDetail = [];
            $totalPassages = 0;

            foreach ($sectors as $oldSectorId) {
                $sectorStats = $this->sectorMigrator->migrateSector(
                    $oldOperationId,
                    $newOperationId,
                    $oldSectorId,
                    $userMapping
                );

                if ($sectorStats) {
                    $sectorsDetail[] = $sectorStats;
                    $totalPassages += $sectorStats['passages'];
                }
            }

            // 6. Migrate the operation's media (support='operations')
            $this->migrateOperationMedias($oldOperationId, $newOperationId);

            $this->logger->success("✅ Migration de l'opération {$oldOperationId} terminée");

            // 7. Return the statistics
            return [
                'id' => $newOperationId,
                'name' => $operation['libelle'],
                'users' => $usersCount,
                'sectors' => count($sectorsDetail),
                'total_passages' => $totalPassages,
                'sectors_detail' => $sectorsDetail
            ];

        } catch (Exception $e) {
            $this->logger->error("❌ Erreur migration opération {$oldOperationId}: " . $e->getMessage());
            return null;
        }
    }

    /**
     * Creates an operation in the new database
     *
     * @param array $operation Operation data
     * @return int|null New operation ID, or null on error
     */
    private function createOperation(array $operation): ?int
    {
        try {
            $stmt = $this->targetDb->prepare("
                INSERT INTO operations (
                    fk_entite, libelle, date_deb, date_fin,
                    chk_distinct_sectors,
                    created_at, fk_user_creat, updated_at, fk_user_modif, chk_active
                ) VALUES (
                    :fk_entite, :libelle, :date_deb, :date_fin,
                    :chk_distinct_sectors,
                    :created_at, :fk_user_creat, :updated_at, :fk_user_modif, :chk_active
                )
            ");

            $stmt->execute([
                ':fk_entite' => $operation['fk_entite'],
                ':libelle' => $operation['libelle'],
                ':date_deb' => $operation['date_deb'],
                ':date_fin' => $operation['date_fin'],
                ':chk_distinct_sectors' => $operation['chk_distinct_sectors'],
                ':created_at' => $operation['date_creat'],
                ':fk_user_creat' => $operation['fk_user_creat'],
                ':updated_at' => $operation['date_modif'],
                ':fk_user_modif' => $operation['fk_user_modif'] ?? 0,
                ':chk_active' => $operation['active']
            ]);

            return (int)$this->targetDb->lastInsertId();

        } catch (Exception $e) {
            $this->logger->error("❌ Erreur création opération: " . $e->getMessage());
            return null;
        }
    }

    /**
     * Migrates an operation's media
     *
     * @param int $oldOperationId Old operation ID
     * @param int $newOperationId New operation ID
     * @return int Number of media items migrated
     */
    private function migrateOperationMedias(int $oldOperationId, int $newOperationId): int
    {
        $stmt = $this->sourceDb->prepare("
            SELECT * FROM medias
            WHERE support = 'operations' AND support_rowid = :operation_id
        ");
        $stmt->execute([':operation_id' => $oldOperationId]);
        $medias = $stmt->fetchAll(PDO::FETCH_ASSOC);

        if (empty($medias)) {
            return 0;
        }

        $count = 0;
        foreach ($medias as $media) {
            $stmt = $this->targetDb->prepare("
                INSERT INTO medias (
                    dir0, dir1, dir2, support, support_rowid,
                    fichier, type_fichier, description, position,
                    hauteur, largeur, niveaugris,
                    created_at, fk_user_creat, updated_at, fk_user_modif
                ) VALUES (
                    :dir0, :dir1, :dir2, :support, :support_rowid,
                    :fichier, :type_fichier, :description, :position,
                    :hauteur, :largeur, :niveaugris,
                    :created_at, :fk_user_creat, :updated_at, :fk_user_modif
                )
            ");

            $stmt->execute([
                ':dir0' => $media['dir0'],
                ':dir1' => $media['dir1'],
                ':dir2' => $media['dir2'],
                ':support' => $media['support'],
                ':support_rowid' => $newOperationId,
                ':fichier' => $media['fichier'],
                ':type_fichier' => $media['type_fichier'],
                ':description' => $media['description'],
                ':position' => $media['position'],
                ':hauteur' => $media['hauteur'],
                ':largeur' => $media['largeur'],
                ':niveaugris' => $media['niveaugris'],
                ':created_at' => $media['date_creat'],
                ':fk_user_creat' => $media['fk_user_creat'],
                ':updated_at' => $media['date_modif'],
                ':fk_user_modif' => $media['fk_user_modif']
            ]);

            $count++;
        }

        $this->logger->success("✓ {$count} média(s) migré(s)");

        return $count;
    }
}
@@ -1,256 +0,0 @@
<?php

/**
 * Migration of passages (ope_pass) and history entries (ope_pass_histo)
 *
 * Handles passage migration with verification of the
 * (operation, user, sector) triple and migration of the associated history
 */
class PassageMigrator
{
    private PDO $sourceDb;
    private PDO $targetDb;
    private MigrationLogger $logger;

    /**
     * Constructor
     *
     * @param PDO $sourceDb Source connection
     * @param PDO $targetDb Target connection
     * @param MigrationLogger $logger Logger
     */
    public function __construct(PDO $sourceDb, PDO $targetDb, MigrationLogger $logger)
    {
        $this->sourceDb = $sourceDb;
        $this->targetDb = $targetDb;
        $this->logger = $logger;
    }

    /**
     * Migrates the passages of a sector within an operation
     *
     * @param int $oldOperationId Old operation ID
     * @param int $newOperationId New operation ID
     * @param int $oldSectorId Old sector ID
     * @param int $newOpeSectorId New ope_sectors ID
     * @param array $userMapping Mapping oldUserId => newOpeUserId
     * @return int Number of passages migrated
     */
    public function migratePassages(
        int $oldOperationId,
        int $newOperationId,
        int $oldSectorId,
        int $newOpeSectorId,
        array $userMapping
    ): int {
        $stmt = $this->sourceDb->prepare("
            SELECT * FROM ope_pass
            WHERE fk_operation = :operation_id
              AND fk_sector = :sector_id
        ");
        $stmt->execute([
            ':operation_id' => $oldOperationId,
            ':sector_id' => $oldSectorId
        ]);
        $passages = $stmt->fetchAll(PDO::FETCH_ASSOC);

        if (empty($passages)) {
            return 0;
        }

        $count = 0;
        foreach ($passages as $passage) {
            // Check that the user has been migrated
            if (!isset($userMapping[$passage['fk_user']])) {
                $this->logger->warning("  ⚠ Passage {$passage['rowid']}: User {$passage['fk_user']} non trouvé dans mapping");
                continue;
            }

            // Get the ope_users ID from the mapping
            $newOpeUserId = $userMapping[$passage['fk_user']];

            // Check that the (operation, user, sector) triple exists in ope_users_sectors
            if (!$this->verifyUserSectorAssociation($newOperationId, $newOpeUserId, $newOpeSectorId)) {
                $this->logger->warning("  ⚠ Passage {$passage['rowid']}: Trio (op={$newOperationId}, user={$newOpeUserId}, sector={$newOpeSectorId}) inexistant");
                continue;
            }

            // Insert the passage with the ope_users ID
            $newPassId = $this->insertPassage($passage, $newOperationId, $newOpeSectorId, $newOpeUserId);

            if ($newPassId) {
                // Migrate the passage's history
                $this->migratePassageHisto($passage['rowid'], $newPassId, $userMapping);
                $count++;
            }
        }

        if ($count > 0) {
            $this->logger->success("  ✓ {$count} passage(s) migré(s)");
        }

        return $count;
    }

    /**
     * Checks that a user-sector association exists in ope_users_sectors
     *
     * @param int $operationId Operation ID
     * @param int $userId ope_users ID (from mapping)
     * @param int $sectorId ope_sectors ID
     * @return bool True if the association exists
     */
    private function verifyUserSectorAssociation(int $operationId, int $userId, int $sectorId): bool
    {
        $stmt = $this->targetDb->prepare("
            SELECT COUNT(*) FROM ope_users_sectors
            WHERE fk_operation = :operation_id
              AND fk_user = :user_id
              AND fk_sector = :sector_id
        ");
        $stmt->execute([
            ':operation_id' => $operationId,
            ':user_id' => $userId,
            ':sector_id' => $sectorId
        ]);

        return $stmt->fetchColumn() > 0;
    }

    /**
     * Inserts a passage into the new database
     *
     * @param array $passage Passage data
     * @param int $newOperationId New operation ID
     * @param int $newOpeSectorId New sector ID
     * @param int $userId ope_users ID (from mapping)
     * @return int|null New passage ID, or null on error
     */
    private function insertPassage(
        array $passage,
        int $newOperationId,
        int $newOpeSectorId,
        int $userId
    ): ?int {
        try {
            $stmt = $this->targetDb->prepare("
                INSERT INTO ope_pass (
                    fk_operation, fk_sector, fk_user, fk_adresse,
                    passed_at, fk_type, numero, rue, rue_bis, ville,
                    fk_habitat, appt, niveau, residence,
                    gps_lat, gps_lng, encrypted_name, montant, fk_type_reglement,
                    remarque, nom_recu, encrypted_email, email_erreur, chk_email_sent,
                    encrypted_phone, docremis, date_repasser, nb_passages,
                    chk_gps_maj, chk_map_create, chk_mobile, chk_synchro,
                    chk_api_adresse, chk_maj_adresse, anomalie,
                    created_at, fk_user_creat, updated_at, fk_user_modif, chk_active
                ) VALUES (
                    :fk_operation, :fk_sector, :fk_user, :fk_adresse,
                    :passed_at, :fk_type, :numero, :rue, :rue_bis, :ville,
                    :fk_habitat, :appt, :niveau, :residence,
                    :gps_lat, :gps_lng, :encrypted_name, :montant, :fk_type_reglement,
                    :remarque, :nom_recu, :encrypted_email, :email_erreur, :chk_email_sent,
                    :encrypted_phone, :docremis, :date_repasser, :nb_passages,
                    :chk_gps_maj, :chk_map_create, :chk_mobile, :chk_synchro,
                    :chk_api_adresse, :chk_maj_adresse, :anomalie,
                    :created_at, :fk_user_creat, :updated_at, :fk_user_modif, :chk_active
                )
            ");

            // Encrypt sensitive data
            require_once dirname(__DIR__, 4) . '/src/Services/ApiService.php';

            $stmt->execute([
                ':fk_operation' => $newOperationId,
                ':fk_sector' => $newOpeSectorId,
                ':fk_user' => $userId, // ope_users ID (from mapping)
                ':fk_adresse' => $passage['fk_adresse'],
                ':passed_at' => $passage['date_eve'],
                ':fk_type' => $passage['fk_type'],
                ':numero' => $passage['numero'],
                ':rue' => $passage['rue'],
                ':rue_bis' => $passage['rue_bis'],
                ':ville' => $passage['ville'],
                ':fk_habitat' => $passage['fk_habitat'],
                ':appt' => $passage['appt'],
                ':niveau' => $passage['niveau'],
                ':residence' => $passage['lieudit'] ?? null,
                ':gps_lat' => $passage['gps_lat'],
                ':gps_lng' => $passage['gps_lng'],
                ':encrypted_name' => $passage['libelle'] ? ApiService::encryptData($passage['libelle']) : '', // Encrypt with a random IV
                ':montant' => $passage['montant'],
                ':fk_type_reglement' => (!empty($passage['fk_type_reglement']) && $passage['fk_type_reglement'] > 0) ? $passage['fk_type_reglement'] : 4,
                ':remarque' => $passage['remarque'],
                ':nom_recu' => $passage['recu'] ?? null,
                ':encrypted_email' => $passage['email'] ? ApiService::encryptSearchableData($passage['email']) : null,
                ':email_erreur' => $passage['email_erreur'],
                ':chk_email_sent' => $passage['chk_email_sent'],
                ':encrypted_phone' => $passage['phone'] ? ApiService::encryptData($passage['phone']) : '',
                ':docremis' => $passage['docremis'],
                ':date_repasser' => $passage['date_repasser'],
                ':nb_passages' => ($passage['fk_type'] == 2) ? 0 : $passage['nb_passages'],
                ':chk_gps_maj' => $passage['chk_gps_maj'],
                ':chk_map_create' => $passage['chk_map_create'],
                ':chk_mobile' => $passage['chk_mobile'],
                ':chk_synchro' => $passage['chk_synchro'],
                ':chk_api_adresse' => $passage['chk_api_adresse'],
                ':chk_maj_adresse' => $passage['chk_maj_adresse'],
                ':anomalie' => $passage['anomalie'],
                ':created_at' => $passage['date_creat'],
                ':fk_user_creat' => $passage['fk_user_creat'] ?? 0,
                ':updated_at' => $passage['date_modif'],
                ':fk_user_modif' => $passage['fk_user_modif'] ?? 0,
                ':chk_active' => $passage['active']
            ]);

            return (int)$this->targetDb->lastInsertId();

        } catch (Exception $e) {
            $this->logger->error("  ❌ Erreur insertion passage {$passage['rowid']}: " . $e->getMessage());
            return null;
        }
    }

    /**
     * Migrates a passage's history
     *
     * @param int $oldPassId Old passage ID
     * @param int $newPassId New passage ID
     * @param array $userMapping Unused (kept for compatibility)
     * @return int Number of history entries migrated
     */
    public function migratePassageHisto(int $oldPassId, int $newPassId, array $userMapping): int
    {
        $stmt = $this->sourceDb->prepare("
            SELECT * FROM ope_pass_histo WHERE fk_pass = :pass_id
        ");
        $stmt->execute([':pass_id' => $oldPassId]);
        $histos = $stmt->fetchAll(PDO::FETCH_ASSOC);

        if (empty($histos)) {
            return 0;
        }

        $count = 0;
        foreach ($histos as $histo) {
            $stmt = $this->targetDb->prepare("
                INSERT INTO ope_pass_histo (
                    fk_pass, date_histo, sujet, remarque
                ) VALUES (
                    :fk_pass, :date_histo, :sujet, :remarque
                )
            ");

            $stmt->execute([
                ':fk_pass' => $newPassId,
                ':date_histo' => $histo['date_histo'],
                ':sujet' => $histo['sujet'],
                ':remarque' => $histo['remarque']
            ]);

            $count++;
        }

        return $count;
    }
}
@@ -1,289 +0,0 @@
<?php

/**
 * Migration of sectors (ope_sectors) and related data
 *
 * Handles sector migration with their addresses, user-sector
 * associations, and passages. Uses PassageMigrator for the passages.
 */
class SectorMigrator
{
    private PDO $sourceDb;
    private PDO $targetDb;
    private MigrationLogger $logger;
    private PassageMigrator $passageMigrator;
    private array $sectorMapping = [];

    /**
     * Constructor
     *
     * @param PDO $sourceDb Source connection
     * @param PDO $targetDb Target connection
     * @param MigrationLogger $logger Logger
     * @param PassageMigrator $passageMigrator Passage migrator
     */
    public function __construct(
        PDO $sourceDb,
        PDO $targetDb,
        MigrationLogger $logger,
        PassageMigrator $passageMigrator
    ) {
        $this->sourceDb = $sourceDb;
        $this->targetDb = $targetDb;
        $this->logger = $logger;
        $this->passageMigrator = $passageMigrator;
    }

    /**
     * Migrates a sector in the context of an operation
     *
     * @param int $oldOperationId Old operation ID
     * @param int $newOperationId New operation ID
     * @param int $oldSectorId Old sector ID
     * @param array $userMapping Mapping oldUserId => newOpeUserId
     * @return array|null ['id' => int, 'name' => string, 'users' => int, 'passages' => int], or null on error
     */
    public function migrateSector(
        int $oldOperationId,
        int $newOperationId,
        int $oldSectorId,
        array $userMapping
    ): ?array {
        $this->logger->info("  📍 Migration secteur ID: {$oldSectorId}");

        try {
            // 1. Fetch the source sector
            $stmt = $this->sourceDb->prepare("
                SELECT * FROM sectors WHERE rowid = :id
            ");
            $stmt->execute([':id' => $oldSectorId]);
            $sector = $stmt->fetch(PDO::FETCH_ASSOC);

            if (!$sector) {
                $this->logger->warning("  Secteur {$oldSectorId} non trouvé");
                return null;
            }

            // 2. Create in ope_sectors
            $newOpeSectorId = $this->createOpeSector($sector, $newOperationId);

            if (!$newOpeSectorId) {
                return null;
            }

            // 3. Map "operationId_sectorId" → newOpeSectorId
            $mappingKey = "{$oldOperationId}_{$oldSectorId}";
            $this->sectorMapping[$mappingKey] = $newOpeSectorId;

            $this->logger->success("  ✓ Secteur créé avec ID: {$newOpeSectorId}");

            // 4. Migrate sectors_adresses
            $this->migrateSectorAddresses($oldSectorId, $newOpeSectorId);

            // 5. Migrate ope_users_sectors
            $usersCount = $this->migrateUsersSectors($oldOperationId, $newOperationId, $oldSectorId, $newOpeSectorId, $userMapping);

            // 6. Migrate ope_pass
            $passagesCount = $this->passageMigrator->migratePassages(
                $oldOperationId,
                $newOperationId,
                $oldSectorId,
                $newOpeSectorId,
                $userMapping
            );

            return [
                'id' => $newOpeSectorId,
                'name' => $sector['libelle'],
                'users' => $usersCount,
                'passages' => $passagesCount
            ];

        } catch (Exception $e) {
            $this->logger->error("  ❌ Erreur migration secteur {$oldSectorId}: " . $e->getMessage());
            return null;
        }
    }

    /**
     * Creates a sector in ope_sectors
     *
     * @param array $sector Sector data
     * @param int $newOperationId New operation ID
     * @return int|null New sector ID, or null on error
     */
    private function createOpeSector(array $sector, int $newOperationId): ?int
    {
        try {
            $stmt = $this->targetDb->prepare("
                INSERT INTO ope_sectors (
                    fk_operation, libelle, sector, color,
                    created_at, fk_user_creat, updated_at, fk_user_modif, chk_active
                ) VALUES (
                    :fk_operation, :libelle, :sector, :color,
                    :created_at, :fk_user_creat, :updated_at, :fk_user_modif, :chk_active
                )
            ");

            $stmt->execute([
                ':fk_operation' => $newOperationId,
                ':libelle' => $sector['libelle'],
                ':sector' => $sector['sector'],
                ':color' => $sector['color'],
                ':created_at' => $sector['date_creat'],
                ':fk_user_creat' => $sector['fk_user_creat'] ?? 0,
                ':updated_at' => $sector['date_modif'],
                ':fk_user_modif' => $sector['fk_user_modif'] ?? 0,
                ':chk_active' => $sector['active']
            ]);

            return (int)$this->targetDb->lastInsertId();

        } catch (Exception $e) {
            $this->logger->error("  ❌ Erreur création secteur: " . $e->getMessage());
            return null;
        }
    }

    /**
     * Migrates a sector's addresses
     *
     * @param int $oldSectorId Old sector ID
     * @param int $newOpeSectorId New ope_sectors ID
     * @return int Number of addresses migrated
     */
    private function migrateSectorAddresses(int $oldSectorId, int $newOpeSectorId): int
    {
        $stmt = $this->sourceDb->prepare("
            SELECT * FROM sectors_adresses WHERE fk_sector = :sector_id
        ");
        $stmt->execute([':sector_id' => $oldSectorId]);
        $addresses = $stmt->fetchAll(PDO::FETCH_ASSOC);

        if (empty($addresses)) {
            return 0;
        }

        $count = 0;
        foreach ($addresses as $address) {
            $stmt = $this->targetDb->prepare("
                INSERT INTO sectors_adresses (
                    fk_adresse, fk_sector, numero, rue_bis, rue, cp, ville,
                    gps_lat, gps_lng
                ) VALUES (
                    :fk_adresse, :fk_sector, :numero, :rue_bis, :rue, :cp, :ville,
                    :gps_lat, :gps_lng
                )
            ");

            $stmt->execute([
                ':fk_adresse' => $address['fk_adresse'], // Keep the value as-is
                ':fk_sector' => $newOpeSectorId,
                ':numero' => $address['numero'],
                ':rue_bis' => $address['rue_bis'],
                ':rue' => $address['rue'],
                ':cp' => $address['cp'],
                ':ville' => $address['ville'],
                ':gps_lat' => $address['gps_lat'],
                ':gps_lng' => $address['gps_lng']
            ]);

            $count++;
        }

        $this->logger->success("  ✓ {$count} adresse(s) migrée(s)");

        return $count;
    }

    /**
     * Migrates user-sector associations
     *
     * @param int $oldOperationId Old operation ID
     * @param int $newOperationId New operation ID
     * @param int $oldSectorId Old sector ID
     * @param int $newOpeSectorId New ope_sectors ID
     * @param array $userMapping Mapping oldUserId => newOpeUserId
     * @return int Number of associations migrated
     */
    private function migrateUsersSectors(
        int $oldOperationId,
        int $newOperationId,
        int $oldSectorId,
        int $newOpeSectorId,
        array $userMapping
    ): int {
        $stmt = $this->sourceDb->prepare("
            SELECT * FROM ope_users_sectors
            WHERE fk_operation = :operation_id
              AND fk_sector = :sector_id
              AND active = 1
        ");
        $stmt->execute([
            ':operation_id' => $oldOperationId,
            ':sector_id' => $oldSectorId
        ]);
        $usersSectors = $stmt->fetchAll(PDO::FETCH_ASSOC);

        if (empty($usersSectors)) {
            return 0;
        }

        $count = 0;
        foreach ($usersSectors as $us) {
            // Check that the user exists in the mapping
            // (the mapping is only used to verify the user has been migrated)
            if (!isset($userMapping[$us['fk_user']])) {
                $this->logger->warning("  ⚠ User {$us['fk_user']} non trouvé dans mapping");
                continue;
            }

            $stmt = $this->targetDb->prepare("
                INSERT INTO ope_users_sectors (
                    fk_operation, fk_user, fk_sector,
                    created_at, fk_user_creat, updated_at, fk_user_modif, chk_active
                ) VALUES (
                    :fk_operation, :fk_user, :fk_sector,
                    :created_at, :fk_user_creat, :updated_at, :fk_user_modif, :chk_active
|
|
||||||
)
|
|
||||||
");
|
|
||||||
|
|
||||||
$stmt->execute([
|
|
||||||
':fk_operation' => $newOperationId,
|
|
||||||
':fk_user' => $userMapping[$us['fk_user']], // ID de ope_users (mapping)
|
|
||||||
':fk_sector' => $newOpeSectorId,
|
|
||||||
':created_at' => date('Y-m-d H:i:s'),
|
|
||||||
':fk_user_creat' => 0,
|
|
||||||
':updated_at' => null,
|
|
||||||
':fk_user_modif' => null,
|
|
||||||
':chk_active' => $us['active']
|
|
||||||
]);
|
|
||||||
|
|
||||||
$count++;
|
|
||||||
}
|
|
||||||
|
|
||||||
$this->logger->success(" ✓ {$count} association(s) user-secteur migrée(s)");
|
|
||||||
|
|
||||||
return $count;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Retourne le mapping des secteurs
|
|
||||||
*
|
|
||||||
* @return array "operationId_sectorId" => newOpeSectorId
|
|
||||||
*/
|
|
||||||
public function getSectorMapping(): array
|
|
||||||
{
|
|
||||||
return $this->sectorMapping;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Définit le mapping des secteurs (utile pour réutilisation)
|
|
||||||
*
|
|
||||||
* @param array $mapping "operationId_sectorId" => newOpeSectorId
|
|
||||||
*/
|
|
||||||
public function setSectorMapping(array $mapping): void
|
|
||||||
{
|
|
||||||
$this->sectorMapping = $mapping;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -1,163 +0,0 @@
<?php

/**
 * Migration of operation users (ope_users)
 *
 * Handles the creation of per-operation users and the mapping
 * oldUserId (users.rowid) → newOpeUserId (ope_users.id)
 */
class UserMigrator
{
    private PDO $sourceDb;
    private PDO $targetDb;
    private MigrationLogger $logger;
    private array $userMapping = [];

    /**
     * Constructor
     *
     * @param PDO             $sourceDb Source connection
     * @param PDO             $targetDb Target connection
     * @param MigrationLogger $logger   Logger
     */
    public function __construct(PDO $sourceDb, PDO $targetDb, MigrationLogger $logger)
    {
        $this->sourceDb = $sourceDb;
        $this->targetDb = $targetDb;
        $this->logger = $logger;
    }

    /**
     * Migrates the users of an operation
     * - Active operation: ALL active users of the entity
     * - Inactive operation: only the users present in ope_users_sectors
     *
     * @param int  $oldOperationId    Old operation ID
     * @param int  $newOperationId    New operation ID
     * @param int  $entityId          Entity ID
     * @param bool $isActiveOperation True if the operation is active
     * @return array ['mapping' => array, 'count' => int]
     */
    public function migrateOperationUsers(
        int $oldOperationId,
        int $newOperationId,
        int $entityId,
        bool $isActiveOperation
    ): array {
        $this->logger->info("👥 Migration des utilisateurs de l'opération...");

        // Reset the mapping for this operation
        $this->userMapping = [];

        // Fetch the users depending on the operation type
        if ($isActiveOperation) {
            // Active operation: ALL active users of the entity
            $this->logger->info("  ℹ Opération ACTIVE : migration de tous les users actifs de l'entité");
            $stmt = $this->sourceDb->prepare("
                SELECT rowid
                FROM users
                WHERE fk_entite = :entity_id AND active = 1
            ");
            $stmt->execute([':entity_id' => $entityId]);
            $userIds = $stmt->fetchAll(PDO::FETCH_COLUMN);
        } else {
            // Inactive operations: only the users present in ope_users_sectors
            $this->logger->info("  ℹ Opération INACTIVE : migration des users affectés aux secteurs");
            $stmt = $this->sourceDb->prepare("
                SELECT DISTINCT fk_user
                FROM ope_users_sectors
                WHERE fk_operation = :operation_id AND active = 1
            ");
            $stmt->execute([':operation_id' => $oldOperationId]);
            $userIds = $stmt->fetchAll(PDO::FETCH_COLUMN);
        }

        if (empty($userIds)) {
            $this->logger->warning("Aucun utilisateur trouvé pour l'opération {$oldOperationId}");
            return ['mapping' => [], 'count' => 0];
        }

        $count = 0;
        foreach ($userIds as $oldUserId) {
            // Fetch the user record from the users table
            $stmt = $this->sourceDb->prepare("
                SELECT * FROM users WHERE rowid = :id AND active = 1
            ");
            $stmt->execute([':id' => $oldUserId]);
            $user = $stmt->fetch(PDO::FETCH_ASSOC);

            if (!$user) {
                $this->logger->warning("  ⚠ Utilisateur {$oldUserId} non trouvé ou inactif");
                continue;
            }

            // Create the record in ope_users of the new database
            $stmt = $this->targetDb->prepare("
                INSERT INTO ope_users (
                    fk_operation, fk_user, fk_role,
                    first_name, encrypted_name, sect_name,
                    created_at, fk_user_creat, updated_at, fk_user_modif, chk_active
                ) VALUES (
                    :fk_operation, :fk_user, :fk_role,
                    :first_name, :encrypted_name, :sect_name,
                    :created_at, :fk_user_creat, :updated_at, :fk_user_modif, :chk_active
                )
            ");

            $stmt->execute([
                ':fk_operation'   => $newOperationId,
                ':fk_user'        => $oldUserId, // Reference to users.id
                ':fk_role'        => $user['fk_role'],
                ':first_name'     => $user['prenom'],
                ':encrypted_name' => ApiService::encryptData($user['libelle']), // Encrypt the name with a random IV
                ':sect_name'      => $user['nom_tournee'],
                ':created_at'     => $user['date_creat'],
                ':fk_user_creat'  => $user['fk_user_creat'],
                ':updated_at'     => $user['date_modif'],
                ':fk_user_modif'  => $user['fk_user_modif'],
                ':chk_active'     => $user['active']
            ]);

            $newOpeUserId = (int)$this->targetDb->lastInsertId();

            // Map oldUserId → newOpeUserId
            $this->userMapping[$oldUserId] = $newOpeUserId;
            $count++;
        }

        $this->logger->success("  ✓ {$count} utilisateur(s) migré(s)");

        return ['mapping' => $this->userMapping, 'count' => $count];
    }

    /**
     * Returns the user mapping
     *
     * @return array oldUserId => newOpeUserId
     */
    public function getUserMapping(): array
    {
        return $this->userMapping;
    }

    /**
     * Sets the user mapping (useful for reuse)
     *
     * @param array $mapping oldUserId => newOpeUserId
     */
    public function setUserMapping(array $mapping): void
    {
        $this->userMapping = $mapping;
    }

    /**
     * Retrieves the new ope_users ID from the mapping
     *
     * @param int $oldUserId Old user ID
     * @return int|null New ope_users ID, or null if not found
     */
    public function getMappedUserId(int $oldUserId): ?int
    {
        return $this->userMapping[$oldUserId] ?? null;
    }
}
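For orientation, a minimal sketch of how the mapping produced by `UserMigrator` above is meant to be consumed (illustrative only; the connections, logger, and the IDs 12, 34, 2 and 56 are hypothetical placeholders, not values from this diff):

```php
// Illustrative sketch: assumes the classes from this diff are loaded and
// that $sourceDb / $targetDb are valid PDO connections, $logger a MigrationLogger.
$userMigrator = new UserMigrator($sourceDb, $targetDb, $logger);

// Returns ['mapping' => [oldUserId => newOpeUserId, ...], 'count' => int]
$result = $userMigrator->migrateOperationUsers(12, 34, 2, true);

// The mapping is then handed to SectorMigrator::migrateUsersSectors(), which
// resolves old users.rowid values to the newly created ope_users IDs.
$newOpeUserId = $userMigrator->getMappedUserId(56); // null if user 56 was not migrated
```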
@@ -1,471 +0,0 @@
#!/usr/bin/env php
<?php

/**
 * Migration script v2 - Modular architecture
 *
 * Uses the specialized migrators for a hierarchical, per-operation migration.
 * Fixed source: geosector (synchronized twice a day by PM7 from nx4)
 * Target: dva_geo (development), rca_geo (staging) or pra_geo (production)
 *
 * Usage:
 *   Migrate a single entity:
 *     php migrate_from_backup.php --mode=entity --entity-id=2
 *
 *   Global migration (all entities):
 *     php migrate_from_backup.php --mode=global
 *
 *   With an explicit environment:
 *     php migrate_from_backup.php --env=dva --mode=entity --entity-id=2
 */

// Include ApiService for encryption
require_once dirname(__DIR__, 3) . '/src/Services/ApiService.php';

// Include the v2 classes
require_once __DIR__ . '/lib/DatabaseConfig.php';
require_once __DIR__ . '/lib/MigrationLogger.php';
require_once __DIR__ . '/lib/DatabaseConnection.php';
require_once __DIR__ . '/lib/UserMigrator.php';
require_once __DIR__ . '/lib/PassageMigrator.php';
require_once __DIR__ . '/lib/SectorMigrator.php';
require_once __DIR__ . '/lib/OperationMigrator.php';

// PHP configuration for large migrations
ini_set('memory_limit', '512M');
ini_set('max_execution_time', '3600'); // 1 hour max

class DataMigration
{
    private PDO $sourceDb;
    private PDO $targetDb;
    private MigrationLogger $logger;
    private DatabaseConfig $config;
    private OperationMigrator $operationMigrator;

    // Options
    private string $mode;
    private ?int $entityId;
    private bool $deleteBefore;

    // Statistics
    private array $migrationStats = [];

    public function __construct(string $env, string $mode = 'global', ?int $entityId = null, ?string $logFile = null, bool $deleteBefore = true)
    {
        // Initialize config and logger
        $this->config = new DatabaseConfig($env);
        $this->mode = $mode;
        $this->entityId = $entityId;
        $this->deleteBefore = $deleteBefore;

        // Derive the log file name from the mode if none was given
        if (!$logFile) {
            $logDir = dirname(__DIR__, 2) . '/logs';
            $timestamp = date('Ymd_His');

            if ($mode === 'entity' && $entityId) {
                $logFile = "{$logDir}/migration_entite_{$entityId}_{$timestamp}.log";
            } else {
                $logFile = "{$logDir}/migration_global_{$timestamp}.log";
            }
        }

        $this->logger = new MigrationLogger($logFile);

        // Log header
        $this->logHeader();

        // Connections
        $dbConnection = new DatabaseConnection($this->config, $this->logger);
        $dbConnection->connect();
        $this->sourceDb = $dbConnection->getSourceDb();
        $this->targetDb = $dbConnection->getTargetDb();

        // Initialize the migrators
        $this->initializeMigrators();
    }

    private function initializeMigrators(): void
    {
        // Create the migrators in dependency order
        $passageMigrator = new PassageMigrator($this->sourceDb, $this->targetDb, $this->logger);
        $sectorMigrator = new SectorMigrator($this->sourceDb, $this->targetDb, $this->logger, $passageMigrator);
        $userMigrator = new UserMigrator($this->sourceDb, $this->targetDb, $this->logger);

        $this->operationMigrator = new OperationMigrator(
            $this->sourceDb,
            $this->targetDb,
            $this->logger,
            $userMigrator,
            $sectorMigrator
        );
    }

    public function run(): void
    {
        if ($this->mode === 'entity') {
            if (!$this->entityId) {
                throw new Exception("entity-id requis en mode entity");
            }
            $this->migrateEntity($this->entityId);
        } else {
            $this->migrateAllEntities();
        }

        // Print the summary
        if (!empty($this->migrationStats)) {
            $this->logger->logMigrationSummary($this->migrationStats);
        }

        $this->logger->separator();
        $this->logger->success("🎉 Migration terminée !");
        $this->logger->info("📄 Log: " . $this->logger->getLogFile());
    }

    private function migrateEntity(int $entityId): void
    {
        $this->logger->separator();
        $this->logger->info("🏢 Migration de l'entité ID: {$entityId}");

        // Delete existing data if requested
        if ($this->deleteBefore) {
            $this->deleteEntityData($entityId);
        }

        // Migrate the entity record itself
        $this->migrateEntityRecord($entityId);

        // Migrate the entity's users (central users table)
        $this->migrateEntityUsers($entityId);

        // Fetch the entity name for the stats
        $stmt = $this->sourceDb->prepare("SELECT libelle FROM users_entites WHERE rowid = :id");
        $stmt->execute([':id' => $entityId]);
        $entityName = $stmt->fetchColumn();

        // Fetch and migrate the operations
        $operationIds = $this->operationMigrator->getOperationsToMigrate($entityId);

        $operations = [];
        foreach ($operationIds as $oldOperationId) {
            $operationStats = $this->operationMigrator->migrateOperation($oldOperationId);
            if ($operationStats) {
                $operations[] = $operationStats;
            }
        }

        // Store the stats for this entity
        $this->migrationStats = [
            'entity' => [
                'id' => $entityId,
                'name' => $entityName ?: "Entité #{$entityId}"
            ],
            'operations' => $operations
        ];
    }

    private function migrateAllEntities(): void
    {
        // Fetch all active entities
        $stmt = $this->sourceDb->query("SELECT rowid FROM users_entites WHERE active = 1 ORDER BY rowid");
        $entities = $stmt->fetchAll(PDO::FETCH_COLUMN);

        $this->logger->info("📊 " . count($entities) . " entité(s) à migrer");

        $allOperations = [];
        foreach ($entities as $entityId) {
            // Save the current stats before migrating
            $previousStats = $this->migrationStats;

            $this->migrateEntity($entityId);

            // Aggregate the operations of all entities
            if (!empty($this->migrationStats['operations'])) {
                $allOperations = array_merge($allOperations, $this->migrationStats['operations']);
            }
        }

        // Store the global stats
        $this->migrationStats = [
            'operations' => $allOperations
        ];
    }

    private function deleteEntityData(int $entityId): void
    {
        $this->logger->separator();
        $this->logger->warning("🗑️ Suppression des données de l'entité {$entityId}...");

        // Reverse order of the FK constraints
        $tables = [
            'medias' => "fk_entite = {$entityId} OR fk_operation IN (SELECT id FROM operations WHERE fk_entite = {$entityId})",
            'ope_pass_histo' => "fk_pass IN (SELECT id FROM ope_pass WHERE fk_operation IN (SELECT id FROM operations WHERE fk_entite = {$entityId}))",
            'ope_pass' => "fk_operation IN (SELECT id FROM operations WHERE fk_entite = {$entityId})",
            'ope_users_sectors' => "fk_operation IN (SELECT id FROM operations WHERE fk_entite = {$entityId})",
            'ope_users' => "fk_operation IN (SELECT id FROM operations WHERE fk_entite = {$entityId})",
            'sectors_adresses' => "fk_sector IN (SELECT id FROM ope_sectors WHERE fk_operation IN (SELECT id FROM operations WHERE fk_entite = {$entityId}))",
            'ope_sectors' => "fk_operation IN (SELECT id FROM operations WHERE fk_entite = {$entityId})",
            'operations' => "fk_entite = {$entityId}",
            'users' => "fk_entite = {$entityId}"
        ];

        foreach ($tables as $table => $condition) {
            $stmt = $this->targetDb->query("DELETE FROM {$table} WHERE {$condition}");
            $count = $stmt->rowCount();
            if ($count > 0) {
                $this->logger->info("  ✓ {$table}: {$count} ligne(s) supprimée(s)");
            }
        }

        $this->logger->success("✓ Suppression terminée");
    }

    private function migrateEntityRecord(int $entityId): void
    {
        // Check whether it already exists
        $stmt = $this->targetDb->prepare("SELECT COUNT(*) FROM entites WHERE id = :id");
        $stmt->execute([':id' => $entityId]);

        if ($stmt->fetchColumn() > 0) {
            $this->logger->info("Entité {$entityId} existe déjà, skip");
            return;
        }

        // Fetch from the source
        $stmt = $this->sourceDb->prepare("SELECT * FROM users_entites WHERE rowid = :id");
        $stmt->execute([':id' => $entityId]);
        $entity = $stmt->fetch(PDO::FETCH_ASSOC);

        if (!$entity) {
            throw new Exception("Entité {$entityId} non trouvée");
        }

        // Insert into the target (geo_app schema)
        $stmt = $this->targetDb->prepare("
            INSERT INTO entites (
                id, encrypted_name, adresse1, adresse2, code_postal, ville,
                fk_region, fk_type, encrypted_phone, encrypted_mobile, encrypted_email,
                gps_lat, gps_lng, chk_stripe, encrypted_stripe_id, encrypted_iban, encrypted_bic,
                chk_demo, chk_mdp_manuel, chk_username_manuel, chk_user_delete_pass,
                chk_copie_mail_recu, chk_accept_sms, chk_lot_actif,
                created_at, fk_user_creat, updated_at, fk_user_modif, chk_active
            ) VALUES (
                :id, :encrypted_name, :adresse1, :adresse2, :code_postal, :ville,
                :fk_region, :fk_type, :encrypted_phone, :encrypted_mobile, :encrypted_email,
                :gps_lat, :gps_lng, :chk_stripe, :encrypted_stripe_id, :encrypted_iban, :encrypted_bic,
                :chk_demo, :chk_mdp_manuel, :chk_username_manuel, :chk_user_delete_pass,
                :chk_copie_mail_recu, :chk_accept_sms, :chk_lot_actif,
                :created_at, :fk_user_creat, :updated_at, :fk_user_modif, :chk_active
            )
        ");

        $stmt->execute([
            ':id' => $entityId,
            ':encrypted_name' => $entity['libelle'] ? ApiService::encryptData($entity['libelle']) : '',
            ':adresse1' => $entity['adresse1'] ?? '',
            ':adresse2' => $entity['adresse2'] ?? '',
            ':code_postal' => $entity['cp'] ?? '',
            ':ville' => $entity['ville'] ?? '',
            ':fk_region' => $entity['fk_region'],
            ':fk_type' => $entity['fk_type'] ?? 1,
            ':encrypted_phone' => $entity['tel1'] ? ApiService::encryptData($entity['tel1']) : '',
            ':encrypted_mobile' => $entity['tel2'] ? ApiService::encryptData($entity['tel2']) : '',
            ':encrypted_email' => $entity['email'] ? ApiService::encryptSearchableData($entity['email']) : '',
            ':gps_lat' => $entity['gps_lat'] ?? '',
            ':gps_lng' => $entity['gps_lng'] ?? '',
            ':chk_stripe' => 0,
            ':encrypted_stripe_id' => '',
            ':encrypted_iban' => $entity['iban'] ? ApiService::encryptData($entity['iban']) : '',
            ':encrypted_bic' => $entity['bic'] ? ApiService::encryptData($entity['bic']) : '',
            ':chk_demo' => $entity['demo'] ?? 1,
            ':chk_mdp_manuel' => $entity['chk_mdp_manuel'] ?? 0,
            ':chk_username_manuel' => 0,
            ':chk_user_delete_pass' => 0,
            ':chk_copie_mail_recu' => $entity['chk_copie_mail_recu'] ?? 0,
            ':chk_accept_sms' => $entity['chk_accept_sms'] ?? 0,
            ':chk_lot_actif' => 0,
            ':created_at' => date('Y-m-d H:i:s'),
            ':fk_user_creat' => 0,
            ':updated_at' => $entity['date_modif'],
            ':fk_user_modif' => $entity['fk_user_modif'] ?? 0,
            ':chk_active' => $entity['active'] ?? 1
        ]);

        $this->logger->success("✓ Entité {$entityId} migrée");
    }

    private function migrateEntityUsers(int $entityId): void
    {
        $stmt = $this->sourceDb->prepare("SELECT * FROM users WHERE fk_entite = :entity_id AND active = 1");
        $stmt->execute([':entity_id' => $entityId]);
        $users = $stmt->fetchAll(PDO::FETCH_ASSOC);

        $count = 0;
        foreach ($users as $user) {
            // Check whether the user already exists
            $stmt = $this->targetDb->prepare("SELECT COUNT(*) FROM users WHERE id = :id");
            $stmt->execute([':id' => $user['rowid']]);

            if ($stmt->fetchColumn() > 0) {
                continue; // Skip if already present
            }

            // Insert the user
            $stmt = $this->targetDb->prepare("
                INSERT INTO users (
                    id, fk_entite, fk_role, first_name, encrypted_name,
                    encrypted_user_name, user_pass_hash, encrypted_email, encrypted_phone, encrypted_mobile,
                    created_at, fk_user_creat, updated_at, fk_user_modif, chk_active
                ) VALUES (
                    :id, :fk_entite, :fk_role, :first_name, :encrypted_name,
                    :encrypted_user_name, :user_pass_hash, :encrypted_email, :encrypted_phone, :encrypted_mobile,
                    :created_at, :fk_user_creat, :updated_at, :fk_user_modif, :chk_active
                )
            ");

            $stmt->execute([
                ':id' => $user['rowid'],
                ':fk_entite' => $user['fk_entite'],
                ':fk_role' => $user['fk_role'],
                ':first_name' => $user['prenom'],
                ':encrypted_name' => ApiService::encryptData($user['libelle']), // Encrypt with a random IV
                ':encrypted_user_name' => ApiService::encryptSearchableData($user['username']),
                ':user_pass_hash' => $user['userpswd'], // bcrypt hash of the password
                ':encrypted_email' => $user['email'] ? ApiService::encryptSearchableData($user['email']) : null,
                ':encrypted_phone' => $user['telephone'] ? ApiService::encryptData($user['telephone']) : null,
                ':encrypted_mobile' => $user['mobile'] ? ApiService::encryptData($user['mobile']) : null,
                ':created_at' => $user['date_creat'],
                ':fk_user_creat' => $user['fk_user_creat'],
                ':updated_at' => $user['date_modif'],
                ':fk_user_modif' => $user['fk_user_modif'],
                ':chk_active' => $user['active']
            ]);

            $count++;
        }

        $this->logger->success("✓ {$count} utilisateur(s) de l'entité migré(s)");
    }

    private function logHeader(): void
    {
        $this->logger->separator();
        $this->logger->info("🚀 Migration v2 - Architecture modulaire");
        $this->logger->info("📅 Date: " . date('Y-m-d H:i:s'));
        $this->logger->info("🌍 Environnement: " . $this->config->getEnvName());
        $this->logger->info("🔧 Mode: " . $this->mode);
        if ($this->entityId) {
            $this->logger->info("🏢 Entité: " . $this->entityId);
        }
        $this->logger->info("🗑️ Suppression avant: " . ($this->deleteBefore ? 'OUI' : 'NON'));
        $this->logger->separator();
    }
}

// === CLI ARGUMENT HANDLING ===

function parseArguments(array $argv): array
{
    $options = [
        'env' => DatabaseConfig::autoDetect(),
        'mode' => 'global',
        'entity-id' => null,
        'log' => null,
        'delete-before' => true,
        'help' => false
    ];

    foreach ($argv as $arg) {
        if ($arg === '--help') {
            $options['help'] = true;
        } elseif (preg_match('/^--env=(.+)$/', $arg, $matches)) {
            $options['env'] = $matches[1];
        } elseif (preg_match('/^--mode=(.+)$/', $arg, $matches)) {
            $options['mode'] = $matches[1];
        } elseif (preg_match('/^--entity-id=(\d+)$/', $arg, $matches)) {
            $options['entity-id'] = (int)$matches[1];
        } elseif (preg_match('/^--log=(.+)$/', $arg, $matches)) {
            $options['log'] = $matches[1];
        } elseif ($arg === '--delete-before=false') {
            $options['delete-before'] = false;
        }
    }

    return $options;
}

function showHelp(): void
{
    echo <<<HELP

🚀 Migration v2 - Architecture modulaire

USAGE:
    php migrate_from_backup.php [OPTIONS]

OPTIONS:
    --env=ENV          Environnement: 'dva' (développement), 'rca' (recette) ou 'pra' (production)
                       Par défaut: auto-détection selon hostname

    --mode=MODE        Mode de migration: 'global' ou 'entity'
                       Par défaut: global

    --entity-id=ID     ID de l'entité à migrer (requis si mode=entity)

    --log=PATH         Fichier de log personnalisé
                       Par défaut: logs/migration_YYYYMMDD_HHMMSS.log

    --delete-before    Supprimer les données existantes avant migration
                       Par défaut: true
                       Utiliser --delete-before=false pour désactiver

    --help             Afficher cette aide

EXEMPLES:
    # Migration d'une entité avec suppression (recommandé)
    php migrate_from_backup.php --mode=entity --entity-id=2

    # Migration sans suppression (risque de doublons)
    php migrate_from_backup.php --mode=entity --entity-id=2 --delete-before=false

    # Migration globale de toutes les entités
    php migrate_from_backup.php --mode=global

    # Spécifier l'environnement manuellement (DVA, RCA ou PRA)
    php migrate_from_backup.php --env=dva --mode=entity --entity-id=2

HELP;
}

// === ENTRY POINT ===

try {
    $options = parseArguments($argv);

    if ($options['help']) {
        showHelp();
        exit(0);
    }

    // Validate the environment
    if (!DatabaseConfig::exists($options['env'])) {
        throw new Exception("Invalid environment: {$options['env']}. Use 'dva', 'rca' or 'pra'");
    }

    // Create and run the migration
    $migration = new DataMigration(
        $options['env'],
        $options['mode'],
        $options['entity-id'],
        $options['log'],
        $options['delete-before']
    );

    $migration->run();

} catch (Exception $e) {
    echo "❌ ERREUR: " . $e->getMessage() . "\n";
    exit(1);
}
File diff suppressed because it is too large (Load Diff)

197  api/scripts/migrations/stripe_tables.sql  (Normal file)
@@ -0,0 +1,197 @@
|
|||||||
|
-- =============================================================
|
||||||
|
-- Tables pour l'intégration Stripe Connect + Terminal
|
||||||
|
-- Date: 2025-09-01
|
||||||
|
-- Version: 1.0.0
|
||||||
|
-- Préfixe: stripe_
|
||||||
|
-- =============================================================
|
||||||
|
|
||||||
|
-- Table pour stocker les comptes Stripe Connect des amicales
|
||||||
|
CREATE TABLE IF NOT EXISTS stripe_accounts (
|
||||||
|
id INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT,
|
||||||
|
fk_entite INT(10) UNSIGNED NOT NULL,
|
||||||
|
stripe_account_id VARCHAR(255) UNIQUE,
|
||||||
|
stripe_location_id VARCHAR(255),
|
||||||
|
charges_enabled BOOLEAN DEFAULT FALSE,
|
||||||
|
payouts_enabled BOOLEAN DEFAULT FALSE,
|
||||||
|
onboarding_completed BOOLEAN DEFAULT FALSE,
|
||||||
|
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||||
|
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
|
||||||
|
FOREIGN KEY (fk_entite) REFERENCES entites(id) ON DELETE CASCADE,
|
||||||
|
INDEX idx_fk_entite (fk_entite),
|
||||||
|
INDEX idx_stripe_account_id (stripe_account_id)
|
||||||
|
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
|
||||||
|
|
||||||
|
-- Table storing payment intents
CREATE TABLE IF NOT EXISTS stripe_payment_intents (
    id INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT,
    stripe_payment_intent_id VARCHAR(255) UNIQUE,
    fk_entite INT(10) UNSIGNED NOT NULL,
    fk_user INT(10) UNSIGNED NOT NULL,
    amount INT NOT NULL COMMENT 'Amount in cents',
    currency VARCHAR(3) DEFAULT 'eur',
    status VARCHAR(50),
    application_fee INT COMMENT 'Platform fee in cents',
    metadata JSON,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    FOREIGN KEY (fk_entite) REFERENCES entites(id) ON DELETE CASCADE,
    FOREIGN KEY (fk_user) REFERENCES users(id) ON DELETE CASCADE,
    INDEX idx_fk_entite (fk_entite),
    INDEX idx_fk_user (fk_user),
    INDEX idx_status (status),
    INDEX idx_created_at (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- Table for Terminal readers (virtual Tap to Pay)
CREATE TABLE IF NOT EXISTS stripe_terminal_readers (
    id INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT,
    stripe_reader_id VARCHAR(255) UNIQUE,
    fk_entite INT(10) UNSIGNED NOT NULL,
    label VARCHAR(255),
    location VARCHAR(255),
    status VARCHAR(50),
    device_type VARCHAR(50) COMMENT 'ios_tap_to_pay, android_tap_to_pay',
    device_info JSON COMMENT 'Device info (model, OS, etc.)',
    last_seen_at TIMESTAMP NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    FOREIGN KEY (fk_entite) REFERENCES entites(id) ON DELETE CASCADE,
    INDEX idx_fk_entite (fk_entite),
    INDEX idx_device_type (device_type)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- Table of Android devices certified for Tap to Pay
CREATE TABLE IF NOT EXISTS stripe_android_certified_devices (
    id INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT,
    manufacturer VARCHAR(100),
    model VARCHAR(200),
    model_identifier VARCHAR(200),
    tap_to_pay_certified BOOLEAN DEFAULT FALSE,
    certification_date DATE,
    min_android_version INT,
    country VARCHAR(2) DEFAULT 'FR',
    notes TEXT,
    last_verified TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_manufacturer_model (manufacturer, model),
    INDEX idx_certified (tap_to_pay_certified, country),
    UNIQUE KEY unique_device (manufacturer, model, model_identifier)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- Payment history table (for audit and reconciliation)
CREATE TABLE IF NOT EXISTS stripe_payment_history (
    id INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT,
    fk_payment_intent INT(10) UNSIGNED,
    event_type VARCHAR(50) COMMENT 'created, processing, succeeded, failed, refunded',
    event_data JSON,
    webhook_id VARCHAR(255),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (fk_payment_intent) REFERENCES stripe_payment_intents(id) ON DELETE CASCADE,
    INDEX idx_fk_payment_intent (fk_payment_intent),
    INDEX idx_event_type (event_type),
    INDEX idx_created_at (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- Refunds table
CREATE TABLE IF NOT EXISTS stripe_refunds (
    id INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT,
    stripe_refund_id VARCHAR(255) UNIQUE,
    fk_payment_intent INT(10) UNSIGNED NOT NULL,
    amount INT NOT NULL COMMENT 'Refunded amount in cents',
    reason VARCHAR(100) COMMENT 'duplicate, fraudulent, requested_by_customer',
    status VARCHAR(50),
    metadata JSON,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    FOREIGN KEY (fk_payment_intent) REFERENCES stripe_payment_intents(id) ON DELETE CASCADE,
    INDEX idx_fk_payment_intent (fk_payment_intent),
    INDEX idx_status (status)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- Table of received webhooks (to avoid duplicates and for debugging)
CREATE TABLE IF NOT EXISTS stripe_webhooks (
    id INT(10) UNSIGNED PRIMARY KEY AUTO_INCREMENT,
    stripe_event_id VARCHAR(255) UNIQUE,
    event_type VARCHAR(100),
    livemode BOOLEAN DEFAULT FALSE,
    payload JSON,
    processed BOOLEAN DEFAULT FALSE,
    error_message TEXT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    processed_at TIMESTAMP NULL,
    INDEX idx_event_type (event_type),
    INDEX idx_processed (processed),
    INDEX idx_created_at (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

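-- Usage sketch (not part of the original schema): the UNIQUE key on
-- stripe_event_id makes webhook recording idempotent, so a Stripe event
-- delivered twice is stored only once and never processed twice. For example:
--   INSERT IGNORE INTO stripe_webhooks (stripe_event_id, event_type, livemode, payload)
--   VALUES ('evt_xxx', 'payment_intent.succeeded', FALSE, '{"id": "evt_xxx"}');
-- An affected-row count of 0 signals a duplicate delivery that can be skipped.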
-- Seed data: Android devices certified for Tap to Pay in France
INSERT INTO stripe_android_certified_devices (manufacturer, model, model_identifier, tap_to_pay_certified, min_android_version, certification_date) VALUES
-- Samsung
('Samsung', 'Galaxy S21', 'SM-G991B', TRUE, 11, '2023-01-01'),
('Samsung', 'Galaxy S21+', 'SM-G996B', TRUE, 11, '2023-01-01'),
('Samsung', 'Galaxy S21 Ultra', 'SM-G998B', TRUE, 11, '2023-01-01'),
('Samsung', 'Galaxy S22', 'SM-S901B', TRUE, 12, '2023-01-01'),
('Samsung', 'Galaxy S22+', 'SM-S906B', TRUE, 12, '2023-01-01'),
('Samsung', 'Galaxy S22 Ultra', 'SM-S908B', TRUE, 12, '2023-01-01'),
('Samsung', 'Galaxy S23', 'SM-S911B', TRUE, 13, '2023-06-01'),
('Samsung', 'Galaxy S23+', 'SM-S916B', TRUE, 13, '2023-06-01'),
('Samsung', 'Galaxy S23 Ultra', 'SM-S918B', TRUE, 13, '2023-06-01'),
('Samsung', 'Galaxy S24', 'SM-S921B', TRUE, 14, '2024-01-01'),
('Samsung', 'Galaxy S24+', 'SM-S926B', TRUE, 14, '2024-01-01'),
('Samsung', 'Galaxy S24 Ultra', 'SM-S928B', TRUE, 14, '2024-01-01'),
-- Google Pixel
('Google', 'Pixel 6', 'oriole', TRUE, 12, '2023-01-01'),
('Google', 'Pixel 6 Pro', 'raven', TRUE, 12, '2023-01-01'),
('Google', 'Pixel 6a', 'bluejay', TRUE, 12, '2023-03-01'),
('Google', 'Pixel 7', 'panther', TRUE, 13, '2023-03-01'),
('Google', 'Pixel 7 Pro', 'cheetah', TRUE, 13, '2023-03-01'),
('Google', 'Pixel 7a', 'lynx', TRUE, 13, '2023-06-01'),
('Google', 'Pixel 8', 'shiba', TRUE, 14, '2023-10-01'),
('Google', 'Pixel 8 Pro', 'husky', TRUE, 14, '2023-10-01'),
('Google', 'Pixel Fold', 'felix', TRUE, 13, '2023-07-01'),
-- OnePlus
('OnePlus', '9', 'LE2113', TRUE, 11, '2023-03-01'),
('OnePlus', '9 Pro', 'LE2123', TRUE, 11, '2023-03-01'),
('OnePlus', '10 Pro', 'NE2213', TRUE, 12, '2023-06-01'),
('OnePlus', '11', 'CPH2449', TRUE, 13, '2023-09-01'),
-- Xiaomi
('Xiaomi', 'Mi 11', 'M2011K2G', TRUE, 11, '2023-06-01'),
('Xiaomi', '12', '2201123G', TRUE, 12, '2023-09-01'),
('Xiaomi', '12 Pro', '2201122G', TRUE, 12, '2023-09-01'),
('Xiaomi', '13', '2211133G', TRUE, 13, '2024-01-01'),
('Xiaomi', '13 Pro', '2210132G', TRUE, 13, '2024-01-01');

-- View to simplify statistics queries
CREATE OR REPLACE VIEW v_stripe_payment_stats AS
SELECT
    spi.fk_entite,
    e.encrypted_name AS entite_name,
    spi.fk_user,
    u.encrypted_name AS user_nom,
    u.first_name AS user_prenom,
    COUNT(CASE WHEN spi.status = 'succeeded' THEN 1 END) as total_ventes,
    SUM(CASE WHEN spi.status = 'succeeded' THEN spi.amount ELSE 0 END) as total_montant,
    SUM(CASE WHEN spi.status = 'succeeded' THEN spi.application_fee ELSE 0 END) as total_commissions,
    DATE(spi.created_at) as date_vente
FROM stripe_payment_intents spi
LEFT JOIN entites e ON spi.fk_entite = e.id
LEFT JOIN users u ON spi.fk_user = u.id
GROUP BY spi.fk_entite, spi.fk_user, DATE(spi.created_at);

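-- Usage sketch (not part of the original file): daily totals for one
-- association; amounts are stored in cents in stripe_payment_intents.amount.
--   SELECT date_vente, total_ventes, total_montant / 100 AS total_eur
--   FROM v_stripe_payment_stats
--   WHERE fk_entite = 1
--   ORDER BY date_vente DESC;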
-- View for the associations' dashboard
CREATE OR REPLACE VIEW v_stripe_amicale_dashboard AS
SELECT
    sa.fk_entite,
    e.encrypted_name AS entite_name,
    sa.stripe_account_id,
    sa.charges_enabled,
    sa.payouts_enabled,
    COUNT(DISTINCT spi.id) as total_transactions,
    SUM(CASE WHEN spi.status = 'succeeded' THEN spi.amount ELSE 0 END) as total_revenus,
    SUM(CASE WHEN spi.status = 'succeeded' THEN spi.application_fee ELSE 0 END) as total_frais_plateforme,
    MAX(spi.created_at) as derniere_transaction
FROM stripe_accounts sa
LEFT JOIN entites e ON sa.fk_entite = e.id
LEFT JOIN stripe_payment_intents spi ON sa.fk_entite = spi.fk_entite
GROUP BY sa.fk_entite, sa.stripe_account_id;
@@ -1,473 +0,0 @@
# TODO - Complete isolation of operations

## 🎯 Goal

Put in place **complete per-operation isolation**, where each operation is fully self-contained and can be deleted independently without impacting other operations or the central `users` table.

## 📊 Target architecture

```
operations (id: 850)
├── ope_users (id: 2500, fk_operation: 850, fk_user: 100)
│   ├── ope_users_sectors (fk_user: 2500 ← ope_users.id, fk_sector: 5400)
│   └── ope_pass (fk_user: 2500 ← ope_users.id, fk_sector: 5400)
└── ope_sectors (id: 5400, fk_operation: 850)

users (id: 100) ← central table (kept even if the operation is deleted)
```

---
## ✅ Task 1: SQL schema change

### 📁 File: `scripts/orga/fix_fk_constraints.sql`

### Actions

- [ ] **1.1** Test the SQL script on **dva_geo** (DEV)
   ```bash
   incus exec dva-geo -- mysql dva_geo < /var/www/geosector/api/scripts/orga/fix_fk_constraints.sql
   ```

- [ ] **1.2** Check the constraints after execution:
   ```sql
   SELECT TABLE_NAME, COLUMN_NAME, REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME
   FROM information_schema.KEY_COLUMN_USAGE
   WHERE TABLE_SCHEMA = 'dva_geo'
     AND TABLE_NAME IN ('ope_users_sectors', 'ope_pass')
     AND COLUMN_NAME = 'fk_user';
   ```
   Expected result:
   - `ope_users_sectors.fk_user → ope_users.id`
   - `ope_pass.fk_user → ope_users.id`

- [ ] **1.3** Apply to **rca_geo** (STAGING) after validation on dva_geo

- [ ] **1.4** Apply to **pra_geo** (PRODUCTION) after validation on rca_geo

### ⚠️ Important

- Existing data must be **cleaned up before** running the script
- Alternatively: recreate all the data with the new migration
- The `ON DELETE CASCADE` FKs will automatically delete `ope_users_sectors` and `ope_pass` rows when `ope_users` rows are deleted

---
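Before that cleanup, the rows that would violate the new constraints can be spotted with a pre-flight query — a hedged sketch, assuming `fk_user` still holds `users.id` values at this point:

```sql
-- ope_pass rows whose fk_user has no ope_users counterpart in the same operation
SELECT COUNT(*) AS would_violate
FROM ope_pass op
LEFT JOIN ope_users ou
  ON ou.fk_user = op.fk_user
 AND ou.fk_operation = op.fk_operation
WHERE ou.id IS NULL;
```

A non-zero count means the cleanup (or the re-migration) has to happen before the FK script can succeed.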
## ✅ Task 2: Fix the migration2 script

### 📁 Files involved

1. `scripts/migration2/php/lib/SectorMigrator.php`
2. `scripts/migration2/php/lib/PassageMigrator.php`

### Actions

#### 2.1 SectorMigrator.php - ope_users_sectors migration

- [ ] **Line 253**: switch from `users.id` to `ope_users.id`

```php
// ❌ BEFORE
':fk_user' => $us['fk_user'], // users.id (central table)

// ✅ AFTER
':fk_user' => $userMapping[$us['fk_user']], // ope_users.id (via the mapping)
```

#### 2.2 PassageMigrator.php - ope_pass migration

- [ ] **Lines 64-67**: check that the mapping exists
- [ ] **Line 77**: pass `ope_users.id` instead of `users.id`

```php
// ❌ BEFORE (line 77)
$newPassId = $this->insertPassage($passage, $newOperationId, $newOpeSectorId, $passage['fk_user']);

// ✅ AFTER
$newOpeUserId = $userMapping[$passage['fk_user']];
$newPassId = $this->insertPassage($passage, $newOperationId, $newOpeSectorId, $newOpeUserId);
```

- [ ] **Line 164**: use the `$userId` parameter, which will now hold `ope_users.id`

```php
// ❌ BEFORE
':fk_user' => $userId, // users.id (central table)

// ✅ AFTER (the $userId parameter already contains ope_users.id)
':fk_user' => $userId, // ope_users.id
```

- [ ] **Line 71**: fix `verifyUserSectorAssociation` so it checks against `ope_users.id`

```php
// ❌ BEFORE
if (!$this->verifyUserSectorAssociation($newOperationId, $passage['fk_user'], $newOpeSectorId)) {

// ✅ AFTER
if (!$this->verifyUserSectorAssociation($newOperationId, $newOpeUserId, $newOpeSectorId)) {
```

#### 2.3 Test the full migration

- [ ] **On dva_geo**: wipe one entity's data and re-run the migration
   ```bash
   php php/migrate_from_backup.php --mode=entity --entity-id=5
   ```

- [ ] **Check** in the database that:
   - `ope_users_sectors.fk_user` contains `ope_users.id` values
   - `ope_pass.fk_user` contains `ope_users.id` values
   - the values match the mapping

- [ ] **Check** that deleting an operation removes everything with it (CASCADE)
   ```sql
   DELETE FROM operations WHERE id = 850;
   -- Must automatically delete:
   -- - ope_users (ON DELETE CASCADE from operations)
   -- - ope_users_sectors (ON DELETE CASCADE from ope_users)
   -- - ope_pass (ON DELETE CASCADE from ope_users)
   -- - ope_sectors (ON DELETE CASCADE from operations)
   ```

---
## ✅ Task 3: API checks

### Impact on API endpoints

#### 3.1 Check the queries using `ope_pass.fk_user`

- [ ] **Find** every endpoint that reads `ope_pass.fk_user`
   ```bash
   grep -r "ope_pass.*fk_user" src/Controllers/
   grep -r "fk_user.*ope_pass" src/Controllers/
   ```

- [ ] **Check** whether these endpoints:
   - JOIN with `users` through `ope_pass.fk_user`?
   - If YES: route the JOIN through `ope_users`:
   ```sql
   -- ❌ BEFORE
   SELECT op.*, u.encrypted_name
   FROM ope_pass op
   JOIN users u ON op.fk_user = u.id

   -- ✅ AFTER
   SELECT op.*, u.encrypted_name
   FROM ope_pass op
   JOIN ope_users ou ON op.fk_user = ou.id
   JOIN users u ON ou.fk_user = u.id
   ```

#### 3.2 Check the queries using `ope_users_sectors.fk_user`

- [ ] **Find** every endpoint that reads `ope_users_sectors.fk_user`
   ```bash
   grep -r "ope_users_sectors.*fk_user" src/Controllers/
   ```

- [ ] **Check** the same thing: if there is a JOIN with `users`, go through `ope_users`

#### 3.3 Endpoints likely affected

To check:
- [ ] `OperationController` - list of an operation's users
- [ ] `PassageController` - passage list/details
- [ ] `SectorController` - sector list with assigned users
- [ ] Any endpoint returning per-user statistics

---
## ✅ Task 4: API fixes - login JSON response

### Impact on the login JSON response

#### 4.1 `users_sectors` group - add `ope_user_id`

**Identified problem**: Flutter receives `users_sectors` with `id` (users.id), while the `passages` carry `fk_user` (ope_users.id). Mapping between the two is impossible.

**Solution**: change the query in `LoginController.php` (lines 426 and 1181) to return both IDs:

```sql
-- ✅ AFTER
SELECT DISTINCT
    u.id as user_id,        -- users.id (central table, for member management)
    ou.id as ope_user_id,   -- ope_users.id (to link with passages/sectors)
    ou.first_name,
    u.encrypted_name,
    u.sect_name,
    us.fk_sector
FROM users u
JOIN ope_users ou ON u.id = ou.fk_user
JOIN ope_users_sectors us ON ou.id = us.fk_user AND ou.fk_operation = us.fk_operation
WHERE us.fk_sector IN ($sectorIdsString)
  AND us.fk_operation = ?
  AND us.chk_active = 1
  AND u.chk_active = 1
  AND u.id != ?
```

**Expected JSON result**:
```json
{
  "user_id": 123,       // users.id (for member management in the UI)
  "ope_user_id": 50,    // ope_users.id (to link with passages.fk_user and sectors)
  "first_name": "Jane",
  "name": "Jane Smith",
  "sect_name": "Smith",
  "fk_sector": 456
}
```

**Flutter usage**:
```dart
// Find a user's passages
passages.where((p) => p.fkUser == usersSectors[i].opeUserId) // ✅ OK
```

- [ ] **Change** `LoginController.php` line 426 (`login()` method)
- [ ] **Change** `LoginController.php` line 1181 (`checkSession()` method)
- [ ] **Test** the login JSON response in admin mode

---
## ✅ Task 5: Flutter checks - ID handling

### Impact on the mobile app

#### 5.1 Data models

- [ ] **Check** the `UserSector` model (or equivalent)
   - Add an `opeUserId` (int) field to hold `ope_users.id`
   - Keep `userId` (int) to hold `users.id`

- [ ] **Check** the `Passage` model (or equivalent)
   - The `fkUser` field now points to `ope_users.id`

#### 5.2 Sector management (Admin mode)

- [ ] **Sector creation**
   - The API inserts into `ope_sectors`
   - User assignment: use `ope_user_id` (not `user_id`)
   - Endpoint: `POST /api/sectors`
   - Body: `{ ..., users: [50, 51, 52] }` ← `ope_users` IDs

- [ ] **Sector update**
   - User assignment: use `ope_user_id`
   - Endpoint: `PUT /api/sectors/:id`
   - Body: `{ ..., users: [50, 51, 52] }` ← `ope_users` IDs

- [ ] **Sector deletion**
   - The API deletes from `ope_pass`, `ope_users_sectors` and `ope_sectors`
   - CASCADE handles the dependencies automatically
   - Endpoint: `DELETE /api/sectors/:id`

#### 5.3 Member management (Admin mode)

- [ ] **Member creation**
   - The API inserts into `users` (central table)
   - The API also inserts into `ope_users` for the active operation
   - **Expected response**:
   ```json
   {
     "status": "success",
     "user": {
       "id": 123,          // users.id
       "ope_user_id": 50,  // ope_users.id (new)
       "first_name": "John",
       "name": "John Doe",
       ...
     }
   }
   ```
   - Endpoint: `POST /api/users`
   - Flutter stores both IDs: `userId` and `opeUserId`

- [ ] **Member update**
   - The API updates `users` (central table)
   - The API also updates `ope_users` for the active operation
   - Endpoint: `PUT /api/users/:id`

- [ ] **Member deletion**
   - The API deletes from `ope_users` (active operation)
   - The API deletes from `users` (central table)
   - CASCADE automatically removes `ope_users_sectors` and `ope_pass`
   - Endpoint: `DELETE /api/users/:id?transfer_to=XX`

#### 5.4 Passage management (Admin & User mode)

- [ ] **Passage creation**
   - Automatic assignment of the closest `ope_sectors.id`
   - Assignment of the `ope_users.id` (logged-in or selected user)
   - Endpoint: `POST /api/passages`
   - Body: `{ ..., fk_user: 50, fk_sector: 456 }` ← `ope_users` and `ope_sectors` IDs

- [ ] **Passage update**
   - Assignment of the `ope_users.id` if the user changes
   - Endpoint: `PUT /api/passages/:id`
   - Body: `{ ..., fk_user: 50 }` ← `ope_users` ID

- [ ] **Passage deletion**
   - The API deletes from `ope_pass`
   - Endpoint: `DELETE /api/passages/:id`

#### 5.5 Flutter UI - ID mapping

**Scenarios to handle**:

1. **Displaying sectors with their assigned users**:
   ```dart
   // Use usersSectors[i].opeUserId to link with passages
   final userPassages = passages.where((p) =>
     p.fkUser == usersSectors[i].opeUserId &&
     p.fkSector == sector.id
   ).toList();
   ```

2. **Assigning a passage to a user**:
   ```dart
   // Send ope_user_id in the API request
   await apiService.createPassage({
     ...passageData,
     'fk_user': userSector.opeUserId, // ope_users.id
     'fk_sector': sector.id
   });
   ```

3. **Displaying a user's name from a passage**:
   ```dart
   // Look up usersSectors by ope_user_id
   // (firstWhereOrNull, from package:collection, is null-safe,
   // unlike firstWhere with orElse: () => null)
   final userSector = usersSectors.firstWhereOrNull(
     (us) => us.opeUserId == passage.fkUser,
   );
   final userName = userSector?.name ?? 'Unknown';
   ```

4. **Member management**:
   ```dart
   // Keep both IDs at creation time
   final newMember = await apiService.createUser(userData);
   membres.add(Member(
     userId: newMember['id'],              // users.id
     opeUserId: newMember['ope_user_id'],  // ope_users.id
     ...
   ));
   ```

#### 5.6 Display tests

- [ ] Test displaying passages with user names
- [ ] Test displaying sectors with assigned users
- [ ] Test creating a member (check that both IDs are returned)
- [ ] Test deleting a member (check the passage transfer)
- [ ] Test creating a sector with user assignment
- [ ] Test creating a passage with user assignment
- [ ] Test deleting an operation (must clean everything up)

---
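The member deletion with transfer (`DELETE /api/users/:id?transfer_to=XX`, section 5.3) can be outlined in SQL — a hedged sketch, not the actual controller code, assuming both values are `ope_users.id` values held in session variables:

```sql
-- Reassign the member's passages to the target member, then delete the member;
-- the remaining ope_users_sectors rows are removed by ON DELETE CASCADE.
UPDATE ope_pass
   SET fk_user = @transfer_to_ope_user_id
 WHERE fk_user = @deleted_ope_user_id;

DELETE FROM ope_users WHERE id = @deleted_ope_user_id;
```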
## 📋 Recommended execution order

1. ✅ **Fix the migration2 code** (PHP)
2. ✅ **Test on dva_geo** with the modified schema
3. ✅ **Check the API** on dva_geo
4. ✅ **Check Flutter** against dva_geo
5. 🚀 **Deploy the SQL schema** to rca_geo
6. 🚀 **Deploy the code** to rca_geo
7. ✅ **Test in staging**
8. 🚀 **Deploy to production** (pra_geo)

---
## 🔍 Useful SQL queries for verification

### Check the current FK constraints

```sql
SELECT
    TABLE_NAME,
    COLUMN_NAME,
    CONSTRAINT_NAME,
    REFERENCED_TABLE_NAME,
    REFERENCED_COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = DATABASE()
  AND (TABLE_NAME = 'ope_pass' OR TABLE_NAME = 'ope_users_sectors')
  AND COLUMN_NAME = 'fk_user';
```

### Check data integrity after migration

```sql
-- Check that every fk_user in ope_pass exists in ope_users
SELECT COUNT(*) as orphans
FROM ope_pass op
LEFT JOIN ope_users ou ON op.fk_user = ou.id
WHERE ou.id IS NULL;
-- Expected result: 0

-- Check that every fk_user in ope_users_sectors exists in ope_users
SELECT COUNT(*) as orphans
FROM ope_users_sectors ous
LEFT JOIN ope_users ou ON ous.fk_user = ou.id
WHERE ou.id IS NULL;
-- Expected result: 0
```

### Test the cascading delete

```sql
-- Count before deletion
SELECT
  (SELECT COUNT(*) FROM ope_users WHERE fk_operation = 850) as ope_users_count,
  (SELECT COUNT(*) FROM ope_users_sectors WHERE fk_operation = 850) as ope_users_sectors_count,
  (SELECT COUNT(*) FROM ope_pass WHERE fk_operation = 850) as ope_pass_count,
  (SELECT COUNT(*) FROM ope_sectors WHERE fk_operation = 850) as ope_sectors_count;

-- Delete the operation
DELETE FROM operations WHERE id = 850;

-- Check that everything was deleted (every count must be 0)
SELECT
  (SELECT COUNT(*) FROM ope_users WHERE fk_operation = 850) as ope_users_count,
  (SELECT COUNT(*) FROM ope_users_sectors WHERE fk_operation = 850) as ope_users_sectors_count,
  (SELECT COUNT(*) FROM ope_pass WHERE fk_operation = 850) as ope_pass_count,
  (SELECT COUNT(*) FROM ope_sectors WHERE fk_operation = 850) as ope_sectors_count;
```

---
## 📝 Important notes

### Advantages of this architecture

✅ **Complete isolation**: deleting an operation deletes everything (ope_users, sectors, passages)
✅ **Performance**: no complex joins against the central `users` table
✅ **History**: an operation's data is frozen in time
✅ **Simplicity**: simpler queries, fewer risks of inconsistency

### Implications

⚠️ **Duplication**: a user working on 3 operations will have 3 rows in `ope_users`
⚠️ **Size**: the `ope_users` table will be larger
⚠️ **Joins**: reaching the `users` table now requires going through `ope_users.fk_user`

### Backward compatibility

❌ This change **BREAKS** compatibility with existing data
✅ It requires a **full re-migration** of all entities after the schema change
✅ Alternatively: a transformation script for the existing data (more complex)

---

## 🎯 Status

- [ ] SQL schema changed on dva_geo
- [ ] migration2 code fixed
- [ ] API checked and fixed
- [ ] Flutter checked and fixed
- [ ] Full tests on dva_geo
- [ ] Deployment to rca_geo
- [ ] Deployment to pra_geo
@@ -1,65 +0,0 @@
-- ================================================================================
-- Migration script: fix FK constraints for per-operation isolation
-- ================================================================================
--
-- This script changes the foreign key constraints so that:
-- - ope_users_sectors.fk_user → points to ope_users.id (instead of users.id)
-- - ope_pass.fk_user → points to ope_users.id (instead of users.id)
--
-- This enables complete operation isolation: deleting an operation automatically
-- deletes all of its ope_users, ope_sectors, ope_users_sectors and ope_pass rows.
--
-- EXECUTION ORDER:
-- 1. dva_geo (DEV) - test
-- 2. rca_geo (STAGING)
-- 3. pra_geo (PRODUCTION)
--
-- ================================================================================

USE dva_geo; -- Adjust per environment (dva_geo, rca_geo, pra_geo)

-- ================================================================================
-- 1. Change ope_users_sectors.fk_user
-- ================================================================================

-- Drop the old FK constraint
ALTER TABLE ope_users_sectors
  DROP FOREIGN KEY ope_users_sectors_ibfk_2;

-- Recreate the FK constraint against ope_users.id
ALTER TABLE ope_users_sectors
  ADD CONSTRAINT ope_users_sectors_ibfk_2
  FOREIGN KEY (fk_user) REFERENCES ope_users (id) ON DELETE CASCADE ON UPDATE CASCADE;

-- ================================================================================
-- 2. Change ope_pass.fk_user
-- ================================================================================

-- Drop the old FK constraint
ALTER TABLE ope_pass
  DROP FOREIGN KEY ope_pass_ibfk_3;

-- Recreate the FK constraint against ope_users.id
ALTER TABLE ope_pass
  ADD CONSTRAINT ope_pass_ibfk_3
  FOREIGN KEY (fk_user) REFERENCES ope_users (id) ON DELETE CASCADE ON UPDATE CASCADE;

-- ================================================================================
-- Final check
-- ================================================================================

SELECT
    TABLE_NAME,
    COLUMN_NAME,
    CONSTRAINT_NAME,
    REFERENCED_TABLE_NAME,
    REFERENCED_COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME IN ('ope_users_sectors', 'ope_pass')
  AND COLUMN_NAME = 'fk_user'
ORDER BY TABLE_NAME;

-- Expected result:
-- ope_pass          | fk_user | ope_pass_ibfk_3          | ope_users | id
-- ope_users_sectors | fk_user | ope_users_sectors_ibfk_2 | ope_users | id
@@ -1,121 +0,0 @@
|
|||||||
-- ================================================================================
-- SAFE migration script: fix the FK constraints for per-operation isolation
-- ================================================================================
--
-- This script changes the foreign key constraints so that:
--   - ope_users_sectors.fk_user → points to ope_users.id (instead of users.id)
--   - ope_pass.fk_user → points to ope_users.id (instead of users.id)
--
-- SAFE version: checks that each constraint exists before dropping it
--
-- ================================================================================

USE dva_geo;

-- ================================================================================
-- Show the current FK constraints
-- ================================================================================

SELECT
    TABLE_NAME,
    COLUMN_NAME,
    CONSTRAINT_NAME,
    REFERENCED_TABLE_NAME,
    REFERENCED_COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = 'dva_geo'
  AND TABLE_NAME IN ('ope_users_sectors', 'ope_pass')
  AND COLUMN_NAME = 'fk_user'
ORDER BY TABLE_NAME;

-- ================================================================================
-- 1. Change ope_users_sectors.fk_user
-- ================================================================================

-- Drop the old FK constraint if it exists
SET @constraint_exists = (
    SELECT COUNT(*)
    FROM information_schema.KEY_COLUMN_USAGE
    WHERE TABLE_SCHEMA = 'dva_geo'
      AND TABLE_NAME = 'ope_users_sectors'
      AND COLUMN_NAME = 'fk_user'
      AND CONSTRAINT_NAME LIKE '%ibfk%'
);

SET @sql = IF(@constraint_exists > 0,
    CONCAT('ALTER TABLE ope_users_sectors DROP FOREIGN KEY ',
        (SELECT CONSTRAINT_NAME
         FROM information_schema.KEY_COLUMN_USAGE
         WHERE TABLE_SCHEMA = 'dva_geo'
           AND TABLE_NAME = 'ope_users_sectors'
           AND COLUMN_NAME = 'fk_user'
           AND CONSTRAINT_NAME LIKE '%ibfk%'
         LIMIT 1)),
    'SELECT "No FK constraint to drop on ope_users_sectors" AS message'
);

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- Recreate the FK constraint pointing to ope_users.id
ALTER TABLE ope_users_sectors
    ADD CONSTRAINT ope_users_sectors_ibfk_2
    FOREIGN KEY (fk_user) REFERENCES ope_users (id) ON DELETE CASCADE ON UPDATE CASCADE;

-- ================================================================================
-- 2. Change ope_pass.fk_user
-- ================================================================================

-- Drop the old FK constraint if it exists
SET @constraint_exists = (
    SELECT COUNT(*)
    FROM information_schema.KEY_COLUMN_USAGE
    WHERE TABLE_SCHEMA = 'dva_geo'
      AND TABLE_NAME = 'ope_pass'
      AND COLUMN_NAME = 'fk_user'
      AND CONSTRAINT_NAME LIKE '%ibfk%'
);

SET @sql = IF(@constraint_exists > 0,
    CONCAT('ALTER TABLE ope_pass DROP FOREIGN KEY ',
        (SELECT CONSTRAINT_NAME
         FROM information_schema.KEY_COLUMN_USAGE
         WHERE TABLE_SCHEMA = 'dva_geo'
           AND TABLE_NAME = 'ope_pass'
           AND COLUMN_NAME = 'fk_user'
           AND CONSTRAINT_NAME LIKE '%ibfk%'
         LIMIT 1)),
    'SELECT "No FK constraint to drop on ope_pass" AS message'
);

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- Recreate the FK constraint pointing to ope_users.id
ALTER TABLE ope_pass
    ADD CONSTRAINT ope_pass_ibfk_3
    FOREIGN KEY (fk_user) REFERENCES ope_users (id) ON DELETE CASCADE ON UPDATE CASCADE;

-- ================================================================================
-- Final check
-- ================================================================================

SELECT
    TABLE_NAME,
    COLUMN_NAME,
    CONSTRAINT_NAME,
    REFERENCED_TABLE_NAME,
    REFERENCED_COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = 'dva_geo'
  AND TABLE_NAME IN ('ope_users_sectors', 'ope_pass')
  AND COLUMN_NAME = 'fk_user'
ORDER BY TABLE_NAME;

-- Expected result:
-- ope_pass          | fk_user | ope_pass_ibfk_3          | ope_users | id
-- ope_users_sectors | fk_user | ope_users_sectors_ibfk_2 | ope_users | id

SELECT '✓ FK constraints updated successfully!' AS status;
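The safe migration above is meant to be fed to the MySQL/MariaDB client. A minimal wrapper sketch, assuming the script is saved as `migrate_fk_safe.sql` and client credentials live in `~/.my.cnf` (both names are assumptions, not part of the repository); with `DRY_RUN=1`, the default here, the command is only printed:

```shell
#!/bin/sh
# Minimal wrapper sketch (assumptions: migration saved as migrate_fk_safe.sql,
# mysql credentials in ~/.my.cnf). DRY_RUN=1 (default) only prints the command.
DB="dva_geo"
SQL_FILE="${SQL_FILE:-migrate_fk_safe.sql}"
CMD="mysql $DB < $SQL_FILE"

if [ "${DRY_RUN:-1}" = "1" ]; then
  # Dry run: show what would be executed
  echo "would run: $CMD"
else
  # Real run: apply the migration
  mysql "$DB" < "$SQL_FILE"
fi
```

Running it with the defaults prints the command instead of touching the database, which is a cheap way to review the invocation before a real run.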
@@ -1,93 +0,0 @@
-- ================================================================================
-- Full table cleanup script - DVA_GEO
-- ================================================================================
--
-- This script empties every table to start from scratch.
-- WARNING: all data will be lost!
--
-- Usage: run against dva_geo ONLY (development environment)
--
-- ================================================================================

USE dva_geo;

-- Temporarily disable foreign key checks
SET FOREIGN_KEY_CHECKS = 0;

-- ================================================================================
-- 1. Dependent tables (in dependency order)
-- ================================================================================

TRUNCATE TABLE ope_pass_histo;
TRUNCATE TABLE ope_pass;
TRUNCATE TABLE ope_users_sectors;
TRUNCATE TABLE sectors_adresses;
TRUNCATE TABLE ope_sectors;
TRUNCATE TABLE ope_users;
TRUNCATE TABLE medias;
TRUNCATE TABLE operations;

-- ================================================================================
-- 2. User-related tables
-- ================================================================================

TRUNCATE TABLE user_devices;

-- ================================================================================
-- 3. Chat tables
-- ================================================================================

TRUNCATE TABLE chat_messages;
TRUNCATE TABLE chat_participants;
TRUNCATE TABLE chat_read_receipts;
TRUNCATE TABLE chat_rooms;

-- ================================================================================
-- 4. Core tables
-- ================================================================================

TRUNCATE TABLE users;
TRUNCATE TABLE entites;

-- Re-enable foreign key checks
SET FOREIGN_KEY_CHECKS = 1;

-- ================================================================================
-- Check: count the remaining rows
-- ================================================================================

SELECT
    'ope_pass_histo' AS table_name, COUNT(*) AS rows_count FROM ope_pass_histo
UNION ALL
SELECT 'ope_pass', COUNT(*) FROM ope_pass
UNION ALL
SELECT 'ope_users_sectors', COUNT(*) FROM ope_users_sectors
UNION ALL
SELECT 'sectors_adresses', COUNT(*) FROM sectors_adresses
UNION ALL
SELECT 'ope_sectors', COUNT(*) FROM ope_sectors
UNION ALL
SELECT 'ope_users', COUNT(*) FROM ope_users
UNION ALL
SELECT 'medias', COUNT(*) FROM medias
UNION ALL
SELECT 'operations', COUNT(*) FROM operations
UNION ALL
SELECT 'user_devices', COUNT(*) FROM user_devices
UNION ALL
SELECT 'chat_messages', COUNT(*) FROM chat_messages
UNION ALL
SELECT 'chat_participants', COUNT(*) FROM chat_participants
UNION ALL
SELECT 'chat_read_receipts', COUNT(*) FROM chat_read_receipts
UNION ALL
SELECT 'chat_rooms', COUNT(*) FROM chat_rooms
UNION ALL
SELECT 'users', COUNT(*) FROM users
UNION ALL
SELECT 'entites', COUNT(*) FROM entites
ORDER BY table_name;

-- Expected result: 0 everywhere

SELECT '✓ All tables emptied successfully!' AS status;
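Since the cleanup script above is destructive and intended for the development database only, a guard in whatever wrapper launches it is cheap insurance. A minimal sketch, where the `TARGET_DB` environment variable is an assumption (not part of the scripts in this diff):

```shell
#!/bin/sh
# Minimal safety guard sketch for the destructive cleanup script above.
# TARGET_DB is an assumption; anything other than dva_geo is refused.
TARGET_DB="${TARGET_DB:-dva_geo}"

if [ "$TARGET_DB" != "dva_geo" ]; then
  # Refuse to run against anything that is not the dev database
  echo "Refusing to run cleanup: '$TARGET_DB' is not the dev database" >&2
  exit 1
fi
echo "Cleanup allowed on $TARGET_DB"
```

Setting `TARGET_DB=pra_geo` (for example) makes the wrapper exit non-zero before any `TRUNCATE` is issued.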
@@ -1,150 +0,0 @@
-- ================================================================================
-- Verification script: full per-operation isolation
-- ================================================================================
--
-- This script checks that the per-operation isolation works correctly
--
-- ================================================================================

USE dva_geo;

-- ================================================================================
-- 1. Check the FK constraints
-- ================================================================================

SELECT '=== FK CONSTRAINT CHECK ===' AS '';

SELECT
    TABLE_NAME,
    COLUMN_NAME,
    CONSTRAINT_NAME,
    REFERENCED_TABLE_NAME,
    REFERENCED_COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = 'dva_geo'
  AND TABLE_NAME IN ('ope_users_sectors', 'ope_pass')
  AND COLUMN_NAME = 'fk_user'
ORDER BY TABLE_NAME;

-- Expected result:
-- ope_pass          | fk_user | ope_pass_ibfk_3          | ope_users | id
-- ope_users_sectors | fk_user | ope_users_sectors_ibfk_2 | ope_users | id

-- ================================================================================
-- 2. Check data integrity (no orphans)
-- ================================================================================

SELECT '=== DATA INTEGRITY CHECK ===' AS '';

-- Check that every fk_user in ope_pass exists in ope_users
SELECT
    'ope_pass → ope_users' AS verification,
    COUNT(*) AS orphans
FROM ope_pass op
LEFT JOIN ope_users ou ON op.fk_user = ou.id
WHERE ou.id IS NULL;
-- Expected result: 0

-- Check that every fk_user in ope_users_sectors exists in ope_users
SELECT
    'ope_users_sectors → ope_users' AS verification,
    COUNT(*) AS orphans
FROM ope_users_sectors ous
LEFT JOIN ope_users ou ON ous.fk_user = ou.id
WHERE ou.id IS NULL;
-- Expected result: 0

-- ================================================================================
-- 3. Migration statistics
-- ================================================================================

SELECT '=== MIGRATION STATISTICS ===' AS '';

-- Number of entities
SELECT 'Entités' AS table_name, COUNT(*) AS count FROM entites
UNION ALL
-- Number of operations
SELECT 'Opérations' AS table_name, COUNT(*) AS count FROM operations
UNION ALL
-- Number of users in the central table
SELECT 'Users (centrale)' AS table_name, COUNT(*) AS count FROM users
UNION ALL
-- Number of users attached to operations
SELECT 'ope_users' AS table_name, COUNT(*) AS count FROM ope_users
UNION ALL
-- Number of sectors
SELECT 'ope_sectors' AS table_name, COUNT(*) AS count FROM ope_sectors
UNION ALL
-- Number of user-sector links
SELECT 'ope_users_sectors' AS table_name, COUNT(*) AS count FROM ope_users_sectors
UNION ALL
-- Number of passages
SELECT 'ope_pass' AS table_name, COUNT(*) AS count FROM ope_pass
UNION ALL
-- Number of passage history rows
SELECT 'ope_pass_histo' AS table_name, COUNT(*) AS count FROM ope_pass_histo;

-- ================================================================================
-- 4. Detail per operation
-- ================================================================================

SELECT '=== DETAIL PER OPERATION ===' AS '';

SELECT
    o.id AS operation_id,
    o.libelle AS operation_name,
    (SELECT COUNT(*) FROM ope_users WHERE fk_operation = o.id) AS nb_users,
    (SELECT COUNT(*) FROM ope_sectors WHERE fk_operation = o.id) AS nb_sectors,
    (SELECT COUNT(*) FROM ope_users_sectors WHERE fk_operation = o.id) AS nb_user_sector_links,
    (SELECT COUNT(*) FROM ope_pass WHERE fk_operation = o.id) AS nb_passages
FROM operations o
ORDER BY o.id;

-- ================================================================================
-- 5. Check the users → ope_users relation
-- ================================================================================

SELECT '=== users → ope_users RELATION ===' AS '';

SELECT
    u.id AS user_id,
    u.first_name,
    u.sect_name,
    COUNT(DISTINCT ou.fk_operation) AS nb_operations,
    GROUP_CONCAT(DISTINCT ou.fk_operation ORDER BY ou.fk_operation) AS operations_ids
FROM users u
LEFT JOIN ope_users ou ON u.id = ou.fk_user
GROUP BY u.id, u.first_name, u.sect_name
ORDER BY u.id;

-- ================================================================================
-- 6. DELETION TEST (commented out for safety)
-- ================================================================================

SELECT '=== DELETION TEST INSTRUCTIONS ===' AS '';
SELECT 'To test CASCADE deletion, uncomment the section below' AS instruction;

-- Count before deletion (replace [ID_OPERATION] with a real ID)
/*
SET @operation_id = [ID_OPERATION];

SELECT
    CONCAT('Operation ID: ', @operation_id) AS info,
    (SELECT COUNT(*) FROM ope_users WHERE fk_operation = @operation_id) AS ope_users_count,
    (SELECT COUNT(*) FROM ope_users_sectors WHERE fk_operation = @operation_id) AS ope_users_sectors_count,
    (SELECT COUNT(*) FROM ope_pass WHERE fk_operation = @operation_id) AS ope_pass_count,
    (SELECT COUNT(*) FROM ope_sectors WHERE fk_operation = @operation_id) AS ope_sectors_count;

-- Delete the operation
DELETE FROM operations WHERE id = @operation_id;

-- Check that everything was deleted (must return 0 everywhere)
SELECT
    CONCAT('After deleting operation ID: ', @operation_id) AS info,
    (SELECT COUNT(*) FROM ope_users WHERE fk_operation = @operation_id) AS ope_users_count,
    (SELECT COUNT(*) FROM ope_users_sectors WHERE fk_operation = @operation_id) AS ope_users_sectors_count,
    (SELECT COUNT(*) FROM ope_pass WHERE fk_operation = @operation_id) AS ope_pass_count,
    (SELECT COUNT(*) FROM ope_sectors WHERE fk_operation = @operation_id) AS ope_sectors_count;
*/

SELECT '✓ Checks completed successfully!' AS status;
@@ -1,182 +0,0 @@
#!/bin/bash
#
# Patch script to adapt migrate_from_backup.php and migrate_batch.sh
# so they work with --env=rca|pra and source=geosector
#

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PHP_SCRIPT="$SCRIPT_DIR/php/migrate_from_backup.php"
BATCH_SCRIPT="$SCRIPT_DIR/migrate_batch.sh"

echo "=== Patching migration scripts ==="
echo ""

# Back up the original files
echo "Creating backups..."
cp "$PHP_SCRIPT" "$PHP_SCRIPT.backup"
cp "$BATCH_SCRIPT" "$BATCH_SCRIPT.backup"
echo "✓ Backups created"
echo ""

# ============================================================
# PATCH 1: migrate_from_backup.php - multi-env configuration
# ============================================================

echo "Patching migrate_from_backup.php..."

# Step 1: Replace the DB constants with a multi-env configuration
sed -i '31,50s/.*/    \/\/ REPLACED BY PATCH - see below/' "$PHP_SCRIPT"

# Insert the new configuration after line 38
sed -i '38a\
    private $env;\
\
    \/\/ Configuration multi-environnement\
    private const ENVIRONMENTS = [\
        '\''rca'\'' => [\
            '\''host'\'' => '\''13.23.33.3'\'', \/\/ maria3 sur IN3\
            '\''port'\'' => 3306,\
            '\''user'\'' => '\''rca_geo_user'\'',\
            '\''pass'\'' => '\''UPf3C0cQ805LypyM71iW'\'',\
            '\''target_db'\'' => '\''rca_geo'\'',\
            '\''source_db'\'' => '\''geosector'\'' \/\/ Base synchronisée par PM7\
        ],\
        '\''pra'\'' => [\
            '\''host'\'' => '\''13.23.33.4'\'', \/\/ maria4 sur IN4\
            '\''port'\'' => 3306,\
            '\''user'\'' => '\''pra_geo_user'\'',\
            '\''pass'\'' => '\''d2jAAGGWi8fxFrWgXjOA'\'',\
            '\''target_db'\'' => '\''pra_geo'\'',\
            '\''source_db'\'' => '\''geosector'\'' \/\/ Base synchronisée par PM7\
        ]\
    ];' "$PHP_SCRIPT"

# Step 2: Change the constructor to accept $env
sed -i 's/public function __construct($sourceDbName, $targetDbName, $mode/public function __construct($env, $mode/' "$PHP_SCRIPT"

# Step 3: Adapt the constructor body
sed -i '/public function __construct/,/^    }$/{
    s/\$this->sourceDbName = \$sourceDbName;/\$this->env = \$env;\n        if (!isset(self::ENVIRONMENTS[\$env])) {\n            throw new Exception("Invalid environment: \$env. Use '\''rca'\'' or '\''pra'\''");\n        }\n        \$config = self::ENVIRONMENTS[\$env];\n        \$this->sourceDbName = \$config['\''source_db'\''];\n        \$this->targetDbName = \$config['\''target_db'\''];/
    s/\$this->targetDbName = \$targetDbName;//
    s/Source: {\$sourceDbName}/Environment: \$env/
    s/Cible: {\$targetDbName}/Source: {\$this->sourceDbName} → Target: {\$this->targetDbName}/
}' "$PHP_SCRIPT"

# Step 4: Change connect() to use the env configuration
sed -i '/public function connect()/,/^    }$/{
    s/self::DB_HOST/self::ENVIRONMENTS[\$this->env]['\''host'\'']/g
    s/self::DB_PORT/self::ENVIRONMENTS[\$this->env]['\''port'\'']/g
    s/self::DB_USER_ROOT/self::ENVIRONMENTS[\$this->env]['\''user'\'']/g
    s/self::DB_PASS_ROOT/self::ENVIRONMENTS[\$this->env]['\''pass'\'']/g
    s/self::DB_USER/self::ENVIRONMENTS[\$this->env]['\''user'\'']/g
    s/self::DB_PASS/self::ENVIRONMENTS[\$this->env]['\''pass'\'']/g
}' "$PHP_SCRIPT"

# Step 5: Change parseArguments() - drop source-db and target-db, add env
sed -i '/function parseArguments/,/^}$/{
    s/'\''source-db'\'' => null,/'\''env'\'' => '\''rca'\'',/
    s/'\''target-db'\'' => '\''pra_geo'\'',//
}' "$PHP_SCRIPT"

# Step 6: Change showHelp()
sed -i '/function showHelp/,/^}$/{
    s/--source-db=NAME.*\[REQUIS\]/--env=ENV  Environment: '\''rca'\'' (recette) ou '\''pra'\'' (production) [défaut: rca]/
    s/--target-db=NAME.*/  (supprimé - déduit automatiquement de --env)/
    s/--source-db=geosector_20251007/--env=rca/g
    s/--target-db=pra_geo//g
    s/--target-db=rca_geo//g
}' "$PHP_SCRIPT"

# Step 7: Change the argument validation
sed -i '/Validation des arguments/,/exit(1);/{
    s/if (!$args\['\''source-db'\''\])/if (!isset(self::ENVIRONMENTS[\$args['\''env'\'']]))/
    s/--source-db est requis/--env doit être '\''rca'\'' ou '\''pra'\''/
}' "$PHP_SCRIPT"

# Step 8: Change the BackupMigration instantiation
sed -i '/new BackupMigration/,/);/{
    s/\$args\['\''source-db'\''\],/\$args['\''env'\''],/
    s/\$args\['\''target-db'\''\],//
}' "$PHP_SCRIPT"

echo "✓ migrate_from_backup.php patched"
echo ""

# ============================================================
# PATCH 2: migrate_batch.sh - adapt for env rca/pra
# ============================================================

echo "Patching migrate_batch.sh..."

# Step 1: Detect the environment automatically or via a parameter
sed -i '/# Configuration/a\
\
# Détection automatique de l'\''environnement\
if [ -f "/etc/hostname" ]; then\
    CONTAINER_NAME=$(cat /etc/hostname)\
    case $CONTAINER_NAME in\
        rca-geo)\
            ENV="rca"\
            ;;\
        pra-geo)\
            ENV="pra"\
            ;;\
        *)\
            ENV="rca" # Défaut\
            ;;\
    esac\
else\
    ENV="rca" # Défaut\
fi' "$BATCH_SCRIPT"

# Step 2: Replace SOURCE_DB and TARGET_DB
sed -i 's/SOURCE_DB="geosector_20251013_13"/# SOURCE_DB removed - always "geosector" (deduced from --env)/' "$BATCH_SCRIPT"
sed -i 's/TARGET_DB="pra_geo"/# TARGET_DB removed - deduced from --env/' "$BATCH_SCRIPT"

# Step 3: Add the --env option to the argument parsing
sed -i '/--interactive|-i)/i\
        --env)\
            ENV="$2"\
            shift 2\
            ;;' "$BATCH_SCRIPT"

# Step 4: Change the migrate_from_backup.php calls - line 200
sed -i '200,210s/--source-db="\$SOURCE_DB"/--env="$ENV"/' "$BATCH_SCRIPT"
sed -i '200,210s/--target-db="\$TARGET_DB"//' "$BATCH_SCRIPT"

# Step 5: Change the calls inside the loop - line 374
sed -i '374,380s/--source-db="\$SOURCE_DB"/--env="$ENV"/' "$BATCH_SCRIPT"
sed -i '374,380s/--target-db="\$TARGET_DB"//' "$BATCH_SCRIPT"

# Step 6: Update the log messages
sed -i 's/📁 Source: \$SOURCE_DB/🌍 Environment: $ENV/' "$BATCH_SCRIPT"
sed -i 's/📁 Cible: \$TARGET_DB/📁 Source: geosector → Target: (déduit de $ENV)/' "$BATCH_SCRIPT"

echo "✓ migrate_batch.sh patched"
echo ""

# ============================================================
# Summary
# ============================================================

echo "=== Patch completed ==="
echo ""
echo "Backups saved:"
echo "  - $PHP_SCRIPT.backup"
echo "  - $BATCH_SCRIPT.backup"
echo ""
echo "New usage:"
echo "  # On rca-geo (auto-detected)"
echo "  ./migrate_batch.sh"
echo ""
echo "  # On pra-geo with an explicit --env"
echo "  ./migrate_batch.sh --env=pra"
echo ""
echo "  # Migrate a single entity"
echo "  php php/migrate_from_backup.php --env=rca --mode=entity --entity-id=45"
echo ""
echo "To restore backups:"
echo "  cp $PHP_SCRIPT.backup $PHP_SCRIPT"
echo "  cp $BATCH_SCRIPT.backup $BATCH_SCRIPT"
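A known weakness of the sed-based patching above is that a pattern (or a hard-coded line range such as `200,210`) can silently match nothing, leaving the file untouched while the script still reports success. Since the script already keeps `.backup` copies, a `cmp` against the backup is a cheap post-patch sanity check. A sketch of the idea, using a throwaway temp file in place of the real scripts:

```shell
#!/bin/sh
# Post-patch sanity check sketch: sed edits can silently match nothing,
# so compare each patched file with its .backup copy afterwards.
# A temp file stands in for the real scripts here.
f=$(mktemp)
cp "$f" "$f.backup"
echo "patched content" >> "$f"   # simulate a patch that did apply

if cmp -s "$f" "$f.backup"; then
  RESULT="unchanged"             # identical files: the patch did nothing
else
  RESULT="patched"               # files differ: at least one edit landed
fi
echo "check result: $RESULT"
rm -f "$f" "$f.backup"
```

In the real script, running this check on `$PHP_SCRIPT` and `$BATCH_SCRIPT` right before the summary would turn a no-op patch into a visible warning.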
@@ -1,240 +0,0 @@
#!/usr/bin/env php
<?php
/**
 * Script: create the missing Stripe Terminal Locations
 *
 * Reason: some Stripe Connect accounts were created before the automatic
 * Location creation was implemented. This script creates the missing
 * Locations for all existing accounts.
 *
 * Date: 2025-11-03
 * Author: automatic migration
 */

// Simulate the web environment for AppConfig when running from the CLI
if (php_sapi_name() === 'cli') {
    $hostname = gethostname();
    if (strpos($hostname, 'pra') !== false) {
        $_SERVER['SERVER_NAME'] = 'app3.geosector.fr';
    } elseif (strpos($hostname, 'rca') !== false) {
        $_SERVER['SERVER_NAME'] = 'rapp.geosector.fr';
    } else {
        $_SERVER['SERVER_NAME'] = 'dapp.geosector.fr';
    }

    $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_HOST'] ?? $_SERVER['SERVER_NAME'];
    $_SERVER['REMOTE_ADDR'] = $_SERVER['REMOTE_ADDR'] ?? '127.0.0.1';

    if (!function_exists('getallheaders')) {
        function getallheaders() {
            return [];
        }
    }
}

// Load the Composer autoloader (for the Stripe SDK)
require_once dirname(dirname(__DIR__)) . '/vendor/autoload.php';

// Load the required classes explicitly
require_once dirname(dirname(__DIR__)) . '/src/Config/AppConfig.php';
require_once dirname(dirname(__DIR__)) . '/src/Core/Database.php';
require_once dirname(dirname(__DIR__)) . '/src/Services/ApiService.php';
require_once dirname(dirname(__DIR__)) . '/src/Services/LogService.php';
require_once dirname(dirname(__DIR__)) . '/src/Services/StripeService.php';

use App\Services\StripeService;

// Initialize the configuration
$config = AppConfig::getInstance();
$env = $config->getEnvironment();
$dbConfig = $config->getDatabaseConfig();

echo "\n";
echo "=============================================================================\n";
echo "  Creating the missing Stripe Terminal Locations\n";
echo "=============================================================================\n";
echo "Environment : " . strtoupper($env) . "\n";
echo "Database    : " . $dbConfig['name'] . "\n";
echo "\n";

try {
    // Initialize the database with the configuration
    Database::init($dbConfig);
    $db = Database::getInstance();

    // StripeService is a singleton
    $stripeService = StripeService::getInstance();

    // 1. Identify the accounts without a Location
    echo "📋 Looking for Stripe accounts without a Location...\n\n";

    $stmt = $db->query("
        SELECT
            sa.id,
            sa.fk_entite,
            sa.stripe_account_id,
            sa.stripe_location_id,
            e.encrypted_name,
            e.adresse1,
            e.adresse2,
            e.code_postal,
            e.ville
        FROM stripe_accounts sa
        INNER JOIN entites e ON sa.fk_entite = e.id
        WHERE sa.stripe_account_id IS NOT NULL
          AND (sa.stripe_location_id IS NULL OR sa.stripe_location_id = '')
          AND e.chk_active = 1
    ");

    $accountsWithoutLocation = $stmt->fetchAll(PDO::FETCH_ASSOC);
    $total = count($accountsWithoutLocation);

    if ($total === 0) {
        echo "✅ No account without a Location found. Everything is up to date!\n\n";
        exit(0);
    }

    echo "ℹ️  Found $total account(s) without a Location:\n\n";

    foreach ($accountsWithoutLocation as $account) {
        $name = !empty($account['encrypted_name'])
            ? ApiService::decryptData($account['encrypted_name'])
            : 'Amicale #' . $account['fk_entite'];

        echo "  - Entity #{$account['fk_entite']} : $name\n";
        echo "    Stripe Account : {$account['stripe_account_id']}\n";
        echo "    Address        : {$account['adresse1']}, {$account['code_postal']} {$account['ville']}\n\n";
    }

    // Ask for confirmation
    echo "⚠️  Create the missing Locations? (yes/no) : ";
    $handle = fopen("php://stdin", "r");
    $line = trim(fgets($handle));
    fclose($handle);

    if ($line !== 'yes') {
        echo "❌ Operation cancelled.\n\n";
        exit(0);
    }

    echo "\n🚀 Creating the Locations...\n\n";

    // Initialize Stripe with the right key for the current mode
    $stripeConfig = $config->getStripeConfig();
    $stripeMode = $stripeConfig['mode'] ?? 'test';
    $stripeSecretKey = ($stripeMode === 'live')
        ? $stripeConfig['secret_key_live']
        : $stripeConfig['secret_key_test'];

    \Stripe\Stripe::setApiKey($stripeSecretKey);
    echo "ℹ️  Stripe mode : " . strtoupper($stripeMode) . "\n\n";

    $success = 0;
    $errors = 0;

    // 2. Create the missing Locations
    foreach ($accountsWithoutLocation as $account) {
        $entiteId = $account['fk_entite'];
        $stripeAccountId = $account['stripe_account_id'];

        $name = !empty($account['encrypted_name'])
            ? ApiService::decryptData($account['encrypted_name'])
            : 'Amicale #' . $entiteId;

        echo "🔧 Entity #{$entiteId} : $name\n";

        try {
            // Build the address (fallback values are in French, as stored)
            $adresse1 = !empty($account['adresse1']) ? $account['adresse1'] : 'Adresse non renseignée';
            $ville = !empty($account['ville']) ? $account['ville'] : 'Ville';
            $codePostal = !empty($account['code_postal']) ? $account['code_postal'] : '00000';

            // Build the address for Stripe (do not send line2 when empty)
            $addressData = [
                'line1' => $adresse1,
                'city' => $ville,
                'postal_code' => $codePostal,
                'country' => 'FR',
            ];

            // Add line2 only when it is non-empty
            if (!empty($account['adresse2'])) {
                $addressData['line2'] = $account['adresse2'];
            }

            // Create the Location through the Stripe API
            $location = \Stripe\Terminal\Location::create([
                'display_name' => $name,
                'address' => $addressData,
                'metadata' => [
                    'entite_id' => $entiteId,
                    'type' => 'tap_to_pay',
                    'created_by' => 'migration_script'
                ]
            ], [
                'stripe_account' => $stripeAccountId
            ]);

            $locationId = $location->id;

            // Update the database
            $updateStmt = $db->prepare("
                UPDATE stripe_accounts
                SET stripe_location_id = :location_id,
                    updated_at = NOW()
                WHERE id = :id
            ");

            $updateStmt->execute([
                'location_id' => $locationId,
                'id' => $account['id']
            ]);

            echo "   ✅ Location created : $locationId\n\n";
            $success++;

        } catch (\Stripe\Exception\ApiErrorException $e) {
            echo "   ❌ Stripe error : " . $e->getMessage() . "\n\n";
            $errors++;
        } catch (Exception $e) {
            echo "   ❌ Error : " . $e->getMessage() . "\n\n";
            $errors++;
        }
    }

    // 3. Summary
    echo "\n";
    echo "=============================================================================\n";
    echo "  Operation summary\n";
    echo "=============================================================================\n";
    echo "✅ Locations created successfully : $success\n";
    echo "❌ Errors : $errors\n";
    echo "📊 Total processed : $total\n";
    echo "\n";

    // 4. Final check
    echo "🔍 Final check...\n";
    $stmt = $db->query("
        SELECT COUNT(*) as remaining
        FROM stripe_accounts sa
        WHERE sa.stripe_account_id IS NOT NULL
          AND (sa.stripe_location_id IS NULL OR sa.stripe_location_id = '')
    ");
    $remaining = $stmt->fetch(PDO::FETCH_ASSOC);
echo " ℹ️ Comptes restants sans Location : " . $remaining['remaining'] . "\n\n";
|
|
||||||
|
|
||||||
if ($remaining['remaining'] == 0) {
|
|
||||||
echo "🎉 Tous les comptes Stripe ont maintenant une Location !\n\n";
|
|
||||||
}
|
|
||||||
|
|
||||||
} catch (Exception $e) {
|
|
||||||
echo "\n";
|
|
||||||
echo "=============================================================================\n";
|
|
||||||
echo " ❌ ERREUR\n";
|
|
||||||
echo "=============================================================================\n";
|
|
||||||
echo "Message : " . $e->getMessage() . "\n";
|
|
||||||
echo "Fichier : " . $e->getFile() . ":" . $e->getLine() . "\n";
|
|
||||||
echo "\n";
|
|
||||||
exit(1);
|
|
||||||
}
|
|
||||||
File diff suppressed because it is too large
@@ -1,543 +0,0 @@
#!/usr/bin/env php
<?php

/**
 * Verbose migration script with per-table details
 *
 * Usage:
 *   php migrate_from_backup_verbose.php \
 *     --source-db=geosector_20251008 \
 *     --target-db=pra_geo \
 *     --entity-id=1178 \
 *     --limit-operations=3
 */

// Pull in the API dependencies
require_once dirname(dirname(__DIR__)) . '/bootstrap.php';

use GeoSector\Services\ApiService;

// Configuration
const DB_HOST = '13.23.33.4';
const DB_PORT = 3306;
const DB_USER = 'pra_geo_user';
const DB_PASS = 'd2jAAGGWi8fxFrWgXjOA';
const DB_USER_ROOT = 'root';
const DB_PASS_ROOT = 'MyAlpLocal,90b';

// Terminal colours
const C_RESET  = "\033[0m";
const C_RED    = "\033[0;31m";
const C_GREEN  = "\033[0;32m";
const C_YELLOW = "\033[1;33m";
const C_BLUE   = "\033[0;34m";
const C_CYAN   = "\033[0;36m";
const C_BOLD   = "\033[1m";

// Global state
$sourceDb = null;
$targetDb = null;
$sourceDbName = null;
$targetDbName = null;
$entityId = null;
$limitOperations = 3;
$stats = [
    'entites' => ['source' => 0, 'migrated' => 0],
    'users' => ['source' => 0, 'migrated' => 0],
    'operations' => ['source' => 0, 'migrated' => 0],
    'ope_sectors' => ['source' => 0, 'migrated' => 0],
    'sectors_adresses' => ['source' => 0, 'migrated' => 0],
    'ope_users' => ['source' => 0, 'migrated' => 0],
    'ope_users_sectors' => ['source' => 0, 'migrated' => 0],
    'ope_pass' => ['source' => 0, 'migrated' => 0],
    'ope_pass_histo' => ['source' => 0, 'migrated' => 0],
    'medias' => ['source' => 0, 'migrated' => 0],
];

// Helper functions
function println($message, $color = C_RESET) {
    echo $color . $message . C_RESET . "\n";
}

function printBox($title, $color = C_BLUE) {
    $width = 70;
    $titleLen = strlen($title);
    $padding = ($width - $titleLen - 2) / 2;

    println(str_repeat("═", $width), $color);
    println(str_repeat(" ", (int)floor($padding)) . $title . str_repeat(" ", (int)ceil($padding)), $color);
    println(str_repeat("═", $width), $color);
}

// $substep is a flag: when true, $step is printed as an indented sub-step
function printStep($step, $substep = false) {
    if ($substep) {
        println("  ├─ " . $step, C_CYAN);
    } else {
        println("\n" . C_BOLD . "▶ " . $step . C_RESET);
    }
}

function printStat($label, $source, $migrated, $indent = " ") {
    // Loose comparison: COUNT(*) comes back from PDO as a string
    $status = ($source == $migrated) ? C_GREEN . "✓" : C_YELLOW . "⚠";
    println($indent . "📊 {$label}: {$source} source → {$migrated} migré(s) {$status}" . C_RESET);
}

function connectDatabases($sourceDbName, $targetDbName) {
    global $sourceDb, $targetDb;

    printStep("Connexion aux bases de données");

    try {
        // Source database
        $dsn = sprintf('mysql:host=%s;port=%d;dbname=%s;charset=utf8mb4',
            DB_HOST, DB_PORT, $sourceDbName);
        $sourceDb = new PDO($dsn, DB_USER_ROOT, DB_PASS_ROOT, [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
        ]);
        printStep("Source connectée: {$sourceDbName}", true);

        // Target database
        $dsn = sprintf('mysql:host=%s;port=%d;dbname=%s;charset=utf8mb4',
            DB_HOST, DB_PORT, $targetDbName);
        $targetDb = new PDO($dsn, DB_USER, DB_PASS, [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
        ]);
        printStep("Cible connectée: {$targetDbName}", true);

        return true;
    } catch (PDOException $e) {
        println("✗ Erreur connexion: " . $e->getMessage(), C_RED);
        return false;
    }
}

function getEntityInfo($entityId) {
    global $sourceDb;

    $stmt = $sourceDb->prepare("
        SELECT rowid, libelle, cp, ville
        FROM users_entites
        WHERE rowid = ?
    ");
    $stmt->execute([$entityId]);
    return $stmt->fetch();
}

function migrateReferenceTable($tableName) {
    global $sourceDb, $targetDb;

    printStep("Migration table: {$tableName}");

    // Count source rows (cast: fetchColumn() returns a string)
    $count = (int)$sourceDb->query("SELECT COUNT(*) FROM {$tableName}")->fetchColumn();
    printStep("Source: {$count} enregistrements", true);

    if ($count === 0) {
        printStep("Aucune donnée à migrer", true);
        return 0;
    }

    // Fetch the data
    $rows = $sourceDb->query("SELECT * FROM {$tableName}")->fetchAll();

    // Build the upsert statement
    $columns = array_keys($rows[0]);
    $placeholders = array_map(fn($col) => ":{$col}", $columns);

    $sql = sprintf(
        "INSERT INTO %s (%s) VALUES (%s) ON DUPLICATE KEY UPDATE %s",
        $tableName,
        implode(', ', $columns),
        implode(', ', $placeholders),
        implode(', ', array_map(fn($col) => "{$col} = VALUES({$col})", $columns))
    );

    $stmt = $targetDb->prepare($sql);

    $success = 0;
    foreach ($rows as $row) {
        try {
            $stmt->execute($row);
            $success++;
        } catch (PDOException $e) {
            // Ignore individual row errors
        }
    }

    printStep("Migré: {$success}/{$count}", true);
    return $success;
}

function migrateEntite($entityId) {
    global $sourceDb, $targetDb, $stats;

    printStep("ÉTAPE 1: Migration de l'entité #{$entityId}");

    // Fetch the source entity
    $stmt = $sourceDb->prepare("
        SELECT * FROM users_entites WHERE rowid = ?
    ");
    $stmt->execute([$entityId]);
    $entity = $stmt->fetch();

    if (!$entity) {
        println("  ✗ Entité introuvable", C_RED);
        return false;
    }

    $stats['entites']['source'] = 1;

    println("  📋 Entité: " . $entity['libelle']);
    println("  📍 Code postal: " . ($entity['cp'] ?? 'N/A'));
    println("  🏙️ Ville: " . ($entity['ville'] ?? 'N/A'));

    // Encrypt the sensitive fields
    $encryptedName = ApiService::encryptSearchableData($entity['libelle']);
    $encryptedEmail = !empty($entity['email']) ? ApiService::encryptSearchableData($entity['email']) : '';
    $encryptedPhone = !empty($entity['phone']) ? ApiService::encryptData($entity['phone']) : '';
    $encryptedMobile = !empty($entity['mobile']) ? ApiService::encryptData($entity['mobile']) : '';

    // Upsert into the target
    $sql = "INSERT INTO entites (
        id, encrypted_name, code_postal, ville, encrypted_email, encrypted_phone, encrypted_mobile,
        fk_region, fk_type, chk_active, created_at, updated_at
    ) VALUES (
        :id, :name, :cp, :ville, :email, :phone, :mobile,
        :region, :type, :active, :created, :updated
    ) ON DUPLICATE KEY UPDATE
        encrypted_name = VALUES(encrypted_name),
        code_postal = VALUES(code_postal),
        ville = VALUES(ville)";

    $stmt = $targetDb->prepare($sql);
    $stmt->execute([
        'id' => $entity['rowid'],
        'name' => $encryptedName,
        'cp' => $entity['cp'] ?? '',
        'ville' => $entity['ville'] ?? '',
        'email' => $encryptedEmail,
        'phone' => $encryptedPhone,
        'mobile' => $encryptedMobile,
        'region' => $entity['fk_region'] ?? 1,
        'type' => $entity['fk_type'] ?? 1,
        'active' => $entity['active'] ?? 1,
        'created' => $entity['date_creat'],
        'updated' => $entity['date_modif']
    ]);

    $stats['entites']['migrated'] = 1;

    printStat("Entité", 1, 1);

    return true;
}

function migrateUsers($entityId) {
    global $sourceDb, $targetDb, $stats;

    printStep("ÉTAPE 2: Migration des utilisateurs");

    // Count source rows (cast: fetchColumn() returns a string)
    $count = $sourceDb->prepare("SELECT COUNT(*) FROM users WHERE fk_entite = ? AND active = 1");
    $count->execute([$entityId]);
    $sourceCount = (int)$count->fetchColumn();

    $stats['users']['source'] = $sourceCount;
    println("  📊 Source: {$sourceCount} utilisateurs actifs");

    if ($sourceCount === 0) {
        println("  ⚠️ Aucun utilisateur à migrer", C_YELLOW);
        return 0;
    }

    // Fetch the users
    $stmt = $sourceDb->prepare("
        SELECT * FROM users WHERE fk_entite = ? AND active = 1
    ");
    $stmt->execute([$entityId]);
    $users = $stmt->fetchAll();

    $success = 0;
    foreach ($users as $user) {
        try {
            $encryptedName = ApiService::encryptSearchableData($user['nom']);
            $encryptedUsername = !empty($user['username']) ? ApiService::encryptSearchableData($user['username']) : '';
            $encryptedEmail = !empty($user['email']) ? ApiService::encryptSearchableData($user['email']) : '';
            $encryptedPhone = !empty($user['telephone']) ? ApiService::encryptData($user['telephone']) : '';
            $encryptedMobile = !empty($user['mobile']) ? ApiService::encryptData($user['mobile']) : '';

            $sql = "INSERT INTO users (
                id, fk_entite, fk_role, encrypted_name, first_name,
                encrypted_user_name, user_pass_hash, encrypted_email,
                encrypted_phone, encrypted_mobile, chk_active, created_at, updated_at
            ) VALUES (
                :id, :entity, :role, :name, :firstname,
                :username, :pass, :email,
                :phone, :mobile, :active, :created, :updated
            ) ON DUPLICATE KEY UPDATE
                encrypted_name = VALUES(encrypted_name),
                encrypted_email = VALUES(encrypted_email)";

            $stmt = $targetDb->prepare($sql);
            $stmt->execute([
                'id' => $user['rowid'],
                'entity' => $entityId,
                'role' => $user['fk_role'] ?? 1,
                'name' => $encryptedName,
                'firstname' => $user['prenom'] ?? '',
                'username' => $encryptedUsername,
                'pass' => $user['password'] ?? '',
                'email' => $encryptedEmail,
                'phone' => $encryptedPhone,
                'mobile' => $encryptedMobile,
                'active' => 1,
                'created' => $user['date_creat'],
                'updated' => $user['date_modif']
            ]);

            $success++;
        } catch (PDOException $e) {
            // Ignore individual row errors
        }
    }

    $stats['users']['migrated'] = $success;
    printStat("Utilisateurs", $sourceCount, $success);

    return $success;
}

function migrateOperations($entityId, $limit = 3) {
    global $sourceDb, $targetDb, $stats;

    printStep("ÉTAPE 3: Migration des opérations (limite: {$limit})");

    // Count all operations
    $count = $sourceDb->prepare("SELECT COUNT(*) FROM operations WHERE fk_entite = ? AND active = 1");
    $count->execute([$entityId]);
    $totalCount = (int)$count->fetchColumn();

    println("  📊 Total disponible: {$totalCount} opérations");
    println("  🎯 Limitation: {$limit} dernières opérations");

    $stats['operations']['source'] = min($limit, $totalCount);

    // Fetch the N most recent operations
    // (LIMIT must be bound as an integer, otherwise PDO quotes it)
    $stmt = $sourceDb->prepare("
        SELECT * FROM operations
        WHERE fk_entite = ? AND active = 1
        ORDER BY date_creat DESC
        LIMIT ?
    ");
    $stmt->bindValue(1, (int)$entityId, PDO::PARAM_INT);
    $stmt->bindValue(2, (int)$limit, PDO::PARAM_INT);
    $stmt->execute();
    $operations = $stmt->fetchAll();

    if (empty($operations)) {
        println("  ⚠️ Aucune opération à migrer", C_YELLOW);
        return [];
    }

    $migratedOps = [];
    foreach ($operations as $op) {
        try {
            $sql = "INSERT INTO operations (
                id, fk_entite, libelle, date_deb, date_fin,
                chk_distinct_sectors, chk_active, created_at, updated_at
            ) VALUES (
                :id, :entity, :libelle, :datedeb, :datefin,
                :distinct, :active, :created, :updated
            ) ON DUPLICATE KEY UPDATE
                libelle = VALUES(libelle)";

            $stmt = $targetDb->prepare($sql);
            $stmt->execute([
                'id' => $op['rowid'],
                'entity' => $entityId,
                'libelle' => $op['libelle'],
                'datedeb' => $op['date_deb'],
                'datefin' => $op['date_fin'],
                'distinct' => $op['chk_distinct_sectors'] ?? 0,
                'active' => 1,
                'created' => $op['date_creat'],
                'updated' => $op['date_modif']
            ]);

            $migratedOps[] = $op['rowid'];
            $stats['operations']['migrated']++;

            println("  ├─ Opération #{$op['rowid']}: " . $op['libelle'], C_GREEN);
        } catch (PDOException $e) {
            println("  ├─ ✗ Erreur opération #{$op['rowid']}: " . $e->getMessage(), C_RED);
        }
    }

    printStat("Opérations", count($operations), count($migratedOps));

    return $migratedOps;
}

function migrateOperationDetails($operationId, $entityId) {
    global $sourceDb, $targetDb, $stats;

    println("\n  " . C_BOLD . "┌─ Détails opération #{$operationId}" . C_RESET);

    // 1. Count the passages
    $passCount = $sourceDb->prepare("SELECT COUNT(*) FROM ope_pass WHERE fk_operation = ?");
    $passCount->execute([$operationId]);
    $nbPassages = $passCount->fetchColumn();

    println("  │ 📊 Passages disponibles: {$nbPassages}");

    // 2. Count the ope_users associations
    $opeUsersCount = $sourceDb->prepare("SELECT COUNT(*) FROM ope_users WHERE fk_operation = ?");
    $opeUsersCount->execute([$operationId]);
    $nbOpeUsers = $opeUsersCount->fetchColumn();

    $stats['ope_users']['source'] += $nbOpeUsers;
    println("  │ 👥 Associations users: {$nbOpeUsers}");

    // 3. Count the sectors (via ope_users_sectors)
    $sectorsCount = $sourceDb->prepare("
        SELECT COUNT(DISTINCT ous.fk_sector)
        FROM ope_users_sectors ous
        WHERE ous.fk_operation = ?
    ");
    $sectorsCount->execute([$operationId]);
    $nbSectors = $sectorsCount->fetchColumn();

    println("  │ 🗺️ Secteurs distincts: {$nbSectors}");

    println("  └─ " . C_CYAN . "Migration des données associées..." . C_RESET);

    // Migration of ope_users (simplified in this script)
    // ... (actual migration code goes here)

    $stats['ope_pass']['source'] += $nbPassages;
}

// === MAIN ===

function parseArguments($argv) {
    $args = [
        'source-db' => null,
        'target-db' => 'pra_geo',
        'entity-id' => null,
        'limit-operations' => 3,
        'help' => false
    ];

    foreach ($argv as $arg) {
        if (strpos($arg, '--') === 0) {
            $parts = explode('=', substr($arg, 2), 2);
            $key = $parts[0];
            $value = $parts[1] ?? true;

            if (array_key_exists($key, $args)) {
                $args[$key] = $value;
            }
        }
    }

    return $args;
}

// CLI only
if (php_sapi_name() !== 'cli') {
    die("Ce script doit être exécuté en ligne de commande.\n");
}

$args = parseArguments($argv);

if ($args['help'] || !$args['source-db'] || !$args['entity-id']) {
    echo <<<HELP

Usage: php migrate_from_backup_verbose.php [OPTIONS]

Options:
  --source-db=NAME       Base source (ex: geosector_20251008) [REQUIS]
  --target-db=NAME       Base cible (défaut: pra_geo)
  --entity-id=ID         ID de l'entité à migrer [REQUIS]
  --limit-operations=N   Nombre d'opérations à migrer (défaut: 3)
  --help                 Affiche cette aide

Exemple:
  php migrate_from_backup_verbose.php \\
    --source-db=geosector_20251008 \\
    --target-db=pra_geo \\
    --entity-id=1178 \\
    --limit-operations=3

HELP;
    exit($args['help'] ? 0 : 1);
}

$sourceDbName = $args['source-db'];
$targetDbName = $args['target-db'];
$entityId = (int)$args['entity-id'];
$limitOperations = (int)$args['limit-operations'];

// Banner
printBox("MIGRATION VERBOSE - DÉTAILS TABLE PAR TABLE", C_BLUE);
println("📅 Date: " . date('Y-m-d H:i:s'));
println("📁 Source: {$sourceDbName}");
println("📁 Cible: {$targetDbName}");
println("🎯 Entité: #{$entityId}");
println("📊 Limite opérations: {$limitOperations}");
println("");

// Connect
if (!connectDatabases($sourceDbName, $targetDbName)) {
    exit(1);
}

// Fetch the entity info
$entityInfo = getEntityInfo($entityId);
if (!$entityInfo) {
    println("✗ Entité #{$entityId} introuvable", C_RED);
    exit(1);
}

println("\n📋 Entité trouvée: " . $entityInfo['libelle']);
println("📍 CP: " . ($entityInfo['cp'] ?? 'N/A') . " - Ville: " . ($entityInfo['ville'] ?? 'N/A'));
println("");

// Migrate the reference tables (x_*)
printBox("TABLES DE RÉFÉRENCE", C_CYAN);
$referenceTables = ['x_devises', 'x_entites_types', 'x_types_passages',
                    'x_types_reglements', 'x_users_roles'];
foreach ($referenceTables as $table) {
    migrateReferenceTable($table);
}

// Migrate the entity
printBox("MIGRATION ENTITÉ", C_CYAN);
if (!migrateEntite($entityId)) {
    println("✗ Échec migration entité", C_RED);
    exit(1);
}

// Migrate the users
printBox("MIGRATION UTILISATEURS", C_CYAN);
migrateUsers($entityId);

// Migrate the operations
printBox("MIGRATION OPÉRATIONS", C_CYAN);
$migratedOps = migrateOperations($entityId, $limitOperations);

// Per-operation details
foreach ($migratedOps as $opId) {
    migrateOperationDetails($opId, $entityId);
}

// Final summary
printBox("RÉSUMÉ DE LA MIGRATION", C_GREEN);
foreach ($stats as $table => $data) {
    if ($data['source'] > 0 || $data['migrated'] > 0) {
        printStat(ucfirst($table), $data['source'], $data['migrated'], " ");
    }
}

println("\n✅ Migration terminée avec succès!", C_GREEN);
exit(0);
Some files were not shown because too many files have changed in this diff