# Quick Start

AI Traffic Control API - 5-Minute Setup Guide
## 1. Select Your Trained Model

```bash
python select_model.py
```

- Available sweeps: `pressure`, `queue`, `diff-waiting-time`
- Locations: `C:\Users\gemer\Sumo\my-network\Results\sweeps*\`
## 2. Start the Services

```bash
# Windows
start.bat

# Linux/Mac
./start.sh
```
## 3. Test the API

```bash
# Get a traffic action
curl http://localhost:8080/api/traffic/action

# Or run the comprehensive test
python test_api.py
```
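The same quick test can be scripted without curl, using only the Python standard library. A minimal sketch, assuming the gateway is on its default port 8080; `get_action` and `parse_action` are illustrative helpers here, not part of the shipped `test_api.py`:

```python
import json
import urllib.request

GATEWAY_URL = "http://localhost:8080/api/traffic/action"  # default port from this guide

def parse_action(body: bytes) -> int:
    """Extract the predicted action from a gateway response body."""
    payload = json.loads(body)
    if payload.get("status") != "success":
        raise RuntimeError(f"gateway returned status {payload.get('status')!r}")
    return payload["predictedAction"]

def get_action(url: str = GATEWAY_URL) -> int:
    """Fetch one traffic action from the running gateway."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return parse_action(resp.read())

if __name__ == "__main__":
    try:
        print("predicted action:", get_action())
    except OSError as exc:  # gateway not running yet
        print("request failed:", exc)
```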
## API Endpoints

### Python FastAPI Service (Port 8000)

- `GET /health` - Health check
- `POST /predict_action` - Action prediction
- `GET /model_info` - Model information
- `GET /docs` - Interactive API documentation (Swagger UI)
### Java API Gateway (Port 8080)

- `GET /api/traffic/health` - Health check
- `GET /api/traffic/action` - Get a traffic action (auto-generated observations)
- `POST /api/traffic/action` - Predict an action from custom observations
## Example Requests

### Get Traffic Action

```bash
curl http://localhost:8080/api/traffic/action
```

Response:

```json
{
  "predictedAction": 2,
  "signalState": "GREEN",
  "timestamp": 1710521600000,
  "status": "success"
}
```
### Custom Observations

```bash
curl -X POST http://localhost:8080/api/traffic/action \
  -H "Content-Type: application/json" \
  -d '{
    "observations": [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
  }'
```
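Before posting a custom vector, it can help to sanity-check its length on the client side: the nine values above match this example, but the real dimension is set by the trained model's observation space (check `/model_info`). A hedged sketch using only the standard library; `OBS_DIM` and `build_request` are illustrative names, not part of the project:

```python
import json
import urllib.request

OBS_DIM = 9  # assumed to match the example payload; confirm against /model_info

def build_request(observations, url="http://localhost:8080/api/traffic/action"):
    """Build a POST request carrying a custom observation vector."""
    if len(observations) != OBS_DIM:
        raise ValueError(f"expected {OBS_DIM} observations, got {len(observations)}")
    body = json.dumps({"observations": list(observations)}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )

if __name__ == "__main__":
    req = build_request([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.read().decode())
    except OSError as exc:  # gateway not running yet
        print("request failed:", exc)
```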
### Health Check

```bash
curl http://localhost:8080/api/traffic/health
```
## Architecture

Flow: HTTP Request → Java Gateway (8080) → Python FastAPI (8000) → RL Model

Components:

1. Java Spring Boot Gateway - REST API entry point
2. Python FastAPI Service - RL model inference
3. Trained PPO Model - Action prediction
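The flow above can be sketched as three plain functions, one per hop, with the PPO model stubbed out. This is only an illustration of how a request travels through the stack, not the actual gateway or service code:

```python
def model_predict(observations):
    """Stub for the trained PPO model: pick the phase with the largest observation."""
    return max(range(len(observations)), key=lambda i: observations[i])

def inference_service(request):
    """Python FastAPI layer: run the model and wrap the raw action."""
    action = model_predict(request["observations"])
    return {"action": action}

def gateway(request):
    """Java gateway layer: forward to the inference service and enrich the reply."""
    reply = inference_service(request)
    return {"predictedAction": reply["action"], "status": "success"}

if __name__ == "__main__":
    # A request travels gateway -> inference service -> model and back.
    print(gateway({"observations": [0.1, 0.9, 0.3]}))
```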
## Docker Commands

```bash
# Start services
docker-compose up --build

# View logs
docker-compose logs -f

# Stop services
docker-compose down

# Check service status
docker-compose ps
```
## Troubleshooting

### Services won't start

- Ensure Docker is installed: `docker --version`
- Check that ports 8000 and 8080 are available
- Check the logs: `docker-compose logs`

### Model loading fails

- Verify the model file exists: `rl-inference-service/app/trained_models/model.zip`
- Rerun the model selector: `python select_model.py`
- Rebuild: `docker-compose up --build`

### API returns errors

- Check the Python service logs: `docker-compose logs rl-inference`
- Verify the model is loaded: `curl http://localhost:8000/model_info`
- Test health: `curl http://localhost:8080/api/traffic/health`
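The checks above can be rolled into a single probe script that reports each service from the inside out. A sketch assuming the default ports from this guide; `classify` and `probe` are illustrative names, and the up/degraded/down labels are just a coarse reading of the HTTP status:

```python
import urllib.error
import urllib.request

# Default endpoints from this guide; adjust if you changed the ports.
ENDPOINTS = {
    "python /health": "http://localhost:8000/health",
    "python /model_info": "http://localhost:8000/model_info",
    "gateway /api/traffic/health": "http://localhost:8080/api/traffic/health",
}

def classify(status: int) -> str:
    """Map an HTTP status code to a coarse up/degraded/down label."""
    if 200 <= status < 300:
        return "up"
    if status >= 500:
        return "down"
    return "degraded"

def probe(url: str) -> str:
    """Hit one endpoint; unreachable counts as down."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return classify(resp.status)
    except urllib.error.URLError:
        return "down"

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {probe(url)}")
```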
## Next Steps

- Integrate with SUMO: modify the SUMO simulation to call the API
- Custom observations: replace the dummy observations with real traffic data
- Production deployment: use Kubernetes or a cloud platform
- Monitoring: add logging and metrics collection
## Files Overview

| File | Purpose |
|---|---|
| `select_model.py` | Interactive model selector |
| `test_api.py` | API test client |
| `start.bat` / `start.sh` | Quick startup scripts |
| `docker-compose.yml` | Service orchestration |
| `rl-inference-service/` | Python FastAPI service |
| `java-api-gateway/` | Java Spring Boot gateway |
| `README.md` | Full documentation |
For detailed documentation, see `INDEX.md`.