## 🐛 Troubleshooting
| Service won't start | Prediction fails with timeout | Model loading fails |
|---|---|---|
| Check Docker logs: `docker-compose logs` | Increase `RL_INFERENCE_SERVICE_TIMEOUT` | Verify the model path in the environment variables |
| Verify the model file exists and is in the correct format | Check that the Python service is running: `docker-compose ps` | Ensure `model.zip` is a valid stable-baselines3 PPO model |
| Check port availability (8000, 8080) | Verify network connectivity: `docker-compose exec java-gateway ping rl-inference` | Check file permissions |
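For the "Model loading fails" column, a quick sanity check can rule out a corrupt or wrong-format file before restarting the service. The sketch below is a hypothetical helper (not part of this project): it only verifies that the file is a zip archive containing the members stable-baselines3 typically writes (`data`, `policy.pth`); the authoritative check is loading it with `stable_baselines3.PPO.load`.

```python
import zipfile

# Core members stable-baselines3 writes into a saved model archive
# (assumption based on the SB3 save format; adjust if your version differs).
EXPECTED_MEMBERS = {"data", "policy.pth"}

def looks_like_sb3_model(path: str) -> bool:
    """Cheap pre-flight check: is `path` a zip with SB3's expected members?

    Returns False for missing files, non-zip files, and archives that
    lack the expected members. Does NOT prove the model is loadable;
    use stable_baselines3.PPO.load for full validation.
    """
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return EXPECTED_MEMBERS.issubset(set(zf.namelist()))

if __name__ == "__main__":
    # Adjust the path to wherever your environment variables point.
    print(looks_like_sb3_model("model.zip"))
```

If this returns `False`, the file is likely truncated, mis-copied, or not an SB3 export at all, which matches the "ensure `model.zip` is a valid stable-baselines3 PPO model" row above.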
Also see the Support section.