[EPIC] Schedule programme data model without ORM/DB
The most recent programme data is continuously fetched from the Steering API. To avoid any disruption in play-out (e.g. when the API is not available due to a network outage), the programme is also cached locally.
Engine uses a SQLAlchemy model to rebuild an internal representation of the programme and the relevant schedules for play-out.
Having an ORM and a relatively heavy PostgreSQL server just for this single purpose adds unnecessary overhead. Therefore, leaner ways of caching should be evaluated. This POC aims to:
- Cache the raw JSON endpoint data
- Rebuild the ORM model using simple POPOs (Plain Old Python Objects)
- Use hashing strategies to easily detect updates, where applicable (see the sketch below this list)
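As a rough illustration of these goals, a minimal sketch that caches the raw payload, detects updates via a SHA-256 digest, and rebuilds plain domain objects from the cached JSON. The field names (`id`, `start`, `end`, `show.name`), helper names, and cache handling are assumptions for illustration only, not the actual Steering payload or Engine API:

```python
import hashlib
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class TimeslotEntry:
    """Plain Python object representing a single timeslot (no ORM base class)."""
    timeslot_id: int
    start: str  # ISO 8601 timestamps kept as strings for simplicity
    end: str
    show_name: str


def cache_response(raw_json: str, cache_file: Path) -> bool:
    """Write the raw API payload to disk; return True if the content changed.

    A SHA-256 digest of the previous payload is compared against the new one,
    so callers only rebuild their domain objects when the hash differs.
    """
    new_digest = hashlib.sha256(raw_json.encode("utf-8")).hexdigest()
    if cache_file.exists():
        old_digest = hashlib.sha256(cache_file.read_bytes()).hexdigest()
        if old_digest == new_digest:
            return False  # nothing changed, keep the existing cache
    cache_file.write_text(raw_json, encoding="utf-8")
    return True


def load_entries(cache_file: Path) -> list[TimeslotEntry]:
    """Rebuild POPOs from the cached JSON payload (hypothetical field names)."""
    payload = json.loads(cache_file.read_text(encoding="utf-8"))
    return [
        TimeslotEntry(
            timeslot_id=item["id"],
            start=item["start"],
            end=item["end"],
            show_name=item["show"]["name"],
        )
        for item in payload
    ]
```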
Sub Tasks
- Extend API data fetchers to transparently cache... (#124 - closed)
- Deserialize API response using Steering/Tank Op... (#129 - closed)
- Scheduling domain objects based on POPOs (#133 - closed)
- Refactor API Fetcher
- POC for `TimetableService` as a lightweight replacement for `ProgrammeService`
- Diffing between current and planned schedule using JSON file cache (see the sketch below this list)
- Refactor timetable renderer to work with the new domain model
- Test and integrate timetable and scheduler with new domain model
- Refactor `scheduling` module to improve testability
- Check if uses of `ProgrammeService.engine.engine_time()` can be relocated, to reduce dependencies
- Remove all references to SQLAlchemy and PostgreSQL
- Add some more test cases
- Documentation: Update README.md
- Update docs.aura.radio + Docker Compose settings
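For the diffing sub-task referenced above, a rough sketch of how the current (cached) schedule could be compared against a freshly fetched (planned) one using per-timeslot hashes. The payload structure, the unique `id` field, and the function names are assumptions, not the actual Engine API:

```python
import hashlib
import json
from pathlib import Path


def _entry_hash(entry: dict) -> str:
    """Stable hash of a single timeslot; sort_keys keeps the digest deterministic."""
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()


def diff_schedules(cached_file: Path, planned: list[dict]) -> dict[str, list]:
    """Compare the cached (current) schedule with a freshly fetched (planned) one.

    Returns the IDs of added, removed and changed timeslots, assuming every
    entry carries a unique "id" field.
    """
    current = {
        e["id"]: _entry_hash(e)
        for e in json.loads(cached_file.read_text(encoding="utf-8"))
    }
    incoming = {e["id"]: _entry_hash(e) for e in planned}

    added = [i for i in incoming if i not in current]
    removed = [i for i in current if i not in incoming]
    changed = [i for i in incoming if i in current and incoming[i] != current[i]]
    return {"added": added, "removed": removed, "changed": changed}
```

Only the added and changed timeslots would then need to be handed on to the timetable renderer and scheduler.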