# Advanced Configuration
## Engine API for high-availability setups
AURA Engine supports single and redundant deployment modes for high-availability scenarios.
### Single Deployment
Usually Engine API is deployed on the same host as the [Engine](https://gitlab.servus.at/aura/engine).
> In your live deployment you might not want to expose the API directly on the web. For security reasons it's highly recommended to guard it with a reverse proxy such as NGINX.
<img src="images/engine-api_single.png" width="550" />
### Redundant Deployment
This scenario involves two Engine instances. Deploy one Engine API on the host of each Engine instance. Additionally, set up a third Engine API instance, the so-called _Synchronization Node_.
This sync instance is in charge of synchronizing playlogs and managing the active engine state.
<img src="images/engine-api_redundancy.png" width="820" />
#### Managing Active Engine State
To avoid duplicate playlog storage, the _Synchronization Node_ needs to know which Engine is currently active. This is achieved by an external _Status Monitor_ component
which tracks the heartbeat of both engines. If the Status Monitor identifies one Engine as dysfunctional, it sends a REST request to the _Sync Node_, informing it
that the second, functional Engine instance has been activated.
The history of active Engine instances is stored in the database of the _Sync Node_. It is not only used for playlog syncing, but is also handy as an audit log.
> At the moment AURA doesn't provide its own _Status Monitor_ solution. You'll need to integrate your own, self-built component which tracks the heartbeat of the engines and posts the active engine state to the _Sync Node_.
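For illustration, such a status change could be posted to the _Sync Node_ with a plain HTTP call. This is only a sketch: the endpoint path and payload below are hypothetical and need to be adapted to the actual Engine API specification of your deployment.

```bash
# Hypothetical example: a Status Monitor tells the Sync Node that Engine 2 is now active.
# The endpoint path and payload are assumptions; check the Engine API's OpenAPI spec.
curl -X POST "http://api.sync.local:8008/api/v1/source/active" \
     -H "Content-Type: application/json" \
     -d '{"source_number": 2}'
```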
#### Playlog Synchronization for High-Availability Deployment Scenarios
Whenever a new audio source starts playing, AURA Engine logs it to its local Engine API instance via a REST call. The _Local API Server_ stores this information in its
local database and then performs a POST request to the _Synchronization API Server_. The _Sync Node_ checks whether this request is coming from the currently active Engine instance.
If so, it stores the information in its playlog database. This keeps the playlogs of the currently active Engine instances in sync with the _Engine API synchronization node_,
which only ever stores the valid (i.e. actually played) playlog records.
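To make the flow more concrete, a playlog report from Engine to its local Engine API node could look like the following sketch. The endpoint path and payload fields are illustrative assumptions, not the definitive API contract; consult the Engine API's OpenAPI specification for the actual schema.

```bash
# Hypothetical example: Engine reports a playlog entry to its local Engine API node.
# Path and field names are assumptions; adapt them to the real Engine API schema.
curl -X POST "http://localhost:8008/api/v1/playlog" \
     -H "Content-Type: application/json" \
     -d '{
           "track_start": "2024-01-01T12:00:00",
           "track_artist": "Some Artist",
           "track_title": "Some Title",
           "log_source": 1
         }'
```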
##### Active Sync
This top-down process of forwarding any playlog that arrives at an _Engine Node_ to the _Synchronization Node_ is called **Active Sync**. **Active Sync**
doesn't work in every scenario, as the _Synchronization Node_ may be unavailable, e.g. due to a network outage or maintenance. In this situation the playlog
obviously cannot be synced, so the local playlog record at the _Engine Node_ is marked as "not synced".
##### Passive Sync
Such marked entries are the focus of the secondary synchronization approach, the so-called **Passive Sync**: whenever the _Synchronization Node_ is up and running again, an automated job
on this node continuously checks the remote nodes for records marked as "unsynced". Any such records indicate that there has been an outage of the _Sync Node_
and that they are still pending synchronization. The job on the _Sync Node_ then reads those records in batches from the Engine Node and stores them in its local database.
It also keeps track of when the last sync happened, avoiding unnecessary queries against the remote nodes.
To avoid the **Passive Sync** job causing high traffic on an Engine instance, these batches are read with a configurable delay (see `sync_interval` and
`sync_step_sleep` in the _Sync Node_ configuration; both values are in seconds) and a configurable batch size (`sync_batch_size`; the maximum number of unsynced playlogs read at once).
## Configure Federation
Next, configure the type of federation. Depending on how you want to run your
Engine API node and where it is deployed, you'll need to uncomment one of the following federation sections.
If you want to test federation, you can use the configurations located in `test/config`.
#### Engine 1 Node
Use this section if you are running [Engine](https://gitlab.servus.at/aura/engine) standalone or if this is the first API node in a redundant deployment.
Replace `api.sync.local` with the actual host name or IP of your sync node.
```ini
# NODE 1
host_id=1
sync_host="http://api.sync.local:8008"
```
#### Engine 2 Node
Use this section if this is the second API node in a redundant deployment.
Replace `api.sync.local` with the actual host name or IP of your sync node.
```ini
# NODE 2
host_id=2
sync_host="http://api.sync.local:8008"
```
#### Synchronization Node
This is the synchronization instance in a redundant setup. This instance combines all valid information coming from Engine API 1 and 2.
Replace `engine1.local` and `engine2.local` with the actual details of your main nodes.
```ini
# NODE SYNC
host_id=0
main_host_1="http://engine1.local:8008"
main_host_2="http://engine2.local:8008"
# The Engine which is seen as "active" as long as no other information is received from the Status Monitor
default_source=1
# How often the Engine 1 and 2 nodes should be checked for unsynced records (in seconds)
sync_interval=3600
# How many unsynced records should be retrieved at once (count of records)
sync_batch_size=100
# How long to wait until the next batch is requested (in seconds)
sync_step_sleep=2
```
## Daemonizing Engine API
Engine API can also be deployed using [Systemd](#running-with-systemd) or [Supervisor](#running-with-supervisor).
### Running with Systemd
The Systemd unit file configuration expects the service to run under the user `engineuser`. To create such a user, type:
```bash
sudo adduser engineuser
sudo adduser engineuser sudo
```
Copy the systemd unit file from `config/sample/systemd` to `/etc/systemd/system`. This configuration file expects
Engine API to be installed under `/opt/aura/engine-api` with `engineuser` owning the files.
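If you don't have the sample file at hand, a minimal unit could look like the sketch below. The `ExecStart` command and the run script name are assumptions; prefer the sample shipped in `config/sample/systemd` and align the paths with your actual installation.

```ini
# Minimal sketch of /etc/systemd/system/aura-engine-api.service.
# Paths and the start command are assumptions; use the shipped sample as reference.
[Unit]
Description=AURA Engine API
After=network.target

[Service]
Type=simple
User=engineuser
WorkingDirectory=/opt/aura/engine-api
# Adjust to however you start Engine API (e.g. a run script or gunicorn)
ExecStart=/opt/aura/engine-api/run.sh prod
Restart=on-failure

[Install]
WantedBy=multi-user.target
```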
Let's start the service as root
```bash
systemctl start aura-engine-api
```
And check if it has started successfully
```bash
systemctl status aura-engine-api
```
If you experience issues and need more information, check the syslog while starting the service
```bash
tail -f /var/log/syslog
```
You can stop or restart the service with one of these
```bash
systemctl stop aura-engine-api
systemctl restart aura-engine-api
```
Note that any requirements from the [Installation](#installation) step need to be available to that user.
### Running with Supervisor
As an alternative to Systemd, you can start Engine API using [Supervisor](http://supervisord.org/). You can find an example Supervisor configuration file
in `config/sample/supervisor/aura-engine-api.conf`. Follow the initial steps of the Systemd setup.
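For orientation, a Supervisor program entry could look like the following sketch. The command, paths, and log locations are assumptions; use the sample file from the repository as your reference.

```ini
; Sketch of a Supervisor program entry for Engine API.
; Command and paths are assumptions; prefer config/sample/supervisor/aura-engine-api.conf.
[program:aura-engine-api]
command=/opt/aura/engine-api/run.sh prod
directory=/opt/aura/engine-api
user=engineuser
autostart=true
autorestart=true
stderr_logfile=/var/log/aura-engine-api.err.log
stdout_logfile=/var/log/aura-engine-api.out.log
```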