Installation of development version
Here's a brief overview of the installation process for development purposes.
It is assumed that Java 8 or higher is installed. Other than that, there are no mandatory external dependencies.
Octopus can use either a standalone PostgreSQL database or the embedded H2 database. When using H2, you may skip steps 1 and 2 in this section.
Visit https://www.postgresql.org/download/ and download a binary package for your OS. Depending on the OS/package and the choices made during installation, the installed PostgreSQL instance will be configured to start either automatically or manually. If the database does not start automatically (which you can check by looking for port 5432 in the list of open TCP ports), take a look at `data/logs/pg10/install.log`: it contains the command for starting the database manually.
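A quick way to check whether anything is listening on port 5432, with no extra tools, is bash's built-in `/dev/tcp` pseudo-device (a sketch; `pg_isready` from the PostgreSQL client package is a more thorough alternative, if installed):

```shell
# port_open HOST PORT — succeeds if a TCP connection can be established.
# Uses bash's /dev/tcp pseudo-device, so no netcat/lsof required.
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open localhost 5432; then
    echo "PostgreSQL appears to be listening on port 5432"
else
    echo "port 5432 is closed; see data/logs/pg10/install.log for the manual start command"
fi
```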
pgAdmin is a convenient GUI for browsing and administering a PostgreSQL instance. It is a separate application, usually installed from the same package as PostgreSQL itself.
Each Octopus instance requires a separate database schema. By default, Octopus assumes that the database JDBC URL is `jdbc:postgresql://localhost:5432/octopus?currentSchema=octopus` and that the credentials are `postgres:postgres`, i.e.: (1) PostgreSQL is running on the same host as Octopus on the default port 5432, (2) there is a database named `octopus`, (3) there is a schema named `octopus` inside the `octopus` database, (4) both the database and the schema are owned by the user `postgres` (with a password equal to the username).
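For reference, the URL is assembled from four parts: host, port, database, and schema. A tiny illustrative helper (not part of Octopus) that puts them together:

```shell
# Build a PostgreSQL JDBC URL with an explicit schema (illustrative helper).
jdbc_url() {
    local host="$1" port="$2" db="$3" schema="$4"
    printf 'jdbc:postgresql://%s:%s/%s?currentSchema=%s\n' "$host" "$port" "$db" "$schema"
}

jdbc_url localhost 5432 octopus octopus
# → jdbc:postgresql://localhost:5432/octopus?currentSchema=octopus
```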
This means that what you need to do is:
- launch pgAdmin and connect to the PostgreSQL instance at `localhost:5432`
- check whether the `postgres` user exists; if not, create it:
CREATE ROLE postgres LOGIN
ENCRYPTED PASSWORD 'md53175bce1d3201d16594cebf9d7eb3f9d'
SUPERUSER INHERIT CREATEDB CREATEROLE REPLICATION;
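The `ENCRYPTED PASSWORD` value above uses PostgreSQL's legacy md5 format: the literal prefix `md5` followed by `md5(password || username)`. Since both the password and the username are `postgres` here, you can reproduce the hash yourself (assuming `md5sum` is available, e.g. on Linux):

```shell
# PostgreSQL md5 password hash = 'md5' + md5(password concatenated with username).
# For user 'postgres' with password 'postgres':
printf 'postgrespostgres' | md5sum | awk '{print "md5" $1}'
# should print the same hash used in the CREATE ROLE statement above
```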
- create the `octopus` database:
CREATE DATABASE octopus
WITH OWNER = postgres
ENCODING = 'UTF8'
TABLESPACE = pg_default
LC_COLLATE = 'C'
LC_CTYPE = 'C'
CONNECTION LIMIT = -1;
- create the `octopus` schema (connect to the `octopus` database first, since `CREATE SCHEMA` operates on the current database):
CREATE SCHEMA octopus
AUTHORIZATION postgres;
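If you prefer the command line to pgAdmin, the statements above can be collected into a script and replayed with `psql`. A minimal sketch (the file name is illustrative; note the `\connect`, since `CREATE SCHEMA` must run while connected to the `octopus` database):

```shell
# Write the database/schema setup into a reusable script.
cat > setup_octopus.sql <<'SQL'
CREATE DATABASE octopus WITH OWNER = postgres ENCODING = 'UTF8'
    LC_COLLATE = 'C' LC_CTYPE = 'C' CONNECTION LIMIT = -1;
\connect octopus
CREATE SCHEMA octopus AUTHORIZATION postgres;
SQL

# On a live server (assuming the postgres role already exists), replay it with:
#   psql -h localhost -p 5432 -U postgres -f setup_octopus.sql
```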
Clone the repository and build it (skipping tests to speed up the build):
$ git clone https://github.com/manaty/octopus.git
$ cd octopus
$ ./gradlew build -x test
Navigate to the repository's root directory and run the following command, replacing the `<version>` placeholder with the proper value, which depends on the current version of the project; you can look it up in the parent `build.gradle` file:
For Postgresql:
$ java -jar server/build/libs/server-<version>-all.jar --config=server/src/main/dist/config/db-postgres.yml --lb-update --lb-default-schema=octopus
For H2:
$ java -jar server/build/libs/server-<version>-all.jar --config=server/src/main/dist/config/db-h2.yml --lb-update
Check the logs for any errors. You may also refresh the schema in pgAdmin to see whether the new tables have been created.
From the repository's root directory, run the following command, replacing the `<report-output-directory>` and `<version>` placeholders with proper values, and replacing the JDBC config file name `<db-config>` with either `db-postgres` or `db-h2`, depending on which database you are going to use:
$ java -Dbq.server.reportRoot=<report-output-directory> -jar server/build/libs/server-<version>-all.jar --config=server/src/main/dist/config/server.yml --config=server/src/main/dist/config/<db-config>.yml --octopus-server
If Emotiv PRO software is not installed (or not running), you will see an error log message like this:
[24/May/2019:11:52:26] vert.x-eventloop-thread-2 ERROR n.m.o.s.e.CortexClientImpl: Failed to connect websocket
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:54321
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
... 11 common frames omitted
If you see it, don't worry: it's perfectly fine. Other parts of Octopus, such as the Client API and the Web API, will function properly even if the Emotiv server is unavailable.
When you switch to a different git revision (by pulling new changes, switching to a branch, or checking out an earlier commit), there is a risk that the structure of the Octopus schema that already exists in your database is incompatible with the new revision. To avoid startup and runtime problems (which are not always obvious), make sure to run the following commands:
- First, re-build the artifacts:
$ ./gradlew build -x test
- Second, drop all objects from the database schema by using Liquibase:
For Postgresql:
$ java -jar server/build/libs/server-<version>-all.jar --config=server/src/main/dist/config/db-postgres.yml --lb-drop-all --lb-default-schema=octopus
For H2:
$ java -jar server/build/libs/server-<version>-all.jar --config=server/src/main/dist/config/db-h2.yml --lb-drop-all
- Finally, initialize the database schema from scratch as in step I.5.
This covers the situation where you, as a developer, want to check interoperation between two Octopus instances, but have no possibility or desire to set up VMs or separate machines.
Two Octopus instances may share a single database schema and operate just fine (in the sense that data consistency is currently not compromised, as there are no updates and no foreign keys or logical correlations between different records).
However, if you need to verify that reports work correctly (especially the time adjustment part), it may be a good idea to create a separate schema for each Octopus instance. The easiest way to do this is to repeat steps I.2 and I.5 for each Octopus instance, changing the schema name on each iteration (e.g. appending a unique number to the name: `octopus2`, `octopus3`, etc.). The important thing to remember is that you also need to change one of the command-line parameters of the Liquibase command. For instance, if the database schema name is `octopus2`, then instead of the default command you need to use the following (note that `--lb-default-schema` is changed):
$ java -jar db/build/libs/db-<version>-all.jar --config=server/src/main/dist/config/db-postgres.yml --lb-update --lb-default-schema=octopus2
NOTE: When using the H2 database, the `--lb-default-schema` parameter is not needed.
The easiest way to change the configuration is to copy the `server.yml` file under a different name (e.g. `server2.yml`), then update the configuration values directly in the new file.
Values that you'll need to update are:
- `grpc.port`: the default value is 9991, so pick a different one (e.g. 9992)
- `jetty.connectors[@type="http"].port`: the default value is 9998, so pick a different one (e.g. 9999)
Depending on what you decided in step 1, you may also need to update `jdbc.octopus.url`. The default value is `jdbc:postgresql://localhost:5432/octopus?currentSchema=octopus`; you only need to change the last parameter, e.g. `?currentSchema=octopus2`. Note that the JDBC settings are in a different file (named `db-<type>.yml`), so you will need to make a copy of that file as well.
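Since only the `currentSchema` parameter of the URL needs to change, the second copy can be derived mechanically. A sketch with `sed` (the file content below is a stand-in; the real `db-postgres.yml` contains more settings, so adjust the path and name to your checkout):

```shell
# Stand-in for the real db-postgres.yml (the actual file has more settings).
cat > db-postgres.yml <<'YML'
jdbc:
  octopus:
    url: "jdbc:postgresql://localhost:5432/octopus?currentSchema=octopus"
YML

# Derive the second instance's config by rewriting only the schema parameter.
sed 's/currentSchema=octopus/currentSchema=octopus2/' db-postgres.yml > db-postgres2.yml
grep currentSchema db-postgres2.yml
```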
Values that you might want to update are:
- `grpc.master`: an object node which is absent by default, meaning the server is a master; for slave servers, you need to specify the master's address:
grpc:
port: 9992
master:
address: localhost:9991
Similar to I.6, but you need to use a different config for each Octopus instance being run, e.g.:
# Run master instance (assuming that server.yml and db-postgres.yml are master's configs)
$ java -Dbq.server.reportRoot=<report-output-directory> -jar server/build/libs/server-<version>-all.jar --config=server/src/main/dist/config/server.yml --config=server/src/main/dist/config/db-postgres.yml --octopus-server
# Run slave instance (assuming that server2.yml and db-postgres2.yml are slave's configs with `grpc.master` pointing to master server)
$ java -Dbq.server.reportRoot=<report-output-directory> -jar server/build/libs/server-<version>-all.jar --config=server/src/main/dist/config/server2.yml --config=server/src/main/dist/config/db-postgres2.yml --octopus-server
The order in which you launch Octopus instances does not matter: slaves continuously reconnect to masters, so launching the master after a slave, or restarting any server at any time, is fine.
In order to connect Octopus to Emotiv, you need to do the following steps:
Go to the Emotiv website and log in. Navigate to My Account -> Downloads and download a version of Emotiv PRO for your OS. Install it.
The location of the hosts file depends on your OS:
- on Windows, it is usually `C:\Windows\System32\drivers\etc\hosts` (the drive letter may differ)
- on OS X and Linux, it is `/etc/hosts`
Add a mapping for `emotivcortex.com`:
- to `127.0.0.1`, if you run Emotiv PRO on the same host as Octopus
- to another IP address, if Emotiv PRO is running somewhere else
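The mapping is a single line in the hosts file. A sketch, demonstrated on a scratch copy (on a real system edit the actual hosts file, which requires admin rights, e.g. `sudo` on Linux/OS X):

```shell
# Scratch stand-in for the real hosts file (use /etc/hosts on a real system).
HOSTS=./hosts.demo

# Map emotivcortex.com to the machine running Emotiv PRO
# (127.0.0.1 when it runs on the same host as Octopus).
printf '127.0.0.1\temotivcortex.com\n' >> "$HOSTS"

grep emotivcortex.com "$HOSTS"
```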
As in IV.1, go to the Emotiv website and log in with your USERNAME and PASSWORD. This time, navigate to My Account -> Cortex Apps and take note of the APP ID and CLIENT ID of the application named Octopusync. You will also need the CLIENT SECRET, which you have to get from the person who registered the application.
When launching Octopus, supply it with a few extra JVM parameters (replacing placeholders with values mentioned above):
-Dbq.cortex.emotiv.username=<USERNAME>
-Dbq.cortex.emotiv.password=<PASSWORD>
-Dbq.cortex.emotiv.clientId=<CLIENT ID>
-Dbq.cortex.emotiv.clientSecret=<CLIENT_SECRET>
-Dbq.cortex.emotiv.appId=<APP ID>
Alternatively, you may set these values directly in the YML configuration, in the `cortex.emotiv` section.
By default, the connection to the Emotiv websocket is established over plain HTTP. If you want to use SSL, you will need to install the Emotiv host's certificate into the JVM (refer to the keytool reference) and supply Octopus with an extra JVM parameter: `-Dbq.cortex.useSsl=true`. Again, instead of turning on SSL via the command line, you may set this option in the YML configuration.
Normally, Emotiv uses TCP port 54321, so the default Emotiv socket address in Octopus is `emotivcortex.com:54321`. In some circumstances you may want to use a different port (e.g. when connecting to a remote host through a proxy that performs port forwarding; see our test configuration). The Emotiv socket address is specified in the `cortex.cortexServerAddress` property, which you may set in the YML configuration or provide via the JVM parameter `-Dbq.cortex.cortexServerAddress`.
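Put together, the Cortex-related settings from this section might look as follows in YML form. This is only a sketch: the key nesting is inferred from the `bq.cortex.*` property names above, so verify it against the shipped config files before relying on it:

```yaml
cortex:
  useSsl: false
  cortexServerAddress: emotivcortex.com:54321
  emotiv:
    username: <USERNAME>
    password: <PASSWORD>
    clientId: <CLIENT ID>
    clientSecret: <CLIENT SECRET>
    appId: <APP ID>
```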