OperatorFabric README


OperatorFabric is a modular, extensible, industrial-strength and field-tested platform for use in electricity, water, and other utility operations.

  • System visualization and console integration

  • Precise alerting

  • Workflow scheduling

  • Historian

  • Scripting (e.g. Python, JavaScript)

It is an open source project initiated by RTE with LF Energy.

1. What does OperatorFabric do in practice?

To perform their duties, an operator has to interact with multiple applications (perform actions, watch for alerts, etc.), which can prove difficult if there are too many of them.

The idea is to aggregate all the notifications from all these applications into a single screen, and to allow the operator to act on them if needed.

Feed screen layout

These notifications are materialized by cards sorted in a feed according to their period of relevance and their severity. When a card is selected in the feed, the right-hand pane displays the details of the card: information about the state of the parent process instance in the third-party application that published it, available actions, etc.

In addition, the cards will also translate as events displayed on a timeline (its design is still under discussion) at the top of the screen. This view will be complementary to the card feed in that it will allow the operator to see at a glance the status of processes for a given period, whereas the feed is more like a "To Do" list.

Part of the value of OperatorFabric is that it makes integration very simple for third-party applications. To start publishing cards to users in an OperatorFabric instance, all they have to do is:

  • Register as a publisher through the "Thirds" service and provide a "bundle" containing Handlebars templates defining how cards should be rendered, i18n information, etc.

  • Publish cards as JSON containing card data through the card publication API (see the sketch below)
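
As an illustration, here is a minimal sketch of a card publication call. The endpoint matches the one exposed by the demo setups (localhost:2102/cards, see the Docker demo section), but the JSON fields are hypothetical placeholders: the authoritative card model is the one defined by the card publication API.

Push a card (illustrative sketch; field names are assumptions)
curl -X POST http://localhost:2102/cards \
  -H "Content-Type: application/json" \
  -d '{"publisher": "my-app", "severity": "ALARM", "data": {}}'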

OperatorFabric will then:

  • Dispatch the cards to the appropriate users (by computing the actual users who should receive the card from the recipient rules defined in the card)

  • Take care of the rendering of the cards, displaying details, actions, inputs etc.

  • Display relevant information from the cards in the timeline

Another aim of OperatorFabric is to make cooperation easier by letting operators forward or send cards to other operators, for example:

  • If they need an input from another operator

  • If they can’t handle a given card for lack of time or because the necessary action is out of their scope

This will replace phone calls or emails, making cooperation more efficient and traceable.

For instance, operators might be interested in knowing why a given decision was made in the past: the cards detailing the decision process steps will be accessible through the Archives screen, showing how the operators reached this agreement.

2. Licensing

This project and all its sub-projects are licensed under Mozilla Public License V2.0. See LICENSE.txt

3. Contributing

Read our CONTRIBUTING file for more information on how to contribute to the project.

4. Requirements

Important
This section describes the project's requirements regardless of installation options. Please see Getting Started below for details on:
  • setting up a development environment with these prerequisites

  • building and running OperatorFabric

4.1. Tools and libraries

  • Gradle 5

  • Java 8.0.163-zulu

  • Maven 3.5.3

  • Docker

  • Docker Compose with 2.1+ file format support

  • Chrome (needed for UI tests in build)

Note
Java 8.0.163-zulu is no longer available for security reasons; use the next version, Java 8.0.192-zulu, instead.
Important
It is highly recommended to use sdkman and nvm to manage tool versions.

Once you have installed sdkman and nvm, you can source the following script to set up your development environment (it sets appropriate versions of gradle, java and maven, plus project variables):

Set up development environment (using sdkman and nvm)
source bin/load_environment_light.sh

4.2. Software

  • RabbitMQ 3.7.6+: AMQP messaging layer for inter-service communication

  • MongoDB 4.0+: persistent card storage

RabbitMQ is required for:

  • Time change push

  • Card AMQP push

  • Multiple service sync

MongoDB is required for:

  • Current Card storage

  • Archived Card storage

  • User Storage

Important
Installing MongoDB and RabbitMQ is not necessary: preconfigured instances are available as docker-compose configuration files under src/main/docker
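
For example, assuming you start from the repository root ($OF_HOME, see Getting Started), the preconfigured instances can be launched with:

Start preconfigured MongoDB and RabbitMQ
cd $OF_HOME/src/main/docker/mongodb
docker-compose up -d
cd $OF_HOME/src/main/docker/rabbitmq
docker-compose up -d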

4.3. Browser support

OperatorFabric TODO

5. Getting Started

Warning
The steps below assume that you are using sdkman and nvm to manage tool versions and that you have already installed them.
Tip
If you encounter any issue, see Troubleshooting below. In particular, a command that hangs then fails is often a proxy issue.

There are several ways to get started with OperatorFabric. Please look into the section that best fits your needs.

Build & run using script

These steps describe how to run dockerized MongoDB, RabbitMQ and SonarQube, then build and run OperatorFabric using the run_all.sh script.

Clone repository
git clone https://github.com/opfab/operatorfabric-core.git
cd operatorfabric-core
Set up your environment (environment variables & appropriate versions of gradle, maven, etc.)
source ./bin/load_environment_light.sh
Tip
From now on, you can use environment variable $OF_HOME to get back to the repository home.
Deploy dockerized MongoDB, RabbitMQ and SonarQube

MongoDB, RabbitMQ and SonarQube are needed for the build so tests can be run.

A docker-compose file with properly configured containers is available at src/main/docker/test-quality-environment/.

If you haven’t done so before, you will need to configure a docker network for the containers:

docker network create opfabnet

Then run the docker-compose in detached mode:

cd src/main/docker/test-quality-environment/
docker-compose up -d
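
You can optionally verify that the containers are up:

docker-compose ps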
Build and run OperatorFabric Services using the run_all.sh script
cd $OF_HOME
./bin/run_all.sh -s users,time,cards-publication,cards-consultation,thirds -b true start
Tip
See run_all.sh -h for details.
Check services status
./bin/run_all.sh -s users,time,cards-publication,cards-consultation,thirds status
Log into the UI

URL: localhost:2002/ui/

login: admin

password: test

Warning
It might take a little while for the UI to load even after all services are running.
Warning
Don’t forget the final slash in the URL or you will get an error.
Push cards to the feed

You can check that cards appear in the feed by running the push_card_loop.sh script.

./services/core/cards-publication/src/main/bin/push_card_loop.sh

WORK IN PROGRESS

  • Build & run using jar //TODO

  • Demo (full) //TODO

  • Demo + card loop or rest api //TODO

When you feel ready to experiment with the project, or if the steps above don’t quite cover what you’re planning to do, please look into the Recipes section.

5.1. Prerequisites

Before running containers with docker-compose, it is required to configure a Docker network for them:

docker network create opfabnet

You can also use the bin/setup_dockerized_environment.sh script, which builds the service images and sets up the opfabnet network.

6. Project Contents

6.1. Project Structure

project
├──bin
├──client
│   ├──cards (cards-client-data)
│   ├──src
│   ├──time (time-client-data)
│   └──users (users-client-data)
├──services
│   ├──core
│   │   ├──cards-consultation (cards-consultation-business-service)
│   │   ├──cards-publication (cards-publication-business-service)
│   │   ├──thirds (third-party-business-service)
│   │   ├──time (time-business-service)
│   │   └──users (users-business-service)
│   ├──infra
│   │   ├──auth
│   │   ├──client-gateway (client-gateway-cloud-service)
│   │   ├──config (configuration-cloud-service)
│   │   └──registry (registry-cloud-service)
│   └──web
│       └──web-ui
└── tools
    ├── generic
    │   ├── test-utilities
    │   └── utilities
    ├── spring
    │   ├── spring-amqp-time-utilities
    │   ├── spring-mongo-utilities
    │   ├── spring-oauth2-utilities
    │   └── spring-utilities
    └── swagger-spring-generators

6.2. Conventions regarding project structure and configuration

Sub-projects must conform to a few rules in order for the configured Gradle tasks to work:

6.2.1. Java

[sub-project]/src/main/java

contains java source code

[sub-project]/src/test/java

contains java tests source code

[sub-project]/src/main/resources

contains resource files

[sub-project]/src/test/resources

contains test resource files

6.2.2. Modeling

Core service projects declaring REST APIs that use Swagger for their definition must declare two files:

[sub-project]/src/main/modeling/swagger.yaml

Swagger API definition

[sub-project]/src/main/modeling/config.json

Swagger generator configuration

6.2.3. Docker

All service projects have a Docker image generated as part of their build cycle (see Gradle Tasks).

Per-project configuration:

  • docker file : [sub-project]/src/main/docker/Dockerfile

  • docker-compose file : [sub-project]/src/main/docker/docker-compose.yml

  • runtime data: [sub-project]/src/main/docker/volume is copied to [sub-project]/build/docker-volume/ by the copyWorkingDir task. The latter can then be mounted as a volume in Docker containers, as in the sketch below.
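
As an illustration only (the image name and the in-container mount point below are hypothetical and depend on each service's Dockerfile):

# Hypothetical sketch: mount the prepared runtime data into a container
docker run -v "$OF_HOME/services/core/thirds/build/docker-volume:/data" opfab/thirds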

6.3. Scripts (bin)

bin/build_all.sh

builds all artifacts, as Gradle alone is not able to manage the inter-project dependencies

bin/clean_all.sh

removes IDE data (project configuration, build output directories) for IntelliJ IDEA and VS Code

bin/load_environment_light.sh

sets up environment when sourced (java version, gradle version, maven version, node version)

bin/load_environment_ramdisk.sh

sets up the environment and, when sourced, links build subdirectories to a ramdisk mounted at ~/tmp

bin/run_all.sh

runs all services (see below)

bin/setup_dockerized_environment.sh

generates Docker images for all services

6.3.1. load_environment_ramdisk.sh

There are prerequisites before sourcing load_environment_ramdisk.sh:

  • Logged user needs sudo rights for mount

  • System needs to have enough free RAM

Caution
Never run gradle clean, as it would delete those links.

6.3.2. run_all.sh

Please see run_all.sh -h usage before running.

Prerequisites

  • mongo running on port 27017 with user "root" and password "password" (see src/main/docker/mongodb/docker-compose.yml for a preconfigured instance).

  • rabbitmq running on port 5672 with user "guest" and password "guest" (see src/main/docker/rabbitmq/docker-compose.yml for a preconfigured instance).

Ports configuration

Port   Service             Use
2000   config              Configuration service http (REST)
2001   registry            Registry service http (REST)
2002   gateway             Gateway service http (REST+html)
2100   thirds              Third party management service http (REST)
2101   time                Time management service http (REST)
2102   cards-publication   Card publication service http (REST)
2103   users               Users management service http (REST)
2104   cards-consultation  Card consultation service http (REST)
3000   oauth               OAuth development service http (REST)
4000   config              Java debug port
4001   registry            Java debug port
4002   gateway             Java debug port
4100   thirds              Java debug port
4101   time                Java debug port
4102   cards-publication   Java debug port
4103   users               Java debug port
4104   cards-consultation  Java debug port
5000   oauth               Java debug port

6.3.3. setup_dockerized_environment.sh

Please see setup_dockerized_environment.sh -h usage before running.

Builds all sub-projects, generates Docker images and volumes for docker-compose, and sets up the "opfabnet" Docker network if needed.
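
Typical usage:

cd $OF_HOME
./bin/setup_dockerized_environment.sh -h    # review the available options first
./bin/setup_dockerized_environment.sh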

7. Environment variables

These variables are loaded by bin/load_environment_light.sh and bin/load_environment_ramdisk.sh:

  • OF_HOME: OperatorFabric root dir

  • OF_CORE: OperatorFabric business services subroot dir

  • OF_INFRA: OperatorFabric infrastructure services subroot dir

  • OF_CLIENT: OperatorFabric client data definition subroot dir

  • OF_TOOLS: OperatorFabric tooling libraries subroot dir

Additionally, you may want to configure the following variables:

  • Docker build proxy configuration (used to configure alpine apk proxy settings)

    • APK_PROXY_URI

    • APK_PROXY_HTTPS_URI

    • APK_PROXY_USER

    • APK_PROXY_PASSWORD

8. Gradle Tasks

In this section, only the most useful tasks are described. For more information on tasks, refer to the output of the "tasks" Gradle task and to the official Gradle and plugin documentation.

8.1. Services

8.1.1. Common tasks for all sub-projects

  • Standard java gradle tasks

  • SpringBoot tasks

    • bootJar : Generate project executable jar - assemble depends on this task

    • bootRun : Runs the application

  • Palantir Docker tasks

    • docker - Builds Docker image

    • dockerClean - Cleans Docker build directory

    • dockerfileZip - Bundles the configured Dockerfile in a zip file

    • dockerPrepare - Prepares Docker build directory

    • dockerPush - Pushes named Docker image to configured Docker Hub

    • dockerPush[tag] - Pushes the Docker image with tag [tag] to configured Docker Hub

    • dockerTag - Applies all tags to the Docker image.

    • dockerTag[tag] - Tags Docker image with tag [tag]

  • Docker Compose tasks:

    • composeUp: runs docker-compose up for docker file

    • composeDown: runs docker-compose down for docker file

    • composeStart: runs docker-compose start for docker file

    • composeStop: runs docker-compose stop for docker file

    • composeLogs: runs docker-compose logs -f for docker file

  • Other:

    • copyWorkingDir : copies [sub-project]/src/main/docker/volume to [sub-project]/build/docker-volume/

    • copyDependencies : copies dependencies to build/libs

    • generateTaskGraph : generates a PNG displaying the current life cycle tasks
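
For example, using the time service as an arbitrary sub-project:

gradle :services:core:time:bootRun     # run the service locally
gradle :services:core:time:docker      # build its Docker image
gradle :services:core:time:composeUp   # start it with docker-compose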

8.1.2. Core

  • Swagger Generator tasks

    • generateSwaggerCode : generates swagger code from all configured swagger sources

    • generateSwaggerCodeDoc : generates swagger static documentation as html. Outputs to build/doc/api.

    • generateSwaggerCodeEndpoints : generates swagger code for the subproject. Outputs to build/swagger.

    • debugSwaggerOperations : generates swagger code from /src/main/modeling/config.json to build/swagger-analyse
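
For example (the users service is an arbitrary choice):

gradle :services:core:users:generateSwaggerCode
gradle :services:core:users:generateSwaggerCodeDoc   # static html doc in build/doc/api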

8.1.3. Third Party Service

  • Test tasks

    • prepareTestDataDir : prepares the directory (build/test-data) for test data

    • compressBundle1Data, compressBundle2Data : generate tar.gz third-party configuration data for tests in build/test-data

    • prepareDevDataDir : prepares the directory (build/dev-data) for the bootRun task

    • createDevData : prepares data in build/test-data for running the bootRun task during development

8.1.4. infra

config
  • Test tasks

    • createDevData : prepare data in build/test-data for running bootRun task during development

8.2. Tools

8.2.1. Common tasks for all sub-projects

  • Standard java gradle tasks

8.2.2. swagger-spring-generators

No specific tasks.

9. Docker demo

Demoable global docker-compose files are available at:

  • [root]/src/main/docker/demo : sets up all services and generates a dummy card every 5 seconds

  • [root]/src/main/docker/deploy : sets up all services, ready for card reception

This demo setup exposes the application UI at localhost:2002/ui/

Warning
Don’t forget the final slash in the URL or you will get an error.

Card publication entry points are exposed at localhost:2102/cards

For debugging purposes, the following ports are also exposed:

Port   Service             Forwards to  Use
2000   config              8080         Configuration service http (REST)
2001   registry            8080         Registry service http (REST)
2002   gateway             8080         Gateway service http (REST+html)
2100   thirds              8080         Third party management service http (REST)
2101   time                8080         Time management service http (REST)
2102   cards-publication   8080         Card publication service http (REST)
2103   users               8080         Users management service http (REST)
2104   cards-consultation  8080         Card consultation service http (REST)
2200   web-ui              8080         Web UI http (html)
3000   oauth               8080         OAuth development service http (REST)
4000   config              5005         Java debug port
4001   registry            5005         Java debug port
4002   gateway             5005         Java debug port
4100   thirds              5005         Java debug port
4101   time                5005         Java debug port
4102   cards-publication   5005         Java debug port
4103   users               5005         Java debug port
4104   cards-consultation  5005         Java debug port
4200   web-ui              5005         Java debug port
5000   oauth               5005         Java debug port
27017  mongo               27017        Mongo API port
5672   rabbitmq            5672         AMQP API port
15672  rabbitmq            15672        RabbitMQ management API port

10. Recipes

10.1. Generating docker images

To generate all Docker images, run bin/setup_dockerized_environment.sh: it generates all images and also creates the opfabnet Docker network.

Note
If you work behind a proxy, you need to specify the following properties to configure the Alpine apk package manager:

  • apk.proxy.uri: proxy http URI, e.g. "http://somewhere:3128" (defaults to blank)

  • apk.proxy.httpsuri: proxy https URI (defaults to the apk.proxy.uri value)

  • apk.proxy.user: proxy user

  • apk.proxy.password: proxy password (unescaped)

Alternatively, you may configure the following environment variables:

  • APK_PROXY_URI

  • APK_PROXY_HTTPS_URI

  • APK_PROXY_USER

  • APK_PROXY_PASSWORD
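
For example, in your shell profile or before building:

export APK_PROXY_URI="http://somewhere:3128"
export APK_PROXY_USER="user"
export APK_PROXY_PASSWORD="password"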

10.2. Managing a service with docker-compose

Prerequisite: the Docker images must have been generated beforehand (see Generating docker images).

  • To deploy a service, run gradle :[subprojectPath]:composeUp. Example for the third-party service: gradle :services:core:third-party-service:composeUp

  • To tear down a service run gradle :[subprojectPath]:composeDown

  • To start an already containerized service run gradle :[subprojectPath]:composeStart

  • To stop an already containerized service run gradle :[subprojectPath]:composeStop

  • To follow the logs of a running service, run gradle :[subprojectPath]:composeLogs

10.3. Running sub-project from jar file

  • Generate the executable jar: gradle :[sub-projectPath]:bootJar

  • Then run it: java -jar [sub-projectPath]/build/libs/[sub-project].jar

10.4. Overriding properties when running from jar file

  • java -jar [sub-projectPath]/build/libs/[sub-project].jar --spring.config.additional-location=file:[filepath]. NB: properties may be set using a ".properties" file or a ".yml" file. See the Spring Boot configuration documentation for more info.

  • Generic property list extract:

    • server.port (defaults to 8080): embedded server port

  • :services:core:third-party-service properties list extract:

    • thirds.storage.path (defaults to ""): where to save/load OperatorFabric Third Party data
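
For example, a minimal sketch of overriding server.port through an external YAML file (the file path and port value are arbitrary):

cat > /tmp/override.yml <<EOF
server:
  port: 8090
EOF
java -jar [sub-projectPath]/build/libs/[sub-project].jar --spring.config.additional-location=file:/tmp/override.yml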

10.5. Service port table

By default, all service artifacts are built with server.port set to 8080.

If you run the services using the bootRun Gradle task or the provided docker-compose files (see [prj]/src/main/docker), the ports used are:

Port table

Service             bootRun port  docker-compose mapping  docker-compose debug mapping
registry            2001          2001                    2001
gateway             2002          2002                    2002
thirds              2100          2100                    2100
time                2101          2101                    2101
cards-publication   2102          2102                    2102
users               2103          2103                    2103
cards-consultation  2104          2104                    2104
oauth               3000          3000                    3000
config              4000          4000                    4000
registry            4001          4001                    4001
gateway             4002          4002                    4002
thirds              4100          4100                    4100
time                4101          4101                    4101
cards-publication   4102          4102                    4102
users               4103          4103                    4103
cards-consultation  4104          4104                    4104
oauth               5000          5000                    5000

11. Troubleshooting

Error using SDKMAN: internet not reachable

Error message
==== INTERNET NOT REACHABLE! ===================================================

 Some functionality is disabled or only partially available.
 If this persists, please enable the offline mode:

   $ sdk offline

================================================================================
Possible causes & resolution

If network issues have been ruled out, it’s most likely a proxy issue. See setting up proxy → TODO

Proxy error when running third-party docker-compose

Error message
Pulling rabbitmq (rabbitmq:3-management)...
ERROR: Get https://registry-1.docker.io/v2/: Proxy Authentication Required
Possible causes & resolution

When running docker-compose files that use third-party images (such as rabbitmq, mongodb, etc.) for the first time, Docker needs to pull these images from their repositories. If the Docker proxy isn't set properly, you will see the above message.

To set the proxy, follow these steps from the docker documentation.

If your proxy needs authentication, add your user and password as follows:

HTTP_PROXY=http://user:password@proxy.example.com:80/
Important
The password should be URL-encoded.
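
For example, a password such as p@ss:word must have its reserved characters percent-encoded:

HTTP_PROXY=http://user:p%40ss%3Aword@proxy.example.com:80/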
