Developer Documentation
First, if you haven't already, clone the repository:
$ git clone https://github.com/virtualcommons/port-of-mars.git
VSCode is recommended as a primary editor/IDE for this project due to the plugin support for Vue/Typescript as well as tooling for collaboration.
- Vetur - Vue 2 language features
- ESLint and Prettier - code linting/formatting
- GitLens - blame annotations and more for git
- Live Share - collaboration
With the monorepo (project root) opened as a workspace in VSCode, Vetur will struggle to find `tsconfig.json`. To solve this, add a `vetur.config.js` file to the project root:
```js
module.exports = {
  projects: [
    {
      root: "./client",
      package: "./package.json",
      tsconfig: "./tsconfig.json"
    }
  ]
};
```
The following VSCode settings, placed in `.vscode/settings.json` in the project root, will configure the Prettier extension to use the project config and format JS, TS, and Vue files on save:
```json
{
  "editor.tabSize": 2,
  "editor.formatOnSave": true,
  "editor.formatOnPaste": false,
  "prettier.configPath": ".prettierrc",
  "[typescript]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[javascript]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "[vue]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  }
}
```
Being able to see git history is extremely useful for understanding a codebase or a specific piece of code. This can be done on GitHub, with `git blame`, or in your editor with something like GitLens. Large changes that provide no context (formatting, for example) can be ignored with `git blame --ignore-rev`. To have GitLens and `git blame` ignore these revisions by default, add the `.git-blame-ignore-revs` file, which indexes commit hashes we want to ignore, to your git config with:

```
$ git config blame.ignoreRevsFile .git-blame-ignore-revs
```
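For reference, `.git-blame-ignore-revs` is simply a list of full commit hashes to skip, optionally preceded by comments. The hashes below are placeholders, not real commits from this repository:

```
# Hypothetical example: reformatted entire codebase with prettier
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
```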
These instructions are intended for Linux / macOS (less tested on macOS; please submit an issue or PR if you encounter any problems) and assume you have recent versions of Docker and Docker Compose installed on your system (you may also need to install `make`).
As of Docker Compose v2, the `docker-compose` command has been replaced with the `docker compose` subcommand. One way to keep compatibility with scripts that use `docker-compose` is to place a redirect script on your `PATH`, e.g.:

```
% echo -e '#!/bin/bash\ndocker compose "$@"' >> /usr/bin/docker-compose
% chmod +x /usr/bin/docker-compose
```
Additionally, since `docker compose` is frequently used, you can alias it to something easier to type, for example: `alias doc="docker compose"`
To set up a development environment that starts a hot-reloading client and server, run:
```
$ ./configure dev
$ make secrets # create secrets for the database
$ make # create a new docker-compose.yml and build client and server docker images
```
If this completes successfully you will be able to bring up all necessary services (client, server, database, redis) with `docker-compose up -d`.
If you are starting from scratch you will need to initialize the database schema and then load data fixtures (e.g., the Port of Mars tutorial quiz questions) into the database.
```
$ docker-compose exec server bash # open a bash shell into the server container
% yarn typeorm schema:drop # DESTRUCTIVE OPERATION, be careful with this! Drops the existing port of mars database if it exists
% yarn typeorm migration:run # initialize the port of mars database schema and apply all defined typeorm migrations in server/src/migration
% yarn load-fixtures # load data fixtures into the database
```
At this point you should be able to test the Port of Mars by visiting http://localhost:8081 in your browser.
Since Port of Mars uses Docker to containerize the application, `node_modules` will only exist inside the containers and you will encounter issues with unknown modules. One way to get around this and have access to code completion is to mirror the dependencies locally on the host.
Install node, npm, and yarn with Node Version Manager (recommended):

```
$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
$ source ~/.bashrc # for bash shell, alternatively close and re-open the terminal
$ nvm install lts/gallium # install node 16 lts and npm
$ nvm use lts/gallium
# check to make sure everything worked
$ node --version # should be v16.x.x
$ npm --version
$ npm install -g yarn # finally, install yarn with npm
```
Install project dependencies:

```
$ cd port-of-mars # make sure you are in the project root
# install packages from the lockfiles generated by the containers
$ for dir in {client,server,shared} ; do (cd "$dir" && yarn install --frozen-lockfile) ; done
```
For more info on TypeORM data migrations see https://orkhan.gitbook.io/typeorm/docs/migrations. In the server container:

```
% yarn typeorm migration:generate -n NameOfMigration
```
This will generate a new file in `migrations/` with `up()` and `down()` methods based on the changes to the schema (Entities).
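For illustration, a generated migration is a class whose `up()` method issues the schema changes and whose `down()` method reverses them. The sketch below uses a simplified `QueryRunner` interface rather than the real `MigrationInterface`/`QueryRunner` from the `typeorm` package; the class and column names are taken from the example output further down:

```typescript
// Simplified sketch of a generated migration; a real one implements
// MigrationInterface from "typeorm" and receives typeorm's QueryRunner.
interface QueryRunner {
  query(sql: string): Promise<void>;
}

export class UserMetadataAddition1607117297405 {
  // apply the schema change
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "user" ADD "isActive" boolean NOT NULL DEFAULT true`);
    await queryRunner.query(`ALTER TABLE "user" ADD "dateCreated" TIMESTAMP NOT NULL DEFAULT now()`);
  }

  // revert it (inverse operations, in reverse order)
  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`ALTER TABLE "user" DROP COLUMN "dateCreated"`);
    await queryRunner.query(`ALTER TABLE "user" DROP COLUMN "isActive"`);
  }
}
```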
To display or run database migrations (for schema changes etc.) use `yarn typeorm` commands, e.g., `migration:show` and `migration:run`. Example:
```
% yarn typeorm migration:show # shows all available migrations and any pending ones
yarn run v1.22.5
% ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js migration:show
query: SELECT * FROM "information_schema"."tables" WHERE "table_schema" = current_schema() AND "table_name" = 'migrations'
query: SELECT * FROM "migrations" "migrations" ORDER BY "id" DESC
[X] Initial1600968396723
[ ] UserMetadataAddition1607117297405
% yarn typeorm migration:run
yarn run v1.22.5
% ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js migration:run
query: SELECT * FROM "information_schema"."tables" WHERE "table_schema" = current_schema() AND "table_name" = 'migrations'
query: SELECT * FROM "migrations" "migrations" ORDER BY "id" DESC
1 migrations are already loaded in the database.
2 migrations were found in the source code.
Initial1600968396723 is the last executed migration. It was executed on Thu Sep 24 2020 17:26:36 GMT+0000 (Coordinated Universal Time).
1 migrations are new migrations that needs to be executed.
query: START TRANSACTION
query: ALTER TABLE "user" ADD "isActive" boolean NOT NULL DEFAULT true
query: ALTER TABLE "user" ADD "dateCreated" TIMESTAMP NOT NULL DEFAULT now()
query: INSERT INTO "migrations"("timestamp", "name") VALUES ($1, $2) -- PARAMETERS: [1607117297405,"UserMetadataAddition1607117297405"]
Migration UserMetadataAddition1607117297405 has been executed successfully.
query: COMMIT
Done in 1.75s.
```
Changes to the schema and migrations should be committed together to keep things in sync.
In addition to the CI workflow, you can run Prettier and ESLint locally with:

```
$ docker-compose exec client bash
% yarn lint # runs eslint, checking for potential issues in the code
% yarn style # runs prettier, checking for formatting
```

The same can be done for the server:

```
$ docker-compose exec server bash
% yarn lint
% yarn style
```
Server tests: https://github.com/virtualcommons/port-of-mars/tree/master/server/tests
Client tests: https://github.com/virtualcommons/port-of-mars/tree/master/client/tests
You can run all of the tests via:

```
$ make test
```
`make docker-compose.yml` generates the `docker-compose.yml` file from templates and can be re-run to apply any changes in these templates.
`make browser` requires the open-url-in-container Firefox extension: https://addons.mozilla.org/en-US/firefox/addon/open-url-in-container/
To deploy to staging:

```
$ ./configure staging
$ make deploy
```
Copy the Sentry DSN URL into `keys/sentry_dsn`. Then, to deploy to production:

```
$ ./configure prod
$ make deploy
```
The server can be interacted with through a command line interface which contains various utility commands such as exporting data, modifying user data, etc. It can be accessed with the following (in the server container):

```
# list available subcommands
% yarn cli --help
# usage
% yarn cli [options] [command]
```
Data can be exported from the database with the `dump.sh` script; pass in the required `tournamentRoundId` parameter and optional game ids `gids` (numbers separated by spaces). Note that open beta games fall under the same `tournamentRoundId`, which may vary depending on the state of the database. Example:

```
$ ./dump.sh dev --tournamentRoundId <id> # the tournament round to export
$ ./dump.sh dev --tournamentRoundId <id> --gids 1 5 9 15 # also filter by specific games for the given tournament round
```
Production
In staging / production, change `dev` to `prod`:

```
$ ./dump.sh prod --tournamentRoundId 11 # dump all games for tournament round 11
```
This generates CSV files with every persisted game event as well as summary CSV files with game, player, and tournament data. The CSV files will be in the `docker/dump` folder.
The entry point is currently `exportData` in https://github.com/virtualcommons/port-of-mars/blob/main/server/src/cli.ts
The general flow is to query all game events and players given the constraints (list of ids, min / from date) and then run a series of Summarizers over them.
The entire Mars Event deck is serialized by the `TakenStateSnapshot` event, which captures the entire state of the game; currently it only runs at the beginning of the game.
The Summarizers iterate over the querysets and generate CSVs based on the data within, usually by applying game events, in order, to a fresh GameState and then serializing the results.
`GameEventSummarizer` emits a CSV where each line has the serialized game event, the game state before the event was applied, the game state after the event was applied, and the role that generated the event. NOTE: to support flexible group sizes this will need to be changed to a player ID, with a server ID sentinel value indicating that the server generated the event.
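As a rough sketch (hypothetical names, not the actual summarizer classes), the per-event before/after serialization looks like:

```typescript
// Illustrative sketch of emitting one CSV row per game event with the
// game state captured before and after the event was applied.
interface GameEvent {
  kind: string;
  apply(state: GameState): void;
}

class GameState {
  systemHealth = 100;
  snapshot(): string {
    return JSON.stringify({ systemHealth: this.systemHealth });
  }
}

function summarize(events: GameEvent[]): string[] {
  const state = new GameState(); // start from a fresh state
  const rows: string[] = [];
  for (const event of events) {
    const before = state.snapshot();
    event.apply(state); // events are applied in order
    rows.push([event.kind, before, state.snapshot()].join(","));
  }
  return rows;
}
```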
`AccomplishmentSummarizer` emits the static set of Accomplishments (in-code / in-memory) at the time of data export as a CSV. This may need to be versioned and/or moved into the DB to support differing sets of Accomplishments across different runs of the Port of Mars https://github.com/virtualcommons/port-of-mars/issues/719
The post-processing data workflow developed by @cpritcha runs a series of R functions over the raw CSVs in `/dump`. The entry point is https://github.com/virtualcommons/port-of-mars/blob/a43e95fd827f6e344cd6aa02c3e2dd29d8dec208/analytics/R/export.R
This step generates intermediate, mostly long-form data.
Once the entire tournament has been run and we want to aggregate data and combine it with the survey data (currently from Qualtrics), we need to convert it into wide form and manipulate its structure. This code is in https://github.com/virtualcommons/port-of-mars-analysis and its entry point is `main.R`.
TODO: tournament_dir should probably be a parameter, max_game_rounds should be part of the GameMetadata https://github.com/virtualcommons/port-of-mars/issues/721
`GameMetadata` should be read in from some kind of summary file and then incorporated into `game.R`'s `game_metadata_expand`, which produces the "codebook" describing the data columns and what they mean.
IMPORTANT: The `survey_cultural_renames`, `survey_round_begin_renames`, and `survey_round_end_renames` data structures in https://github.com/virtualcommons/port-of-mars-analysis/blob/c48147011d3854d75e12e8b8d9947faf2e32e912/R/survey.R#L14 will need to be updated when the Qualtrics survey changes (any new questions, reordered questions, etc.)
This diagram gives a general idea of the application structure:
The reigning design principles of the Port of Mars client are:
- Consistency - repeating design and components as well as meeting expectations by using common standards
- Simplicity - eliminating visual clutter in order to keep focus on important elements
- User control - giving people the freedom to take actions and undo those actions
Port of Mars currently uses Bootstrap-Vue which composes and extends Bootstrap v4 into reusable Vue components. This provides a convenient UI framework that handles responsiveness and accessibility.
In order to maintain consistency in styling, bootstrap-vue components and bootstrap 'helper classes' are used first. Then, if additional styling is needed, use scoped styles within the component file or global styles if it is shared across components.
Port of Mars is a Single-page application.
The main, 'top level' pages of the application are located in `client/src/views/`. These are registered in `shared/src/routes.ts` and then `client/src/router.ts`, at which point vue-router renders the correct view associated with the URL in the browser.
Pages are generally composed of multiple reusable components, creating a hierarchical structure.
Components are defined in `client/src/components/` and are organized by concern, typically corresponding to a page, or placed in `global/` if used by multiple pages.
The majority of application data is maintained in a global state, eliminating the need for a complex web of components passing data back and forth. Vuex is used as a state management library.
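The idea can be sketched as a minimal typed store (the actual client uses Vuex; this only shows the pattern, with hypothetical state fields and mutation names):

```typescript
// Minimal global-store pattern: components read from one shared state
// object and modify it only through mutations, as Vuex enforces.
interface State {
  username: string;
  dashboardMessages: string[];
}

const state: State = { username: "", dashboardMessages: [] };

const mutations = {
  SET_USERNAME(s: State, name: string) {
    s.username = name;
  },
  ADD_DASHBOARD_MESSAGE(s: State, message: string) {
    s.dashboardMessages.push(message);
  },
};

// components call commit(...) instead of mutating state directly
function commit(key: keyof typeof mutations, payload: string): void {
  mutations[key](state, payload);
}
```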
The client API provides a clean interface with the server-side API. For non-game-related components, AJAX requests are sent to the server to either send or retrieve data and either store data in the global state or return it. For game-related components, the Colyseus client API is used to synchronize game state between the client and the server.
The `shared` directory contains code used by both the client and the server; it is duplicated into each when building the application. This includes shared application settings, some utility functions, and, most notably, type definitions shared between the client and server.
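For example, a shared type definition (hypothetical, not an actual Port of Mars type) lets a server response and its client consumer agree on shape at compile time:

```typescript
// Hypothetical shared type: both the Express route handler (server)
// and the AJAX caller (client) would import this from shared/
export interface PlayerProfile {
  username: string;
  points: number;
}
```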
Entities defined in `server/src/entity/` are TypeScript classes that map to tables in the database. Port of Mars uses TypeORM as an object-relational mapper, which provides an interface for defining schema and querying the database in TypeScript. The official docs are often lacking, so a better reference exists at https://orkhan.gitbook.io/typeorm/docs.
Services defined in `server/src/services/` contain the majority of server logic that is not directly related to game logic. In many cases, services query or update data in the main database using the TypeORM Repository API.
The `Settings` service interfaces with a Redis instance, which is used for dynamic application settings (configuration that we want to be able to change at runtime).
The `Persistence` service is responsible for storing ongoing game data as an event stream in the database.
The `Replay` service, on the other hand, simulates games stored by `Persistence` by re-hydrating the game state and reapplying the events in order. This is how games are exported for further analysis.
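The replay idea in miniature (hypothetical event and field names, not the actual Port of Mars types): deserialize the persisted event stream and reapply it, in order, to a fresh state.

```typescript
// Illustrative replay: persisted events are stored as (type, payload)
// rows and reapplied, in order, onto a fresh game state.
interface PersistedEvent {
  type: "entered-invest-phase" | "time-invested";
  payload: number;
}

class GameState {
  phase = "newRound";
  timeBlocksInvested = 0;
}

function applyEvent(state: GameState, e: PersistedEvent): void {
  switch (e.type) {
    case "entered-invest-phase":
      state.phase = "invest";
      break;
    case "time-invested":
      state.timeBlocksInvested += e.payload;
      break;
  }
}

function replay(events: PersistedEvent[]): GameState {
  const state = new GameState(); // fresh state
  for (const e of events) applyEvent(state, e); // reapply in order
  return state;
}
```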
`server/src/routes` contains Express endpoints that handle requests from the client by calling a service that usually either returns or updates data. Routes should generally not implement any logic; they should delegate to services and only worry about error handling.
Both the main game and the lobby/waiting room are Colyseus Rooms.
Rooms use Colyseus Schema, a synchronizable structure (meaning syncing between client and server is handled for us), for managing the game state. Schemas can be nested into a hierarchical structure, which the game state makes use of.
The bulk of the game logic is structured in `Commands` that execute `GameEvents`, which encapsulate the functionality needed to apply the event to the game state. A game room receives messages/requests from the client and then executes a command, while a game loop maintains the clock and applies server actions such as bot actions and phase switching.
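Schematically (with hypothetical names, not the actual Port of Mars classes), a command validates a request and produces the game events that actually mutate state:

```typescript
// Illustrative command -> event flow: the room turns a client message
// into a command; executing the command yields events; applying the
// events mutates the game state.
class GameState {
  readyCount = 0;
}

interface GameEvent {
  apply(state: GameState): void;
}

class PlayerReadyEvent implements GameEvent {
  apply(state: GameState): void {
    state.readyCount += 1;
  }
}

class SetPlayerReadyCommand {
  execute(): GameEvent[] {
    // a real command would validate the request against the current state
    return [new PlayerReadyEvent()];
  }
}

function handleMessage(state: GameState, command: SetPlayerReadyCommand): void {
  for (const event of command.execute()) event.apply(state);
}
```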
The lobby logic is relatively simple so is all contained within the lobby state and lobby room classes.
`server/tests/` contains a suite of automated functional and unit tests written using the Jest framework. We aim for coverage of the most important functionality of the application, i.e. game functionality (is the game state correctly modified after each event?), replay (is a game stored and simulated correctly?), registration (can a user go through the full process of signing up?), etc.
These tests run automatically when pushing to the upstream repository but can also be run locally with `make test`.