
Commit 3ca2b3c

Authored by JoaoCarabetta, d116626, vmussa, Hellcassius, and vncsna
[infra] Python 1.6.0 (#898)
* merge with master
* [infra] add upload header to storage and define more modes (#795)
* feat(infra): add upload header to storage
* fix(infra): replace header if it already exists in storage
* fix(infra): fix table_id in storage
* feat(infra): add mode `architecture`
* feat(infra): adjust mode
* feat(infra): add new modes
* feat(infra): adjust mode all (×2)
* feat(infra): change table-approve bd version
* feat(infra): trigger table-approve (×8)
* feat(infra): add the option to use the bqstorage API (#847)
* [infra] Add validate module [2] (#675)
* draft validate.py
* add more validate features
* fix attributes and define some helper functions to cut repetition
* improve storage upload exception
* swap dataset_id and table_id in Table
* add __init__ to modules
* create a single function to generate metadata
* redesign metadata API to be more intuitive
* add metadata create to CLI
* expose Metadata to users
* fix small typos
* add YAML generator
* improve exception
* add ckanapi to reqs
* fix comment_treatment metadata function
* fix BaseDosDadosException imports
* fix dataset init file checks
* raise Exception in case of inconsistent metadata YAML order
* create configs from Metadata
* get rid of validate.py
* configs come with data
* add columns to table_config.yaml
* add tests to metadata module
* delete dataset_config.yaml
* refactor test_metadata
* improve metadata create tests
* add more table metadata create tests
* update metadata create docstring
* add all test_metadata test placeholders
* add tests for the Metadata is_updated method
* first working version of the Metadata validate method
* add Metadata validate tests
* improve metadata validate and its tests
* add metadata is_updated CLI entrypoint
* add Metadata validate CLI entrypoint
* first metadata publish version
* fix metadata create/is_updated bugs, improve validate tests
* fix metadata's test_create_if_exists_pass
* refactor metadata code, improve validate method
* add metadata publish CLI entrypoint
* fix publish bugs, add resource_patch for bdm_table patches
* add response return value to publish, improve exceptions
* add metadata publish tests
* improve metadata publish and validate docstrings
* add partition_columns option to metadata create
* call is_updated before publish
* fix partitions_writer_condition
* fix ckan_data_dict
* update CKAN_URL
* integrate Table.create and Metadata
* fix YAML generation for array fields
* feat(infra): add _make_publish_sql
* fix(infra): add partition columns to be created, fix dataset_id in dataset, and autofill type from the architecture (arq) sheet
* fix(infra): turn partitions back into str
* fix(infra): enhance organization metadata validation
* fix(infra): generate YAML complex fields even when no data is available
* feat(infra): add extras field to dataset validation
* fix(infra): clean spaces and add comma
* fix(infra): partitions from string to list in the _is_partitioned function
* fix(infra): fix table_description.txt for tb.publish()
* fix(infra): improve update_columns docstring
* fix(infra): point metadata.CKAN_URL to the staging website
* fix(infra): handle the new dataset/table case in Metadata.is_updated
* make CKAN_API_KEY and CKAN_URL come from config.toml
* bump pyproject to 1.6.0-a0
* add CKAN config variables builder
* add default CKAN config to configs/config.toml
* raise error in case of no CKAN_API_KEY when publishing
* fix(infra): update ruamel.yaml and Python dependencies
* fix(infra): base initiation; migrate ckan_url and api_key to __init__
* fix(infra): handle CKAN config None values
* fix(infra): handle_complex_fields gets correct data
* feat(infra): improve update_columns (×2)
* fix(infra): change coluna to nome
* bump to 1.6.0a4
* fix(infra): bump to 1.6.0a5
* fix(infra): force utf-8 in all open methods
* feat(infra): release 1.6.0a6
* fix(infra): fix update_columns encoding
* feat(infra): bump version to 1.6.0a7
* add extra dataset metadata fields for validation
* improve metadata validation
* fix(infra): refactor metadata's ckan_data_dict
* fix(infra): remove input_hints from YAMLs
* fix(infra): shrink the organization dataset YAML field
* feat(infra): bump to version 1.6.0-alpha.8
* feat(infra): add test_create_force_columns_is_true metadata test
* feat(infra): refactor metadata tests, add test_force_columns_is_false
* feat(infra): refactor metadata tests
* feat(infra): add partition_columns tests
* fix(infra): refactor the metadata package (#826)
* fix(infra): add part of the refactor
* fix(infra): fix refactor errors
* feat(infra): add support for the 'python -m' command
* feat(infra): add a version option
* feat(infra): format the code with black
* fix(infra): fix some tests and comment out others
* fix(infra): nullify the YAML's partitions in case of non-None empty values
* fix(infra): fix Metadata.publish tests, remove debugging code
* feat(infra): make creation of table_config.yaml optional
* fix(infra): make Metadata.validate work with new datasets and tables
* feat(infra): make Metadata.publish handle new datasets or tables
* fix(infra): create all dataset files
* fix(infra): draft the new dataset_description.txt
* fix(infra): make table.py work with the new YAML, refactor and fix tests
* fix(infra): handle undefined variables for the dataset_description.txt template
* refactor(infra): make Table and Dataset use Metadata as a component
* fix(infra): add gcloud variables to the YAML through config.toml
* feat(infra): bump to 1.6.0-a9
* fix(infra): add organization check (#869)
* fix(infra): format with black
* fix(infra): rename the data checks trigger
* feat(infra): draft the metadata checks action
* Revert "fix(infra): adiciona verificação de organização (#869)" (reverts commit c82d70a)
* fix(infra): bring back all dataset_config.yaml fields to ckan_data_dict
* fix(infra): sort the imports
* fix(infra): fix formatting
* fix/validate: fix validate and add actions (#876)
* fix(infra): add organization check
* fix(infra): format with black
* fix(infra): rename the data checks trigger
* feat(infra): draft the metadata checks action
* [dados-fix] upload INPC (#879)
* feat(docs): clarifications on partitions, temporal_coverage, suffixes (#846)
* fix(infra): start fixing the tests (×2)
* fix(infra): more test changes
* [dados-bot] br_ms_vacinacao_covid19 (2021-10-18) (#884) (Co-authored-by: terminal_name <github_email>)
* [dados-bot] br_ms_vacinacao_covid19 (2021-10-19) (#888) (Co-authored-by: terminal_name <github_email>)
* [dados-atualizacao] br_anp_precos_combustiveis (#883)
* update the fuel price data
* fix a Portuguese error in the table_description
* fix(infra): fix import ordering
* fix(infra): fix new syntax (Co-authored-by: Gustavo Aires Tiago, Ricardo Dahis, Lucas Moreira)
* Revert "fix/validate: corrige validate e adiciona actions (#876)" (reverts commit 2d3fa09)
* Revert "fix(infra): corrige formatação" (reverts commit cb19f31)
* Revert "fix(infra): ordena as bibliotecas" (reverts commit 698db35)
* Revert "Merge branch 'python-1.6.0' into add_validate_module_2" (reverts commit 9c305f2, reversing changes made to aee8c2a)
* feat(infra): add support for organization metadata
* fix(infra): complete all function and method docstrings
* docs(infra): add a metadata entrypoints walkthrough to the docs (Co-authored-by: hellcassius <caiorogerio.santos@gmail.com>, joaoc <joao.carabetta@gmail.com>, d116626, Vinicius Aguiar, Gustavo Aires Tiago, Ricardo Dahis, Lucas Moreira)
* fix(infra): use basedosdados-dev for the nonexistent-dataset test
* fix(infra): update bases with master files
* feat(infra): bump version
* fix(infra): update the click dependency
* fix(infra): force setup.py to use click==8.0.3
* feat(infra): add the new modes to the CLI help
* Update colab_data.md
* fix(infra): fix None in _load_schema
* fix(infra): fix the case where a table is added for the first time
* feat(infra): bump version
* fix(infra): try to fix merge conflicts
* fix(infra): fix data-check to master (×2)
* feat(infra): add url and api_key to the env action
* fix(infra): remove a space in env-setup
* feat(infra): add the metadata validate action
* fix(infra): change the actions bd version
* feat(infra): trigger md validate
* fix(infra): change the action trigger
* feat(infra): test table-approve (×3)
* feat(infra): test table-approve, bump version (×2)
* feat(infra): test table-approve (×8)
* feat(infra): publish rais (×5)
* fix(infra): fix _load_schema and publish rais
* fix(infra): publish rais (×8)
* feat: update diretorio_escola, closes #921
* fix(infra): publish rais
* fix(infra): publish escolas
* fix(infra): remove lint check
* fix(infra): try to use the storage retry policy
* fix(infra): tb-approve bd version
* fix(infra): bump storage version
* fix(infra): add conditional retry (×2)
* fix(infra): publish rais (×10)
* fix(infra): publish rais vinculos (×2)
* fix(infra): change the metadata-validate trigger
* fix(infra): adjust copy_table
* fix(infra): change the metadata-validate trigger
* fix(infra): change table-approve logs
* fix(infra): change metadata-validate logs
* fix(infra): change tb-app mode order
* fix(infra): change tb-app logs
* feat(infra): reactivate actions
* feat(infra): change action logs
* feat(infra): deactivate data-check
* feat(infra): change actions logs (×2)
* feat(infra): republish rais
* fix(infra): improve the validate metadata tests
* tests(infra): add a test for an invalid organization entry
* feat(infra): add --all and --if_exists args to publish
* feat(infra): bump to 1.6.0-b20
* fix(infra): prepare data-check
* fix(infra): test tb-app
* fix(infra): change the data-check action
* fix(infra): test data-check (×3)
* fix(infra): change the ci trigger (×5)
* fix(infra): test data-check
* docs(infra): add --all CLI option docs
* fix(infra): debug data-check
* fix(infra): fix the data-check CKAN API env variable
* debug(infra): verify data-check env variables
* debug(infra): fix getenv
* feat(infra): test data-check (×8)
* debug(infra): test runtime env variables
* debug(infra): try os.environ.get for data-check
* debug(infra): test cache for data-check
* fix(infra): revert data-check changes
* fix(infra): restore the original data-check trigger
* fix(infra): restore the original data-check envs
* fix(infra): deactivate the tb-app branch trigger
* fix(infra): update the docs folder based on the master branch
* feat(infra): add an update_locally option to metadata publish
* feat(infra): add update_locally to the metadata publish CLI
* solves #issue-181
* feat(infra): the final commit, dammit!!!

Co-authored-by: d116626 <d116626@gmail.com>
Co-authored-by: Vítor Mussa <vtrmussa@gmail.com>
Co-authored-by: hellcassius <caiorogerio.santos@gmail.com>
Co-authored-by: Vinicius Aguiar <vncsna@gmail.com>
Co-authored-by: Gustavo Aires Tiago <36206956+gustavoairestiago@users.noreply.github.com>
Co-authored-by: Ricardo Dahis <6617207+rdahis@users.noreply.github.com>
Co-authored-by: Lucas Moreira <65978482+lucasnascm@users.noreply.github.com>
Co-authored-by: rdahis <rdahis@gmail.com>
1 parent 3b2c07a · commit 3ca2b3c
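The headline change in this release is the new metadata API. Putting together the entrypoints named in the commit message gives the minimal usage sketch below. Metadata(dataset_id=..., table_id=...) and .validate() appear verbatim in the diffs that follow; create, is_updated, and publish are only named in the commit log, so the exact call sequence and the example ids here are assumptions.

# Minimal sketch of the 1.6.0 metadata workflow; ids are placeholders.
from basedosdados.upload.metadata import Metadata

md = Metadata(dataset_id="br_exemplo", table_id="tabela_exemplo")

md.create()    # scaffold table_config.yaml under the local metadata_path
md.validate()  # check the local YAML against the CKAN metadata schema
md.publish()   # push to CKAN; per the commit log, is_updated is checked first

Per the commit message, equivalent CLI entrypoints (metadata create, is_updated, validate, publish) ship in the same release.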


56 files changed: +54,560 −884 lines changed

.github/workflows/data-check.yml

Lines changed: 8 additions & 10 deletions
@@ -4,7 +4,6 @@ on:
   pull_request_target:
     types:
       - labeled
-
 jobs:
   guard:
     runs-on: ubuntu-latest
@@ -25,7 +24,6 @@ jobs:
           steps.changes.outputs.workflows == 'true' &&
           github.event.pull_request.head.repo.full_name != github.repository
         run: exit 1
-
   get-changes:
     needs: guard
     runs-on: ubuntu-latest
@@ -57,22 +55,22 @@
       - name: Set up Python
         uses: actions/setup-python@v2
         with:
-          python-version: '3.9.x'
+          python-version: "3.9.x"
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
-          pip install basedosdados==1.5.2 pyarrow pytest toml
-      - name: Set up basedosdados environment
-        run: |
-          cd .github/workflows/env-setup
-          python env_setup.py
+          pip install basedosdados==1.6.0 pyarrow pytest toml
+      - name: Set up base dos dados environment
         shell: bash
-        env:
-          BUCKET_NAME: basedosdados-dev
+        env:
+          BUCKET_NAME: basedosdados-dev
           PROJECT_NAME_PROD: basedosdados-dev
           PROJECT_NAME_STAGING: basedosdados-dev
           GCP_BD_PROD: ${{ secrets.GCP_BD_DEV_PROD }}
           GCP_BD_STAGING: ${{ secrets.GCP_BD_DEV_STAGING }}
+          CKAN_URL: "https://staging.basedosdados.org"
+          CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
+        run: python .github/workflows/env-setup/env_setup.py
       - name: Test data and fill report
         run: pytest -v .github/workflows/data-check
         shell: bash

.github/workflows/data-check/checks.yaml

Lines changed: 14 additions & 6 deletions
@@ -1,3 +1,11 @@
+# test_select_all_works:
+#   id: {{ dataset_id }}/{{ table_id }}/1
+#   name: Check if select all works
+#   query: |
+#     SELECT EXISTS (
+#         SELECT *
+#         FROM `{{ project_id }}.{{ dataset_id }}.{{ table_id }}`
+#     ) AS success
 
 test_table_exists:
   id: {{ dataset_id }}/{{ table_id }}/0
@@ -26,19 +34,19 @@ test_table_has_no_null_column:
     SELECT col_name, nulls_count / total_count null_percent
     FROM n_nulls, n_total
 
-test_primary_key_has_unique_values:
+test_identifying_column_has_unique_values:
   id: {{ dataset_id }}/{{ table_id }}/3
-  name: Check if primary key has unique values
+  name: Check if identifying column has unique values
   query: |
-    {% if primary_keys is defined and primary_keys is not none -%}
+    {% if identifying_columns is defined and identifying_columns is not none -%}
     SELECT
       COUNT(
        DISTINCT CONCAT(
-          {% for primary_key in primary_keys -%}
-          IFNULL(SAFE_CAST({{ primary_key }} AS STRING), " "), "&",
+          {% for identifying_column in identifying_columns -%}
+          IFNULL(SAFE_CAST({{ identifying_column }} AS STRING), " "), "&",
           {% endfor -%}
           "EOF"
        )
      ) / COUNT(*) unique_percentage
    FROM `{{ project_id }}.{{ dataset_id }}.{{ table_id }}` t
-    {% endif %}
+    {% endif %}
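To make the renamed check concrete, the sketch below renders the uniqueness query with jinja2 for a hypothetical table; the project, dataset, and table ids and the identifying_columns values are placeholders, not part of the commit.

# Render the identifying-column uniqueness check for a hypothetical table.
from jinja2 import Template

# Template body copied from the new checks.yaml entry above.
QUERY = """
{% if identifying_columns is defined and identifying_columns is not none -%}
SELECT
  COUNT(
    DISTINCT CONCAT(
      {% for identifying_column in identifying_columns -%}
      IFNULL(SAFE_CAST({{ identifying_column }} AS STRING), " "), "&",
      {% endfor -%}
      "EOF"
    )
  ) / COUNT(*) unique_percentage
FROM `{{ project_id }}.{{ dataset_id }}.{{ table_id }}` t
{% endif %}
"""

# All ids and column names below are illustrative placeholders.
print(
    Template(QUERY).render(
        project_id="basedosdados-dev",
        dataset_id="br_exemplo",
        table_id="tabela_exemplo",
        identifying_columns=["ano", "sigla_uf"],
    )
)

A table in which every rendered CONCAT of the identifying columns is distinct yields unique_percentage = 1, which is presumably what the companion test in test_data.py asserts.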

.github/workflows/data-check/test_data.py

Lines changed: 2 additions & 2 deletions
@@ -83,8 +83,8 @@ def test_table_has_no_null_column(configs):
     assert result.empty or (result.null_percent.max() < 1)
 
 
-def test_primary_key_has_unique_values(configs):
-    config = configs["test_primary_key_has_unique_values"]
+def test_identifying_column_has_unique_values(configs):
+    config = configs["test_identifying_column_has_unique_values"]
     result = fetch_data(config)
 
     result = result.unique_percentage.values[0]
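The post-processing the renamed test performs on the fetched result can be reproduced on a toy DataFrame; note the final assertion here is an assumption, since the hunk stops at the unique_percentage lookup.

# Toy reproduction of the test's post-processing; the ==1 assertion is assumed.
import pandas as pd

result = pd.DataFrame({"unique_percentage": [1.0]})  # stand-in for fetch_data(config)
unique_percentage = result.unique_percentage.values[0]
assert unique_percentage == 1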

.github/workflows/env-setup/env_setup.py

Lines changed: 4 additions & 0 deletions
@@ -77,6 +77,10 @@ def env_setup():
             ),
         },
     },
+    "ckan": {
+        "url": os.getenv("CKAN_URL"),
+        "api_key": decoding_base64(os.environ.get("CKAN_API_KEY")).replace("\n", ""),
+    },
 }
 
 # load the secret of prod and staging data
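The new ckan block decodes the API key before it is written into the runner's config.toml. decoding_base64 is defined elsewhere in env_setup.py and is not shown in this diff, so the sketch below assumes it wraps base64.b64decode; it also shows why the trailing .replace("\n", "") matters when the encoded secret carries a newline.

# Assumed shape of decoding_base64 (a base64.b64decode wrapper) and a round trip.
import base64
import os

def decoding_base64(message):
    # decode a base64-encoded secret back to text
    return base64.b64decode(message.encode("utf-8")).decode("utf-8")

os.environ["CKAN_API_KEY"] = base64.b64encode(b"my-api-key\n").decode()  # toy secret
api_key = decoding_base64(os.environ.get("CKAN_API_KEY")).replace("\n", "")
assert api_key == "my-api-key"  # the stray newline from the encoded secret is stripped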

.github/workflows/etl-caged.yml

Lines changed: 2 additions & 0 deletions
@@ -58,6 +58,8 @@ jobs:
           PROJECT_NAME_STAGING: basedosdados-staging
           GCP_BD_PROD: ${{ secrets.GCP_BD_DEV_PROD }}
           GCP_BD_STAGING: ${{ secrets.GCP_BD_DEV_STAGING }}
+          CKAN_URL: "https://staging.basedosdados.org"
+          CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
       - name: Run ETL-CAGED
         run: python bases/br_me_caged/code/caged_novo/caged_novo.py
         shell: bash
.github/workflows/metadata-validate.yml (new file)

Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@
name: metadata-validate

on:
  push:
    paths:
      - 'bases/**'
jobs:
  get-changes:
    runs-on: ubuntu-latest
    steps:
      - name: Check file changes
        id: file_changes
        uses: trilom/file-changes-action@v1.2.4
      - name: Copy file changes
        run: cp $HOME/files.json files.json
      - name: Upload file changes
        uses: actions/upload-artifact@v2
        with:
          name: push-changes
          path: files.json

  metadata-validate:
    needs: get-changes
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Download changes
        uses: actions/download-artifact@v2
        with:
          name: push-changes
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.9.x"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install basedosdados==1.6.0 toml
      - name: Set up base dos dados environment
        run: python .github/workflows/env-setup/env_setup.py
        shell: bash
        env:
          BUCKET_NAME: basedosdados
          PROJECT_NAME_PROD: basedosdados
          PROJECT_NAME_STAGING: basedosdados-staging
          GCP_BD_PROD: ${{ secrets.GCP_TABLE_APPROVE_PROD }}
          GCP_BD_STAGING: ${{ secrets.GCP_TABLE_APPROVE_STAGING }}
          CKAN_URL: "https://staging.basedosdados.org"
          CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
      - name: Metadata validate
        run: python -u .github/workflows/metadata-validate/metadata_validate.py
        shell: bash
        env:
          PROJECT_ID: ${{ secrets.GCP_MAIN_PROJECT_ID }}
          BUCKET_NAME_BACKUP: basedosdados-backup
          BUCKET_NAME_DESTINATION: basedosdados
.github/workflows/metadata-validate/metadata_validate.py (new file)

Lines changed: 112 additions & 0 deletions
@@ -0,0 +1,112 @@
import json
import os
import traceback
from pathlib import Path
from pprint import pprint

import basedosdados as bd
import yaml
from basedosdados import Dataset, Storage
from basedosdados.upload.base import Base
from basedosdados.upload.metadata import Metadata


def tprint(title=""):
    if not len(title):
        print(
            "\n",
            "#" * 80,
            "\n",
        )
    else:
        size = 38 - int(len(title) / 2)
        print("\n\n\n", "#" * size, title, "#" * size, "\n")


def load_configs(dataset_id, table_id):
    # get the config file in .basedosdados/config.toml
    configs_path = Base()._load_config()

    # get the metadata_path, where the bases folder with metadata information lives
    metadata_path = configs_path["metadata_path"]

    # get the path to table_config.yaml
    table_path = f"{metadata_path}/{dataset_id}/{table_id}"

    return (
        # load the table_config.yaml
        yaml.load(open(f"{table_path}/table_config.yaml", "r"), Loader=yaml.FullLoader),
        # return the path to the .basedosdados configs
        configs_path,
    )


def get_table_dataset_id():
    # load the changed files in the PR (diff between the PR and master)
    changes = Path("files.json").open("r")
    changes = json.load(changes)

    # create a dict to save the dataset and source_bucket related to each table_id
    dataset_table_ids = {}

    # create a list to save the table folder path for each table changed in the commit
    table_folders = []
    for change_file in changes:
        # get the directory path for a table with changes
        file_dir = Path(change_file).parent

        # append the table directory if it was not already appended
        if file_dir not in table_folders:
            table_folders.append(file_dir)

    # construct the iterable of table_config paths
    table_config_paths = [Path(root / "table_config.yaml") for root in table_folders]

    # iterate through each config path
    for filepath in table_config_paths:

        # check if the table_config.yaml exists in the changed folder
        if filepath.is_file():

            # load the found table_config.yaml
            table_config = yaml.load(open(filepath, "r"), Loader=yaml.SafeLoader)

            # add the dataset and source_bucket for each table_id
            dataset_table_ids[table_config["table_id"]] = {
                "dataset_id": table_config["dataset_id"],
                "source_bucket_name": table_config["source_bucket_name"],
            }

    return dataset_table_ids


def metadata_validate():
    # find the datasets and tables of the PR
    dataset_table_ids = get_table_dataset_id()

    # print dataset tables info
    tprint("TABLES FOUND")
    pprint(dataset_table_ids)
    tprint()

    # iterate over each table of the PR
    for table_id in dataset_table_ids.keys():
        dataset_id = dataset_table_ids[table_id]["dataset_id"]
        source_bucket_name = dataset_table_ids[table_id]["source_bucket_name"]

        try:
            # validate the table's metadata
            md = Metadata(dataset_id=dataset_id, table_id=table_id)

            md.validate()
            tprint(f"SUCCESS VALIDATE {dataset_id}.{table_id}")
            tprint()

        except Exception as error:
            tprint(f"ERROR ON {dataset_id}.{table_id}")
            traceback.print_exc()
            tprint()


if __name__ == "__main__":
    metadata_validate()
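Because the script reads files.json from the working directory (in CI, the artifact produced by the get-changes job), it can be dry-run locally by writing that file by hand; the changed path below is hypothetical and must point at a real <metadata_path>/<dataset_id>/<table_id> folder for validation to succeed.

# Fake the CI artifact, then run the validator from the repository root.
import json

changed_files = ["bases/br_exemplo/tabela_exemplo/table_config.yaml"]  # placeholder path
with open("files.json", "w", encoding="utf-8") as f:
    json.dump(changed_files, f)

# then: python .github/workflows/metadata-validate/metadata_validate.py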

.github/workflows/python-ci.yml

Lines changed: 16 additions & 10 deletions
@@ -1,13 +1,15 @@
 name: python-ci
+
 on:
   push:
-    branches:
-      - master
-  pull_request:
+    # branches:
+    #   - master
+    #   - python-1.6.0
     paths:
       - .github/**
       - python-package/**
-  workflow_dispatch:
+
+
 jobs:
   guard:
     runs-on: ubuntu-latest
@@ -39,12 +41,12 @@ jobs:
         run: |
           python -m pip install --upgrade pip
           python -m pip install isort black pylint
-      - name: Check library sort
-        run: python -m isort --check-only --profile black .github python-package
-      - name: Check code format
-        run: python -m black --check .github python-package
-      - name: Check lint
-        run: python -m pylint --exit-zero .github/**/*.py python-package
+      # - name: Check library sort
+      #   run: python -m isort --check-only --profile black .github python-package
+      # - name: Check code format
+      #   run: python -m black --check .github python-package
+      # - name: Check lint
+      #   run: python -m pylint --exit-zero .github/**/*.py python-package
   build-linux:
     needs: lint
     runs-on: ubuntu-latest
@@ -74,6 +76,8 @@ jobs:
           PROJECT_NAME_STAGING: basedosdados-dev
           GCP_BD_PROD: ${{ secrets.GCP_BD_DEV_PROD }}
           GCP_BD_STAGING: ${{ secrets.GCP_BD_DEV_STAGING }}
+          CKAN_URL: "https://staging.basedosdados.org"
+          CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
         shell: bash
       - name: Test
         if: github.event_name == 'pull_request'
@@ -121,6 +125,8 @@ jobs:
           PROJECT_NAME_STAGING: basedosdados-dev
           GCP_BD_PROD: ${{ secrets.GCP_BD_DEV_PROD }}
           GCP_BD_STAGING: ${{ secrets.GCP_BD_DEV_STAGING }}
+          CKAN_URL: "https://staging.basedosdados.org"
+          CKAN_API_KEY: ${{ secrets.CKAN_STAGING }}
       - name: Test
         if: github.event_name == 'pull_request'
         run: |
