Releases: fivetran/dbt_fivetran_log
v2.1.1-a1 dbt_fivetran_log
PR #153 includes the following updates:
Under the Hood
- Incorporated `fivetran_platform__credits_pricing` and `fivetran_platform_using_transformations` into the `quickstart.yml` file.
- Updated the package maintainer PR template.
Full Changelog: v2.1.0...v2.1.1-a1
v2.1.0 dbt_fivetran_log
PR #150 includes the following updates:
Dependency Changes
- Removed the dependency on calogica/dbt_date, as it is no longer actively maintained. To maintain functionality, key date macros have been replicated within the `fivetran_date_macros` folder with minimal modifications. Only macro versions supporting the destinations supported by Fivetran Log are retained, and all have been prefixed with `fivetran_` to avoid naming conflicts:
  - `date_part` -> `fivetran_date_part`
  - `day_name` -> `fivetran_day_name`
  - `day_of_month` -> `fivetran_day_of_month`
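Assuming the replicated macros keep dbt_date's original signatures (an assumption; check the `fivetran_date_macros` folder for the exact arguments), a downstream model could call them like so:

```sql
-- Hypothetical downstream usage of the replicated date macros.
select
    created_at,
    {{ fivetran_log.fivetran_date_part('dayofweek', 'created_at') }} as created_day_of_week,
    {{ fivetran_log.fivetran_day_name('created_at') }} as created_day_name,
    {{ fivetran_log.fivetran_day_of_month('created_at') }} as created_day_of_month
from {{ ref('stg_fivetran_platform__log') }}
```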
Under the Hood
- Created a consistency test on `fivetran_platform__audit_user_activity` to ensure `day_name` and `day_of_month` counts match.
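A consistency test of this shape typically selects rows only when the two counts diverge; a minimal sketch (the actual test lives in the maintainers' integration tests, so the exact form here is illustrative):

```sql
-- Illustrative consistency check: fail (return rows) if the number of
-- non-null day_name values differs from the number of non-null day_of_month values.
with counts as (
    select
        count(day_name) as day_name_count,
        count(day_of_month) as day_of_month_count
    from {{ ref('fivetran_platform__audit_user_activity') }}
)
select *
from counts
where day_name_count != day_of_month_count
```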
Full Changelog: v2.0.0...v2.1.0
v2.0.0 dbt_fivetran_log
PR #144 includes the following updates:
Breaking Changes - Action Required
A `--full-refresh` is required after upgrading to prevent errors caused by naming and materialization changes. Additionally, downstream queries must be updated to reflect new model and column names.
- The materialization of all `stg_*` staging models has been updated from `table` to `view`.
  - Previously, `stg_*_tmp` models were views while the non-`*_tmp` versions were tables. Now all are views to eliminate redundant data storage.
- Source Table Transition:
  - The `CONNECTOR` source table is deprecated and replaced by `CONNECTION`. During a brief transition period, both tables will be identical, but `CONNECTOR` will stop receiving data and be removed at a later time.
    - This change clarifies the distinction: Connectors facilitate the creation of connections between sources and destinations.
  - The `CONNECTION` table is now the default source.
    - For Quickstart users: The `CONNECTOR` table will automatically be used if `CONNECTION` is not yet available.
    - For dbt Core users: Users without the `CONNECTION` source can continue using `CONNECTOR` by adding the following variable to your root `dbt_project.yml` file:

      ```yml
      vars:
        fivetran_platform_using_connection: false # default: true
      ```

    - For more details, refer to the README.
- New Columns:
  - As part of the `CONNECTION` updates, the following columns have been added alongside their `connector_*` equivalents:
    - INCREMENTAL_MAR: `connection_name`
    - LOG: `connection_id`
- Renamed Models:
  - `fivetran_platform__connector_status` → `fivetran_platform__connection_status`
  - `fivetran_platform__connector_daily_events` → `fivetran_platform__connection_daily_events`
  - `fivetran_platform__usage_mar_destination_history` → `fivetran_platform__usage_history`
  - `stg_fivetran_platform__connector` → `stg_fivetran_platform__connection`
  - `stg_fivetran_platform__connector_tmp` → `stg_fivetran_platform__connection_tmp`

  NOTE: Ensure any downstream queries are updated to reflect the new model names.
- Renamed Columns:
  - Renamed `connector_id` to `connection_id` and `connector_name` to `connection_name` in the following models:
    - `fivetran_platform__connection_status`
      - Also renamed `connector_health` to `connection_health`
    - `fivetran_platform__mar_table_history`
    - `fivetran_platform__connection_daily_events`
    - `fivetran_platform__audit_table`
    - `fivetran_platform__audit_user_activity`
    - `fivetran_platform__schema_changelog`
    - `stg_fivetran_platform__connection`
    - `stg_fivetran_platform__log`
      - `connector_id` to `connection_id` only
    - `stg_fivetran_platform__incremental_mar`
      - `connector_name` to `connection_name` only

  NOTE: Ensure any downstream queries are updated to reflect the new column names (see the example below).
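For example, a downstream query written against the old names would be updated as follows:

```sql
-- Before (v1.x model and column names):
select connector_id, connector_health
from {{ ref('fivetran_platform__connector_status') }}

-- After (v2.0.0 names):
select connection_id, connection_health
from {{ ref('fivetran_platform__connection_status') }}
```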
Features
- Added macro `coalesce_cast` to ensure consistent data types when using `coalesce`, preventing potential errors (a sketch follows below).
- Added macro `get_connection_columns` for the new `CONNECTION` source.
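A minimal sketch of the idea behind a macro like `coalesce_cast`, assuming it takes a list of column expressions and a target datatype (argument names here are assumptions; see the package's macros folder for the actual implementation):

```sql
{% macro coalesce_cast(column_list, datatype) %}
    {#- Cast every argument to a common datatype before coalescing,
        so the warehouse never has to reconcile mismatched types. -#}
    coalesce(
        {%- for column in column_list %}
        cast({{ column }} as {{ datatype }}){% if not loop.last %},{% endif %}
        {%- endfor %}
    )
{% endmacro %}
```

It could then be invoked in a model as, e.g., `{{ coalesce_cast(['connection_name', 'connection_id'], dbt.type_string()) }}`.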
Documentation
- Updated documentation to reflect all renames and the source table transition.
Under the Hood (Maintainers Only)
- Updated consistency and integrity tests to align with naming changes.
- Refactored seeds and `get_*_columns` macros to reflect renames.
- Added a new seed for the `CONNECTION` table.
- Updated `run_models` to test the new `fivetran_platform_using_connection` var.
Full Changelog: v1.11.0...v2.0.0
v1.11.0 dbt_fivetran_log
PR #141 includes the following updates:
Schema Changes: Adding the Transformation Runs Table
- This package now accounts for the `transformation_runs` source table. Therefore, a new staging model `stg_fivetran_platform__transformation_runs` has been added. Note that not all customers have the `transformation_runs` source table, particularly if they are not using Fivetran Transformations. If the table doesn't exist, `stg_fivetran_platform__transformation_runs` will persist as an empty model and respective downstream fields will be null (see the sketch below).
- In addition, the following fields have been added to the `fivetran_platform__usage_mar_destination_history` end model:
  - `paid_model_runs`
  - `free_model_runs`
  - `total_model_runs`
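A sketch of how an empty-model fallback like this is commonly implemented in dbt, gated on the `fivetran_platform_using_transformations` variable introduced in this release (the real model's column list differs; the columns below are illustrative):

```sql
-- stg_fivetran_platform__transformation_runs.sql (sketch)
{% if var('fivetran_platform_using_transformations', true) %}

select *
from {{ source('fivetran_platform', 'transformation_runs') }}

{% else %}

-- Source table absent: persist an empty model with typed null columns
-- so downstream references still compile.
select
    cast(null as {{ dbt.type_string() }}) as destination_id,
    cast(null as {{ dbt.type_int() }}) as paid_model_runs
limit 0

{% endif %}
```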
Documentation Updates
- Included documentation about the `transformation_runs` source table and the aggregated `*_model_runs` fields.
- Added information about manually configuring the `fivetran_platform_using_transformations` variable in the DECISION LOG.
- Added Quickstart model counts to README. (#145)
- Corrected references to connectors and connections in the README. (#145)
Under the Hood
- Introduced the variable `fivetran_platform_using_transformations` to control the `stg_fivetran_platform__transformation_runs` output. It is configured based on whether the `transformation_runs` table exists. For more information, refer to the DECISION LOG.
- Added the `get_transformation_runs_columns()` macro to ensure all required columns are present.
- Added `transformation_runs` seed data in `integration_tests/seeds/`.
- Added a `run_count__usage_mar_destination_history` validation test to check model run counts across the staging and end models (illustrated below).
- (Redshift only) Updated to use `limit 1` instead of `limit 0` for empty tables. This ensures that Redshift will respect the package's datatype casts.
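The validation presumably compares aggregate run counts between the staging model and the end model; an illustrative version (not the package's exact test):

```sql
-- Illustrative run-count validation: return rows (i.e., fail) if totals diverge.
with staging as (
    select count(*) as run_count
    from {{ ref('stg_fivetran_platform__transformation_runs') }}
),
end_model as (
    select sum(total_model_runs) as run_count
    from {{ ref('fivetran_platform__usage_mar_destination_history') }}
)
select
    staging.run_count as staging_run_count,
    end_model.run_count as end_model_run_count
from staging
cross join end_model
where staging.run_count != end_model.run_count
```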
Full Changelog: v1.10.0...v1.11.0
v1.10.0 dbt_fivetran_log
PR #140 includes the following updates:
Breaking Changes
A `--full-refresh` is recommended after upgrading to ensure historical records in incremental models are refreshed.
- Updated the `fivetran_log_json_parse` macro for Redshift to return `NULL` instead of an empty string when a JSON path is not found. This resolves errors caused by casting empty strings to integers in Redshift.
- Standardized the `message_data` field from the `LOG` source, in which JSON key names can appear in both camelCase (e.g., `{"totalQueries":5}`) and snake_case (e.g., `{"total_queries":5}`) formats, depending on the Fivetran connector version. The `fivetran_platform__audit_table` and `fivetran_platform__connector_daily_events` models now convert all key names to snake_case for consistency (illustrated below).
- These changes are considered breaking because the standardization of key names (e.g., `totalQueries` to `total_queries`) may impact downstream reporting by including previously ignored values.
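Illustratively (this is not the package's exact logic), standardizing a known camelCase key to its snake_case spelling before parsing can be as simple as a string replacement:

```sql
-- Map the camelCase spelling onto the snake_case one so downstream
-- JSON extraction only has to look for a single key.
select
    replace(message_data, '"totalQueries"', '"total_queries"') as message_data
from {{ ref('stg_fivetran_platform__log') }}
```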
Under the Hood (Maintainers Only)
- Enhanced seed data for integration testing to include the different spellings and ensure compatibility with Redshift.
Full Changelog: v1.9.1...v1.10.0
v1.9.1 dbt_fivetran_log
PR #138 includes the following updates:
Features
- For Fivetran Platform Connectors created after November 2024, Fivetran has deprecated the `api_call` event in favor of `extract_summary` (release notes).
- Accordingly, we have updated the `fivetran_platform__connector_daily_events` model to support the new `extract_summary` event while maintaining backward compatibility with the `api_call` event for connectors created before November 2024.
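In practice, backward compatibility of this kind usually amounts to filtering on both event names; a hedged sketch (assuming the event name lives in an `event_subtype` field — the actual column and filter may differ):

```sql
-- Accept both the legacy and the replacement event when counting daily events.
select *
from {{ ref('stg_fivetran_platform__log') }}
where event_subtype in ('api_call', 'extract_summary')
```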
Under the Hood
- Replaced the deprecated `dbt.current_timestamp_backcompat()` function with `dbt.current_timestamp()` to ensure all timestamps are captured in UTC.
- Updated `fivetran_platform__connector_daily_events` to support running `dbt compile` prior to the initial `dbt run` on a new schema.
Full Changelog: v1.9.0...v1.9.1
v1.9.0 dbt_fivetran_log
PR #132 includes the following updates:
🚨 Schema Changes 🚨
- Following the July 2024 Fivetran Platform connector update, the `connector_name` field has been added to the `incremental_mar` source table. As a result, the following changes have been applied:
  - A new tmp model `stg_fivetran_platform__incremental_mar_tmp` has been created. This is necessary to ensure column consistency in downstream `incremental_mar` models.
  - The `get_incremental_mar_columns()` macro has been added to ensure all required columns are present in the `stg_fivetran_platform__incremental_mar` model.
  - The `stg_fivetran_platform__incremental_mar` model has been updated to reference both the aforementioned tmp model and macro to fill empty fields if any required field is not present in the source (sketched below).
  - The `connector_name` field in the `stg_fivetran_platform__incremental_mar` model is now defined by `coalesce(connector_name, connector_id)`. This ensures the data model will use the appropriate field to define the `connector_name`.
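A sketch of the resulting staging pattern, assuming the `fivetran_utils.fill_staging_columns` approach used across Fivetran packages (details may differ from the actual model):

```sql
-- stg_fivetran_platform__incremental_mar.sql (sketch)
with base as (
    select *
    from {{ ref('stg_fivetran_platform__incremental_mar_tmp') }}
),

fields as (
    select
        -- Null-fill any required column missing from the source so the
        -- model's shape is stable across accounts.
        {{
            fivetran_utils.fill_staging_columns(
                source_columns=adapter.get_columns_in_relation(ref('stg_fivetran_platform__incremental_mar_tmp')),
                staging_columns=get_incremental_mar_columns()
            )
        }}
    from base
)

select
    -- Prefer the new field; fall back to the id for older connectors.
    coalesce(connector_name, connector_id) as connector_name,
    connector_id
    -- remaining columns omitted in this sketch
from fields
```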
Under the Hood
- Updated integration test seed data within `integration_tests/seeds/incremental_mar.csv` to ensure new code updates are working as expected.
Full Changelog: v1.8.0...v1.9.0
v1.8.0 dbt_fivetran_log
PR #130 includes the following updates:
🚨 Breaking Changes 🚨
⚠️ Since the following changes result in the table format changing, we recommend running a `--full-refresh` after upgrading to this version to avoid possible incremental failures.
- For Databricks All-Purpose clusters, the `fivetran_platform__audit_table` model will now be materialized using the delta table format (previously parquet); see the config sketch below.
  - Delta tables are generally more performant than parquet and are also more widely available for Databricks users. Previously, the parquet file format was causing compilation issues on customers' managed tables.
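In dbt-databricks terms, that change maps to the model's `file_format` config; a minimal sketch (the model's actual config includes more options):

```sql
-- fivetran_platform__audit_table.sql (config sketch for Databricks)
{{ config(
    materialized='incremental',
    file_format='delta'  -- previously 'parquet'
) }}
```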
Documentation Updates
- Updated the `sync_start` and `sync_end` field descriptions for the `fivetran_platform__audit_table` to explicitly define that these fields only represent the sync start/end times for when the connector wrote new records to, or modified existing records in, the specified table.
- Added integrity and consistency validation tests within integration tests for every end model.
- Removed duplicate Databricks dispatch instructions listed in the README.
Under the Hood
- The `is_databricks_sql_warehouse` macro has been renamed to `is_incremental_compatible` and has been modified to return `true` if the Databricks runtime being used is an all-purpose cluster (previously this macro checked if a SQL warehouse runtime was used) or if any other supported non-Databricks destination is being used.
  - This update was applied because other Databricks runtimes have been discovered (i.e., an endpoint and external runtime) which do not support the `insert_overwrite` incremental strategy used in the `fivetran_platform__audit_table` model.
- In addition to the above, for Databricks users the `fivetran_platform__audit_table` model will now leverage the incremental strategy only if the Databricks runtime is all-purpose; all other Databricks runtimes will not leverage an incremental strategy.
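Conceptually, the model's config gates its materialization on that macro; a hedged sketch, not the model's full config (the macro call and fallback shown here are assumptions):

```sql
{{ config(
    -- Only use the incremental strategy on runtimes that support it;
    -- other Databricks runtimes fall back to a full table build.
    materialized='incremental' if fivetran_log.is_incremental_compatible() else 'table'
) }}
```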
Full Changelog: v1.7.3...v1.8.0
v1.7.3 dbt_fivetran_log
PR #126 includes the following updates:
Performance Improvements
- Updated the sequence of JSON parsing for model `fivetran_platform__audit_table` to reduce runtime.
Bug Fixes
- Updated model `fivetran_platform__audit_user_activity` to correct the JSON parsing used to determine column `email`. This fixes an issue introduced in v1.5.0 where `fivetran_platform__audit_user_activity` could potentially have 0 rows.
Under the Hood
- Updated logic for macro `fivetran_log_lookback` to align with logic used in similar macros in other packages.
- Updated logic for the Postgres dispatch of macro `fivetran_log_json_parse` to utilize `jsonb` instead of `json` for performance.
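On Postgres the difference is essentially which type the raw string is cast to before extraction; an illustrative fragment (not the macro's exact output):

```sql
-- jsonb is parsed once into a binary representation, so key lookups
-- avoid re-parsing the raw text on every access (unlike the json type).
select cast(message_data as jsonb) ->> 'connector_id' as connector_id
from {{ ref('stg_fivetran_platform__log') }}
```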
Full Changelog: v1.7.2...v1.7.3
v1.7.2 dbt_fivetran_log
PR #123 includes the following updates:
Bug Fixes
- Removed the leading `/` from the `target.http_path` regex search within the `is_databricks_sql_warehouse()` macro to accurately identify SQL Warehouse Databricks destinations in Quickstart.
  - The macro initially worked as expected in dbt Core environments; however, in Quickstart implementations this data model was not working. This was because Quickstart removes the leading `/` from the `target.http_path`, causing the regex search to always fail.
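Illustratively, the fix amounts to making the leading slash optional in the pattern checked against `target.http_path`; a sketch using dbt's built-in `modules.re` (the macro's actual pattern may differ):

```sql
{#- Match both '/sql/...' (dbt Core) and 'sql/...' (Quickstart) paths. -#}
{% set is_sql_warehouse = modules.re.search('^/?sql/', target.http_path) is not none %}
```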
Full Changelog: v1.7.1...v1.7.2