Commit ddf35bd

[chore] Added tests to verify the linter does not get stuck in an infinite loop (#3225)

The bug was fixed in v0.46.0:
- #3000
- #3027

See:
- #2976

`default-format-changed-in-dbr8` and `sql-parse-error` are ignored for LSP plugin output.
1 parent 95c8eae commit ddf35bd

File tree

1 file changed: 14 additions, 0 deletions
@@ -0,0 +1,14 @@

```python
import pyspark.sql.functions as F

# ucx[default-format-changed-in-dbr8:+1:17:+1:41] The default format changed in Databricks Runtime 8.0, from Parquet to Delta
churn_features = spark.table("something")
churn_features = (churn_features.withColumn("random", F.rand(seed=42)).withColumn("split",F.when(F.col("random") < train_ratio, "train").when(F.col("random") < train_ratio + val_ratio, "validate").otherwise("test")).drop("random"))

# ucx[default-format-changed-in-dbr8:+1:1:+1:109] The default format changed in Databricks Runtime 8.0, from Parquet to Delta
(churn_features.write.mode("overwrite").option("overwriteSchema", "true").saveAsTable("mlops_churn_training"))

# ucx[default-format-changed-in-dbr8:+1:21:+1:74] The default format changed in Databricks Runtime 8.0, from Parquet to Delta
sdf_system_columns = spark.read.table("system.information_schema.columns")

# ucx[sql-parse-error:+1:14:+1:140] SQL expression is not supported yet: SELECT 1 AS col1, 2 AS col2, 3 AS col3 FROM {sdf_system_columns} LIMIT 5
sdf_example = spark.sql("SELECT 1 AS col1, 2 AS col2, 3 AS col3 FROM {sdf_system_columns} LIMIT 5", sdf_system_columns = sdf_system_columns)
```
