[SPARK-52272][SQL] V2SessionCatalog does not alter schema on Hive Catalog #51007

Closed

Conversation

@szehon-ho (Contributor) commented May 24, 2025:

What changes were proposed in this pull request?

V2SessionCatalog delegates alterTable to the V1 SessionCatalog's alterTable. For the "hive" case, this PR changes it to delegate to either alterTable or alterTableDataSchema, depending on the kind of change.

Why are the changes needed?

SessionCatalog has two APIs: alterTable and alterTableDataSchema.

alterTable silently ignores schema changes in the Hive implementation (per the API Javadoc: "If the underlying implementation does not support altering a certain field, this becomes a no-op.").

So for the Hive case, V2SessionCatalog needs to delegate schema changes to the V1 alterTableDataSchema.
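As a rough sketch of the intended delegation (not the merged patch; the names catalogV1, ident, schema, catalogTable, changes, and finalProperties are assumed from V2SessionCatalog's existing alterTable implementation, and the exact structure was refined during the review below):

```scala
// Sketch only: route column changes through the V1 alterTableDataSchema when
// the session catalog is backed by Hive, since Hive's alterTable ignores the
// schema in the passed CatalogTable.
val isHiveCatalog =
  SQLConf.get.getConf(StaticSQLConf.CATALOG_IMPLEMENTATION).equals("hive")

// Non-column changes (properties, comment, ...) keep going through V1 alterTable.
catalogV1.alterTable(
  catalogTable.copy(schema = schema, properties = finalProperties))

// Column changes additionally go through V1 alterTableDataSchema, which the
// Hive external catalog does honor. (The real patch may need to pass only the
// data schema, i.e. without partition columns.)
if (isHiveCatalog && changes.exists(_.isInstanceOf[TableChange.ColumnChange])) {
  catalogV1.alterTableDataSchema(ident.asTableIdentifier, schema)
}
```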

Does this PR introduce any user-facing change?

No

How was this patch tested?

Added a new test in HiveDDLSuite that verifies both the functionality and which SessionCatalog method is called.
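The end-to-end behavior being protected can be pictured roughly as follows (an illustrative sketch only, not the actual HiveDDLSuite test; the table and column names are made up, and it assumes the ALTER is routed through V2SessionCatalog.alterTable):

```scala
// Illustrative only; the merged test also asserts which V1 SessionCatalog
// method is invoked (see the Mockito discussion in the review below).
withTable("t") {
  sql("CREATE TABLE t (id INT) USING hive")
  sql("ALTER TABLE t ADD COLUMNS (name STRING)")
  // Before the fix, the Hive catalog could silently keep the old schema.
  assert(spark.table("t").schema.fieldNames.toSeq == Seq("id", "name"))
}
```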

Was this patch authored or co-authored using generative AI tooling?

No

Review thread on the V2SessionCatalog.alterTable diff:

      catalogTable.copy(
        properties = finalProperties, schema = schema, owner = owner, comment = comment,
        collation = collation, storage = storage))
    if (SQLConf.get.getConf(StaticSQLConf.CATALOG_IMPLEMENTATION).equals("hive")) {
@szehon-ho (Contributor, Author) commented May 24, 2025:

Note: SessionCatalog (V1) alterTableDataSchema explicitly does not support 'drop column'. https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala#L499

So to avoid a regression in InMemoryCatalog, in which alterTable can alter the schema including dropping columns, I made the change only for "hive", following the config check for CATALOG_IMPLEMENTATION used in other parts of the code.

A reviewer (Contributor) commented:

Does it break any test? InMemoryCatalog is for testing and I think it's OK to always call v1 alterTableDataSchema for table schema changes.

@szehon-ho (Contributor, Author) commented May 27, 2025:

Let me check if it breaks tests.

@@ -3392,4 +3397,180 @@ class HiveDDLSuite
)
}
}

test("SPARK-52272: V2SessionCatalog does not alter schema on Hive Catalog") {
val externalCatalog = new CustomHiveCatalog(spark.sessionState.catalog.externalCatalog)
A reviewer (Contributor) commented:

Why do we need a CustomHiveCatalog to test it? I think we can get the real v2 session catalog via session.sessionState.catalogManager.v2SessionCatalog and test it.

@szehon-ho (Contributor, Author) commented May 27, 2025:

Yea, sorry, it was late :). I rewrote it using Mockito; I needed something to check which methods are called.
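Roughly, the Mockito-based verification can look like this (a hedged sketch under assumed names, not the merged HiveDDLSuite test):

```scala
import org.mockito.ArgumentMatchers.any
import org.mockito.Mockito.{spy, verify}

import org.apache.spark.sql.connector.catalog.{Identifier, TableChange}
import org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog
import org.apache.spark.sql.types.StringType

// Sketch: spy on the V1 SessionCatalog so the test can assert which API the
// V2SessionCatalog delegates to for a column change. Assumes a Hive-backed
// session and an existing table `default.t`.
val v1Spy = spy(spark.sessionState.catalog)
val v2Catalog = new V2SessionCatalog(v1Spy)

v2Catalog.alterTable(
  Identifier.of(Array("default"), "t"),
  TableChange.addColumn(Array("name"), StringType))

// The column change should reach alterTableDataSchema rather than being
// silently dropped by Hive's alterTable.
verify(v1Spy).alterTableDataSchema(any(), any())
```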

Review thread on the updated V2SessionCatalog.alterTable diff:

        properties = finalProperties, schema = schema, owner = owner, comment = comment,
        collation = collation, storage = storage))
    }
    if (changes.exists(_.isInstanceOf[TableChange.ColumnChange])) {
A reviewer (Contributor) commented:

The code assumes that the table changes are either all column changes, or all non-column changes. We should either use assert to guarantee this assumption, or not make this assumption: call both alterTable and alterTableDataSchema if both column and non-column changes are present.

@szehon-ho (Contributor, Author) commented May 27, 2025:

Hm, yea, I think this code does call both, if both are present?
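For illustration, a mixed change set satisfies both checks when they are written as independent if blocks rather than an if/else (a hedged example; the column and property names are made up):

```scala
import org.apache.spark.sql.connector.catalog.TableChange
import org.apache.spark.sql.types.StringType

// One column change plus one non-column change in the same ALTER statement.
val changes: Seq[TableChange] = Seq(
  TableChange.addColumn(Array("name"), StringType),        // a ColumnChange
  TableChange.setProperty("custom_key", "custom_value"))   // not a ColumnChange

// With independent checks, both V1 delegations fire for this mixed set:
assert(changes.exists(_.isInstanceOf[TableChange.ColumnChange]))   // -> alterTableDataSchema
assert(changes.exists(!_.isInstanceOf[TableChange.ColumnChange]))  // -> alterTable
```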

@cloud-fan (Contributor) commented:

org.apache.spark.sql.AnalysisException: Some existing schema fields ([id]) are not present in the new schema. We don't support dropping columns yet.

Seems a v2 test is testing an unsupported feature. Shall we remove it?

Commit pushed: …essionCatalog due to V2 sessionCatalog limitations
@cloud-fan (Contributor) commented:

thanks, merging to master!

@cloud-fan cloud-fan closed this in 7e48ba7 May 29, 2025