diff --git a/.codegen/_openapi_sha b/.codegen/_openapi_sha index 3b0b1fda..3e670818 100644 --- a/.codegen/_openapi_sha +++ b/.codegen/_openapi_sha @@ -1 +1 @@ -d4c86c045ee9d0410a41ef07e8ae708673b95fa1 \ No newline at end of file +2cee201b2e8d656f7306b2f9ec98edfa721e9829 \ No newline at end of file diff --git a/NEXT_CHANGELOG.md b/NEXT_CHANGELOG.md index 34a0d9f1..35e65b93 100644 --- a/NEXT_CHANGELOG.md +++ b/NEXT_CHANGELOG.md @@ -11,3 +11,64 @@ ### Internal Changes ### API Changes +* Added [a.llm_proxy_partner_powered_account](https://databricks-sdk-py.readthedocs.io/en/latest/account/settings/settings/llm_proxy_partner_powered_account.html) account-level service, [a.llm_proxy_partner_powered_enforce](https://databricks-sdk-py.readthedocs.io/en/latest/account/settings/settings/llm_proxy_partner_powered_enforce.html) account-level service, [w.llm_proxy_partner_powered_workspace](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/settings/settings/llm_proxy_partner_powered_workspace.html) workspace-level service, [a.network_policies](https://databricks-sdk-py.readthedocs.io/en/latest/account/settings/network_policies.html) account-level service and [a.workspace_network_configuration](https://databricks-sdk-py.readthedocs.io/en/latest/account/settings/workspace_network_configuration.html) account-level service. +* Added [w.database_instances](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/catalog/database_instances.html) workspace-level service. +* Added [w.recipient_federation_policies](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/sharing/recipient_federation_policies.html) workspace-level service. +* Added `create_logged_model()`, `delete_logged_model()`, `delete_logged_model_tag()`, `finalize_logged_model()`, `get_logged_model()`, `list_logged_model_artifacts()`, `log_logged_model_params()`, `log_outputs()`, `search_logged_models()` and `set_logged_model_tags()` methods for [w.experiments](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/ml/experiments.html) workspace-level service. +* Added `create_provisioned_throughput_endpoint()` and `update_provisioned_throughput_endpoint_config()` methods for [w.serving_endpoints](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/serving/serving_endpoints.html) workspace-level service. +* Added `uc_securable` field for `databricks.sdk.service.apps.AppResource`. +* Added `enable_file_events` and `file_event_queue` fields for `databricks.sdk.service.catalog.CreateExternalLocation`. +* Added `catalog_name` field for `databricks.sdk.service.catalog.EnableRequest`. +* Added `enable_file_events` and `file_event_queue` fields for `databricks.sdk.service.catalog.ExternalLocationInfo`. +* Added `timeseries_columns` field for `databricks.sdk.service.catalog.PrimaryKeyConstraint`. +* Added `enable_file_events` and `file_event_queue` fields for `databricks.sdk.service.catalog.UpdateExternalLocation`. +* Added `review_state`, `reviews` and `runner_collaborator_aliases` fields for `databricks.sdk.service.cleanrooms.CleanRoomAssetNotebook`. +* Added `notebook_etag` and `notebook_updated_at` fields for `databricks.sdk.service.cleanrooms.CleanRoomNotebookTaskRun`. +* Added `policy_id` and `service_principal_id` fields for `databricks.sdk.service.oauth2.FederationPolicy`. +* Added `root_path` field for `databricks.sdk.service.pipelines.CreatePipeline`. +* Added `root_path` field for `databricks.sdk.service.pipelines.EditPipeline`. 
+* Added `source_type` field for `databricks.sdk.service.pipelines.IngestionPipelineDefinition`.
+* Added `glob` field for `databricks.sdk.service.pipelines.PipelineLibrary`.
+* Added `root_path` field for `databricks.sdk.service.pipelines.PipelineSpec`.
+* Added `provisioned_model_units` field for `databricks.sdk.service.serving.ServedEntityInput`.
+* Added `provisioned_model_units` field for `databricks.sdk.service.serving.ServedEntityOutput`.
+* Added `provisioned_model_units` field for `databricks.sdk.service.serving.ServedModelInput`.
+* Added `provisioned_model_units` field for `databricks.sdk.service.serving.ServedModelOutput`.
+* Added `materialization_namespace` field for `databricks.sdk.service.sharing.Table`.
+* Added `omit_permissions_list` field for `databricks.sdk.service.sharing.UpdateSharePermissions`.
+* Added `auto_resolve_display_name` field for `databricks.sdk.service.sql.UpdateAlertRequest`.
+* Added `auto_resolve_display_name` field for `databricks.sdk.service.sql.UpdateQueryRequest`.
+* Added `internal_catalog`, `managed_online_catalog` and `unknown_catalog_type` enum values for `databricks.sdk.service.catalog.CatalogType`.
+* Added `catalog`, `clean_room`, `connection`, `credential`, `external_location`, `external_metadata`, `function`, `metastore`, `pipeline`, `provider`, `recipient`, `schema`, `share`, `staging_table`, `storage_credential`, `table`, `unknown_securable_type` and `volume` enum values for `databricks.sdk.service.catalog.SecurableType`.
+* Added `describe_query_invalid_sql_error`, `describe_query_timeout`, `describe_query_unexpected_failure`, `invalid_chat_completion_arguments_json_exception`, `invalid_sql_multiple_dataset_references_exception`, `invalid_sql_multiple_statements_exception` and `invalid_sql_unknown_table_exception` enum values for `databricks.sdk.service.dashboards.MessageErrorType`.
+* Added `can_create` and `can_monitor_only` enum values for `databricks.sdk.service.iam.PermissionLevel`.
+* Added `success_with_failures` enum value for `databricks.sdk.service.jobs.TerminationCodeCode`.
+* Added `infrastructure_maintenance` enum value for `databricks.sdk.service.pipelines.StartUpdateCause`.
+* Added `infrastructure_maintenance` enum value for `databricks.sdk.service.pipelines.UpdateInfoCause`.
+* [Breaking] Changed `create_alert()` and `update_alert()` methods for [w.alerts_v2](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/sql/alerts_v2.html) workspace-level service with a new required argument order.
+* [Breaking] Changed `set()` method for [w.permissions](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/iam/permissions.html) workspace-level service. New request type is `databricks.sdk.service.iam.SetObjectPermissions` dataclass.
+* [Breaking] Changed `update()` method for [w.permissions](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/iam/permissions.html) workspace-level service. New request type is `databricks.sdk.service.iam.UpdateObjectPermissions` dataclass.
+* [Breaking] Changed `get()` method for [w.workspace_bindings](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/catalog/workspace_bindings.html) workspace-level service to return `databricks.sdk.service.catalog.GetCatalogWorkspaceBindingsResponse` dataclass.
+* [Breaking] Changed `get_bindings()` method for [w.workspace_bindings](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/catalog/workspace_bindings.html) workspace-level service to return `databricks.sdk.service.catalog.GetWorkspaceBindingsResponse` dataclass.
+* [Breaking] Changed `update()` method for [w.workspace_bindings](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/catalog/workspace_bindings.html) workspace-level service to return `databricks.sdk.service.catalog.UpdateCatalogWorkspaceBindingsResponse` dataclass.
+* [Breaking] Changed `update_bindings()` method for [w.workspace_bindings](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/catalog/workspace_bindings.html) workspace-level service to return `databricks.sdk.service.catalog.UpdateWorkspaceBindingsResponse` dataclass.
+* [Breaking] Changed `securable_type` field for `databricks.sdk.service.catalog.CatalogInfo` to type `databricks.sdk.service.catalog.SecurableType` enum.
+* [Breaking] Changed `securable_type` field for `databricks.sdk.service.catalog.GetBindingsRequest` to type `str`.
+* Changed `schema` and `state` fields for `databricks.sdk.service.catalog.SystemSchemaInfo` to be required.
+* [Breaking] Changed `state` field for `databricks.sdk.service.catalog.SystemSchemaInfo` to type `str`.
+* [Breaking] Changed `securable_type` field for `databricks.sdk.service.catalog.UpdateWorkspaceBindingsParameters` to type `str`.
+* [Breaking] Changed `workspace_id` field for `databricks.sdk.service.catalog.WorkspaceBinding` to be required.
+* Changed `etag` and `name` fields for `databricks.sdk.service.iam.RuleSetResponse` to be required.
+* [Breaking] Changed `gpu_node_pool_id` field for `databricks.sdk.service.jobs.ComputeConfig` to no longer be required.
+* [Breaking] Changed `alert` field for `databricks.sdk.service.sql.CreateAlertV2Request` to be required.
+* [Breaking] Changed `alert` field for `databricks.sdk.service.sql.UpdateAlertV2Request` to be required.
+* [Breaking] Removed `access_point` field for `databricks.sdk.service.catalog.CreateExternalLocation`.
+* [Breaking] Removed `access_point` field for `databricks.sdk.service.catalog.ExternalLocationInfo`.
+* [Breaking] Removed `access_point` field for `databricks.sdk.service.catalog.UpdateExternalLocation`.
+* [Breaking] Removed `node_type_flexibility` field for `databricks.sdk.service.compute.EditInstancePool`.
+* [Breaking] Removed `node_type_flexibility` field for `databricks.sdk.service.compute.GetInstancePool`.
+* [Breaking] Removed `node_type_flexibility` field for `databricks.sdk.service.compute.InstancePoolAndStats`.
+* [Breaking] Removed `catalog`, `credential`, `external_location` and `storage_credential` enum values for `databricks.sdk.service.catalog.GetBindingsSecurableType`.
+* [Breaking] Removed `available`, `disable_initialized`, `enable_completed`, `enable_initialized` and `unavailable` enum values for `databricks.sdk.service.catalog.SystemSchemaInfoState`.
+* [Breaking] Removed `catalog`, `credential`, `external_location` and `storage_credential` enum values for `databricks.sdk.service.catalog.UpdateBindingsSecurableType`.
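The sketches below illustrate a few of the headline additions in this changelog. They are illustrative only: the dataclasses, fields and enum values are taken from the generated code in this diff, while anything not visible in the diff (the `external_locations.create()` keyword arguments and the `database_instances` method names) is an assumption that may differ from the released SDK.

```python
# Sketch: granting an app read access to a Unity Catalog volume via the new
# AppResourceUcSecurable resource (class and enums are defined in apps.py below).
from databricks.sdk.service.apps import (
    AppResource,
    AppResourceUcSecurable,
    AppResourceUcSecurableUcSecurablePermission,
    AppResourceUcSecurableUcSecurableType,
)

volume_resource = AppResource(
    name="artifacts-volume",  # hypothetical resource name
    uc_securable=AppResourceUcSecurable(
        securable_full_name="main.default.artifacts",  # hypothetical volume
        securable_type=AppResourceUcSecurableUcSecurableType.VOLUME,
        permission=AppResourceUcSecurableUcSecurablePermission.READ_VOLUME,
    ),
)
```

File events replace the removed `access_point` field on external locations. A minimal sketch using a customer-provided SQS queue, assuming `create()` mirrors the `CreateExternalLocation` dataclass:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import AwsSqsQueue, FileEventQueue

w = WorkspaceClient()

# queue_url is required when supplying your own queue via provided_sqs.
location = w.external_locations.create(
    name="landing-zone",  # hypothetical
    url="s3://my-bucket/landing",  # hypothetical
    credential_name="my-storage-credential",  # hypothetical
    enable_file_events=True,
    file_event_queue=FileEventQueue(
        provided_sqs=AwsSqsQueue(
            queue_url="https://sqs.us-west-2.amazonaws.com/123456789012/my-queue"
        )
    ),
)
```

For the new Database Instances service, a minimal sketch; the `create_database_instance()` and `list_database_instances()` names are assumptions inferred from the `DatabaseInstance` and `ListDatabaseInstancesResponse` dataclasses:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.catalog import DatabaseInstance

w = WorkspaceClient()

# Create a small Postgres instance ("CU_1" is the smallest capacity listed in
# the DatabaseInstance docstring), then enumerate instances in the workspace.
instance = w.database_instances.create_database_instance(
    DatabaseInstance(name="my-pg-instance", capacity="CU_1")  # hypothetical name
)
for inst in w.database_instances.list_database_instances():
    print(inst.name, inst.state)
```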
diff --git a/databricks/sdk/__init__.py b/databricks/sdk/__init__.py index a51b2155..83f8eeab 100755 --- a/databricks/sdk/__init__.py +++ b/databricks/sdk/__init__.py @@ -23,6 +23,7 @@ AccountStorageCredentialsAPI, ArtifactAllowlistsAPI, CatalogsAPI, ConnectionsAPI, CredentialsAPI, + DatabaseInstancesAPI, ExternalLocationsAPI, FunctionsAPI, GrantsAPI, MetastoresAPI, ModelVersionsAPI, OnlineTablesAPI, @@ -89,12 +90,15 @@ DisableLegacyDbfsAPI, DisableLegacyFeaturesAPI, EnableExportNotebookAPI, EnableIpAccessListsAPI, EnableNotebookTableClipboardAPI, EnableResultsDownloadingAPI, EnhancedSecurityMonitoringAPI, - EsmEnablementAccountAPI, IpAccessListsAPI, NetworkConnectivityAPI, - NotificationDestinationsAPI, PersonalComputeAPI, + EsmEnablementAccountAPI, IpAccessListsAPI, + LlmProxyPartnerPoweredAccountAPI, LlmProxyPartnerPoweredEnforceAPI, + LlmProxyPartnerPoweredWorkspaceAPI, NetworkConnectivityAPI, + NetworkPoliciesAPI, NotificationDestinationsAPI, PersonalComputeAPI, RestrictWorkspaceAdminsAPI, SettingsAPI, TokenManagementAPI, TokensAPI, - WorkspaceConfAPI) + WorkspaceConfAPI, WorkspaceNetworkConfigurationAPI) from databricks.sdk.service.sharing import (ProvidersAPI, RecipientActivationAPI, + RecipientFederationPoliciesAPI, RecipientsAPI, SharesAPI) from databricks.sdk.service.sql import (AlertsAPI, AlertsLegacyAPI, AlertsV2API, DashboardsAPI, @@ -233,6 +237,7 @@ def __init__( self._dashboard_widgets = service.sql.DashboardWidgetsAPI(self._api_client) self._dashboards = service.sql.DashboardsAPI(self._api_client) self._data_sources = service.sql.DataSourcesAPI(self._api_client) + self._database_instances = service.catalog.DatabaseInstancesAPI(self._api_client) self._dbfs = DbfsExt(self._api_client) self._dbsql_permissions = service.sql.DbsqlPermissionsAPI(self._api_client) self._experiments = service.ml.ExperimentsAPI(self._api_client) @@ -282,6 +287,7 @@ def __init__( self._query_visualizations = service.sql.QueryVisualizationsAPI(self._api_client) self._query_visualizations_legacy = service.sql.QueryVisualizationsLegacyAPI(self._api_client) self._recipient_activation = service.sharing.RecipientActivationAPI(self._api_client) + self._recipient_federation_policies = service.sharing.RecipientFederationPoliciesAPI(self._api_client) self._recipients = service.sharing.RecipientsAPI(self._api_client) self._redash_config = service.sql.RedashConfigAPI(self._api_client) self._registered_models = service.catalog.RegisteredModelsAPI(self._api_client) @@ -459,6 +465,11 @@ def data_sources(self) -> service.sql.DataSourcesAPI: """This API is provided to assist you in making new query objects.""" return self._data_sources + @property + def database_instances(self) -> service.catalog.DatabaseInstancesAPI: + """Database Instances provide access to a database via REST API or direct SQL.""" + return self._database_instances + @property def dbfs(self) -> DbfsExt: """DBFS API makes it simple to interact with various data sources without having to include a users credentials every time to read a file.""" @@ -684,6 +695,11 @@ def recipient_activation(self) -> service.sharing.RecipientActivationAPI: """The Recipient Activation API is only applicable in the open sharing model where the recipient object has the authentication type of `TOKEN`.""" return self._recipient_activation + @property + def recipient_federation_policies(self) -> service.sharing.RecipientFederationPoliciesAPI: + """The Recipient Federation Policies APIs are only applicable in the open sharing model where the recipient object has the 
authentication type of `OIDC_RECIPIENT`, enabling data sharing from Databricks to non-Databricks recipients.""" + return self._recipient_federation_policies + @property def recipients(self) -> service.sharing.RecipientsAPI: """A recipient is an object you create using :method:recipients/create to represent an organization which you want to allow access shares.""" @@ -916,6 +932,7 @@ def __init__( self._metastore_assignments = service.catalog.AccountMetastoreAssignmentsAPI(self._api_client) self._metastores = service.catalog.AccountMetastoresAPI(self._api_client) self._network_connectivity = service.settings.NetworkConnectivityAPI(self._api_client) + self._network_policies = service.settings.NetworkPoliciesAPI(self._api_client) self._networks = service.provisioning.NetworksAPI(self._api_client) self._o_auth_published_apps = service.oauth2.OAuthPublishedAppsAPI(self._api_client) self._private_access = service.provisioning.PrivateAccessAPI(self._api_client) @@ -930,6 +947,7 @@ def __init__( self._users = service.iam.AccountUsersAPI(self._api_client) self._vpc_endpoints = service.provisioning.VpcEndpointsAPI(self._api_client) self._workspace_assignment = service.iam.WorkspaceAssignmentAPI(self._api_client) + self._workspace_network_configuration = service.settings.WorkspaceNetworkConfigurationAPI(self._api_client) self._workspaces = service.provisioning.WorkspacesAPI(self._api_client) self._budgets = service.billing.BudgetsAPI(self._api_client) @@ -1006,6 +1024,11 @@ def network_connectivity(self) -> service.settings.NetworkConnectivityAPI: """These APIs provide configurations for the network connectivity of your workspaces for serverless compute resources.""" return self._network_connectivity + @property + def network_policies(self) -> service.settings.NetworkPoliciesAPI: + """These APIs manage network policies for this account.""" + return self._network_policies + @property def networks(self) -> service.provisioning.NetworksAPI: """These APIs manage network configurations for customer-managed VPCs (optional).""" @@ -1076,6 +1099,11 @@ def workspace_assignment(self) -> service.iam.WorkspaceAssignmentAPI: """The Workspace Permission Assignment API allows you to manage workspace permissions for principals in your account.""" return self._workspace_assignment + @property + def workspace_network_configuration(self) -> service.settings.WorkspaceNetworkConfigurationAPI: + """These APIs allow configuration of network settings for Databricks workspaces.""" + return self._workspace_network_configuration + @property def workspaces(self) -> service.provisioning.WorkspacesAPI: """These APIs manage workspaces for this account.""" diff --git a/databricks/sdk/service/apps.py b/databricks/sdk/service/apps.py index 6f645641..e0ca7d9a 100755 --- a/databricks/sdk/service/apps.py +++ b/databricks/sdk/service/apps.py @@ -655,6 +655,8 @@ class AppResource: sql_warehouse: Optional[AppResourceSqlWarehouse] = None + uc_securable: Optional[AppResourceUcSecurable] = None + def as_dict(self) -> dict: """Serializes the AppResource into a dictionary suitable for use as a JSON request body.""" body = {} @@ -670,6 +672,8 @@ def as_dict(self) -> dict: body["serving_endpoint"] = self.serving_endpoint.as_dict() if self.sql_warehouse: body["sql_warehouse"] = self.sql_warehouse.as_dict() + if self.uc_securable: + body["uc_securable"] = self.uc_securable.as_dict() return body def as_shallow_dict(self) -> dict: @@ -687,6 +691,8 @@ def as_shallow_dict(self) -> dict: body["serving_endpoint"] = self.serving_endpoint if 
self.sql_warehouse:
            body["sql_warehouse"] = self.sql_warehouse
+        if self.uc_securable:
+            body["uc_securable"] = self.uc_securable
         return body

     @classmethod
@@ -699,6 +705,7 @@ def from_dict(cls, d: Dict[str, Any]) -> AppResource:
             secret=_from_dict(d, "secret", AppResourceSecret),
             serving_endpoint=_from_dict(d, "serving_endpoint", AppResourceServingEndpoint),
             sql_warehouse=_from_dict(d, "sql_warehouse", AppResourceSqlWarehouse),
+            uc_securable=_from_dict(d, "uc_securable", AppResourceUcSecurable),
         )
@@ -880,6 +887,57 @@ class AppResourceSqlWarehouseSqlWarehousePermission(Enum):
     IS_OWNER = "IS_OWNER"


+@dataclass
+class AppResourceUcSecurable:
+    securable_full_name: str
+
+    securable_type: AppResourceUcSecurableUcSecurableType
+
+    permission: AppResourceUcSecurableUcSecurablePermission
+
+    def as_dict(self) -> dict:
+        """Serializes the AppResourceUcSecurable into a dictionary suitable for use as a JSON request body."""
+        body = {}
+        if self.permission is not None:
+            body["permission"] = self.permission.value
+        if self.securable_full_name is not None:
+            body["securable_full_name"] = self.securable_full_name
+        if self.securable_type is not None:
+            body["securable_type"] = self.securable_type.value
+        return body
+
+    def as_shallow_dict(self) -> dict:
+        """Serializes the AppResourceUcSecurable into a shallow dictionary of its immediate attributes."""
+        body = {}
+        if self.permission is not None:
+            body["permission"] = self.permission
+        if self.securable_full_name is not None:
+            body["securable_full_name"] = self.securable_full_name
+        if self.securable_type is not None:
+            body["securable_type"] = self.securable_type
+        return body
+
+    @classmethod
+    def from_dict(cls, d: Dict[str, Any]) -> AppResourceUcSecurable:
+        """Deserializes the AppResourceUcSecurable from a dictionary."""
+        return cls(
+            permission=_enum(d, "permission", AppResourceUcSecurableUcSecurablePermission),
+            securable_full_name=d.get("securable_full_name", None),
+            securable_type=_enum(d, "securable_type", AppResourceUcSecurableUcSecurableType),
+        )
+
+
+class AppResourceUcSecurableUcSecurablePermission(Enum):
+
+    READ_VOLUME = "READ_VOLUME"
+    WRITE_VOLUME = "WRITE_VOLUME"
+
+
+class AppResourceUcSecurableUcSecurableType(Enum):
+
+    VOLUME = "VOLUME"
+
+
 class ApplicationState(Enum):

     CRASHED = "CRASHED"
diff --git a/databricks/sdk/service/catalog.py b/databricks/sdk/service/catalog.py
index a77115ed..9553d887 100755
--- a/databricks/sdk/service/catalog.py
+++ b/databricks/sdk/service/catalog.py
@@ -587,6 +587,39 @@ def from_dict(cls, d: Dict[str, Any]) -> AwsIamRoleResponse:
     )


+@dataclass
+class AwsSqsQueue:
+    managed_resource_id: Optional[str] = None
+    """Unique identifier included in the name of file events managed cloud resources."""
+
+    queue_url: Optional[str] = None
+    """The SQS queue URL in the format https://sqs.{region}.amazonaws.com/{account id}/{queue name}
+    REQUIRED for provided_sqs."""
+
+    def as_dict(self) -> dict:
+        """Serializes the AwsSqsQueue into a dictionary suitable for use as a JSON request body."""
+        body = {}
+        if self.managed_resource_id is not None:
+            body["managed_resource_id"] = self.managed_resource_id
+        if self.queue_url is not None:
+            body["queue_url"] = self.queue_url
+        return body
+
+    def as_shallow_dict(self) -> dict:
+        """Serializes the AwsSqsQueue into a shallow dictionary of its immediate attributes."""
+        body = {}
+        if self.managed_resource_id is not None:
+            body["managed_resource_id"] = self.managed_resource_id
+        if self.queue_url is not None:
+            body["queue_url"] = self.queue_url
+        return body
+ + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> AwsSqsQueue: + """Deserializes the AwsSqsQueue from a dictionary.""" + return cls(managed_resource_id=d.get("managed_resource_id", None), queue_url=d.get("queue_url", None)) + + @dataclass class AzureActiveDirectoryToken: """Azure Active Directory token, essentially the Oauth token for Azure Service Principal or Managed @@ -757,6 +790,60 @@ def from_dict(cls, d: Dict[str, Any]) -> AzureManagedIdentityResponse: ) +@dataclass +class AzureQueueStorage: + managed_resource_id: Optional[str] = None + """Unique identifier included in the name of file events managed cloud resources.""" + + queue_url: Optional[str] = None + """The AQS queue url in the format https://{storage account}.queue.core.windows.net/{queue name} + REQUIRED for provided_aqs.""" + + resource_group: Optional[str] = None + """The resource group for the queue, event grid subscription, and external location storage + account. ONLY REQUIRED for locations with a service principal storage credential""" + + subscription_id: Optional[str] = None + """OPTIONAL: The subscription id for the queue, event grid subscription, and external location + storage account. REQUIRED for locations with a service principal storage credential""" + + def as_dict(self) -> dict: + """Serializes the AzureQueueStorage into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.managed_resource_id is not None: + body["managed_resource_id"] = self.managed_resource_id + if self.queue_url is not None: + body["queue_url"] = self.queue_url + if self.resource_group is not None: + body["resource_group"] = self.resource_group + if self.subscription_id is not None: + body["subscription_id"] = self.subscription_id + return body + + def as_shallow_dict(self) -> dict: + """Serializes the AzureQueueStorage into a shallow dictionary of its immediate attributes.""" + body = {} + if self.managed_resource_id is not None: + body["managed_resource_id"] = self.managed_resource_id + if self.queue_url is not None: + body["queue_url"] = self.queue_url + if self.resource_group is not None: + body["resource_group"] = self.resource_group + if self.subscription_id is not None: + body["subscription_id"] = self.subscription_id + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> AzureQueueStorage: + """Deserializes the AzureQueueStorage from a dictionary.""" + return cls( + managed_resource_id=d.get("managed_resource_id", None), + queue_url=d.get("queue_url", None), + resource_group=d.get("resource_group", None), + subscription_id=d.get("subscription_id", None), + ) + + @dataclass class AzureServicePrincipal: """The Azure service principal configuration. 
Only applicable when purpose is **STORAGE**.""" @@ -903,7 +990,8 @@ class CatalogInfo: provisioning_info: Optional[ProvisioningInfo] = None """Status of an asynchronously provisioned resource.""" - securable_type: Optional[str] = None + securable_type: Optional[SecurableType] = None + """The type of Unity Catalog securable.""" share_name: Optional[str] = None """The name of the share under the share provider.""" @@ -958,7 +1046,7 @@ def as_dict(self) -> dict: if self.provisioning_info: body["provisioning_info"] = self.provisioning_info.as_dict() if self.securable_type is not None: - body["securable_type"] = self.securable_type + body["securable_type"] = self.securable_type.value if self.share_name is not None: body["share_name"] = self.share_name if self.storage_location is not None: @@ -1045,7 +1133,7 @@ def from_dict(cls, d: Dict[str, Any]) -> CatalogInfo: properties=d.get("properties", None), provider_name=d.get("provider_name", None), provisioning_info=_from_dict(d, "provisioning_info", ProvisioningInfo), - securable_type=d.get("securable_type", None), + securable_type=_enum(d, "securable_type", SecurableType), share_name=d.get("share_name", None), storage_location=d.get("storage_location", None), storage_root=d.get("storage_root", None), @@ -1055,7 +1143,6 @@ def from_dict(cls, d: Dict[str, Any]) -> CatalogInfo: class CatalogIsolationMode(Enum): - """Whether the current securable is accessible from all workspaces or a specific set of workspaces.""" ISOLATED = "ISOLATED" OPEN = "OPEN" @@ -1066,8 +1153,11 @@ class CatalogType(Enum): DELTASHARING_CATALOG = "DELTASHARING_CATALOG" FOREIGN_CATALOG = "FOREIGN_CATALOG" + INTERNAL_CATALOG = "INTERNAL_CATALOG" MANAGED_CATALOG = "MANAGED_CATALOG" + MANAGED_ONLINE_CATALOG = "MANAGED_ONLINE_CATALOG" SYSTEM_CATALOG = "SYSTEM_CATALOG" + UNKNOWN_CATALOG_TYPE = "UNKNOWN_CATALOG_TYPE" @dataclass @@ -1772,12 +1862,12 @@ class CreateExternalLocation: credential_name: str """Name of the storage credential used with this location.""" - access_point: Optional[str] = None - """The AWS access point to use when accesing s3 for this external location.""" - comment: Optional[str] = None """User-provided free-form text description.""" + enable_file_events: Optional[bool] = None + """[Create:OPT Update:OPT] Whether to enable file events on this external location.""" + encryption_details: Optional[EncryptionDetails] = None """Encryption options that apply to clients connecting to cloud storage.""" @@ -1786,6 +1876,9 @@ class CreateExternalLocation: enabled, the access to the location falls back to cluster credentials if UC credentials are not sufficient.""" + file_event_queue: Optional[FileEventQueue] = None + """[Create:OPT Update:OPT] File event queue settings.""" + read_only: Optional[bool] = None """Indicates whether the external location is read-only.""" @@ -1795,16 +1888,18 @@ class CreateExternalLocation: def as_dict(self) -> dict: """Serializes the CreateExternalLocation into a dictionary suitable for use as a JSON request body.""" body = {} - if self.access_point is not None: - body["access_point"] = self.access_point if self.comment is not None: body["comment"] = self.comment if self.credential_name is not None: body["credential_name"] = self.credential_name + if self.enable_file_events is not None: + body["enable_file_events"] = self.enable_file_events if self.encryption_details: body["encryption_details"] = self.encryption_details.as_dict() if self.fallback is not None: body["fallback"] = self.fallback + if self.file_event_queue: + 
body["file_event_queue"] = self.file_event_queue.as_dict() if self.name is not None: body["name"] = self.name if self.read_only is not None: @@ -1818,16 +1913,18 @@ def as_dict(self) -> dict: def as_shallow_dict(self) -> dict: """Serializes the CreateExternalLocation into a shallow dictionary of its immediate attributes.""" body = {} - if self.access_point is not None: - body["access_point"] = self.access_point if self.comment is not None: body["comment"] = self.comment if self.credential_name is not None: body["credential_name"] = self.credential_name + if self.enable_file_events is not None: + body["enable_file_events"] = self.enable_file_events if self.encryption_details: body["encryption_details"] = self.encryption_details if self.fallback is not None: body["fallback"] = self.fallback + if self.file_event_queue: + body["file_event_queue"] = self.file_event_queue if self.name is not None: body["name"] = self.name if self.read_only is not None: @@ -1842,11 +1939,12 @@ def as_shallow_dict(self) -> dict: def from_dict(cls, d: Dict[str, Any]) -> CreateExternalLocation: """Deserializes the CreateExternalLocation from a dictionary.""" return cls( - access_point=d.get("access_point", None), comment=d.get("comment", None), credential_name=d.get("credential_name", None), + enable_file_events=d.get("enable_file_events", None), encryption_details=_from_dict(d, "encryption_details", EncryptionDetails), fallback=d.get("fallback", None), + file_event_queue=_from_dict(d, "file_event_queue", FileEventQueue), name=d.get("name", None), read_only=d.get("read_only", None), skip_validation=d.get("skip_validation", None), @@ -2864,33 +2962,6 @@ def from_dict(cls, d: Dict[str, Any]) -> CredentialValidationResult: return cls(message=d.get("message", None), result=_enum(d, "result", ValidateCredentialResult)) -@dataclass -class CurrentWorkspaceBindings: - """Currently assigned workspaces""" - - workspaces: Optional[List[int]] = None - """A list of workspace IDs.""" - - def as_dict(self) -> dict: - """Serializes the CurrentWorkspaceBindings into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.workspaces: - body["workspaces"] = [v for v in self.workspaces] - return body - - def as_shallow_dict(self) -> dict: - """Serializes the CurrentWorkspaceBindings into a shallow dictionary of its immediate attributes.""" - body = {} - if self.workspaces: - body["workspaces"] = self.workspaces - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> CurrentWorkspaceBindings: - """Deserializes the CurrentWorkspaceBindings from a dictionary.""" - return cls(workspaces=d.get("workspaces", None)) - - class DataSourceFormat(Enum): """Data source format""" @@ -2919,6 +2990,183 @@ class DataSourceFormat(Enum): WORKDAY_RAAS_FORMAT = "WORKDAY_RAAS_FORMAT" +@dataclass +class DatabaseCatalog: + name: str + """The name of the catalog in UC.""" + + database_instance_name: str + """The name of the DatabaseInstance housing the database.""" + + database_name: str + """The name of the database (in a instance) associated with the catalog.""" + + create_database_if_not_exists: Optional[bool] = None + + uid: Optional[str] = None + + def as_dict(self) -> dict: + """Serializes the DatabaseCatalog into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.create_database_if_not_exists is not None: + body["create_database_if_not_exists"] = self.create_database_if_not_exists + if self.database_instance_name is not None: + body["database_instance_name"] = 
self.database_instance_name + if self.database_name is not None: + body["database_name"] = self.database_name + if self.name is not None: + body["name"] = self.name + if self.uid is not None: + body["uid"] = self.uid + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DatabaseCatalog into a shallow dictionary of its immediate attributes.""" + body = {} + if self.create_database_if_not_exists is not None: + body["create_database_if_not_exists"] = self.create_database_if_not_exists + if self.database_instance_name is not None: + body["database_instance_name"] = self.database_instance_name + if self.database_name is not None: + body["database_name"] = self.database_name + if self.name is not None: + body["name"] = self.name + if self.uid is not None: + body["uid"] = self.uid + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DatabaseCatalog: + """Deserializes the DatabaseCatalog from a dictionary.""" + return cls( + create_database_if_not_exists=d.get("create_database_if_not_exists", None), + database_instance_name=d.get("database_instance_name", None), + database_name=d.get("database_name", None), + name=d.get("name", None), + uid=d.get("uid", None), + ) + + +@dataclass +class DatabaseInstance: + """A DatabaseInstance represents a logical Postgres instance, comprised of both compute and + storage.""" + + name: str + """The name of the instance. This is the unique identifier for the instance.""" + + admin_password: Optional[str] = None + """Password for admin user to create. If not provided, no user will be created.""" + + admin_rolename: Optional[str] = None + """Name of the admin role for the instance. If not provided, defaults to 'databricks_admin'.""" + + capacity: Optional[str] = None + """The sku of the instance. 
Valid values are "CU_1", "CU_2", "CU_4".""" + + creation_time: Optional[str] = None + """The timestamp when the instance was created.""" + + creator: Optional[str] = None + """The email of the creator of the instance.""" + + pg_version: Optional[str] = None + """The version of Postgres running on the instance.""" + + read_write_dns: Optional[str] = None + """The DNS endpoint to connect to the instance for read+write access.""" + + state: Optional[DatabaseInstanceState] = None + """The current state of the instance.""" + + stopped: Optional[bool] = None + """Whether the instance is stopped.""" + + uid: Optional[str] = None + """An immutable UUID identifier for the instance.""" + + def as_dict(self) -> dict: + """Serializes the DatabaseInstance into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.admin_password is not None: + body["admin_password"] = self.admin_password + if self.admin_rolename is not None: + body["admin_rolename"] = self.admin_rolename + if self.capacity is not None: + body["capacity"] = self.capacity + if self.creation_time is not None: + body["creation_time"] = self.creation_time + if self.creator is not None: + body["creator"] = self.creator + if self.name is not None: + body["name"] = self.name + if self.pg_version is not None: + body["pg_version"] = self.pg_version + if self.read_write_dns is not None: + body["read_write_dns"] = self.read_write_dns + if self.state is not None: + body["state"] = self.state.value + if self.stopped is not None: + body["stopped"] = self.stopped + if self.uid is not None: + body["uid"] = self.uid + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DatabaseInstance into a shallow dictionary of its immediate attributes.""" + body = {} + if self.admin_password is not None: + body["admin_password"] = self.admin_password + if self.admin_rolename is not None: + body["admin_rolename"] = self.admin_rolename + if self.capacity is not None: + body["capacity"] = self.capacity + if self.creation_time is not None: + body["creation_time"] = self.creation_time + if self.creator is not None: + body["creator"] = self.creator + if self.name is not None: + body["name"] = self.name + if self.pg_version is not None: + body["pg_version"] = self.pg_version + if self.read_write_dns is not None: + body["read_write_dns"] = self.read_write_dns + if self.state is not None: + body["state"] = self.state + if self.stopped is not None: + body["stopped"] = self.stopped + if self.uid is not None: + body["uid"] = self.uid + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DatabaseInstance: + """Deserializes the DatabaseInstance from a dictionary.""" + return cls( + admin_password=d.get("admin_password", None), + admin_rolename=d.get("admin_rolename", None), + capacity=d.get("capacity", None), + creation_time=d.get("creation_time", None), + creator=d.get("creator", None), + name=d.get("name", None), + pg_version=d.get("pg_version", None), + read_write_dns=d.get("read_write_dns", None), + state=_enum(d, "state", DatabaseInstanceState), + stopped=d.get("stopped", None), + uid=d.get("uid", None), + ) + + +class DatabaseInstanceState(Enum): + + AVAILABLE = "AVAILABLE" + DELETING = "DELETING" + FAILING_OVER = "FAILING_OVER" + STARTING = "STARTING" + STOPPED = "STOPPED" + UPDATING = "UPDATING" + + @dataclass class DatabricksGcpServiceAccount: """GCP long-lived credential. 
Databricks-created Google Cloud Storage service account.""" @@ -3052,6 +3300,42 @@ def from_dict(cls, d: Dict[str, Any]) -> DeleteCredentialResponse: return cls() +@dataclass +class DeleteDatabaseCatalogResponse: + def as_dict(self) -> dict: + """Serializes the DeleteDatabaseCatalogResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DeleteDatabaseCatalogResponse into a shallow dictionary of its immediate attributes.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DeleteDatabaseCatalogResponse: + """Deserializes the DeleteDatabaseCatalogResponse from a dictionary.""" + return cls() + + +@dataclass +class DeleteDatabaseInstanceResponse: + def as_dict(self) -> dict: + """Serializes the DeleteDatabaseInstanceResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DeleteDatabaseInstanceResponse into a shallow dictionary of its immediate attributes.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DeleteDatabaseInstanceResponse: + """Deserializes the DeleteDatabaseInstanceResponse from a dictionary.""" + return cls() + + @dataclass class DeleteResponse: def as_dict(self) -> dict: @@ -3070,6 +3354,24 @@ def from_dict(cls, d: Dict[str, Any]) -> DeleteResponse: return cls() +@dataclass +class DeleteSyncedDatabaseTableResponse: + def as_dict(self) -> dict: + """Serializes the DeleteSyncedDatabaseTableResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DeleteSyncedDatabaseTableResponse into a shallow dictionary of its immediate attributes.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DeleteSyncedDatabaseTableResponse: + """Deserializes the DeleteSyncedDatabaseTableResponse from a dictionary.""" + return cls() + + @dataclass class DeltaRuntimePropertiesKvPairs: """Properties pertaining to the current state of the delta table as given by the commit server. 
@@ -3336,13 +3638,55 @@ def from_dict(cls, d: Dict[str, Any]) -> EffectivePrivilegeAssignment:


 class EnablePredictiveOptimization(Enum):
-    """Whether predictive optimization should be enabled for this object and objects under it."""

     DISABLE = "DISABLE"
     ENABLE = "ENABLE"
     INHERIT = "INHERIT"


+@dataclass
+class EnableRequest:
+    catalog_name: Optional[str] = None
+    """The catalog in which the system schema is to be enabled."""
+
+    metastore_id: Optional[str] = None
+    """The metastore ID under which the system schema lives."""
+
+    schema_name: Optional[str] = None
+    """Full name of the system schema."""
+
+    def as_dict(self) -> dict:
+        """Serializes the EnableRequest into a dictionary suitable for use as a JSON request body."""
+        body = {}
+        if self.catalog_name is not None:
+            body["catalog_name"] = self.catalog_name
+        if self.metastore_id is not None:
+            body["metastore_id"] = self.metastore_id
+        if self.schema_name is not None:
+            body["schema_name"] = self.schema_name
+        return body
+
+    def as_shallow_dict(self) -> dict:
+        """Serializes the EnableRequest into a shallow dictionary of its immediate attributes."""
+        body = {}
+        if self.catalog_name is not None:
+            body["catalog_name"] = self.catalog_name
+        if self.metastore_id is not None:
+            body["metastore_id"] = self.metastore_id
+        if self.schema_name is not None:
+            body["schema_name"] = self.schema_name
+        return body
+
+    @classmethod
+    def from_dict(cls, d: Dict[str, Any]) -> EnableRequest:
+        """Deserializes the EnableRequest from a dictionary."""
+        return cls(
+            catalog_name=d.get("catalog_name", None),
+            metastore_id=d.get("metastore_id", None),
+            schema_name=d.get("schema_name", None),
+        )
+
+
 @dataclass
 class EnableResponse:
     def as_dict(self) -> dict:
@@ -3390,9 +3734,6 @@ def from_dict(cls, d: Dict[str, Any]) -> EncryptionDetails:

 @dataclass
 class ExternalLocationInfo:
-    access_point: Optional[str] = None
-    """The AWS access point to use when accesing s3 for this external location."""
-
     browse_only: Optional[bool] = None
     """Indicates whether the principal is limited to retrieving metadata for the associated object
     through the BROWSE privilege when include_browse is enabled in the request."""
@@ -3412,6 +3753,9 @@ class ExternalLocationInfo:
     credential_name: Optional[str] = None
     """Name of the storage credential used with this location."""

+    enable_file_events: Optional[bool] = None
+    """[Create:OPT Update:OPT] Whether to enable file events on this external location."""
+
     encryption_details: Optional[EncryptionDetails] = None
     """Encryption options that apply to clients connecting to cloud storage."""

@@ -3420,6 +3764,9 @@ class ExternalLocationInfo:
     enabled, the access to the location falls back to cluster credentials if UC credentials are not
     sufficient."""

+    file_event_queue: Optional[FileEventQueue] = None
+    """[Create:OPT Update:OPT] File event queue settings."""
+
     isolation_mode: Optional[IsolationMode] = None

     metastore_id: Optional[str] = None
@@ -3446,8 +3793,6 @@ class ExternalLocationInfo:
     def as_dict(self) -> dict:
         """Serializes the ExternalLocationInfo into a dictionary suitable for use as a JSON request body."""
         body = {}
-        if self.access_point is not None:
-            body["access_point"] = self.access_point
         if self.browse_only is not None:
             body["browse_only"] = self.browse_only
         if self.comment is not None:
@@ -3460,10 +3805,14 @@ def as_dict(self) -> dict:
             body["credential_id"] = self.credential_id
         if self.credential_name is not None:
             body["credential_name"] = self.credential_name
+        if self.enable_file_events is not None:
+
body["enable_file_events"] = self.enable_file_events if self.encryption_details: body["encryption_details"] = self.encryption_details.as_dict() if self.fallback is not None: body["fallback"] = self.fallback + if self.file_event_queue: + body["file_event_queue"] = self.file_event_queue.as_dict() if self.isolation_mode is not None: body["isolation_mode"] = self.isolation_mode.value if self.metastore_id is not None: @@ -3485,8 +3834,6 @@ def as_dict(self) -> dict: def as_shallow_dict(self) -> dict: """Serializes the ExternalLocationInfo into a shallow dictionary of its immediate attributes.""" body = {} - if self.access_point is not None: - body["access_point"] = self.access_point if self.browse_only is not None: body["browse_only"] = self.browse_only if self.comment is not None: @@ -3499,10 +3846,14 @@ def as_shallow_dict(self) -> dict: body["credential_id"] = self.credential_id if self.credential_name is not None: body["credential_name"] = self.credential_name + if self.enable_file_events is not None: + body["enable_file_events"] = self.enable_file_events if self.encryption_details: body["encryption_details"] = self.encryption_details if self.fallback is not None: body["fallback"] = self.fallback + if self.file_event_queue: + body["file_event_queue"] = self.file_event_queue if self.isolation_mode is not None: body["isolation_mode"] = self.isolation_mode if self.metastore_id is not None: @@ -3525,15 +3876,16 @@ def as_shallow_dict(self) -> dict: def from_dict(cls, d: Dict[str, Any]) -> ExternalLocationInfo: """Deserializes the ExternalLocationInfo from a dictionary.""" return cls( - access_point=d.get("access_point", None), browse_only=d.get("browse_only", None), comment=d.get("comment", None), created_at=d.get("created_at", None), created_by=d.get("created_by", None), credential_id=d.get("credential_id", None), credential_name=d.get("credential_name", None), + enable_file_events=d.get("enable_file_events", None), encryption_details=_from_dict(d, "encryption_details", EncryptionDetails), fallback=d.get("fallback", None), + file_event_queue=_from_dict(d, "file_event_queue", FileEventQueue), isolation_mode=_enum(d, "isolation_mode", IsolationMode), metastore_id=d.get("metastore_id", None), name=d.get("name", None), @@ -3586,6 +3938,67 @@ def from_dict(cls, d: Dict[str, Any]) -> FailedStatus: ) +@dataclass +class FileEventQueue: + managed_aqs: Optional[AzureQueueStorage] = None + + managed_pubsub: Optional[GcpPubsub] = None + + managed_sqs: Optional[AwsSqsQueue] = None + + provided_aqs: Optional[AzureQueueStorage] = None + + provided_pubsub: Optional[GcpPubsub] = None + + provided_sqs: Optional[AwsSqsQueue] = None + + def as_dict(self) -> dict: + """Serializes the FileEventQueue into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.managed_aqs: + body["managed_aqs"] = self.managed_aqs.as_dict() + if self.managed_pubsub: + body["managed_pubsub"] = self.managed_pubsub.as_dict() + if self.managed_sqs: + body["managed_sqs"] = self.managed_sqs.as_dict() + if self.provided_aqs: + body["provided_aqs"] = self.provided_aqs.as_dict() + if self.provided_pubsub: + body["provided_pubsub"] = self.provided_pubsub.as_dict() + if self.provided_sqs: + body["provided_sqs"] = self.provided_sqs.as_dict() + return body + + def as_shallow_dict(self) -> dict: + """Serializes the FileEventQueue into a shallow dictionary of its immediate attributes.""" + body = {} + if self.managed_aqs: + body["managed_aqs"] = self.managed_aqs + if self.managed_pubsub: + body["managed_pubsub"] = 
self.managed_pubsub + if self.managed_sqs: + body["managed_sqs"] = self.managed_sqs + if self.provided_aqs: + body["provided_aqs"] = self.provided_aqs + if self.provided_pubsub: + body["provided_pubsub"] = self.provided_pubsub + if self.provided_sqs: + body["provided_sqs"] = self.provided_sqs + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> FileEventQueue: + """Deserializes the FileEventQueue from a dictionary.""" + return cls( + managed_aqs=_from_dict(d, "managed_aqs", AzureQueueStorage), + managed_pubsub=_from_dict(d, "managed_pubsub", GcpPubsub), + managed_sqs=_from_dict(d, "managed_sqs", AwsSqsQueue), + provided_aqs=_from_dict(d, "provided_aqs", AzureQueueStorage), + provided_pubsub=_from_dict(d, "provided_pubsub", GcpPubsub), + provided_sqs=_from_dict(d, "provided_sqs", AwsSqsQueue), + ) + + @dataclass class ForeignKeyConstraint: name: str @@ -4136,6 +4549,41 @@ def from_dict(cls, d: Dict[str, Any]) -> GcpOauthToken: return cls(oauth_token=d.get("oauth_token", None)) +@dataclass +class GcpPubsub: + managed_resource_id: Optional[str] = None + """Unique identifier included in the name of file events managed cloud resources.""" + + subscription_name: Optional[str] = None + """The Pub/Sub subscription name in the format projects/{project}/subscriptions/{subscription name} + REQUIRED for provided_pubsub.""" + + def as_dict(self) -> dict: + """Serializes the GcpPubsub into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.managed_resource_id is not None: + body["managed_resource_id"] = self.managed_resource_id + if self.subscription_name is not None: + body["subscription_name"] = self.subscription_name + return body + + def as_shallow_dict(self) -> dict: + """Serializes the GcpPubsub into a shallow dictionary of its immediate attributes.""" + body = {} + if self.managed_resource_id is not None: + body["managed_resource_id"] = self.managed_resource_id + if self.subscription_name is not None: + body["subscription_name"] = self.subscription_name + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> GcpPubsub: + """Deserializes the GcpPubsub from a dictionary.""" + return cls( + managed_resource_id=d.get("managed_resource_id", None), subscription_name=d.get("subscription_name", None) + ) + + @dataclass class GenerateTemporaryServiceCredentialAzureOptions: """The Azure cloud options to customize the requested temporary credential""" @@ -4353,12 +4801,29 @@ def from_dict(cls, d: Dict[str, Any]) -> GenerateTemporaryTableCredentialRespons ) -class GetBindingsSecurableType(Enum): +@dataclass +class GetCatalogWorkspaceBindingsResponse: + workspaces: Optional[List[int]] = None + """A list of workspace IDs""" + + def as_dict(self) -> dict: + """Serializes the GetCatalogWorkspaceBindingsResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.workspaces: + body["workspaces"] = [v for v in self.workspaces] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the GetCatalogWorkspaceBindingsResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.workspaces: + body["workspaces"] = self.workspaces + return body - CATALOG = "catalog" - CREDENTIAL = "credential" - EXTERNAL_LOCATION = "external_location" - STORAGE_CREDENTIAL = "storage_credential" + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> GetCatalogWorkspaceBindingsResponse: + """Deserializes the GetCatalogWorkspaceBindingsResponse from a dictionary.""" + return 
cls(workspaces=d.get("workspaces", None))


 @dataclass
@@ -4571,6 +5036,41 @@ def from_dict(cls, d: Dict[str, Any]) -> GetQuotaResponse:
         return cls(quota_info=_from_dict(d, "quota_info", QuotaInfo))


+@dataclass
+class GetWorkspaceBindingsResponse:
+    bindings: Optional[List[WorkspaceBinding]] = None
+    """List of workspace bindings"""
+
+    next_page_token: Optional[str] = None
+    """Opaque token to retrieve the next page of results. Absent if there are no more pages.
+    __page_token__ should be set to this value for the next request (for the next page of results)."""
+
+    def as_dict(self) -> dict:
+        """Serializes the GetWorkspaceBindingsResponse into a dictionary suitable for use as a JSON request body."""
+        body = {}
+        if self.bindings:
+            body["bindings"] = [v.as_dict() for v in self.bindings]
+        if self.next_page_token is not None:
+            body["next_page_token"] = self.next_page_token
+        return body
+
+    def as_shallow_dict(self) -> dict:
+        """Serializes the GetWorkspaceBindingsResponse into a shallow dictionary of its immediate attributes."""
+        body = {}
+        if self.bindings:
+            body["bindings"] = self.bindings
+        if self.next_page_token is not None:
+            body["next_page_token"] = self.next_page_token
+        return body
+
+    @classmethod
+    def from_dict(cls, d: Dict[str, Any]) -> GetWorkspaceBindingsResponse:
+        """Deserializes the GetWorkspaceBindingsResponse from a dictionary."""
+        return cls(
+            bindings=_repeated_dict(d, "bindings", WorkspaceBinding), next_page_token=d.get("next_page_token", None)
+        )
+
+
 class IsolationMode(Enum):

     ISOLATION_MODE_ISOLATED = "ISOLATION_MODE_ISOLATED"
@@ -4730,6 +5230,41 @@ def from_dict(cls, d: Dict[str, Any]) -> ListCredentialsResponse:
     )


+@dataclass
+class ListDatabaseInstancesResponse:
+    database_instances: Optional[List[DatabaseInstance]] = None
+    """List of instances."""
+
+    next_page_token: Optional[str] = None
+    """Pagination token to request the next page of instances."""
+
+    def as_dict(self) -> dict:
+        """Serializes the ListDatabaseInstancesResponse into a dictionary suitable for use as a JSON request body."""
+        body = {}
+        if self.database_instances:
+            body["database_instances"] = [v.as_dict() for v in self.database_instances]
+        if self.next_page_token is not None:
+            body["next_page_token"] = self.next_page_token
+        return body
+
+    def as_shallow_dict(self) -> dict:
+        """Serializes the ListDatabaseInstancesResponse into a shallow dictionary of its immediate attributes."""
+        body = {}
+        if self.database_instances:
+            body["database_instances"] = self.database_instances
+        if self.next_page_token is not None:
+            body["next_page_token"] = self.next_page_token
+        return body
+
+    @classmethod
+    def from_dict(cls, d: Dict[str, Any]) -> ListDatabaseInstancesResponse:
+        """Deserializes the ListDatabaseInstancesResponse from a dictionary."""
+        return cls(
+            database_instances=_repeated_dict(d, "database_instances", DatabaseInstance),
+            next_page_token=d.get("next_page_token", None),
+        )
+
+
 @dataclass
 class ListExternalLocationsResponse:
     external_locations: Optional[List[ExternalLocationInfo]] = None
@@ -6234,6 +6769,43 @@ def from_dict(cls, d: Dict[str, Any]) -> NamedTableConstraint:
         return cls(name=d.get("name", None))


+@dataclass
+class NewPipelineSpec:
+    """Custom fields that the user can set for the pipeline while creating a SyncedDatabaseTable. Note that
+    other pipeline fields are still inferred internally from the table definition."""
+
+    storage_catalog: Optional[str] = None
+    """UC catalog for the pipeline to store intermediate files (checkpoints, event logs etc). This
This + needs to be a standard catalog where the user has permissions to create Delta tables.""" + + storage_schema: Optional[str] = None + """UC schema for the pipeline to store intermediate files (checkpoints, event logs etc). This needs + to be in the standard catalog where the user has permissions to create Delta tables.""" + + def as_dict(self) -> dict: + """Serializes the NewPipelineSpec into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.storage_catalog is not None: + body["storage_catalog"] = self.storage_catalog + if self.storage_schema is not None: + body["storage_schema"] = self.storage_schema + return body + + def as_shallow_dict(self) -> dict: + """Serializes the NewPipelineSpec into a shallow dictionary of its immediate attributes.""" + body = {} + if self.storage_catalog is not None: + body["storage_catalog"] = self.storage_catalog + if self.storage_schema is not None: + body["storage_schema"] = self.storage_schema + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> NewPipelineSpec: + """Deserializes the NewPipelineSpec from a dictionary.""" + return cls(storage_catalog=d.get("storage_catalog", None), storage_schema=d.get("storage_schema", None)) + + @dataclass class OnlineTable: """Online Table information.""" @@ -6643,6 +7215,9 @@ class PrimaryKeyConstraint: child_columns: List[str] """Column names for this constraint.""" + timeseries_columns: Optional[List[str]] = None + """Column names that represent a timeseries.""" + def as_dict(self) -> dict: """Serializes the PrimaryKeyConstraint into a dictionary suitable for use as a JSON request body.""" body = {} @@ -6650,6 +7225,8 @@ def as_dict(self) -> dict: body["child_columns"] = [v for v in self.child_columns] if self.name is not None: body["name"] = self.name + if self.timeseries_columns: + body["timeseries_columns"] = [v for v in self.timeseries_columns] return body def as_shallow_dict(self) -> dict: @@ -6659,12 +7236,18 @@ def as_shallow_dict(self) -> dict: body["child_columns"] = self.child_columns if self.name is not None: body["name"] = self.name + if self.timeseries_columns: + body["timeseries_columns"] = self.timeseries_columns return body @classmethod def from_dict(cls, d: Dict[str, Any]) -> PrimaryKeyConstraint: """Deserializes the PrimaryKeyConstraint from a dictionary.""" - return cls(child_columns=d.get("child_columns", None), name=d.get("name", None)) + return cls( + child_columns=d.get("child_columns", None), + name=d.get("name", None), + timeseries_columns=d.get("timeseries_columns", None), + ) class Privilege(Enum): @@ -6760,6 +7343,7 @@ class ProvisioningInfo: """Status of an asynchronously provisioned resource.""" state: Optional[ProvisioningInfoState] = None + """The provisioning state of the resource.""" def as_dict(self) -> dict: """Serializes the ProvisioningInfo into a dictionary suitable for use as a JSON request body.""" @@ -7188,7 +7772,6 @@ class SchemaInfo: effective_predictive_optimization_flag: Optional[EffectivePredictiveOptimizationFlag] = None enable_predictive_optimization: Optional[EnablePredictiveOptimization] = None - """Whether predictive optimization should be enabled for this object and objects under it.""" full_name: Optional[str] = None """Full name of schema, in form of __catalog_name__.__schema_name__.""" @@ -7336,13 +7919,14 @@ def from_dict(cls, d: Dict[str, Any]) -> SchemaInfo: class SecurableType(Enum): - """The type of Unity Catalog securable""" + """The type of Unity Catalog securable.""" CATALOG = "CATALOG" CLEAN_ROOM = 
"CLEAN_ROOM" CONNECTION = "CONNECTION" CREDENTIAL = "CREDENTIAL" EXTERNAL_LOCATION = "EXTERNAL_LOCATION" + EXTERNAL_METADATA = "EXTERNAL_METADATA" FUNCTION = "FUNCTION" METASTORE = "METASTORE" PIPELINE = "PIPELINE" @@ -7350,8 +7934,10 @@ class SecurableType(Enum): RECIPIENT = "RECIPIENT" SCHEMA = "SCHEMA" SHARE = "SHARE" + STAGING_TABLE = "STAGING_TABLE" STORAGE_CREDENTIAL = "STORAGE_CREDENTIAL" TABLE = "TABLE" + UNKNOWN_SECURABLE_TYPE = "UNKNOWN_SECURABLE_TYPE" VOLUME = "VOLUME" @@ -7460,10 +8046,11 @@ class SseEncryptionDetails: """Server-Side Encryption properties for clients communicating with AWS s3.""" algorithm: Optional[SseEncryptionDetailsAlgorithm] = None - """The type of key encryption to use (affects headers from s3 client).""" + """Sets the value of the 'x-amz-server-side-encryption' header in S3 request.""" aws_kms_key_arn: Optional[str] = None - """When algorithm is **AWS_SSE_KMS** this field specifies the ARN of the SSE key to use.""" + """Optional. The ARN of the SSE-KMS key used with the S3 location, when algorithm = "SSE-KMS". Sets + the value of the 'x-amz-server-side-encryption-aws-kms-key-id' header.""" def as_dict(self) -> dict: """Serializes the SseEncryptionDetails into a dictionary suitable for use as a JSON request body.""" @@ -7493,7 +8080,6 @@ def from_dict(cls, d: Dict[str, Any]) -> SseEncryptionDetails: class SseEncryptionDetailsAlgorithm(Enum): - """The type of key encryption to use (affects headers from s3 client).""" AWS_SSE_KMS = "AWS_SSE_KMS" AWS_SSE_S3 = "AWS_SSE_S3" @@ -7663,14 +8249,193 @@ def from_dict(cls, d: Dict[str, Any]) -> StorageCredentialInfo: ) +@dataclass +class SyncedDatabaseTable: + """Next field marker: 10""" + + name: str + """Full three-part (catalog, schema, table) name of the table.""" + + data_synchronization_status: Optional[OnlineTableStatus] = None + """Synced Table data synchronization status""" + + database_instance_name: Optional[str] = None + """Name of the target database instance. This is required when creating synced database tables in + standard catalogs. This is optional when creating synced database tables in registered catalogs. + If this field is specified when creating synced database tables in registered catalogs, the + database instance name MUST match that of the registered catalog (or the request will be + rejected).""" + + logical_database_name: Optional[str] = None + """Target Postgres database object (logical database) name for this table. This field is optional + in all scenarios. + + When creating a synced table in a registered Postgres catalog, the target Postgres database name + is inferred to be that of the registered catalog. If this field is specified in this scenario, + the Postgres database name MUST match that of the registered catalog (or the request will be + rejected). + + When creating a synced table in a standard catalog, the target database name is inferred to be + that of the standard catalog. In this scenario, specifying this field will allow targeting an + arbitrary postgres database.""" + + spec: Optional[SyncedTableSpec] = None + """Specification of a synced database table.""" + + table_serving_url: Optional[str] = None + """Data serving REST API URL for this table""" + + unity_catalog_provisioning_state: Optional[ProvisioningInfoState] = None + """The provisioning state of the synced table entity in Unity Catalog. This is distinct from the + state of the data synchronization pipeline (i.e. 
the table may be in "ACTIVE" but the pipeline + may be in "PROVISIONING" as it runs asynchronously).""" + + def as_dict(self) -> dict: + """Serializes the SyncedDatabaseTable into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.data_synchronization_status: + body["data_synchronization_status"] = self.data_synchronization_status.as_dict() + if self.database_instance_name is not None: + body["database_instance_name"] = self.database_instance_name + if self.logical_database_name is not None: + body["logical_database_name"] = self.logical_database_name + if self.name is not None: + body["name"] = self.name + if self.spec: + body["spec"] = self.spec.as_dict() + if self.table_serving_url is not None: + body["table_serving_url"] = self.table_serving_url + if self.unity_catalog_provisioning_state is not None: + body["unity_catalog_provisioning_state"] = self.unity_catalog_provisioning_state.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes the SyncedDatabaseTable into a shallow dictionary of its immediate attributes.""" + body = {} + if self.data_synchronization_status: + body["data_synchronization_status"] = self.data_synchronization_status + if self.database_instance_name is not None: + body["database_instance_name"] = self.database_instance_name + if self.logical_database_name is not None: + body["logical_database_name"] = self.logical_database_name + if self.name is not None: + body["name"] = self.name + if self.spec: + body["spec"] = self.spec + if self.table_serving_url is not None: + body["table_serving_url"] = self.table_serving_url + if self.unity_catalog_provisioning_state is not None: + body["unity_catalog_provisioning_state"] = self.unity_catalog_provisioning_state + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> SyncedDatabaseTable: + """Deserializes the SyncedDatabaseTable from a dictionary.""" + return cls( + data_synchronization_status=_from_dict(d, "data_synchronization_status", OnlineTableStatus), + database_instance_name=d.get("database_instance_name", None), + logical_database_name=d.get("logical_database_name", None), + name=d.get("name", None), + spec=_from_dict(d, "spec", SyncedTableSpec), + table_serving_url=d.get("table_serving_url", None), + unity_catalog_provisioning_state=_enum(d, "unity_catalog_provisioning_state", ProvisioningInfoState), + ) + + +class SyncedTableSchedulingPolicy(Enum): + + CONTINUOUS = "CONTINUOUS" + SNAPSHOT = "SNAPSHOT" + TRIGGERED = "TRIGGERED" + + +@dataclass +class SyncedTableSpec: + """Specification of a synced database table.""" + + create_database_objects_if_missing: Optional[bool] = None + """If true, the synced table's logical database and schema resources in PG will be created if they + do not already exist.""" + + new_pipeline_spec: Optional[NewPipelineSpec] = None + """Spec of new pipeline. Should be empty if pipeline_id is set""" + + pipeline_id: Optional[str] = None + """ID of the associated pipeline. 
Should be empty if new_pipeline_spec is set""" + + primary_key_columns: Optional[List[str]] = None + """Primary Key columns to be used for data insert/update in the destination.""" + + scheduling_policy: Optional[SyncedTableSchedulingPolicy] = None + """Scheduling policy of the underlying pipeline.""" + + source_table_full_name: Optional[str] = None + """Three-part (catalog, schema, table) name of the source Delta table.""" + + timeseries_key: Optional[str] = None + """Time series key to deduplicate (tie-break) rows with the same primary key.""" + + def as_dict(self) -> dict: + """Serializes the SyncedTableSpec into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.create_database_objects_if_missing is not None: + body["create_database_objects_if_missing"] = self.create_database_objects_if_missing + if self.new_pipeline_spec: + body["new_pipeline_spec"] = self.new_pipeline_spec.as_dict() + if self.pipeline_id is not None: + body["pipeline_id"] = self.pipeline_id + if self.primary_key_columns: + body["primary_key_columns"] = [v for v in self.primary_key_columns] + if self.scheduling_policy is not None: + body["scheduling_policy"] = self.scheduling_policy.value + if self.source_table_full_name is not None: + body["source_table_full_name"] = self.source_table_full_name + if self.timeseries_key is not None: + body["timeseries_key"] = self.timeseries_key + return body + + def as_shallow_dict(self) -> dict: + """Serializes the SyncedTableSpec into a shallow dictionary of its immediate attributes.""" + body = {} + if self.create_database_objects_if_missing is not None: + body["create_database_objects_if_missing"] = self.create_database_objects_if_missing + if self.new_pipeline_spec: + body["new_pipeline_spec"] = self.new_pipeline_spec + if self.pipeline_id is not None: + body["pipeline_id"] = self.pipeline_id + if self.primary_key_columns: + body["primary_key_columns"] = self.primary_key_columns + if self.scheduling_policy is not None: + body["scheduling_policy"] = self.scheduling_policy + if self.source_table_full_name is not None: + body["source_table_full_name"] = self.source_table_full_name + if self.timeseries_key is not None: + body["timeseries_key"] = self.timeseries_key + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> SyncedTableSpec: + """Deserializes the SyncedTableSpec from a dictionary.""" + return cls( + create_database_objects_if_missing=d.get("create_database_objects_if_missing", None), + new_pipeline_spec=_from_dict(d, "new_pipeline_spec", NewPipelineSpec), + pipeline_id=d.get("pipeline_id", None), + primary_key_columns=d.get("primary_key_columns", None), + scheduling_policy=_enum(d, "scheduling_policy", SyncedTableSchedulingPolicy), + source_table_full_name=d.get("source_table_full_name", None), + timeseries_key=d.get("timeseries_key", None), + ) + + @dataclass class SystemSchemaInfo: - schema: Optional[str] = None + schema: str """Name of the system schema.""" - state: Optional[SystemSchemaInfoState] = None + state: str """The current state of enablement for the system schema. An empty string means the system schema - is available and ready for opt-in.""" + is available and ready for opt-in. 
Possible values: AVAILABLE | ENABLE_INITIALIZED | + ENABLE_COMPLETED | DISABLE_INITIALIZED | UNAVAILABLE""" def as_dict(self) -> dict: """Serializes the SystemSchemaInfo into a dictionary suitable for use as a JSON request body.""" @@ -7678,7 +8443,7 @@ def as_dict(self) -> dict: if self.schema is not None: body["schema"] = self.schema if self.state is not None: - body["state"] = self.state.value + body["state"] = self.state return body def as_shallow_dict(self) -> dict: @@ -7693,18 +8458,7 @@ def as_shallow_dict(self) -> dict: @classmethod def from_dict(cls, d: Dict[str, Any]) -> SystemSchemaInfo: """Deserializes the SystemSchemaInfo from a dictionary.""" - return cls(schema=d.get("schema", None), state=_enum(d, "state", SystemSchemaInfoState)) - - -class SystemSchemaInfoState(Enum): - """The current state of enablement for the system schema. An empty string means the system schema - is available and ready for opt-in.""" - - AVAILABLE = "AVAILABLE" - DISABLE_INITIALIZED = "DISABLE_INITIALIZED" - ENABLE_COMPLETED = "ENABLE_COMPLETED" - ENABLE_INITIALIZED = "ENABLE_INITIALIZED" - UNAVAILABLE = "UNAVAILABLE" + return cls(schema=d.get("schema", None), state=d.get("state", None)) @dataclass @@ -7843,7 +8597,6 @@ class TableInfo: effective_predictive_optimization_flag: Optional[EffectivePredictiveOptimizationFlag] = None enable_predictive_optimization: Optional[EnablePredictiveOptimization] = None - """Whether predictive optimization should be enabled for this object and objects under it.""" encryption_details: Optional[EncryptionDetails] = None """Encryption options that apply to clients connecting to cloud storage.""" @@ -8340,14 +9093,6 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateAssignmentResponse: return cls() -class UpdateBindingsSecurableType(Enum): - - CATALOG = "catalog" - CREDENTIAL = "credential" - EXTERNAL_LOCATION = "external_location" - STORAGE_CREDENTIAL = "storage_credential" - - @dataclass class UpdateCatalog: comment: Optional[str] = None @@ -8431,6 +9176,31 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateCatalog: ) +@dataclass +class UpdateCatalogWorkspaceBindingsResponse: + workspaces: Optional[List[int]] = None + """A list of workspace IDs""" + + def as_dict(self) -> dict: + """Serializes the UpdateCatalogWorkspaceBindingsResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.workspaces: + body["workspaces"] = [v for v in self.workspaces] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the UpdateCatalogWorkspaceBindingsResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.workspaces: + body["workspaces"] = self.workspaces + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> UpdateCatalogWorkspaceBindingsResponse: + """Deserializes the UpdateCatalogWorkspaceBindingsResponse from a dictionary.""" + return cls(workspaces=d.get("workspaces", None)) + + @dataclass class UpdateConnection: options: Dict[str, str] @@ -8601,15 +9371,15 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateCredentialRequest: @dataclass class UpdateExternalLocation: - access_point: Optional[str] = None - """The AWS access point to use when accesing s3 for this external location.""" - comment: Optional[str] = None """User-provided free-form text description.""" credential_name: Optional[str] = None """Name of the storage credential used with this location.""" + enable_file_events: Optional[bool] = None + """[Create:OPT Update:OPT] Whether to enable file events on this external 
location.""" + encryption_details: Optional[EncryptionDetails] = None """Encryption options that apply to clients connecting to cloud storage.""" @@ -8618,6 +9388,9 @@ class UpdateExternalLocation: enabled, the access to the location falls back to cluster credentials if UC credentials are not sufficient.""" + file_event_queue: Optional[FileEventQueue] = None + """[Create:OPT Update:OPT] File event queue settings.""" + force: Optional[bool] = None """Force update even if changing url invalidates dependent external tables or mounts.""" @@ -8644,16 +9417,18 @@ class UpdateExternalLocation: def as_dict(self) -> dict: """Serializes the UpdateExternalLocation into a dictionary suitable for use as a JSON request body.""" body = {} - if self.access_point is not None: - body["access_point"] = self.access_point if self.comment is not None: body["comment"] = self.comment if self.credential_name is not None: body["credential_name"] = self.credential_name + if self.enable_file_events is not None: + body["enable_file_events"] = self.enable_file_events if self.encryption_details: body["encryption_details"] = self.encryption_details.as_dict() if self.fallback is not None: body["fallback"] = self.fallback + if self.file_event_queue: + body["file_event_queue"] = self.file_event_queue.as_dict() if self.force is not None: body["force"] = self.force if self.isolation_mode is not None: @@ -8675,16 +9450,18 @@ def as_dict(self) -> dict: def as_shallow_dict(self) -> dict: """Serializes the UpdateExternalLocation into a shallow dictionary of its immediate attributes.""" body = {} - if self.access_point is not None: - body["access_point"] = self.access_point if self.comment is not None: body["comment"] = self.comment if self.credential_name is not None: body["credential_name"] = self.credential_name + if self.enable_file_events is not None: + body["enable_file_events"] = self.enable_file_events if self.encryption_details: body["encryption_details"] = self.encryption_details if self.fallback is not None: body["fallback"] = self.fallback + if self.file_event_queue: + body["file_event_queue"] = self.file_event_queue if self.force is not None: body["force"] = self.force if self.isolation_mode is not None: @@ -8707,11 +9484,12 @@ def as_shallow_dict(self) -> dict: def from_dict(cls, d: Dict[str, Any]) -> UpdateExternalLocation: """Deserializes the UpdateExternalLocation from a dictionary.""" return cls( - access_point=d.get("access_point", None), comment=d.get("comment", None), credential_name=d.get("credential_name", None), + enable_file_events=d.get("enable_file_events", None), encryption_details=_from_dict(d, "encryption_details", EncryptionDetails), fallback=d.get("fallback", None), + file_event_queue=_from_dict(d, "file_event_queue", FileEventQueue), force=d.get("force", None), isolation_mode=_enum(d, "isolation_mode", IsolationMode), name=d.get("name", None), @@ -9175,7 +9953,6 @@ class UpdateSchema: """User-provided free-form text description.""" enable_predictive_optimization: Optional[EnablePredictiveOptimization] = None - """Whether predictive optimization should be enabled for this object and objects under it.""" full_name: Optional[str] = None """Full name of the schema.""" @@ -9457,16 +10234,17 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateWorkspaceBindings: @dataclass class UpdateWorkspaceBindingsParameters: add: Optional[List[WorkspaceBinding]] = None - """List of workspace bindings""" + """List of workspace bindings.""" remove: Optional[List[WorkspaceBinding]] = None - """List of workspace 
bindings""" + """List of workspace bindings.""" securable_name: Optional[str] = None """The name of the securable.""" - securable_type: Optional[UpdateBindingsSecurableType] = None - """The type of the securable to bind to a workspace.""" + securable_type: Optional[str] = None + """The type of the securable to bind to a workspace (catalog, storage_credential, credential, or + external_location).""" def as_dict(self) -> dict: """Serializes the UpdateWorkspaceBindingsParameters into a dictionary suitable for use as a JSON request body.""" @@ -9478,7 +10256,7 @@ def as_dict(self) -> dict: if self.securable_name is not None: body["securable_name"] = self.securable_name if self.securable_type is not None: - body["securable_type"] = self.securable_type.value + body["securable_type"] = self.securable_type return body def as_shallow_dict(self) -> dict: @@ -9501,10 +10279,37 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateWorkspaceBindingsParameters: add=_repeated_dict(d, "add", WorkspaceBinding), remove=_repeated_dict(d, "remove", WorkspaceBinding), securable_name=d.get("securable_name", None), - securable_type=_enum(d, "securable_type", UpdateBindingsSecurableType), + securable_type=d.get("securable_type", None), ) +@dataclass +class UpdateWorkspaceBindingsResponse: + """A list of workspace IDs that are bound to the securable""" + + bindings: Optional[List[WorkspaceBinding]] = None + """List of workspace bindings.""" + + def as_dict(self) -> dict: + """Serializes the UpdateWorkspaceBindingsResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.bindings: + body["bindings"] = [v.as_dict() for v in self.bindings] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the UpdateWorkspaceBindingsResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.bindings: + body["bindings"] = self.bindings + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> UpdateWorkspaceBindingsResponse: + """Deserializes the UpdateWorkspaceBindingsResponse from a dictionary.""" + return cls(bindings=_repeated_dict(d, "bindings", WorkspaceBinding)) + + @dataclass class ValidateCredentialRequest: """Next ID: 17""" @@ -9990,9 +10795,11 @@ class VolumeType(Enum): @dataclass class WorkspaceBinding: - binding_type: Optional[WorkspaceBindingBindingType] = None + workspace_id: int + """Required""" - workspace_id: Optional[int] = None + binding_type: Optional[WorkspaceBindingBindingType] = None + """One of READ_WRITE/READ_ONLY. Default is READ_WRITE.""" def as_dict(self) -> dict: """Serializes the WorkspaceBinding into a dictionary suitable for use as a JSON request body.""" @@ -10021,48 +10828,13 @@ def from_dict(cls, d: Dict[str, Any]) -> WorkspaceBinding: class WorkspaceBindingBindingType(Enum): + """Using `BINDING_TYPE_` prefix here to avoid conflict with `TableOperation` enum in + `credentials_common.proto`.""" BINDING_TYPE_READ_ONLY = "BINDING_TYPE_READ_ONLY" BINDING_TYPE_READ_WRITE = "BINDING_TYPE_READ_WRITE" -@dataclass -class WorkspaceBindingsResponse: - """Currently assigned workspace bindings""" - - bindings: Optional[List[WorkspaceBinding]] = None - """List of workspace bindings""" - - next_page_token: Optional[str] = None - """Opaque token to retrieve the next page of results. Absent if there are no more pages. 
- __page_token__ should be set to this value for the next request (for the next page of results).""" - - def as_dict(self) -> dict: - """Serializes the WorkspaceBindingsResponse into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.bindings: - body["bindings"] = [v.as_dict() for v in self.bindings] - if self.next_page_token is not None: - body["next_page_token"] = self.next_page_token - return body - - def as_shallow_dict(self) -> dict: - """Serializes the WorkspaceBindingsResponse into a shallow dictionary of its immediate attributes.""" - body = {} - if self.bindings: - body["bindings"] = self.bindings - if self.next_page_token is not None: - body["next_page_token"] = self.next_page_token - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> WorkspaceBindingsResponse: - """Deserializes the WorkspaceBindingsResponse from a dictionary.""" - return cls( - bindings=_repeated_dict(d, "bindings", WorkspaceBinding), next_page_token=d.get("next_page_token", None) - ) - - class AccountMetastoreAssignmentsAPI: """These APIs manage metastore assignments to a workspace.""" @@ -10706,8 +11478,6 @@ def list( "Accept": "application/json", } - if "max_results" not in query: - query["max_results"] = 0 while True: json = self._api.do("GET", "/api/2.1/unity-catalog/catalogs", query=query, headers=headers) if "catalogs" in json: @@ -11316,6 +12086,241 @@ def validate_credential( return ValidateCredentialResponse.from_dict(res) +class DatabaseInstancesAPI: + """Database Instances provide access to a database via REST API or direct SQL.""" + + def __init__(self, api_client): + self._api = api_client + + def create_database_catalog(self, catalog: DatabaseCatalog) -> DatabaseCatalog: + """Create a Database Catalog. + + :param catalog: :class:`DatabaseCatalog` + + :returns: :class:`DatabaseCatalog` + """ + body = catalog.as_dict() + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do("POST", "/api/2.0/database/catalogs", body=body, headers=headers) + return DatabaseCatalog.from_dict(res) + + def create_database_instance(self, database_instance: DatabaseInstance) -> DatabaseInstance: + """Create a Database Instance. + + :param database_instance: :class:`DatabaseInstance` + A DatabaseInstance represents a logical Postgres instance, comprised of both compute and storage. + + :returns: :class:`DatabaseInstance` + """ + body = database_instance.as_dict() + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do("POST", "/api/2.0/database/instances", body=body, headers=headers) + return DatabaseInstance.from_dict(res) + + def create_synced_database_table(self, synced_table: SyncedDatabaseTable) -> SyncedDatabaseTable: + """Create a Synced Database Table. + + :param synced_table: :class:`SyncedDatabaseTable` + Next field marker: 10 + + :returns: :class:`SyncedDatabaseTable` + """ + body = synced_table.as_dict() + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do("POST", "/api/2.0/database/synced_tables", body=body, headers=headers) + return SyncedDatabaseTable.from_dict(res) + + def delete_database_catalog(self, name: str): + """Delete a Database Catalog. 
+ + :param name: str + + + """ + + headers = { + "Accept": "application/json", + } + + self._api.do("DELETE", f"/api/2.0/database/catalogs/{name}", headers=headers) + + def delete_database_instance(self, name: str, *, force: Optional[bool] = None, purge: Optional[bool] = None): + """Delete a Database Instance. + + :param name: str + Name of the instance to delete. + :param force: bool (optional) + By default, an instance cannot be deleted if it has descendant instances created via PITR. If this + flag is specified as true, all descendant instances will be deleted as well. + :param purge: bool (optional) + If false, the database instance is soft deleted. Soft deleted instances behave as if they are + deleted, and cannot be used for CRUD operations nor connected to. However, they can be undeleted by + calling the undelete API for a limited time. If true, the database instance is hard deleted and + cannot be undeleted. + + + """ + + query = {} + if force is not None: + query["force"] = force + if purge is not None: + query["purge"] = purge + headers = { + "Accept": "application/json", + } + + self._api.do("DELETE", f"/api/2.0/database/instances/{name}", query=query, headers=headers) + + def delete_synced_database_table(self, name: str): + """Delete a Synced Database Table. + + :param name: str + + + """ + + headers = { + "Accept": "application/json", + } + + self._api.do("DELETE", f"/api/2.0/database/synced_tables/{name}", headers=headers) + + def find_database_instance_by_uid(self, *, uid: Optional[str] = None) -> DatabaseInstance: + """Find a Database Instance by uid. + + :param uid: str (optional) + UID of the instance to get. + + :returns: :class:`DatabaseInstance` + """ + + query = {} + if uid is not None: + query["uid"] = uid + headers = { + "Accept": "application/json", + } + + res = self._api.do("GET", "/api/2.0/database/instances:findByUid", query=query, headers=headers) + return DatabaseInstance.from_dict(res) + + def get_database_catalog(self, name: str) -> DatabaseCatalog: + """Get a Database Catalog. + + :param name: str + + :returns: :class:`DatabaseCatalog` + """ + + headers = { + "Accept": "application/json", + } + + res = self._api.do("GET", f"/api/2.0/database/catalogs/{name}", headers=headers) + return DatabaseCatalog.from_dict(res) + + def get_database_instance(self, name: str) -> DatabaseInstance: + """Get a Database Instance. + + :param name: str + Name of the instance to get. + + :returns: :class:`DatabaseInstance` + """ + + headers = { + "Accept": "application/json", + } + + res = self._api.do("GET", f"/api/2.0/database/instances/{name}", headers=headers) + return DatabaseInstance.from_dict(res) + + def get_synced_database_table(self, name: str) -> SyncedDatabaseTable: + """Get a Synced Database Table. + + :param name: str + + :returns: :class:`SyncedDatabaseTable` + """ + + headers = { + "Accept": "application/json", + } + + res = self._api.do("GET", f"/api/2.0/database/synced_tables/{name}", headers=headers) + return SyncedDatabaseTable.from_dict(res) + + def list_database_instances( + self, *, page_size: Optional[int] = None, page_token: Optional[str] = None + ) -> Iterator[DatabaseInstance]: + """List Database Instances. + + :param page_size: int (optional) + Upper bound for items returned. + :param page_token: str (optional) + Pagination token to go to the next page of Database Instances. Requests the first page if absent. 
+ + :returns: Iterator over :class:`DatabaseInstance` + """ + + query = {} + if page_size is not None: + query["page_size"] = page_size + if page_token is not None: + query["page_token"] = page_token + headers = { + "Accept": "application/json", + } + + while True: + json = self._api.do("GET", "/api/2.0/database/instances", query=query, headers=headers) + if "database_instances" in json: + for v in json["database_instances"]: + yield DatabaseInstance.from_dict(v) + if "next_page_token" not in json or not json["next_page_token"]: + return + query["page_token"] = json["next_page_token"] + + def update_database_instance( + self, name: str, database_instance: DatabaseInstance, update_mask: str + ) -> DatabaseInstance: + """Update a Database Instance. + + :param name: str + The name of the instance. This is the unique identifier for the instance. + :param database_instance: :class:`DatabaseInstance` + A DatabaseInstance represents a logical Postgres instance, comprised of both compute and storage. + :param update_mask: str + The list of fields to update. + + :returns: :class:`DatabaseInstance` + """ + body = database_instance.as_dict() + query = {} + if update_mask is not None: + query["update_mask"] = update_mask + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do("PATCH", f"/api/2.0/database/instances/{name}", query=query, body=body, headers=headers) + return DatabaseInstance.from_dict(res) + + class ExternalLocationsAPI: """An external location is an object that combines a cloud storage path with a storage credential that authorizes access to the cloud storage path. Each external location is subject to Unity Catalog @@ -11337,10 +12342,11 @@ def create( url: str, credential_name: str, *, - access_point: Optional[str] = None, comment: Optional[str] = None, + enable_file_events: Optional[bool] = None, encryption_details: Optional[EncryptionDetails] = None, fallback: Optional[bool] = None, + file_event_queue: Optional[FileEventQueue] = None, read_only: Optional[bool] = None, skip_validation: Optional[bool] = None, ) -> ExternalLocationInfo: @@ -11356,16 +12362,18 @@ def create( Path URL of the external location. :param credential_name: str Name of the storage credential used with this location. - :param access_point: str (optional) - The AWS access point to use when accesing s3 for this external location. :param comment: str (optional) User-provided free-form text description. + :param enable_file_events: bool (optional) + [Create:OPT Update:OPT] Whether to enable file events on this external location. :param encryption_details: :class:`EncryptionDetails` (optional) Encryption options that apply to clients connecting to cloud storage. :param fallback: bool (optional) Indicates whether fallback mode is enabled for this external location. When fallback mode is enabled, the access to the location falls back to cluster credentials if UC credentials are not sufficient. + :param file_event_queue: :class:`FileEventQueue` (optional) + [Create:OPT Update:OPT] File event queue settings. :param read_only: bool (optional) Indicates whether the external location is read-only. 
:param skip_validation: bool (optional) @@ -11374,16 +12382,18 @@ def create( :returns: :class:`ExternalLocationInfo` """ body = {} - if access_point is not None: - body["access_point"] = access_point if comment is not None: body["comment"] = comment if credential_name is not None: body["credential_name"] = credential_name + if enable_file_events is not None: + body["enable_file_events"] = enable_file_events if encryption_details is not None: body["encryption_details"] = encryption_details.as_dict() if fallback is not None: body["fallback"] = fallback + if file_event_queue is not None: + body["file_event_queue"] = file_event_queue.as_dict() if name is not None: body["name"] = name if read_only is not None: @@ -11486,8 +12496,6 @@ def list( "Accept": "application/json", } - if "max_results" not in query: - query["max_results"] = 0 while True: json = self._api.do("GET", "/api/2.1/unity-catalog/external-locations", query=query, headers=headers) if "external_locations" in json: @@ -11501,11 +12509,12 @@ def update( self, name: str, *, - access_point: Optional[str] = None, comment: Optional[str] = None, credential_name: Optional[str] = None, + enable_file_events: Optional[bool] = None, encryption_details: Optional[EncryptionDetails] = None, fallback: Optional[bool] = None, + file_event_queue: Optional[FileEventQueue] = None, force: Optional[bool] = None, isolation_mode: Optional[IsolationMode] = None, new_name: Optional[str] = None, @@ -11522,18 +12531,20 @@ def update( :param name: str Name of the external location. - :param access_point: str (optional) - The AWS access point to use when accesing s3 for this external location. :param comment: str (optional) User-provided free-form text description. :param credential_name: str (optional) Name of the storage credential used with this location. + :param enable_file_events: bool (optional) + [Create:OPT Update:OPT] Whether to enable file events on this external location. :param encryption_details: :class:`EncryptionDetails` (optional) Encryption options that apply to clients connecting to cloud storage. :param fallback: bool (optional) Indicates whether fallback mode is enabled for this external location. When fallback mode is enabled, the access to the location falls back to cluster credentials if UC credentials are not sufficient. + :param file_event_queue: :class:`FileEventQueue` (optional) + [Create:OPT Update:OPT] File event queue settings. :param force: bool (optional) Force update even if changing url invalidates dependent external tables or mounts. :param isolation_mode: :class:`IsolationMode` (optional) @@ -11551,16 +12562,18 @@ def update( :returns: :class:`ExternalLocationInfo` """ body = {} - if access_point is not None: - body["access_point"] = access_point if comment is not None: body["comment"] = comment if credential_name is not None: body["credential_name"] = credential_name + if enable_file_events is not None: + body["enable_file_events"] = enable_file_events if encryption_details is not None: body["encryption_details"] = encryption_details.as_dict() if fallback is not None: body["fallback"] = fallback + if file_event_queue is not None: + body["file_event_queue"] = file_event_queue.as_dict() if force is not None: body["force"] = force if isolation_mode is not None: @@ -13429,7 +14442,6 @@ def update( :param comment: str (optional) User-provided free-form text description. 
:param enable_predictive_optimization: :class:`EnablePredictiveOptimization` (optional) - Whether predictive optimization should be enabled for this object and objects under it. :param new_name: str (optional) New name for the schema. :param owner: str (optional) @@ -13808,7 +14820,7 @@ def disable(self, metastore_id: str, schema_name: str): "DELETE", f"/api/2.1/unity-catalog/metastores/{metastore_id}/systemschemas/{schema_name}", headers=headers ) - def enable(self, metastore_id: str, schema_name: str): + def enable(self, metastore_id: str, schema_name: str, *, catalog_name: Optional[str] = None): """Enable a system schema. Enables the system schema and adds it to the system catalog. The caller must be an account admin or a @@ -13818,16 +14830,24 @@ def enable(self, metastore_id: str, schema_name: str): The metastore ID under which the system schema lives. :param schema_name: str Full name of the system schema. + :param catalog_name: str (optional) + the catalog in which the system schema is to be enabled """ - + body = {} + if catalog_name is not None: + body["catalog_name"] = catalog_name headers = { "Accept": "application/json", + "Content-Type": "application/json", } self._api.do( - "PUT", f"/api/2.1/unity-catalog/metastores/{metastore_id}/systemschemas/{schema_name}", headers=headers + "PUT", + f"/api/2.1/unity-catalog/metastores/{metastore_id}/systemschemas/{schema_name}", + body=body, + headers=headers, ) def list( @@ -13860,8 +14880,6 @@ def list( "Accept": "application/json", } - if "max_results" not in query: - query["max_results"] = 0 while True: json = self._api.do( "GET", f"/api/2.1/unity-catalog/metastores/{metastore_id}/systemschemas", query=query, headers=headers @@ -14539,12 +15557,12 @@ class WorkspaceBindingsAPI: the new path (/api/2.1/unity-catalog/bindings/{securable_type}/{securable_name}) which introduces the ability to bind a securable in READ_ONLY mode (catalogs only). - Securable types that support binding: - catalog - storage_credential - external_location""" + Securable types that support binding: - catalog - storage_credential - credential - external_location""" def __init__(self, api_client): self._api = api_client - def get(self, name: str) -> CurrentWorkspaceBindings: + def get(self, name: str) -> GetCatalogWorkspaceBindingsResponse: """Get catalog workspace bindings. Gets workspace bindings of the catalog. The caller must be a metastore admin or an owner of the @@ -14553,7 +15571,7 @@ def get(self, name: str) -> CurrentWorkspaceBindings: :param name: str The name of the catalog. - :returns: :class:`CurrentWorkspaceBindings` + :returns: :class:`GetCatalogWorkspaceBindingsResponse` """ headers = { @@ -14561,11 +15579,11 @@ def get(self, name: str) -> CurrentWorkspaceBindings: } res = self._api.do("GET", f"/api/2.1/unity-catalog/workspace-bindings/catalogs/{name}", headers=headers) - return CurrentWorkspaceBindings.from_dict(res) + return GetCatalogWorkspaceBindingsResponse.from_dict(res) def get_bindings( self, - securable_type: GetBindingsSecurableType, + securable_type: str, securable_name: str, *, max_results: Optional[int] = None, @@ -14576,8 +15594,9 @@ def get_bindings( Gets workspace bindings of the securable. The caller must be a metastore admin or an owner of the securable. - :param securable_type: :class:`GetBindingsSecurableType` - The type of the securable to bind to a workspace. + :param securable_type: str + The type of the securable to bind to a workspace (catalog, storage_credential, credential, or + external_location). 
:param securable_name: str The name of the securable. :param max_results: int (optional) @@ -14603,7 +15622,7 @@ def get_bindings( while True: json = self._api.do( "GET", - f"/api/2.1/unity-catalog/bindings/{securable_type.value}/{securable_name}", + f"/api/2.1/unity-catalog/bindings/{securable_type}/{securable_name}", query=query, headers=headers, ) @@ -14620,7 +15639,7 @@ def update( *, assign_workspaces: Optional[List[int]] = None, unassign_workspaces: Optional[List[int]] = None, - ) -> CurrentWorkspaceBindings: + ) -> UpdateCatalogWorkspaceBindingsResponse: """Update catalog workspace bindings. Updates workspace bindings of the catalog. The caller must be a metastore admin or an owner of the @@ -14633,7 +15652,7 @@ def update( :param unassign_workspaces: List[int] (optional) A list of workspace IDs. - :returns: :class:`CurrentWorkspaceBindings` + :returns: :class:`UpdateCatalogWorkspaceBindingsResponse` """ body = {} if assign_workspaces is not None: @@ -14648,31 +15667,32 @@ def update( res = self._api.do( "PATCH", f"/api/2.1/unity-catalog/workspace-bindings/catalogs/{name}", body=body, headers=headers ) - return CurrentWorkspaceBindings.from_dict(res) + return UpdateCatalogWorkspaceBindingsResponse.from_dict(res) def update_bindings( self, - securable_type: UpdateBindingsSecurableType, + securable_type: str, securable_name: str, *, add: Optional[List[WorkspaceBinding]] = None, remove: Optional[List[WorkspaceBinding]] = None, - ) -> WorkspaceBindingsResponse: + ) -> UpdateWorkspaceBindingsResponse: """Update securable workspace bindings. Updates workspace bindings of the securable. The caller must be a metastore admin or an owner of the securable. - :param securable_type: :class:`UpdateBindingsSecurableType` - The type of the securable to bind to a workspace. + :param securable_type: str + The type of the securable to bind to a workspace (catalog, storage_credential, credential, or + external_location). :param securable_name: str The name of the securable. :param add: List[:class:`WorkspaceBinding`] (optional) - List of workspace bindings + List of workspace bindings. :param remove: List[:class:`WorkspaceBinding`] (optional) - List of workspace bindings + List of workspace bindings. - :returns: :class:`WorkspaceBindingsResponse` + :returns: :class:`UpdateWorkspaceBindingsResponse` """ body = {} if add is not None: @@ -14685,9 +15705,6 @@ def update_bindings( } res = self._api.do( - "PATCH", - f"/api/2.1/unity-catalog/bindings/{securable_type.value}/{securable_name}", - body=body, - headers=headers, + "PATCH", f"/api/2.1/unity-catalog/bindings/{securable_type}/{securable_name}", body=body, headers=headers ) - return WorkspaceBindingsResponse.from_dict(res) + return UpdateWorkspaceBindingsResponse.from_dict(res) diff --git a/databricks/sdk/service/cleanrooms.py b/databricks/sdk/service/cleanrooms.py index 3f6d5a03..edf1cd25 100755 --- a/databricks/sdk/service/cleanrooms.py +++ b/databricks/sdk/service/cleanrooms.py @@ -338,6 +338,15 @@ class CleanRoomAssetNotebook: """Base 64 representation of the notebook contents. 
This is the same format as returned by :method:workspace/export with the format of **HTML**.""" + review_state: Optional[CleanRoomNotebookReviewNotebookReviewState] = None + """top-level status derived from all reviews""" + + reviews: Optional[List[CleanRoomNotebookReview]] = None + """All existing approvals or rejections""" + + runner_collaborator_aliases: Optional[List[str]] = None + """collaborators that can run the notebook""" + def as_dict(self) -> dict: """Serializes the CleanRoomAssetNotebook into a dictionary suitable for use as a JSON request body.""" body = {} @@ -345,6 +354,12 @@ def as_dict(self) -> dict: body["etag"] = self.etag if self.notebook_content is not None: body["notebook_content"] = self.notebook_content + if self.review_state is not None: + body["review_state"] = self.review_state.value + if self.reviews: + body["reviews"] = [v.as_dict() for v in self.reviews] + if self.runner_collaborator_aliases: + body["runner_collaborator_aliases"] = [v for v in self.runner_collaborator_aliases] return body def as_shallow_dict(self) -> dict: @@ -354,12 +369,24 @@ def as_shallow_dict(self) -> dict: body["etag"] = self.etag if self.notebook_content is not None: body["notebook_content"] = self.notebook_content + if self.review_state is not None: + body["review_state"] = self.review_state + if self.reviews: + body["reviews"] = self.reviews + if self.runner_collaborator_aliases: + body["runner_collaborator_aliases"] = self.runner_collaborator_aliases return body @classmethod def from_dict(cls, d: Dict[str, Any]) -> CleanRoomAssetNotebook: """Deserializes the CleanRoomAssetNotebook from a dictionary.""" - return cls(etag=d.get("etag", None), notebook_content=d.get("notebook_content", None)) + return cls( + etag=d.get("etag", None), + notebook_content=d.get("notebook_content", None), + review_state=_enum(d, "review_state", CleanRoomNotebookReviewNotebookReviewState), + reviews=_repeated_dict(d, "reviews", CleanRoomNotebookReview), + runner_collaborator_aliases=d.get("runner_collaborator_aliases", None), + ) class CleanRoomAssetStatusEnum(Enum): @@ -585,6 +612,78 @@ def from_dict(cls, d: Dict[str, Any]) -> CleanRoomCollaborator: ) +@dataclass +class CleanRoomNotebookReview: + comment: Optional[str] = None + """review comment""" + + created_at_millis: Optional[int] = None + """timestamp of when the review was submitted""" + + review_state: Optional[CleanRoomNotebookReviewNotebookReviewState] = None + """review outcome""" + + review_sub_reason: Optional[CleanRoomNotebookReviewNotebookReviewSubReason] = None + """specified when the review was not explicitly made by a user""" + + reviewer_collaborator_alias: Optional[str] = None + """collaborator alias of the reviewer""" + + def as_dict(self) -> dict: + """Serializes the CleanRoomNotebookReview into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.comment is not None: + body["comment"] = self.comment + if self.created_at_millis is not None: + body["created_at_millis"] = self.created_at_millis + if self.review_state is not None: + body["review_state"] = self.review_state.value + if self.review_sub_reason is not None: + body["review_sub_reason"] = self.review_sub_reason.value + if self.reviewer_collaborator_alias is not None: + body["reviewer_collaborator_alias"] = self.reviewer_collaborator_alias + return body + + def as_shallow_dict(self) -> dict: + """Serializes the CleanRoomNotebookReview into a shallow dictionary of its immediate attributes.""" + body = {} + if self.comment is not None: + body["comment"] = 
self.comment + if self.created_at_millis is not None: + body["created_at_millis"] = self.created_at_millis + if self.review_state is not None: + body["review_state"] = self.review_state + if self.review_sub_reason is not None: + body["review_sub_reason"] = self.review_sub_reason + if self.reviewer_collaborator_alias is not None: + body["reviewer_collaborator_alias"] = self.reviewer_collaborator_alias + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> CleanRoomNotebookReview: + """Deserializes the CleanRoomNotebookReview from a dictionary.""" + return cls( + comment=d.get("comment", None), + created_at_millis=d.get("created_at_millis", None), + review_state=_enum(d, "review_state", CleanRoomNotebookReviewNotebookReviewState), + review_sub_reason=_enum(d, "review_sub_reason", CleanRoomNotebookReviewNotebookReviewSubReason), + reviewer_collaborator_alias=d.get("reviewer_collaborator_alias", None), + ) + + +class CleanRoomNotebookReviewNotebookReviewState(Enum): + + APPROVED = "APPROVED" + PENDING = "PENDING" + REJECTED = "REJECTED" + + +class CleanRoomNotebookReviewNotebookReviewSubReason(Enum): + + AUTO_APPROVED = "AUTO_APPROVED" + BACKFILLED = "BACKFILLED" + + @dataclass class CleanRoomNotebookTaskRun: """Stores information about a single task run.""" @@ -594,12 +693,18 @@ class CleanRoomNotebookTaskRun: LIST API. if the task was run within the same workspace the API is being called. If the task run was in a different workspace under the same metastore, only the workspace_id is included.""" + notebook_etag: Optional[str] = None + """Etag of the notebook executed in this task run, used to identify the notebook version.""" + notebook_job_run_state: Optional[jobs.CleanRoomTaskRunState] = None """State of the task run.""" notebook_name: Optional[str] = None """Asset name of the notebook executed in this task run.""" + notebook_updated_at: Optional[int] = None + """The timestamp of when the notebook was last updated.""" + output_schema_expiration_time: Optional[int] = None """Expiration time of the output schema of the task run (if any), in epoch milliseconds.""" @@ -617,10 +722,14 @@ def as_dict(self) -> dict: body = {} if self.collaborator_job_run_info: body["collaborator_job_run_info"] = self.collaborator_job_run_info.as_dict() + if self.notebook_etag is not None: + body["notebook_etag"] = self.notebook_etag if self.notebook_job_run_state: body["notebook_job_run_state"] = self.notebook_job_run_state.as_dict() if self.notebook_name is not None: body["notebook_name"] = self.notebook_name + if self.notebook_updated_at is not None: + body["notebook_updated_at"] = self.notebook_updated_at if self.output_schema_expiration_time is not None: body["output_schema_expiration_time"] = self.output_schema_expiration_time if self.output_schema_name is not None: @@ -636,10 +745,14 @@ def as_shallow_dict(self) -> dict: body = {} if self.collaborator_job_run_info: body["collaborator_job_run_info"] = self.collaborator_job_run_info + if self.notebook_etag is not None: + body["notebook_etag"] = self.notebook_etag if self.notebook_job_run_state: body["notebook_job_run_state"] = self.notebook_job_run_state if self.notebook_name is not None: body["notebook_name"] = self.notebook_name + if self.notebook_updated_at is not None: + body["notebook_updated_at"] = self.notebook_updated_at if self.output_schema_expiration_time is not None: body["output_schema_expiration_time"] = self.output_schema_expiration_time if self.output_schema_name is not None: @@ -655,8 +768,10 @@ def from_dict(cls, d: 
Dict[str, Any]) -> CleanRoomNotebookTaskRun: """Deserializes the CleanRoomNotebookTaskRun from a dictionary.""" return cls( collaborator_job_run_info=_from_dict(d, "collaborator_job_run_info", CollaboratorJobRunInfo), + notebook_etag=d.get("notebook_etag", None), notebook_job_run_state=_from_dict(d, "notebook_job_run_state", jobs.CleanRoomTaskRunState), notebook_name=d.get("notebook_name", None), + notebook_updated_at=d.get("notebook_updated_at", None), output_schema_expiration_time=d.get("output_schema_expiration_time", None), output_schema_name=d.get("output_schema_name", None), run_duration=d.get("run_duration", None), diff --git a/databricks/sdk/service/compute.py b/databricks/sdk/service/compute.py index b5e7306a..aa35234a 100755 --- a/databricks/sdk/service/compute.py +++ b/databricks/sdk/service/compute.py @@ -729,7 +729,8 @@ class ClusterAttributes: cluster_name: Optional[str] = None """Cluster name requested by the user. This doesn't have to be unique. If not specified at - creation, the cluster name will be an empty string.""" + creation, the cluster name will be an empty string. For job clusters, the cluster name is + automatically set based on the job and job run IDs.""" custom_tags: Optional[Dict[str, str]] = None """Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS @@ -1118,7 +1119,8 @@ class ClusterDetails: cluster_name: Optional[str] = None """Cluster name requested by the user. This doesn't have to be unique. If not specified at - creation, the cluster name will be an empty string.""" + creation, the cluster name will be an empty string. For job clusters, the cluster name is + automatically set based on the job and job run IDs.""" cluster_source: Optional[ClusterSource] = None """Determines whether the cluster was created by a user through the UI, created by the Databricks @@ -2300,7 +2302,8 @@ class ClusterSpec: cluster_name: Optional[str] = None """Cluster name requested by the user. This doesn't have to be unique. If not specified at - creation, the cluster name will be an empty string.""" + creation, the cluster name will be an empty string. For job clusters, the cluster name is + automatically set based on the job and job run IDs.""" custom_tags: Optional[Dict[str, str]] = None """Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS @@ -2803,7 +2806,8 @@ class CreateCluster: cluster_name: Optional[str] = None """Cluster name requested by the user. This doesn't have to be unique. If not specified at - creation, the cluster name will be an empty string.""" + creation, the cluster name will be an empty string. For job clusters, the cluster name is + automatically set based on the job and job run IDs.""" custom_tags: Optional[Dict[str, str]] = None """Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS @@ -4117,7 +4121,8 @@ class EditCluster: cluster_name: Optional[str] = None """Cluster name requested by the user. This doesn't have to be unique. If not specified at - creation, the cluster name will be an empty string.""" + creation, the cluster name will be an empty string. For job clusters, the cluster name is + automatically set based on the job and job run IDs.""" custom_tags: Optional[Dict[str, str]] = None """Additional tags for cluster resources. 
Databricks will tag all cluster resources (e.g., AWS @@ -4499,10 +4504,6 @@ class EditInstancePool: min_idle_instances: Optional[int] = None """Minimum number of idle instances to keep in the instance pool""" - node_type_flexibility: Optional[NodeTypeFlexibility] = None - """For Fleet-pool V2, this object contains the information about the alternate node type ids to use - when attempting to launch a cluster if the node type id is not available.""" - def as_dict(self) -> dict: """Serializes the EditInstancePool into a dictionary suitable for use as a JSON request body.""" body = {} @@ -4518,8 +4519,6 @@ def as_dict(self) -> dict: body["max_capacity"] = self.max_capacity if self.min_idle_instances is not None: body["min_idle_instances"] = self.min_idle_instances - if self.node_type_flexibility: - body["node_type_flexibility"] = self.node_type_flexibility.as_dict() if self.node_type_id is not None: body["node_type_id"] = self.node_type_id return body @@ -4539,8 +4538,6 @@ def as_shallow_dict(self) -> dict: body["max_capacity"] = self.max_capacity if self.min_idle_instances is not None: body["min_idle_instances"] = self.min_idle_instances - if self.node_type_flexibility: - body["node_type_flexibility"] = self.node_type_flexibility if self.node_type_id is not None: body["node_type_id"] = self.node_type_id return body @@ -4555,7 +4552,6 @@ def from_dict(cls, d: Dict[str, Any]) -> EditInstancePool: instance_pool_name=d.get("instance_pool_name", None), max_capacity=d.get("max_capacity", None), min_idle_instances=d.get("min_idle_instances", None), - node_type_flexibility=_from_dict(d, "node_type_flexibility", NodeTypeFlexibility), node_type_id=d.get("node_type_id", None), ) @@ -4782,9 +4778,7 @@ def from_dict(cls, d: Dict[str, Any]) -> EnforceClusterComplianceResponse: @dataclass class Environment: """The environment entity used to preserve serverless environment side panel, jobs' environment for - non-notebook task, and DLT's environment for classic and serverless pipelines. (Note: DLT uses a - copied version of the Environment proto below, at - //spark/pipelines/api/protos/copied/libraries-environments-copy.proto) In this minimal + non-notebook task, and DLT's environment for classic and serverless pipelines. In this minimal environment spec, only pip dependencies are supported.""" client: str @@ -4800,6 +4794,13 @@ class Environment: Databricks), E.g. dependencies: ["foo==0.0.1", "-r /Workspace/test/requirements.txt"]""" + environment_version: Optional[str] = None + """We renamed `client` to `environment_version` in notebook exports. This field is meant solely so + that imported notebooks with `environment_version` can be deserialized correctly, in a + backwards-compatible way (i.e. if `client` is specified instead of `environment_version`, it + will be deserialized correctly). Do NOT use this field for any other purpose, e.g. notebook + storage. This field is not yet exposed to customers (e.g. in the jobs API).""" + jar_dependencies: Optional[List[str]] = None """List of jar dependencies, should be strings representing volume paths. 
For example: `/Volumes/path/to/test.jar`.""" @@ -4811,6 +4812,8 @@ def as_dict(self) -> dict: body["client"] = self.client if self.dependencies: body["dependencies"] = [v for v in self.dependencies] + if self.environment_version is not None: + body["environment_version"] = self.environment_version if self.jar_dependencies: body["jar_dependencies"] = [v for v in self.jar_dependencies] return body @@ -4822,6 +4825,8 @@ def as_shallow_dict(self) -> dict: body["client"] = self.client if self.dependencies: body["dependencies"] = self.dependencies + if self.environment_version is not None: + body["environment_version"] = self.environment_version if self.jar_dependencies: body["jar_dependencies"] = self.jar_dependencies return body @@ -4832,6 +4837,7 @@ def from_dict(cls, d: Dict[str, Any]) -> Environment: return cls( client=d.get("client", None), dependencies=d.get("dependencies", None), + environment_version=d.get("environment_version", None), jar_dependencies=d.get("jar_dependencies", None), ) @@ -5497,10 +5503,6 @@ class GetInstancePool: min_idle_instances: Optional[int] = None """Minimum number of idle instances to keep in the instance pool""" - node_type_flexibility: Optional[NodeTypeFlexibility] = None - """For Fleet-pool V2, this object contains the information about the alternate node type ids to use - when attempting to launch a cluster if the node type id is not available.""" - node_type_id: Optional[str] = None """This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster. For example, the Spark nodes can be provisioned and optimized for memory or @@ -5551,8 +5553,6 @@ def as_dict(self) -> dict: body["max_capacity"] = self.max_capacity if self.min_idle_instances is not None: body["min_idle_instances"] = self.min_idle_instances - if self.node_type_flexibility: - body["node_type_flexibility"] = self.node_type_flexibility.as_dict() if self.node_type_id is not None: body["node_type_id"] = self.node_type_id if self.preloaded_docker_images: @@ -5594,8 +5594,6 @@ def as_shallow_dict(self) -> dict: body["max_capacity"] = self.max_capacity if self.min_idle_instances is not None: body["min_idle_instances"] = self.min_idle_instances - if self.node_type_flexibility: - body["node_type_flexibility"] = self.node_type_flexibility if self.node_type_id is not None: body["node_type_id"] = self.node_type_id if self.preloaded_docker_images: @@ -5626,7 +5624,6 @@ def from_dict(cls, d: Dict[str, Any]) -> GetInstancePool: instance_pool_name=d.get("instance_pool_name", None), max_capacity=d.get("max_capacity", None), min_idle_instances=d.get("min_idle_instances", None), - node_type_flexibility=_from_dict(d, "node_type_flexibility", NodeTypeFlexibility), node_type_id=d.get("node_type_id", None), preloaded_docker_images=_repeated_dict(d, "preloaded_docker_images", DockerImage), preloaded_spark_versions=d.get("preloaded_spark_versions", None), @@ -6461,10 +6458,6 @@ class InstancePoolAndStats: min_idle_instances: Optional[int] = None """Minimum number of idle instances to keep in the instance pool""" - node_type_flexibility: Optional[NodeTypeFlexibility] = None - """For Fleet-pool V2, this object contains the information about the alternate node type ids to use - when attempting to launch a cluster if the node type id is not available.""" - node_type_id: Optional[str] = None """This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster. 
For example, the Spark nodes can be provisioned and optimized for memory or @@ -6515,8 +6508,6 @@ def as_dict(self) -> dict: body["max_capacity"] = self.max_capacity if self.min_idle_instances is not None: body["min_idle_instances"] = self.min_idle_instances - if self.node_type_flexibility: - body["node_type_flexibility"] = self.node_type_flexibility.as_dict() if self.node_type_id is not None: body["node_type_id"] = self.node_type_id if self.preloaded_docker_images: @@ -6558,8 +6549,6 @@ def as_shallow_dict(self) -> dict: body["max_capacity"] = self.max_capacity if self.min_idle_instances is not None: body["min_idle_instances"] = self.min_idle_instances - if self.node_type_flexibility: - body["node_type_flexibility"] = self.node_type_flexibility if self.node_type_id is not None: body["node_type_id"] = self.node_type_id if self.preloaded_docker_images: @@ -6590,7 +6579,6 @@ def from_dict(cls, d: Dict[str, Any]) -> InstancePoolAndStats: instance_pool_name=d.get("instance_pool_name", None), max_capacity=d.get("max_capacity", None), min_idle_instances=d.get("min_idle_instances", None), - node_type_flexibility=_from_dict(d, "node_type_flexibility", NodeTypeFlexibility), node_type_id=d.get("node_type_id", None), preloaded_docker_images=_repeated_dict(d, "preloaded_docker_images", DockerImage), preloaded_spark_versions=d.get("preloaded_spark_versions", None), @@ -8053,28 +8041,6 @@ def from_dict(cls, d: Dict[str, Any]) -> NodeType: ) -@dataclass -class NodeTypeFlexibility: - """For Fleet-V2 using classic clusters, this object contains the information about the alternate - node type ids to use when attempting to launch a cluster. It can be used with both the driver - and worker node types.""" - - def as_dict(self) -> dict: - """Serializes the NodeTypeFlexibility into a dictionary suitable for use as a JSON request body.""" - body = {} - return body - - def as_shallow_dict(self) -> dict: - """Serializes the NodeTypeFlexibility into a shallow dictionary of its immediate attributes.""" - body = {} - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> NodeTypeFlexibility: - """Deserializes the NodeTypeFlexibility from a dictionary.""" - return cls() - - @dataclass class PendingInstanceError: """Error message of a failed pending instance""" @@ -9404,7 +9370,8 @@ class UpdateClusterResource: cluster_name: Optional[str] = None """Cluster name requested by the user. This doesn't have to be unique. If not specified at - creation, the cluster name will be an empty string.""" + creation, the cluster name will be an empty string. For job clusters, the cluster name is + automatically set based on the job and job run IDs.""" custom_tags: Optional[Dict[str, str]] = None """Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS @@ -10374,7 +10341,8 @@ def create( of executor logs is `$destination/$clusterId/executor`. :param cluster_name: str (optional) Cluster name requested by the user. This doesn't have to be unique. If not specified at creation, - the cluster name will be an empty string. + the cluster name will be an empty string. For job clusters, the cluster name is automatically set + based on the job and job run IDs. :param custom_tags: Dict[str,str] (optional) Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS instances and EBS volumes) with these tags in addition to `default_tags`. Notes: @@ -10766,7 +10734,8 @@ def edit( of executor logs is `$destination/$clusterId/executor`. 
:param cluster_name: str (optional) Cluster name requested by the user. This doesn't have to be unique. If not specified at creation, - the cluster name will be an empty string. + the cluster name will be an empty string. For job clusters, the cluster name is automatically set + based on the job and job run IDs. :param custom_tags: Dict[str,str] (optional) Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS instances and EBS volumes) with these tags in addition to `default_tags`. Notes: @@ -12227,7 +12196,6 @@ def edit( idle_instance_autotermination_minutes: Optional[int] = None, max_capacity: Optional[int] = None, min_idle_instances: Optional[int] = None, - node_type_flexibility: Optional[NodeTypeFlexibility] = None, ): """Edit an existing instance pool. @@ -12260,9 +12228,6 @@ def edit( upsize requests. :param min_idle_instances: int (optional) Minimum number of idle instances to keep in the instance pool - :param node_type_flexibility: :class:`NodeTypeFlexibility` (optional) - For Fleet-pool V2, this object contains the information about the alternate node type ids to use - when attempting to launch a cluster if the node type id is not available. """ @@ -12279,8 +12244,6 @@ def edit( body["max_capacity"] = max_capacity if min_idle_instances is not None: body["min_idle_instances"] = min_idle_instances - if node_type_flexibility is not None: - body["node_type_flexibility"] = node_type_flexibility.as_dict() if node_type_id is not None: body["node_type_id"] = node_type_id headers = { @@ -12440,8 +12403,10 @@ def add( ): """Register an instance profile. - In the UI, you can select the instance profile when launching clusters. This API is only available to - admin users. + Registers an instance profile in Databricks. In the UI, you can then give users the permission to use + this instance profile when launching clusters. + + This API is only available to admin users. :param instance_profile_arn: str The AWS ARN of the instance profile to register with Databricks. This field is required. 
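# Editor's note: a hedged usage sketch for the instance-profile registration
# described above; the ARN is a placeholder, and per the docstring only admin
# users may call this API.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
w.instance_profiles.add(
    instance_profile_arn="arn:aws:iam::123456789012:instance-profile/example",  # placeholder ARN
    skip_validation=False,
)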
diff --git a/databricks/sdk/service/dashboards.py b/databricks/sdk/service/dashboards.py index c340b746..6a394572 100755 --- a/databricks/sdk/service/dashboards.py +++ b/databricks/sdk/service/dashboards.py @@ -1255,6 +1255,9 @@ class MessageErrorType(Enum): COULD_NOT_GET_MODEL_DEPLOYMENTS_EXCEPTION = "COULD_NOT_GET_MODEL_DEPLOYMENTS_EXCEPTION" COULD_NOT_GET_UC_SCHEMA_EXCEPTION = "COULD_NOT_GET_UC_SCHEMA_EXCEPTION" DEPLOYMENT_NOT_FOUND_EXCEPTION = "DEPLOYMENT_NOT_FOUND_EXCEPTION" + DESCRIBE_QUERY_INVALID_SQL_ERROR = "DESCRIBE_QUERY_INVALID_SQL_ERROR" + DESCRIBE_QUERY_TIMEOUT = "DESCRIBE_QUERY_TIMEOUT" + DESCRIBE_QUERY_UNEXPECTED_FAILURE = "DESCRIBE_QUERY_UNEXPECTED_FAILURE" FUNCTIONS_NOT_AVAILABLE_EXCEPTION = "FUNCTIONS_NOT_AVAILABLE_EXCEPTION" FUNCTION_ARGUMENTS_INVALID_EXCEPTION = "FUNCTION_ARGUMENTS_INVALID_EXCEPTION" FUNCTION_ARGUMENTS_INVALID_JSON_EXCEPTION = "FUNCTION_ARGUMENTS_INVALID_JSON_EXCEPTION" @@ -1267,9 +1270,13 @@ class MessageErrorType(Enum): ILLEGAL_PARAMETER_DEFINITION_EXCEPTION = "ILLEGAL_PARAMETER_DEFINITION_EXCEPTION" INVALID_CERTIFIED_ANSWER_FUNCTION_EXCEPTION = "INVALID_CERTIFIED_ANSWER_FUNCTION_EXCEPTION" INVALID_CERTIFIED_ANSWER_IDENTIFIER_EXCEPTION = "INVALID_CERTIFIED_ANSWER_IDENTIFIER_EXCEPTION" + INVALID_CHAT_COMPLETION_ARGUMENTS_JSON_EXCEPTION = "INVALID_CHAT_COMPLETION_ARGUMENTS_JSON_EXCEPTION" INVALID_CHAT_COMPLETION_JSON_EXCEPTION = "INVALID_CHAT_COMPLETION_JSON_EXCEPTION" INVALID_COMPLETION_REQUEST_EXCEPTION = "INVALID_COMPLETION_REQUEST_EXCEPTION" INVALID_FUNCTION_CALL_EXCEPTION = "INVALID_FUNCTION_CALL_EXCEPTION" + INVALID_SQL_MULTIPLE_DATASET_REFERENCES_EXCEPTION = "INVALID_SQL_MULTIPLE_DATASET_REFERENCES_EXCEPTION" + INVALID_SQL_MULTIPLE_STATEMENTS_EXCEPTION = "INVALID_SQL_MULTIPLE_STATEMENTS_EXCEPTION" + INVALID_SQL_UNKNOWN_TABLE_EXCEPTION = "INVALID_SQL_UNKNOWN_TABLE_EXCEPTION" INVALID_TABLE_IDENTIFIER_EXCEPTION = "INVALID_TABLE_IDENTIFIER_EXCEPTION" LOCAL_CONTEXT_EXCEEDED_EXCEPTION = "LOCAL_CONTEXT_EXCEEDED_EXCEPTION" MESSAGE_CANCELLED_WHILE_EXECUTING_EXCEPTION = "MESSAGE_CANCELLED_WHILE_EXECUTING_EXCEPTION" diff --git a/databricks/sdk/service/iam.py b/databricks/sdk/service/iam.py index d5fe5645..0d8c72fe 100755 --- a/databricks/sdk/service/iam.py +++ b/databricks/sdk/service/iam.py @@ -385,7 +385,10 @@ class GrantRule: """Role that is assigned to the list of principals.""" principals: Optional[List[str]] = None - """Principals this grant rule applies to.""" + """Principals this grant rule applies to. A principal can be a user (for end users), a service + principal (for applications and compute workloads), or an account group. 
Each principal has its + own identifier format: * users/<USERNAME> * groups/<GROUP_NAME> + servicePrincipals/<SERVICE_PRINCIPAL_APPLICATION_ID>""" def as_dict(self) -> dict: """Serializes the GrantRule into a dictionary suitable for use as a JSON request body.""" @@ -1327,6 +1330,7 @@ class PermissionLevel(Enum): CAN_ATTACH_TO = "CAN_ATTACH_TO" CAN_BIND = "CAN_BIND" + CAN_CREATE = "CAN_CREATE" CAN_EDIT = "CAN_EDIT" CAN_EDIT_METADATA = "CAN_EDIT_METADATA" CAN_MANAGE = "CAN_MANAGE" @@ -1334,6 +1338,7 @@ CAN_MANAGE_RUN = "CAN_MANAGE_RUN" CAN_MANAGE_STAGING_VERSIONS = "CAN_MANAGE_STAGING_VERSIONS" CAN_MONITOR = "CAN_MONITOR" + CAN_MONITOR_ONLY = "CAN_MONITOR_ONLY" CAN_QUERY = "CAN_QUERY" CAN_READ = "CAN_READ" CAN_RESTART = "CAN_RESTART" @@ -1410,50 +1415,6 @@ def from_dict(cls, d: Dict[str, Any]) -> PermissionsDescription: ) - -@dataclass -class PermissionsRequest: - access_control_list: Optional[List[AccessControlRequest]] = None - - request_object_id: Optional[str] = None - """The id of the request object.""" - - request_object_type: Optional[str] = None - """The type of the request object. Can be one of the following: alerts, authorization, clusters, - cluster-policies, dashboards, dbsql-dashboards, directories, experiments, files, instance-pools, - jobs, notebooks, pipelines, queries, registered-models, repos, serving-endpoints, or warehouses.""" - - def as_dict(self) -> dict: - """Serializes the PermissionsRequest into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.access_control_list: - body["access_control_list"] = [v.as_dict() for v in self.access_control_list] - if self.request_object_id is not None: - body["request_object_id"] = self.request_object_id - if self.request_object_type is not None: - body["request_object_type"] = self.request_object_type - return body - - def as_shallow_dict(self) -> dict: - """Serializes the PermissionsRequest into a shallow dictionary of its immediate attributes.""" - body = {} - if self.access_control_list: - body["access_control_list"] = self.access_control_list - if self.request_object_id is not None: - body["request_object_id"] = self.request_object_id - if self.request_object_type is not None: - body["request_object_type"] = self.request_object_type - return body - - @classmethod - def from_dict(cls, d: Dict[str, Any]) -> PermissionsRequest: - """Deserializes the PermissionsRequest from a dictionary.""" - return cls( - access_control_list=_repeated_dict(d, "access_control_list", AccessControlRequest), - request_object_id=d.get("request_object_id", None), - request_object_type=d.get("request_object_type", None), - ) - - @dataclass class PrincipalOutput: """Information about the principal assigned to the workspace.""" @@ -1619,13 +1580,19 @@ def from_dict(cls, d: Dict[str, Any]) -> Role: @dataclass class RuleSetResponse: - etag: Optional[str] = None - """Identifies the version of the rule set returned.""" + name: str + """Name of the rule set.""" - grant_rules: Optional[List[GrantRule]] = None + etag: str + """Identifies the version of the rule set returned. Etag used for versioning. The response is at + least as fresh as the eTag provided. Etag is used for optimistic concurrency control as a way to + help prevent simultaneous updates of a rule set from overwriting each other.
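# Editor's note: a sketch of the read -> modify -> write pattern described in the
# surrounding etag docstrings; the rule-set name, role, and principal are
# placeholders, and the role string is an assumed example, not the only valid value.
from databricks.sdk import AccountClient
from databricks.sdk.service import iam

a = AccountClient()
name = "accounts/<ACCOUNT_ID>/ruleSets/default"  # placeholder resource name
current = a.access_control.get_rule_set(name=name, etag="")  # empty etag: no freshness requirement
rules = list(current.grant_rules or [])
rules.append(iam.GrantRule(role="roles/servicePrincipal.user",  # assumed role
                           principals=["users/<USERNAME>"]))
# Pass the etag returned by the GET back with the update to detect concurrent writes.
a.access_control.update_rule_set(
    name=name,
    rule_set=iam.RuleSetUpdateRequest(name=name, etag=current.etag, grant_rules=rules),
)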
It is strongly + suggested that systems make use of the etag in the read -> modify -> write pattern to perform + rule set updates in order to avoid race conditions, that is, get an etag from a GET rule set + request, and pass it with the PUT update request to identify the rule set version you are + updating.""" - name: Optional[str] = None - """Name of the rule set.""" + grant_rules: Optional[List[GrantRule]] = None def as_dict(self) -> dict: """Serializes the RuleSetResponse into a dictionary suitable for use as a JSON request body.""" @@ -1663,8 +1630,13 @@ class RuleSetUpdateRequest: """Name of the rule set.""" etag: str - """The expected etag of the rule set to update. The update will fail if the value does not match - the value that is stored in account access control service.""" + """Identifies the version of the rule set returned. Etag used for versioning. The response is at + least as fresh as the eTag provided. Etag is used for optimistic concurrency control as a way to + help prevent simultaneous updates of a rule set from overwriting each other. It is strongly + suggested that systems make use of the etag in the read -> modify -> write pattern to perform + rule set updates in order to avoid race conditions, that is, get an etag from a GET rule set + request, and pass it with the PUT update request to identify the rule set version you are + updating.""" grant_rules: Optional[List[GrantRule]] = None @@ -1795,6 +1767,94 @@ class ServicePrincipalSchema(Enum): URN_IETF_PARAMS_SCIM_SCHEMAS_CORE_2_0_SERVICE_PRINCIPAL = "urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal" +@dataclass +class SetObjectPermissions: + access_control_list: Optional[List[AccessControlRequest]] = None + + request_object_id: Optional[str] = None + """The id of the request object.""" + + request_object_type: Optional[str] = None + """The type of the request object.
Can be one of the following: alerts, authorization, clusters, + cluster-policies, dashboards, dbsql-dashboards, directories, experiments, files, instance-pools, + jobs, notebooks, pipelines, queries, registered-models, repos, serving-endpoints, or warehouses.""" + + def as_dict(self) -> dict: + """Serializes the SetObjectPermissions into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.access_control_list: + body["access_control_list"] = [v.as_dict() for v in self.access_control_list] + if self.request_object_id is not None: + body["request_object_id"] = self.request_object_id + if self.request_object_type is not None: + body["request_object_type"] = self.request_object_type + return body + + def as_shallow_dict(self) -> dict: + """Serializes the SetObjectPermissions into a shallow dictionary of its immediate attributes.""" + body = {} + if self.access_control_list: + body["access_control_list"] = self.access_control_list + if self.request_object_id is not None: + body["request_object_id"] = self.request_object_id + if self.request_object_type is not None: + body["request_object_type"] = self.request_object_type + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> SetObjectPermissions: + """Deserializes the SetObjectPermissions from a dictionary.""" + return cls( + access_control_list=_repeated_dict(d, "access_control_list", AccessControlRequest), + request_object_id=d.get("request_object_id", None), + request_object_type=d.get("request_object_type", None), + ) + + +@dataclass +class UpdateObjectPermissions: + access_control_list: Optional[List[AccessControlRequest]] = None + + request_object_id: Optional[str] = None + """The id of the request object.""" + + request_object_type: Optional[str] = None + """The type of the request object. 
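# Editor's note: a hedged sketch of the w.permissions.set() call whose request body
# the SetObjectPermissions dataclass above serializes; the object id, principal, and
# permission level are placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import iam

w = WorkspaceClient()
w.permissions.set(
    request_object_type="clusters",    # one of the object types enumerated in the docstring
    request_object_id="<CLUSTER_ID>",  # placeholder id
    access_control_list=[
        iam.AccessControlRequest(
            user_name="someone@example.com",
            permission_level=iam.PermissionLevel.CAN_RESTART,
        )
    ],
)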
Can be one of the following: alerts, authorization, clusters, + cluster-policies, dashboards, dbsql-dashboards, directories, experiments, files, instance-pools, + jobs, notebooks, pipelines, queries, registered-models, repos, serving-endpoints, or warehouses.""" + + def as_dict(self) -> dict: + """Serializes the UpdateObjectPermissions into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.access_control_list: + body["access_control_list"] = [v.as_dict() for v in self.access_control_list] + if self.request_object_id is not None: + body["request_object_id"] = self.request_object_id + if self.request_object_type is not None: + body["request_object_type"] = self.request_object_type + return body + + def as_shallow_dict(self) -> dict: + """Serializes the UpdateObjectPermissions into a shallow dictionary of its immediate attributes.""" + body = {} + if self.access_control_list: + body["access_control_list"] = self.access_control_list + if self.request_object_id is not None: + body["request_object_id"] = self.request_object_id + if self.request_object_type is not None: + body["request_object_type"] = self.request_object_type + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> UpdateObjectPermissions: + """Deserializes the UpdateObjectPermissions from a dictionary.""" + return cls( + access_control_list=_repeated_dict(d, "access_control_list", AccessControlRequest), + request_object_id=d.get("request_object_id", None), + request_object_type=d.get("request_object_type", None), + ) + + @dataclass class UpdateResponse: def as_dict(self) -> dict: @@ -2111,6 +2171,11 @@ def get_assignable_roles_for_resource(self, resource: str) -> GetAssignableRoles :param resource: str The resource name for which assignable roles will be listed. + Examples | Summary :--- | :--- `resource=accounts/<ACCOUNT_ID>` | A resource name for the account. + `resource=accounts/<ACCOUNT_ID>/groups/<GROUP_ID>` | A resource name for the group. + `resource=accounts/<ACCOUNT_ID>/servicePrincipals/<SERVICE_PRINCIPAL_APPLICATION_ID>` | A resource name for the service + principal. + :returns: :class:`GetAssignableRolesForResourceResponse` """ @@ -2137,6 +2202,12 @@ def get_rule_set(self, name: str, etag: str) -> RuleSetResponse: :param name: str The ruleset name associated with the request. + + Examples | Summary :--- | :--- `name=accounts/<ACCOUNT_ID>/ruleSets/default` | A name for a rule set + on the account. `name=accounts/<ACCOUNT_ID>/groups/<GROUP_ID>/ruleSets/default` | A name for a rule + set on the group. + `name=accounts/<ACCOUNT_ID>/servicePrincipals/<SERVICE_PRINCIPAL_APPLICATION_ID>/ruleSets/default` | + A name for a rule set on the service principal. :param etag: str Etag used for versioning. The response is at least as fresh as the eTag provided. Etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a rule set from @@ -2145,6 +2216,10 @@ etag from a GET rule set request, and pass it with the PUT update request to identify the rule set version you are updating. + Examples | Summary :--- | :--- `etag=` | An empty etag can only be used in GET to indicate no + freshness requirements. `etag=RENUAAABhSweA4NvVmmUYdiU717H3Tgy0UJdor3gE4a+mq/oj9NjAf8ZsQ==` | An + etag encoding a specific version of the rule set to get or to be updated. + :returns: :class:`RuleSetResponse` """ @@ -2199,7 +2274,7 @@ class AccountAccessControlProxyAPI: """These APIs manage access rules on resources in an account. Currently, only grant rules are supported.
A grant rule specifies a role assigned to a set of principals. A list of rules attached to a resource is - called a rule set. A workspace must belong to an account for these APIs to work.""" + called a rule set. A workspace must belong to an account for these APIs to work""" def __init__(self, api_client): self._api = api_client @@ -2207,12 +2282,17 @@ def __init__(self, api_client): def get_assignable_roles_for_resource(self, resource: str) -> GetAssignableRolesForResourceResponse: """Get assignable roles for a resource. - Gets all the roles that can be granted on an account-level resource. A role is grantable if the rule + Gets all the roles that can be granted on an account level resource. A role is grantable if the rule set on the resource can contain an access rule of the role. :param resource: str The resource name for which assignable roles will be listed. + Examples | Summary :--- | :--- `resource=accounts/<ACCOUNT_ID>` | A resource name for the account. + `resource=accounts/<ACCOUNT_ID>/groups/<GROUP_ID>` | A resource name for the group. + `resource=accounts/<ACCOUNT_ID>/servicePrincipals/<SERVICE_PRINCIPAL_APPLICATION_ID>` | A resource name for the service + principal. + :returns: :class:`GetAssignableRolesForResourceResponse` """ @@ -2236,6 +2316,12 @@ def get_rule_set(self, name: str, etag: str) -> RuleSetResponse: :param name: str The ruleset name associated with the request. + + Examples | Summary :--- | :--- `name=accounts/<ACCOUNT_ID>/ruleSets/default` | A name for a rule set + on the account. `name=accounts/<ACCOUNT_ID>/groups/<GROUP_ID>/ruleSets/default` | A name for a rule + set on the group. + `name=accounts/<ACCOUNT_ID>/servicePrincipals/<SERVICE_PRINCIPAL_APPLICATION_ID>/ruleSets/default` | + A name for a rule set on the service principal. :param etag: str Etag used for versioning. The response is at least as fresh as the eTag provided. Etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a rule set from @@ -2244,6 +2330,10 @@ etag from a GET rule set request, and pass it with the PUT update request to identify the rule set version you are updating. + Examples | Summary :--- | :--- `etag=` | An empty etag can only be used in GET to indicate no + freshness requirements. `etag=RENUAAABhSweA4NvVmmUYdiU717H3Tgy0UJdor3gE4a+mq/oj9NjAf8ZsQ==` | An + etag encoding a specific version of the rule set to get or to be updated. + :returns: :class:`RuleSetResponse` """ @@ -2262,8 +2352,8 @@ def get_rule_set(self, name: str, etag: str) -> RuleSetResponse: def update_rule_set(self, name: str, rule_set: RuleSetUpdateRequest) -> RuleSetResponse: """Update a rule set. - Replace the rules of a rule set. First, use a GET rule set request to read the current version of the - rule set before modifying it. This pattern helps prevent conflicts between concurrent updates. + Replace the rules of a rule set. First, use get to read the current version of the rule set before + modifying it. This pattern helps prevent conflicts between concurrent updates. :param name: str Name of the rule set. @@ -3552,51 +3642,24 @@ def migrate_permissions( class PermissionsAPI: """Permissions API are used to create read, write, edit, update and manage access for various users on - different objects and endpoints. - - * **[Apps permissions](:service:apps)** — Manage which users can manage or use apps. - - * **[Cluster permissions](:service:clusters)** — Manage which users can manage, restart, or attach to - clusters. - - * **[Cluster policy permissions](:service:clusterpolicies)** — Manage which users - can use cluster policies.
- - * **[Delta Live Tables pipeline permissions](:service:pipelines)** — Manage which users can view, - manage, run, cancel, or own a Delta Live Tables pipeline. - - * **[Job permissions](:service:jobs)** — Manage which users can view, manage, trigger, cancel, or own a - job. - - * **[MLflow experiment permissions](:service:experiments)** — Manage which users can read, edit, or - manage MLflow experiments. - - * **[MLflow registered model permissions](:service:modelregistry)** — Manage which users can read, edit, - or manage MLflow registered models. - - * **[Password permissions](:service:users)** — Manage which users can use password login when SSO is - enabled. - - * **[Instance Pool permissions](:service:instancepools)** — Manage which users can manage or attach to - pools. - - * **[Repo permissions](repos)** — Manage which users can read, run, edit, or manage a repo. - - * **[Serving endpoint permissions](:service:servingendpoints)** — Manage which users can view, query, or - manage a serving endpoint. - - * **[SQL warehouse permissions](:service:warehouses)** — Manage which users can use or manage SQL - warehouses. - - * **[Token permissions](:service:tokenmanagement)** — Manage which users can create or use tokens. - - * **[Workspace object permissions](:service:workspace)** — Manage which users can read, run, edit, or - manage alerts, dbsql-dashboards, directories, files, notebooks and queries. - - For the mapping of the required permissions for specific actions or abilities and other important - information, see [Access Control]. - - Note that to manage access control on service principals, use **[Account Access Control + different objects and endpoints. * **[Apps permissions](:service:apps)** — Manage which users can manage + or use apps. * **[Cluster permissions](:service:clusters)** — Manage which users can manage, restart, or + attach to clusters. * **[Cluster policy permissions](:service:clusterpolicies)** — Manage which users + can use cluster policies. * **[Delta Live Tables pipeline permissions](:service:pipelines)** — Manage + which users can view, manage, run, cancel, or own a Delta Live Tables pipeline. * **[Job + permissions](:service:jobs)** — Manage which users can view, manage, trigger, cancel, or own a job. * + **[MLflow experiment permissions](:service:experiments)** — Manage which users can read, edit, or manage + MLflow experiments. * **[MLflow registered model permissions](:service:modelregistry)** — Manage which + users can read, edit, or manage MLflow registered models. * **[Instance Pool + permissions](:service:instancepools)** — Manage which users can manage or attach to pools. * **[Repo + permissions](repos)** — Manage which users can read, run, edit, or manage a repo. * **[Serving endpoint + permissions](:service:servingendpoints)** — Manage which users can view, query, or manage a serving + endpoint. * **[SQL warehouse permissions](:service:warehouses)** — Manage which users can use or manage + SQL warehouses. * **[Token permissions](:service:tokenmanagement)** — Manage which users can create or + use tokens. * **[Workspace object permissions](:service:workspace)** — Manage which users can read, run, + edit, or manage alerts, dbsql-dashboards, directories, files, notebooks and queries. For the mapping of + the required permissions for specific actions or abilities and other important information, see [Access + Control]. Note that to manage access control on service principals, use **[Account Access Control Proxy](:service:accountaccesscontrolproxy)**. 
[Access Control]: https://docs.databricks.com/security/auth-authz/access-control/index.html""" @@ -3633,9 +3696,10 @@ def get_permission_levels(self, request_object_type: str, request_object_id: str Gets the permission levels that a user can have on an object. :param request_object_type: str - + The type of the request object. Can be one of the following: alerts, authorization, clusters, + cluster-policies, dashboards, dbsql-dashboards, directories, experiments, files, instance-pools, + jobs, notebooks, pipelines, queries, registered-models, repos, serving-endpoints, or warehouses. :param request_object_id: str - :returns: :class:`GetPermissionLevelsResponse` """ diff --git a/databricks/sdk/service/jobs.py b/databricks/sdk/service/jobs.py index 051b514c..1cb0ac4a 100755 --- a/databricks/sdk/service/jobs.py +++ b/databricks/sdk/service/jobs.py @@ -807,7 +807,7 @@ class ComputeConfig: num_gpus: int """Number of GPUs.""" - gpu_node_pool_id: str + gpu_node_pool_id: Optional[str] = None """ID of the GPU pool to use.""" gpu_type: Optional[str] = None @@ -2708,9 +2708,7 @@ class JobEnvironment: spec: Optional[compute.Environment] = None """The environment entity used to preserve serverless environment side panel, jobs' environment for - non-notebook task, and DLT's environment for classic and serverless pipelines. (Note: DLT uses a - copied version of the Environment proto below, at - //spark/pipelines/api/protos/copied/libraries-environments-copy.proto) In this minimal + non-notebook task, and DLT's environment for classic and serverless pipelines. In this minimal environment spec, only pip dependencies are supported.""" def as_dict(self) -> dict: @@ -8469,7 +8467,8 @@ def from_dict(cls, d: Dict[str, Any]) -> TaskNotificationSettings: class TerminationCodeCode(Enum): """The code indicates why the run was terminated. Additional codes might be introduced in future - releases. * `SUCCESS`: The run was completed successfully. * `USER_CANCELED`: The run was + releases. * `SUCCESS`: The run was completed successfully. * `SUCCESS_WITH_FAILURES`: The run + was completed successfully but some child runs failed. * `USER_CANCELED`: The run was successfully canceled during execution by a user. * `CANCELED`: The run was canceled during execution by the Databricks platform; for example, if the maximum run duration was exceeded. * `SKIPPED`: Run was never executed, for example, if the upstream task run failed, the dependency @@ -8525,6 +8524,7 @@ class TerminationCodeCode(Enum): SKIPPED = "SKIPPED" STORAGE_ACCESS_ERROR = "STORAGE_ACCESS_ERROR" SUCCESS = "SUCCESS" + SUCCESS_WITH_FAILURES = "SUCCESS_WITH_FAILURES" UNAUTHORIZED_ERROR = "UNAUTHORIZED_ERROR" USER_CANCELED = "USER_CANCELED" WORKSPACE_RUN_LIMIT_EXCEEDED = "WORKSPACE_RUN_LIMIT_EXCEEDED" @@ -8534,7 +8534,8 @@ class TerminationDetails: code: Optional[TerminationCodeCode] = None """The code indicates why the run was terminated. Additional codes might be introduced in future - releases. * `SUCCESS`: The run was completed successfully. * `USER_CANCELED`: The run was + releases. * `SUCCESS`: The run was completed successfully. * `SUCCESS_WITH_FAILURES`: The run + was completed successfully but some child runs failed. * `USER_CANCELED`: The run was successfully canceled during execution by a user. * `CANCELED`: The run was canceled during execution by the Databricks platform; for example, if the maximum run duration was exceeded.
* `SKIPPED`: Run was never executed, for example, if the upstream task run failed, the dependency diff --git a/databricks/sdk/service/ml.py b/databricks/sdk/service/ml.py index 7ec4d9cc..1e500f10 100755 --- a/databricks/sdk/service/ml.py +++ b/databricks/sdk/service/ml.py @@ -785,6 +785,98 @@ def from_dict(cls, d: Dict[str, Any]) -> CreateForecastingExperimentResponse: return cls(experiment_id=d.get("experiment_id", None)) +@dataclass +class CreateLoggedModelRequest: + experiment_id: str + """The ID of the experiment that owns the model.""" + + model_type: Optional[str] = None + """The type of the model, such as ``"Agent"``, ``"Classifier"``, ``"LLM"``.""" + + name: Optional[str] = None + """The name of the model (optional). If not specified one will be generated.""" + + params: Optional[List[LoggedModelParameter]] = None + """Parameters attached to the model.""" + + source_run_id: Optional[str] = None + """The ID of the run that created the model.""" + + tags: Optional[List[LoggedModelTag]] = None + """Tags attached to the model.""" + + def as_dict(self) -> dict: + """Serializes the CreateLoggedModelRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.experiment_id is not None: + body["experiment_id"] = self.experiment_id + if self.model_type is not None: + body["model_type"] = self.model_type + if self.name is not None: + body["name"] = self.name + if self.params: + body["params"] = [v.as_dict() for v in self.params] + if self.source_run_id is not None: + body["source_run_id"] = self.source_run_id + if self.tags: + body["tags"] = [v.as_dict() for v in self.tags] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the CreateLoggedModelRequest into a shallow dictionary of its immediate attributes.""" + body = {} + if self.experiment_id is not None: + body["experiment_id"] = self.experiment_id + if self.model_type is not None: + body["model_type"] = self.model_type + if self.name is not None: + body["name"] = self.name + if self.params: + body["params"] = self.params + if self.source_run_id is not None: + body["source_run_id"] = self.source_run_id + if self.tags: + body["tags"] = self.tags + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> CreateLoggedModelRequest: + """Deserializes the CreateLoggedModelRequest from a dictionary.""" + return cls( + experiment_id=d.get("experiment_id", None), + model_type=d.get("model_type", None), + name=d.get("name", None), + params=_repeated_dict(d, "params", LoggedModelParameter), + source_run_id=d.get("source_run_id", None), + tags=_repeated_dict(d, "tags", LoggedModelTag), + ) + + +@dataclass +class CreateLoggedModelResponse: + model: Optional[LoggedModel] = None + """The newly created logged model.""" + + def as_dict(self) -> dict: + """Serializes the CreateLoggedModelResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.model: + body["model"] = self.model.as_dict() + return body + + def as_shallow_dict(self) -> dict: + """Serializes the CreateLoggedModelResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.model: + body["model"] = self.model + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> CreateLoggedModelResponse: + """Deserializes the CreateLoggedModelResponse from a dictionary.""" + return cls(model=_from_dict(d, "model", LoggedModel)) + + @dataclass class CreateModelRequest: name: str @@ -1404,6 +1496,42 @@ def from_dict(cls, d: Dict[str, Any]) -> 
DeleteExperimentResponse: return cls() +@dataclass +class DeleteLoggedModelResponse: + def as_dict(self) -> dict: + """Serializes the DeleteLoggedModelResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DeleteLoggedModelResponse into a shallow dictionary of its immediate attributes.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DeleteLoggedModelResponse: + """Deserializes the DeleteLoggedModelResponse from a dictionary.""" + return cls() + + +@dataclass +class DeleteLoggedModelTagResponse: + def as_dict(self) -> dict: + """Serializes the DeleteLoggedModelTagResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DeleteLoggedModelTagResponse into a shallow dictionary of its immediate attributes.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DeleteLoggedModelTagResponse: + """Deserializes the DeleteLoggedModelTagResponse from a dictionary.""" + return cls() + + @dataclass class DeleteModelResponse: def as_dict(self) -> dict: @@ -2103,6 +2231,64 @@ def from_dict(cls, d: Dict[str, Any]) -> FileInfo: return cls(file_size=d.get("file_size", None), is_dir=d.get("is_dir", None), path=d.get("path", None)) +@dataclass +class FinalizeLoggedModelRequest: + status: LoggedModelStatus + """Whether or not the model is ready for use. ``"LOGGED_MODEL_UPLOAD_FAILED"`` indicates that + something went wrong when logging the model weights / agent code).""" + + model_id: Optional[str] = None + """The ID of the logged model to finalize.""" + + def as_dict(self) -> dict: + """Serializes the FinalizeLoggedModelRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.model_id is not None: + body["model_id"] = self.model_id + if self.status is not None: + body["status"] = self.status.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes the FinalizeLoggedModelRequest into a shallow dictionary of its immediate attributes.""" + body = {} + if self.model_id is not None: + body["model_id"] = self.model_id + if self.status is not None: + body["status"] = self.status + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> FinalizeLoggedModelRequest: + """Deserializes the FinalizeLoggedModelRequest from a dictionary.""" + return cls(model_id=d.get("model_id", None), status=_enum(d, "status", LoggedModelStatus)) + + +@dataclass +class FinalizeLoggedModelResponse: + model: Optional[LoggedModel] = None + """The updated logged model.""" + + def as_dict(self) -> dict: + """Serializes the FinalizeLoggedModelResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.model: + body["model"] = self.model.as_dict() + return body + + def as_shallow_dict(self) -> dict: + """Serializes the FinalizeLoggedModelResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.model: + body["model"] = self.model + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> FinalizeLoggedModelResponse: + """Deserializes the FinalizeLoggedModelResponse from a dictionary.""" + return cls(model=_from_dict(d, "model", LoggedModel)) + + @dataclass class ForecastingExperiment: """Represents a forecasting experiment with its unique identifier, URL, and state.""" @@ -2340,6 +2526,31 @@ def from_dict(cls, d: Dict[str, Any]) -> 
GetLatestVersionsResponse: return cls(model_versions=_repeated_dict(d, "model_versions", ModelVersion)) +@dataclass +class GetLoggedModelResponse: + model: Optional[LoggedModel] = None + """The retrieved logged model.""" + + def as_dict(self) -> dict: + """Serializes the GetLoggedModelResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.model: + body["model"] = self.model.as_dict() + return body + + def as_shallow_dict(self) -> dict: + """Serializes the GetLoggedModelResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.model: + body["model"] = self.model + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> GetLoggedModelResponse: + """Deserializes the GetLoggedModelResponse from a dictionary.""" + return cls(model=_from_dict(d, "model", LoggedModel)) + + @dataclass class GetMetricHistoryResponse: metrics: Optional[List[Metric]] = None @@ -2782,6 +2993,49 @@ def from_dict(cls, d: Dict[str, Any]) -> ListExperimentsResponse: ) +@dataclass +class ListLoggedModelArtifactsResponse: + files: Optional[List[FileInfo]] = None + """File location and metadata for artifacts.""" + + next_page_token: Optional[str] = None + """Token that can be used to retrieve the next page of artifact results""" + + root_uri: Optional[str] = None + """Root artifact directory for the logged model.""" + + def as_dict(self) -> dict: + """Serializes the ListLoggedModelArtifactsResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.files: + body["files"] = [v.as_dict() for v in self.files] + if self.next_page_token is not None: + body["next_page_token"] = self.next_page_token + if self.root_uri is not None: + body["root_uri"] = self.root_uri + return body + + def as_shallow_dict(self) -> dict: + """Serializes the ListLoggedModelArtifactsResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.files: + body["files"] = self.files + if self.next_page_token is not None: + body["next_page_token"] = self.next_page_token + if self.root_uri is not None: + body["root_uri"] = self.root_uri + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> ListLoggedModelArtifactsResponse: + """Deserializes the ListLoggedModelArtifactsResponse from a dictionary.""" + return cls( + files=_repeated_dict(d, "files", FileInfo), + next_page_token=d.get("next_page_token", None), + root_uri=d.get("root_uri", None), + ) + + @dataclass class ListModelsResponse: next_page_token: Optional[str] = None @@ -3008,6 +3262,56 @@ def from_dict(cls, d: Dict[str, Any]) -> LogInputsResponse: return cls() +@dataclass +class LogLoggedModelParamsRequest: + model_id: Optional[str] = None + """The ID of the logged model to log params for.""" + + params: Optional[List[LoggedModelParameter]] = None + """Parameters to attach to the model.""" + + def as_dict(self) -> dict: + """Serializes the LogLoggedModelParamsRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.model_id is not None: + body["model_id"] = self.model_id + if self.params: + body["params"] = [v.as_dict() for v in self.params] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LogLoggedModelParamsRequest into a shallow dictionary of its immediate attributes.""" + body = {} + if self.model_id is not None: + body["model_id"] = self.model_id + if self.params: + body["params"] = self.params + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> 
LogLoggedModelParamsRequest: + """Deserializes the LogLoggedModelParamsRequest from a dictionary.""" + return cls(model_id=d.get("model_id", None), params=_repeated_dict(d, "params", LoggedModelParameter)) + + +@dataclass +class LogLoggedModelParamsRequestResponse: + def as_dict(self) -> dict: + """Serializes the LogLoggedModelParamsRequestResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LogLoggedModelParamsRequestResponse into a shallow dictionary of its immediate attributes.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LogLoggedModelParamsRequestResponse: + """Deserializes the LogLoggedModelParamsRequestResponse from a dictionary.""" + return cls() + + @dataclass class LogMetric: key: str @@ -3170,6 +3474,56 @@ def from_dict(cls, d: Dict[str, Any]) -> LogModelResponse: return cls() +@dataclass +class LogOutputsRequest: + run_id: str + """The ID of the Run from which to log outputs.""" + + models: Optional[List[ModelOutput]] = None + """The model outputs from the Run.""" + + def as_dict(self) -> dict: + """Serializes the LogOutputsRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.models: + body["models"] = [v.as_dict() for v in self.models] + if self.run_id is not None: + body["run_id"] = self.run_id + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LogOutputsRequest into a shallow dictionary of its immediate attributes.""" + body = {} + if self.models: + body["models"] = self.models + if self.run_id is not None: + body["run_id"] = self.run_id + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LogOutputsRequest: + """Deserializes the LogOutputsRequest from a dictionary.""" + return cls(models=_repeated_dict(d, "models", ModelOutput), run_id=d.get("run_id", None)) + + +@dataclass +class LogOutputsResponse: + def as_dict(self) -> dict: + """Serializes the LogOutputsResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LogOutputsResponse into a shallow dictionary of its immediate attributes.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LogOutputsResponse: + """Deserializes the LogOutputsResponse from a dictionary.""" + return cls() + + @dataclass class LogParam: key: str @@ -3241,23 +3595,287 @@ def from_dict(cls, d: Dict[str, Any]) -> LogParamResponse: @dataclass -class Metric: - """Metric associated with a run, represented as a key-value pair.""" +class LoggedModel: + """A logged model message includes logged model attributes, tags, registration info, params, and + linked run metrics.""" - dataset_digest: Optional[str] = None - """The dataset digest of the dataset associated with the metric, e.g. an md5 hash of the dataset - that uniquely identifies it within datasets of the same name.""" + data: Optional[LoggedModelData] = None + """The params and metrics attached to the logged model.""" - dataset_name: Optional[str] = None - """The name of the dataset associated with the metric. E.g. 
“my.uc.table@2” - “nyc-taxi-dataset”, “fantastic-elk-3”""" + info: Optional[LoggedModelInfo] = None + """The logged model attributes such as model ID, status, tags, etc.""" - key: Optional[str] = None - """The key identifying the metric.""" + def as_dict(self) -> dict: + """Serializes the LoggedModel into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.data: + body["data"] = self.data.as_dict() + if self.info: + body["info"] = self.info.as_dict() + return body - model_id: Optional[str] = None - """The ID of the logged model or registered model version associated with the metric, if - applicable.""" + def as_shallow_dict(self) -> dict: + """Serializes the LoggedModel into a shallow dictionary of its immediate attributes.""" + body = {} + if self.data: + body["data"] = self.data + if self.info: + body["info"] = self.info + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LoggedModel: + """Deserializes the LoggedModel from a dictionary.""" + return cls(data=_from_dict(d, "data", LoggedModelData), info=_from_dict(d, "info", LoggedModelInfo)) + + +@dataclass +class LoggedModelData: + """A LoggedModelData message includes logged model params and linked metrics.""" + + metrics: Optional[List[Metric]] = None + """Performance metrics linked to the model.""" + + params: Optional[List[LoggedModelParameter]] = None + """Immutable string key-value pairs of the model.""" + + def as_dict(self) -> dict: + """Serializes the LoggedModelData into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.metrics: + body["metrics"] = [v.as_dict() for v in self.metrics] + if self.params: + body["params"] = [v.as_dict() for v in self.params] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LoggedModelData into a shallow dictionary of its immediate attributes.""" + body = {} + if self.metrics: + body["metrics"] = self.metrics + if self.params: + body["params"] = self.params + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LoggedModelData: + """Deserializes the LoggedModelData from a dictionary.""" + return cls( + metrics=_repeated_dict(d, "metrics", Metric), params=_repeated_dict(d, "params", LoggedModelParameter) + ) + + +@dataclass +class LoggedModelInfo: + """A LoggedModelInfo includes logged model attributes, tags, and registration info.""" + + artifact_uri: Optional[str] = None + """The URI of the directory where model artifacts are stored.""" + + creation_timestamp_ms: Optional[int] = None + """The timestamp when the model was created in milliseconds since the UNIX epoch.""" + + creator_id: Optional[int] = None + """The ID of the user or principal that created the model.""" + + experiment_id: Optional[str] = None + """The ID of the experiment that owns the model.""" + + last_updated_timestamp_ms: Optional[int] = None + """The timestamp when the model was last updated in milliseconds since the UNIX epoch.""" + + model_id: Optional[str] = None + """The unique identifier for the logged model.""" + + model_type: Optional[str] = None + """The type of model, such as ``"Agent"``, ``"Classifier"``, ``"LLM"``.""" + + name: Optional[str] = None + """The name of the model.""" + + source_run_id: Optional[str] = None + """The ID of the run that created the model.""" + + status: Optional[LoggedModelStatus] = None + """The status of whether or not the model is ready for use.""" + + status_message: Optional[str] = None + """Details on the current model status.""" + + tags: 
Optional[List[LoggedModelTag]] = None + """Mutable string key-value pairs set on the model.""" + + def as_dict(self) -> dict: + """Serializes the LoggedModelInfo into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.artifact_uri is not None: + body["artifact_uri"] = self.artifact_uri + if self.creation_timestamp_ms is not None: + body["creation_timestamp_ms"] = self.creation_timestamp_ms + if self.creator_id is not None: + body["creator_id"] = self.creator_id + if self.experiment_id is not None: + body["experiment_id"] = self.experiment_id + if self.last_updated_timestamp_ms is not None: + body["last_updated_timestamp_ms"] = self.last_updated_timestamp_ms + if self.model_id is not None: + body["model_id"] = self.model_id + if self.model_type is not None: + body["model_type"] = self.model_type + if self.name is not None: + body["name"] = self.name + if self.source_run_id is not None: + body["source_run_id"] = self.source_run_id + if self.status is not None: + body["status"] = self.status.value + if self.status_message is not None: + body["status_message"] = self.status_message + if self.tags: + body["tags"] = [v.as_dict() for v in self.tags] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LoggedModelInfo into a shallow dictionary of its immediate attributes.""" + body = {} + if self.artifact_uri is not None: + body["artifact_uri"] = self.artifact_uri + if self.creation_timestamp_ms is not None: + body["creation_timestamp_ms"] = self.creation_timestamp_ms + if self.creator_id is not None: + body["creator_id"] = self.creator_id + if self.experiment_id is not None: + body["experiment_id"] = self.experiment_id + if self.last_updated_timestamp_ms is not None: + body["last_updated_timestamp_ms"] = self.last_updated_timestamp_ms + if self.model_id is not None: + body["model_id"] = self.model_id + if self.model_type is not None: + body["model_type"] = self.model_type + if self.name is not None: + body["name"] = self.name + if self.source_run_id is not None: + body["source_run_id"] = self.source_run_id + if self.status is not None: + body["status"] = self.status + if self.status_message is not None: + body["status_message"] = self.status_message + if self.tags: + body["tags"] = self.tags + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LoggedModelInfo: + """Deserializes the LoggedModelInfo from a dictionary.""" + return cls( + artifact_uri=d.get("artifact_uri", None), + creation_timestamp_ms=d.get("creation_timestamp_ms", None), + creator_id=d.get("creator_id", None), + experiment_id=d.get("experiment_id", None), + last_updated_timestamp_ms=d.get("last_updated_timestamp_ms", None), + model_id=d.get("model_id", None), + model_type=d.get("model_type", None), + name=d.get("name", None), + source_run_id=d.get("source_run_id", None), + status=_enum(d, "status", LoggedModelStatus), + status_message=d.get("status_message", None), + tags=_repeated_dict(d, "tags", LoggedModelTag), + ) + + +@dataclass +class LoggedModelParameter: + """Parameter associated with a LoggedModel.""" + + key: Optional[str] = None + """The key identifying this param.""" + + value: Optional[str] = None + """The value of this param.""" + + def as_dict(self) -> dict: + """Serializes the LoggedModelParameter into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.key is not None: + body["key"] = self.key + if self.value is not None: + body["value"] = self.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes 
the LoggedModelParameter into a shallow dictionary of its immediate attributes.""" + body = {} + if self.key is not None: + body["key"] = self.key + if self.value is not None: + body["value"] = self.value + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LoggedModelParameter: + """Deserializes the LoggedModelParameter from a dictionary.""" + return cls(key=d.get("key", None), value=d.get("value", None)) + + +class LoggedModelStatus(Enum): + """A LoggedModelStatus enum value represents the status of a logged model.""" + + LOGGED_MODEL_PENDING = "LOGGED_MODEL_PENDING" + LOGGED_MODEL_READY = "LOGGED_MODEL_READY" + LOGGED_MODEL_UPLOAD_FAILED = "LOGGED_MODEL_UPLOAD_FAILED" + + +@dataclass +class LoggedModelTag: + """Tag for a LoggedModel.""" + + key: Optional[str] = None + """The tag key.""" + + value: Optional[str] = None + """The tag value.""" + + def as_dict(self) -> dict: + """Serializes the LoggedModelTag into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.key is not None: + body["key"] = self.key + if self.value is not None: + body["value"] = self.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LoggedModelTag into a shallow dictionary of its immediate attributes.""" + body = {} + if self.key is not None: + body["key"] = self.key + if self.value is not None: + body["value"] = self.value + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LoggedModelTag: + """Deserializes the LoggedModelTag from a dictionary.""" + return cls(key=d.get("key", None), value=d.get("value", None)) + + +@dataclass +class Metric: + """Metric associated with a run, represented as a key-value pair.""" + + dataset_digest: Optional[str] = None + """The dataset digest of the dataset associated with the metric, e.g. an md5 hash of the dataset + that uniquely identifies it within datasets of the same name.""" + + dataset_name: Optional[str] = None + """The name of the dataset associated with the metric. E.g. 
“my.uc.table@2” + “nyc-taxi-dataset”, “fantastic-elk-3”""" + + key: Optional[str] = None + """The key identifying the metric.""" + + model_id: Optional[str] = None + """The ID of the logged model or registered model version associated with the metric, if + applicable.""" run_id: Optional[str] = None """The ID of the run containing the metric.""" @@ -3523,6 +4141,40 @@ def from_dict(cls, d: Dict[str, Any]) -> ModelInput: return cls(model_id=d.get("model_id", None)) +@dataclass +class ModelOutput: + """Represents a LoggedModel output of a Run.""" + + model_id: str + """The unique identifier of the model.""" + + step: int + """The step at which the model was produced.""" + + def as_dict(self) -> dict: + """Serializes the ModelOutput into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.model_id is not None: + body["model_id"] = self.model_id + if self.step is not None: + body["step"] = self.step + return body + + def as_shallow_dict(self) -> dict: + """Serializes the ModelOutput into a shallow dictionary of its immediate attributes.""" + body = {} + if self.model_id is not None: + body["model_id"] = self.model_id + if self.step is not None: + body["step"] = self.step + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> ModelOutput: + """Deserializes the ModelOutput from a dictionary.""" + return cls(model_id=d.get("model_id", None), step=d.get("step", None)) + + @dataclass class ModelTag: key: Optional[str] = None @@ -4903,49 +5555,246 @@ def as_shallow_dict(self) -> dict: return body @classmethod - def from_dict(cls, d: Dict[str, Any]) -> RunTag: - """Deserializes the RunTag from a dictionary.""" - return cls(key=d.get("key", None), value=d.get("value", None)) + def from_dict(cls, d: Dict[str, Any]) -> RunTag: + """Deserializes the RunTag from a dictionary.""" + return cls(key=d.get("key", None), value=d.get("value", None)) + + +@dataclass +class SearchExperiments: + filter: Optional[str] = None + """String representing a SQL filter condition (e.g. "name ILIKE 'my-experiment%'")""" + + max_results: Optional[int] = None + """Maximum number of experiments desired. Max threshold is 3000.""" + + order_by: Optional[List[str]] = None + """List of columns for ordering search results, which can include experiment name and last updated + timestamp with an optional "DESC" or "ASC" annotation, where "ASC" is the default. Tiebreaks are + done by experiment id DESC.""" + + page_token: Optional[str] = None + """Token indicating the page of experiments to fetch""" + + view_type: Optional[ViewType] = None + """Qualifier for type of experiments to be returned. 
If unspecified, return only active + experiments.""" + + def as_dict(self) -> dict: + """Serializes the SearchExperiments into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.filter is not None: + body["filter"] = self.filter + if self.max_results is not None: + body["max_results"] = self.max_results + if self.order_by: + body["order_by"] = [v for v in self.order_by] + if self.page_token is not None: + body["page_token"] = self.page_token + if self.view_type is not None: + body["view_type"] = self.view_type.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes the SearchExperiments into a shallow dictionary of its immediate attributes.""" + body = {} + if self.filter is not None: + body["filter"] = self.filter + if self.max_results is not None: + body["max_results"] = self.max_results + if self.order_by: + body["order_by"] = self.order_by + if self.page_token is not None: + body["page_token"] = self.page_token + if self.view_type is not None: + body["view_type"] = self.view_type + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> SearchExperiments: + """Deserializes the SearchExperiments from a dictionary.""" + return cls( + filter=d.get("filter", None), + max_results=d.get("max_results", None), + order_by=d.get("order_by", None), + page_token=d.get("page_token", None), + view_type=_enum(d, "view_type", ViewType), + ) + + +@dataclass +class SearchExperimentsResponse: + experiments: Optional[List[Experiment]] = None + """Experiments that match the search criteria""" + + next_page_token: Optional[str] = None + """Token that can be used to retrieve the next page of experiments. An empty token means that no + more experiments are available for retrieval.""" + + def as_dict(self) -> dict: + """Serializes the SearchExperimentsResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.experiments: + body["experiments"] = [v.as_dict() for v in self.experiments] + if self.next_page_token is not None: + body["next_page_token"] = self.next_page_token + return body + + def as_shallow_dict(self) -> dict: + """Serializes the SearchExperimentsResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.experiments: + body["experiments"] = self.experiments + if self.next_page_token is not None: + body["next_page_token"] = self.next_page_token + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> SearchExperimentsResponse: + """Deserializes the SearchExperimentsResponse from a dictionary.""" + return cls( + experiments=_repeated_dict(d, "experiments", Experiment), next_page_token=d.get("next_page_token", None) + ) + + +@dataclass +class SearchLoggedModelsDataset: + dataset_name: str + """The name of the dataset.""" + + dataset_digest: Optional[str] = None + """The digest of the dataset.""" + + def as_dict(self) -> dict: + """Serializes the SearchLoggedModelsDataset into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.dataset_digest is not None: + body["dataset_digest"] = self.dataset_digest + if self.dataset_name is not None: + body["dataset_name"] = self.dataset_name + return body + + def as_shallow_dict(self) -> dict: + """Serializes the SearchLoggedModelsDataset into a shallow dictionary of its immediate attributes.""" + body = {} + if self.dataset_digest is not None: + body["dataset_digest"] = self.dataset_digest + if self.dataset_name is not None: + body["dataset_name"] = self.dataset_name + return body + + 
@classmethod + def from_dict(cls, d: Dict[str, Any]) -> SearchLoggedModelsDataset: + """Deserializes the SearchLoggedModelsDataset from a dictionary.""" + return cls(dataset_digest=d.get("dataset_digest", None), dataset_name=d.get("dataset_name", None)) + + +@dataclass +class SearchLoggedModelsOrderBy: + field_name: str + """The name of the field to order by, e.g. "metrics.accuracy".""" + + ascending: Optional[bool] = None + """Whether the search results order is ascending or not.""" + + dataset_digest: Optional[str] = None + """If ``field_name`` refers to a metric, this field specifies the digest of the dataset associated + with the metric. Only metrics associated with the specified dataset name and digest will be + considered for ordering. This field may only be set if ``dataset_name`` is also set.""" + + dataset_name: Optional[str] = None + """If ``field_name`` refers to a metric, this field specifies the name of the dataset associated + with the metric. Only metrics associated with the specified dataset name will be considered for + ordering. This field may only be set if ``field_name`` refers to a metric.""" + + def as_dict(self) -> dict: + """Serializes the SearchLoggedModelsOrderBy into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.ascending is not None: + body["ascending"] = self.ascending + if self.dataset_digest is not None: + body["dataset_digest"] = self.dataset_digest + if self.dataset_name is not None: + body["dataset_name"] = self.dataset_name + if self.field_name is not None: + body["field_name"] = self.field_name + return body + + def as_shallow_dict(self) -> dict: + """Serializes the SearchLoggedModelsOrderBy into a shallow dictionary of its immediate attributes.""" + body = {} + if self.ascending is not None: + body["ascending"] = self.ascending + if self.dataset_digest is not None: + body["dataset_digest"] = self.dataset_digest + if self.dataset_name is not None: + body["dataset_name"] = self.dataset_name + if self.field_name is not None: + body["field_name"] = self.field_name + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> SearchLoggedModelsOrderBy: + """Deserializes the SearchLoggedModelsOrderBy from a dictionary.""" + return cls( + ascending=d.get("ascending", None), + dataset_digest=d.get("dataset_digest", None), + dataset_name=d.get("dataset_name", None), + field_name=d.get("field_name", None), + ) @dataclass -class SearchExperiments: +class SearchLoggedModelsRequest: + datasets: Optional[List[SearchLoggedModelsDataset]] = None + """List of datasets on which to apply the metrics filter clauses. For example, a filter with + `metrics.accuracy > 0.9` and dataset info with name "test_dataset" means we will return all + logged models with accuracy > 0.9 on the test_dataset. Metric values from ANY dataset matching + the criteria are considered. If no datasets are specified, then metrics across all datasets are + considered in the filter.""" + + experiment_ids: Optional[List[str]] = None + """The IDs of the experiments in which to search for logged models.""" + filter: Optional[str] = None - """String representing a SQL filter condition (e.g. "name ILIKE 'my-experiment%'")""" + """A filter expression over logged model info and data that allows returning a subset of logged + models. The syntax is a subset of SQL that supports AND'ing together binary operations. + + Example: ``params.alpha < 0.3 AND metrics.accuracy > 0.9``.""" max_results: Optional[int] = None - """Maximum number of experiments desired. 
Max threshold is 3000.""" + """The maximum number of Logged Models to return. The maximum limit is 50.""" - order_by: Optional[List[str]] = None - """List of columns for ordering search results, which can include experiment name and last updated - timestamp with an optional "DESC" or "ASC" annotation, where "ASC" is the default. Tiebreaks are - done by experiment id DESC.""" + order_by: Optional[List[SearchLoggedModelsOrderBy]] = None + """The list of columns for ordering the results, with additional fields for sorting criteria.""" page_token: Optional[str] = None - """Token indicating the page of experiments to fetch""" - - view_type: Optional[ViewType] = None - """Qualifier for type of experiments to be returned. If unspecified, return only active - experiments.""" + """The token indicating the page of logged models to fetch.""" def as_dict(self) -> dict: - """Serializes the SearchExperiments into a dictionary suitable for use as a JSON request body.""" + """Serializes the SearchLoggedModelsRequest into a dictionary suitable for use as a JSON request body.""" body = {} + if self.datasets: + body["datasets"] = [v.as_dict() for v in self.datasets] + if self.experiment_ids: + body["experiment_ids"] = [v for v in self.experiment_ids] if self.filter is not None: body["filter"] = self.filter if self.max_results is not None: body["max_results"] = self.max_results if self.order_by: - body["order_by"] = [v for v in self.order_by] + body["order_by"] = [v.as_dict() for v in self.order_by] if self.page_token is not None: body["page_token"] = self.page_token - if self.view_type is not None: - body["view_type"] = self.view_type.value return body def as_shallow_dict(self) -> dict: - """Serializes the SearchExperiments into a shallow dictionary of its immediate attributes.""" + """Serializes the SearchLoggedModelsRequest into a shallow dictionary of its immediate attributes.""" body = {} + if self.datasets: + body["datasets"] = self.datasets + if self.experiment_ids: + body["experiment_ids"] = self.experiment_ids if self.filter is not None: body["filter"] = self.filter if self.max_results is not None: @@ -4954,55 +5803,51 @@ def as_shallow_dict(self) -> dict: body["order_by"] = self.order_by if self.page_token is not None: body["page_token"] = self.page_token - if self.view_type is not None: - body["view_type"] = self.view_type return body @classmethod - def from_dict(cls, d: Dict[str, Any]) -> SearchExperiments: - """Deserializes the SearchExperiments from a dictionary.""" + def from_dict(cls, d: Dict[str, Any]) -> SearchLoggedModelsRequest: + """Deserializes the SearchLoggedModelsRequest from a dictionary.""" return cls( + datasets=_repeated_dict(d, "datasets", SearchLoggedModelsDataset), + experiment_ids=d.get("experiment_ids", None), filter=d.get("filter", None), max_results=d.get("max_results", None), - order_by=d.get("order_by", None), + order_by=_repeated_dict(d, "order_by", SearchLoggedModelsOrderBy), page_token=d.get("page_token", None), - view_type=_enum(d, "view_type", ViewType), ) @dataclass -class SearchExperimentsResponse: - experiments: Optional[List[Experiment]] = None - """Experiments that match the search criteria""" +class SearchLoggedModelsResponse: + models: Optional[List[LoggedModel]] = None + """Logged models that match the search criteria.""" next_page_token: Optional[str] = None - """Token that can be used to retrieve the next page of experiments. 
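The ordering clauses above carry their own validity rules: `dataset_digest` may only accompany `dataset_name`, and dataset scoping is only allowed when `field_name` names a metric. A small sketch under those rules; the digest value is a placeholder:

```python
from databricks.sdk.service.ml import (SearchLoggedModelsDataset,
                                       SearchLoggedModelsOrderBy)

order_by = SearchLoggedModelsOrderBy(
    field_name="metrics.accuracy",  # a metric, so dataset scoping is permitted
    ascending=False,
    dataset_name="test_dataset",
    dataset_digest="d1a2b3",        # placeholder digest; requires dataset_name
)

# The companion dataset clause scopes which metric values the filter may match.
dataset = SearchLoggedModelsDataset(dataset_name="test_dataset",
                                    dataset_digest="d1a2b3")
```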
An empty token means that no - more experiments are available for retrieval.""" + """The token that can be used to retrieve the next page of logged models.""" def as_dict(self) -> dict: - """Serializes the SearchExperimentsResponse into a dictionary suitable for use as a JSON request body.""" + """Serializes the SearchLoggedModelsResponse into a dictionary suitable for use as a JSON request body.""" body = {} - if self.experiments: - body["experiments"] = [v.as_dict() for v in self.experiments] + if self.models: + body["models"] = [v.as_dict() for v in self.models] if self.next_page_token is not None: body["next_page_token"] = self.next_page_token return body def as_shallow_dict(self) -> dict: - """Serializes the SearchExperimentsResponse into a shallow dictionary of its immediate attributes.""" + """Serializes the SearchLoggedModelsResponse into a shallow dictionary of its immediate attributes.""" body = {} - if self.experiments: - body["experiments"] = self.experiments + if self.models: + body["models"] = self.models if self.next_page_token is not None: body["next_page_token"] = self.next_page_token return body @classmethod - def from_dict(cls, d: Dict[str, Any]) -> SearchExperimentsResponse: - """Deserializes the SearchExperimentsResponse from a dictionary.""" - return cls( - experiments=_repeated_dict(d, "experiments", Experiment), next_page_token=d.get("next_page_token", None) - ) + def from_dict(cls, d: Dict[str, Any]) -> SearchLoggedModelsResponse: + """Deserializes the SearchLoggedModelsResponse from a dictionary.""" + return cls(models=_repeated_dict(d, "models", LoggedModel), next_page_token=d.get("next_page_token", None)) @dataclass @@ -5244,6 +6089,56 @@ def from_dict(cls, d: Dict[str, Any]) -> SetExperimentTagResponse: return cls() +@dataclass +class SetLoggedModelTagsRequest: + model_id: Optional[str] = None + """The ID of the logged model to set the tags on.""" + + tags: Optional[List[LoggedModelTag]] = None + """The tags to set on the logged model.""" + + def as_dict(self) -> dict: + """Serializes the SetLoggedModelTagsRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.model_id is not None: + body["model_id"] = self.model_id + if self.tags: + body["tags"] = [v.as_dict() for v in self.tags] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the SetLoggedModelTagsRequest into a shallow dictionary of its immediate attributes.""" + body = {} + if self.model_id is not None: + body["model_id"] = self.model_id + if self.tags: + body["tags"] = self.tags + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> SetLoggedModelTagsRequest: + """Deserializes the SetLoggedModelTagsRequest from a dictionary.""" + return cls(model_id=d.get("model_id", None), tags=_repeated_dict(d, "tags", LoggedModelTag)) + + +@dataclass +class SetLoggedModelTagsResponse: + def as_dict(self) -> dict: + """Serializes the SetLoggedModelTagsResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + def as_shallow_dict(self) -> dict: + """Serializes the SetLoggedModelTagsResponse into a shallow dictionary of its immediate attributes.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> SetLoggedModelTagsResponse: + """Deserializes the SetLoggedModelTagsResponse from a dictionary.""" + return cls() + + @dataclass class SetModelTagRequest: name: str @@ -6208,6 +7103,54 @@ def create_experiment( res = self._api.do("POST", "/api/2.0/mlflow/experiments/create", 
body=body, headers=headers)
         return CreateExperimentResponse.from_dict(res)
 
+    def create_logged_model(
+        self,
+        experiment_id: str,
+        *,
+        model_type: Optional[str] = None,
+        name: Optional[str] = None,
+        params: Optional[List[LoggedModelParameter]] = None,
+        source_run_id: Optional[str] = None,
+        tags: Optional[List[LoggedModelTag]] = None,
+    ) -> CreateLoggedModelResponse:
+        """Create a logged model.
+
+        :param experiment_id: str
+          The ID of the experiment that owns the model.
+        :param model_type: str (optional)
+          The type of the model, such as ``"Agent"``, ``"Classifier"``, ``"LLM"``.
+        :param name: str (optional)
+          The name of the model (optional). If not specified, one will be generated.
+        :param params: List[:class:`LoggedModelParameter`] (optional)
+          Parameters attached to the model.
+        :param source_run_id: str (optional)
+          The ID of the run that created the model.
+        :param tags: List[:class:`LoggedModelTag`] (optional)
+          Tags attached to the model.
+
+        :returns: :class:`CreateLoggedModelResponse`
+        """
+        body = {}
+        if experiment_id is not None:
+            body["experiment_id"] = experiment_id
+        if model_type is not None:
+            body["model_type"] = model_type
+        if name is not None:
+            body["name"] = name
+        if params is not None:
+            body["params"] = [v.as_dict() for v in params]
+        if source_run_id is not None:
+            body["source_run_id"] = source_run_id
+        if tags is not None:
+            body["tags"] = [v.as_dict() for v in tags]
+        headers = {
+            "Accept": "application/json",
+            "Content-Type": "application/json",
+        }
+
+        res = self._api.do("POST", "/api/2.0/mlflow/logged-models", body=body, headers=headers)
+        return CreateLoggedModelResponse.from_dict(res)
+
     def create_run(
         self,
         *,
@@ -6277,6 +7220,38 @@ def delete_experiment(self, experiment_id: str):
 
         self._api.do("POST", "/api/2.0/mlflow/experiments/delete", body=body, headers=headers)
 
+    def delete_logged_model(self, model_id: str):
+        """Delete a logged model.
+
+        :param model_id: str
+          The ID of the logged model to delete.
+
+
+        """
+
+        headers = {
+            "Accept": "application/json",
+        }
+
+        self._api.do("DELETE", f"/api/2.0/mlflow/logged-models/{model_id}", headers=headers)
+
+    def delete_logged_model_tag(self, model_id: str, tag_key: str):
+        """Delete a tag on a logged model.
+
+        :param model_id: str
+          The ID of the logged model to delete the tag from.
+        :param tag_key: str
+          The tag key.
+
+
+        """
+
+        headers = {
+            "Accept": "application/json",
+        }
+
+        self._api.do("DELETE", f"/api/2.0/mlflow/logged-models/{model_id}/tags/{tag_key}", headers=headers)
+
     def delete_run(self, run_id: str):
         """Delete a run.
 
@@ -6357,6 +7332,28 @@ def delete_tag(self, run_id: str, key: str):
 
         self._api.do("POST", "/api/2.0/mlflow/runs/delete-tag", body=body, headers=headers)
 
+    def finalize_logged_model(self, model_id: str, status: LoggedModelStatus) -> FinalizeLoggedModelResponse:
+        """Finalize a logged model.
+
+        :param model_id: str
+          The ID of the logged model to finalize.
+        :param status: :class:`LoggedModelStatus`
+          Whether or not the model is ready for use. ``"LOGGED_MODEL_UPLOAD_FAILED"`` indicates that something
+          went wrong when logging the model weights / agent code.
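Taken together, `create_logged_model()` and `finalize_logged_model()` form the write path of the logged-model lifecycle. A minimal sketch, assuming a configured `WorkspaceClient`, a placeholder experiment ID, and that `LoggedModelStatus` exposes a `LOGGED_MODEL_READY` member alongside the `LOGGED_MODEL_UPLOAD_FAILED` value mentioned above:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.ml import LoggedModelStatus, LoggedModelTag

w = WorkspaceClient()

created = w.experiments.create_logged_model(
    experiment_id="1234567890",  # placeholder experiment ID
    model_type="Agent",
    tags=[LoggedModelTag(key="team", value="ml-platform")],
)
# Assumes the response nests the model as LoggedModel -> LoggedModelInfo.
model_id = created.model.info.model_id

# ...upload model weights / agent code out of band, then mark the model ready:
w.experiments.finalize_logged_model(model_id, LoggedModelStatus.LOGGED_MODEL_READY)
```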
+
+        :returns: :class:`FinalizeLoggedModelResponse`
+        """
+        body = {}
+        if status is not None:
+            body["status"] = status.value
+        headers = {
+            "Accept": "application/json",
+            "Content-Type": "application/json",
+        }
+
+        res = self._api.do("PATCH", f"/api/2.0/mlflow/logged-models/{model_id}", body=body, headers=headers)
+        return FinalizeLoggedModelResponse.from_dict(res)
+
     def get_by_name(self, experiment_name: str) -> GetExperimentByNameResponse:
         """Get an experiment by name.
 
@@ -6490,6 +7487,22 @@ def get_history(
                 return
             query["page_token"] = json["next_page_token"]
 
+    def get_logged_model(self, model_id: str) -> GetLoggedModelResponse:
+        """Get a logged model.
+
+        :param model_id: str
+          The ID of the logged model to retrieve.
+
+        :returns: :class:`GetLoggedModelResponse`
+        """
+
+        headers = {
+            "Accept": "application/json",
+        }
+
+        res = self._api.do("GET", f"/api/2.0/mlflow/logged-models/{model_id}", headers=headers)
+        return GetLoggedModelResponse.from_dict(res)
+
     def get_permission_levels(self, experiment_id: str) -> GetExperimentPermissionLevelsResponse:
         """Get experiment permission levels.
 
@@ -6653,6 +7666,41 @@ def list_experiments(
                 return
             query["page_token"] = json["next_page_token"]
 
+    def list_logged_model_artifacts(
+        self, model_id: str, *, artifact_directory_path: Optional[str] = None, page_token: Optional[str] = None
+    ) -> ListLoggedModelArtifactsResponse:
+        """List artifacts for a logged model.
+
+        List artifacts for a logged model. If the optional ``artifact_directory_path`` prefix is
+        specified, the response contains only artifacts with that prefix.
+
+        :param model_id: str
+          The ID of the logged model for which to list the artifacts.
+        :param artifact_directory_path: str (optional)
+          Filter artifacts matching this path (a relative path from the root artifact directory).
+        :param page_token: str (optional)
+          Token indicating the page of artifact results to fetch. `page_token` is not supported when listing
+          artifacts in UC Volumes. A maximum of 1000 artifacts will be retrieved for UC Volumes. Please call
+          `/api/2.0/fs/directories{directory_path}` for listing artifacts in UC Volumes, which supports
+          pagination. See [List directory contents | Files API](/api/workspace/files/listdirectorycontents).
+
+        :returns: :class:`ListLoggedModelArtifactsResponse`
+        """
+
+        query = {}
+        if artifact_directory_path is not None:
+            query["artifact_directory_path"] = artifact_directory_path
+        if page_token is not None:
+            query["page_token"] = page_token
+        headers = {
+            "Accept": "application/json",
+        }
+
+        res = self._api.do(
+            "GET", f"/api/2.0/mlflow/logged-models/{model_id}/artifacts/directories", query=query, headers=headers
+        )
+        return ListLoggedModelArtifactsResponse.from_dict(res)
+
     def log_batch(
         self,
         *,
@@ -6766,6 +7814,30 @@ def log_inputs(
 
         self._api.do("POST", "/api/2.0/mlflow/runs/log-inputs", body=body, headers=headers)
 
+    def log_logged_model_params(self, model_id: str, *, params: Optional[List[LoggedModelParameter]] = None):
+        """Log params for a logged model.
+
+        Logs params for a logged model. A param is a key-value pair (string key, string value). Examples
+        include hyperparameters used for ML model training. A param can be logged only once for a logged
+        model, and attempting to overwrite an existing param with a different value will result in an error.
+
+        :param model_id: str
+          The ID of the logged model to log params for.
+        :param params: List[:class:`LoggedModelParameter`] (optional)
+          Parameters to attach to the model.
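Because `page_token` is only honored outside UC Volumes, callers paginating artifacts loop on `next_page_token` themselves. A sketch, assuming a configured `WorkspaceClient`, a placeholder model ID, and that the response exposes a `files` list of FileInfo-like entries:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

page_token = None
while True:
    resp = w.experiments.list_logged_model_artifacts(
        "m-0123456789",                   # placeholder model ID
        artifact_directory_path="model",  # only artifacts under this prefix
        page_token=page_token,
    )
    for f in resp.files or []:            # assumed FileInfo-like entries
        print(f.path)
    page_token = resp.next_page_token
    if not page_token:
        break
```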
+ + + """ + body = {} + if params is not None: + body["params"] = [v.as_dict() for v in params] + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + self._api.do("POST", f"/api/2.0/mlflow/logged-models/{model_id}/params", body=body, headers=headers) + def log_metric( self, key: str, @@ -6859,6 +7931,32 @@ def log_model(self, *, model_json: Optional[str] = None, run_id: Optional[str] = self._api.do("POST", "/api/2.0/mlflow/runs/log-model", body=body, headers=headers) + def log_outputs(self, run_id: str, *, models: Optional[List[ModelOutput]] = None): + """Log outputs from a run. + + **NOTE**: Experimental: This API may change or be removed in a future release without warning. + + Logs outputs, such as models, from an MLflow Run. + + :param run_id: str + The ID of the Run from which to log outputs. + :param models: List[:class:`ModelOutput`] (optional) + The model outputs from the Run. + + + """ + body = {} + if models is not None: + body["models"] = [v.as_dict() for v in models] + if run_id is not None: + body["run_id"] = run_id + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + self._api.do("POST", "/api/2.0/mlflow/runs/outputs", body=body, headers=headers) + def log_param(self, key: str, value: str, *, run_id: Optional[str] = None, run_uuid: Optional[str] = None): """Log a param for a run. @@ -7028,6 +8126,63 @@ def search_experiments( return body["page_token"] = json["next_page_token"] + def search_logged_models( + self, + *, + datasets: Optional[List[SearchLoggedModelsDataset]] = None, + experiment_ids: Optional[List[str]] = None, + filter: Optional[str] = None, + max_results: Optional[int] = None, + order_by: Optional[List[SearchLoggedModelsOrderBy]] = None, + page_token: Optional[str] = None, + ) -> SearchLoggedModelsResponse: + """Search logged models. + + Search for Logged Models that satisfy specified search criteria. + + :param datasets: List[:class:`SearchLoggedModelsDataset`] (optional) + List of datasets on which to apply the metrics filter clauses. For example, a filter with + `metrics.accuracy > 0.9` and dataset info with name "test_dataset" means we will return all logged + models with accuracy > 0.9 on the test_dataset. Metric values from ANY dataset matching the criteria + are considered. If no datasets are specified, then metrics across all datasets are considered in the + filter. + :param experiment_ids: List[str] (optional) + The IDs of the experiments in which to search for logged models. + :param filter: str (optional) + A filter expression over logged model info and data that allows returning a subset of logged models. + The syntax is a subset of SQL that supports AND'ing together binary operations. + + Example: ``params.alpha < 0.3 AND metrics.accuracy > 0.9``. + :param max_results: int (optional) + The maximum number of Logged Models to return. The maximum limit is 50. + :param order_by: List[:class:`SearchLoggedModelsOrderBy`] (optional) + The list of columns for ordering the results, with additional fields for sorting criteria. + :param page_token: str (optional) + The token indicating the page of logged models to fetch. 
+ + :returns: :class:`SearchLoggedModelsResponse` + """ + body = {} + if datasets is not None: + body["datasets"] = [v.as_dict() for v in datasets] + if experiment_ids is not None: + body["experiment_ids"] = [v for v in experiment_ids] + if filter is not None: + body["filter"] = filter + if max_results is not None: + body["max_results"] = max_results + if order_by is not None: + body["order_by"] = [v.as_dict() for v in order_by] + if page_token is not None: + body["page_token"] = page_token + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do("POST", "/api/2.0/mlflow/logged-models/search", body=body, headers=headers) + return SearchLoggedModelsResponse.from_dict(res) + def search_runs( self, *, @@ -7127,6 +8282,26 @@ def set_experiment_tag(self, experiment_id: str, key: str, value: str): self._api.do("POST", "/api/2.0/mlflow/experiments/set-experiment-tag", body=body, headers=headers) + def set_logged_model_tags(self, model_id: str, *, tags: Optional[List[LoggedModelTag]] = None): + """Set a tag for a logged model. + + :param model_id: str + The ID of the logged model to set the tags on. + :param tags: List[:class:`LoggedModelTag`] (optional) + The tags to set on the logged model. + + + """ + body = {} + if tags is not None: + body["tags"] = [v.as_dict() for v in tags] + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + self._api.do("PATCH", f"/api/2.0/mlflow/logged-models/{model_id}/tags", body=body, headers=headers) + def set_permissions( self, experiment_id: str, *, access_control_list: Optional[List[ExperimentAccessControlRequest]] = None ) -> ExperimentPermissions: diff --git a/databricks/sdk/service/oauth2.py b/databricks/sdk/service/oauth2.py index 53d337ef..030633eb 100755 --- a/databricks/sdk/service/oauth2.py +++ b/databricks/sdk/service/oauth2.py @@ -368,6 +368,13 @@ class FederationPolicy: oidc_policy: Optional[OidcFederationPolicy] = None """Specifies the policy to use for validating OIDC claims in your federated tokens.""" + policy_id: Optional[str] = None + """The ID of the federation policy.""" + + service_principal_id: Optional[int] = None + """The service principal ID that this federation policy applies to. 
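The logged-model tag methods added in this diff pair naturally: `set_logged_model_tags()` upserts and `delete_logged_model_tag()` removes. A minimal sketch with a placeholder model ID, assuming a configured `WorkspaceClient`:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.ml import LoggedModelTag

w = WorkspaceClient()

model_id = "m-0123456789"  # placeholder model ID
w.experiments.set_logged_model_tags(
    model_id,
    tags=[LoggedModelTag(key="stage", value="candidate")],
)
w.experiments.delete_logged_model_tag(model_id, "stage")
```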
Only set for service principal + federation policies.""" + uid: Optional[str] = None """Unique, immutable id of the federation policy.""" @@ -385,6 +392,10 @@ def as_dict(self) -> dict: body["name"] = self.name if self.oidc_policy: body["oidc_policy"] = self.oidc_policy.as_dict() + if self.policy_id is not None: + body["policy_id"] = self.policy_id + if self.service_principal_id is not None: + body["service_principal_id"] = self.service_principal_id if self.uid is not None: body["uid"] = self.uid if self.update_time is not None: @@ -402,6 +413,10 @@ def as_shallow_dict(self) -> dict: body["name"] = self.name if self.oidc_policy: body["oidc_policy"] = self.oidc_policy + if self.policy_id is not None: + body["policy_id"] = self.policy_id + if self.service_principal_id is not None: + body["service_principal_id"] = self.service_principal_id if self.uid is not None: body["uid"] = self.uid if self.update_time is not None: @@ -416,6 +431,8 @@ def from_dict(cls, d: Dict[str, Any]) -> FederationPolicy: description=d.get("description", None), name=d.get("name", None), oidc_policy=_from_dict(d, "oidc_policy", OidcFederationPolicy), + policy_id=d.get("policy_id", None), + service_principal_id=d.get("service_principal_id", None), uid=d.get("uid", None), update_time=d.get("update_time", None), ) diff --git a/databricks/sdk/service/pipelines.py b/databricks/sdk/service/pipelines.py index b5284610..943810a3 100755 --- a/databricks/sdk/service/pipelines.py +++ b/databricks/sdk/service/pipelines.py @@ -89,6 +89,11 @@ class CreatePipeline: restart_window: Optional[RestartWindow] = None """Restart window of this pipeline.""" + root_path: Optional[str] = None + """Root path for this pipeline. This is used as the root directory when editing the pipeline in the + Databricks user interface and it is added to sys.path when executing Python sources during + pipeline execution.""" + run_as: Optional[RunAs] = None """Write-only setting, available only in Create/Update calls. Specifies the user or service principal that the pipeline runs as. If not specified, the pipeline runs as the user who created @@ -159,6 +164,8 @@ def as_dict(self) -> dict: body["photon"] = self.photon if self.restart_window: body["restart_window"] = self.restart_window.as_dict() + if self.root_path is not None: + body["root_path"] = self.root_path if self.run_as: body["run_as"] = self.run_as.as_dict() if self.schema is not None: @@ -218,6 +225,8 @@ def as_shallow_dict(self) -> dict: body["photon"] = self.photon if self.restart_window: body["restart_window"] = self.restart_window + if self.root_path is not None: + body["root_path"] = self.root_path if self.run_as: body["run_as"] = self.run_as if self.schema is not None: @@ -257,6 +266,7 @@ def from_dict(cls, d: Dict[str, Any]) -> CreatePipeline: notifications=_repeated_dict(d, "notifications", Notifications), photon=d.get("photon", None), restart_window=_from_dict(d, "restart_window", RestartWindow), + root_path=d.get("root_path", None), run_as=_from_dict(d, "run_as", RunAs), schema=d.get("schema", None), serverless=d.get("serverless", None), @@ -473,6 +483,11 @@ class EditPipeline: restart_window: Optional[RestartWindow] = None """Restart window of this pipeline.""" + root_path: Optional[str] = None + """Root path for this pipeline. 
This is used as the root directory when editing the pipeline in the + Databricks user interface and it is added to sys.path when executing Python sources during + pipeline execution.""" + run_as: Optional[RunAs] = None """Write-only setting, available only in Create/Update calls. Specifies the user or service principal that the pipeline runs as. If not specified, the pipeline runs as the user who created @@ -545,6 +560,8 @@ def as_dict(self) -> dict: body["pipeline_id"] = self.pipeline_id if self.restart_window: body["restart_window"] = self.restart_window.as_dict() + if self.root_path is not None: + body["root_path"] = self.root_path if self.run_as: body["run_as"] = self.run_as.as_dict() if self.schema is not None: @@ -606,6 +623,8 @@ def as_shallow_dict(self) -> dict: body["pipeline_id"] = self.pipeline_id if self.restart_window: body["restart_window"] = self.restart_window + if self.root_path is not None: + body["root_path"] = self.root_path if self.run_as: body["run_as"] = self.run_as if self.schema is not None: @@ -646,6 +665,7 @@ def from_dict(cls, d: Dict[str, Any]) -> EditPipeline: photon=d.get("photon", None), pipeline_id=d.get("pipeline_id", None), restart_window=_from_dict(d, "restart_window", RestartWindow), + root_path=d.get("root_path", None), run_as=_from_dict(d, "run_as", RunAs), schema=d.get("schema", None), serverless=d.get("serverless", None), @@ -1103,6 +1123,10 @@ class IngestionPipelineDefinition: objects: Optional[List[IngestionConfig]] = None """Required. Settings specifying tables to replicate and the destination for the replicated tables.""" + source_type: Optional[IngestionSourceType] = None + """The type of the foreign source. The source type will be inferred from the source connection or + ingestion gateway. This field is output only and will be ignored if provided.""" + table_configuration: Optional[TableSpecificConfig] = None """Configuration settings to control the ingestion of tables. 
These settings are applied to all tables in the pipeline.""" @@ -1116,6 +1140,8 @@ def as_dict(self) -> dict: body["ingestion_gateway_id"] = self.ingestion_gateway_id if self.objects: body["objects"] = [v.as_dict() for v in self.objects] + if self.source_type is not None: + body["source_type"] = self.source_type.value if self.table_configuration: body["table_configuration"] = self.table_configuration.as_dict() return body @@ -1129,6 +1155,8 @@ def as_shallow_dict(self) -> dict: body["ingestion_gateway_id"] = self.ingestion_gateway_id if self.objects: body["objects"] = self.objects + if self.source_type is not None: + body["source_type"] = self.source_type if self.table_configuration: body["table_configuration"] = self.table_configuration return body @@ -1140,10 +1168,27 @@ def from_dict(cls, d: Dict[str, Any]) -> IngestionPipelineDefinition: connection_name=d.get("connection_name", None), ingestion_gateway_id=d.get("ingestion_gateway_id", None), objects=_repeated_dict(d, "objects", IngestionConfig), + source_type=_enum(d, "source_type", IngestionSourceType), table_configuration=_from_dict(d, "table_configuration", TableSpecificConfig), ) +class IngestionSourceType(Enum): + + DYNAMICS365 = "DYNAMICS365" + GA4_RAW_DATA = "GA4_RAW_DATA" + MANAGED_POSTGRESQL = "MANAGED_POSTGRESQL" + MYSQL = "MYSQL" + NETSUITE = "NETSUITE" + ORACLE = "ORACLE" + POSTGRESQL = "POSTGRESQL" + SALESFORCE = "SALESFORCE" + SERVICENOW = "SERVICENOW" + SHAREPOINT = "SHAREPOINT" + SQLSERVER = "SQLSERVER" + WORKDAY_RAAS = "WORKDAY_RAAS" + + @dataclass class ListPipelineEventsResponse: events: Optional[List[PipelineEvent]] = None @@ -1508,6 +1553,31 @@ def from_dict(cls, d: Dict[str, Any]) -> Origin: ) +@dataclass +class PathPattern: + include: Optional[str] = None + """The source code to include for pipelines""" + + def as_dict(self) -> dict: + """Serializes the PathPattern into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.include is not None: + body["include"] = self.include + return body + + def as_shallow_dict(self) -> dict: + """Serializes the PathPattern into a shallow dictionary of its immediate attributes.""" + body = {} + if self.include is not None: + body["include"] = self.include + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> PathPattern: + """Deserializes the PathPattern from a dictionary.""" + return cls(include=d.get("include", None)) + + @dataclass class PipelineAccessControlRequest: group_name: Optional[str] = None @@ -2018,6 +2088,10 @@ class PipelineLibrary: file: Optional[FileLibrary] = None """The path to a file that defines a pipeline and is stored in the Databricks Repos.""" + glob: Optional[PathPattern] = None + """The unified field to include source codes. Each entry can be a notebook path, a file path, or a + folder path that ends `/**`. This field cannot be used together with `notebook` or `file`.""" + jar: Optional[str] = None """URI of the jar to be installed. 
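A sketch of the new `glob` library form; the workspace path is a placeholder, and per the docstring a `glob` entry cannot be combined with `notebook` or `file`:

```python
from databricks.sdk.service.pipelines import PathPattern, PipelineLibrary

# A folder pattern must end in /**; a notebook or file path is also accepted.
lib = PipelineLibrary(glob=PathPattern(include="/Workspace/Repos/me/etl/src/**"))

# Serialization follows the as_dict() code in this diff.
assert lib.as_dict() == {"glob": {"include": "/Workspace/Repos/me/etl/src/**"}}
```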
Currently only DBFS is supported.""" @@ -2035,6 +2109,8 @@ def as_dict(self) -> dict: body = {} if self.file: body["file"] = self.file.as_dict() + if self.glob: + body["glob"] = self.glob.as_dict() if self.jar is not None: body["jar"] = self.jar if self.maven: @@ -2050,6 +2126,8 @@ def as_shallow_dict(self) -> dict: body = {} if self.file: body["file"] = self.file + if self.glob: + body["glob"] = self.glob if self.jar is not None: body["jar"] = self.jar if self.maven: @@ -2065,6 +2143,7 @@ def from_dict(cls, d: Dict[str, Any]) -> PipelineLibrary: """Deserializes the PipelineLibrary from a dictionary.""" return cls( file=_from_dict(d, "file", FileLibrary), + glob=_from_dict(d, "glob", PathPattern), jar=d.get("jar", None), maven=_from_dict(d, "maven", compute.MavenLibrary), notebook=_from_dict(d, "notebook", NotebookLibrary), @@ -2293,6 +2372,11 @@ class PipelineSpec: restart_window: Optional[RestartWindow] = None """Restart window of this pipeline.""" + root_path: Optional[str] = None + """Root path for this pipeline. This is used as the root directory when editing the pipeline in the + Databricks user interface and it is added to sys.path when executing Python sources during + pipeline execution.""" + schema: Optional[str] = None """The default schema (database) where tables are read from or published to.""" @@ -2351,6 +2435,8 @@ def as_dict(self) -> dict: body["photon"] = self.photon if self.restart_window: body["restart_window"] = self.restart_window.as_dict() + if self.root_path is not None: + body["root_path"] = self.root_path if self.schema is not None: body["schema"] = self.schema if self.serverless is not None: @@ -2404,6 +2490,8 @@ def as_shallow_dict(self) -> dict: body["photon"] = self.photon if self.restart_window: body["restart_window"] = self.restart_window + if self.root_path is not None: + body["root_path"] = self.root_path if self.schema is not None: body["schema"] = self.schema if self.serverless is not None: @@ -2439,6 +2527,7 @@ def from_dict(cls, d: Dict[str, Any]) -> PipelineSpec: notifications=_repeated_dict(d, "notifications", Notifications), photon=d.get("photon", None), restart_window=_from_dict(d, "restart_window", RestartWindow), + root_path=d.get("root_path", None), schema=d.get("schema", None), serverless=d.get("serverless", None), storage=d.get("storage", None), @@ -2996,6 +3085,7 @@ class StartUpdateCause(Enum): """What triggered this update.""" API_CALL = "API_CALL" + INFRASTRUCTURE_MAINTENANCE = "INFRASTRUCTURE_MAINTENANCE" JOB_TASK = "JOB_TASK" RETRY_ON_FAILURE = "RETRY_ON_FAILURE" SCHEMA_CHANGE = "SCHEMA_CHANGE" @@ -3321,6 +3411,7 @@ class UpdateInfoCause(Enum): """What triggered this update.""" API_CALL = "API_CALL" + INFRASTRUCTURE_MAINTENANCE = "INFRASTRUCTURE_MAINTENANCE" JOB_TASK = "JOB_TASK" RETRY_ON_FAILURE = "RETRY_ON_FAILURE" SCHEMA_CHANGE = "SCHEMA_CHANGE" @@ -3472,6 +3563,7 @@ def create( notifications: Optional[List[Notifications]] = None, photon: Optional[bool] = None, restart_window: Optional[RestartWindow] = None, + root_path: Optional[str] = None, run_as: Optional[RunAs] = None, schema: Optional[str] = None, serverless: Optional[bool] = None, @@ -3528,6 +3620,10 @@ def create( Whether Photon is enabled for this pipeline. :param restart_window: :class:`RestartWindow` (optional) Restart window of this pipeline. + :param root_path: str (optional) + Root path for this pipeline. 
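A sketch of creating a pipeline that sets the new `root_path` together with a `glob` library; names and paths are placeholders, and settings a real pipeline would need (catalog, target schema, and so on) are elided:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.pipelines import PathPattern, PipelineLibrary

w = WorkspaceClient()

created = w.pipelines.create(
    name="my-pipeline",
    root_path="/Workspace/Repos/me/etl",  # added to sys.path for Python sources
    libraries=[
        PipelineLibrary(glob=PathPattern(include="/Workspace/Repos/me/etl/src/**")),
    ],
    serverless=True,
)
print(created.pipeline_id)
```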
This is used as the root directory when editing the pipeline in the + Databricks user interface and it is added to sys.path when executing Python sources during pipeline + execution. :param run_as: :class:`RunAs` (optional) Write-only setting, available only in Create/Update calls. Specifies the user or service principal that the pipeline runs as. If not specified, the pipeline runs as the user who created the pipeline. @@ -3592,6 +3688,8 @@ def create( body["photon"] = photon if restart_window is not None: body["restart_window"] = restart_window.as_dict() + if root_path is not None: + body["root_path"] = root_path if run_as is not None: body["run_as"] = run_as.as_dict() if schema is not None: @@ -3980,6 +4078,7 @@ def update( notifications: Optional[List[Notifications]] = None, photon: Optional[bool] = None, restart_window: Optional[RestartWindow] = None, + root_path: Optional[str] = None, run_as: Optional[RunAs] = None, schema: Optional[str] = None, serverless: Optional[bool] = None, @@ -4039,6 +4138,10 @@ def update( Whether Photon is enabled for this pipeline. :param restart_window: :class:`RestartWindow` (optional) Restart window of this pipeline. + :param root_path: str (optional) + Root path for this pipeline. This is used as the root directory when editing the pipeline in the + Databricks user interface and it is added to sys.path when executing Python sources during pipeline + execution. :param run_as: :class:`RunAs` (optional) Write-only setting, available only in Create/Update calls. Specifies the user or service principal that the pipeline runs as. If not specified, the pipeline runs as the user who created the pipeline. @@ -4103,6 +4206,8 @@ def update( body["photon"] = photon if restart_window is not None: body["restart_window"] = restart_window.as_dict() + if root_path is not None: + body["root_path"] = root_path if run_as is not None: body["run_as"] = run_as.as_dict() if schema is not None: diff --git a/databricks/sdk/service/serving.py b/databricks/sdk/service/serving.py index cd8a4eb1..6feb1fa0 100755 --- a/databricks/sdk/service/serving.py +++ b/databricks/sdk/service/serving.py @@ -842,6 +842,66 @@ def from_dict(cls, d: Dict[str, Any]) -> CohereConfig: ) +@dataclass +class CreatePtEndpointRequest: + name: str + """The name of the serving endpoint. This field is required and must be unique across a Databricks + workspace. 
An endpoint name can consist of alphanumeric characters, dashes, and underscores.""" + + config: PtEndpointCoreConfig + """The core config of the serving endpoint.""" + + ai_gateway: Optional[AiGatewayConfig] = None + """The AI Gateway configuration for the serving endpoint.""" + + budget_policy_id: Optional[str] = None + """The budget policy associated with the endpoint.""" + + tags: Optional[List[EndpointTag]] = None + """Tags to be attached to the serving endpoint and automatically propagated to billing logs.""" + + def as_dict(self) -> dict: + """Serializes the CreatePtEndpointRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.ai_gateway: + body["ai_gateway"] = self.ai_gateway.as_dict() + if self.budget_policy_id is not None: + body["budget_policy_id"] = self.budget_policy_id + if self.config: + body["config"] = self.config.as_dict() + if self.name is not None: + body["name"] = self.name + if self.tags: + body["tags"] = [v.as_dict() for v in self.tags] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the CreatePtEndpointRequest into a shallow dictionary of its immediate attributes.""" + body = {} + if self.ai_gateway: + body["ai_gateway"] = self.ai_gateway + if self.budget_policy_id is not None: + body["budget_policy_id"] = self.budget_policy_id + if self.config: + body["config"] = self.config + if self.name is not None: + body["name"] = self.name + if self.tags: + body["tags"] = self.tags + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> CreatePtEndpointRequest: + """Deserializes the CreatePtEndpointRequest from a dictionary.""" + return cls( + ai_gateway=_from_dict(d, "ai_gateway", AiGatewayConfig), + budget_policy_id=d.get("budget_policy_id", None), + config=_from_dict(d, "config", PtEndpointCoreConfig), + name=d.get("name", None), + tags=_repeated_dict(d, "tags", EndpointTag), + ) + + @dataclass class CreateServingEndpoint: name: str @@ -2292,6 +2352,96 @@ def from_dict(cls, d: Dict[str, Any]) -> PayloadTable: return cls(name=d.get("name", None), status=d.get("status", None), status_message=d.get("status_message", None)) +@dataclass +class PtEndpointCoreConfig: + served_entities: Optional[List[PtServedModel]] = None + """The list of served entities under the serving endpoint config.""" + + traffic_config: Optional[TrafficConfig] = None + + def as_dict(self) -> dict: + """Serializes the PtEndpointCoreConfig into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.served_entities: + body["served_entities"] = [v.as_dict() for v in self.served_entities] + if self.traffic_config: + body["traffic_config"] = self.traffic_config.as_dict() + return body + + def as_shallow_dict(self) -> dict: + """Serializes the PtEndpointCoreConfig into a shallow dictionary of its immediate attributes.""" + body = {} + if self.served_entities: + body["served_entities"] = self.served_entities + if self.traffic_config: + body["traffic_config"] = self.traffic_config + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> PtEndpointCoreConfig: + """Deserializes the PtEndpointCoreConfig from a dictionary.""" + return cls( + served_entities=_repeated_dict(d, "served_entities", PtServedModel), + traffic_config=_from_dict(d, "traffic_config", TrafficConfig), + ) + + +@dataclass +class PtServedModel: + entity_name: str + """The name of the entity to be served. 
The entity may be a model in the Databricks Model Registry, + a model in the Unity Catalog (UC), or a function of type FEATURE_SPEC in the UC. If it is a UC + object, the full name of the object should be given in the form of + **catalog_name.schema_name.model_name**.""" + + provisioned_model_units: int + """The number of model units to be provisioned.""" + + entity_version: Optional[str] = None + + name: Optional[str] = None + """The name of a served entity. It must be unique across an endpoint. A served entity name can + consist of alphanumeric characters, dashes, and underscores. If not specified for an external + model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if + not specified for other entities, it defaults to entity_name-entity_version.""" + + def as_dict(self) -> dict: + """Serializes the PtServedModel into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.entity_name is not None: + body["entity_name"] = self.entity_name + if self.entity_version is not None: + body["entity_version"] = self.entity_version + if self.name is not None: + body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units + return body + + def as_shallow_dict(self) -> dict: + """Serializes the PtServedModel into a shallow dictionary of its immediate attributes.""" + body = {} + if self.entity_name is not None: + body["entity_name"] = self.entity_name + if self.entity_version is not None: + body["entity_version"] = self.entity_version + if self.name is not None: + body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> PtServedModel: + """Deserializes the PtServedModel from a dictionary.""" + return cls( + entity_name=d.get("entity_name", None), + entity_version=d.get("entity_version", None), + name=d.get("name", None), + provisioned_model_units=d.get("provisioned_model_units", None), + ) + + @dataclass class PutAiGatewayRequest: fallback_config: Optional[FallbackConfig] = None @@ -2867,6 +3017,9 @@ class ServedEntityInput: model, this field defaults to external_model.name, with '.' 
and ':' replaced with '-', and if not specified for other entities, it defaults to entity_name-entity_version.""" + provisioned_model_units: Optional[int] = None + """The number of model units provisioned.""" + scale_to_zero_enabled: Optional[bool] = None """Whether the compute resources for the served entity should scale down to zero.""" @@ -2906,6 +3059,8 @@ def as_dict(self) -> dict: body["min_provisioned_throughput"] = self.min_provisioned_throughput if self.name is not None: body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units if self.scale_to_zero_enabled is not None: body["scale_to_zero_enabled"] = self.scale_to_zero_enabled if self.workload_size is not None: @@ -2933,6 +3088,8 @@ def as_shallow_dict(self) -> dict: body["min_provisioned_throughput"] = self.min_provisioned_throughput if self.name is not None: body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units if self.scale_to_zero_enabled is not None: body["scale_to_zero_enabled"] = self.scale_to_zero_enabled if self.workload_size is not None: @@ -2953,6 +3110,7 @@ def from_dict(cls, d: Dict[str, Any]) -> ServedEntityInput: max_provisioned_throughput=d.get("max_provisioned_throughput", None), min_provisioned_throughput=d.get("min_provisioned_throughput", None), name=d.get("name", None), + provisioned_model_units=d.get("provisioned_model_units", None), scale_to_zero_enabled=d.get("scale_to_zero_enabled", None), workload_size=d.get("workload_size", None), workload_type=_enum(d, "workload_type", ServingModelWorkloadType), @@ -3006,6 +3164,9 @@ class ServedEntityOutput: model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other entities, it defaults to entity_name-entity_version.""" + provisioned_model_units: Optional[int] = None + """The number of model units provisioned.""" + scale_to_zero_enabled: Optional[bool] = None """Whether the compute resources for the served entity should scale down to zero.""" @@ -3053,6 +3214,8 @@ def as_dict(self) -> dict: body["min_provisioned_throughput"] = self.min_provisioned_throughput if self.name is not None: body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units if self.scale_to_zero_enabled is not None: body["scale_to_zero_enabled"] = self.scale_to_zero_enabled if self.state: @@ -3088,6 +3251,8 @@ def as_shallow_dict(self) -> dict: body["min_provisioned_throughput"] = self.min_provisioned_throughput if self.name is not None: body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units if self.scale_to_zero_enabled is not None: body["scale_to_zero_enabled"] = self.scale_to_zero_enabled if self.state: @@ -3113,6 +3278,7 @@ def from_dict(cls, d: Dict[str, Any]) -> ServedEntityOutput: max_provisioned_throughput=d.get("max_provisioned_throughput", None), min_provisioned_throughput=d.get("min_provisioned_throughput", None), name=d.get("name", None), + provisioned_model_units=d.get("provisioned_model_units", None), scale_to_zero_enabled=d.get("scale_to_zero_enabled", None), state=_from_dict(d, "state", ServedModelState), workload_size=d.get("workload_size", None), @@ -3206,6 +3372,9 @@ class ServedModelInput: model, this field defaults to external_model.name, with '.' 
and ':' replaced with '-', and if not specified for other entities, it defaults to entity_name-entity_version.""" + provisioned_model_units: Optional[int] = None + """The number of model units provisioned.""" + workload_size: Optional[str] = None """The workload size of the served entity. The workload size corresponds to a range of provisioned concurrency that the compute autoscales between. A single unit of provisioned concurrency can @@ -3240,6 +3409,8 @@ def as_dict(self) -> dict: body["model_version"] = self.model_version if self.name is not None: body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units if self.scale_to_zero_enabled is not None: body["scale_to_zero_enabled"] = self.scale_to_zero_enabled if self.workload_size is not None: @@ -3265,6 +3436,8 @@ def as_shallow_dict(self) -> dict: body["model_version"] = self.model_version if self.name is not None: body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units if self.scale_to_zero_enabled is not None: body["scale_to_zero_enabled"] = self.scale_to_zero_enabled if self.workload_size is not None: @@ -3284,6 +3457,7 @@ def from_dict(cls, d: Dict[str, Any]) -> ServedModelInput: model_name=d.get("model_name", None), model_version=d.get("model_version", None), name=d.get("name", None), + provisioned_model_units=d.get("provisioned_model_units", None), scale_to_zero_enabled=d.get("scale_to_zero_enabled", None), workload_size=d.get("workload_size", None), workload_type=_enum(d, "workload_type", ServedModelInputWorkloadType), @@ -3325,6 +3499,9 @@ class ServedModelOutput: model, this field defaults to external_model.name, with '.' and ':' replaced with '-', and if not specified for other entities, it defaults to entity_name-entity_version.""" + provisioned_model_units: Optional[int] = None + """The number of model units provisioned.""" + scale_to_zero_enabled: Optional[bool] = None """Whether the compute resources for the served entity should scale down to zero.""" @@ -3364,6 +3541,8 @@ def as_dict(self) -> dict: body["model_version"] = self.model_version if self.name is not None: body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units if self.scale_to_zero_enabled is not None: body["scale_to_zero_enabled"] = self.scale_to_zero_enabled if self.state: @@ -3391,6 +3570,8 @@ def as_shallow_dict(self) -> dict: body["model_version"] = self.model_version if self.name is not None: body["name"] = self.name + if self.provisioned_model_units is not None: + body["provisioned_model_units"] = self.provisioned_model_units if self.scale_to_zero_enabled is not None: body["scale_to_zero_enabled"] = self.scale_to_zero_enabled if self.state: @@ -3412,6 +3593,7 @@ def from_dict(cls, d: Dict[str, Any]) -> ServedModelOutput: model_name=d.get("model_name", None), model_version=d.get("model_version", None), name=d.get("name", None), + provisioned_model_units=d.get("provisioned_model_units", None), scale_to_zero_enabled=d.get("scale_to_zero_enabled", None), state=_from_dict(d, "state", ServedModelState), workload_size=d.get("workload_size", None), @@ -4094,6 +4276,37 @@ def from_dict(cls, d: Dict[str, Any]) -> TrafficConfig: return cls(routes=_repeated_dict(d, "routes", Route)) +@dataclass +class UpdateProvisionedThroughputEndpointConfigRequest: + config: PtEndpointCoreConfig + + name: Optional[str] = None + """The name of the 
pt endpoint to update. This field is required.""" + + def as_dict(self) -> dict: + """Serializes the UpdateProvisionedThroughputEndpointConfigRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.config: + body["config"] = self.config.as_dict() + if self.name is not None: + body["name"] = self.name + return body + + def as_shallow_dict(self) -> dict: + """Serializes the UpdateProvisionedThroughputEndpointConfigRequest into a shallow dictionary of its immediate attributes.""" + body = {} + if self.config: + body["config"] = self.config + if self.name is not None: + body["name"] = self.name + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> UpdateProvisionedThroughputEndpointConfigRequest: + """Deserializes the UpdateProvisionedThroughputEndpointConfigRequest from a dictionary.""" + return cls(config=_from_dict(d, "config", PtEndpointCoreConfig), name=d.get("name", None)) + + @dataclass class V1ResponseChoiceElement: finish_reason: Optional[str] = None @@ -4310,6 +4523,70 @@ def create_and_wait( tags=tags, ).result(timeout=timeout) + def create_provisioned_throughput_endpoint( + self, + name: str, + config: PtEndpointCoreConfig, + *, + ai_gateway: Optional[AiGatewayConfig] = None, + budget_policy_id: Optional[str] = None, + tags: Optional[List[EndpointTag]] = None, + ) -> Wait[ServingEndpointDetailed]: + """Create a new PT serving endpoint. + + :param name: str + The name of the serving endpoint. This field is required and must be unique across a Databricks + workspace. An endpoint name can consist of alphanumeric characters, dashes, and underscores. + :param config: :class:`PtEndpointCoreConfig` + The core config of the serving endpoint. + :param ai_gateway: :class:`AiGatewayConfig` (optional) + The AI Gateway configuration for the serving endpoint. + :param budget_policy_id: str (optional) + The budget policy associated with the endpoint. + :param tags: List[:class:`EndpointTag`] (optional) + Tags to be attached to the serving endpoint and automatically propagated to billing logs. + + :returns: + Long-running operation waiter for :class:`ServingEndpointDetailed`. + See :method:wait_get_serving_endpoint_not_updating for more details. + """ + body = {} + if ai_gateway is not None: + body["ai_gateway"] = ai_gateway.as_dict() + if budget_policy_id is not None: + body["budget_policy_id"] = budget_policy_id + if config is not None: + body["config"] = config.as_dict() + if name is not None: + body["name"] = name + if tags is not None: + body["tags"] = [v.as_dict() for v in tags] + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + op_response = self._api.do("POST", "/api/2.0/serving-endpoints/pt", body=body, headers=headers) + return Wait( + self.wait_get_serving_endpoint_not_updating, + response=ServingEndpointDetailed.from_dict(op_response), + name=op_response["name"], + ) + + def create_provisioned_throughput_endpoint_and_wait( + self, + name: str, + config: PtEndpointCoreConfig, + *, + ai_gateway: Optional[AiGatewayConfig] = None, + budget_policy_id: Optional[str] = None, + tags: Optional[List[EndpointTag]] = None, + timeout=timedelta(minutes=20), + ) -> ServingEndpointDetailed: + return self.create_provisioned_throughput_endpoint( + ai_gateway=ai_gateway, budget_policy_id=budget_policy_id, config=config, name=name, tags=tags + ).result(timeout=timeout) + def delete(self, name: str): """Delete a serving endpoint. 
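A sketch of standing up a provisioned-throughput endpoint with the new long-running-operation helper, assuming a configured `WorkspaceClient`; the UC model name and unit count are placeholders:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import PtEndpointCoreConfig, PtServedModel

w = WorkspaceClient()

endpoint = w.serving_endpoints.create_provisioned_throughput_endpoint_and_wait(
    name="my-pt-endpoint",
    config=PtEndpointCoreConfig(
        served_entities=[
            PtServedModel(
                entity_name="main.default.my_model",  # catalog.schema.model form
                entity_version="1",
                provisioned_model_units=100,          # placeholder unit count
            )
        ]
    ),
)
print(endpoint.state)
```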
@@ -4848,6 +5125,43 @@ def update_permissions(
         )
         return ServingEndpointPermissions.from_dict(res)
 
+    def update_provisioned_throughput_endpoint_config(
+        self, name: str, config: PtEndpointCoreConfig
+    ) -> Wait[ServingEndpointDetailed]:
+        """Update config of a PT serving endpoint.
+
+        Updates any combination of the PT endpoint's served entities, the compute configuration of those
+        served entities, and the endpoint's traffic config. Updates are applied instantaneously, and the
+        endpoint should reflect the new config immediately.
+
+        :param name: str
+          The name of the PT endpoint to update. This field is required.
+        :param config: :class:`PtEndpointCoreConfig`
+
+        :returns:
+          Long-running operation waiter for :class:`ServingEndpointDetailed`.
+          See :method:wait_get_serving_endpoint_not_updating for more details.
+        """
+        body = {}
+        if config is not None:
+            body["config"] = config.as_dict()
+        headers = {
+            "Accept": "application/json",
+            "Content-Type": "application/json",
+        }
+
+        op_response = self._api.do("PUT", f"/api/2.0/serving-endpoints/pt/{name}/config", body=body, headers=headers)
+        return Wait(
+            self.wait_get_serving_endpoint_not_updating,
+            response=ServingEndpointDetailed.from_dict(op_response),
+            name=op_response["name"],
+        )
+
+    def update_provisioned_throughput_endpoint_config_and_wait(
+        self, name: str, config: PtEndpointCoreConfig, timeout=timedelta(minutes=20)
+    ) -> ServingEndpointDetailed:
+        return self.update_provisioned_throughput_endpoint_config(config=config, name=name).result(timeout=timeout)
+
 
 class ServingEndpointsDataPlaneAPI:
     """Serving endpoints DataPlane provides a set of operations to interact with data plane endpoints for Serving
diff --git a/databricks/sdk/service/settings.py b/databricks/sdk/service/settings.py
index 673296a0..70b3bc0a 100755
--- a/databricks/sdk/service/settings.py
+++ b/databricks/sdk/service/settings.py
@@ -65,6 +65,49 @@ def from_dict(cls, d: Dict[str, Any]) -> AccountIpAccessEnable:
         )
 
 
+@dataclass
+class AccountNetworkPolicy:
+    account_id: Optional[str] = None
+    """The associated account ID for this Network Policy object."""
+
+    egress: Optional[NetworkPolicyEgress] = None
+    """The network policies applying for egress traffic."""
+
+    network_policy_id: Optional[str] = None
+    """The unique identifier for the network policy."""
+
+    def as_dict(self) -> dict:
+        """Serializes the AccountNetworkPolicy into a dictionary suitable for use as a JSON request body."""
+        body = {}
+        if self.account_id is not None:
+            body["account_id"] = self.account_id
+        if self.egress:
+            body["egress"] = self.egress.as_dict()
+        if self.network_policy_id is not None:
+            body["network_policy_id"] = self.network_policy_id
+        return body
+
+    def as_shallow_dict(self) -> dict:
+        """Serializes the AccountNetworkPolicy into a shallow dictionary of its immediate attributes."""
+        body = {}
+        if self.account_id is not None:
+            body["account_id"] = self.account_id
+        if self.egress:
+            body["egress"] = self.egress
+        if self.network_policy_id is not None:
+            body["network_policy_id"] = self.network_policy_id
+        return body
+
+    @classmethod
+    def from_dict(cls, d: Dict[str, Any]) -> AccountNetworkPolicy:
+        """Deserializes the AccountNetworkPolicy from a dictionary."""
+        return cls(
+            account_id=d.get("account_id", None),
+            egress=_from_dict(d, "egress", NetworkPolicyEgress),
+            network_policy_id=d.get("network_policy_id", None),
+        )
+
+
 @dataclass
 class AibiDashboardEmbeddingAccessPolicy:
     access_policy_type: AibiDashboardEmbeddingAccessPolicyAccessPolicyType
@@ -1405,6 +1448,38 @@ def from_dict(cls, d: Dict[str,
Any]) -> DeleteDisableLegacyFeaturesResponse: return cls(etag=d.get("etag", None)) +@dataclass +class DeleteLlmProxyPartnerPoweredWorkspaceResponse: + """The etag is returned.""" + + etag: str + """etag used for versioning. The response is at least as fresh as the eTag provided. This is used + for optimistic concurrency control as a way to help prevent simultaneous writes of a setting + overwriting each other. It is strongly suggested that systems make use of the etag in the read + -> delete pattern to perform setting deletions in order to avoid race conditions. That is, get + an etag from a GET request, and pass it with the DELETE request to identify the rule set version + you are deleting.""" + + def as_dict(self) -> dict: + """Serializes the DeleteLlmProxyPartnerPoweredWorkspaceResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.etag is not None: + body["etag"] = self.etag + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DeleteLlmProxyPartnerPoweredWorkspaceResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.etag is not None: + body["etag"] = self.etag + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DeleteLlmProxyPartnerPoweredWorkspaceResponse: + """Deserializes the DeleteLlmProxyPartnerPoweredWorkspaceResponse from a dictionary.""" + return cls(etag=d.get("etag", None)) + + @dataclass class DeleteNetworkConnectivityConfigurationResponse: def as_dict(self) -> dict: @@ -1423,6 +1498,24 @@ def from_dict(cls, d: Dict[str, Any]) -> DeleteNetworkConnectivityConfigurationR return cls() +@dataclass +class DeleteNetworkPolicyRpcResponse: + def as_dict(self) -> dict: + """Serializes the DeleteNetworkPolicyRpcResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + return body + + def as_shallow_dict(self) -> dict: + """Serializes the DeleteNetworkPolicyRpcResponse into a shallow dictionary of its immediate attributes.""" + body = {} + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> DeleteNetworkPolicyRpcResponse: + """Deserializes the DeleteNetworkPolicyRpcResponse from a dictionary.""" + return cls() + + @dataclass class DeletePersonalComputeSettingResponse: """The etag is returned.""" @@ -1963,6 +2056,257 @@ class EgressNetworkPolicyInternetAccessPolicyStorageDestinationStorageDestinatio GOOGLE_CLOUD_STORAGE = "GOOGLE_CLOUD_STORAGE" +@dataclass +class EgressNetworkPolicyNetworkAccessPolicy: + restriction_mode: EgressNetworkPolicyNetworkAccessPolicyRestrictionMode + """The restriction mode that controls how serverless workloads can access the internet.""" + + allowed_internet_destinations: Optional[List[EgressNetworkPolicyNetworkAccessPolicyInternetDestination]] = None + """List of internet destinations that serverless workloads are allowed to access when in + RESTRICTED_ACCESS mode.""" + + allowed_storage_destinations: Optional[List[EgressNetworkPolicyNetworkAccessPolicyStorageDestination]] = None + """List of storage destinations that serverless workloads are allowed to access when in + RESTRICTED_ACCESS mode.""" + + policy_enforcement: Optional[EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcement] = None + """Optional. 
When policy_enforcement is not provided, we default to ENFORCE_MODE_ALL_SERVICES""" + + def as_dict(self) -> dict: + """Serializes the EgressNetworkPolicyNetworkAccessPolicy into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.allowed_internet_destinations: + body["allowed_internet_destinations"] = [v.as_dict() for v in self.allowed_internet_destinations] + if self.allowed_storage_destinations: + body["allowed_storage_destinations"] = [v.as_dict() for v in self.allowed_storage_destinations] + if self.policy_enforcement: + body["policy_enforcement"] = self.policy_enforcement.as_dict() + if self.restriction_mode is not None: + body["restriction_mode"] = self.restriction_mode.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes the EgressNetworkPolicyNetworkAccessPolicy into a shallow dictionary of its immediate attributes.""" + body = {} + if self.allowed_internet_destinations: + body["allowed_internet_destinations"] = self.allowed_internet_destinations + if self.allowed_storage_destinations: + body["allowed_storage_destinations"] = self.allowed_storage_destinations + if self.policy_enforcement: + body["policy_enforcement"] = self.policy_enforcement + if self.restriction_mode is not None: + body["restriction_mode"] = self.restriction_mode + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> EgressNetworkPolicyNetworkAccessPolicy: + """Deserializes the EgressNetworkPolicyNetworkAccessPolicy from a dictionary.""" + return cls( + allowed_internet_destinations=_repeated_dict( + d, "allowed_internet_destinations", EgressNetworkPolicyNetworkAccessPolicyInternetDestination + ), + allowed_storage_destinations=_repeated_dict( + d, "allowed_storage_destinations", EgressNetworkPolicyNetworkAccessPolicyStorageDestination + ), + policy_enforcement=_from_dict( + d, "policy_enforcement", EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcement + ), + restriction_mode=_enum(d, "restriction_mode", EgressNetworkPolicyNetworkAccessPolicyRestrictionMode), + ) + + +@dataclass +class EgressNetworkPolicyNetworkAccessPolicyInternetDestination: + """Users can specify accessible internet destinations when outbound access is restricted. We only + support DNS_NAME (FQDN format) destinations for the time being. Going forward we may extend + support to host names and IP addresses.""" + + destination: Optional[str] = None + """The internet destination to which access will be allowed. Format dependent on the destination + type.""" + + internet_destination_type: Optional[ + EgressNetworkPolicyNetworkAccessPolicyInternetDestinationInternetDestinationType + ] = None + """The type of internet destination. 
Currently only DNS_NAME is supported.""" + + def as_dict(self) -> dict: + """Serializes the EgressNetworkPolicyNetworkAccessPolicyInternetDestination into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.destination is not None: + body["destination"] = self.destination + if self.internet_destination_type is not None: + body["internet_destination_type"] = self.internet_destination_type.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes the EgressNetworkPolicyNetworkAccessPolicyInternetDestination into a shallow dictionary of its immediate attributes.""" + body = {} + if self.destination is not None: + body["destination"] = self.destination + if self.internet_destination_type is not None: + body["internet_destination_type"] = self.internet_destination_type + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> EgressNetworkPolicyNetworkAccessPolicyInternetDestination: + """Deserializes the EgressNetworkPolicyNetworkAccessPolicyInternetDestination from a dictionary.""" + return cls( + destination=d.get("destination", None), + internet_destination_type=_enum( + d, + "internet_destination_type", + EgressNetworkPolicyNetworkAccessPolicyInternetDestinationInternetDestinationType, + ), + ) + + +class EgressNetworkPolicyNetworkAccessPolicyInternetDestinationInternetDestinationType(Enum): + + DNS_NAME = "DNS_NAME" + + +@dataclass +class EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcement: + dry_run_mode_product_filter: Optional[ + List[EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcementDryRunModeProductFilter] + ] = None + """When empty, it means dry run for all products. When non-empty, it means dry run for specific + products and for the other products, they will run in enforced mode.""" + + enforcement_mode: Optional[EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcementEnforcementMode] = None + """The mode of policy enforcement. ENFORCED blocks traffic that violates policy, while DRY_RUN only + logs violations without blocking. 
When not specified, defaults to ENFORCED.""" + + def as_dict(self) -> dict: + """Serializes the EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcement into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.dry_run_mode_product_filter: + body["dry_run_mode_product_filter"] = [v.value for v in self.dry_run_mode_product_filter] + if self.enforcement_mode is not None: + body["enforcement_mode"] = self.enforcement_mode.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes the EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcement into a shallow dictionary of its immediate attributes.""" + body = {} + if self.dry_run_mode_product_filter: + body["dry_run_mode_product_filter"] = self.dry_run_mode_product_filter + if self.enforcement_mode is not None: + body["enforcement_mode"] = self.enforcement_mode + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcement: + """Deserializes the EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcement from a dictionary.""" + return cls( + dry_run_mode_product_filter=_repeated_enum( + d, + "dry_run_mode_product_filter", + EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcementDryRunModeProductFilter, + ), + enforcement_mode=_enum( + d, "enforcement_mode", EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcementEnforcementMode + ), + ) + + +class EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcementDryRunModeProductFilter(Enum): + """The values should match the list of workloads used in networkconfig.proto""" + + DBSQL = "DBSQL" + ML_SERVING = "ML_SERVING" + + +class EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcementEnforcementMode(Enum): + + DRY_RUN = "DRY_RUN" + ENFORCED = "ENFORCED" + + +class EgressNetworkPolicyNetworkAccessPolicyRestrictionMode(Enum): + """At which level can Databricks and Databricks managed compute access Internet. FULL_ACCESS: + Databricks can access Internet. No blocking rules will apply. 
RESTRICTED_ACCESS: Databricks can + only access explicitly allowed internet and storage destinations, as well as UC connections and + external locations.""" + + FULL_ACCESS = "FULL_ACCESS" + RESTRICTED_ACCESS = "RESTRICTED_ACCESS" + + +@dataclass +class EgressNetworkPolicyNetworkAccessPolicyStorageDestination: + """Users can specify accessible storage destinations.""" + + azure_storage_account: Optional[str] = None + """The Azure storage account name.""" + + azure_storage_service: Optional[str] = None + """The Azure storage service type (blob, dfs, etc.).""" + + bucket_name: Optional[str] = None + + region: Optional[str] = None + """The region of the S3 bucket.""" + + storage_destination_type: Optional[ + EgressNetworkPolicyNetworkAccessPolicyStorageDestinationStorageDestinationType + ] = None + """The type of storage destination.""" + + def as_dict(self) -> dict: + """Serializes the EgressNetworkPolicyNetworkAccessPolicyStorageDestination into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.azure_storage_account is not None: + body["azure_storage_account"] = self.azure_storage_account + if self.azure_storage_service is not None: + body["azure_storage_service"] = self.azure_storage_service + if self.bucket_name is not None: + body["bucket_name"] = self.bucket_name + if self.region is not None: + body["region"] = self.region + if self.storage_destination_type is not None: + body["storage_destination_type"] = self.storage_destination_type.value + return body + + def as_shallow_dict(self) -> dict: + """Serializes the EgressNetworkPolicyNetworkAccessPolicyStorageDestination into a shallow dictionary of its immediate attributes.""" + body = {} + if self.azure_storage_account is not None: + body["azure_storage_account"] = self.azure_storage_account + if self.azure_storage_service is not None: + body["azure_storage_service"] = self.azure_storage_service + if self.bucket_name is not None: + body["bucket_name"] = self.bucket_name + if self.region is not None: + body["region"] = self.region + if self.storage_destination_type is not None: + body["storage_destination_type"] = self.storage_destination_type + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> EgressNetworkPolicyNetworkAccessPolicyStorageDestination: + """Deserializes the EgressNetworkPolicyNetworkAccessPolicyStorageDestination from a dictionary.""" + return cls( + azure_storage_account=d.get("azure_storage_account", None), + azure_storage_service=d.get("azure_storage_service", None), + bucket_name=d.get("bucket_name", None), + region=d.get("region", None), + storage_destination_type=_enum( + d, + "storage_destination_type", + EgressNetworkPolicyNetworkAccessPolicyStorageDestinationStorageDestinationType, + ), + ) + + +class EgressNetworkPolicyNetworkAccessPolicyStorageDestinationStorageDestinationType(Enum): + + AWS_S3 = "AWS_S3" + AZURE_STORAGE = "AZURE_STORAGE" + GOOGLE_CLOUD_STORAGE = "GOOGLE_CLOUD_STORAGE" + + class EgressResourceType(Enum): """The target resources that are supported by Network Connectivity Config. Note: some egress types can support general types that are not defined in EgressResourceType. E.g.: Azure private @@ -2803,6 +3147,41 @@ def from_dict(cls, d: Dict[str, Any]) -> ListNetworkConnectivityConfigurationsRe ) +@dataclass +class ListNetworkPoliciesResponse: + items: Optional[List[AccountNetworkPolicy]] = None + """List of network policies.""" + + next_page_token: Optional[str] = None + """A token that can be used to get the next page of results. 
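# --- Editor's illustrative sketch; not part of the generated diff. ---
# An allow-listed storage destination for RESTRICTED_ACCESS mode. bucket_name
# and region apply to AWS_S3 destinations; the azure_* fields would be used
# with AZURE_STORAGE instead. Names are placeholders.
from databricks.sdk.service import settings

s3_destination = settings.EgressNetworkPolicyNetworkAccessPolicyStorageDestination(
    bucket_name="my-allowed-bucket",
    region="us-west-2",  # the region of the S3 bucket
    storage_destination_type=settings.EgressNetworkPolicyNetworkAccessPolicyStorageDestinationStorageDestinationType.AWS_S3,
)
print(s3_destination.as_dict())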
If null, there are no more results to + show.""" + + def as_dict(self) -> dict: + """Serializes the ListNetworkPoliciesResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.items: + body["items"] = [v.as_dict() for v in self.items] + if self.next_page_token is not None: + body["next_page_token"] = self.next_page_token + return body + + def as_shallow_dict(self) -> dict: + """Serializes the ListNetworkPoliciesResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.items: + body["items"] = self.items + if self.next_page_token is not None: + body["next_page_token"] = self.next_page_token + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> ListNetworkPoliciesResponse: + """Deserializes the ListNetworkPoliciesResponse from a dictionary.""" + return cls( + items=_repeated_dict(d, "items", AccountNetworkPolicy), next_page_token=d.get("next_page_token", None) + ) + + @dataclass class ListNotificationDestinationsResponse: next_page_token: Optional[str] = None @@ -2943,17 +3322,167 @@ class ListType(Enum): @dataclass -class MicrosoftTeamsConfig: - url: Optional[str] = None - """[Input-Only] URL for Microsoft Teams.""" - - url_set: Optional[bool] = None - """[Output-Only] Whether URL is set.""" +class LlmProxyPartnerPoweredAccount: + boolean_val: BooleanMessage - def as_dict(self) -> dict: - """Serializes the MicrosoftTeamsConfig into a dictionary suitable for use as a JSON request body.""" - body = {} - if self.url is not None: + etag: Optional[str] = None + """etag used for versioning. The response is at least as fresh as the eTag provided. This is used + for optimistic concurrency control as a way to help prevent simultaneous writes of a setting + overwriting each other. It is strongly suggested that systems make use of the etag in the read + -> update pattern to perform setting updates in order to avoid race conditions. That is, get an + etag from a GET request, and pass it with the PATCH request to identify the setting version you + are updating.""" + + setting_name: Optional[str] = None + """Name of the corresponding setting. This field is populated in the response, but it will not be + respected even if it's set in the request body. The setting name in the path parameter will be + respected instead. 
Setting name is required to be 'default' if the setting only has one instance + per workspace.""" + + def as_dict(self) -> dict: + """Serializes the LlmProxyPartnerPoweredAccount into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.boolean_val: + body["boolean_val"] = self.boolean_val.as_dict() + if self.etag is not None: + body["etag"] = self.etag + if self.setting_name is not None: + body["setting_name"] = self.setting_name + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LlmProxyPartnerPoweredAccount into a shallow dictionary of its immediate attributes.""" + body = {} + if self.boolean_val: + body["boolean_val"] = self.boolean_val + if self.etag is not None: + body["etag"] = self.etag + if self.setting_name is not None: + body["setting_name"] = self.setting_name + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LlmProxyPartnerPoweredAccount: + """Deserializes the LlmProxyPartnerPoweredAccount from a dictionary.""" + return cls( + boolean_val=_from_dict(d, "boolean_val", BooleanMessage), + etag=d.get("etag", None), + setting_name=d.get("setting_name", None), + ) + + +@dataclass +class LlmProxyPartnerPoweredEnforce: + boolean_val: BooleanMessage + + etag: Optional[str] = None + """etag used for versioning. The response is at least as fresh as the eTag provided. This is used + for optimistic concurrency control as a way to help prevent simultaneous writes of a setting + overwriting each other. It is strongly suggested that systems make use of the etag in the read + -> update pattern to perform setting updates in order to avoid race conditions. That is, get an + etag from a GET request, and pass it with the PATCH request to identify the setting version you + are updating.""" + + setting_name: Optional[str] = None + """Name of the corresponding setting. This field is populated in the response, but it will not be + respected even if it's set in the request body. The setting name in the path parameter will be + respected instead. Setting name is required to be 'default' if the setting only has one instance + per workspace.""" + + def as_dict(self) -> dict: + """Serializes the LlmProxyPartnerPoweredEnforce into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.boolean_val: + body["boolean_val"] = self.boolean_val.as_dict() + if self.etag is not None: + body["etag"] = self.etag + if self.setting_name is not None: + body["setting_name"] = self.setting_name + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LlmProxyPartnerPoweredEnforce into a shallow dictionary of its immediate attributes.""" + body = {} + if self.boolean_val: + body["boolean_val"] = self.boolean_val + if self.etag is not None: + body["etag"] = self.etag + if self.setting_name is not None: + body["setting_name"] = self.setting_name + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LlmProxyPartnerPoweredEnforce: + """Deserializes the LlmProxyPartnerPoweredEnforce from a dictionary.""" + return cls( + boolean_val=_from_dict(d, "boolean_val", BooleanMessage), + etag=d.get("etag", None), + setting_name=d.get("setting_name", None), + ) + + +@dataclass +class LlmProxyPartnerPoweredWorkspace: + boolean_val: BooleanMessage + + etag: Optional[str] = None + """etag used for versioning. The response is at least as fresh as the eTag provided. This is used + for optimistic concurrency control as a way to help prevent simultaneous writes of a setting + overwriting each other. 
It is strongly suggested that systems make use of the etag in the read + -> update pattern to perform setting updates in order to avoid race conditions. That is, get an + etag from a GET request, and pass it with the PATCH request to identify the setting version you + are updating.""" + + setting_name: Optional[str] = None + """Name of the corresponding setting. This field is populated in the response, but it will not be + respected even if it's set in the request body. The setting name in the path parameter will be + respected instead. Setting name is required to be 'default' if the setting only has one instance + per workspace.""" + + def as_dict(self) -> dict: + """Serializes the LlmProxyPartnerPoweredWorkspace into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.boolean_val: + body["boolean_val"] = self.boolean_val.as_dict() + if self.etag is not None: + body["etag"] = self.etag + if self.setting_name is not None: + body["setting_name"] = self.setting_name + return body + + def as_shallow_dict(self) -> dict: + """Serializes the LlmProxyPartnerPoweredWorkspace into a shallow dictionary of its immediate attributes.""" + body = {} + if self.boolean_val: + body["boolean_val"] = self.boolean_val + if self.etag is not None: + body["etag"] = self.etag + if self.setting_name is not None: + body["setting_name"] = self.setting_name + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> LlmProxyPartnerPoweredWorkspace: + """Deserializes the LlmProxyPartnerPoweredWorkspace from a dictionary.""" + return cls( + boolean_val=_from_dict(d, "boolean_val", BooleanMessage), + etag=d.get("etag", None), + setting_name=d.get("setting_name", None), + ) + + +@dataclass +class MicrosoftTeamsConfig: + url: Optional[str] = None + """[Input-Only] URL for Microsoft Teams.""" + + url_set: Optional[bool] = None + """[Output-Only] Whether URL is set.""" + + def as_dict(self) -> dict: + """Serializes the MicrosoftTeamsConfig into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.url is not None: body["url"] = self.url if self.url_set is not None: body["url_set"] = self.url_set @@ -3372,6 +3901,37 @@ def from_dict(cls, d: Dict[str, Any]) -> NetworkConnectivityConfiguration: ) +@dataclass +class NetworkPolicyEgress: + """The network policies applying for egress traffic. This message is used by the UI/REST API. We + translate this message to the format expected by the dataplane in Lakehouse Network Manager (for + the format expected by the dataplane, see networkconfig.textproto). This policy should be + consistent with [[com.databricks.api.proto.settingspolicy.EgressNetworkPolicy]]. 
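# --- Editor's illustrative sketch; not part of the generated diff. ---
# The read -> update etag pattern described in the docstrings above: get() the
# current setting, modify it, and send it back so the carried etag guards
# against concurrent writers. Assumes a configured WorkspaceClient; the
# BooleanMessage constructor field and the field_mask path are assumptions.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import settings

w = WorkspaceClient()

current = w.settings.llm_proxy_partner_powered_workspace.get()
current.boolean_val = settings.BooleanMessage(value=False)  # assumed constructor field

updated = w.settings.llm_proxy_partner_powered_workspace.update(
    allow_missing=True,              # always true for the Settings API
    setting=current,                 # includes the etag obtained from get()
    field_mask="boolean_val.value",  # explicit field list rather than "*"
)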
Details see + API-design: https://docs.google.com/document/d/1DKWO_FpZMCY4cF2O62LpwII1lx8gsnDGG-qgE3t3TOA/""" + + network_access: Optional[EgressNetworkPolicyNetworkAccessPolicy] = None + """The access policy enforced for egress traffic to the internet.""" + + def as_dict(self) -> dict: + """Serializes the NetworkPolicyEgress into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.network_access: + body["network_access"] = self.network_access.as_dict() + return body + + def as_shallow_dict(self) -> dict: + """Serializes the NetworkPolicyEgress into a shallow dictionary of its immediate attributes.""" + body = {} + if self.network_access: + body["network_access"] = self.network_access + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> NetworkPolicyEgress: + """Deserializes the NetworkPolicyEgress from a dictionary.""" + return cls(network_access=_from_dict(d, "network_access", EgressNetworkPolicyNetworkAccessPolicy)) + + @dataclass class NotificationDestination: config: Optional[Config] = None @@ -5113,54 +5673,65 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateIpAccessList: @dataclass -class UpdateNotificationDestinationRequest: - config: Optional[Config] = None - """The configuration for the notification destination. Must wrap EXACTLY one of the nested configs.""" +class UpdateLlmProxyPartnerPoweredAccountRequest: + """Details required to update a setting.""" - display_name: Optional[str] = None - """The display name for the notification destination.""" + allow_missing: bool + """This should always be set to true for Settings API. Added for AIP compliance.""" - id: Optional[str] = None - """UUID identifying notification destination.""" + setting: LlmProxyPartnerPoweredAccount + + field_mask: str + """The field mask must be a single string, with multiple fields separated by commas (no spaces). + The field path is relative to the resource object, using a dot (`.`) to navigate sub-fields + (e.g., `author.given_name`). Specification of elements in sequence or map fields is not allowed, + as only the entire collection field can be specified. Field names must exactly match the + resource field names. + + A field mask of `*` indicates full replacement. 
It’s recommended to always explicitly list the + fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the + API changes in the future.""" def as_dict(self) -> dict: - """Serializes the UpdateNotificationDestinationRequest into a dictionary suitable for use as a JSON request body.""" + """Serializes the UpdateLlmProxyPartnerPoweredAccountRequest into a dictionary suitable for use as a JSON request body.""" body = {} - if self.config: - body["config"] = self.config.as_dict() - if self.display_name is not None: - body["display_name"] = self.display_name - if self.id is not None: - body["id"] = self.id + if self.allow_missing is not None: + body["allow_missing"] = self.allow_missing + if self.field_mask is not None: + body["field_mask"] = self.field_mask + if self.setting: + body["setting"] = self.setting.as_dict() return body def as_shallow_dict(self) -> dict: - """Serializes the UpdateNotificationDestinationRequest into a shallow dictionary of its immediate attributes.""" + """Serializes the UpdateLlmProxyPartnerPoweredAccountRequest into a shallow dictionary of its immediate attributes.""" body = {} - if self.config: - body["config"] = self.config - if self.display_name is not None: - body["display_name"] = self.display_name - if self.id is not None: - body["id"] = self.id + if self.allow_missing is not None: + body["allow_missing"] = self.allow_missing + if self.field_mask is not None: + body["field_mask"] = self.field_mask + if self.setting: + body["setting"] = self.setting return body @classmethod - def from_dict(cls, d: Dict[str, Any]) -> UpdateNotificationDestinationRequest: - """Deserializes the UpdateNotificationDestinationRequest from a dictionary.""" + def from_dict(cls, d: Dict[str, Any]) -> UpdateLlmProxyPartnerPoweredAccountRequest: + """Deserializes the UpdateLlmProxyPartnerPoweredAccountRequest from a dictionary.""" return cls( - config=_from_dict(d, "config", Config), display_name=d.get("display_name", None), id=d.get("id", None) + allow_missing=d.get("allow_missing", None), + field_mask=d.get("field_mask", None), + setting=_from_dict(d, "setting", LlmProxyPartnerPoweredAccount), ) @dataclass -class UpdatePersonalComputeSettingRequest: +class UpdateLlmProxyPartnerPoweredEnforceRequest: """Details required to update a setting.""" allow_missing: bool """This should always be set to true for Settings API. Added for AIP compliance.""" - setting: PersonalComputeSetting + setting: LlmProxyPartnerPoweredEnforce field_mask: str """The field mask must be a single string, with multiple fields separated by commas (no spaces). 
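# --- Editor's illustrative sketch; not part of the generated diff. ---
# What the account-level update request serializes to. The field mask is a
# single comma-separated string with no spaces; "*" means full replacement and
# is discouraged by the docstring above. The mask path is an assumption.
from databricks.sdk.service import settings

req = settings.UpdateLlmProxyPartnerPoweredAccountRequest(
    allow_missing=True,
    setting=settings.LlmProxyPartnerPoweredAccount(
        boolean_val=settings.BooleanMessage(value=True)  # assumed constructor field
    ),
    field_mask="boolean_val.value",
)
print(req.as_dict())  # {'allow_missing': True, 'field_mask': 'boolean_val.value', 'setting': {...}}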
@@ -5174,7 +5745,7 @@ class UpdatePersonalComputeSettingRequest: API changes in the future.""" def as_dict(self) -> dict: - """Serializes the UpdatePersonalComputeSettingRequest into a dictionary suitable for use as a JSON request body.""" + """Serializes the UpdateLlmProxyPartnerPoweredEnforceRequest into a dictionary suitable for use as a JSON request body.""" body = {} if self.allow_missing is not None: body["allow_missing"] = self.allow_missing @@ -5185,7 +5756,7 @@ def as_dict(self) -> dict: return body def as_shallow_dict(self) -> dict: - """Serializes the UpdatePersonalComputeSettingRequest into a shallow dictionary of its immediate attributes.""" + """Serializes the UpdateLlmProxyPartnerPoweredEnforceRequest into a shallow dictionary of its immediate attributes.""" body = {} if self.allow_missing is not None: body["allow_missing"] = self.allow_missing @@ -5196,43 +5767,188 @@ def as_shallow_dict(self) -> dict: return body @classmethod - def from_dict(cls, d: Dict[str, Any]) -> UpdatePersonalComputeSettingRequest: - """Deserializes the UpdatePersonalComputeSettingRequest from a dictionary.""" + def from_dict(cls, d: Dict[str, Any]) -> UpdateLlmProxyPartnerPoweredEnforceRequest: + """Deserializes the UpdateLlmProxyPartnerPoweredEnforceRequest from a dictionary.""" return cls( allow_missing=d.get("allow_missing", None), field_mask=d.get("field_mask", None), - setting=_from_dict(d, "setting", PersonalComputeSetting), + setting=_from_dict(d, "setting", LlmProxyPartnerPoweredEnforce), ) @dataclass -class UpdatePrivateEndpointRule: - """Properties of the new private endpoint rule. Note that you must approve the endpoint in Azure - portal after initialization.""" +class UpdateLlmProxyPartnerPoweredWorkspaceRequest: + """Details required to update a setting.""" - domain_names: Optional[List[str]] = None - """Only used by private endpoints to customer-managed resources. + allow_missing: bool + """This should always be set to true for Settings API. Added for AIP compliance.""" + + setting: LlmProxyPartnerPoweredWorkspace + + field_mask: str + """The field mask must be a single string, with multiple fields separated by commas (no spaces). + The field path is relative to the resource object, using a dot (`.`) to navigate sub-fields + (e.g., `author.given_name`). Specification of elements in sequence or map fields is not allowed, + as only the entire collection field can be specified. Field names must exactly match the + resource field names. - Domain names of target private link service. When updating this field, the full list of target - domain_names must be specified.""" + A field mask of `*` indicates full replacement. 
It’s recommended to always explicitly list the + fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the + API changes in the future.""" def as_dict(self) -> dict: - """Serializes the UpdatePrivateEndpointRule into a dictionary suitable for use as a JSON request body.""" + """Serializes the UpdateLlmProxyPartnerPoweredWorkspaceRequest into a dictionary suitable for use as a JSON request body.""" body = {} - if self.domain_names: - body["domain_names"] = [v for v in self.domain_names] + if self.allow_missing is not None: + body["allow_missing"] = self.allow_missing + if self.field_mask is not None: + body["field_mask"] = self.field_mask + if self.setting: + body["setting"] = self.setting.as_dict() return body def as_shallow_dict(self) -> dict: - """Serializes the UpdatePrivateEndpointRule into a shallow dictionary of its immediate attributes.""" + """Serializes the UpdateLlmProxyPartnerPoweredWorkspaceRequest into a shallow dictionary of its immediate attributes.""" body = {} - if self.domain_names: - body["domain_names"] = self.domain_names + if self.allow_missing is not None: + body["allow_missing"] = self.allow_missing + if self.field_mask is not None: + body["field_mask"] = self.field_mask + if self.setting: + body["setting"] = self.setting return body @classmethod - def from_dict(cls, d: Dict[str, Any]) -> UpdatePrivateEndpointRule: - """Deserializes the UpdatePrivateEndpointRule from a dictionary.""" + def from_dict(cls, d: Dict[str, Any]) -> UpdateLlmProxyPartnerPoweredWorkspaceRequest: + """Deserializes the UpdateLlmProxyPartnerPoweredWorkspaceRequest from a dictionary.""" + return cls( + allow_missing=d.get("allow_missing", None), + field_mask=d.get("field_mask", None), + setting=_from_dict(d, "setting", LlmProxyPartnerPoweredWorkspace), + ) + + +@dataclass +class UpdateNotificationDestinationRequest: + config: Optional[Config] = None + """The configuration for the notification destination. Must wrap EXACTLY one of the nested configs.""" + + display_name: Optional[str] = None + """The display name for the notification destination.""" + + id: Optional[str] = None + """UUID identifying notification destination.""" + + def as_dict(self) -> dict: + """Serializes the UpdateNotificationDestinationRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.config: + body["config"] = self.config.as_dict() + if self.display_name is not None: + body["display_name"] = self.display_name + if self.id is not None: + body["id"] = self.id + return body + + def as_shallow_dict(self) -> dict: + """Serializes the UpdateNotificationDestinationRequest into a shallow dictionary of its immediate attributes.""" + body = {} + if self.config: + body["config"] = self.config + if self.display_name is not None: + body["display_name"] = self.display_name + if self.id is not None: + body["id"] = self.id + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> UpdateNotificationDestinationRequest: + """Deserializes the UpdateNotificationDestinationRequest from a dictionary.""" + return cls( + config=_from_dict(d, "config", Config), display_name=d.get("display_name", None), id=d.get("id", None) + ) + + +@dataclass +class UpdatePersonalComputeSettingRequest: + """Details required to update a setting.""" + + allow_missing: bool + """This should always be set to true for Settings API. 
Added for AIP compliance.""" + + setting: PersonalComputeSetting + + field_mask: str + """The field mask must be a single string, with multiple fields separated by commas (no spaces). + The field path is relative to the resource object, using a dot (`.`) to navigate sub-fields + (e.g., `author.given_name`). Specification of elements in sequence or map fields is not allowed, + as only the entire collection field can be specified. Field names must exactly match the + resource field names. + + A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the + fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the + API changes in the future.""" + + def as_dict(self) -> dict: + """Serializes the UpdatePersonalComputeSettingRequest into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.allow_missing is not None: + body["allow_missing"] = self.allow_missing + if self.field_mask is not None: + body["field_mask"] = self.field_mask + if self.setting: + body["setting"] = self.setting.as_dict() + return body + + def as_shallow_dict(self) -> dict: + """Serializes the UpdatePersonalComputeSettingRequest into a shallow dictionary of its immediate attributes.""" + body = {} + if self.allow_missing is not None: + body["allow_missing"] = self.allow_missing + if self.field_mask is not None: + body["field_mask"] = self.field_mask + if self.setting: + body["setting"] = self.setting + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> UpdatePersonalComputeSettingRequest: + """Deserializes the UpdatePersonalComputeSettingRequest from a dictionary.""" + return cls( + allow_missing=d.get("allow_missing", None), + field_mask=d.get("field_mask", None), + setting=_from_dict(d, "setting", PersonalComputeSetting), + ) + + +@dataclass +class UpdatePrivateEndpointRule: + """Properties of the new private endpoint rule. Note that you must approve the endpoint in Azure + portal after initialization.""" + + domain_names: Optional[List[str]] = None + """Only used by private endpoints to customer-managed resources. + + Domain names of target private link service. When updating this field, the full list of target + domain_names must be specified.""" + + def as_dict(self) -> dict: + """Serializes the UpdatePrivateEndpointRule into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.domain_names: + body["domain_names"] = [v for v in self.domain_names] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the UpdatePrivateEndpointRule into a shallow dictionary of its immediate attributes.""" + body = {} + if self.domain_names: + body["domain_names"] = self.domain_names + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> UpdatePrivateEndpointRule: + """Deserializes the UpdatePrivateEndpointRule from a dictionary.""" return cls(domain_names=d.get("domain_names", None)) @@ -5309,6 +6025,40 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateRestrictWorkspaceAdminsSettingReq WorkspaceConf = Dict[str, str] +@dataclass +class WorkspaceNetworkOption: + network_policy_id: Optional[str] = None + """The network policy ID to apply to the workspace. This controls the network access rules for all + serverless compute resources in the workspace. Each workspace can only be linked to one policy + at a time. 
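# --- Editor's illustrative sketch; not part of the generated diff. ---
# Updating domain_names on a private endpoint rule replaces the whole list, so
# the full set of target FQDNs must be supplied each time. Names are placeholders.
from databricks.sdk.service import settings

rule_update = settings.UpdatePrivateEndpointRule(
    domain_names=["svc-a.internal.example.com", "svc-b.internal.example.com"],
)
print(rule_update.as_dict())  # {'domain_names': ['svc-a...', 'svc-b...']}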
If no policy is explicitly assigned, the workspace will use 'default-policy'.""" + + workspace_id: Optional[int] = None + """The workspace ID.""" + + def as_dict(self) -> dict: + """Serializes the WorkspaceNetworkOption into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.network_policy_id is not None: + body["network_policy_id"] = self.network_policy_id + if self.workspace_id is not None: + body["workspace_id"] = self.workspace_id + return body + + def as_shallow_dict(self) -> dict: + """Serializes the WorkspaceNetworkOption into a shallow dictionary of its immediate attributes.""" + body = {} + if self.network_policy_id is not None: + body["network_policy_id"] = self.network_policy_id + if self.workspace_id is not None: + body["workspace_id"] = self.workspace_id + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> WorkspaceNetworkOption: + """Deserializes the WorkspaceNetworkOption from a dictionary.""" + return cls(network_policy_id=d.get("network_policy_id", None), workspace_id=d.get("workspace_id", None)) + + class AccountIpAccessListsAPI: """The Accounts IP Access List API enables account admins to configure IP access lists for access to the account console. @@ -5559,6 +6309,8 @@ def __init__(self, api_client): self._disable_legacy_features = DisableLegacyFeaturesAPI(self._api) self._enable_ip_access_lists = EnableIpAccessListsAPI(self._api) self._esm_enablement_account = EsmEnablementAccountAPI(self._api) + self._llm_proxy_partner_powered_account = LlmProxyPartnerPoweredAccountAPI(self._api) + self._llm_proxy_partner_powered_enforce = LlmProxyPartnerPoweredEnforceAPI(self._api) self._personal_compute = PersonalComputeAPI(self._api) @property @@ -5581,6 +6333,16 @@ def esm_enablement_account(self) -> EsmEnablementAccountAPI: """The enhanced security monitoring setting at the account level controls whether to enable the feature on new workspaces.""" return self._esm_enablement_account + @property + def llm_proxy_partner_powered_account(self) -> LlmProxyPartnerPoweredAccountAPI: + """Determines if partner powered models are enabled or not for a specific account.""" + return self._llm_proxy_partner_powered_account + + @property + def llm_proxy_partner_powered_enforce(self) -> LlmProxyPartnerPoweredEnforceAPI: + """Determines if the account-level partner-powered setting value is enforced upon the workspace-level partner-powered setting.""" + return self._llm_proxy_partner_powered_enforce + @property def personal_compute(self) -> PersonalComputeAPI: """The Personal Compute enablement setting lets you control which users can use the Personal Compute default policy to create compute resources.""" @@ -6316,8 +7078,14 @@ def update(self, allow_missing: bool, setting: DisableLegacyAccess, field_mask: class DisableLegacyDbfsAPI: - """When this setting is on, access to DBFS root and DBFS mounts is disallowed (as well as creation of new - mounts). When the setting is off, all DBFS functionality is enabled""" + """Disabling legacy DBFS has the following implications: + + 1. Access to DBFS root and DBFS mounts is disallowed (as well as the creation of new mounts). 2. Disables + Databricks Runtime versions prior to 13.3LTS. + + When the setting is off, all DBFS functionality is enabled and no restrictions are imposed on Databricks + Runtime versions. 
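# --- Editor's illustrative sketch; not part of the generated diff. ---
# Reading the two new account-level settings through the wiring added above,
# assuming a configured AccountClient exposes them under a.settings.
from databricks.sdk import AccountClient

a = AccountClient()
account_setting = a.settings.llm_proxy_partner_powered_account.get()
enforce_setting = a.settings.llm_proxy_partner_powered_enforce.get()
print(account_setting.boolean_val, enforce_setting.boolean_val)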
This setting can take up to 20 minutes to take effect and requires a manual restart of + all-purpose compute clusters and SQL warehouses.""" def __init__(self, api_client): self._api = api_client @@ -7219,6 +7987,268 @@ def update( self._api.do("PATCH", f"/api/2.0/ip-access-lists/{ip_access_list_id}", body=body, headers=headers) +class LlmProxyPartnerPoweredAccountAPI: + """Determines if partner powered models are enabled or not for a specific account""" + + def __init__(self, api_client): + self._api = api_client + + def get(self, *, etag: Optional[str] = None) -> LlmProxyPartnerPoweredAccount: + """Get the enable partner powered AI features account setting. + + Gets the enable partner powered AI features account setting. + + :param etag: str (optional) + etag used for versioning. The response is at least as fresh as the eTag provided. This is used for + optimistic concurrency control as a way to help prevent simultaneous writes of a setting overwriting + each other. It is strongly suggested that systems make use of the etag in the read -> delete pattern + to perform setting deletions in order to avoid race conditions. That is, get an etag from a GET + request, and pass it with the DELETE request to identify the rule set version you are deleting. + + :returns: :class:`LlmProxyPartnerPoweredAccount` + """ + + query = {} + if etag is not None: + query["etag"] = etag + headers = { + "Accept": "application/json", + } + + res = self._api.do( + "GET", + f"/api/2.0/accounts/{self._api.account_id}/settings/types/llm_proxy_partner_powered/names/default", + query=query, + headers=headers, + ) + return LlmProxyPartnerPoweredAccount.from_dict(res) + + def update( + self, allow_missing: bool, setting: LlmProxyPartnerPoweredAccount, field_mask: str + ) -> LlmProxyPartnerPoweredAccount: + """Update the enable partner powered AI features account setting. + + Updates the enable partner powered AI features account setting. + + :param allow_missing: bool + This should always be set to true for Settings API. Added for AIP compliance. + :param setting: :class:`LlmProxyPartnerPoweredAccount` + :param field_mask: str + The field mask must be a single string, with multiple fields separated by commas (no spaces). The + field path is relative to the resource object, using a dot (`.`) to navigate sub-fields (e.g., + `author.given_name`). Specification of elements in sequence or map fields is not allowed, as only + the entire collection field can be specified. Field names must exactly match the resource field + names. + + A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the + fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the API + changes in the future. 
+ + :returns: :class:`LlmProxyPartnerPoweredAccount` + """ + body = {} + if allow_missing is not None: + body["allow_missing"] = allow_missing + if field_mask is not None: + body["field_mask"] = field_mask + if setting is not None: + body["setting"] = setting.as_dict() + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do( + "PATCH", + f"/api/2.0/accounts/{self._api.account_id}/settings/types/llm_proxy_partner_powered/names/default", + body=body, + headers=headers, + ) + return LlmProxyPartnerPoweredAccount.from_dict(res) + + +class LlmProxyPartnerPoweredEnforceAPI: + """Determines if the account-level partner-powered setting value is enforced upon the workspace-level + partner-powered setting""" + + def __init__(self, api_client): + self._api = api_client + + def get(self, *, etag: Optional[str] = None) -> LlmProxyPartnerPoweredEnforce: + """Get the enforcement status of partner powered AI features account setting. + + Gets the enforcement status of partner powered AI features account setting. + + :param etag: str (optional) + etag used for versioning. The response is at least as fresh as the eTag provided. This is used for + optimistic concurrency control as a way to help prevent simultaneous writes of a setting overwriting + each other. It is strongly suggested that systems make use of the etag in the read -> delete pattern + to perform setting deletions in order to avoid race conditions. That is, get an etag from a GET + request, and pass it with the DELETE request to identify the rule set version you are deleting. + + :returns: :class:`LlmProxyPartnerPoweredEnforce` + """ + + query = {} + if etag is not None: + query["etag"] = etag + headers = { + "Accept": "application/json", + } + + res = self._api.do( + "GET", + f"/api/2.0/accounts/{self._api.account_id}/settings/types/llm_proxy_partner_powered_enforce/names/default", + query=query, + headers=headers, + ) + return LlmProxyPartnerPoweredEnforce.from_dict(res) + + def update( + self, allow_missing: bool, setting: LlmProxyPartnerPoweredEnforce, field_mask: str + ) -> LlmProxyPartnerPoweredEnforce: + """Update the enforcement status of partner powered AI features account setting. + + Updates the enable enforcement status of partner powered AI features account setting. + + :param allow_missing: bool + This should always be set to true for Settings API. Added for AIP compliance. + :param setting: :class:`LlmProxyPartnerPoweredEnforce` + :param field_mask: str + The field mask must be a single string, with multiple fields separated by commas (no spaces). The + field path is relative to the resource object, using a dot (`.`) to navigate sub-fields (e.g., + `author.given_name`). Specification of elements in sequence or map fields is not allowed, as only + the entire collection field can be specified. Field names must exactly match the resource field + names. + + A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the + fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the API + changes in the future. 
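# --- Editor's illustrative sketch; not part of the generated diff. ---
# get() accepts an optional etag: passing one from a previous response
# guarantees the read is at least as fresh as that setting version.
from databricks.sdk import AccountClient

a = AccountClient()
first = a.settings.llm_proxy_partner_powered_account.get()
fresh = a.settings.llm_proxy_partner_powered_account.get(etag=first.etag)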
+ + :returns: :class:`LlmProxyPartnerPoweredEnforce` + """ + body = {} + if allow_missing is not None: + body["allow_missing"] = allow_missing + if field_mask is not None: + body["field_mask"] = field_mask + if setting is not None: + body["setting"] = setting.as_dict() + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do( + "PATCH", + f"/api/2.0/accounts/{self._api.account_id}/settings/types/llm_proxy_partner_powered_enforce/names/default", + body=body, + headers=headers, + ) + return LlmProxyPartnerPoweredEnforce.from_dict(res) + + +class LlmProxyPartnerPoweredWorkspaceAPI: + """Determines if partner powered models are enabled or not for a specific workspace""" + + def __init__(self, api_client): + self._api = api_client + + def delete(self, *, etag: Optional[str] = None) -> DeleteLlmProxyPartnerPoweredWorkspaceResponse: + """Delete the enable partner powered AI features workspace setting. + + Reverts the enable partner powered AI features workspace setting to its default value. + + :param etag: str (optional) + etag used for versioning. The response is at least as fresh as the eTag provided. This is used for + optimistic concurrency control as a way to help prevent simultaneous writes of a setting overwriting + each other. It is strongly suggested that systems make use of the etag in the read -> delete pattern + to perform setting deletions in order to avoid race conditions. That is, get an etag from a GET + request, and pass it with the DELETE request to identify the rule set version you are deleting. + + :returns: :class:`DeleteLlmProxyPartnerPoweredWorkspaceResponse` + """ + + query = {} + if etag is not None: + query["etag"] = etag + headers = { + "Accept": "application/json", + } + + res = self._api.do( + "DELETE", "/api/2.0/settings/types/llm_proxy_partner_powered/names/default", query=query, headers=headers + ) + return DeleteLlmProxyPartnerPoweredWorkspaceResponse.from_dict(res) + + def get(self, *, etag: Optional[str] = None) -> LlmProxyPartnerPoweredWorkspace: + """Get the enable partner powered AI features workspace setting. + + Gets the enable partner powered AI features workspace setting. + + :param etag: str (optional) + etag used for versioning. The response is at least as fresh as the eTag provided. This is used for + optimistic concurrency control as a way to help prevent simultaneous writes of a setting overwriting + each other. It is strongly suggested that systems make use of the etag in the read -> delete pattern + to perform setting deletions in order to avoid race conditions. That is, get an etag from a GET + request, and pass it with the DELETE request to identify the rule set version you are deleting. + + :returns: :class:`LlmProxyPartnerPoweredWorkspace` + """ + + query = {} + if etag is not None: + query["etag"] = etag + headers = { + "Accept": "application/json", + } + + res = self._api.do( + "GET", "/api/2.0/settings/types/llm_proxy_partner_powered/names/default", query=query, headers=headers + ) + return LlmProxyPartnerPoweredWorkspace.from_dict(res) + + def update( + self, allow_missing: bool, setting: LlmProxyPartnerPoweredWorkspace, field_mask: str + ) -> LlmProxyPartnerPoweredWorkspace: + """Update the enable partner powered AI features workspace setting. + + Updates the enable partner powered AI features workspace setting. + + :param allow_missing: bool + This should always be set to true for Settings API. Added for AIP compliance. 
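# --- Editor's illustrative sketch; not part of the generated diff. ---
# Reverting the workspace-level setting to its default with delete(), using the
# read -> delete etag pattern the docstrings recommend.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
current = w.settings.llm_proxy_partner_powered_workspace.get()
resp = w.settings.llm_proxy_partner_powered_workspace.delete(etag=current.etag)
print(resp.etag)  # etag of the reverted setting version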
+ :param setting: :class:`LlmProxyPartnerPoweredWorkspace` + :param field_mask: str + The field mask must be a single string, with multiple fields separated by commas (no spaces). The + field path is relative to the resource object, using a dot (`.`) to navigate sub-fields (e.g., + `author.given_name`). Specification of elements in sequence or map fields is not allowed, as only + the entire collection field can be specified. Field names must exactly match the resource field + names. + + A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the + fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the API + changes in the future. + + :returns: :class:`LlmProxyPartnerPoweredWorkspace` + """ + body = {} + if allow_missing is not None: + body["allow_missing"] = allow_missing + if field_mask is not None: + body["field_mask"] = field_mask + if setting is not None: + body["setting"] = setting.as_dict() + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do( + "PATCH", "/api/2.0/settings/types/llm_proxy_partner_powered/names/default", body=body, headers=headers + ) + return LlmProxyPartnerPoweredWorkspace.from_dict(res) + + class NetworkConnectivityAPI: """These APIs provide configurations for the network connectivity of your workspaces for serverless compute resources. This API provides stable subnets for your workspace so that you can configure your firewalls on @@ -7519,6 +8549,134 @@ def update_ncc_azure_private_endpoint_rule_public( return NccAzurePrivateEndpointRule.from_dict(res) +class NetworkPoliciesAPI: + """These APIs manage network policies for this account. Network policies control which network destinations + can be accessed from the Databricks environment. Each Databricks account includes a default policy named + 'default-policy'. 'default-policy' is associated with any workspace lacking an explicit network policy + assignment, and is automatically associated with each newly created workspace. 'default-policy' is + reserved and cannot be deleted, but it can be updated to customize the default network access rules for + your account.""" + + def __init__(self, api_client): + self._api = api_client + + def create_network_policy_rpc(self, network_policy: AccountNetworkPolicy) -> AccountNetworkPolicy: + """Create a network policy. + + Creates a new network policy to manage which network destinations can be accessed from the Databricks + environment. + + :param network_policy: :class:`AccountNetworkPolicy` + + :returns: :class:`AccountNetworkPolicy` + """ + body = network_policy.as_dict() + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do( + "POST", f"/api/2.0/accounts/{self._api.account_id}/network-policies", body=body, headers=headers + ) + return AccountNetworkPolicy.from_dict(res) + + def delete_network_policy_rpc(self, network_policy_id: str): + """Delete a network policy. + + Deletes a network policy. Cannot be called on 'default-policy'. + + :param network_policy_id: str + The unique identifier of the network policy to delete. + + + """ + + headers = { + "Accept": "application/json", + } + + self._api.do( + "DELETE", f"/api/2.0/accounts/{self._api.account_id}/network-policies/{network_policy_id}", headers=headers + ) + + def get_network_policy_rpc(self, network_policy_id: str) -> AccountNetworkPolicy: + """Get a network policy. + + Gets a network policy. 
+ + :param network_policy_id: str + The unique identifier of the network policy to retrieve. + + :returns: :class:`AccountNetworkPolicy` + """ + + headers = { + "Accept": "application/json", + } + + res = self._api.do( + "GET", f"/api/2.0/accounts/{self._api.account_id}/network-policies/{network_policy_id}", headers=headers + ) + return AccountNetworkPolicy.from_dict(res) + + def list_network_policies_rpc(self, *, page_token: Optional[str] = None) -> Iterator[AccountNetworkPolicy]: + """List network policies. + + Gets an array of network policies. + + :param page_token: str (optional) + Pagination token to go to next page based on previous query. + + :returns: Iterator over :class:`AccountNetworkPolicy` + """ + + query = {} + if page_token is not None: + query["page_token"] = page_token + headers = { + "Accept": "application/json", + } + + while True: + json = self._api.do( + "GET", f"/api/2.0/accounts/{self._api.account_id}/network-policies", query=query, headers=headers + ) + if "items" in json: + for v in json["items"]: + yield AccountNetworkPolicy.from_dict(v) + if "next_page_token" not in json or not json["next_page_token"]: + return + query["page_token"] = json["next_page_token"] + + def update_network_policy_rpc( + self, network_policy_id: str, network_policy: AccountNetworkPolicy + ) -> AccountNetworkPolicy: + """Update a network policy. + + Updates a network policy. This allows you to modify the configuration of a network policy. + + :param network_policy_id: str + The unique identifier for the network policy. + :param network_policy: :class:`AccountNetworkPolicy` + + :returns: :class:`AccountNetworkPolicy` + """ + body = network_policy.as_dict() + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do( + "PUT", + f"/api/2.0/accounts/{self._api.account_id}/network-policies/{network_policy_id}", + body=body, + headers=headers, + ) + return AccountNetworkPolicy.from_dict(res) + + class NotificationDestinationsAPI: """The notification destinations API lets you programmatically manage a workspace's notification destinations. 
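# --- Editor's illustrative sketch; not part of the generated diff. ---
# Creating a policy and walking the paginated list. The a.network_policies
# accessor name and the AccountNetworkPolicy fields shown (network_policy_id,
# egress) are assumptions; AccountNetworkPolicy itself is defined elsewhere in
# this module.
from databricks.sdk import AccountClient
from databricks.sdk.service import settings

a = AccountClient()

policy = settings.AccountNetworkPolicy(
    network_policy_id="restricted-egress",  # placeholder; 'default-policy' is reserved
    egress=settings.NetworkPolicyEgress(
        network_access=settings.EgressNetworkPolicyNetworkAccessPolicy(
            restriction_mode=settings.EgressNetworkPolicyNetworkAccessPolicyRestrictionMode.RESTRICTED_ACCESS
        )
    ),
)
created = a.network_policies.create_network_policy_rpc(network_policy=policy)

# list_network_policies_rpc() follows next_page_token internally and yields items.
for p in a.network_policies.list_network_policies_rpc():
    print(p.network_policy_id)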
Notification destinations are used to send notifications for query alerts and jobs to @@ -7894,6 +9052,7 @@ def __init__(self, api_client): self._enable_notebook_table_clipboard = EnableNotebookTableClipboardAPI(self._api) self._enable_results_downloading = EnableResultsDownloadingAPI(self._api) self._enhanced_security_monitoring = EnhancedSecurityMonitoringAPI(self._api) + self._llm_proxy_partner_powered_workspace = LlmProxyPartnerPoweredWorkspaceAPI(self._api) self._restrict_workspace_admins = RestrictWorkspaceAdminsAPI(self._api) @property @@ -7928,7 +9087,7 @@ def disable_legacy_access(self) -> DisableLegacyAccessAPI: @property def disable_legacy_dbfs(self) -> DisableLegacyDbfsAPI: - """When this setting is on, access to DBFS root and DBFS mounts is disallowed (as well as creation of new mounts).""" + """Disabling legacy DBFS has the following implications: 1.""" return self._disable_legacy_dbfs @property @@ -7951,6 +9110,11 @@ def enhanced_security_monitoring(self) -> EnhancedSecurityMonitoringAPI: """Controls whether enhanced security monitoring is enabled for the current workspace.""" return self._enhanced_security_monitoring + @property + def llm_proxy_partner_powered_workspace(self) -> LlmProxyPartnerPoweredWorkspaceAPI: + """Determines if partner powered models are enabled or not for a specific workspace.""" + return self._llm_proxy_partner_powered_workspace + @property def restrict_workspace_admins(self) -> RestrictWorkspaceAdminsAPI: """The Restrict Workspace Admins setting lets you control the capabilities of workspace admins.""" @@ -8247,3 +9411,64 @@ def set_status(self, contents: Dict[str, str]): } self._api.do("PATCH", "/api/2.0/workspace-conf", body=contents, headers=headers) + + +class WorkspaceNetworkConfigurationAPI: + """These APIs allow configuration of network settings for Databricks workspaces. Each workspace is always + associated with exactly one network policy that controls which network destinations can be accessed from + the Databricks environment. By default, workspaces are associated with the 'default-policy' network + policy. You cannot create or delete a workspace's network configuration, only update it to associate the + workspace with a different policy.""" + + def __init__(self, api_client): + self._api = api_client + + def get_workspace_network_option_rpc(self, workspace_id: int) -> WorkspaceNetworkOption: + """Get workspace network configuration. + + Gets the network configuration for a workspace. Every workspace has exactly one network policy + binding, with 'default-policy' used if no explicit assignment exists. + + :param workspace_id: int + The workspace ID. + + :returns: :class:`WorkspaceNetworkOption` + """ + + headers = { + "Accept": "application/json", + } + + res = self._api.do( + "GET", f"/api/2.0/accounts/{self._api.account_id}/workspaces/{workspace_id}/network", headers=headers + ) + return WorkspaceNetworkOption.from_dict(res) + + def update_workspace_network_option_rpc( + self, workspace_id: int, workspace_network_option: WorkspaceNetworkOption + ) -> WorkspaceNetworkOption: + """Update workspace network configuration. + + Updates the network configuration for a workspace. This operation associates the workspace with the + specified network policy. To revert to the default policy, specify 'default-policy' as the + network_policy_id. + + :param workspace_id: int + The workspace ID. 
+ :param workspace_network_option: :class:`WorkspaceNetworkOption` + + :returns: :class:`WorkspaceNetworkOption` + """ + body = workspace_network_option.as_dict() + headers = { + "Accept": "application/json", + "Content-Type": "application/json", + } + + res = self._api.do( + "PUT", + f"/api/2.0/accounts/{self._api.account_id}/workspaces/{workspace_id}/network", + body=body, + headers=headers, + ) + return WorkspaceNetworkOption.from_dict(res) diff --git a/databricks/sdk/service/sharing.py b/databricks/sdk/service/sharing.py index 7325e5fd..09bf080f 100755 --- a/databricks/sdk/service/sharing.py +++ b/databricks/sdk/service/sharing.py @@ -534,6 +534,74 @@ def from_dict(cls, d: Dict[str, Any]) -> DeltaSharingTableDependency: return cls(schema_name=d.get("schema_name", None), table_name=d.get("table_name", None)) +@dataclass +class FederationPolicy: + comment: Optional[str] = None + """Description of the policy. This is a user-provided description.""" + + create_time: Optional[str] = None + """System-generated timestamp indicating when the policy was created.""" + + id: Optional[str] = None + """Unique, immutable system-generated identifier for the federation policy.""" + + name: Optional[str] = None + """Name of the federation policy. A recipient can have multiple policies with different names. The + name must contain only lowercase alphanumeric characters, numbers, and hyphens.""" + + oidc_policy: Optional[OidcFederationPolicy] = None + """Specifies the policy to use for validating OIDC claims in the federated tokens.""" + + update_time: Optional[str] = None + """System-generated timestamp indicating when the policy was last updated.""" + + def as_dict(self) -> dict: + """Serializes the FederationPolicy into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.comment is not None: + body["comment"] = self.comment + if self.create_time is not None: + body["create_time"] = self.create_time + if self.id is not None: + body["id"] = self.id + if self.name is not None: + body["name"] = self.name + if self.oidc_policy: + body["oidc_policy"] = self.oidc_policy.as_dict() + if self.update_time is not None: + body["update_time"] = self.update_time + return body + + def as_shallow_dict(self) -> dict: + """Serializes the FederationPolicy into a shallow dictionary of its immediate attributes.""" + body = {} + if self.comment is not None: + body["comment"] = self.comment + if self.create_time is not None: + body["create_time"] = self.create_time + if self.id is not None: + body["id"] = self.id + if self.name is not None: + body["name"] = self.name + if self.oidc_policy: + body["oidc_policy"] = self.oidc_policy + if self.update_time is not None: + body["update_time"] = self.update_time + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> FederationPolicy: + """Deserializes the FederationPolicy from a dictionary.""" + return cls( + comment=d.get("comment", None), + create_time=d.get("create_time", None), + id=d.get("id", None), + name=d.get("name", None), + oidc_policy=_from_dict(d, "oidc_policy", OidcFederationPolicy), + update_time=d.get("update_time", None), + ) + + @dataclass class FunctionParameterInfo: """Represents a parameter of a function. 
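# --- Editor's illustrative sketch; not part of the generated diff. ---
# Re-pointing a workspace at a different policy and reverting it to the
# reserved 'default-policy'. The a.workspace_network_configuration accessor
# name is an assumption; the workspace ID is a placeholder.
from databricks.sdk import AccountClient

a = AccountClient()
workspace_id = 1234567890

option = a.workspace_network_configuration.get_workspace_network_option_rpc(workspace_id)
option.network_policy_id = "default-policy"  # revert to the account default
a.workspace_network_configuration.update_workspace_network_option_rpc(
    workspace_id=workspace_id, workspace_network_option=option
)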
The same message is used for both input and output @@ -805,6 +873,38 @@ def from_dict(cls, d: Dict[str, Any]) -> IpAccessList: return cls(allowed_ip_addresses=d.get("allowed_ip_addresses", None)) +@dataclass +class ListFederationPoliciesResponse: + next_page_token: Optional[str] = None + + policies: Optional[List[FederationPolicy]] = None + + def as_dict(self) -> dict: + """Serializes the ListFederationPoliciesResponse into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.next_page_token is not None: + body["next_page_token"] = self.next_page_token + if self.policies: + body["policies"] = [v.as_dict() for v in self.policies] + return body + + def as_shallow_dict(self) -> dict: + """Serializes the ListFederationPoliciesResponse into a shallow dictionary of its immediate attributes.""" + body = {} + if self.next_page_token is not None: + body["next_page_token"] = self.next_page_token + if self.policies: + body["policies"] = self.policies + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> ListFederationPoliciesResponse: + """Deserializes the ListFederationPoliciesResponse from a dictionary.""" + return cls( + next_page_token=d.get("next_page_token", None), policies=_repeated_dict(d, "policies", FederationPolicy) + ) + + @dataclass class ListProviderShareAssetsResponse: """Response to ListProviderShareAssets, which contains the list of assets of a share.""" @@ -1061,6 +1161,74 @@ def from_dict(cls, d: Dict[str, Any]) -> NotebookFile: ) +@dataclass +class OidcFederationPolicy: + """Specifies the policy to use for validating OIDC claims in your federated tokens from Delta + Sharing Clients. Refer to https://docs.databricks.com/en/delta-sharing/create-recipient-oidc-fed + for more details.""" + + issuer: str + """The required token issuer, as specified in the 'iss' claim of federated tokens.""" + + subject_claim: str + """The claim that contains the subject of the token. Depending on the identity provider and the use + case (U2M or M2M), this can vary: - For Entra ID (AAD): * U2M flow (group access): Use `groups`. + * U2M flow (user access): Use `oid`. * M2M flow (OAuth App access): Use `azp`. - For other IdPs, + refer to the specific IdP documentation. + + Supported `subject_claim` values are: - `oid`: Object ID of the user. - `azp`: Client ID of the + OAuth app. - `groups`: Object ID of the group. - `sub`: Subject identifier for other use cases.""" + + subject: str + """The required token subject, as specified in the subject claim of federated tokens. The subject + claim identifies the identity of the user or machine accessing the resource. Examples for Entra + ID (AAD): - U2M flow (group access): If the subject claim is `groups`, this must be the Object + ID of the group in Entra ID. - U2M flow (user access): If the subject claim is `oid`, this must + be the Object ID of the user in Entra ID. - M2M flow (OAuth App access): If the subject claim is + `azp`, this must be the client ID of the OAuth app registered in Entra ID.""" + + audiences: Optional[List[str]] = None + """The allowed token audiences, as specified in the 'aud' claim of federated tokens. The audience + identifier is intended to represent the recipient of the token. Can be any non-empty string + value. 
As long as the audience in the token matches at least one audience in the policy,""" + + def as_dict(self) -> dict: + """Serializes the OidcFederationPolicy into a dictionary suitable for use as a JSON request body.""" + body = {} + if self.audiences: + body["audiences"] = [v for v in self.audiences] + if self.issuer is not None: + body["issuer"] = self.issuer + if self.subject is not None: + body["subject"] = self.subject + if self.subject_claim is not None: + body["subject_claim"] = self.subject_claim + return body + + def as_shallow_dict(self) -> dict: + """Serializes the OidcFederationPolicy into a shallow dictionary of its immediate attributes.""" + body = {} + if self.audiences: + body["audiences"] = self.audiences + if self.issuer is not None: + body["issuer"] = self.issuer + if self.subject is not None: + body["subject"] = self.subject + if self.subject_claim is not None: + body["subject_claim"] = self.subject_claim + return body + + @classmethod + def from_dict(cls, d: Dict[str, Any]) -> OidcFederationPolicy: + """Deserializes the OidcFederationPolicy from a dictionary.""" + return cls( + audiences=d.get("audiences", None), + issuer=d.get("issuer", None), + subject=d.get("subject", None), + subject_claim=d.get("subject_claim", None), + ) + + @dataclass class Partition: values: Optional[List[PartitionValue]] = None @@ -2232,6 +2400,9 @@ class Table: internal_attributes: Optional[TableInternalAttributes] = None """Internal information for D2D sharing that should not be disclosed to external users.""" + materialization_namespace: Optional[str] = None + """The catalog and schema of the materialized table""" + materialized_table_name: Optional[str] = None """The name of a materialized table.""" @@ -2259,6 +2430,8 @@ def as_dict(self) -> dict: body["id"] = self.id if self.internal_attributes: body["internal_attributes"] = self.internal_attributes.as_dict() + if self.materialization_namespace is not None: + body["materialization_namespace"] = self.materialization_namespace if self.materialized_table_name is not None: body["materialized_table_name"] = self.materialized_table_name if self.name is not None: @@ -2282,6 +2455,8 @@ def as_shallow_dict(self) -> dict: body["id"] = self.id if self.internal_attributes: body["internal_attributes"] = self.internal_attributes + if self.materialization_namespace is not None: + body["materialization_namespace"] = self.materialization_namespace if self.materialized_table_name is not None: body["materialized_table_name"] = self.materialized_table_name if self.name is not None: @@ -2303,6 +2478,7 @@ def from_dict(cls, d: Dict[str, Any]) -> Table: comment=d.get("comment", None), id=d.get("id", None), internal_attributes=_from_dict(d, "internal_attributes", TableInternalAttributes), + materialization_namespace=d.get("materialization_namespace", None), materialized_table_name=d.get("materialized_table_name", None), name=d.get("name", None), schema=d.get("schema", None), @@ -2592,6 +2768,9 @@ class UpdateSharePermissions: name: Optional[str] = None """The name of the share.""" + omit_permissions_list: Optional[bool] = None + """Optional. 
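A quick sketch of how the two new dataclasses above fit together (illustrative only, not part of the diff; the issuer, subject, and audience values are hypothetical placeholders for an Entra ID M2M setup):

from databricks.sdk.service import sharing

# Build the OIDC claim-validation rules; subject_claim="azp" is the M2M case,
# where the subject is the client ID of the OAuth app (placeholder below).
oidc = sharing.OidcFederationPolicy(
    issuer="https://login.microsoftonline.com/<tenant-id>/v2.0",  # 'iss' claim
    subject_claim="azp",
    subject="<oauth-app-client-id>",
    audiences=["<audience>"],  # optional 'aud' allow-list
)
policy = sharing.FederationPolicy(name="example-m2m-policy", oidc_policy=oidc)

# as_dict()/from_dict() round-trip the JSON wire format used by the service.
restored = sharing.FederationPolicy.from_dict(policy.as_dict())
assert restored.oidc_policy.subject == oidc.subject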
 @dataclass
 class Partition:
     values: Optional[List[PartitionValue]] = None
@@ -2232,6 +2400,9 @@ class Table:
     internal_attributes: Optional[TableInternalAttributes] = None
     """Internal information for D2D sharing that should not be disclosed to external users."""
 
+    materialization_namespace: Optional[str] = None
+    """The catalog and schema of the materialized table."""
+
     materialized_table_name: Optional[str] = None
     """The name of a materialized table."""
 
@@ -2259,6 +2430,8 @@ def as_dict(self) -> dict:
             body["id"] = self.id
         if self.internal_attributes:
             body["internal_attributes"] = self.internal_attributes.as_dict()
+        if self.materialization_namespace is not None:
+            body["materialization_namespace"] = self.materialization_namespace
         if self.materialized_table_name is not None:
             body["materialized_table_name"] = self.materialized_table_name
         if self.name is not None:
@@ -2282,6 +2455,8 @@ def as_shallow_dict(self) -> dict:
             body["id"] = self.id
         if self.internal_attributes:
             body["internal_attributes"] = self.internal_attributes
+        if self.materialization_namespace is not None:
+            body["materialization_namespace"] = self.materialization_namespace
         if self.materialized_table_name is not None:
             body["materialized_table_name"] = self.materialized_table_name
         if self.name is not None:
@@ -2303,6 +2478,7 @@ def from_dict(cls, d: Dict[str, Any]) -> Table:
             comment=d.get("comment", None),
             id=d.get("id", None),
             internal_attributes=_from_dict(d, "internal_attributes", TableInternalAttributes),
+            materialization_namespace=d.get("materialization_namespace", None),
             materialized_table_name=d.get("materialized_table_name", None),
             name=d.get("name", None),
             schema=d.get("schema", None),
@@ -2592,6 +2768,9 @@ class UpdateSharePermissions:
     name: Optional[str] = None
     """The name of the share."""
 
+    omit_permissions_list: Optional[bool] = None
+    """Optional. Whether to omit the latest permissions list of the share from the response."""
+
     def as_dict(self) -> dict:
         """Serializes the UpdateSharePermissions into a dictionary suitable for use as a JSON request body."""
         body = {}
@@ -2599,6 +2778,8 @@ def as_dict(self) -> dict:
             body["changes"] = [v.as_dict() for v in self.changes]
         if self.name is not None:
             body["name"] = self.name
+        if self.omit_permissions_list is not None:
+            body["omit_permissions_list"] = self.omit_permissions_list
         return body
 
     def as_shallow_dict(self) -> dict:
@@ -2608,12 +2789,18 @@ def as_shallow_dict(self) -> dict:
             body["changes"] = self.changes
         if self.name is not None:
             body["name"] = self.name
+        if self.omit_permissions_list is not None:
+            body["omit_permissions_list"] = self.omit_permissions_list
         return body
 
     @classmethod
     def from_dict(cls, d: Dict[str, Any]) -> UpdateSharePermissions:
         """Deserializes the UpdateSharePermissions from a dictionary."""
-        return cls(changes=_repeated_dict(d, "changes", PermissionsChange), name=d.get("name", None))
+        return cls(
+            changes=_repeated_dict(d, "changes", PermissionsChange),
+            name=d.get("name", None),
+            omit_permissions_list=d.get("omit_permissions_list", None),
+        )
 
 
 @dataclass
@@ -3088,6 +3275,197 @@ def retrieve_token(self, activation_url: str) -> RetrieveTokenResponse:
         return RetrieveTokenResponse.from_dict(res)
 
 
+class RecipientFederationPoliciesAPI:
+    """The Recipient Federation Policies APIs are only applicable in the open sharing model where the recipient
+    object has the authentication type of `OIDC_RECIPIENT`, enabling data sharing from Databricks to
+    non-Databricks recipients. OIDC Token Federation enables secure, secret-less authentication for accessing
+    Delta Sharing servers. Users and applications authenticate using short-lived OIDC tokens issued by their
+    own Identity Provider (IdP), such as Azure Entra ID or Okta, without the need for managing static
+    credentials or client secrets. A federation policy defines how non-Databricks recipients authenticate
+    using OIDC tokens. It validates the OIDC claims in federated tokens and is set at the recipient level. The
+    caller must be the owner of the recipient to create or manage a federation policy. Federation policies
+    support the following scenarios: - User-to-Machine (U2M) flow: A user accesses Delta Shares using their
+    own identity, such as connecting through PowerBI Delta Sharing Client. - Machine-to-Machine (M2M) flow: An
+    application accesses Delta Shares using its own identity, typically for automation tasks like nightly jobs
+    through Python Delta Sharing Client. OIDC Token Federation enables fine-grained access control, supports
+    Multi-Factor Authentication (MFA), and enhances security by minimizing the risk of credential leakage
+    through the use of short-lived, expiring tokens. It is designed for strong identity governance, secure
+    cross-platform data sharing, and reduced operational overhead for credential management.
+
+    For more information, see
+    https://www.databricks.com/blog/announcing-oidc-token-federation-enhanced-delta-sharing-security and
+    https://docs.databricks.com/en/delta-sharing/create-recipient-oidc-fed"""
+
+    def __init__(self, api_client):
+        self._api = api_client
+
+    def create(self, recipient_name: str, policy: FederationPolicy) -> FederationPolicy:
+        """Create recipient federation policy.
+
+        Create a federation policy for an OIDC_FEDERATION recipient for sharing data from Databricks to
+        non-Databricks recipients. The caller must be the owner of the recipient. When sharing data from
+        Databricks to non-Databricks clients, you can define a federation policy to authenticate
+        non-Databricks recipients. The federation policy validates OIDC claims in federated tokens and is
+        defined at the recipient level. This enables secretless sharing clients to authenticate using OIDC
+        tokens.
+
+        Supported scenarios for federation policies: 1. **User-to-Machine (U2M) flow** (e.g., PowerBI): A user
+        accesses a resource using their own identity. 2. **Machine-to-Machine (M2M) flow** (e.g., OAuth App):
+        An OAuth App accesses a resource using its own identity, typically for tasks like running nightly
+        jobs.
+
+        For an overview, refer to: - Blog post: Overview of feature:
+        https://www.databricks.com/blog/announcing-oidc-token-federation-enhanced-delta-sharing-security
+
+        For detailed configuration guides based on your use case: - Creating a Federation Policy as a
+        provider: https://docs.databricks.com/en/delta-sharing/create-recipient-oidc-fed - Configuration and
+        usage for Machine-to-Machine (M2M) applications (e.g., Python Delta Sharing Client):
+        https://docs.databricks.com/aws/en/delta-sharing/sharing-over-oidc-m2m - Configuration and usage for
+        User-to-Machine (U2M) applications (e.g., PowerBI):
+        https://docs.databricks.com/aws/en/delta-sharing/sharing-over-oidc-u2m
+
+        :param recipient_name: str
+          Name of the recipient. This is the name of the recipient for which the policy is being created.
+        :param policy: :class:`FederationPolicy`
+
+        :returns: :class:`FederationPolicy`
+        """
+        body = policy.as_dict()
+        headers = {
+            "Accept": "application/json",
+            "Content-Type": "application/json",
+        }
+
+        res = self._api.do(
+            "POST", f"/api/2.0/data-sharing/recipients/{recipient_name}/federation-policies", body=body, headers=headers
+        )
+        return FederationPolicy.from_dict(res)
+
+    def delete(self, recipient_name: str, name: str):
+        """Delete recipient federation policy.
+
+        Deletes an existing federation policy for an OIDC_FEDERATION recipient. The caller must be the owner
+        of the recipient.
+
+        :param recipient_name: str
+          Name of the recipient. This is the name of the recipient for which the policy is being deleted.
+        :param name: str
+          Name of the policy. This is the name of the policy to be deleted.
+
+
+        """
+
+        headers = {
+            "Accept": "application/json",
+        }
+
+        self._api.do(
+            "DELETE", f"/api/2.0/data-sharing/recipients/{recipient_name}/federation-policies/{name}", headers=headers
+        )
+
+    def get_federation_policy(self, recipient_name: str, name: str) -> FederationPolicy:
+        """Get recipient federation policy.
+
+        Reads an existing federation policy for an OIDC_FEDERATION recipient for sharing data from Databricks
+        to non-Databricks recipients. The caller must have read access to the recipient.
+
+        :param recipient_name: str
+          Name of the recipient. This is the name of the recipient for which the policy is being retrieved.
+        :param name: str
+          Name of the policy. This is the name of the policy to be retrieved.
+
+        :returns: :class:`FederationPolicy`
+        """
+
+        headers = {
+            "Accept": "application/json",
+        }
+
+        res = self._api.do(
+            "GET", f"/api/2.0/data-sharing/recipients/{recipient_name}/federation-policies/{name}", headers=headers
+        )
+        return FederationPolicy.from_dict(res)
+
+    def list(
+        self, recipient_name: str, *, max_results: Optional[int] = None, page_token: Optional[str] = None
+    ) -> Iterator[FederationPolicy]:
+        """List recipient federation policies.
+
+        Lists federation policies for an OIDC_FEDERATION recipient for sharing data from Databricks to
+        non-Databricks recipients. The caller must have read access to the recipient.
+
+        :param recipient_name: str
+          Name of the recipient. This is the name of the recipient for which the policies are being listed.
+        :param max_results: int (optional)
+        :param page_token: str (optional)
+
+        :returns: Iterator over :class:`FederationPolicy`
+        """
+
+        query = {}
+        if max_results is not None:
+            query["max_results"] = max_results
+        if page_token is not None:
+            query["page_token"] = page_token
+        headers = {
+            "Accept": "application/json",
+        }
+
+        while True:
+            json = self._api.do(
+                "GET",
+                f"/api/2.0/data-sharing/recipients/{recipient_name}/federation-policies",
+                query=query,
+                headers=headers,
+            )
+            if "policies" in json:
+                for v in json["policies"]:
+                    yield FederationPolicy.from_dict(v)
+            if "next_page_token" not in json or not json["next_page_token"]:
+                return
+            query["page_token"] = json["next_page_token"]
+
+    def update(
+        self, recipient_name: str, name: str, policy: FederationPolicy, *, update_mask: Optional[str] = None
+    ) -> FederationPolicy:
+        """Update recipient federation policy.
+
+        Updates an existing federation policy for an OIDC_FEDERATION recipient. The caller must be the owner
+        of the recipient.
+
+        :param recipient_name: str
+          Name of the recipient. This is the name of the recipient for which the policy is being updated.
+        :param name: str
+          Name of the policy. This is the current name of the policy.
+        :param policy: :class:`FederationPolicy`
+        :param update_mask: str (optional)
+          The field mask specifies which fields of the policy to update. To specify multiple fields in the
+          field mask, use comma as the separator (no space). The special value '*' indicates that all fields
+          should be updated (full replacement). If unspecified, all fields that are set in the policy provided
+          in the update request will overwrite the corresponding fields in the existing policy. Example value:
+          'comment,oidc_policy.audiences'.
+
+        :returns: :class:`FederationPolicy`
+        """
+        body = policy.as_dict()
+        query = {}
+        if update_mask is not None:
+            query["update_mask"] = update_mask
+        headers = {
+            "Accept": "application/json",
+            "Content-Type": "application/json",
+        }
+
+        res = self._api.do(
+            "PATCH",
+            f"/api/2.0/data-sharing/recipients/{recipient_name}/federation-policies/{name}",
+            query=query,
+            body=body,
+            headers=headers,
+        )
+        return FederationPolicy.from_dict(res)
+
+
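For orientation, a minimal end-to-end usage sketch of the new service above (illustrative only, not part of the diff; the recipient name, policy name, and Entra ID values are hypothetical):

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import sharing

w = WorkspaceClient()

# The recipient must use OIDC federation; "example-recipient" is a placeholder.
created = w.recipient_federation_policies.create(
    recipient_name="example-recipient",
    policy=sharing.FederationPolicy(
        name="example-m2m-policy",
        oidc_policy=sharing.OidcFederationPolicy(
            issuer="https://login.microsoftonline.com/<tenant-id>/v2.0",
            subject_claim="azp",
            subject="<oauth-app-client-id>",
        ),
    ),
)

# update() sends update_mask as a query parameter; only the listed fields change.
created.comment = "M2M policy for the nightly job"
w.recipient_federation_policies.update(
    recipient_name="example-recipient",
    name="example-m2m-policy",
    policy=created,
    update_mask="comment",
)

# list() follows next_page_token transparently and yields FederationPolicy items.
for p in w.recipient_federation_policies.list(recipient_name="example-recipient"):
    print(p.name, p.create_time)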
 class RecipientsAPI:
     """A recipient is an object you create using :method:recipients/create to represent an organization which you want
     to allow access shares. The way how sharing works differs depending on whether or not your recipient
@@ -3604,7 +3982,11 @@ def update(
         return ShareInfo.from_dict(res)
 
     def update_permissions(
-        self, name: str, *, changes: Optional[List[PermissionsChange]] = None
+        self,
+        name: str,
+        *,
+        changes: Optional[List[PermissionsChange]] = None,
+        omit_permissions_list: Optional[bool] = None,
     ) -> UpdateSharePermissionsResponse:
         """Update permissions.
 
@@ -3618,12 +4000,16 @@
           The name of the share.
         :param changes: List[:class:`PermissionsChange`] (optional)
           Array of permission changes.
+        :param omit_permissions_list: bool (optional)
+          Whether to omit the latest permissions list of the share from the response.
 
         :returns: :class:`UpdateSharePermissionsResponse`
         """
         body = {}
         if changes is not None:
             body["changes"] = [v.as_dict() for v in changes]
+        if omit_permissions_list is not None:
+            body["omit_permissions_list"] = omit_permissions_list
         headers = {
             "Accept": "application/json",
             "Content-Type": "application/json",
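A short sketch of the extended update_permissions() call above (illustrative only, not part of the diff; the share and principal names are hypothetical, and the PermissionsChange fields shown — principal and add — are assumed from the SDK's sharing types):

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import sharing

w = WorkspaceClient()

# omit_permissions_list=True asks the server not to echo the full permissions
# list back, which keeps responses small for shares with many grants.
resp = w.shares.update_permissions(
    name="example-share",
    changes=[sharing.PermissionsChange(principal="example-group", add=["SELECT"])],
    omit_permissions_list=True,
)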
diff --git a/databricks/sdk/service/sql.py b/databricks/sdk/service/sql.py
index c9baeee5..0cef4c2e 100755
--- a/databricks/sdk/service/sql.py
+++ b/databricks/sdk/service/sql.py
@@ -1555,30 +1555,6 @@ def from_dict(cls, d: Dict[str, Any]) -> CreateAlertRequestAlert:
         )
 
 
-@dataclass
-class CreateAlertV2Request:
-    alert: Optional[AlertV2] = None
-
-    def as_dict(self) -> dict:
-        """Serializes the CreateAlertV2Request into a dictionary suitable for use as a JSON request body."""
-        body = {}
-        if self.alert:
-            body["alert"] = self.alert.as_dict()
-        return body
-
-    def as_shallow_dict(self) -> dict:
-        """Serializes the CreateAlertV2Request into a shallow dictionary of its immediate attributes."""
-        body = {}
-        if self.alert:
-            body["alert"] = self.alert
-        return body
-
-    @classmethod
-    def from_dict(cls, d: Dict[str, Any]) -> CreateAlertV2Request:
-        """Deserializes the CreateAlertV2Request from a dictionary."""
-        return cls(alert=_from_dict(d, "alert", AlertV2))
-
-
 @dataclass
 class CreateQueryRequest:
     auto_resolve_display_name: Optional[bool] = None
@@ -7423,6 +7399,10 @@ class UpdateAlertRequest:
 
     alert: Optional[UpdateAlertRequestAlert] = None
 
+    auto_resolve_display_name: Optional[bool] = None
+    """If true, automatically resolve alert display name conflicts. Otherwise, fail the request if the
+    alert's display name conflicts with an existing alert's display name."""
+
     id: Optional[str] = None
 
     def as_dict(self) -> dict:
@@ -7430,6 +7410,8 @@ def as_dict(self) -> dict:
         body = {}
         if self.alert:
             body["alert"] = self.alert.as_dict()
+        if self.auto_resolve_display_name is not None:
+            body["auto_resolve_display_name"] = self.auto_resolve_display_name
         if self.id is not None:
             body["id"] = self.id
         if self.update_mask is not None:
@@ -7441,6 +7423,8 @@ def as_shallow_dict(self) -> dict:
         body = {}
         if self.alert:
             body["alert"] = self.alert
+        if self.auto_resolve_display_name is not None:
+            body["auto_resolve_display_name"] = self.auto_resolve_display_name
         if self.id is not None:
             body["id"] = self.id
         if self.update_mask is not None:
@@ -7452,6 +7436,7 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateAlertRequest:
         """Deserializes the UpdateAlertRequest from a dictionary."""
         return cls(
             alert=_from_dict(d, "alert", UpdateAlertRequestAlert),
+            auto_resolve_display_name=d.get("auto_resolve_display_name", None),
             id=d.get("id", None),
             update_mask=d.get("update_mask", None),
         )
@@ -7546,52 +7531,6 @@ def from_dict(cls, d: Dict[str, Any]) -> UpdateAlertRequestAlert:
         )
 
 
-@dataclass
-class UpdateAlertV2Request:
-    update_mask: str
-    """The field mask must be a single string, with multiple fields separated by commas (no spaces).
-    The field path is relative to the resource object, using a dot (`.`) to navigate sub-fields
-    (e.g., `author.given_name`). Specification of elements in sequence or map fields is not allowed,
-    as only the entire collection field can be specified. Field names must exactly match the
-    resource field names.
-
-    A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the
-    fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the
-    API changes in the future."""
-
-    alert: Optional[AlertV2] = None
-
-    id: Optional[str] = None
-    """UUID identifying the alert."""
-
-    def as_dict(self) -> dict:
-        """Serializes the UpdateAlertV2Request into a dictionary suitable for use as a JSON request body."""
-        body = {}
-        if self.alert:
-            body["alert"] = self.alert.as_dict()
-        if self.id is not None:
-            body["id"] = self.id
-        if self.update_mask is not None:
-            body["update_mask"] = self.update_mask
-        return body
-
-    def as_shallow_dict(self) -> dict:
-        """Serializes the UpdateAlertV2Request into a shallow dictionary of its immediate attributes."""
-        body = {}
-        if self.alert:
-            body["alert"] = self.alert
-        if self.id is not None:
-            body["id"] = self.id
-        if self.update_mask is not None:
-            body["update_mask"] = self.update_mask
-        return body
-
-    @classmethod
-    def from_dict(cls, d: Dict[str, Any]) -> UpdateAlertV2Request:
-        """Deserializes the UpdateAlertV2Request from a dictionary."""
-        return cls(alert=_from_dict(d, "alert", AlertV2), id=d.get("id", None), update_mask=d.get("update_mask", None))
-
-
 @dataclass
 class UpdateQueryRequest:
     update_mask: str
@@ -7605,6 +7544,10 @@ class UpdateQueryRequest:
     fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the
     API changes in the future."""
 
+    auto_resolve_display_name: Optional[bool] = None
+    """If true, automatically resolve query display name conflicts. Otherwise, fail the request if the
+    query's display name conflicts with an existing query's display name."""
+
     id: Optional[str] = None
 
     query: Optional[UpdateQueryRequestQuery] = None
@@ -7612,6 +7555,8 @@ def as_dict(self) -> dict:
         """Serializes the UpdateQueryRequest into a dictionary suitable for use as a JSON request body."""
         body = {}
+        if self.auto_resolve_display_name is not None:
+            body["auto_resolve_display_name"] = self.auto_resolve_display_name
         if self.id is not None:
             body["id"] = self.id
         if self.query:
@@ -7623,6 +7568,8 @@ def as_dict(self) -> dict:
     def as_shallow_dict(self) -> dict:
         """Serializes the UpdateQueryRequest into a shallow dictionary of its immediate attributes."""
         body = {}
+        if self.auto_resolve_display_name is not None:
+            body["auto_resolve_display_name"] = self.auto_resolve_display_name
         if self.id is not None:
             body["id"] = self.id
         if self.query:
@@ -7635,6 +7582,7 @@ def as_shallow_dict(self) -> dict:
     def from_dict(cls, d: Dict[str, Any]) -> UpdateQueryRequest:
         """Deserializes the UpdateQueryRequest from a dictionary."""
         return cls(
+            auto_resolve_display_name=d.get("auto_resolve_display_name", None),
            id=d.get("id", None),
            query=_from_dict(d, "query", UpdateQueryRequestQuery),
            update_mask=d.get("update_mask", None),
@@ -8595,7 +8543,14 @@ def list(
                 return
             query["page_token"] = json["next_page_token"]
 
-    def update(self, id: str, update_mask: str, *, alert: Optional[UpdateAlertRequestAlert] = None) -> Alert:
+    def update(
+        self,
+        id: str,
+        update_mask: str,
+        *,
+        alert: Optional[UpdateAlertRequestAlert] = None,
+        auto_resolve_display_name: Optional[bool] = None,
+    ) -> Alert:
         """Update an alert.
 
         Updates an alert.
@@ -8612,12 +8567,17 @@ def update(self, id: str, update_mask: str, *, alert: Optional[UpdateAlertReques
          fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the
          API changes in the future.
        :param alert: :class:`UpdateAlertRequestAlert` (optional)
+        :param auto_resolve_display_name: bool (optional)
+          If true, automatically resolve alert display name conflicts. Otherwise, fail the request if the
+          alert's display name conflicts with an existing alert's display name.
 
         :returns: :class:`Alert`
         """
         body = {}
         if alert is not None:
             body["alert"] = alert.as_dict()
+        if auto_resolve_display_name is not None:
+            body["auto_resolve_display_name"] = auto_resolve_display_name
         if update_mask is not None:
             body["update_mask"] = update_mask
         headers = {
@@ -8805,18 +8765,16 @@ class AlertsV2API:
     def __init__(self, api_client):
         self._api = api_client
 
-    def create_alert(self, *, alert: Optional[AlertV2] = None) -> AlertV2:
+    def create_alert(self, alert: AlertV2) -> AlertV2:
         """Create an alert.
 
         Create Alert
 
-        :param alert: :class:`AlertV2` (optional)
+        :param alert: :class:`AlertV2`
 
         :returns: :class:`AlertV2`
         """
-        body = {}
-        if alert is not None:
-            body["alert"] = alert.as_dict()
+        body = alert.as_dict()
         headers = {
             "Accept": "application/json",
             "Content-Type": "application/json",
@@ -8889,13 +8847,14 @@ def trash_alert(self, id: str):
 
         self._api.do("DELETE", f"/api/2.0/alerts/{id}", headers=headers)
 
-    def update_alert(self, id: str, update_mask: str, *, alert: Optional[AlertV2] = None) -> AlertV2:
+    def update_alert(self, id: str, alert: AlertV2, update_mask: str) -> AlertV2:
         """Update an alert.
 
         Update alert
 
         :param id: str
           UUID identifying the alert.
+        :param alert: :class:`AlertV2`
         :param update_mask: str
           The field mask must be a single string, with multiple fields separated by commas (no spaces). The
          field path is relative to the resource object, using a dot (`.`) to navigate sub-fields (e.g.,
@@ -8906,21 +8865,19 @@ def update_alert(self, id: str, update_mask: str, *, alert: Optional[AlertV2] =
          A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the
          fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the
          API changes in the future.
-        :param alert: :class:`AlertV2` (optional)
 
         :returns: :class:`AlertV2`
         """
-        body = {}
-        if alert is not None:
-            body["alert"] = alert.as_dict()
+        body = alert.as_dict()
+        query = {}
         if update_mask is not None:
-            body["update_mask"] = update_mask
+            query["update_mask"] = update_mask
         headers = {
             "Accept": "application/json",
             "Content-Type": "application/json",
         }
 
-        res = self._api.do("PATCH", f"/api/2.0/alerts/{id}", body=body, headers=headers)
+        res = self._api.do("PATCH", f"/api/2.0/alerts/{id}", query=query, body=body, headers=headers)
         return AlertV2.from_dict(res)
 
 
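To see the breaking alerts_v2 signature changes and the new auto_resolve_display_name flag side by side, a hedged sketch (illustrative only, not part of the diff; the alert id and display names are hypothetical, and the display_name field on AlertV2/UpdateAlertRequestAlert is assumed from the sql service types):

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import sql

w = WorkspaceClient()

# create_alert() now takes the AlertV2 payload as a required positional argument
# and sends it directly as the request body.
alert = w.alerts_v2.create_alert(sql.AlertV2(display_name="cpu-high"))

# update_alert() now requires the payload before update_mask, and update_mask
# travels as a query parameter rather than in the request body.
alert.display_name = "cpu-critical"
alert = w.alerts_v2.update_alert(alert.id, alert, "display_name")

# The legacy alerts service gained auto_resolve_display_name to de-conflict names.
w.alerts.update(
    "<alert-id>",  # hypothetical placeholder
    "display_name",
    alert=sql.UpdateAlertRequestAlert(display_name="cpu-critical"),
    auto_resolve_display_name=True,
)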
@@ -9535,7 +9492,14 @@ def list_visualizations(
                 return
             query["page_token"] = json["next_page_token"]
 
-    def update(self, id: str, update_mask: str, *, query: Optional[UpdateQueryRequestQuery] = None) -> Query:
+    def update(
+        self,
+        id: str,
+        update_mask: str,
+        *,
+        auto_resolve_display_name: Optional[bool] = None,
+        query: Optional[UpdateQueryRequestQuery] = None,
+    ) -> Query:
         """Update a query.
 
         Updates a query.
@@ -9551,11 +9515,16 @@ def update(self, id: str, update_mask: str, *, query: Optional[UpdateQueryReques
          A field mask of `*` indicates full replacement. It’s recommended to always explicitly list the
          fields being updated and avoid using `*` wildcards, as it can lead to unintended results if the
          API changes in the future.
+        :param auto_resolve_display_name: bool (optional)
+          If true, automatically resolve query display name conflicts. Otherwise, fail the request if the
+          query's display name conflicts with an existing query's display name.
         :param query: :class:`UpdateQueryRequestQuery` (optional)
 
         :returns: :class:`Query`
         """
         body = {}
+        if auto_resolve_display_name is not None:
+            body["auto_resolve_display_name"] = auto_resolve_display_name
         if query is not None:
             body["query"] = query.as_dict()
         if update_mask is not None:
diff --git a/databricks/sdk/service/vectorsearch.py b/databricks/sdk/service/vectorsearch.py
index 0f7e09d4..4a2a7100 100755
--- a/databricks/sdk/service/vectorsearch.py
+++ b/databricks/sdk/service/vectorsearch.py
@@ -744,12 +744,6 @@ def from_dict(cls, d: Dict[str, Any]) -> ListEndpointResponse:
 
 @dataclass
 class ListValue:
-    """copied from proto3 / Google Well Known Types, source:
-    https://github.com/protocolbuffers/protobuf/blob/450d24ca820750c5db5112a6f0b0c2efb9758021/src/google/protobuf/struct.proto
-    `ListValue` is a wrapper around a repeated field of values.
-
-    The JSON representation for `ListValue` is JSON array."""
-
     values: Optional[List[Value]] = None
     """Repeated field of dynamically typed values."""
@@ -1308,15 +1302,6 @@ def from_dict(cls, d: Dict[str, Any]) -> ScanVectorIndexResponse:
 
 @dataclass
 class Struct:
-    """copied from proto3 / Google Well Known Types, source:
-    https://github.com/protocolbuffers/protobuf/blob/450d24ca820750c5db5112a6f0b0c2efb9758021/src/google/protobuf/struct.proto
-    `Struct` represents a structured data value, consisting of fields which map to dynamically typed
-    values. In some languages, `Struct` might be supported by a native representation. For example,
-    in scripting languages like JS a struct is represented as an object. The details of that
-    representation are described together with the proto support for the language.
-
-    The JSON representation for `Struct` is JSON object."""
-
     fields: Optional[List[MapStringValueEntry]] = None
     """Data entry, corresponding to a row in a vector index."""
@@ -1532,25 +1517,12 @@ class Value:
     bool_value: Optional[bool] = None
 
     list_value: Optional[ListValue] = None
-    """copied from proto3 / Google Well Known Types, source:
-    https://github.com/protocolbuffers/protobuf/blob/450d24ca820750c5db5112a6f0b0c2efb9758021/src/google/protobuf/struct.proto
-    `ListValue` is a wrapper around a repeated field of values.
-
-    The JSON representation for `ListValue` is JSON array."""
 
     number_value: Optional[float] = None
 
     string_value: Optional[str] = None
 
     struct_value: Optional[Struct] = None
-    """copied from proto3 / Google Well Known Types, source:
-    https://github.com/protocolbuffers/protobuf/blob/450d24ca820750c5db5112a6f0b0c2efb9758021/src/google/protobuf/struct.proto
-    `Struct` represents a structured data value, consisting of fields which map to dynamically typed
-    values. In some languages, `Struct` might be supported by a native representation. For example,
-    in scripting languages like JS a struct is represented as an object. The details of that
-    representation are described together with the proto support for the language.
-
-    The JSON representation for `Struct` is JSON object."""
 
     def as_dict(self) -> dict:
         """Serializes the Value into a dictionary suitable for use as a JSON request body."""