Releases · googleapis/java-bigtable-hbase
bigtable-client-1.1.1
NOTE: Do not use this release. Use 1.1.2 instead.
- Attempted to fix RefreshingOAuth2CredentialsInterceptor due to a customer issue. Users in datacenters outside of GCP may have experienced problems related to the same issue.
- App Profiles are now supported in the import / export process.
- Parent project refactoring to clarify management of HBase 2.0 artifacts.
bigtable-client-1.1.0
- Upgrading grpc to 1.9.0
- Removing the deprecated bigtable-hbase-dataflow artifact. Use bigtable-hbase-beam instead.
- Adding initial (and partial) support for HBase 2.0's new APIs, including the async APIs.
- Changed the default settings of the client to improve throughput of BulkMutator and BufferedMutator.
- Added a new Filters object to the core library that simplifies creation of Cloud Bigtable RowFilter objects (see the sketch after this list).
- Added a new HBase Filter that wraps a Cloud Bigtable RowFilter.
- Fixed the implementation of SingleColumnValueFilter to be more efficient.
- Removing the bigtable-protos jar in favor of com.google.api.grpc:proto-google-cloud-bigtable-v2 and com.google.api.grpc:proto-google-cloud-bigtable-admin-v2.
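For context on the new Filters object mentioned above, here is a minimal sketch of what fluent filter construction looks like. The import path (com.google.cloud.bigtable.data.v2.models.Filters) and the exact builder methods are taken from the later standalone Cloud Bigtable Java client and may differ in this release; treat the names as assumptions rather than this release's API.

```java
// Hedged sketch of the fluent Filters API; package path and method names are
// assumptions based on later Cloud Bigtable Java clients.
import static com.google.cloud.bigtable.data.v2.models.Filters.FILTERS;

import com.google.bigtable.v2.RowFilter;
import com.google.cloud.bigtable.data.v2.models.Filters.Filter;

public class FiltersSketch {
  public static void main(String[] args) {
    // Chain two conditions: restrict to one column family and keep only the
    // most recent cell per column.
    Filter filter = FILTERS.chain()
        .filter(FILTERS.family().exactMatch("cf"))
        .filter(FILTERS.limit().cellsPerColumn(1));

    // Convert to the low-level Cloud Bigtable RowFilter proto that the client
    // sends over the wire.
    RowFilter proto = filter.toProto();
    System.out.println(proto);
  }
}
```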
bigtable-client-1.0.0
- This release has minor bug fixes on top of bigtable-client-1.0.0-pre4 and represents the combined work of bigtable-client-1.0.0-pre1 through bigtable-client-1.0.0-pre4.
- The one notable feature is integration with Stackdriver tracing. Here is an example that leverages the new tracing features.
bigtable-client-1.0.0-pre4
The next release is likely to be 1.0.0, which will be based on the 1.0.0-pre4 version. No new features will be in 1.0.0, but there may be some minor bug fixes.
- Deprecated our Dataflow v1 artifacts
- Removed the bigtable-hbase-1.0, bigtable-hbase-1.1, bigtable-hbase-1.2 and bigtable-hbase-1.3 artifacts. bigtable-hbase-1.x should be used in most cases, and bigtable-hbase-1.x-hadoop should be used in Hadoop environments.
- Upgraded to gRPC 1.7.0.
- Created a Beam-based import/export tool. The Dataflow version is deprecated.
- Minor bug fixes: fixed NPEs when canceling reads; fixed an issue when writing rows with large cell counts (100,000) in BufferedMutator.
bigtable-client-1.0.0-pre3
- Added an Apache Beam-compatible version of the Cloud Dataflow connector. See the Cloud Dataflow release notes for information about migrating from the Dataflow 1.x SDK to the Dataflow 2.x (Beam-compatible) SDK.
Note: The previous version of the Cloud Dataflow connector is deprecated and will be removed in the next release.
- Fixed a thread leak for scans that are only partially read.
- Enhanced the Cloud Dataflow connector's throttling feature to account for jobs that write to multiple instances.
- Added an option to treat HBase namespace operations as no-ops that log a warning, rather than prohibited operations that throw an exception. (Cloud Bigtable does not support namespaces.) To enable this feature, set the option google.bigtable.namespace.warnings to true, either in your hbase-site.xml file or programmatically.
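As a concrete illustration of the namespace option above, here is a minimal sketch of enabling it programmatically. It assumes the BigtableConfiguration helper from the bigtable-hbase artifacts; the project and instance IDs are placeholders.

```java
// Sketch: enable namespace-operation warnings programmatically. Assumes the
// BigtableConfiguration helper from the bigtable-hbase-1.x artifact; the
// project and instance IDs are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Connection;

import com.google.cloud.bigtable.hbase.BigtableConfiguration;

public class NamespaceWarningsExample {
  public static void main(String[] args) throws Exception {
    Configuration config = BigtableConfiguration.configure("my-project", "my-instance");
    // Log a warning instead of throwing when HBase namespace operations are called.
    config.set("google.bigtable.namespace.warnings", "true");

    try (Connection connection = BigtableConfiguration.connect(config)) {
      // Admin.createNamespace(...) and similar calls now log a warning
      // rather than throwing an exception.
    }
  }
}
```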
bigtable-client-1.0.0-pre2
- The netty-tcnative-boringssl-static dependency is now included as a transitive dependency in the Cloud Bigtable Maven artifacts. You should remove this dependency from your pom.xml file.
- The method Table#checkAndPut now performs comparisons in the correct order.
- Using a ValueFilter with an empty string comparison now works correctly.
- Using an empty FilterList no longer triggers a full table scan.
- Optimized the method Table#exists, which checks whether a row exists.
- When possible, the client now retries failed requests for table administration operations.
- Added an experimental Maven plugin that runs the Cloud Bigtable emulator.
- Upgraded to gRPC 1.5.0.
bigtable-client-1.0.0-pre1
This version includes the following changes:
- The HBase client for Java now provides three Maven artifacts:
  - bigtable-hbase-1.x: Use this artifact for standalone applications where you control your dependencies.
  - bigtable-hbase-1.x-hadoop: Use this artifact for Hadoop environments.
  - bigtable-hbase-1.x-shaded: Use this artifact for environments other than Hadoop that require older versions of the HBase client for Java's dependencies, such as protobuf and Guava.
- You must update your application to use one of these new artifacts. In addition, if your configuration settings include a value for hbase.client.connection.impl, you must change the value to com.google.cloud.bigtable.hbase1_x.BigtableConnection (see the sketch after this list).
- You no longer need to include an hbase-client artifact in your Maven project.
- Improved the client-side throttling mechanism to reduce the likelihood of Cloud Dataflow jobs overloading a Cloud Bigtable cluster. To enable this feature, set the option google.bigtable.buffered.mutator.throttling.enable to "true", either in your hbase-site.xml file or programmatically. Enabling this option is now recommended for all Cloud Dataflow jobs that write to Cloud Bigtable.
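The sketch below applies the configuration changes described in this list programmatically: the new hbase.client.connection.impl value and the now-recommended throttling flag. The project/instance keys are the standard client settings and the IDs are placeholders, shown only for completeness.

```java
// Sketch: wiring the new connection implementation and throttling flag into an
// existing HBase Configuration. Project and instance IDs are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class Pre1ConfigExample {
  public static void main(String[] args) throws Exception {
    Configuration config = HBaseConfiguration.create();
    config.set("google.bigtable.project.id", "my-project");
    config.set("google.bigtable.instance.id", "my-instance");

    // The connection implementation moved to the hbase1_x package.
    config.set("hbase.client.connection.impl",
        "com.google.cloud.bigtable.hbase1_x.BigtableConnection");

    // Recommended for all Cloud Dataflow jobs that write to Cloud Bigtable.
    config.set("google.bigtable.buffered.mutator.throttling.enable", "true");

    try (Connection connection = ConnectionFactory.createConnection(config)) {
      // Use the connection as usual.
    }
  }
}
```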
bigtable-client-0.9.7.1
- Fixed a bug in the Cloud Dataflow connector that caused some records to be read multiple times, or not at all, at the end of a long read job. This issue was introduced in version 0.9.4.
- Fixed an issue with long-running OAuth requests that caused gRPC to throw "Not started" exceptions. This issue was introduced in version 0.9.6.2.
- Added support for HBase's FamilyFilter, which you can use to filter based on the column family.
bigtable-client-0.9.7
- Added an experimental client-side throttling mechanism to reduce the likelihood of Cloud Dataflow jobs overloading a Cloud Bigtable cluster. To enable this feature, set the option google.bigtable.buffered.mutator.throttling.enable to true, either in your hbase-site.xml file or programmatically.
- Added the ability to configure authentication by passing in a JSON service account key as a string. To use this feature, set the option google.bigtable.auth.json.value to the text of your JSON service account key, either in your hbase-site.xml file or programmatically (see the sketch after this list).
- Fixed an issue that prevented Table.checkAnd*() methods from working correctly with comparators other than EQUAL and NOT_EQUAL.
- Improved error handling of BufferedMutator operations under heavy load.
- Upgraded to gRPC 1.3.0.
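As a sketch of the JSON-key option above, the snippet below reads a service account key file into a string and passes it through google.bigtable.auth.json.value. The key file path, project, and instance IDs are placeholders, and BigtableConfiguration is the helper from the bigtable-hbase artifacts.

```java
// Sketch: passing a JSON service account key as a string. The file path,
// project, and instance IDs are placeholders.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Connection;

import com.google.cloud.bigtable.hbase.BigtableConfiguration;

public class JsonKeyAuthExample {
  public static void main(String[] args) throws Exception {
    // Read the service account key into a string.
    String jsonKey = new String(
        Files.readAllBytes(Paths.get("/path/to/service-account.json")),
        StandardCharsets.UTF_8);

    Configuration config = BigtableConfiguration.configure("my-project", "my-instance");
    config.set("google.bigtable.auth.json.value", jsonKey);

    try (Connection connection = BigtableConfiguration.connect(config)) {
      // The connection authenticates with the inlined JSON key.
    }
  }
}
```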
bigtable-client-0.9.6.2
- You must now use version 1.1.33.Fork26 of the netty-tcnative-boringssl-static library.
- Added HBase 1.3.x compatibility. HBase 1.3.x is now the default version for our core integration. Most applications will continue to work with older versions of HBase, with the exception of the HBase shell.
- When an OAuth token is revoked, causing the client to receive an UNAUTHORIZED response, the client now retrieves a new OAuth token and retries the request.
- Improved the performance of Table.get(List<Get> gets) by requesting multiple concurrent batches instead of a single large batch of requests.
- Added a cache for the underlying Netty/gRPC channel pool as a performance enhancement for systems that open many connections, such as Cloud Dataflow or Hadoop connectors. To enable this feature, set the option google.bigtable.use.cached.data.channel.pool to true, either in your hbase-site.xml file or programmatically.
- Added support for HBase's MultiRowRangeFilter, and added a subclass of Scan, BigtableExtendedScan, to enable a scan with an arbitrary set of row keys and ranges (see the sketch after this list).
- PrefixFilter used to perform a full table scan. Now, under most conditions, it only scans for the rows that match the PrefixFilter. One exception is a complex FilterList that contains a PrefixFilter with the operator FilterList.Operator.MUST_PASS_ALL; in this case, the filter performs a full table scan. Use PrefixFilter with BigtableExtendedScan to optimize performance in this case.
- Fixed an issue that could cause a ValueFilter with CompareFilter.CompareOp.EQUAL to fail against the Cloud Bigtable emulator.
- The client now uses Maven's google-common-protos artifact for protobuf objects rather than keeping a copy in bigtable-protos.
- Upgraded to gRPC 1.2.0.
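To illustrate the MultiRowRangeFilter and BigtableExtendedScan items above, here is a hedged sketch. MultiRowRangeFilter and its RowRange class are standard HBase APIs; the BigtableExtendedScan methods shown (addRowKey, addRange) are assumptions based on the description in this list, and the table, project, and instance names are placeholders.

```java
// Sketch: scanning an arbitrary mix of row keys and row ranges.
import java.util.Arrays;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;
import org.apache.hadoop.hbase.util.Bytes;

import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import com.google.cloud.bigtable.hbase.BigtableExtendedScan;

public class ExtendedScanExample {
  public static void main(String[] args) throws Exception {
    try (Connection connection = BigtableConfiguration.connect("my-project", "my-instance");
         Table table = connection.getTable(TableName.valueOf("my-table"))) {

      // Alternative: standard HBase MultiRowRangeFilter (not executed in this sketch).
      Scan rangeScan = new Scan().setFilter(new MultiRowRangeFilter(Arrays.asList(
          new RowRange(Bytes.toBytes("a"), true, Bytes.toBytes("b"), false),
          new RowRange(Bytes.toBytes("m"), true, Bytes.toBytes("n"), false))));

      // BigtableExtendedScan mixes individual row keys and ranges in one scan.
      // Method names are assumptions based on the release note above.
      BigtableExtendedScan extendedScan = new BigtableExtendedScan();
      extendedScan.addRowKey(Bytes.toBytes("row-42"));
      extendedScan.addRange(Bytes.toBytes("a"), Bytes.toBytes("b"));

      try (ResultScanner scanner = table.getScanner(extendedScan)) {
        for (Result result : scanner) {
          System.out.println(Bytes.toString(result.getRow()));
        }
      }
    }
  }
}
```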