Releases: googleapis/google-cloud-java

0.2.3

10 Jun 16:16

Features

BigQuery

  • Add support for the BYTES datatype. A field of type BYTES can be created by using Field.Type.bytes(). The byte[] bytesValue() method has been added to FieldValue to return the value of a field as a byte array.
  • A Job waitFor(WaitForOption... waitOptions) method has been added to the Job class. This method waits for the job to complete and returns the job's updated information:
Job completedJob = job.waitFor();
if (completedJob == null) {
  // job no longer exists
} else if (completedJob.status().error() != null) {
  // job failed, handle error
} else {
  // job completed successfully
}

By default, the job status is checked every 500 milliseconds. This polling interval can be configured with WaitForOption.checkEvery(long, TimeUnit), while WaitForOption.timeout(long, TimeUnit) sets the maximum time to wait.
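For illustration, a minimal sketch combining both wait options (the job instance and error handling are as in the snippet above; the chosen interval and timeout values are arbitrary):

```java
import java.util.concurrent.TimeUnit;

import com.google.cloud.WaitForOption;
import com.google.cloud.bigquery.Job;

// Poll every 2 seconds instead of the default 500 milliseconds, and
// wait at most 5 minutes. "job" is assumed to come from a previous
// BigQuery call, as in the snippet above.
Job completedJob = job.waitFor(
    WaitForOption.checkEvery(2, TimeUnit.SECONDS),
    WaitForOption.timeout(5, TimeUnit.MINUTES));
```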

Compute

  • An Operation waitFor(WaitForOption... waitOptions) method has been added to the Operation class. This method waits for the operation to complete and returns the operation's updated information:
Operation completedOperation = operation.waitFor();
if (completedOperation == null) {
  // operation no longer exists
} else if (completedOperation.errors() != null) {
  // operation failed, handle error
} else {
  // operation completed successfully
}

By default, the operation status is checked every 500 milliseconds. This polling interval can be configured with WaitForOption.checkEvery(long, TimeUnit), while WaitForOption.timeout(long, TimeUnit) sets the maximum time to wait.

Fixes

Storage

  • StorageExample now contains examples on how to add ACLs to blobs and buckets (#1033).
  • BlobInfo.createTime() getter has been added. This method returns the time at which a blob was created (#1034).

0.2.2

20 May 21:35

Features

Core

  • The Clock abstract class has been moved out of ServiceOptions. ServiceOptions.clock() is now used by RetryHelper in all service calls, which enables mocking the Clock source used for retries when testing your code.
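As a hedged illustration, a fake Clock for tests might look like the following (the millis() signature is assumed from the Clock abstraction described above):

```java
import com.google.cloud.Clock;

// A controllable time source for tests. Advancing the clock manually
// lets retry/backoff logic be exercised without real waiting.
class FakeClock extends Clock {
  private long currentMillis = 0;

  @Override
  public long millis() {
    return currentMillis;
  }

  void advance(long deltaMillis) {
    currentMillis += deltaMillis;
  }
}
```

The fake can then be installed on the service options used in tests, so RetryHelper observes the simulated time instead of the system clock.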

Storage

  • Storage batches have been refactored to use the common BatchResult class. Sending batch requests in Storage is now as simple as in DNS. The following example shows how to send a batch request:
StorageBatch batch = storage.batch();
BlobId firstBlob = BlobId.of("bucket", "blob1");
BlobId secondBlob = BlobId.of("bucket", "blob2");
BlobId thirdBlob = BlobId.of("bucket", "blob3");
// Users can either register a callback on an operation
batch.delete(firstBlob).notify(new BatchResult.Callback<Boolean, StorageException>() {
  @Override
  public void success(Boolean result) {
    // handle delete result
  }

  @Override
  public void error(StorageException exception) {
    // handle exception
  }
});
// Or ignore its result
batch.update(BlobInfo.builder(secondBlob).contentType("text/plain").build());
StorageBatchResult<Blob> result = batch.get(thirdBlob);
batch.submit();
// Or get the result
Blob blob = result.get(); // returns the operation's result or throws StorageException

Fixes

Datastore

  • Update datastore client to accept IP addresses for localhost (#1002).
  • LocalDatastoreHelper now uses https to download the emulator - thanks to @pehrs (#942).
  • Add example on embedded entities to DatastoreExample (#980).

Storage

  • Fix StorageImpl.signUrl for blob names that start with "/" - thanks to @clementdenis (#1013).
  • Fix readAllBytes permission error on Google AppEngine (#1010).

0.2.1

29 Apr 22:11

Features

Compute

  • gcloud-java-compute, a new client library to interact with Google Compute Engine, has been released and is in alpha. See the docs for more information, ComputeExample for a complete example, or the API Documentation for gcloud-java-compute javadoc.
    The following snippet shows how to create a region external IP address, a persistent boot disk and a virtual machine instance that uses both the IP address and the persistent disk. See CreateAddressDiskAndInstance.java for the full source code.
    // Create a service object
    // Credentials are inferred from the environment.
    Compute compute = ComputeOptions.defaultInstance().service();

    // Create an external region address
    RegionAddressId addressId = RegionAddressId.of("us-central1", "test-address");
    Operation operation = compute.create(AddressInfo.of(addressId));
    // Wait for operation to complete
    while (!operation.isDone()) {
      Thread.sleep(1000L);
    }
    // Check operation errors
    operation = operation.reload();
    if (operation.errors() == null) {
      System.out.println("Address " + addressId + " was successfully created");
    } else {
      // inspect operation.errors()
      throw new RuntimeException("Address creation failed");
    }

    // Create a persistent disk
    ImageId imageId = ImageId.of("debian-cloud", "debian-8-jessie-v20160329");
    DiskId diskId = DiskId.of("us-central1-a", "test-disk");
    ImageDiskConfiguration diskConfiguration = ImageDiskConfiguration.of(imageId);
    DiskInfo disk = DiskInfo.of(diskId, diskConfiguration);
    operation = compute.create(disk);
    // Wait for operation to complete
    while (!operation.isDone()) {
      Thread.sleep(1000L);
    }
    // Check operation errors
    operation = operation.reload();
    if (operation.errors() == null) {
      System.out.println("Disk " + diskId + " was successfully created");
    } else {
      // inspect operation.errors()
      throw new RuntimeException("Disk creation failed");
    }

    // Create a virtual machine instance
    Address externalIp = compute.getAddress(addressId);
    InstanceId instanceId = InstanceId.of("us-central1-a", "test-instance");
    NetworkId networkId = NetworkId.of("default");
    PersistentDiskConfiguration attachConfiguration =
        PersistentDiskConfiguration.builder(diskId).boot(true).build();
    AttachedDisk attachedDisk = AttachedDisk.of("dev0", attachConfiguration);
    NetworkInterface networkInterface = NetworkInterface.builder(networkId)
        .accessConfigurations(AccessConfig.of(externalIp.address()))
        .build();
    MachineTypeId machineTypeId = MachineTypeId.of("us-central1-a", "n1-standard-1");
    InstanceInfo instance =
        InstanceInfo.of(instanceId, machineTypeId, attachedDisk, networkInterface);
    operation = compute.create(instance);
    // Wait for operation to complete
    while (!operation.isDone()) {
      Thread.sleep(1000L);
    }
    // Check operation errors
    operation = operation.reload();
    if (operation.errors() == null) {
      System.out.println("Instance " + instanceId + " was successfully created");
    } else {
      // inspect operation.errors()
      throw new RuntimeException("Instance creation failed");
    }

Datastore

  • An options(String namespace) method has been added to LocalDatastoreHelper, allowing users to create test options for a specific namespace (#936).
  • of methods have been added to ListValue to support specific types (String, long, double, boolean, DateTime, LatLng, Key, FullEntity, and Blob). addValue methods have been added to ListValue.Builder to support the same set of types (#934).
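For example, a hedged sketch of the new of overloads (the entity key and property name are placeholders):

```java
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;
import com.google.cloud.datastore.ListValue;

// Build a list property directly from raw String values, without
// wrapping each element in a StringValue. "key" is a placeholder Key.
ListValue tags = ListValue.of("alpha", "beta", "gamma");
Entity entity = Entity.builder(key)
    .set("tags", tags)
    .build();
```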

DNS

  • Support for batches has been added to gcloud-java-dns (#940). Batches allow performing multiple operations in a single RPC request.
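A hedged sketch of what a DNS batch might look like (the zone names are placeholders, and the DnsBatch method names are assumed by analogy with StorageBatch):

```java
import com.google.cloud.dns.Dns;
import com.google.cloud.dns.DnsBatch;
import com.google.cloud.dns.DnsBatchResult;
import com.google.cloud.dns.DnsOptions;
import com.google.cloud.dns.Zone;

// Queue two zone lookups and send them in a single RPC request.
Dns dns = DnsOptions.defaultInstance().service();
DnsBatch batch = dns.batch();
DnsBatchResult<Zone> first = batch.getZone("zone-a");
DnsBatchResult<Zone> second = batch.getZone("zone-b");
batch.submit();
Zone zoneA = first.get();  // result (or exception) available after submit()
```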

Fixes

Core

  • The causing exception is now chained in BaseServiceException.getCause() (#774).

0.2.0

12 Apr 18:53

Features

General

  • gcloud-java has been repackaged. com.google.gcloud has changed to com.google.cloud, and we're releasing our artifacts on Maven under the group ID com.google.cloud rather than com.google.gcloud. The new way to add our library as a dependency to your project is as follows:

If you're using Maven, add this to your pom.xml file

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>gcloud-java</artifactId>
  <version>0.2.0</version>
</dependency>

If you are using Gradle, add this to your dependencies

compile 'com.google.cloud:gcloud-java:0.2.0'

If you are using SBT, add this to your dependencies

libraryDependencies += "com.google.cloud" % "gcloud-java" % "0.2.0"

Storage

  • The interface ServiceAccountSigner was added. Both AppEngineAuthCredentials and ServiceAccountAuthCredentials extend this interface and can be used to sign Google Cloud Storage blob URLs (#701, #854).
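For illustration, a hedged sketch of signing a blob URL with an explicit signer (the blob and credentials variables are placeholders; credentials may be either credential type named above):

```java
import java.net.URL;
import java.util.concurrent.TimeUnit;

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage.SignUrlOption;

// "blob" and "credentials" are placeholders; credentials can be any
// ServiceAccountSigner, e.g. ServiceAccountAuthCredentials.
URL signedUrl = blob.signUrl(14, TimeUnit.DAYS,
    SignUrlOption.signWith(credentials));
```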

Fixes

General

  • The default RPC retry parameters were changed to align with the backoff policy requirements listed in the Service Level Agreements (SLAs) for Cloud BigQuery, Cloud Datastore, and Cloud Storage (#857, #860).
  • The expiration date is now properly populated for App Engine credentials (#873, #894).
  • gcloud-java now uses the project ID given in the credentials file specified by the environment variable GOOGLE_APPLICATION_CREDENTIALS (if set) (#845).

BigQuery

  • Job's isDone method is fixed to return true if the job is complete or the job doesn't exist (#853).

Datastore

  • LocalGcdHelper has been renamed to LocalDatastoreHelper, and the command-line startup/shutdown of the helper has been removed. The helper is now more consistent with other modules' test helpers and can be used via the create, start, and stop methods (#821).
  • ListValue no longer rejects empty lists, since Cloud Datastore v1beta3 supports empty array values (#862).

DNS

  • There were some minor changes to ChangeRequest, namely adding reload/isDone methods and changing the method signature of applyTo (#849).

Storage

  • RemoteGcsHelper was renamed to RemoteStorageHelper to be more consistent with other modules' test helpers (#821).

0.1.7

02 Apr 00:01

Features

Datastore

  • gcloud-java-datastore now uses Cloud Datastore v1beta3. You can read more about updates in Datastore v1beta3 here. Note that to use this new API, you may have to re-enable the Google Cloud Datastore API in the Developers Console. The following API changes are coupled with this update.
    • Entity-related changes:
      • Entities are indexed by default, and indexed has been changed to excludeFromIndexes. Properties of type EntityValue and type ListValue can now be indexed. Moreover, indexing and querying properties inside of entity values is now supported. Values inside entity values are indexed by default.
      • LatLng and LatLngValue, representing the new property type for latitude & longitude, are added.
      • The getter for a value's meaning has been made package scope instead of public, as it is a deprecated field.
    • Read/write-related changes:
      • Force writes have been removed. Since force writes were the only existing option in batch and transaction options, the BatchOption and TransactionOption classes are now removed.
      • ReadOption is added to allow users to specify eventual consistency on Datastore reads. This can be a useful optimization when strongly consistent results for get/fetch or ancestor queries aren't necessary.
    • Query-related changes:
      • QueryResults.cursorAfter() is updated to point to the position after the last consumed result. In v1beta2, cursorAfter was only updated after all results were consumed.
      • groupBy is replaced by distinctOn.
      • The Projection class in StructuredQuery is replaced with a string representing the property name. Aggregation functions are removed.
      • There are changes in GQL syntax:
        • In synthetic literal KEY, DATASET is now PROJECT.
        • The BLOBKEY synthetic literal is removed.
        • The FIRST aggregator is removed.
        • The GROUP BY clause is replaced with DISTINCT ON.
        • Fully-qualified property names are now supported.
        • Query filters on timestamps prior to the epoch are now supported.
    • Other miscellaneous changes:
      • The "userinfo.email" authentication scope is no longer required. This means you don't need to enable that permission when creating new instances on Google Compute Engine to use gcloud-java-datastore.
      • The default value for namespace is now an empty string rather than null.
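As a hedged illustration of the new ReadOption mentioned above (pre-0.2.0 com.google.gcloud package names; the key is a placeholder):

```java
import com.google.gcloud.datastore.Datastore;
import com.google.gcloud.datastore.Entity;
import com.google.gcloud.datastore.Key;
import com.google.gcloud.datastore.ReadOption;

// An eventually consistent lookup: it may return slightly stale data,
// but can be faster than the default strongly consistent read.
// "datastore" and "key" are assumed to exist already.
Entity entity = datastore.get(key, ReadOption.eventualConsistency());
```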

Fixes

General

  • In gcloud-java-bigquery, gcloud-java-dns, and gcloud-java-storage, the field id() has been renamed to generatedId for classes that are assigned ids from the service.

Datastore

  • Issue #548 (internal errors when trying to load large numbers of entities without setting a limit) is fixed. The workaround mentioned in that issue is no longer necessary.

0.1.6

29 Mar 01:29

Features

DNS

  • gcloud-java-dns, a new client library to interact with Google Cloud DNS, is released and is in alpha. See the docs for more information and samples.

Fixes

Big Query

  • startPageToken is now called pageToken (#774) and maxResults is now called pageSize (#745) to be consistent with page-based listing methods in other gcloud-java modules.

Storage

  • The default content type, once a required field for bucket creation and for copying/composing blobs, has been removed (#288, #762).
  • A new boolean overrideInfo is added to copy requests to denote whether metadata should be overridden (#762).
  • startPageToken is now called pageToken (#774) and maxResults is now called pageSize (#745) to be consistent with page-based listing methods in other gcloud-java modules.
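A hedged sketch of the renamed paging option (pre-0.2.0 com.google.gcloud package names; the bucket name is a placeholder):

```java
import com.google.gcloud.Page;
import com.google.gcloud.storage.Blob;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.Storage.BlobListOption;

// List at most 100 blobs per page using the renamed pageSize option.
// "storage" is assumed to be an existing Storage service object.
Page<Blob> page = storage.list("my-bucket", BlobListOption.pageSize(100));
for (Blob blob : page.values()) {
  // process each blob
}
```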

0.1.5

08 Mar 19:15

Features

Storage

  • Add a versions(boolean versions) option to BlobListOption to enable/disable versioned blob listing. If enabled, all versions of an object are returned as distinct results (#688).
  • BlobTargetOption and BlobWriteOption classes are added to Bucket to allow setting options for create methods (#705).
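For illustration, a hedged sketch of versioned listing (pre-0.2.0 com.google.gcloud package names; the bucket name is a placeholder):

```java
import com.google.gcloud.Page;
import com.google.gcloud.storage.Blob;
import com.google.gcloud.storage.Storage;
import com.google.gcloud.storage.Storage.BlobListOption;

// With versions(true), every generation of each object is returned
// as a distinct result. "storage" is assumed to exist already.
Page<Blob> allVersions =
    storage.list("my-bucket", BlobListOption.versions(true));
```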

Fixes

BigQuery

  • Fix pagination when listing tables and datasets with selected fields (#668).

Core

  • Fix an authentication issue when using revoked Cloud SDK credentials with local test helpers. The NoAuthCredentials class is added, with the AuthCredentials.noAuth() method, to be used when testing services against local emulators (#719).

Storage

  • Fix pagination when listing blobs and buckets with selected fields (#668).
  • Fix wrong usage of Storage.BlobTargetOption and Storage.BlobWriteOption in Bucket's create methods. New classes (Bucket.BlobTargetOption and Bucket.BlobWriteOption) are added to provide options to Bucket.create (#705).
  • Fix "Failed to parse Content-Range header" error when BlobWriteChannel writes a blob whose size is a multiple of the chunk size used (#725).
  • Fix NPE when reading with BlobReadChannel a blob whose size is a multiple of the chunk/buffer size (#725).

0.1.4

19 Feb 16:36

Features

BigQuery

  • The JobInfo and TableInfo class hierarchies are flattened (#584, #600). Instead, JobInfo contains a field JobConfiguration, which is subclassed to provide configurations for different types of jobs. Likewise, TableInfo contains a new field TableDefinition, which is subclassed to provide table settings depending on the table type.
  • Functional classes (Job, Table, Dataset) now extend their associated metadata classes (JobInfo, TableInfo, DatasetInfo) (#530, #609). The BigQuery service methods now return functional objects instead of the metadata objects.
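As a hedged illustration of the functional return types (pre-0.2.0 com.google.gcloud package names; the dataset and table names are placeholders):

```java
import com.google.gcloud.bigquery.BigQuery;
import com.google.gcloud.bigquery.BigQueryOptions;
import com.google.gcloud.bigquery.Table;
import com.google.gcloud.bigquery.TableId;

// getTable now returns a functional Table (a TableInfo subclass) that
// can act through the service it came from.
BigQuery bigquery = BigQueryOptions.defaultInstance().service();
Table table = bigquery.getTable(TableId.of("my_dataset", "my_table"));
if (table != null) {
  table.delete();
}
```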

Datastore

  • Setting list properties containing values of a single type is more concise (#640, #648).

    For example, to set a list of string values as a property on an entity, you'd previously have to type:

    someEntity.set("someStringListProperty", StringValue.of("a"), StringValue.of("b"),
        StringValue.of("c"));

    Now you can set the property using:

    someEntity.set("someStringListProperty", "a", "b", "c");
  • There is now a more concise way to get the parent of an entity key (#640, #648).

    Key parentOfCompleteKey = someKey.parent();
  • The consistency setting (defaults to 0.9 both before and after this change) can be set in LocalGcdHelper (#639, #648).

  • You no longer have to cast or use the unknown type when getting a ListValue from an entity (#648). Now you can use something like the following to get a list of double values:

    List<DoubleValue> doublesList = someEntity.get("myDoublesList");

ResourceManager

  • Paging for the ResourceManager list method is now supported (#651).
  • Project is now a subclass of ProjectInfo (#530). The ResourceManager service methods now return Project instead of ProjectInfo.

Storage

  • Functional classes (Bucket, Blob) now extend their associated metadata classes (BucketInfo, BlobInfo) (#530, #603, #614). The Storage service methods now return functional objects instead of metadata objects.

Fixes

BigQuery

  • The potential NPE in metadata objects equals methods is fixed (#632).
  • Methods in Table that were meant to be public but kept package scope are now fixed (#621).

0.1.3

27 Jan 02:34

Features

BigQuery

  • Resumable uploads via write channel are now supported (#540).

    An example of uploading a CSV file in chunks of CHUNK_SIZE bytes:

    try (FileChannel fileChannel = FileChannel.open(Paths.get("/path/to/your/file"))) {
      ByteBuffer buffer = ByteBuffer.allocate(256 * 1024);
      TableId tableId = TableId.of("YourDataset", "YourTable");
      LoadConfiguration configuration =
          LoadConfiguration.of(tableId, FormatOptions.of("CSV"));
      WriteChannel writeChannel = bigquery.writer(configuration);
      long position = 0;
      long written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
      while (written > 0) {
        position += written;
        written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
      }
      writeChannel.close();
    }
  • defaultDataset(String dataset) (in QueryJobInfo and QueryRequest) can be used to specify a default dataset (#567).
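A hedged sketch of setting a default dataset on a query request (package names as of this pre-0.2.0 release; the dataset and table names are placeholders):

```java
import com.google.gcloud.bigquery.QueryRequest;

// With a default dataset set, the query can reference "my_table"
// without qualifying it with a dataset name.
QueryRequest request = QueryRequest.builder("SELECT name FROM my_table")
    .defaultDataset("my_dataset")
    .build();
```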

Storage

  • The name of the method to submit a batch request has changed from apply to submit (#562).

Fixes

BigQuery

  • hashCode and equals are now overridden in subclasses of BaseTableInfo (#565, #573).
  • jobComplete is renamed to jobCompleted in QueryResult (#567).

Datastore

  • The precondition check that cursors are UTF-8 strings has been removed (#578).

  • EntityQuery, KeyQuery, and ProjectionEntityQuery classes have been introduced (#585). This enables users to modify projections and group-by clauses for projection entity queries after using toBuilder(). For example, this now works:

    ProjectionEntityQuery query = Query.projectionEntityQueryBuilder()
        .kind("Person")
        .projection(Projection.property("name"))
        .build();
    ProjectionEntityQuery newQuery =
        query.toBuilder().projection(Projection.property("favorite_food")).build();

0.1.2

16 Jan 01:17

Features

Core

  • By default, requests are now retried (#547).

    For example:

    // Use the default retry strategy
    Storage storageWithRetries = StorageOptions.defaultInstance().service();
    
    // Don't use retries
    Storage storageWithoutRetries = StorageOptions.builder()
        .retryParams(RetryParams.noRetries())
        .build()
        .service();

Fixes

Datastore

  • QueryResults.cursorAfter() is now set when all results from a query have been exhausted (#549).

    When running large queries, users may see Datastore-internal errors with code 500 due to a Datastore issue. This issue will be fixed in the next version of Datastore. Until then, users can set a limit on their query and use the cursor to get more results in subsequent queries. Here is an example:

    int limit = 100;
    StructuredQuery<Entity> query = Query.entityQueryBuilder()
        .kind("user")
        .limit(limit)
        .build();
    while (true) {
      QueryResults<Entity> results = datastore.run(query);
      int resultCount = 0;
      while (results.hasNext()) {
        Entity result = results.next(); // consume all results
        // do something with the result
        resultCount++;
      }
      if (resultCount < limit) {
        break;
      }
      query = query.toBuilder().startCursor(results.cursorAfter()).build();
    }
  • load is renamed to get in functional classes (#535).