Releases: googleapis/google-cloud-java
0.2.3
Features
BigQuery
- Add support for the `BYTES` datatype. A field of type `BYTES` can be created by using `Field.Value.bytes()`. The `byte[] bytesValue()` method is added to `FieldValue` to return the value of a field as a byte array.
- A `Job waitFor(WaitForOption... waitOptions)` method is added to the `Job` class. This method waits for the job to complete and returns the job's updated information:
Job completedJob = job.waitFor();
if (completedJob == null) {
// job no longer exists
} else if (completedJob.status().error() != null) {
// job failed, handle error
} else {
// job completed successfully
}
By default, the job status is checked every 500 milliseconds. To configure this value, use `WaitForOption.checkEvery(long, TimeUnit)`; `WaitForOption.timeout(long, TimeUnit)` sets the maximum time to wait.
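For example, a minimal sketch combining both options (the interval and timeout values are illustrative; `TimeUnit` is `java.util.concurrent.TimeUnit`):
Job completedJob = job.waitFor(
    WaitForOption.checkEvery(1, TimeUnit.SECONDS),  // poll every second instead of every 500 ms
    WaitForOption.timeout(60, TimeUnit.SECONDS));   // give up after one minute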
Core
- `AuthCredentials.createFor(String)` and `AuthCredentials.createFor(String, Date)` methods have been added to create `AuthCredentials` objects given an OAuth2 access token (and possibly its expiration date).
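For example, a minimal sketch (the token string and the `expirationDate` variable are placeholders):
// Credentials from a bare OAuth2 access token
AuthCredentials credentials = AuthCredentials.createFor("access-token");
// Credentials from a token plus its expiration date (a java.util.Date)
AuthCredentials expiringCredentials = AuthCredentials.createFor("access-token", expirationDate);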
Compute
- An `Operation waitFor(WaitForOption... waitOptions)` method is added to the `Operation` class. This method waits for the operation to complete and returns the operation's updated information:
Operation completedOperation = operation.waitFor();
if (completedOperation == null) {
// operation no longer exists
} else if (completedOperation.errors() != null) {
// operation failed, handle error
} else {
// operation completed successfully
}
By default, the operation status is checked every 500 milliseconds. To configure this value, use `WaitForOption.checkEvery(long, TimeUnit)`; `WaitForOption.timeout(long, TimeUnit)` sets the maximum time to wait.
Datastore
- `Datastore.put` and `DatastoreBatchWriter.put` now support entities with incomplete keys. Both `put` methods return the newly created/updated entities. A `putWithDeferredIdAllocation` method has also been added to `DatastoreBatchWriter`.
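A minimal sketch of storing an entity with an incomplete key (the kind and property names are illustrative):
// An incomplete key: the service allocates the id on put
KeyFactory keyFactory = datastore.newKeyFactory().kind("Task");
FullEntity<IncompleteKey> task = FullEntity.builder(keyFactory.newKey())
    .set("description", "buy milk")
    .build();
Entity saved = datastore.put(task); // the returned entity carries the allocated complete key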
Fixes
Storage
0.2.2
Features
Core
- The `Clock` abstract class is moved out of `ServiceOptions`. `ServiceOptions.clock()` is now used by `RetryHelper` in all service calls. This enables mocking the `Clock` source used for retries when testing your code.
Storage
- Refactor storage batches to use the common `BatchResult` class. Sending batch requests in Storage is now as simple as in DNS. See the following example of sending a batch request:
StorageBatch batch = storage.batch();
BlobId firstBlob = BlobId.of("bucket", "blob1");
BlobId secondBlob = BlobId.of("bucket", "blob2");
BlobId thirdBlob = BlobId.of("bucket", "blob3");
// Users can either register a callback on an operation
batch.delete(firstBlob).notify(new BatchResult.Callback<Boolean, StorageException>() {
@Override
public void success(Boolean result) {
// handle delete result
}
@Override
public void error(StorageException exception) {
// handle exception
}
});
// Ignore its result
batch.update(BlobInfo.builder(secondBlob).contentType("text/plain").build());
StorageBatchResult<Blob> result = batch.get(thirdBlob);
batch.submit();
// Or get the result
Blob blob = result.get(); // returns the operation's result or throws StorageException
Fixes
Datastore
- Update the Datastore client to accept IP addresses for localhost (#1002).
- `LocalDatastoreHelper` now uses HTTPS to download the emulator - thanks to @pehrs (#942).
- Add an example on embedded entities to `DatastoreExample` (#980).
Storage
- Fix `StorageImpl.signUrl` for blob names that start with "/" - thanks to @clementdenis (#1013).
- Fix `readAllBytes` permission error on Google App Engine (#1010).
0.2.1
Features
Compute
`gcloud-java-compute`, a new client library to interact with Google Compute Engine, is released and is in alpha. See the docs for more information, ComputeExample for a complete example, or the API Documentation for the `gcloud-java-compute` javadoc.
The following snippet shows how to create a region external IP address, a persistent boot disk, and a virtual machine instance that uses both the IP address and the persistent disk. See CreateAddressDiskAndInstance.java for the full source code.
// Create a service object
// Credentials are inferred from the environment.
Compute compute = ComputeOptions.defaultInstance().service();
// Create an external region address
RegionAddressId addressId = RegionAddressId.of("us-central1", "test-address");
Operation operation = compute.create(AddressInfo.of(addressId));
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Address " + addressId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Address creation failed");
}
// Create a persistent disk
ImageId imageId = ImageId.of("debian-cloud", "debian-8-jessie-v20160329");
DiskId diskId = DiskId.of("us-central1-a", "test-disk");
ImageDiskConfiguration diskConfiguration = ImageDiskConfiguration.of(imageId);
DiskInfo disk = DiskInfo.of(diskId, diskConfiguration);
operation = compute.create(disk);
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Disk " + diskId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Disk creation failed");
}
// Create a virtual machine instance
Address externalIp = compute.getAddress(addressId);
InstanceId instanceId = InstanceId.of("us-central1-a", "test-instance");
NetworkId networkId = NetworkId.of("default");
PersistentDiskConfiguration attachConfiguration =
PersistentDiskConfiguration.builder(diskId).boot(true).build();
AttachedDisk attachedDisk = AttachedDisk.of("dev0", attachConfiguration);
NetworkInterface networkInterface = NetworkInterface.builder(networkId)
.accessConfigurations(AccessConfig.of(externalIp.address()))
.build();
MachineTypeId machineTypeId = MachineTypeId.of("us-central1-a", "n1-standard-1");
InstanceInfo instance =
InstanceInfo.of(instanceId, machineTypeId, attachedDisk, networkInterface);
operation = compute.create(instance);
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Instance " + instanceId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Instance creation failed");
}
Datastore
- An `options(String namespace)` method has been added to `LocalDatastoreHelper`, allowing you to create testing options for a specific namespace (#936).
- `of` methods have been added to `ListValue` to support specific types (`String`, `long`, `double`, `boolean`, `DateTime`, `LatLng`, `Key`, `FullEntity`, and `Blob`). `addValue` methods have been added to `ListValue.Builder` to support the same set of specific types (#934).
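A quick sketch of the new typed methods (the values are illustrative):
// of: build a list value directly from typed varargs
ListValue fromOf = ListValue.of("a", "b", "c");
// addValue: mix typed values through the builder
ListValue fromBuilder = ListValue.builder()
    .addValue("a")
    .addValue(1L)
    .addValue(true)
    .build();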
DNS
- Support for batches has been added to `gcloud-java-dns` (#940). Batches allow you to perform a number of operations in a single RPC request.
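For example, a minimal sketch (assuming a batch API shaped like the Storage batch shown under 0.2.2; the zone name is illustrative):
DnsBatch batch = dns.batch();
DnsBatchResult<Zone> zoneResult = batch.getZone("my-zone-name");
batch.submit();
Zone zone = zoneResult.get(); // the result becomes available once submit() has run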
Fixes
Core
- The causing exception is now chained in `BaseServiceException.getCause()` (#774).
0.2.0
Features
General
`gcloud-java` has been repackaged. `com.google.gcloud` has now changed to `com.google.cloud`, and we're releasing our artifacts on Maven under the group ID `com.google.cloud` rather than `com.google.gcloud`. The new way to add our library as a dependency in your project is as follows:
If you're using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>gcloud-java</artifactId>
<version>0.2.0</version>
</dependency>
If you are using Gradle, add this to your dependencies
compile 'com.google.cloud:gcloud-java:0.2.0'
If you are using SBT, add this to your dependencies
libraryDependencies += "com.google.cloud" % "gcloud-java" % "0.2.0"
Storage
- The interface `ServiceAccountSigner` was added. Both `AppEngineAuthCredentials` and `ServiceAccountAuthCredentials` extend this interface and can be used to sign Google Cloud Storage blob URLs (#701, #854).
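A minimal sketch of signing a blob URL, assuming a `SignUrlOption.signWith` hook for this interface (bucket, blob, and duration are illustrative; `signer` stands for an `AppEngineAuthCredentials` or `ServiceAccountAuthCredentials` instance):
URL signedUrl = storage.signUrl(
    BlobInfo.builder("bucket", "blob").build(),
    14, TimeUnit.DAYS,               // java.util.concurrent.TimeUnit
    SignUrlOption.signWith(signer)); // signer implements ServiceAccountSigner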
Fixes
General
- The default RPC retry parameters were changed to align with the backoff policy requirements listed in the Service Level Agreements (SLAs) for Cloud BigQuery, Cloud Datastore, and Cloud Storage (#857, #860).
- The expiration date is now properly populated for App Engine credentials (#873, #894).
- `gcloud-java` now uses the project ID given in the credentials file specified by the environment variable `GOOGLE_APPLICATION_CREDENTIALS` (if set) (#845).
BigQuery
- `Job`'s `isDone` method is fixed to return `true` if the job is complete or the job doesn't exist (#853).
Datastore
- `LocalGcdHelper` has been renamed to `LocalDatastoreHelper`, and the command-line startup/shutdown of the helper has been removed. The helper is now more consistent with other modules' test helpers and can be used via the `create`, `start`, and `stop` methods (#821); see the sketch after this list.
- `ListValue` no longer rejects empty lists, since Cloud Datastore v1beta3 supports empty array values (#862).
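A sketch of the new helper lifecycle (assuming `options()` returns emulator-ready `DatastoreOptions`):
LocalDatastoreHelper helper = LocalDatastoreHelper.create();
helper.start();                                   // launches the local Datastore emulator
Datastore datastore = helper.options().service(); // options point at the emulator
// ... run tests against the emulator ...
helper.stop();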
DNS
- There were some minor changes to `ChangeRequest`, namely adding `reload`/`isDone` methods and changing the method signature of `applyTo` (#849).
Storage
- `RemoteGcsHelper` was renamed to `RemoteStorageHelper` to be more consistent with other modules' test helpers (#821).
0.1.7
Features
Datastore
`gcloud-java-datastore` now uses Cloud Datastore v1beta3. You can read more about the updates in Datastore v1beta3 here. Note that to use this new API, you may have to re-enable the Google Cloud Datastore API in the Developers Console. The following API changes are coupled with this update:
- Entity-related changes:
  - Entities are indexed by default, and `indexed` has been changed to `excludeFromIndexes`. Properties of type `EntityValue` and type `ListValue` can now be indexed. Moreover, indexing and querying properties inside of entity values is now supported. Values inside entity values are indexed by default.
  - `LatLng` and `LatLngValue`, representing the new property type for latitude and longitude, are added.
  - The getter for a value's `meaning` has been made package scope instead of public, as it is a deprecated field.
- Read/write-related changes:
  - Force writes have been removed. Since force writes were the only existing option in batch and transaction options, the `BatchOption` and `TransactionOption` classes are now removed.
  - `ReadOption` is added to allow users to specify eventual consistency on Datastore reads (see the sketch after this list). This can be a useful optimization when strongly consistent results for `get`/`fetch` or ancestor queries aren't necessary.
- Query-related changes:
  - `QueryResults.cursorAfter()` is updated to point to the position after the last consumed result. In v1beta2, `cursorAfter` was only updated after all results were consumed.
  - `groupBy` is replaced by `distinctOn`.
  - The `Projection` class in `StructuredQuery` is replaced with a string representing the property name. Aggregation functions are removed.
  - There are changes in GQL syntax:
    - In the synthetic literal KEY, DATASET is now PROJECT.
    - The BLOBKEY synthetic literal is removed.
    - The FIRST aggregator is removed.
    - The GROUP BY clause is replaced with DISTINCT ON.
    - Fully-qualified property names are now supported.
    - Query filters on timestamps prior to the epoch are now supported.
- Other miscellaneous changes:
  - The "userinfo.email" authentication scope is no longer required. This means you don't need to enable that permission when creating new instances on Google Compute Engine to use `gcloud-java-datastore`.
  - The default value for namespace is now an empty string rather than null.
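As referenced in the read/write changes above, a minimal sketch of an eventually consistent read (construction of `datastore` and `key` elided):
// May return slightly stale data, but avoids the cost of a strongly consistent read
Entity entity = datastore.get(key, ReadOption.eventualConsistency());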
Fixes
General
- In `gcloud-java-bigquery`, `gcloud-java-dns`, and `gcloud-java-storage`, the field `id()` has been renamed to `generatedId` for classes that are assigned ids from the service.
Datastore
- Issue #548 (internal errors when trying to load large numbers of entities without setting a limit) is fixed. The workaround mentioned in that issue is no longer necessary.
0.1.6
Features
DNS
`gcloud-java-dns`, a new client library to interact with Google Cloud DNS, is released and is in alpha. See the docs for more information and samples.
Resource Manager
- Project-level IAM (Identity and Access Management) functionality is now available. See docs and example code here.
Fixes
BigQuery
- `startPageToken` is now called `pageToken` (#774) and `maxResults` is now called `pageSize` (#745) to be consistent with page-based listing methods in other `gcloud-java` modules.
Storage
- The default content type, once a required field for bucket creation and copying/composing blobs, is now removed (#288, #762).
- A new boolean `overrideInfo` is added to copy requests to denote whether metadata should be overridden (#762).
- `startPageToken` is now called `pageToken` (#774) and `maxResults` is now called `pageSize` (#745) to be consistent with page-based listing methods in other `gcloud-java` modules.
0.1.5
Features
Storage
- Add a `versions(boolean versions)` option to `BlobListOption` to enable/disable versioned `Blob` listing. If enabled, all versions of an object are returned as distinct results (#688); see the sketch after this list.
- `BlobTargetOption` and `BlobWriteOption` classes are added to `Bucket` to allow setting options for `create` methods (#705).
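A sketch of listing every version of the blobs in a bucket (the bucket name is illustrative):
Page<Blob> blobs = storage.list("my-bucket", BlobListOption.versions(true));
Iterator<Blob> blobIterator = blobs.iterateAll(); // each object version is a distinct Blob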
Fixes
BigQuery
- Fix pagination when listing tables and datasets with selected fields (#668).
Core
- Fix an authentication issue when using revoked Cloud SDK credentials with local test helpers. The `NoAuthCredentials` class is added, along with the `AuthCredentials.noAuth()` method, to be used when testing services against local emulators (#719).
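For example, a sketch of wiring the no-op credentials into service options for a local emulator (the project ID and host are illustrative):
StorageOptions options = StorageOptions.builder()
    .projectId("test-project")
    .host("http://localhost:8080")             // the local emulator's address
    .authCredentials(AuthCredentials.noAuth()) // skip real authentication
    .build();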
Storage
- Fix pagination when listing blobs and buckets with selected fields (#668).
- Fix wrong usage of `Storage.BlobTargetOption` and `Storage.BlobWriteOption` in `Bucket`'s `create` methods. New classes (`Bucket.BlobTargetOption` and `Bucket.BlobWriteOption`) are added to provide options to `Bucket.create` (#705).
- Fix a "Failed to parse Content-Range header" error when `BlobWriteChannel` writes a blob whose size is a multiple of the chunk size used (#725).
- Fix an NPE when reading with `BlobReadChannel` a blob whose size is a multiple of the chunk/buffer size (#725).
0.1.4
Features
BigQuery
- The `JobInfo` and `TableInfo` class hierarchies are flattened (#584, #600). Instead, `JobInfo` contains a field `JobConfiguration`, which is subclassed to provide configurations for different types of jobs. Likewise, `TableInfo` contains a new field `TableDefinition`, which is subclassed to provide table settings depending on the table type.
- Functional classes (`Job`, `Table`, `Dataset`) now extend their associated metadata classes (`JobInfo`, `TableInfo`, `DatasetInfo`) (#530, #609). The `BigQuery` service methods now return functional objects instead of the metadata objects.
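A sketch combining both changes: creating a query job through the new configuration hierarchy and getting back a functional `Job` (the `QueryJobConfiguration` name and the query text are assumptions for illustration):
JobInfo jobInfo = JobInfo.of(QueryJobConfiguration.of("SELECT field FROM my_dataset.my_table"));
Job job = bigquery.create(jobInfo); // returns the functional Job, not just JobInfo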
Datastore
- Setting list properties containing values of a single type is more concise (#640, #648). For example, to set a list of string values as a property on an entity, you previously had to type:
someEntity.set("someStringListProperty", StringValue.of("a"), StringValue.of("b"), StringValue.of("c"));
Now you can set the property using:
someEntity.set("someStringListProperty", "a", "b", "c");
- There is now a more concise way to get the parent of an entity key (#640, #648):
Key parentOfCompleteKey = someKey.parent();
- The consistency setting (defaults to 0.9 both before and after this change) can be set in `LocalGcdHelper` (#639, #648).
- You no longer have to cast or use the unknown type when getting a `ListValue` from an entity (#648). Now you can use something like the following to get a list of double values:
List<DoubleValue> doublesList = someEntity.get("myDoublesList");
ResourceManager
- Paging for the `ResourceManager` `list` method is now supported (#651); see the sketch after this list.
- `Project` is now a subclass of `ProjectInfo` (#530). The `ResourceManager` service methods now return `Project` instead of `ProjectInfo`.
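For example, a sketch of paging through all projects:
Page<Project> page = resourceManager.list();
Iterator<Project> projectIterator = page.iterateAll(); // fetches further pages lazily
while (projectIterator.hasNext()) {
  Project project = projectIterator.next();
  // do something with the project
}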
Storage
- Functional classes (`Bucket`, `Blob`) now extend their associated metadata classes (`BucketInfo`, `BlobInfo`) (#530, #603, #614). The `Storage` service methods now return functional objects instead of metadata objects.
Fixes
BigQuery
0.1.3
Features
BigQuery
- Resumable uploads via write channel are now supported (#540). An example of uploading a CSV file in chunks of CHUNK_SIZE bytes:
try (FileChannel fileChannel = FileChannel.open(Paths.get("/path/to/your/file"))) {
  ByteBuffer buffer = ByteBuffer.allocate(256 * 1024);
  TableId tableId = TableId.of("YourDataset", "YourTable");
  LoadConfiguration configuration = LoadConfiguration.of(tableId, FormatOptions.of("CSV"));
  WriteChannel writeChannel = bigquery.writer(configuration);
  long position = 0;
  long written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
  while (written > 0) {
    position += written;
    written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
  }
  writeChannel.close();
}
- `defaultDataset(String dataset)` (in `QueryJobInfo` and `QueryRequest`) can be used to specify a default dataset (#567).
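For example, a minimal sketch with `QueryRequest` (the query text and dataset name are illustrative):
QueryRequest request = QueryRequest.builder("SELECT name FROM my_table")
    .defaultDataset("my_dataset") // unqualified table names resolve against this dataset
    .build();
QueryResponse response = bigquery.query(request);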
Storage
- The name of the method to submit a batch request has changed from `apply` to `submit` (#562).
Fixes
BigQuery
- `hashCode` and `equals` are now overridden in subclasses of `BaseTableInfo` (#565, #573).
- `jobComplete` is renamed to `jobCompleted` in `QueryResult` (#567).
Datastore
- The precondition check that cursors are UTF-8 strings has been removed (#578).
- `EntityQuery`, `KeyQuery`, and `ProjectionEntityQuery` classes have been introduced (#585). This enables users to modify projections and group-by clauses for projection entity queries after using `toBuilder()`. For example, this now works:
ProjectionEntityQuery query = Query.projectionEntityQueryBuilder()
    .kind("Person")
    .projection(Projection.property("name"))
    .build();
ProjectionEntityQuery newQuery =
    query.toBuilder().projection(Projection.property("favorite_food")).build();
0.1.2
Features
Core
- By default, requests are now retried (#547). For example:
// Use the default retry strategy
Storage storageWithRetries = StorageOptions.defaultInstance().service();
// Don't use retries
Storage storageWithoutRetries = StorageOptions.builder()
    .retryParams(RetryParams.noRetries())
    .build()
    .service();
BigQuery
- Functional classes for datasets, jobs, and tables are added (#516).
- Query Plan is now supported (#523).
- Template suffix is now supported (#514).
Fixes
Datastore
- `QueryResults.cursorAfter()` is now set when all results from a query have been exhausted (#549). When running large queries, users may see Datastore-internal errors with code 500 due to a Datastore issue. This issue will be fixed in the next version of Datastore. Until then, users can set a limit on their query and use the cursor to get more results in subsequent queries. Here is an example:
int limit = 100;
StructuredQuery<Entity> query = Query.entityQueryBuilder()
    .kind("user")
    .limit(limit)
    .build();
while (true) {
  QueryResults<Entity> results = datastore.run(query);
  int resultCount = 0;
  while (results.hasNext()) {
    Entity result = results.next(); // consume all results
    // do something with the result
    resultCount++;
  }
  if (resultCount < limit) {
    break;
  }
  query = query.toBuilder().startCursor(results.cursorAfter()).build();
}
- `load` is renamed to `get` in functional classes (#535).