Spring Cloud GCP

Authors

João André Martins, Jisha Abubaker, Ray Tsang, Mike Eltsufin, Artem Bilan, Andreas Berger, Balint Pato, Chengyuan Zhao, Dmitry Solomakha, Elena Felder, Daniel Zou

Table of Contents

1. Introduction
2. Dependency Management
3. Getting started
3.1. Spring Initializr
3.1.1. GCP Support
3.1.2. GCP Messaging
3.1.3. GCP Storage
3.2. Code Samples
3.3. Code Challenges
3.4. Getting Started Guides
4. Spring Cloud GCP Core
4.1. Project ID
4.2. Credentials
4.2.1. Scopes
4.3. Environment
4.4. Spring Initializr
5. Google Cloud Pub/Sub
5.1. Pub/Sub Operations & Template
5.1.1. Publishing to a topic
JSON support
5.1.2. Subscribing to a subscription
5.1.3. Pulling messages from a subscription
5.2. Pub/Sub management
5.2.1. Creating a topic
5.2.2. Deleting a topic
5.2.3. Listing topics
5.2.4. Creating a subscription
5.2.5. Deleting a subscription
5.2.6. Listing subscriptions
5.3. Configuration
5.4. Sample
6. Spring Resources
6.1. Google Cloud Storage
6.1.1. Setting the Content Type
6.2. Configuration
6.3. Sample
7. Spring JDBC
7.1. Prerequisites
7.2. Spring Boot Starter for Google Cloud SQL
7.2.1. DataSource creation flow
7.2.2. Troubleshooting tips
Connection issues
Errors like c.g.cloud.sql.core.SslSocketFactory : Re-throwing cached exception due to attempt to refresh instance information too soon after error
PostgreSQL: java.net.SocketException: already connected issue
7.3. Samples
8. Spring Integration
8.1. Channel Adapters for Cloud Pub/Sub
8.1.1. Inbound channel adapter
8.1.2. Outbound channel adapter
8.1.3. Header mapping
8.2. Sample
8.3. Channel Adapters for Google Cloud Storage
8.3.1. Inbound channel adapter
8.3.2. Inbound streaming channel adapter
8.3.3. Outbound channel adapter
8.4. Sample
9. Spring Cloud Stream
9.1. Overview
9.2. Configuration
9.2.1. Producer Destination Configuration
9.2.2. Consumer Destination Configuration
9.3. Sample
10. Spring Cloud Sleuth
10.1. Tracing
10.2. Spring Boot Starter for Stackdriver Trace
10.3. Integration with Logging
10.4. Sample
11. Stackdriver Logging
11.1. Web MVC Interceptor
11.2. Logback Support
11.2.1. Log via API
11.2.2. Log via Console
11.3. Sample
12. Spring Cloud Config
12.1. Configuration
12.2. Quick start
12.3. Refreshing the configuration at runtime
12.4. Sample
13. Spring Data Cloud Spanner
13.1. Configuration
13.1.1. Cloud Spanner settings
13.1.2. Repository settings
13.1.3. Autoconfiguration
13.2. Object Mapping
13.2.1. Constructors
13.2.2. Table
SpEL expressions for table names
13.2.3. Primary Keys
13.2.4. Columns
13.2.5. Embedded Objects
13.2.6. Relationships
13.2.7. Supported Types
13.2.8. Lists
13.2.9. Lists of Structs
13.2.10. Custom types
13.2.11. Custom Converter for Struct Array Columns
13.3. Spanner Operations & Template
13.3.1. SQL Query
13.3.2. Read
13.3.3. Advanced reads
Stale read
Read from a secondary index
Read with offsets and limits
Sorting
Partial read
Summary of options for Query vs Read
13.3.4. Write / Update
Insert
Update
Upsert
Partial Update
13.3.5. DML
13.3.6. Transactions
Read/Write Transaction
Read-only Transaction
Declarative Transactions with @Transactional Annotation
13.3.7. DML Statements
13.4. Repositories
13.4.1. CRUD Repository
13.4.2. Paging and Sorting Repository
13.4.3. Spanner Repository
13.5. Query Methods
13.5.1. Query methods by convention
13.5.2. Custom SQL/DML query methods
Query methods with named queries properties
Query methods with annotation
13.5.3. Projections
13.5.4. REST Repositories
13.6. Database and Schema Admin
13.7. Sample
14. Spring Data Cloud Datastore
14.1. Configuration
14.1.1. Cloud Datastore settings
14.1.2. Repository settings
14.1.3. Autoconfiguration
14.2. Object Mapping
14.2.1. Constructors
14.2.2. Kind
14.2.3. Keys
14.2.4. Fields
14.2.5. Supported Types
14.2.6. Custom types
14.2.7. Collections and arrays
14.2.8. Custom Converter for collections
14.3. Relationships
14.3.1. Embedded Entities
Maps
14.3.2. Ancestor-Descendant Relationships
14.3.3. Key Reference Relationships
14.4. Datastore Operations & Template
14.4.1. GQL Query
14.4.2. Find by ID(s)
Indexes
Read with offsets, limits, and sorting
Partial read
14.4.3. Write / Update
Partial Update
14.4.4. Transactions
Declarative Transactions with @Transactional Annotation
14.4.5. Read-Write Support for Maps
14.5. Repositories
14.5.1. Query methods by convention
14.5.2. Custom GQL query methods
Query methods with annotation
Query methods with named queries properties
14.5.3. Transactions
14.5.4. Projections
14.5.5. REST Repositories
14.6. Sample
15. Cloud Memorystore for Redis
15.1. Spring Caching
16. Cloud Identity-Aware Proxy (IAP) Authentication
16.1. Configuration
16.2. Sample
17. Google Cloud Vision
17.1. Cloud Vision Template
17.2. Detect Image Labels Example
17.3. Sample
18. Cloud Foundry
19. Kotlin Support
19.1. Prerequisites
20. Sample

1. Introduction

The Spring Cloud GCP project makes the Spring Framework a first-class citizen of Google Cloud Platform (GCP).

Spring Cloud GCP lets you leverage the power and simplicity of the Spring Framework to:

  1. Analyze your images for text, objects, and other content with Google Cloud Vision
  2. Use Spring Security via Google Cloud IAP
  3. Map objects, relationships, and collections with Spring Data Cloud Spanner and Spring Data Cloud Datastore
  4. Publish and subscribe to Google Cloud Pub/Sub topics
  5. Configure Spring JDBC with a few properties to use Google Cloud SQL
  6. Write to and read from Spring Resources backed by Google Cloud Storage
  7. Exchange messages with Spring Integration using Google Cloud Pub/Sub in the background
  8. Trace the execution of your app with Spring Cloud Sleuth and Google Stackdriver Trace
  9. Configure your app with Spring Cloud Config, backed by the Google Runtime Configuration API
  10. Consume and produce Google Cloud Storage data via Spring Integration GCS Channel Adapters

2. Dependency Management

The Spring Cloud GCP Bill of Materials (BOM) contains the versions of all the dependencies it uses.

If you’re a Maven user, add the following to your pom.xml file so that you don’t have to specify versions for individual Spring Cloud GCP dependencies. Instead, the version of the BOM you’re using determines the versions of the managed dependencies.

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-gcp-dependencies</artifactId>
            <version>{project-version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

The following sections assume you are using the Spring Cloud GCP BOM, so the dependency snippets do not contain versions.

Gradle users can achieve the same kind of BOM experience using Spring’s dependency-management-plugin Gradle plugin. For simplicity, the Gradle dependency snippets in the remainder of this document also omit versions.
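For example, a minimal sketch of importing the BOM with that plugin (this assumes the io.spring.dependency-management plugin is already on your buildscript classpath; {project-version} is the same placeholder used in the Maven snippet above):

apply plugin: 'io.spring.dependency-management'

dependencyManagement {
    imports {
        mavenBom 'org.springframework.cloud:spring-cloud-gcp-dependencies:{project-version}'
    }
}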

3. Getting started

There are many available resources to get you up to speed with our libraries as quickly as possible.

3.1 Spring Initializr

There are three entries in Spring Initializr for Spring Cloud GCP.

3.1.1 GCP Support

The GCP Support entry contains auto-configuration support for every Spring Cloud GCP integration. Most of the autoconfiguration code is only enabled if other dependencies are added to the classpath.

| Spring Cloud GCP Starter | Required dependencies |
|---|---|
| Config | org.springframework.cloud:spring-cloud-gcp-starter-config |
| Cloud Spanner | org.springframework.cloud:spring-cloud-gcp-starter-data-spanner |
| Cloud Datastore | org.springframework.cloud:spring-cloud-gcp-starter-data-datastore |
| Logging | org.springframework.cloud:spring-cloud-gcp-starter-logging |
| SQL - MySQL | org.springframework.cloud:spring-cloud-gcp-starter-sql-mysql |
| SQL - PostgreSQL | org.springframework.cloud:spring-cloud-gcp-starter-sql-postgresql |
| Trace | org.springframework.cloud:spring-cloud-gcp-starter-trace |
| Vision | org.springframework.cloud:spring-cloud-gcp-starter-vision |
| Security - IAP | org.springframework.cloud:spring-cloud-gcp-starter-security-iap |

3.1.2 GCP Messaging

The GCP Messaging entry adds the GCP Support entry and all the required dependencies so that the Google Cloud Pub/Sub integrations work out of the box.

3.1.3 GCP Storage

The GCP Storage entry adds the GCP Support entry and all the required dependencies so that the Google Cloud Storage integrations work out of the box.

3.2 Code Samples

There are code samples available that demonstrate the usage of all our integrations.

For example, the Vision API sample shows how to use spring-cloud-gcp-starter-vision to automatically configure Vision API clients.

3.3 Code Challenges

In a code challenge, you perform a task step by step, using one integration. There are a number of challenges available on the Google Developers Codelabs page.

3.4 Getting Started Guides

A Spring Getting Started guide on messaging with Spring Integration Channel Adapters for Google Cloud Pub/Sub is available from Spring Guides.

4. Spring Cloud GCP Core

Each Spring Cloud GCP module uses GcpProjectIdProvider and CredentialsProvider to get the GCP project ID and access credentials.

Spring Cloud GCP provides a Spring Boot starter to auto-configure the core components.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter'
}

4.1 Project ID

GcpProjectIdProvider is a functional interface that returns a GCP project ID string.

public interface GcpProjectIdProvider {
	String getProjectId();
}

The Spring Cloud GCP starter auto-configures a GcpProjectIdProvider. If a spring.cloud.gcp.project-id property is specified, the provided GcpProjectIdProvider returns that property value.

spring.cloud.gcp.project-id=my-gcp-project-id

Otherwise, the project ID is discovered based on an ordered list of rules:

  1. The project ID specified by the GOOGLE_CLOUD_PROJECT environment variable
  2. The Google App Engine project ID
  3. The project ID specified in the JSON credentials file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable
  4. The Google Cloud SDK project ID
  5. The Google Compute Engine project ID, from the Google Compute Engine Metadata Server
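If neither the property nor the discovery rules fit your needs, you can declare your own GcpProjectIdProvider bean. The following is a minimal sketch; it assumes the starter’s auto-configured provider backs off when a user-defined bean is present, and the hard-coded project ID is only an illustration:

import org.springframework.cloud.gcp.core.GcpProjectIdProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ProjectIdConfiguration {

    // GcpProjectIdProvider is a functional interface, so a lambda suffices.
    @Bean
    public GcpProjectIdProvider gcpProjectIdProvider() {
        return () -> "my-gcp-project-id";
    }
}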

4.2 Credentials

CredentialsProvider is a functional interface that returns the credentials to authenticate and authorize calls to Google Cloud Client Libraries.

public interface CredentialsProvider {
  Credentials getCredentials() throws IOException;
}

The Spring Cloud GCP starter auto-configures a CredentialsProvider. It uses the spring.cloud.gcp.credentials.location property to locate the OAuth2 private key of a Google service account. Keep in mind this property is a Spring Resource, so the credentials file can be obtained from a number of different locations such as the file system, classpath, URL, etc. The next example specifies the credentials location property in the file system.

spring.cloud.gcp.credentials.location=file:/usr/local/key.json

Alternatively, you can set the credentials by directly specifying the spring.cloud.gcp.credentials.encoded-key property. The value should be the base64-encoded account private key in JSON format.

If credentials aren’t specified through properties, the starter tries to discover credentials from a number of places:

  1. Credentials file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable
  2. Credentials provided by the Google Cloud SDK gcloud auth application-default login command
  3. Google App Engine built-in credentials
  4. Google Cloud Shell built-in credentials
  5. Google Compute Engine built-in credentials

If your app is running on Google App Engine or Google Compute Engine, in most cases you should omit the spring.cloud.gcp.credentials.location property and instead let the Spring Cloud GCP Starter obtain the correct credentials for those environments. On App Engine Standard, the App Identity service account credentials are used; on App Engine Flexible, the Flexible service account credentials are used; and on Google Compute Engine, the Compute Engine Default Service Account is used.

4.2.1 Scopes

By default, the credentials provided by the Spring Cloud GCP Starter contain scopes for every service supported by Spring Cloud GCP.

The Spring Cloud GCP starter allows you to configure a custom scope list for the provided credentials. To do that, specify a comma-delimited list of Google OAuth2 scopes in the spring.cloud.gcp.credentials.scopes property. These are the scopes that the credentials returned by the provided CredentialsProvider will support.

spring.cloud.gcp.credentials.scopes=https://www.googleapis.com/auth/pubsub,https://www.googleapis.com/auth/sqlservice.admin

You can also use the DEFAULT_SCOPES placeholder to represent the starter’s default scopes and append the additional scopes you need to add.

spring.cloud.gcp.credentials.scopes=DEFAULT_SCOPES,https://www.googleapis.com/auth/cloud-vision

4.3 Environment

GcpEnvironmentProvider is a functional interface, auto-configured by the Spring Cloud GCP starter, that returns a GcpEnvironment enum. The provider can help determine programmatically in which GCP environment (App Engine Flexible, App Engine Standard, Kubernetes Engine or Compute Engine) the application is deployed.

public interface GcpEnvironmentProvider {
	GcpEnvironment getCurrentEnvironment();
}
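For example, you could inject the provider and branch on the detected environment. This sketch assumes the package location and the GcpEnvironment constant name, which are not spelled out above:

import org.springframework.cloud.gcp.core.GcpEnvironment;
import org.springframework.cloud.gcp.core.GcpEnvironmentProvider;
import org.springframework.stereotype.Component;

@Component
public class EnvironmentAwareComponent {

    private final GcpEnvironmentProvider environmentProvider;

    public EnvironmentAwareComponent(GcpEnvironmentProvider environmentProvider) {
        this.environmentProvider = environmentProvider;
    }

    public boolean isOnComputeEngine() {
        // COMPUTE_ENGINE is assumed from the environments listed above.
        return this.environmentProvider.getCurrentEnvironment() == GcpEnvironment.COMPUTE_ENGINE;
    }
}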

4.4 Spring Initializr

This starter is available from Spring Initializr through the GCP Support entry.

5. Google Cloud Pub/Sub

Spring Cloud GCP provides an abstraction layer to publish to and subscribe from Google Cloud Pub/Sub topics and to create, list or delete Google Cloud Pub/Sub topics and subscriptions.

A Spring Boot starter is provided to auto-configure the various required Pub/Sub components.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-pubsub</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-pubsub'
}

This starter is also available from Spring Initializr through the GCP Messaging entry.

5.1 Pub/Sub Operations & Template

PubSubOperations is an abstraction that allows Spring users to use Google Cloud Pub/Sub without depending on any Google Cloud Pub/Sub API semantics. It provides the common set of operations needed to interact with Google Cloud Pub/Sub. PubSubTemplate is the default implementation of PubSubOperations and it uses the Google Cloud Java Client for Pub/Sub to interact with Google Cloud Pub/Sub.

PubSubTemplate depends on a PublisherFactory and a SubscriberFactory. The PublisherFactory provides a Google Cloud Java Client for Pub/Sub Publisher. The SubscriberFactory provides the Subscriber for asynchronous message pulling, as well as a SubscriberStub for synchronous pulling. The Spring Boot starter for GCP Pub/Sub auto-configures a PublisherFactory and SubscriberFactory with default settings and uses the GcpProjectIdProvider and CredentialsProvider auto-configured by the Spring Boot GCP starter.

The PublisherFactory implementation provided by Spring Cloud GCP Pub/Sub, DefaultPublisherFactory, caches Publisher instances by topic name, in order to optimize resource utilization.

The PubSubOperations interface is actually a combination of PubSubPublisherOperations and PubSubSubscriberOperations with the corresponding PubSubPublisherTemplate and PubSubSubscriberTemplate implementations, which can be used individually or via the composite PubSubTemplate. The rest of the documentation refers to PubSubTemplate, but the same applies to PubSubPublisherTemplate and PubSubSubscriberTemplate, depending on whether we’re talking about publishing or subscribing.

5.1.1 Publishing to a topic

PubSubTemplate provides asynchronous methods to publish messages to a Google Cloud Pub/Sub topic. The publish() method takes in a topic name to post the message to, a payload of a generic type and, optionally, a map with the message headers.

Here is an example of how to publish a message to a Google Cloud Pub/Sub topic:

public void publishMessage() {
    this.pubSubTemplate.publish("topic", "your message payload", ImmutableMap.of("key1", "val1"));
}
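Because publishing is asynchronous, publish() returns a future rather than blocking. Here is a sketch of reacting to the publish result, assuming the returned type is org.springframework.util.concurrent.ListenableFuture carrying the published message ID, and LOGGER is an SLF4J logger:

public void publishWithCallback() {
    ListenableFuture<String> future =
            this.pubSubTemplate.publish("topic", "your message payload");

    // The success callback receives the Pub/Sub message ID.
    future.addCallback(
            messageId -> LOGGER.info("Published, message id: " + messageId),
            throwable -> LOGGER.warn("Publish failed.", throwable));
}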

By default, the SimplePubSubMessageConverter is used to convert payloads of type byte[], ByteString, ByteBuffer, and String to Pub/Sub messages.

JSON support

For serialization and deserialization of POJOs using Jackson JSON, configure a JacksonPubSubMessageConverter bean, and the Spring Boot starter for GCP Pub/Sub will automatically wire it into the PubSubTemplate.

// Note: The ObjectMapper is used to convert Java POJOs to and from JSON.
// You will have to configure your own instance if you are unable to depend
// on the ObjectMapper provided by Spring Boot starters.
@Bean
public JacksonPubSubMessageConverter jacksonPubSubMessageConverter(ObjectMapper objectMapper) {
    return new JacksonPubSubMessageConverter(objectMapper);
}

Alternatively, you can set it directly by calling the setMessageConverter() method on the PubSubTemplate. Other implementations of the PubSubMessageConverter can also be configured in the same manner.
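As an illustration, once a converter is configured you can publish a POJO directly and convert it back when pulling. This sketch assumes a pullAndConvert(subscription, maxMessages, returnImmediately, payloadType) signature; TestUser is a hypothetical Jackson-serializable POJO and LOGGER an assumed logger:

// Publish a POJO; the configured converter serializes it to JSON.
this.pubSubTemplate.publish("topic", new TestUser("john"));

// Pull and deserialize back into TestUser objects.
List<ConvertedAcknowledgeablePubsubMessage<TestUser>> messages =
        this.pubSubTemplate.pullAndConvert("subscription", 1, false, TestUser.class);
messages.forEach((message) -> {
    LOGGER.info("Received: " + message.getPayload());
    message.ack();
});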

Please refer to our Pub/Sub JSON Payload Sample App as a reference for using this functionality.

5.1.2 Subscribing to a subscription

Google Cloud Pub/Sub allows many subscriptions to be associated with the same topic. PubSubTemplate allows you to listen to subscriptions via the subscribe() method. It relies on a SubscriberFactory object, whose only task is to generate Google Cloud Pub/Sub Subscriber objects. When listening to a subscription, messages are pulled from Google Cloud Pub/Sub asynchronously at a certain interval.

The Spring Boot starter for Google Cloud Pub/Sub auto-configures a SubscriberFactory.

If Pub/Sub message payload conversion is desired, you can use the subscribeAndConvert() method, which will use the converter configured in the template.
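A minimal sketch of listening to a subscription, assuming the consumer receives an acknowledgeable message wrapper (BasicAcknowledgeablePubsubMessage) as in the 1.1-style API, with LOGGER an assumed logger:

this.pubSubTemplate.subscribe("subscription", (message) -> {
    LOGGER.info("Payload: " + message.getPubsubMessage().getData().toStringUtf8());
    message.ack();
});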

5.1.3 Pulling messages from a subscription

Google Cloud Pub/Sub supports synchronous pulling of messages from a subscription. This is different from subscribing to a subscription, in the sense that subscribing is an asynchronous task which polls the subscription on a set interval.

The pullNext() method allows for a single message to be pulled and automatically acknowledged from a subscription. The pull() method pulls a number of messages from a subscription, allowing for the retry settings to be configured. Any messages received by pull() are not automatically acknowledged. Instead, since they are of the kind AcknowledgeablePubsubMessage, you can acknowledge them by calling the ack() method, or negatively acknowledge them by calling the nack() method. The pullAndAck() method does the same as the pull() method and, additionally, acknowledges all received messages.

The pullAndConvert() method does the same as the pull() method and, additionally, converts the Pub/Sub binary payload to an object of the desired type, using the converter configured in the template.

To acknowledge multiple messages received from pull() or pullAndConvert() at once, you can use the PubSubTemplate.ack() method. You can also use the PubSubTemplate.nack() for negatively acknowledging messages.

Using these methods for acknowledging messages in batches is more efficient than acknowledging messages individually, but they require the collection of messages to be from the same project.

All ack(), nack(), and modifyAckDeadline() methods on messages, as well as on PubSubSubscriberTemplate, are implemented asynchronously and return a ListenableFuture<Void> so that you can process the result of the asynchronous execution.

PubSubTemplate uses a special subscriber generated by its SubscriberFactory to synchronously pull messages.
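Putting this together, here is a sketch of pulling a batch and acknowledging it in one call (method signatures assumed as described above):

public void pullAndAckBatch() {
    // Pull up to 10 messages; don't block if fewer are available.
    List<AcknowledgeablePubsubMessage> messages =
            this.pubSubTemplate.pull("subscription", 10, true);

    // Batch acknowledgement is more efficient than acking one by one,
    // but all messages must come from the same project.
    this.pubSubTemplate.ack(messages);
}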

5.2 Pub/Sub management

PubSubAdmin is the abstraction provided by Spring Cloud GCP to manage Google Cloud Pub/Sub resources. It allows for the creation, deletion and listing of topics and subscriptions.

PubSubAdmin depends on GcpProjectIdProvider and either a CredentialsProvider or a TopicAdminClient and a SubscriptionAdminClient. If given a CredentialsProvider, it creates a TopicAdminClient and a SubscriptionAdminClient with the Google Cloud Java Library for Pub/Sub default settings. The Spring Boot starter for GCP Pub/Sub auto-configures a PubSubAdmin object using the GcpProjectIdProvider and the CredentialsProvider auto-configured by the Spring Boot GCP Core starter.

5.2.1 Creating a topic

PubSubAdmin implements a method to create topics:

public Topic createTopic(String topicName)

Here is an example of how to create a Google Cloud Pub/Sub topic:

public void newTopic() {
    pubSubAdmin.createTopic("topicName");
}

5.2.2 Deleting a topic

PubSubAdmin implements a method to delete topics:

public void deleteTopic(String topicName)

Here is an example of how to delete a Google Cloud Pub/Sub topic:

public void deleteTopic() {
    pubSubAdmin.deleteTopic("topicName");
}

5.2.3 Listing topics

PubSubAdmin implements a method to list topics:

public List<Topic> listTopics()

Here is an example of how to list every Google Cloud Pub/Sub topic name in a project:

public List<String> listTopics() {
    return pubSubAdmin
        .listTopics()
        .stream()
        .map(Topic::getNameAsTopicName)
        .map(TopicName::getTopic)
        .collect(Collectors.toList());
}

5.2.4 Creating a subscription

PubSubAdmin implements a method to create subscriptions to existing topics:

public Subscription createSubscription(String subscriptionName, String topicName, Integer ackDeadline, String pushEndpoint)

Here is an example of how to create a Google Cloud Pub/Sub subscription:

public void newSubscription() {
    pubSubAdmin.createSubscription("subscriptionName", "topicName", 10, "http://my.endpoint/push");
}

Alternative methods with default settings are provided for ease of use. The default value for ackDeadline is 10 seconds. If pushEndpoint isn’t specified, the subscription uses message pulling instead.

public Subscription createSubscription(String subscriptionName, String topicName)
public Subscription createSubscription(String subscriptionName, String topicName, Integer ackDeadline)
public Subscription createSubscription(String subscriptionName, String topicName, String pushEndpoint)

5.2.5 Deleting a subscription

PubSubAdmin implements a method to delete subscriptions:

public void deleteSubscription(String subscriptionName)

Here is an example of how to delete a Google Cloud Pub/Sub subscription:

public void deleteSubscription() {
    pubSubAdmin.deleteSubscription("subscriptionName");
}

5.2.6 Listing subscriptions

PubSubAdmin implements a method to list subscriptions:

public List<Subscription> listSubscriptions()

Here is an example of how to list every subscription name in a project:

public List<String> listSubscriptions() {
    return pubSubAdmin
        .listSubscriptions()
        .stream()
        .map(Subscription::getNameAsSubscriptionName)
        .map(SubscriptionName::getSubscription)
        .collect(Collectors.toList());
}

5.3 Configuration

The Spring Boot starter for Google Cloud Pub/Sub provides the following configuration options:

| Name | Description | Required | Default value |
|---|---|---|---|
| spring.cloud.gcp.pubsub.enabled | Enables or disables Pub/Sub auto-configuration | No | true |
| spring.cloud.gcp.pubsub.subscriber.executor-threads | Number of threads used by Subscriber instances created by SubscriberFactory | No | 4 |
| spring.cloud.gcp.pubsub.publisher.executor-threads | Number of threads used by Publisher instances created by PublisherFactory | No | 4 |
| spring.cloud.gcp.pubsub.project-id | GCP project ID where the Google Cloud Pub/Sub API is hosted, if different from the one in the Spring Cloud GCP Core Module | No |  |
| spring.cloud.gcp.pubsub.credentials.location | OAuth2 credentials for authenticating with the Google Cloud Pub/Sub API, if different from the ones in the Spring Cloud GCP Core Module | No |  |
| spring.cloud.gcp.pubsub.credentials.encoded-key | Base64-encoded contents of OAuth2 account private key for authenticating with the Google Cloud Pub/Sub API, if different from the ones in the Spring Cloud GCP Core Module | No |  |
| spring.cloud.gcp.pubsub.credentials.scopes | OAuth2 scope for Spring Cloud GCP Pub/Sub credentials | No | https://www.googleapis.com/auth/pubsub |
| spring.cloud.gcp.pubsub.subscriber.parallel-pull-count | The number of pull workers | No | The available number of processors |
| spring.cloud.gcp.pubsub.subscriber.max-ack-extension-period | The maximum period a message ack deadline will be extended, in seconds | No | 0 |
| spring.cloud.gcp.pubsub.subscriber.pull-endpoint | The endpoint for synchronously pulling messages | No | pubsub.googleapis.com:443 |
| spring.cloud.gcp.pubsub.[subscriber,publisher].retry.total-timeout-seconds | TotalTimeout has ultimate control over how long the logic should keep trying the remote call until it gives up completely. The higher the total timeout, the more retries can be attempted. | No | 0 |
| spring.cloud.gcp.pubsub.[subscriber,publisher].retry.initial-retry-delay-seconds | InitialRetryDelay controls the delay before the first retry. Subsequent retries will use this value adjusted according to the RetryDelayMultiplier. | No | 0 |
| spring.cloud.gcp.pubsub.[subscriber,publisher].retry.retry-delay-multiplier | RetryDelayMultiplier controls the change in retry delay. The retry delay of the previous call is multiplied by the RetryDelayMultiplier to calculate the retry delay for the next call. | No | 1 |
| spring.cloud.gcp.pubsub.[subscriber,publisher].retry.max-retry-delay-seconds | MaxRetryDelay puts a limit on the value of the retry delay, so that the RetryDelayMultiplier can’t increase the retry delay higher than this amount. | No | 0 |
| spring.cloud.gcp.pubsub.[subscriber,publisher].retry.max-attempts | MaxAttempts defines the maximum number of attempts to perform. If this value is greater than 0, and the number of attempts reaches this limit, the logic will give up retrying even if the total retry time is still lower than TotalTimeout. | No | 0 |
| spring.cloud.gcp.pubsub.[subscriber,publisher].retry.jittered | Jitter determines if the delay time should be randomized. | No | true |
| spring.cloud.gcp.pubsub.[subscriber,publisher].retry.initial-rpc-timeout-seconds | InitialRpcTimeout controls the timeout for the initial RPC. Subsequent calls will use this value adjusted according to the RpcTimeoutMultiplier. | No | 0 |
| spring.cloud.gcp.pubsub.[subscriber,publisher].retry.rpc-timeout-multiplier | RpcTimeoutMultiplier controls the change in RPC timeout. The timeout of the previous call is multiplied by the RpcTimeoutMultiplier to calculate the timeout for the next call. | No | 1 |
| spring.cloud.gcp.pubsub.[subscriber,publisher].retry.max-rpc-timeout-seconds | MaxRpcTimeout puts a limit on the value of the RPC timeout, so that the RpcTimeoutMultiplier can’t increase the RPC timeout higher than this amount. | No | 0 |
| spring.cloud.gcp.pubsub.[subscriber,publisher.batching].flow-control.max-outstanding-element-count | Maximum number of outstanding elements to keep in memory before enforcing flow control. | No | unlimited |
| spring.cloud.gcp.pubsub.[subscriber,publisher.batching].flow-control.max-outstanding-request-bytes | Maximum number of outstanding bytes to keep in memory before enforcing flow control. | No | unlimited |
| spring.cloud.gcp.pubsub.[subscriber,publisher.batching].flow-control.limit-exceeded-behavior | The behavior when the specified limits are exceeded. | No | Block |
| spring.cloud.gcp.pubsub.publisher.batching.element-count-threshold | The element count threshold to use for batching. | No | unset (threshold does not apply) |
| spring.cloud.gcp.pubsub.publisher.batching.request-byte-threshold | The request byte threshold to use for batching. | No | unset (threshold does not apply) |
| spring.cloud.gcp.pubsub.publisher.batching.delay-threshold-seconds | The delay threshold to use for batching. After this amount of time has elapsed (counting from the first element added), the elements will be wrapped up in a batch and sent. | No | unset (threshold does not apply) |
| spring.cloud.gcp.pubsub.publisher.batching.enabled | Enables batching. | No | false |

5.4 Sample

A sample application is available.

6. Spring Resources

Spring Resources are an abstraction for a number of low-level resources, such as file system files, classpath files, servlet context-relative files, etc. Spring Cloud GCP adds a new resource type: a Google Cloud Storage (GCS) object.

A Spring Boot starter is provided to auto-configure the various Storage components.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-storage</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-storage'
}

This starter is also available from Spring Initializr through the GCP Storage entry.

6.1 Google Cloud Storage

The Spring Resource Abstraction for Google Cloud Storage allows GCS objects to be accessed by their GCS URL using the @Value annotation:

@Value("gs://[YOUR_GCS_BUCKET]/[GCS_FILE_NAME]")
private Resource gcsResource;

…or the Spring application context:

SpringApplication.run(...).getResource("gs://[YOUR_GCS_BUCKET]/[GCS_FILE_NAME]");

This creates a Resource object that can be used to read the object, among other possible operations.
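For example, reading the object’s contents through the standard Resource API (StreamUtils is org.springframework.util.StreamUtils):

try (InputStream is = gcsResource.getInputStream()) {
  String contents = StreamUtils.copyToString(is, StandardCharsets.UTF_8);
}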

It is also possible to write to a Resource, although a WritableResource is required.

@Value("gs://[YOUR_GCS_BUCKET]/[GCS_FILE_NAME]")
private Resource gcsResource;
...
try (OutputStream os = ((WritableResource) gcsResource).getOutputStream()) {
  os.write("foo".getBytes());
}

To work with the Resource as a Google Cloud Storage resource, cast it to GoogleStorageResource.

If the resource path refers to an object on Google Cloud Storage (as opposed to a bucket), then the getBlob method can be called to obtain a Blob. This type represents a GCS file, which has associated metadata, such as content-type, that can be set. The createSignedUrl method can also be used to obtain signed URLs for GCS objects. However, creating signed URLs requires that the resource was created using service account credentials.
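For example, here is a sketch of working with the underlying blob after the cast; the createSignedUrl parameters shown (a time unit and a duration) are an assumption:

GoogleStorageResource storageResource = (GoogleStorageResource) gcsResource;

// Blob exposes GCS object metadata, such as the content type.
Blob blob = storageResource.getBlob();
String contentType = blob.getContentType();

// Signed URLs require the resource to have been created with service account credentials.
URL signedUrl = storageResource.createSignedUrl(TimeUnit.DAYS, 1);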

The Spring Boot Starter for Google Cloud Storage auto-configures the Storage bean required by the spring-cloud-gcp-storage module, based on the CredentialsProvider provided by the Spring Boot GCP starter.

6.1.1 Setting the Content Type

You can set the content-type of Google Cloud Storage files from their corresponding Resource objects:

((GoogleStorageResource)gcsResource).getBlob().toBuilder().setContentType("text/html").build().update();

6.2 Configuration

The Spring Boot Starter for Google Cloud Storage provides the following configuration options:

| Name | Description | Required | Default value |
|---|---|---|---|
| spring.cloud.gcp.storage.enabled | Enables the GCP storage APIs. | No | true |
| spring.cloud.gcp.storage.auto-create-files | Creates files and buckets on Google Cloud Storage when writes are made to non-existent files | No | true |
| spring.cloud.gcp.storage.credentials.location | OAuth2 credentials for authenticating with the Google Cloud Storage API, if different from the ones in the Spring Cloud GCP Core Module | No |  |
| spring.cloud.gcp.storage.credentials.encoded-key | Base64-encoded contents of OAuth2 account private key for authenticating with the Google Cloud Storage API, if different from the ones in the Spring Cloud GCP Core Module | No |  |
| spring.cloud.gcp.storage.credentials.scopes | OAuth2 scope for Spring Cloud GCP Storage credentials | No | https://www.googleapis.com/auth/devstorage.read_write |

6.3 Sample

A sample application and a codelab are available.

7. Spring JDBC

Spring Cloud GCP adds integrations with Spring JDBC so you can run your MySQL or PostgreSQL databases in Google Cloud SQL using Spring JDBC, or other libraries that depend on it like Spring Data JPA.

The Cloud SQL support is provided by Spring Cloud GCP in the form of two Spring Boot starters, one for MySQL and another one for PostgreSQL. The role of the starters is to read configuration from properties and assume default settings so that the user experience of connecting to MySQL and PostgreSQL is as simple as possible.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-sql-mysql'
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-sql-postgresql'
}

7.1 Prerequisites

In order to use the Spring Boot Starters for Google Cloud SQL, the Google Cloud SQL API must be enabled in your GCP project.

To do that, go to the API library page of the Google Cloud Console, search for "Cloud SQL API", click the first result and enable the API.

Note:

There are several similar "Cloud SQL" results. You must access the "Google Cloud SQL API" one and enable the API from there.

7.2 Spring Boot Starter for Google Cloud SQL

The Spring Boot Starters for Google Cloud SQL provide an auto-configured DataSource object. Coupled with Spring JDBC, it provides a JdbcTemplate object bean that allows for operations such as querying and modifying a database.

public List<Map<String, Object>> listUsers() {
    return jdbcTemplate.queryForList("SELECT * FROM user;");
}

You can rely on Spring Boot data source auto-configuration to configure a DataSource bean. In other words, standard properties like the SQL username (spring.datasource.username) and password (spring.datasource.password) can be used. There is also some configuration specific to Google Cloud SQL:

| Property name | Description | Default value | Unused if specified property(ies) |
|---|---|---|---|
| spring.cloud.gcp.sql.enabled | Enables or disables Cloud SQL auto-configuration | true |  |
| spring.cloud.gcp.sql.database-name | Name of the database to connect to. |  | spring.datasource.url |
| spring.cloud.gcp.sql.instance-connection-name | A string containing a Google Cloud SQL instance’s project ID, region and name, each separated by a colon. For example, my-project-id:my-region:my-instance-name. |  | spring.datasource.url |
| spring.cloud.gcp.sql.credentials.location | File system path to the Google OAuth2 credentials private key file. Used to authenticate and authorize new connections to a Google Cloud SQL instance. | Default credentials provided by the Spring GCP Boot starter |  |
| spring.cloud.gcp.sql.credentials.encoded-key | Base64-encoded contents of OAuth2 account private key in JSON format. Used to authenticate and authorize new connections to a Google Cloud SQL instance. | Default credentials provided by the Spring GCP Boot starter |  |

7.2.1 DataSource creation flow

Based on the previous properties, the Spring Boot starter for Google Cloud SQL creates a CloudSqlJdbcInfoProvider object which is used to obtain an instance’s JDBC URL and driver class name. If you provide your own CloudSqlJdbcInfoProvider bean, it is used instead and the properties related to building the JDBC URL or driver class are ignored.

The DataSourceProperties object provided by Spring Boot Autoconfigure is mutated in order to use the JDBC URL and driver class name provided by CloudSqlJdbcInfoProvider, unless those values were provided in the properties. It is in the DataSourceProperties mutation step that the credentials factory class, SqlCredentialFactory, is registered in a system property.

DataSource creation is delegated to Spring Boot. You can select the type of connection pool (e.g., Tomcat, HikariCP, etc.) by adding their dependency to the classpath.

Using the created DataSource in conjunction with Spring JDBC provides you with a fully configured and operational JdbcTemplate object that you can use to interact with your SQL database. You can connect to your database with as little as the database name and the instance connection name.
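For example, a minimal application.properties for a MySQL instance (the database name, project, region, instance, username and password are placeholders):

spring.cloud.gcp.sql.database-name=my-database
spring.cloud.gcp.sql.instance-connection-name=my-project-id:my-region:my-instance-name
spring.datasource.username=root
spring.datasource.password=password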

7.2.2 Troubleshooting tips

Connection issues

If you’re not able to connect to a database and see an endless loop of Connecting to Cloud SQL instance […​] on IP […​], it’s likely that exceptions are being thrown and logged at a level lower than your logger’s level. This may be the case with HikariCP, if your logger is set to INFO or higher level.

To see what’s going on in the background, add a logback.xml file to your application resources folder that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/base.xml"/>
  <logger name="com.zaxxer.hikari.pool" level="DEBUG"/>
</configuration>

Errors like c.g.cloud.sql.core.SslSocketFactory : Re-throwing cached exception due to attempt to refresh instance information too soon after error

If you see a lot of errors like this in a loop and can’t connect to your database, this is usually a symptom that something isn’t right with the permissions of your credentials or the Google Cloud SQL API is not enabled. Verify that the Google Cloud SQL API is enabled in the Cloud Console and that your service account has the necessary IAM roles.

To find out what’s causing the issue, you can enable DEBUG logging level as mentioned above.

PostgreSQL: java.net.SocketException: already connected issue

We found this exception to be common if your Maven project’s parent is spring-boot version 1.5.x, or in any other circumstance that would cause the version of the org.postgresql:postgresql dependency to be an older one (e.g., 9.4.1212.jre7).

To fix this, re-declare the dependency in its correct version. For example, in Maven:

<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <version>42.1.1</version>
</dependency>

8. Spring Integration

Spring Cloud GCP provides Spring Integration adapters that allow your applications to use Enterprise Integration Patterns backed by Google Cloud Platform services.

8.1 Channel Adapters for Cloud Pub/Sub

The channel adapters for Google Cloud Pub/Sub connect your Spring MessageChannels to Google Cloud Pub/Sub topics and subscriptions. This enables messaging between different processes, applications or microservices backed by Google Cloud Pub/Sub.

The Spring Integration Channel Adapters for Google Cloud Pub/Sub are included in the spring-cloud-gcp-pubsub module.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-pubsub</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-core</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-pubsub'
    compile group: 'org.springframework.integration', name: 'spring-integration-core'
}

8.1.1 Inbound channel adapter

PubSubInboundChannelAdapter is the inbound channel adapter for GCP Pub/Sub that listens to a GCP Pub/Sub subscription for new messages. It converts new messages to an internal Spring Message and then sends them to the bound output channel.

Google Pub/Sub treats message payloads as byte arrays. So, by default, the inbound channel adapter will construct the Spring Message with byte[] as the payload. However, you can change the desired payload type by setting the payloadType property of the PubSubInboundChannelAdapter. The PubSubInboundChannelAdapter delegates the conversion to the desired payload type to the PubSubMessageConverter configured in the PubSubTemplate.

To use the inbound channel adapter, a PubSubInboundChannelAdapter must be provided and configured on the user application side.

@Bean
public MessageChannel pubsubInputChannel() {
    return new PublishSubscribeChannel();
}

@Bean
public PubSubInboundChannelAdapter messageChannelAdapter(
    @Qualifier("pubsubInputChannel") MessageChannel inputChannel,
    SubscriberFactory subscriberFactory) {
    PubSubInboundChannelAdapter adapter =
        new PubSubInboundChannelAdapter(subscriberFactory, "subscriptionName");
    adapter.setOutputChannel(inputChannel);
    adapter.setAckMode(AckMode.MANUAL);

    return adapter;
}

In the example, we first specify the MessageChannel where the adapter is going to write incoming messages to. The MessageChannel implementation isn’t important here. Depending on your use case, you might want to use a MessageChannel other than PublishSubscribeChannel.

Then, we declare a PubSubInboundChannelAdapter bean. It requires the channel we just created and a SubscriberFactory, which creates Subscriber objects from the Google Cloud Java Client for Pub/Sub. The Spring Boot starter for GCP Pub/Sub provides a configured SubscriberFactory.

The PubSubInboundChannelAdapter supports three acknowledgement modes, with AckMode.AUTO being the default value:

Automatic acking (AckMode.AUTO)

A message is acked with GCP Pub/Sub if the adapter sent it to the channel and no exceptions were thrown. If a RuntimeException is thrown while the message is processed, then the message is nacked.

Automatic acking OK (AckMode.AUTO_ACK)

A message is acked with GCP Pub/Sub if the adapter sent it to the channel and no exceptions were thrown. If a RuntimeException is thrown while the message is processed, then the message is neither acked nor nacked.

This is useful when using the subscription’s ack deadline timeout as a retry delivery backoff mechanism.

Manually acking (AckMode.MANUAL)

The adapter attaches a BasicAcknowledgeablePubsubMessage object to the Message headers. Users can extract the BasicAcknowledgeablePubsubMessage using the GcpPubSubHeaders.ORIGINAL_MESSAGE key and use it to (n)ack a message.

@Bean
@ServiceActivator(inputChannel = "pubsubInputChannel")
public MessageHandler messageReceiver() {
    return message -> {
        LOGGER.info("Message arrived! Payload: " + new String((byte[]) message.getPayload()));
        BasicAcknowledgeablePubsubMessage originalMessage =
              message.getHeaders().get(GcpPubSubHeaders.ORIGINAL_MESSAGE, BasicAcknowledgeablePubsubMessage.class);
        originalMessage.ack();
    };
}

8.1.2 Outbound channel adapter

PubSubMessageHandler is the outbound channel adapter for GCP Pub/Sub that listens for new messages on a Spring MessageChannel. It uses PubSubTemplate to post them to a GCP Pub/Sub topic.

To construct a Pub/Sub representation of the message, the outbound channel adapter needs to convert the Spring Message payload to a byte array representation expected by Pub/Sub. It delegates this conversion to the PubSubTemplate. To customize the conversion, you can specify a PubSubMessageConverter in the PubSubTemplate that should convert the Object payload and headers of the Spring Message to a PubsubMessage.

To use the outbound channel adapter, a PubSubMessageHandler bean must be provided and configured on the user application side.

@Bean
@ServiceActivator(inputChannel = "pubsubOutputChannel")
public MessageHandler messageSender(PubSubTemplate pubsubTemplate) {
    return new PubSubMessageHandler(pubsubTemplate, "topicName");
}

The provided PubSubTemplate contains all the necessary configuration to publish messages to a GCP Pub/Sub topic.

PubSubMessageHandler publishes messages asynchronously by default. A publish timeout can be configured for synchronous publishing. If none is provided, the adapter waits indefinitely for a response.

It is possible to set user-defined callbacks for the publish() call in PubSubMessageHandler through the setPublishFutureCallback() method. These are useful to process the message ID, in case of success, or the error if any was thrown.
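Here is a sketch of registering such a callback; the ListenableFutureCallback parameter type is an assumption about the setPublishFutureCallback() signature, and LOGGER is an assumed logger:

@Bean
@ServiceActivator(inputChannel = "pubsubOutputChannel")
public MessageHandler messageSender(PubSubTemplate pubsubTemplate) {
    PubSubMessageHandler handler = new PubSubMessageHandler(pubsubTemplate, "topicName");
    handler.setPublishFutureCallback(new ListenableFutureCallback<String>() {
        @Override
        public void onSuccess(String messageId) {
            LOGGER.info("Message published, id: " + messageId);
        }

        @Override
        public void onFailure(Throwable throwable) {
            LOGGER.warn("Publish failed.", throwable);
        }
    });
    return handler;
}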

To override the default destination topic you can use the GcpPubSubHeaders.TOPIC header, as the following example shows.

@Autowired
private MessageChannel pubsubOutputChannel;

public void handleMessage(Message<?> msg) throws MessagingException {
    final Message<?> message = MessageBuilder
        .withPayload(msg.getPayload())
        .setHeader(GcpPubSubHeaders.TOPIC, "customTopic").build();
    pubsubOutputChannel.send(message);
}

It is also possible to set an SpEL expression for the topic with the setTopicExpression() or setTopicExpressionString() methods.
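For instance, a short sketch that resolves the topic from a message header at send time (the header name is arbitrary):

PubSubMessageHandler handler = new PubSubMessageHandler(pubsubTemplate, "defaultTopic");
handler.setTopicExpressionString("headers['topic']");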

8.1.3 Header mapping

These channel adapters contain header mappers that allow you to map, or filter out, headers from Spring to Google Cloud Pub/Sub messages, and vice-versa. By default, the inbound channel adapter maps every header on the Google Cloud Pub/Sub messages to the Spring messages produced by the adapter. The outbound channel adapter maps every header from Spring messages into Google Cloud Pub/Sub ones, except the ones added by Spring, like headers with key "id", "timestamp" and "gcp_pubsub_acknowledgement". In the process, the outbound mapper also converts the header values into strings.

Each adapter declares a setHeaderMapper() method to let you further customize which headers you want to map from Spring to Google Cloud Pub/Sub, and vice-versa.

For example, to filter out headers "foo", "bar" and all headers starting with the prefix "prefix_", you can use setHeaderMapper() along with the PubSubHeaderMapper implementation provided by this module.

PubSubMessageHandler adapter = ...
...
PubSubHeaderMapper headerMapper = new PubSubHeaderMapper();
headerMapper.setOutboundHeaderPatterns("!foo", "!bar", "!prefix_*", "*");
adapter.setHeaderMapper(headerMapper);

Note:

The order in which the patterns are declared in PubSubHeaderMapper.setOutboundHeaderPatterns() and PubSubHeaderMapper.setInboundHeaderPatterns() matters. The first patterns have precedence over the following ones.

In the previous example, the "*" pattern means every header is mapped. However, because it comes last in the list, the previous patterns take precedence.

8.2 Sample

A sample application is available.

8.3 Channel Adapters for Google Cloud Storage

The channel adapters for Google Cloud Storage allow you to read and write files to Google Cloud Storage through MessageChannels.

Spring Cloud GCP provides two inbound adapters, GcsInboundFileSynchronizingMessageSource and GcsStreamingMessageSource, and one outbound adapter, GcsMessageHandler.

The Spring Integration Channel Adapters for Google Cloud Storage are included in the spring-cloud-gcp-storage module.

To use the Storage portion of Spring Integration for Spring Cloud GCP, you must also provide the spring-integration-file dependency, since it isn’t pulled transitively.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-storage</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-file</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-storage'
    compile group: 'org.springframework.integration', name: 'spring-integration-file'
}

8.3.1 Inbound channel adapter

The Google Cloud Storage inbound channel adapter polls a Google Cloud Storage bucket for new files and sends each of them in a Message payload to the MessageChannel specified in the @InboundChannelAdapter annotation. The files are temporarily stored in a folder in the local file system.

Here is an example of how to configure a Google Cloud Storage inbound channel adapter.

@Bean
@InboundChannelAdapter(channel = "new-file-channel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> synchronizerAdapter(Storage gcs) {
  GcsInboundFileSynchronizer synchronizer = new GcsInboundFileSynchronizer(gcs);
  synchronizer.setRemoteDirectory("your-gcs-bucket");

  GcsInboundFileSynchronizingMessageSource synchAdapter =
          new GcsInboundFileSynchronizingMessageSource(synchronizer);
  synchAdapter.setLocalDirectory(new File("local-directory"));

  return synchAdapter;
}

8.3.2 Inbound streaming channel adapter

The inbound streaming channel adapter is similar to the normal inbound channel adapter, except it does not require files to be stored in the file system.

Here is an example of how to configure a Google Cloud Storage inbound streaming channel adapter.

@Bean
@InboundChannelAdapter(channel = "streaming-channel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<InputStream> streamingAdapter(Storage gcs) {
  GcsStreamingMessageSource adapter =
          new GcsStreamingMessageSource(new GcsRemoteFileTemplate(new GcsSessionFactory(gcs)));
  adapter.setRemoteDirectory("your-gcs-bucket");
  return adapter;
}

8.3.3 Outbound channel adapter

The outbound channel adapter allows files to be written to Google Cloud Storage. When it receives a Message containing a payload of type File, it writes that file to the Google Cloud Storage bucket specified in the adapter.

Here is an example of how to configure a Google Cloud Storage outbound channel adapter.

@Bean
@ServiceActivator(inputChannel = "writeFiles")
public MessageHandler outboundChannelAdapter(Storage gcs) {
  GcsMessageHandler outboundChannelAdapter = new GcsMessageHandler(new GcsSessionFactory(gcs));
  outboundChannelAdapter.setRemoteDirectoryExpression(new ValueExpression<>("your-gcs-bucket"));

  return outboundChannelAdapter;
}

8.4 Sample

A sample application is available.

9. Spring Cloud Stream

Spring Cloud GCP provides a Spring Cloud Stream binder to Google Cloud Pub/Sub.

The provided binder relies on the Spring Integration Channel Adapters for Google Cloud Pub/Sub.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-pubsub-stream-binder</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-pubsub-stream-binder'
}

9.1 Overview

This binder binds producers to Google Cloud Pub/Sub topics and consumers to subscriptions.

Note:

Partitioning is currently not supported by this binder.

9.2 Configuration

You can configure the Spring Cloud Stream Binder for Google Cloud Pub/Sub to automatically generate the underlying resources, like the Google Cloud Pub/Sub topics and subscriptions for producers and consumers. For that, you can use the spring.cloud.stream.gcp.pubsub.bindings.<channelName>.<consumer|producer>.auto-create-resources property, which is turned ON by default.

Starting with version 1.1, these and other binder properties can be configured globally for all the bindings, e.g. spring.cloud.stream.gcp.pubsub.default.consumer.auto-create-resources.

If you are using Pub/Sub auto-configuration from the Spring Cloud GCP Pub/Sub Starter, you should refer to the configuration section for other Pub/Sub parameters.

Note:

To use this binder with a running emulator, configure its host and port via spring.cloud.gcp.pubsub.emulator-host.
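For example (host and port are placeholders):

spring.cloud.gcp.pubsub.emulator-host=localhost:8085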

9.2.1 Producer Destination Configuration

If automatic resource creation is turned ON and the topic corresponding to the destination name does not exist, it will be created.

For example, for the following configuration, a topic called myEvents would be created.

application.properties. 

spring.cloud.stream.bindings.events.destination=myEvents
spring.cloud.stream.gcp.pubsub.bindings.events.producer.auto-create-resources=true

9.2.2 Consumer Destination Configuration

If automatic resource creation is turned ON and the subscription and/or the topic do not exist for a consumer, a subscription and potentially a topic will be created. The topic name will be the same as the destination name, and the subscription name will be the destination name followed by the consumer group name.

Regardless of the auto-create-resources setting, if the consumer group is not specified, an anonymous one will be created with the name anonymous.<destinationName>.<randomUUID>. Then when the binder shuts down, all Pub/Sub subscriptions created for anonymous consumer groups will be automatically cleaned up.

For example, for the following configuration, a topic named myEvents and a subscription called myEvents.consumerGroup1 would be created. If the consumer group is not specified, a subscription called anonymous.myEvents.a6d83782-c5a3-4861-ac38-e6e2af15a7be would be created and later cleaned up.

Important:

If you are manually creating Pub/Sub subscriptions for consumers, make sure that they follow the naming convention of <destinationName>.<consumerGroup>.

application.properties. 

spring.cloud.stream.bindings.events.destination=myEvents
spring.cloud.stream.gcp.pubsub.bindings.events.consumer.auto-create-resources=true

# specify consumer group, and avoid anonymous consumer group generation
spring.cloud.stream.bindings.events.group=consumerGroup1

9.3 Sample

A sample application is available.

10. Spring Cloud Sleuth

Spring Cloud Sleuth is an instrumentation framework for Spring Boot applications. It captures trace information and can forward traces to services like Zipkin for storage and analysis.

Google Cloud Platform provides its own managed distributed tracing service called Stackdriver Trace. Instead of running and maintaining your own Zipkin instance and storage, you can use Stackdriver Trace to store traces, view trace details, generate latency distributions graphs, and generate performance regression reports.

This Spring Cloud GCP starter can forward Spring Cloud Sleuth traces to Stackdriver Trace without an intermediary Zipkin server.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-trace</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-trace'
}

You must enable the Stackdriver Trace API from the Google Cloud Console in order to capture traces. Navigate to the Stackdriver Trace API for your project and make sure it’s enabled.

Note:

If you are already using a Zipkin server capturing trace information from multiple platform/frameworks, you can also use a Stackdriver Zipkin proxy to forward those traces to Stackdriver Trace without modifying existing applications.

10.1 Tracing

Spring Cloud Sleuth uses the Brave tracer to generate traces. This integration enables Brave to use the StackdriverTracePropagation propagation.

A propagation is responsible for extracting trace context from an entity (e.g., an HTTP servlet request) and injecting trace context into an entity. A canonical example of the propagation usage is a web server that receives an HTTP request, which triggers other HTTP requests from the server before returning an HTTP response to the original caller. In the case of StackdriverTracePropagation, first it looks for trace context in the x-cloud-trace-context key (e.g., an HTTP request header). The value of the x-cloud-trace-context key can be formatted in three different ways:

  • x-cloud-trace-context: TRACE_ID
  • x-cloud-trace-context: TRACE_ID/SPAN_ID
  • x-cloud-trace-context: TRACE_ID/SPAN_ID;o=TRACE_TRUE

TRACE_ID is a 32-character hexadecimal value that encodes a 128-bit number.

SPAN_ID is an unsigned long. Since Stackdriver Trace doesn’t support span joins, a new span ID is always generated, regardless of the one specified in x-cloud-trace-context.

TRACE_TRUE can either be 0 if the entity should be untraced, or 1 if it should be traced. This field forces the decision of whether or not to trace the request; if omitted then the decision is deferred to the sampler.

If an x-cloud-trace-context key isn’t found, StackdriverTracePropagation falls back to tracing with the X-B3 headers.

10.2 Spring Boot Starter for Stackdriver Trace

Spring Boot Starter for Stackdriver Trace uses Spring Cloud Sleuth and auto-configures a StackdriverSender that sends the Sleuth’s trace information to Stackdriver Trace.

All configurations are optional:

| Name | Description | Required | Default value |
|---|---|---|---|
| spring.cloud.gcp.trace.enabled | Auto-configure Spring Cloud Sleuth to send traces to Stackdriver Trace. | No | true |
| spring.cloud.gcp.trace.project-id | Overrides the project ID from the Spring Cloud GCP Module | No |  |
| spring.cloud.gcp.trace.credentials.location | Overrides the credentials location from the Spring Cloud GCP Module | No |  |
| spring.cloud.gcp.trace.credentials.encoded-key | Overrides the credentials encoded key from the Spring Cloud GCP Module | No |  |
| spring.cloud.gcp.trace.credentials.scopes | Overrides the credentials scopes from the Spring Cloud GCP Module | No |  |
| spring.cloud.gcp.trace.num-executor-threads | Number of threads used by the Trace executor | No | 4 |
| spring.cloud.gcp.trace.authority | HTTP/2 authority the channel claims to be connecting to. | No |  |
| spring.cloud.gcp.trace.compression | Name of the compression to use in Trace calls | No |  |
| spring.cloud.gcp.trace.deadline-ms | Call deadline in milliseconds | No |  |
| spring.cloud.gcp.trace.max-inbound-size | Maximum size for inbound messages | No |  |
| spring.cloud.gcp.trace.max-outbound-size | Maximum size for outbound messages | No |  |
| spring.cloud.gcp.trace.wait-for-ready | Waits for the channel to be ready in case of a transient failure | No | false |

You can use core Spring Cloud Sleuth properties to control Sleuth’s sampling rate, among other settings. Read the Sleuth documentation for more information on Sleuth configuration.

For example, while testing to verify that traces are being forwarded, you can set the sampling rate to 100%.

spring.sleuth.sampler.probability=1                     # Send 100% of the request traces to Stackdriver.
spring.sleuth.web.skipPattern=(^cleanup.*|.+favicon.*)  # Ignore some URL paths.

Spring Cloud GCP Trace does override some Sleuth configurations:

  • Always uses 128-bit Trace IDs. This is required by Stackdriver Trace.
  • Does not use span joins. Span joins share the span ID between the client and server spans. Stackdriver requires that every span ID within a trace be unique, so span joins are not supported.
  • Uses StackdriverHttpClientParser and StackdriverHttpServerParser by default to populate Stackdriver related fields.

10.3 Integration with Logging

Integration with Stackdriver Logging is available through the Stackdriver Logging Support. If the Trace integration is used together with the Logging one, the request logs will be associated with the corresponding traces. The trace logs can be viewed by going to the Google Cloud Console Trace List, selecting a trace and pressing the Logs → View link in the Details section.

10.4 Sample

A sample application and a codelab are available.

11. Stackdriver Logging

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-logging</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-logging'
}

Stackdriver Logging is the managed logging service provided by Google Cloud Platform.

This module provides support for associating a web request trace ID with the corresponding log entries. It does so by retrieving the X-B3-TraceId value from the Mapped Diagnostic Context (MDC), which is set by Spring Cloud Sleuth. If Spring Cloud Sleuth isn’t used, the configured TraceIdExtractor extracts the desired header value and sets it as the log entry’s trace ID. This allows grouping of log messages by request, for example, in the Google Cloud Console Logs viewer.
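
For example, with Sleuth on the classpath, the trace ID it stores in the MDC can be read through the standard SLF4J API. This is only a sketch; the MDC key X-B3-TraceId is the one mentioned above:

import org.slf4j.MDC;

public class TraceIdLookup {

	// Returns the trace ID Sleuth stored in the MDC for the current thread, or null if absent.
	public static String currentTraceId() {
		return MDC.get("X-B3-TraceId");
	}
}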

Note:

Due to the way logging is set up, the GCP project ID and credentials defined in application.properties are ignored. Instead, you should set the GOOGLE_CLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS environment variables to the project ID and credentials private key location, respectively. You can do this easily if you’re using the Google Cloud SDK, using the gcloud config set project [YOUR_PROJECT_ID] and gcloud auth application-default login commands, respectively.

11.1 Web MVC Interceptor

For use in Web MVC-based applications, a TraceIdLoggingWebMvcInterceptor is provided that extracts the request trace ID from an HTTP request using a TraceIdExtractor and stores it in a thread-local, which a logging appender can then use to add trace ID metadata to log messages.

Warning:

If Spring Cloud GCP Trace is enabled, the logging module disables itself and delegates log correlation to Spring Cloud Sleuth.

A LoggingWebMvcConfigurer configuration class is also provided to help register the TraceIdLoggingWebMvcInterceptor in Spring MVC applications.
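
If you prefer to register the interceptor yourself, a registration along these lines should work. This is a sketch only; it assumes a TraceIdLoggingWebMvcInterceptor bean is available for injection:

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class TraceIdInterceptorConfiguration implements WebMvcConfigurer {

	private final TraceIdLoggingWebMvcInterceptor traceIdInterceptor;

	public TraceIdInterceptorConfiguration(TraceIdLoggingWebMvcInterceptor traceIdInterceptor) {
		this.traceIdInterceptor = traceIdInterceptor;
	}

	@Override
	public void addInterceptors(InterceptorRegistry registry) {
		// Stores the request's trace ID in a thread-local for the logging appender.
		registry.addInterceptor(this.traceIdInterceptor);
	}
}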

Applications hosted on the Google Cloud Platform include trace IDs under the x-cloud-trace-context header, which will be included in log entries. However, if Sleuth is used the trace ID will be picked up from the MDC.

11.2 Logback Support

Currently, only Logback is supported, and there are two ways to log to Stackdriver with Logback via this library: via direct API calls and via JSON-formatted console logs.

11.2.1 Log via API

A Stackdriver appender is available using org/springframework/cloud/gcp/autoconfigure/logging/logback-appender.xml. This appender builds a Stackdriver Logging log entry from a JUL or Logback log entry, adds a trace ID to it and sends it to Stackdriver Logging.

STACKDRIVER_LOG_NAME and STACKDRIVER_LOG_FLUSH_LEVEL environment variables can be used to customize the STACKDRIVER appender.

Your configuration may then look like this:

<configuration>
  <include resource="org/springframework/cloud/gcp/autoconfigure/logging/logback-appender.xml" />

  <root level="INFO">
    <appender-ref ref="STACKDRIVER" />
  </root>
</configuration>

If you want to have more control over the log output, you can further configure the appender. The following properties are available:

Property | Default Value | Description
log | spring.log | The Stackdriver Log name. This can also be set via the STACKDRIVER_LOG_NAME environment variable.
flushLevel | WARN | If a log entry with this level is encountered, trigger a flush of locally buffered log entries to Stackdriver Logging. This can also be set via the STACKDRIVER_LOG_FLUSH_LEVEL environment variable.

11.2.2 Log via Console

For Logback, a org/springframework/cloud/gcp/autoconfigure/logging/logback-json-appender.xml file is made available for import to make it easier to configure the JSON Logback appender.

Your configuration may then look something like this:

<configuration>
  <include resource="org/springframework/cloud/gcp/autoconfigure/logging/logback-json-appender.xml" />

  <root level="INFO">
    <appender-ref ref="CONSOLE_JSON" />
  </root>
</configuration>

If your application is running on Google Kubernetes Engine, Google Compute Engine or Google App Engine Flexible, your console logging is automatically saved to Google Stackdriver Logging. Therefore, you can simply include org/springframework/cloud/gcp/autoconfigure/logging/logback-json-appender.xml in your logging configuration, which logs JSON entries to the console. The trace ID will be set correctly.

If you want to have more control over the log output, you can further configure the appender. The following properties are available:

Property | Default Value | Description
projectId | If not set, the default value is determined, in order, from the SPRING_CLOUD_GCP_LOGGING_PROJECT_ID environment variable, then the value of DefaultGcpProjectIdProvider.getProjectId() | Used to generate the fully qualified Stackdriver Trace ID format projects/[PROJECT-ID]/traces/[TRACE-ID], which is required to correlate traces between Stackdriver Trace and Stackdriver Logging. If projectId is not set and cannot be determined, the traceId is logged without the fully qualified format.
includeTraceId | true | Should the traceId be included
includeSpanId | true | Should the spanId be included
includeLevel | true | Should the severity be included
includeThreadName | true | Should the thread name be included
includeMDC | true | Should all MDC properties be included. The MDC properties X-B3-TraceId, X-B3-SpanId and X-Span-Export provided by Spring Sleuth are excluded, because they are handled separately
includeLoggerName | true | Should the name of the logger be included
includeFormattedMessage | true | Should the formatted log message be included
includeExceptionInMessage | true | Should the stacktrace be appended to the formatted log message. Only evaluated if includeFormattedMessage is true
includeContextName | true | Should the logging context be included
includeMessage | false | Should the log message with blank placeholders be included
includeException | false | Should the stacktrace be included as its own field

This is an example of such a Logback configuration:

<configuration>
  <property name="projectId" value="${projectId:-${GOOGLE_CLOUD_PROJECT}}"/>

  <appender name="CONSOLE_JSON" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
      <layout class="org.springframework.cloud.gcp.logging.StackdriverJsonLayout">
        <projectId>${projectId}</projectId>

        <!--<includeTraceId>true</includeTraceId>-->
        <!--<includeSpanId>true</includeSpanId>-->
        <!--<includeLevel>true</includeLevel>-->
        <!--<includeThreadName>true</includeThreadName>-->
        <!--<includeMDC>true</includeMDC>-->
        <!--<includeLoggerName>true</includeLoggerName>-->
        <!--<includeFormattedMessage>true</includeFormattedMessage>-->
        <!--<includeExceptionInMessage>true</includeExceptionInMessage>-->
        <!--<includeContextName>true</includeContextName>-->
        <!--<includeMessage>false</includeMessage>-->
        <!--<includeException>false</includeException>-->
      </layout>
    </encoder>
  </appender>
</configuration>

11.3 Sample

A sample Spring Boot application is provided to show how to use the Cloud Logging starter.

12. Spring Cloud Config

Spring Cloud GCP makes it possible to use the Google Runtime Configuration API as a Spring Cloud Config server to remotely store your application configuration data.

The Spring Cloud GCP Config support is provided via its own Spring Boot starter. It enables the use of the Google Runtime Configuration API as a source for Spring Boot configuration properties.

Note:

The Google Cloud Runtime Configuration service is in beta status.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-config</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-config'
}

12.1 Configuration

The following parameters are configurable in Spring Cloud GCP Config:

Name | Description | Required | Default value
spring.cloud.gcp.config.enabled | Enables the Config client | No | false
spring.cloud.gcp.config.name | Name of your application | No | Value of the spring.application.name property. If none, application
spring.cloud.gcp.config.profile | Active profile | No | Value of the spring.profiles.active property. If more than a single profile, the last one is chosen
spring.cloud.gcp.config.timeout-millis | Timeout in milliseconds for connecting to the Google Runtime Configuration API | No | 60000
spring.cloud.gcp.config.project-id | GCP project ID where the Google Runtime Configuration API is hosted | No |
spring.cloud.gcp.config.credentials.location | OAuth2 credentials for authenticating with the Google Runtime Configuration API | No |
spring.cloud.gcp.config.credentials.encoded-key | Base64-encoded OAuth2 credentials for authenticating with the Google Runtime Configuration API | No |
spring.cloud.gcp.config.credentials.scopes | OAuth2 scope for Spring Cloud GCP Config credentials | No | https://www.googleapis.com/auth/cloudruntimeconfig

Note:

These properties should be specified in a bootstrap.yml/bootstrap.properties file, rather than the usual application.yml/application.properties.

Note:

Core properties, as described in Spring Cloud GCP Core Module, do not apply to Spring Cloud GCP Config.

12.2 Quick start

  1. Create a configuration in the Google Runtime Configuration API that is called ${spring.application.name}_${spring.profiles.active}. In other words, if spring.application.name is myapp and spring.profiles.active is prod, the configuration should be called myapp_prod.

    In order to do that, you should have the Google Cloud SDK installed, own a Google Cloud Project and run the following command:

gcloud init # if this is your first Google Cloud SDK run.
gcloud beta runtime-config configs create myapp_prod
gcloud beta runtime-config configs variables set myapp.queue-size 25 --config-name myapp_prod
  2. Configure your bootstrap.properties file with your application’s configuration data:

    spring.application.name=myapp
    spring.profiles.active=prod
  3. Add the @ConfigurationProperties annotation to a Spring-managed bean:

    @Component
    @ConfigurationProperties("myapp")
    public class SampleConfig {
    
      private int queueSize;
    
      public int getQueueSize() {
        return this.queueSize;
      }
    
      public void setQueueSize(int queueSize) {
        this.queueSize = queueSize;
      }
    }

When your Spring application starts, the queueSize field value will be set to 25 for the above SampleConfig bean.

12.3 Refreshing the configuration at runtime

Spring Cloud provides support for reloading configuration parameters with a POST request to the /actuator/refresh endpoint.

  1. Add the Spring Boot Actuator dependency:

Maven coordinates:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-actuator'
}
  2. Add @RefreshScope to your Spring configuration class to have parameters be reloadable at runtime (see the sketch after this list).
  3. Add management.endpoints.web.exposure.include=refresh to your application.properties to allow unrestricted access to /actuator/refresh.
  4. Update a property with gcloud:

    $ gcloud beta runtime-config configs variables set \
      myapp.queue-size 200 \
      --config-name myapp_prod
  5. Send a POST request to the refresh endpoint:

    $ curl -XPOST http://myapp.host.com/actuator/refresh
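
For reference, this is a sketch of the SampleConfig bean from the quick start with @RefreshScope applied, assuming the same myapp prefix:

@Component
@ConfigurationProperties("myapp")
@RefreshScope
public class SampleConfig {

	// Re-bound from the Google Runtime Configuration API when /actuator/refresh is called.
	private int queueSize;

	public int getQueueSize() {
		return this.queueSize;
	}

	public void setQueueSize(int queueSize) {
		this.queueSize = queueSize;
	}
}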

12.4 Sample

A sample application and a codelab are available.

13. Spring Data Cloud Spanner

Spring Data is an abstraction for storing and retrieving POJOs in numerous storage technologies. Spring Cloud GCP adds Spring Data support for Google Cloud Spanner.

Maven coordinates for this module only, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-data-spanner</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-data-spanner'
}

We provide a Spring Boot Starter for Spring Data Spanner, with which you can leverage our recommended auto-configuration setup. To use the starter, see the coordinates below.

Maven:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-data-spanner</artifactId>
</dependency>

Gradle:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-data-spanner'
}

This setup takes care of bringing in the latest compatible version of the Google Cloud Java client libraries for Cloud Spanner as well.

13.1 Configuration

To set up Spring Data Cloud Spanner, you have to configure the following:

  • Set up the connection details to Google Cloud Spanner.
  • Enable Spring Data Repositories (optional).

13.1.1 Cloud Spanner settings

You can use the Spring Boot Starter for Spring Data Spanner to autoconfigure Google Cloud Spanner in your Spring application. It contains all the necessary setup that makes it easy to authenticate with your Google Cloud project. The following configuration options are available:

Name | Description | Required | Default value
spring.cloud.gcp.spanner.instance-id | Cloud Spanner instance to use | Yes |
spring.cloud.gcp.spanner.database | Cloud Spanner database to use | Yes |
spring.cloud.gcp.spanner.project-id | GCP project ID where the Google Cloud Spanner API is hosted, if different from the one in the Spring Cloud GCP Core Module | No |
spring.cloud.gcp.spanner.credentials.location | OAuth2 credentials for authenticating with the Google Cloud Spanner API, if different from the ones in the Spring Cloud GCP Core Module | No |
spring.cloud.gcp.spanner.credentials.encoded-key | Base64-encoded OAuth2 credentials for authenticating with the Google Cloud Spanner API, if different from the ones in the Spring Cloud GCP Core Module | No |
spring.cloud.gcp.spanner.credentials.scopes | OAuth2 scope for Spring Cloud GCP Cloud Spanner credentials | No | https://www.googleapis.com/auth/spanner.data
spring.cloud.gcp.spanner.createInterleavedTableDdlOnDeleteCascade | If true, schema statements generated by SpannerSchemaUtils for tables with interleaved parent-child relationships will be "ON DELETE CASCADE"; if false, "ON DELETE NO ACTION" | No | true
spring.cloud.gcp.spanner.numRpcChannels | Number of gRPC channels used to connect to Cloud Spanner | No | 4 - determined by the Cloud Spanner client library
spring.cloud.gcp.spanner.prefetchChunks | Number of chunks prefetched by Cloud Spanner for read and query | No | 4 - determined by the Cloud Spanner client library
spring.cloud.gcp.spanner.minSessions | Minimum number of sessions maintained in the session pool | No | 0 - determined by the Cloud Spanner client library
spring.cloud.gcp.spanner.maxSessions | Maximum number of sessions the session pool can have | No | 400 - determined by the Cloud Spanner client library
spring.cloud.gcp.spanner.maxIdleSessions | Maximum number of idle sessions the session pool will maintain | No | 0 - determined by the Cloud Spanner client library
spring.cloud.gcp.spanner.writeSessionsFraction | Fraction of sessions to be kept prepared for write transactions | No | 0.2 - determined by the Cloud Spanner client library
spring.cloud.gcp.spanner.keepAliveIntervalMinutes | How long to keep idle sessions alive, in minutes | No | 30 - determined by the Cloud Spanner client library

13.1.2 Repository settings

Spring Data Repositories can be configured via the @EnableSpannerRepositories annotation on your main @Configuration class. With our Spring Boot Starter for Spring Data Cloud Spanner, @EnableSpannerRepositories is automatically added. It is not required to add it to any other class, unless there is a need to override finer-grained configuration parameters provided by @EnableSpannerRepositories.
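
For example, the following sketch overrides one such parameter; the namedQueriesLocation attribute is described in the Query Methods section, and the file name here is illustrative:

@Configuration
@EnableSpannerRepositories(namedQueriesLocation = "classpath:META-INF/custom-named-queries.properties")
public class SpannerRepositoriesConfiguration {
}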

13.1.3 Autoconfiguration

Our Spring Boot autoconfiguration makes the following beans available in the Spring application context:

  • an instance of SpannerTemplate
  • an instance of SpannerDatabaseAdminTemplate for generating table schemas from object hierarchies and creating and deleting tables and databases
  • an instance of all user-defined repositories extending SpannerRepository, CrudRepository, PagingAndSortingRepository, when repositories are enabled
  • an instance of DatabaseClient from the Google Cloud Java Client for Spanner, for convenience and lower level API access

13.2 Object Mapping

Spring Data Cloud Spanner allows you to map domain POJOs to Cloud Spanner tables via annotations:

@Table(name = "traders")
public class Trader {

	@PrimaryKey
	@Column(name = "trader_id")
	String traderId;

	String firstName;

	String lastName;

	@NotMapped
	Double temporaryNumber;
}

Spring Data Cloud Spanner will ignore any property annotated with @NotMapped. These properties will not be written to or read from Spanner.

13.2.1 Constructors

Simple constructors are supported on POJOs. The constructor arguments can be a subset of the persistent properties. Every constructor argument needs to have the same name and type as a persistent property on the entity and the constructor should set the property from the given argument. Arguments that are not directly set to properties are not supported.

@Table(name = "traders")
public class Trader {
	@PrimaryKey
	@Column(name = "trader_id")
	String traderId;

	String firstName;

	String lastName;

	@NotMapped
	Double temporaryNumber;

	public Trader(String traderId, String firstName) {
	    this.traderId = traderId;
	    this.firstName = firstName;
	}
}

13.2.2 Table

The @Table annotation can provide the name of the Cloud Spanner table that stores instances of the annotated class, one per row. This annotation is optional, and if not given, the name of the table is inferred from the class name with the first character uncapitalized.

SpEL expressions for table names

In some cases, you might want the @Table table name to be determined dynamically. To do that, you can use Spring Expression Language.

For example:

@Table(name = "trades_#{tableNameSuffix}")
public class Trade {
	// ...
}

The table name will be resolved only if a tableNameSuffix value or bean is defined in the Spring application context. For example, if tableNameSuffix has the value "123", the table name will resolve to trades_123.
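
A minimal sketch of providing that value as a bean follows; the bean name must match the name used in the SpEL expression:

@Configuration
public class TableNameSuffixConfiguration {

	// Resolved by name by the SpEL expression in @Table, yielding the table name "trades_123".
	@Bean
	public String tableNameSuffix() {
		return "123";
	}
}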

13.2.3 Primary Keys

For a simple table, you may only have a primary key consisting of a single column. Even in that case, the @PrimaryKey annotation is required. @PrimaryKey identifies the one or more ID properties corresponding to the primary key.

Spanner has first class support for composite primary keys of multiple columns. You have to annotate all of your POJO’s fields that the primary key consists of with @PrimaryKey as below:

@Table(name = "trades")
public class Trade {
	@PrimaryKey(keyOrder = 2)
	@Column(name = "trade_id")
	private String tradeId;

	@PrimaryKey(keyOrder = 1)
	@Column(name = "trader_id")
	private String traderId;

	private String action;

	private Double price;

	private Double shares;

	private String symbol;
}

The keyOrder parameter of @PrimaryKey identifies the properties corresponding to the primary key columns in order, starting with 1 and increasing consecutively. Order is important and must reflect the order defined in the Cloud Spanner schema. In our example the DDL to create the table and its primary key is as follows:

CREATE TABLE trades (
    trader_id STRING(MAX),
    trade_id STRING(MAX),
    action STRING(15),
    symbol STRING(10),
    price FLOAT64,
    shares FLOAT64
) PRIMARY KEY (trader_id, trade_id)

Cloud Spanner does not provide automatic ID generation. Sequential IDs should be used with caution, because monotonically increasing keys can create data hotspots in the system. Read the Spanner Primary Keys documentation for a better understanding of primary keys and recommended practices.
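
One common way to avoid such hotspots is to use randomly distributed keys, such as UUID strings. The following is a sketch of that approach, not a feature of this starter:

@Table(name = "trades")
public class Trade {

	// A randomly distributed key avoids write hotspots on contiguous key ranges.
	@PrimaryKey
	@Column(name = "trade_id")
	private String tradeId = java.util.UUID.randomUUID().toString();
}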

13.2.4 Columns

All accessible properties on POJOs are automatically recognized as Cloud Spanner columns. Column naming is generated by the PropertyNameFieldNamingStrategy, defined by default on the SpannerMappingContext bean. The @Column annotation optionally provides a different column name than that of the property, along with some other settings (see the sketch after this list):

  • name is the optional name of the column
  • spannerTypeMaxLength specifies for STRING and BYTES columns the maximum length. This setting is only used when generating DDL schema statements based on domain types.
  • nullable specifies if the column is created as NOT NULL. This setting is only used when generating DDL schema statements based on domain types.
  • spannerType is the Cloud Spanner column type you can optionally specify. If this is not specified then a compatible column type is inferred from the Java property type.
  • spannerCommitTimestamp is a boolean specifying if this property corresponds to an auto-populated commit timestamp column. Any value set in this property will be ignored when writing to Cloud Spanner.
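
For example, here is a sketch combining several of these settings; the field names and values are illustrative:

@Table(name = "trades")
public class Trade {

	@PrimaryKey
	@Column(name = "trade_id")
	private String tradeId;

	// With nullable = false, the generated DDL is: symbol STRING(10) NOT NULL
	@Column(name = "symbol", spannerTypeMaxLength = 10, nullable = false)
	private String symbol;

	// Any value set here is ignored on write; Cloud Spanner populates the commit timestamp.
	@Column(spannerCommitTimestamp = true)
	private com.google.cloud.Timestamp lastModified;
}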

13.2.5 Embedded Objects

If an object of type B is embedded as a property of A, then the columns of B will be saved in the same Cloud Spanner table as those of A.

If B has primary key columns, those columns will be included in the primary key of A. B can also have embedded properties. Embedding allows reuse of columns between multiple entities, and can be useful for implementing parent-child situations, because Cloud Spanner requires child tables to include the key columns of their parents.

For example:

class X {
  @PrimaryKey
  String grandParentId;

  long age;
}

class A {
  @PrimaryKey
  @Embedded
  X grandParent;

  @PrimaryKey(keyOrder = 2)
  String parentId;

  String value;
}

@Table(name = "items")
class B {
  @PrimaryKey
  @Embedded
  A parent;

  @PrimaryKey(keyOrder = 2)
  String id;

  @Column(name = "child_value")
  String value;
}

Entities of B can be stored in a table defined as:

CREATE TABLE items (
    grandParentId STRING(MAX),
    parentId STRING(MAX),
    id STRING(MAX),
    value STRING(MAX),
    child_value STRING(MAX),
    age INT64
) PRIMARY KEY (grandParentId, parentId, id)

Note that embedded properties' column names must all be unique.

13.2.6 Relationships

Spring Data Cloud Spanner supports parent-child relationships using the Cloud Spanner parent-child interleaved table mechanism. Cloud Spanner interleaved tables enforce the one-to-many relationship and provide efficient queries and operations on entities of a single domain parent entity. These relationships can be up to 7 levels deep. Cloud Spanner also provides automatic cascading delete or enforces the deletion of child entities before parents.

While one-to-one and many-to-many relationships can be implemented in Cloud Spanner and Spring Data Cloud Spanner using constructs of interleaved parent-child tables, only the parent-child relationship is natively supported. Cloud Spanner does not support the foreign key constraint, though the parent-child key constraint enforces a similar requirement when used with interleaved tables.

For example, the following Java entities:

@Table(name = "Singers")
class Singer {
  @PrimaryKey
  long SingerId;

  String FirstName;

  String LastName;

  byte[] SingerInfo;

  @Interleaved
  List<Album> albums;
}

@Table(name = "Albums")
class Album {
  @PrimaryKey
  long SingerId;

  @PrimaryKey(keyOrder = 2)
  long AlbumId;

  String AlbumTitle;
}

These classes can correspond to an existing pair of interleaved tables. The @Interleaved annotation may be applied to Collection properties and the inner type is resolved as the child entity type. The schema needed to create them can also be generated using the SpannerSchemaUtils and executed using the SpannerDatabaseAdminTemplate:

@Autowired
SpannerSchemaUtils schemaUtils;

@Autowired
SpannerDatabaseAdminTemplate databaseAdmin;
...

// Get the create statements for all tables in the table structure rooted at Singer
List<String> createStrings = this.schemaUtils.getCreateTableDdlStringsForInterleavedHierarchy(Singer.class);

// Create the tables and also create the database if necessary
this.databaseAdmin.executeDdlStrings(createStrings, true);

The createStrings list contains table schema statements using column names and types compatible with the provided Java type and any resolved child relationship types contained within based on the configured custom converters.

CREATE TABLE Singers (
  SingerId   INT64 NOT NULL,
  FirstName  STRING(1024),
  LastName   STRING(1024),
  SingerInfo BYTES(MAX),
) PRIMARY KEY (SingerId);

CREATE TABLE Albums (
  SingerId     INT64 NOT NULL,
  AlbumId      INT64 NOT NULL,
  AlbumTitle   STRING(MAX),
) PRIMARY KEY (SingerId, AlbumId),
  INTERLEAVE IN PARENT Singers ON DELETE CASCADE;

The ON DELETE CASCADE clause indicates that Cloud Spanner will delete all Albums of a singer if the Singer is deleted. The alternative is ON DELETE NO ACTION, where a Singer cannot be deleted until all of its Albums have already been deleted. When using SpannerSchemaUtils to generate the schema strings, the spring.cloud.gcp.spanner.createInterleavedTableDdlOnDeleteCascade boolean setting determines if these schema are generated as ON DELETE CASCADE for true and ON DELETE NO ACTION for false.

Cloud Spanner restricts these relationships to 7 child layers. A table may have multiple child tables.

When an object is updated or inserted into Cloud Spanner, all of its referenced child objects are also updated or inserted in the same request. On read, all of the interleaved child rows are also read.

13.2.7 Supported Types

Spring Data Cloud Spanner natively supports the following types for regular fields but also utilizes custom converters (detailed in following sections) and dozens of pre-defined Spring Data custom converters to handle other common Java types.

Natively supported types:

  • com.google.cloud.ByteArray
  • com.google.cloud.Date
  • com.google.cloud.Timestamp
  • java.lang.Boolean, boolean
  • java.lang.Double, double
  • java.lang.Long, long
  • java.lang.Integer, int
  • java.lang.String
  • double[]
  • long[]
  • boolean[]
  • java.util.Date
  • java.util.Instant
  • java.sql.Date

13.2.8 Lists

Spanner supports ARRAY types for columns. ARRAY columns are mapped to List fields in POJOs.

Example:

List<Double> curve;

The types inside the lists can be any singular property type.

13.2.9 Lists of Structs

Cloud Spanner queries can construct STRUCT values that appear as columns in the result. Cloud Spanner requires that STRUCT values appear in ARRAYs at the root level: SELECT ARRAY(SELECT STRUCT(1 as val1, 2 as val2)) as pair FROM Users.

Spring Data Cloud Spanner will attempt to read the column STRUCT values into a property that is an Iterable of an entity type compatible with the schema of the column STRUCT value.

For the previous array-select example, the following property can be mapped with the constructed ARRAY<STRUCT> column: List<TwoInts> pair; where the TwoInts type is defined:

class TwoInts {

  int val1;

  int val2;
}

13.2.10 Custom types

Custom converters can be used to extend the type support for user defined types.

  1. Converters need to implement the org.springframework.core.convert.converter.Converter interface in both directions.
  2. The user defined type needs to be mapped to one of the basic types supported by Spanner:

    • com.google.cloud.ByteArray
    • com.google.cloud.Date
    • com.google.cloud.Timestamp
    • java.lang.Boolean, boolean
    • java.lang.Double, double
    • java.lang.Long, long
    • java.lang.String
    • double[]
    • long[]
    • boolean[]
    • enum types
  3. Instances of both converters need to be passed to a ConverterAwareMappingSpannerEntityProcessor, which then has to be made available as a @Bean for SpannerEntityProcessor.

For example:

We would like to have a field of type Person on our Trade POJO:

@Table(name = "trades")
public class Trade {
  //...
  Person person;
  //...
}

Where Person is a simple class:

public class Person {

  public String firstName;
  public String lastName;

}

We have to define the two converters:

  public class PersonWriteConverter implements Converter<Person, String> {

    @Override
    public String convert(Person person) {
      return person.firstName + " " + person.lastName;
    }
  }

  public class PersonReadConverter implements Converter<String, Person> {

    @Override
    public Person convert(String s) {
      Person person = new Person();
      person.firstName = s.split(" ")[0];
      person.lastName = s.split(" ")[1];
      return person;
    }
  }

That will be configured in our @Configuration file:

@Configuration
public class ConverterConfiguration {

	@Bean
	public SpannerEntityProcessor spannerEntityProcessor(SpannerMappingContext spannerMappingContext) {
		return new ConverterAwareMappingSpannerEntityProcessor(spannerMappingContext,
				Arrays.asList(new PersonWriteConverter()),
				Arrays.asList(new PersonReadConverter()));
	}
}

13.2.11 Custom Converter for Struct Array Columns

If a Converter<Struct, A> is provided, then properties of type List<A> can be used in your entity types.
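
For example, here is a sketch of such a converter for the TwoInts type from the earlier section, assuming the STRUCT has INT64 fields val1 and val2:

public class TwoIntsStructConverter implements Converter<Struct, TwoInts> {

	@Override
	public TwoInts convert(Struct struct) {
		TwoInts result = new TwoInts();
		// Cloud Spanner INT64 values are read as long.
		result.val1 = (int) struct.getLong("val1");
		result.val2 = (int) struct.getLong("val2");
		return result;
	}
}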

13.3 Spanner Operations & Template

SpannerOperations and its implementation, SpannerTemplate, provide the Template pattern familiar to Spring developers. They provide:

  • Resource management
  • One-stop-shop to Spanner operations with the Spring Data POJO mapping and conversion features
  • Exception conversion

Using the autoconfiguration provided by our Spring Boot Starter for Spanner, your Spring application context will contain a fully configured SpannerTemplate object that you can easily autowire in your application:

@SpringBootApplication
public class SpannerTemplateExample {

	@Autowired
	SpannerTemplate spannerTemplate;

	public void doSomething() {
		this.spannerTemplate.delete(Trade.class, KeySet.all());
		//...
		Trade t = new Trade();
		//...
		this.spannerTemplate.insert(t);
		//...
		List<Trade> allTrades = this.spannerTemplate.readAll(Trade.class);
		//...
	}
}

The Template API provides convenience methods for:

  • Reads, with SpannerReadOptions and SpannerQueryOptions providing:

    • Stale read
    • Read with secondary indices
    • Read with limits and offsets
    • Read with sorting
  • Queries
  • DML operations (delete, insert, update, upsert)
  • Partial reads

    • You can define a set of columns to be read into your entity
  • Partial writes

    • Persist only a few properties from your entity
  • Read-only transactions
  • Locking read-write transactions

13.3.1 SQL Query

Cloud Spanner has SQL support for running read-only queries. All the query-related methods on SpannerTemplate start with query. Using SpannerTemplate you can execute SQL queries that map to POJOs:

List<Trade> trades = this.spannerTemplate.query(Trade.class, Statement.of("SELECT * FROM trades"));

13.3.2 Read

Spanner exposes a Read API for reading single row or multiple rows in a table or in a secondary index.

Using SpannerTemplate you can execute reads, for example:

List<Trade> trades = this.spannerTemplate.readAll(Trade.class);

The main benefit of reads over queries is that reading multiple rows matching a certain pattern of keys is much easier using the features of the KeySet class.

13.3.3 Advanced reads

Stale read

All reads and queries are strong reads by default. A strong read is a read at the current timestamp and is guaranteed to see all data that has been committed up until the start of this read. A stale read, on the other hand, is a read at a timestamp in the past. Cloud Spanner allows you to determine how current the data should be when you read it. With SpannerTemplate, you can specify the timestamp by setting it on the SpannerQueryOptions or SpannerReadOptions passed to the appropriate query or read methods:

Reads:

// a read with options:
SpannerReadOptions spannerReadOptions = new SpannerReadOptions().setTimestamp(Timestamp.now());
List<Trade> trades = this.spannerTemplate.readAll(Trade.class, spannerReadOptions);

Queries:

// a query with options:
SpannerQueryOptions spannerQueryOptions = new SpannerQueryOptions().setTimestamp(Timestamp.now());
List<Trade> trades = this.spannerTemplate.query(Trade.class, Statement.of("SELECT * FROM trades"), spannerQueryOptions);

Read from a secondary index

Reading using a secondary index is available via the Template API, and it is also implicitly available via SQL for queries.

The following shows how to read rows from a table using a secondary index simply by setting index on SpannerReadOptions:

SpannerReadOptions spannerReadOptions = new SpannerReadOptions().setIndex("TradesByTrader");
List<Trade> trades = this.spannerTemplate.readAll(Trade.class, spannerReadOptions);

Read with offsets and limits

Limits and offsets are only supported by Queries. The following will get only the first two rows of the query:

SpannerQueryOptions spannerQueryOptions = new SpannerQueryOptions().setLimit(2).setOffset(3);
List<Trade> trades = this.spannerTemplate.query(Trade.class, Statement.of("SELECT * FROM trades"), spannerQueryOptions);

Note that the above is equivalent to executing SELECT * FROM trades LIMIT 2 OFFSET 3.

Sorting

Reads by keys do not support sorting. However, queries on the Template API support sorting through standard SQL and also via Spring Data Sort API:

List<Trade> trades = this.spannerTemplate.queryAll(Trade.class, Sort.by("action"));

If the provided sorted field name is that of a property of the domain type, then the column name corresponding to that property will be used in the query. Otherwise, the given field name is assumed to be the name of the column in the Cloud Spanner table. Sorting on columns of Cloud Spanner types STRING and BYTES can be done while ignoring case:

Sort.by(Order.desc("action").ignoreCase())

Partial read

Partial read is only possible when using queries. If the rows returned by a query contain fewer columns than the entity they are mapped to, Spring Data will map only the returned columns. This setting also applies to nested structs and their corresponding nested POJO properties.

List<Trade> trades = this.spannerTemplate.query(Trade.class, Statement.of("SELECT action, symbol FROM trades"),
    new SpannerQueryOptions().setAllowMissingResultSetColumns(true));

If the setting is set to false, then an exception will be thrown if there are missing columns in the query result.

Summary of options for Query vs Read

Feature | Query supports it | Read supports it
SQL | yes | no
Partial read | yes | no
Limits | yes | no
Offsets | yes | no
Secondary index | yes | yes
Read using index range | no | yes
Sorting | yes | no

13.3.4 Write / Update

The write methods of SpannerOperations accept a POJO and write all of its properties to Spanner. The corresponding Spanner table and entity metadata are obtained from the given object’s actual type.

If a POJO was retrieved from Spanner and its primary key properties values were changed and then written or updated, the operation will occur as if against a row with the new primary key values. The row with the original primary key values will not be affected.

Insert

The insert method of SpannerOperations accepts a POJO and writes all of its properties to Spanner, which means the operation will fail if a row with the POJO’s primary key already exists in the table.

Trade t = new Trade();
this.spannerTemplate.insert(t);

Update

The update method of SpannerOperations accepts a POJO and writes all of its properties to Spanner, which means the operation will fail if the POJO’s primary key does not already exist in the table.

// t was retrieved from a previous operation
this.spannerTemplate.update(t);

Upsert

The upsert method of SpannerOperations accepts a POJO and writes all of its properties to Spanner using update-or-insert.

// t was retrieved from a previous operation or it's new
this.spannerTemplate.upsert(t);

Partial Update

The update methods of SpannerOperations operate by default on all properties within the given object, but also accept String[] and Optional<Set<String>> of column names. If the Optional of the set of column names is empty, all columns are written to Spanner. However, if the Optional contains an empty set, no columns will be written.

// t was retrieved from a previous operation or it's new
this.spannerTemplate.update(t, "symbol", "action");

13.3.5 DML

DML statements can be executed using SpannerOperations.executeDmlStatement. Inserts, updates, and deletions can affect any number of rows and entities.
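
For example, a sketch using the trades table from the earlier mapping examples:

// Returns the number of rows affected by the statement.
long rowsAffected = this.spannerTemplate.executeDmlStatement(
		Statement.of("UPDATE trades SET action = 'BUY' WHERE symbol = 'ABCD'"));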

13.3.6 Transactions

SpannerOperations provides methods to run java.util.Function objects within a single transaction while making available the read and write methods from SpannerOperations.

Read/Write Transaction

Read and write transactions are provided by SpannerOperations via the performReadWriteTransaction method:

@Autowired
SpannerOperations mySpannerOperations;

public String doWorkInsideTransaction() {
  return mySpannerOperations.performReadWriteTransaction(
    transActionSpannerOperations -> {
      // Work with transActionSpannerOperations here.
      // It is also a SpannerOperations object.

      return "transaction completed";
    }
  );
}

The performReadWriteTransaction method accepts a Function that is provided an instance of a SpannerOperations object. The final returned value and type of the function is determined by the user. You can use this object just as you would a regular SpannerOperations with a few exceptions:

  • Its read functionality cannot perform stale reads, because all reads and writes happen at the single point in time of the transaction.
  • It cannot perform sub-transactions via performReadWriteTransaction or performReadOnlyTransaction.

As these read-write transactions are locking, it is recommended that you use performReadOnlyTransaction if your function does not perform any writes.

Read-only Transaction

The performReadOnlyTransaction method is used to perform read-only transactions using a SpannerOperations:

@Autowired
SpannerOperations mySpannerOperations;

public String doWorkInsideTransaction() {
  return mySpannerOperations.performReadOnlyTransaction(
    transActionSpannerOperations -> {
      // Work with transActionSpannerOperations here.
      // It is also a SpannerOperations object.

      return "transaction completed";
    }
  );
}

The performReadOnlyTransaction method accepts a Function that is provided an instance of a SpannerOperations object. This method also accepts a ReadOptions object, but the only attribute used is the timestamp used to determine the snapshot in time to perform the reads in the transaction. If the timestamp is not set in the read options the transaction is run against the current state of the database. The final returned value and type of the function is determined by the user. You can use this object just as you would a regular SpannerOperations with a few exceptions:

  • Its read functionality cannot perform stale reads, because all reads happen at the single point in time of the transaction.
  • It cannot perform sub-transactions via performReadWriteTransaction or performReadOnlyTransaction
  • It cannot perform any write operations.

Because read-only transactions are non-locking and can be performed on points in time in the past, these are recommended for functions that do not perform write operations.

Declarative Transactions with @Transactional Annotation

This feature requires a bean of SpannerTransactionManager, which is provided when using spring-cloud-gcp-starter-data-spanner.

SpannerTemplate and SpannerRepository support running methods with the @Transactional [annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative) as transactions. If a method annotated with @Transactional calls another method also annotated, then both methods will work within the same transaction. performReadOnlyTransaction and performReadWriteTransaction cannot be used in @Transactional annotated methods because Cloud Spanner does not support transactions within transactions.
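
For example, this is a sketch of a service method that runs two template operations in a single transaction:

@Service
public class TradeService {

	@Autowired
	SpannerTemplate spannerTemplate;

	@Transactional
	public void updateTwoTrades(Trade a, Trade b) {
		// Both writes commit or roll back together in one read-write transaction.
		this.spannerTemplate.update(a);
		this.spannerTemplate.update(b);
	}
}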

13.3.7 DML Statements

SpannerTemplate supports [DML](https://cloud.google.com/spanner/docs/dml-tasks) Statements. DML statements can be executed in transactions via performReadWriteTransaction or using the @Transactional annotation.

When DML statements are executed outside of transactions, they are executed in [partitioned-mode](https://cloud.google.com/spanner/docs/dml-tasks#partitioned-dml).

13.4 Repositories

Spring Data Repositories are a powerful abstraction that can save you a lot of boilerplate code.

For example:

public interface TraderRepository extends SpannerRepository<Trader, String> {
}

Spring Data generates a working implementation of the specified interface, which can be conveniently autowired into an application.

The Trader type parameter to SpannerRepository refers to the underlying domain type. The second type parameter, String in this case, refers to the type of the key of the domain type.

For POJOs with a composite primary key, this ID type parameter can be any descendant of Object[] compatible with all primary key properties, any descendant of Iterable, or com.google.cloud.spanner.Key. If the domain POJO type only has a single primary key column, then the primary key property type can be used or the Key type.

For example, in the case of Trades that belong to a Trader, TradeRepository would look like this:

public interface TradeRepository extends SpannerRepository<Trade, String[]> {

	int countByAction(String action);
}

public class MyApplication {

	@Autowired
	SpannerTemplate spannerTemplate;

	@Autowired
	TradeRepository tradeRepository;

	public void demo() {

		this.tradeRepository.deleteAll();
		String traderId = "demo_trader";
		Trade t = new Trade();
		t.symbol = "ABCD";
		t.action = "BUY";
		t.traderId = traderId;
		t.price = 100.0;
		t.shares = 12345.6;
		this.spannerTemplate.insert(t);

		Iterable<Trade> allTrades = this.tradeRepository.findAll();

		int count = this.tradeRepository.countByAction("BUY");
	}
}

13.4.1 CRUD Repository

CrudRepository methods work as expected, with one Spanner-specific difference: the save and saveAll methods work as update-or-insert.
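
For example, assuming the TraderRepository shown earlier is autowired, a sketch:

Trader trader = new Trader();
trader.traderId = "demo_trader";
// save performs an update-or-insert rather than failing on an existing key.
this.traderRepository.save(trader);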

13.4.2 Paging and Sorting Repository

You can also use PagingAndSortingRepository with Spanner Spring Data. The sorting and pageable findAll methods available from this interface operate on the current state of the Spanner database. As a result, beware that the state of the database (and the results) might change when moving page to page.

13.4.3 Spanner Repository

The SpannerRepository extends the PagingAndSortingRepository, but adds the read-only and read-write transaction functionality provided by Spanner. These transactions work very similarly to those of SpannerOperations, but are specific to the repository’s domain type and provide repository functions instead of template functions.

For example, this is a read-write transaction:

@Autowired
SpannerRepository myRepo;

public String doWorkInsideTransaction() {
  return myRepo.performReadWriteTransaction(
    transactionSpannerRepo -> {
      // Work with the single-transaction transactionSpannerRepo here.
      // This is a SpannerRepository object.

      return "transaction completed";
    }
  );
}

When creating custom repositories for your own domain types and query methods, you can extend SpannerRepository to access Cloud Spanner-specific features as well as all features from PagingAndSortingRepository and CrudRepository.

13.5 Query Methods

SpannerRepository supports Query Methods. Described in the following sections, these are methods residing in your custom repository interfaces of which implementations are generated based on their names and annotations. Query Methods can read, write, and delete entities in Cloud Spanner. Parameters to these methods can be any Cloud Spanner data type supported directly or via custom configured converters. Parameters can also be of type Struct or POJOs. If a POJO is given as a parameter, it will be converted to a Struct with the same type-conversion logic as used to create write mutations. Comparisons using Struct parameters are limited to what is available with Cloud Spanner.

13.5.1 Query methods by convention

public interface TradeRepository extends SpannerRepository<Trade, String[]> {
    List<Trade> findByAction(String action);

	int countByAction(String action);

	// Named methods are powerful, but can get unwieldy
	List<Trade> findTop3DistinctByActionAndSymbolIgnoreCaseOrTraderIdOrderBySymbolDesc(
  			String action, String symbol, String traderId);
}

In the example above, the query methods in TradeRepository are generated based on the name of the methods, using the Spring Data Query creation naming convention.

List<Trade> findByAction(String action) would translate to a SELECT * FROM trades WHERE action = ?.

The function List<Trade> findTop3DistinctByActionAndSymbolIgnoreCaseOrTraderIdOrderBySymbolDesc(String action, String symbol, String traderId); will be translated as the equivalent of this SQL query:

SELECT DISTINCT * FROM trades
WHERE ACTION = ? AND LOWER(SYMBOL) = LOWER(?) AND TRADER_ID = ?
ORDER BY SYMBOL DESC
LIMIT 3

The following filter options are supported:

  • Equality
  • Greater than or equals
  • Greater than
  • Less than or equals
  • Less than
  • Is null
  • Is not null
  • Is true
  • Is false
  • Like a string
  • Not like a string
  • Contains a string
  • Not contains a string

Note that the phrase SymbolIgnoreCase is translated to LOWER(SYMBOL) = LOWER(?), indicating case-insensitive matching. The IgnoreCase phrase may only be appended to fields that correspond to columns of type STRING or BYTES. The Spring Data "AllIgnoreCase" phrase appended at the end of the method name is not supported.

The Like or NotLike naming conventions:

List<Trade> findBySymbolLike(String symbolFragment);

The param symbolFragment can contain wildcard characters for string matching such as _ and %.

The Contains and NotContains naming conventions:

List<Trade> findBySymbolContains(String symbolFragment);

The param symbolFragment is a regular expression that is checked for occurrences.

Delete queries are also supported. For example, query methods such as deleteByAction or removeByAction delete entities found by findByAction. The delete operation happens in a single transaction.

Delete queries can have the following return types (see the sketch after this list):

  • An integer type that is the number of entities deleted
  • A collection of entities that were deleted
  • void
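
For example, these sketches cover the three return types:

public interface TradeRepository extends SpannerRepository<Trade, String[]> {

	// Returns the number of entities deleted.
	long deleteByAction(String action);

	// Returns the entities that were deleted.
	List<Trade> removeByAction(String action);

	// Returns nothing.
	void deleteBySymbol(String symbol);
}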

13.5.2 Custom SQL/DML query methods

A method such as List<Trade> fetchByActionNamedQuery(String action) does not match the Spring Data Query creation naming convention, so we have to map a parametrized Spanner SQL query to it.

The SQL query for the method can be mapped to repository methods in one of two ways:

  • namedQueries properties file
  • using the @Query annotation

The tag names in the SQL correspond to the @Param-annotated names of the method parameters.

Custom SQL query methods can accept a single Sort or Pageable parameter that is applied on top of any sorting or paging in the SQL:

	@Query("SELECT * FROM trades ORDER BY action DESC")
	List<Trade> sortedTrades(Pageable pageable);

	@Query("SELECT * FROM trades ORDER BY action DESC LIMIT 1")
 	Trade sortedTopTrade(Pageable pageable);

This can be used:

	List<Trade> customSortedTrades = tradeRepository.sortedTrades(PageRequest
  				.of(2, 2, org.springframework.data.domain.Sort.by(Order.asc("id"))));

The results would be sorted by "id" in ascending order.

Your query method can also return non-entity types:

  	@Query("SELECT COUNT(1) FROM trades WHERE action = @action")
  	int countByActionQuery(String action);

  	@Query("SELECT EXISTS(SELECT COUNT(1) FROM trades WHERE action = @action)")
  	boolean existsByActionQuery(String action);

  	@Query("SELECT action FROM trades WHERE action = @action LIMIT 1")
  	String getFirstString(@Param("action") String action);

  	@Query("SELECT action FROM trades WHERE action = @action")
  	List<String> getFirstStringList(@Param("action") String action);

DML statements can also be executed by query methods, but the only possible return value is a long representing the number of affected rows. The dmlStatement boolean setting must be set on @Query to indicate that the query method is executed as a DML statement.

  	@Query(value = "DELETE FROM trades WHERE action = @action", dmlStatement = true)
  	long deleteByActionQuery(String action);

Query methods with named queries properties

By default, the namedQueriesLocation attribute on @EnableSpannerRepositories points to the META-INF/spanner-named-queries.properties file. You can specify the query for a method in the properties file by providing the SQL as the value for the "interface.method" property:

Trade.fetchByActionNamedQuery=SELECT * FROM trades WHERE trades.action = @tag0

public interface TradeRepository extends SpannerRepository<Trade, String[]> {
	// This method uses the query from the properties file instead of one generated based on name.
	List<Trade> fetchByActionNamedQuery(@Param("tag0") String action);
}

Query methods with annotation

Using the @Query annotation:

public interface TradeRepository extends SpannerRepository<Trade, String[]> {
    @Query("SELECT * FROM trades WHERE trades.action = @tag0")
    List<Trade> fetchByActionNamedQuery(@Param("tag0") String action);
}

Table names can be used directly. For example, "trades" in the above example. Alternatively, table names can be resolved from the @Table annotation on domain classes as well. In this case, the query should refer to table names with fully qualified class names between : characters: :fully.qualified.ClassName:. A full example would look like:

@Query("SELECT * FROM :com.example.Trade: WHERE trades.action = @tag0")
List<Trade> fetchByActionNamedQuery(@Param("tag0") String action);

This allows table names evaluated with SpEL to be used in custom queries.

SpEL can also be used to provide SQL parameters:

@Query("SELECT * FROM :com.example.Trade: WHERE trades.action = @tag0
  AND price > #{#priceRadius * -1} AND price < #{#priceRadius * 2}")
List<Trade> fetchByActionNamedQuery(String action, Double priceRadius);

13.5.3 Projections

Spring Data Spanner supports projections. You can define projection interfaces based on domain types and add query methods that return them in your repository:

public interface TradeProjection {

	String getAction();

	@Value("#{target.symbol + ' ' + target.action}")
	String getSymbolAndAction();
}

public interface TradeRepository extends SpannerRepository<Trade, Key> {

	List<Trade> findByTraderId(String traderId);

	List<TradeProjection> findByAction(String action);

	@Query("SELECT action, symbol FROM trades WHERE action = @action")
	List<TradeProjection> findByQuery(String action);
}

Projections can be provided by name-convention-based query methods as well as by custom SQL queries. If using custom SQL queries, you can further restrict the columns retrieved from Spanner to just those required by the projection to improve performance.

Properties of projection types defined using SpEL use the fixed name target for the underlying domain object. As a result, accessing underlying properties takes the form target.<property-name>.

13.5.4 REST Repositories

When running with Spring Boot, repositories can be exposed as REST services by simply adding this dependency to your pom file:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-rest</artifactId>
</dependency>

If you prefer to configure parameters (such as path), you can use the @RepositoryRestResource annotation:

@RepositoryRestResource(collectionResourceRel = "trades", path = "trades")
public interface TradeRepository extends SpannerRepository<Trade, String[]> {
}

For example, you can retrieve all Trade objects in the repository by using curl http://<server>:<port>/trades, or any specific trade via curl http://<server>:<port>/trades/<trader_id>,<trade_id>.

The separator between your primary key components, trader_id and trade_id in this case, is a comma by default, but can be configured to any string not found in your key values by extending the SpannerKeyIdConverter class:

@Component
class MySpecialIdConverter extends SpannerKeyIdConverter {

    @Override
    protected String getUrlIdSeparator() {
        return ":";
    }
}

You can also write trades using curl -XPOST -H "Content-Type: application/json" -d @test.json http://<server>:<port>/trades/, where the file test.json holds the JSON representation of a Trade object.

13.6 Database and Schema Admin

Databases and tables inside Spanner instances can be created automatically from SpannerPersistentEntity objects:

@Autowired
private SpannerSchemaUtils spannerSchemaUtils;

@Autowired
private SpannerDatabaseAdminTemplate spannerDatabaseAdminTemplate;

public void createTable(SpannerPersistentEntity entity) {
	if (!spannerDatabaseAdminTemplate.tableExists(entity.tableName())) {

	  // The boolean parameter indicates that the database will be created if it does not exist.
	  spannerDatabaseAdminTemplate.executeDdlStrings(Arrays.asList(
            spannerSchemaUtils.getCreateTableDdlString(entity.getType())), true);
	}
}

Schemas can be generated for entire object hierarchies with interleaved relationships and composite keys.

13.7 Sample

A sample application is available.

14. Spring Data Cloud Datastore

Spring Data is an abstraction for storing and retrieving POJOs in numerous storage technologies. Spring Cloud GCP adds Spring Data support for Google Cloud Datastore.

Maven coordinates for this module only, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-data-datastore</artifactId>
</dependency>

Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-data-datastore'
}

We provide a Spring Boot Starter for Spring Data Datastore, with which you can use our recommended auto-configuration setup. To use the starter, see the coordinates below.

Maven:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-data-datastore</artifactId>
</dependency>

Gradle:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-data-datastore'
}

This setup takes care of bringing in the latest compatible version of the Google Cloud Java Client for Datastore libraries as well.

14.1 Configuration

To set up Spring Data Cloud Datastore, you have to configure the following:

  • Set up the connection details to Google Cloud Datastore.

14.1.1 Cloud Datastore settings

You can use the Spring Boot Starter for Spring Data Datastore to autoconfigure Google Cloud Datastore in your Spring application. It contains all the necessary setup that makes it easy to authenticate with your Google Cloud project. The following configuration options are available:

Name | Description | Required | Default value
spring.cloud.gcp.datastore.enabled | Enables the Cloud Datastore client | No | true
spring.cloud.gcp.datastore.project-id | GCP project ID where the Google Cloud Datastore API is hosted, if different from the one in the Spring Cloud GCP Core Module | No |
spring.cloud.gcp.datastore.credentials.location | OAuth2 credentials for authenticating with the Google Cloud Datastore API, if different from the ones in the Spring Cloud GCP Core Module | No |
spring.cloud.gcp.datastore.credentials.encoded-key | Base64-encoded OAuth2 credentials for authenticating with the Google Cloud Datastore API, if different from the ones in the Spring Cloud GCP Core Module | No |
spring.cloud.gcp.datastore.credentials.scopes | OAuth2 scope for Spring Cloud GCP Cloud Datastore credentials | No | https://www.googleapis.com/auth/datastore
spring.cloud.gcp.datastore.namespace | The Cloud Datastore namespace to use | No | the Default namespace of Cloud Datastore in your GCP project
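For example, a minimal application.properties overriding the project and namespace might look like the following sketch (the values are placeholders):

spring.cloud.gcp.datastore.project-id=my-gcp-project
spring.cloud.gcp.datastore.namespace=my-namespace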

14.1.2 Repository settings

Spring Data Repositories can be configured via the @EnableDatastoreRepositories annotation on your main @Configuration class. With our Spring Boot Starter for Spring Data Cloud Datastore, @EnableDatastoreRepositories is automatically added. It is not required to add it to any other class, unless there is a need to override the finer-grained configuration parameters provided by @EnableDatastoreRepositories.

14.1.3 Autoconfiguration

Our Spring Boot autoconfiguration creates the following beans, available in the Spring application context:

  • an instance of DatastoreTemplate
  • an instance of all user-defined repositories extending CrudRepository, PagingAndSortingRepository, and DatastoreRepository (an extension of PagingAndSortingRepository with additional Cloud Datastore features) when repositories are enabled
  • an instance of Datastore from the Google Cloud Java Client for Datastore, for convenience and lower level API access

14.2 Object Mapping

Spring Data Cloud Datastore allows you to map domain POJOs to Cloud Datastore kinds and entities via annotations:

@Entity(name = "traders")
public class Trader {

	@Id
	@Field(name = "trader_id")
	String traderId;

	String firstName;

	String lastName;

	@Transient
	Double temporaryNumber;
}

Spring Data Cloud Datastore will ignore any property annotated with @Transient. These properties will not be written to or read from Cloud Datastore.

14.2.1 Constructors

Simple constructors are supported on POJOs. The constructor arguments can be a subset of the persistent properties. Every constructor argument needs to have the same name and type as a persistent property on the entity and the constructor should set the property from the given argument. Arguments that are not directly set to properties are not supported.

@Entity(name = "traders")
public class Trader {

	@Id
	@Field(name = "trader_id")
	String traderId;

	String firstName;

	String lastName;

	@Transient
	Double temporaryNumber;

	public Trader(String traderId, String firstName) {
	    this.traderId = traderId;
	    this.firstName = firstName;
	}
}

14.2.2 Kind

The @Entity annotation can provide the name of the Cloud Datastore kind that stores instances of the annotated class, one per row.

14.2.3 Keys

@Id identifies the property corresponding to the ID value.

You must annotate one of your POJO’s fields as the ID value, because every entity in Cloud Datastore requires a single ID value:

@Entity(name = "trades")
public class Trade {
	@Id
	@Field(name = "trade_id")
	String tradeId;

	@Field(name = "trader_id")
	String traderId;

	String action;

	Double price;

	Double shares;

	String symbol;
}

Datastore can automatically allocate integer ID values. If a POJO instance with a Long ID property is written to Cloud Datastore with null as the ID value, then Spring Data Cloud Datastore will obtain a newly allocated ID value from Cloud Datastore and set that in the POJO for saving. Because primitive long ID properties cannot be null (they default to 0), keys will not be allocated for them.
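For example, a sketch of an entity that relies on allocated IDs (the kind and property names are hypothetical):

@Entity(name = "payments")
public class Payment {

	@Id
	Long paymentId; // left null before the first save; Datastore allocates an ID and sets it here

	Double amount;
}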

14.2.4 Fields

All accessible properties on POJOs are automatically recognized as Cloud Datastore fields. Field naming is generated by the PropertyNameFieldNamingStrategy, defined by default on the DatastoreMappingContext bean. The @Field annotation optionally provides a different field name than that of the property.

14.2.5 Supported Types

Spring Data Cloud Datastore supports the following types for regular fields and elements of collections:

Type | Stored as
com.google.cloud.Timestamp | com.google.cloud.datastore.TimestampValue
com.google.cloud.datastore.Blob | com.google.cloud.datastore.BlobValue
com.google.cloud.datastore.LatLng | com.google.cloud.datastore.LatLngValue
java.lang.Boolean, boolean | com.google.cloud.datastore.BooleanValue
java.lang.Double, double | com.google.cloud.datastore.DoubleValue
java.lang.Long, long | com.google.cloud.datastore.LongValue
java.lang.Integer, int | com.google.cloud.datastore.LongValue
java.lang.String | com.google.cloud.datastore.StringValue
com.google.cloud.datastore.Entity | com.google.cloud.datastore.EntityValue
com.google.cloud.datastore.Key | com.google.cloud.datastore.KeyValue
byte[] | com.google.cloud.datastore.BlobValue
Java enum values | com.google.cloud.datastore.StringValue

In addition, all types that can be converted to the ones listed in the table by org.springframework.core.convert.support.DefaultConversionService are supported.

14.2.6 Custom types

Custom converters can be used to extend the type support for user-defined types.

  1. Converters need to implement the org.springframework.core.convert.converter.Converter interface in both directions.
  2. The user-defined type needs to be mapped to one of the basic types supported by Cloud Datastore.
  3. An instance of both Converters (read and write) needs to be passed to the DatastoreCustomConversions constructor, which then has to be made available as a @Bean for DatastoreCustomConversions.

For example:

We would like to have a field of type Album on our Singer POJO and want it to be stored as a string property:

@Entity
public class Singer {

	@Id
	String singerId;

	String name;

	Album album;
}

Where Album is a simple class:

public class Album {
	String albumName;

	LocalDate date;
}

We have to define the two converters:

	//Converter to write custom Album type
	static final Converter<Album, String> ALBUM_STRING_CONVERTER =
			new Converter<Album, String>() {
				@Override
				public String convert(Album album) {
					return album.getAlbumName() + " " + album.getDate().format(DateTimeFormatter.ISO_DATE);
				}
			};

	//Converters to read custom Album type
	static final Converter<String, Album> STRING_ALBUM_CONVERTER =
			new Converter<String, Album>() {
				@Override
				public Album convert(String s) {
					String[] parts = s.split(" ");
					return new Album(parts[0], LocalDate.parse(parts[parts.length - 1], DateTimeFormatter.ISO_DATE));
				}
			};

That will be configured in our @Configuration file:

@Configuration
public class ConverterConfiguration {
	@Bean
	public DatastoreCustomConversions datastoreCustomConversions() {
		return new DatastoreCustomConversions(
				Arrays.asList(
						ALBUM_STRING_CONVERTER,
						STRING_ALBUM_CONVERTER));
	}
}

14.2.7 Collections and arrays

Arrays and collections (types that implement java.util.Collection) of supported types are supported. They are stored as com.google.cloud.datastore.ListValue, and elements are converted to Cloud Datastore supported types individually. byte[] is an exception: it is converted to com.google.cloud.datastore.Blob.
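For example, a sketch of an entity holding collection properties (the kind and property names are hypothetical):

@Entity(name = "dogs")
public class Dog {

	@Id
	String name;

	List<String> nicknames; // stored as a ListValue of StringValue elements

	byte[] photo; // the exception: stored as a single Blob, not a ListValue
}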

14.2.8 Custom Converter for collections

Users can provide converters from List<?> to the custom collection type. Only a read converter is necessary; the Collection API is used on the write side to convert a collection to the internal list type.

Collection converters need to implement the org.springframework.core.convert.converter.Converter interface.

Example:

Let’s improve the Singer class from the previous example. Instead of a field of type Album, we would like to have a field of type ImmutableSet<Album>:

@Entity
public class Singer {

	@Id
	String singerId;

	String name;

	ImmutableSet<Album> albums;
}

We have to define a read converter only:

static final Converter<List<?>, ImmutableSet<?>> LIST_IMMUTABLE_SET_CONVERTER =
			new Converter<List<?>, ImmutableSet<?>>() {
				@Override
				public ImmutableSet<?> convert(List<?> source) {
					return ImmutableSet.copyOf(source);
				}
			};

And add it to the list of custom converters:

@Configuration
public class ConverterConfiguration {
	@Bean
	public DatastoreCustomConversions datastoreCustomConversions() {
		return new DatastoreCustomConversions(
				Arrays.asList(
						LIST_IMMUTABLE_SET_CONVERTER,

						ALBUM_STRING_CONVERTER,
						STRING_ALBUM_CONVERTER));
	}
}

14.3 Relationships

There are three ways, described in this section, to represent relationships between entities:

  • Embedded entities stored directly in the field of the containing entity
  • @Descendants annotated properties for one-to-many relationships
  • @Reference annotated properties for general relationships without hierarchy

14.3.1 Embedded Entities

Fields whose types are also annotated with @Entity are converted to EntityValue and stored inside the parent entity.

Here is an example of a Cloud Datastore entity containing an embedded entity, in JSON:

{
  "name" : "Alexander",
  "age" : 47,
  "child" : {"name" : "Philip"  }
}

This corresponds to a simple pair of Java entities:

import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity;
import org.springframework.data.annotation.Id;

@Entity("parents")
public class Parent {
  @Id
  String name;

  Child child;
}

@Entity
public class Child {
  String name;
}

Child entities are not stored in their own kind. They are stored in their entirety in the child field of the parents kind.

Multiple levels of embedded entities are supported.

[Note]Note

Embedded entities don’t need to have an @Id field; it is only required for top-level entities.

Example:

Entities can hold embedded entities that are their own type. We can store trees in Cloud Datastore using this feature:

import org.springframework.cloud.gcp.data.datastore.core.mapping.Embedded;
import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity;
import org.springframework.data.annotation.Id;

@Entity
public class EmbeddableTreeNode {
  @Id
  long value;

  EmbeddableTreeNode left;

  EmbeddableTreeNode right;

  Map<String, Long> longValues;

  Map<String, List<Timestamp>> listTimestamps;

  public EmbeddableTreeNode(long value, EmbeddableTreeNode left, EmbeddableTreeNode right) {
    this.value = value;
    this.left = left;
    this.right = right;
  }
}

Maps

Maps will be stored as embedded entities where the key values become the field names in the embedded entity. The value types in these maps can be any regularly supported property type, and the key values will be converted to String using the configured converters.

Also, a collection of entities can be embedded; it will be converted to ListValue on write.

Example:

Instead of a binary tree from the previous example, we would like to store a general tree (each node can have an arbitrary number of children) in Cloud Datastore. To do that, we need to create a field of type List<EmbeddableTreeNode>:

import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity;
import org.springframework.data.annotation.Id;

@Entity
public class EmbeddableTreeNode {
  @Id
  long value;

  List<EmbeddableTreeNode> children;

  Map<String, EmbeddableTreeNode> siblingNodes;

  Map<String, Set<EmbeddableTreeNode>> subNodeGroups;

  public EmbeddableTreeNode(List<EmbeddableTreeNode> children) {
    this.children = children;
  }
}

Because Maps are stored as entities, they can further hold embedded entities:

  • Singular embedded objects in the value can be stored in the values of embedded Maps.
  • Collections of embedded objects in the value can also be stored as the values of embedded Maps.
  • Maps in the value are further stored as embedded entities with the same rules applied recursively for their values.

14.3.2 Ancestor-Descendant Relationships

Parent-child relationships are supported via the @Descendants annotation.

Unlike embedded children, descendants are fully-formed entities residing in their own kinds. The parent entity does not have an extra field to hold the descendant entities. Instead, the relationship is captured in the descendants' keys, which refer to their parent entities:

import org.springframework.cloud.gcp.data.datastore.core.mapping.Descendants;
import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity;
import org.springframework.data.annotation.Id;

@Entity("orders")
public class ShoppingOrder {
  @Id
  long id;

  @Descendants
  List<Item> items;
}

@Entity("purchased_item")
public class Item {
  @Id
  Key purchasedItemKey;

  String name;

  Timestamp timeAddedToOrder;
}

For example, the GQL key-literal representation for an Item would also contain the parent ShoppingOrder ID value:

Key(orders, '12345', purchased_item, 'eggs')

The GQL key-literal representation for the parent ShoppingOrder would be:

Key(orders, '12345')

The Cloud Datastore entities exist separately in their own kinds.

The ShoppingOrder:

{
  "id" : 12345
}

The two items inside that order:

{
  "purchasedItemKey" : Key(orders, '12345', purchased_item, 'eggs'),
  "name" : "eggs",
  "timeAddedToOrder" : "2014-09-27 12:30:00.45-8:00"
}

{
  "purchasedItemKey" : Key(orders, '12345', purchased_item, 'sausage'),
  "name" : "sausage",
  "timeAddedToOrder" : "2014-09-28 11:30:00.45-9:00"
}

The parent-child relationship structure of objects is stored in Cloud Datastore using Datastore’s ancestor relationships. Because the relationships are defined by the Ancestor mechanism, there is no extra column needed in either the parent or child entity to store this relationship. The relationship link is part of the descendant entity’s key value. These relationships can be many levels deep.

Properties holding child entities must be collection-like, but they can be any of the inter-convertible collection-like types that are supported for regular properties (List, arrays, Set, and so on). Child items must have Key as their ID type because Cloud Datastore stores the ancestor relationship link inside the keys of the children.

Reading or saving an entity automatically causes all subsequent levels of children under that entity to be read or saved, respectively. If a new child is created and added to a property annotated @Descendants and the key property is left null, then a new key will be allocated for that child. The ordering of the retrieved children may not be the same as the ordering in the original property that was saved.

Child entities cannot be moved from the property of one parent to that of another unless the child’s key property is set to null or a value that contains the new parent as an ancestor. Since Cloud Datastore entity keys can have multiple parents, it is possible that a child entity appears in the property of multiple parent entities. Because entity keys are immutable in Cloud Datastore, to change the key of a child you must delete the existing one and re-save it with the new key.

14.3.3 Key Reference Relationships

General relationships can be stored using the @Reference annotation.

import org.springframework.cloud.gcp.data.datastore.core.mapping.Reference;
import org.springframework.data.annotation.Id;

@Entity
public class ShoppingOrder {
  @Id
  long id;

  @Reference
  List<Item> items;

  @Reference
  Item specialSingleItem;
}

@Entity
public class Item {
  @Id
  Key purchasedItemKey;

  String name;

  Timestamp timeAddedToOrder;
}

@Reference relationships are between fully-formed entities residing in their own kinds. The relationships between ShoppingOrder and Item entities are stored as Key fields inside ShoppingOrder, which are resolved to the underlying Java entity type by Spring Data Cloud Datastore:

{
  "id" : 12345,
  "specialSingleItem" : Key(item, "milk"),
  "items" : [ Key(item, "eggs"), Key(item, "sausage") ]
}

Reference properties can either be singular or collection-like. These properties correspond to actual columns in the entity and Cloud Datastore Kind that hold the key values of the referenced entities. The referenced entities are full-fledged entities of other Kinds.

Similar to the @Descendants relationships, reading or writing an entity will recursively read or write all of the referenced entities at all levels. If referenced entities have null ID values, then they will be saved as new entities and will have ID values allocated by Cloud Datastore. There are no requirements for relationships between the key of an entity and the keys that entity holds as references. The order of collection-like reference properties is not preserved when reading back from Cloud Datastore.

14.4 Datastore Operations & Template

DatastoreOperations and its implementation, DatastoreTemplate, provide the Template pattern familiar to Spring developers.

Using the auto-configuration provided by Spring Boot Starter for Datastore, your Spring application context will contain a fully configured DatastoreTemplate object that you can autowire in your application:

@SpringBootApplication
public class DatastoreTemplateExample {

	@Autowired
	DatastoreTemplate datastoreTemplate;

	public void doSomething() {
		this.datastoreTemplate.deleteAll(Trader.class);
		//...
		Trader t = new Trader();
		//...
		this.datastoreTemplate.save(t);
		//...
		List<Trader> traders = datastoreTemplate.findAll(Trader.class);
		//...
	}
}

The Template API provides convenience methods for:

  • Write operations (saving and deleting)
  • Read-write transactions

14.4.1 GQL Query

In addition to retrieving entities by their IDs, you can also submit queries.

  <T> Iterable<T> query(Query<? extends BaseEntity> query, Class<T> entityClass);

  <A, T> Iterable<T> query(Query<A> query, Function<A, T> entityFunc);

  Iterable<Key> queryKeys(Query<Key> query);

These methods, respectively, allow querying for:

  • entities mapped by a given entity class, using all the same mapping and converting features
  • arbitrary types produced by a given mapping function
  • only the Cloud Datastore keys of the entities found by the query

14.4.2 Find by ID(s)

Cloud Datastore supports reading a single entity or multiple entities in a kind by their IDs.

Using DatastoreTemplate you can execute reads, for example:

Trader trader = this.datastoreTemplate.findById("trader1", Trader.class);

List<Trader> traders = this.datastoreTemplate.findAllById(ImmutableList.of("trader1", "trader2"), Trader.class);

List<Trader> allTraders = this.datastoreTemplate.findAll(Trader.class);

Cloud Datastore executes key-based reads with strong consistency, but queries with eventual consistency. In the example above the first two reads utilize keys, while the third is executed using a query based on the corresponding Kind of Trader.

Indexes

By default, all fields are indexed. To disable indexing on a particular field, the @Unindexed annotation can be used.

Example:

import org.springframework.cloud.gcp.data.datastore.core.mapping.Unindexed;

public class ExampleItem {
	long indexedField;

	@Unindexed
	long unindexedField;
}

When using queries directly or via Query Methods, Cloud Datastore requires composite custom indexes if the select statement is not SELECT * or if there is more than one filtering condition in the WHERE clause.

Read with offsets, limits, and sorting

DatastoreRepository and custom-defined entity repositories implement the Spring Data PagingAndSortingRepository, which supports offsets and limits using page numbers and page sizes. Paging and sorting options are also supported in DatastoreTemplate by supplying a DatastoreQueryOptions to findAll.
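For example, a sketch of a paged, sorted read through the template, assuming the DatastoreQueryOptions(limit, offset, sort) constructor of this version:

// first 10 traders, sorted by first name
Collection<Trader> traders = this.datastoreTemplate.findAll(Trader.class,
		new DatastoreQueryOptions(10, 0, Sort.by("firstName")));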

Partial read

This feature is not supported yet.

14.4.3 Write / Update

The write methods of DatastoreOperations accept a POJO and write all of its properties to Datastore. The required Datastore kind and entity metadata are obtained from the given object’s actual type.

If a POJO was retrieved from Datastore and its ID value was changed and then written or updated, the operation will occur as if against a row with the new ID value. The entity with the original ID value will not be affected.

Trader t = new Trader();
this.datastoreTemplate.save(t);

The save method behaves as update-or-insert.

Partial Update

This feature is not supported yet.

14.4.4 Transactions

Read and write transactions are provided by DatastoreOperations via the performTransaction method:

@Autowired
DatastoreOperations myDatastoreOperations;

public String doWorkInsideTransaction() {
  return myDatastoreOperations.performTransaction(
    transactionDatastoreOperations -> {
      // Work with transactionDatastoreOperations here.
      // It is also a DatastoreOperations object.

      return "transaction completed";
    }
  );
}

The performTransaction method accepts a Function that is provided an instance of a DatastoreOperations object. The final returned value and type of the function are determined by the user. You can use this object just as you would a regular DatastoreOperations, with one exception:

  • It cannot perform sub-transactions.

Because of Cloud Datastore’s consistency guarantees, there are limitations to the operations and relationships among entities used inside transactions.

Declarative Transactions with @Transactional Annotation

This feature requires a bean of DatastoreTransactionManager, which is provided when using spring-cloud-gcp-starter-data-datastore.

DatastoreTemplate and DatastoreRepository support running methods with the @Transactional annotation as transactions. If a method annotated with @Transactional calls another method also annotated, then both methods will work within the same transaction. performTransaction cannot be used in @Transactional annotated methods because Cloud Datastore does not support transactions within transactions.
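For example, a sketch of a service method that runs as a single Datastore transaction (the service class and its logic are hypothetical):

@Service
public class TraderService {

	@Autowired
	DatastoreTemplate datastoreTemplate;

	@Transactional
	public void renameTrader(String traderId, String newFirstName) {
		// The read and the save below execute in one transaction.
		Trader trader = this.datastoreTemplate.findById(traderId, Trader.class);
		trader.firstName = newFirstName;
		this.datastoreTemplate.save(trader);
	}
}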

14.4.5 Read-Write Support for Maps

You can work with Maps of type Map<String, ?> instead of with entity objects by directly reading and writing them to and from Cloud Datastore.

[Note]Note

This is a different situation than using entity objects that contain Map properties.

The map keys are used as field names for a Datastore entity and map values are converted to Datastore supported types. Only simple types are supported (i.e. collections are not supported). Converters for custom value types can be added (see Section 14.2.6, “Custom types”).

Example:

Map<String, Long> map = new HashMap<>();
map.put("field1", 1L);
map.put("field2", 2L);
map.put("field3", 3L);

Key keyForMap = datastoreTemplate.createKey("kindName", "id");

//write a map
datastoreTemplate.writeMap(keyForMap, map);

//read a map
Map<String, Long> loadedMap = datastoreTemplate.findByIdAsMap(keyForMap, Long.class);

14.5 Repositories

Spring Data Repositories are an abstraction that can reduce boilerplate code.

For example:

public interface TraderRepository extends DatastoreRepository<Trader, String> {
}

Spring Data generates a working implementation of the specified interface, which can be autowired into an application.

The Trader type parameter to DatastoreRepository refers to the underlying domain type. The second type parameter, String in this case, refers to the type of the key of the domain type.

public class MyApplication {

	@Autowired
	TraderRepository traderRepository;

	public void demo() {

		this.traderRepository.deleteAll();
		String traderId = "demo_trader";
		Trader t = new Trader();
		t.traderId = traderId;
		this.traderRepository.save(t);

		Iterable<Trader> allTraders = this.traderRepository.findAll();

		long count = this.traderRepository.count();
	}
}

Repositories allow you to define custom Query Methods (detailed in the following sections) for retrieving, counting, and deleting based on filtering and paging parameters. Filtering parameters can be of types supported by your configured custom converters.

14.5.1 Query methods by convention

public interface TradeRepository extends DatastoreRepository<Trade, String> {
  List<Trade> findByAction(String action);

  int countByAction(String action);

  boolean existsByAction(String action);

  List<Trade> findTop3ByActionAndSymbolAndPriceGreaterThanAndPriceLessThanOrEqualOrderBySymbolDesc(
  			String action, String symbol, double priceFloor, double priceCeiling);

  Page<Trade> findByAction(String action, Pageable pageable);

  Slice<Trade> findBySymbol(String symbol, Pageable pageable);

  List<Trade> findBySymbol(String symbol, Sort sort);
}

In the example above, the query methods in TradeRepository are generated based on the names of the methods, using the Spring Data Query creation naming convention (see https://docs.spring.io/spring-data/data-commons/docs/current/reference/html#repositories.query-methods.query-creation).

Cloud Datastore only supports filter components joined by AND, and the following operations:

  • equals
  • greater than or equals
  • greater than
  • less than or equals
  • less than
  • is null

After writing a custom repository interface specifying just the signatures of these methods, implementations are generated for you and can be used with an auto-wired instance of the repository. Because of Cloud Datastore’s requirement that explicitly selected fields must all appear in a composite index together, find name-based query methods are run as SELECT *.

Delete queries are also supported. For example, query methods such as deleteByAction or removeByAction delete entities found by findByAction. Delete queries are executed as separate read and delete operations instead of as a single transaction because Cloud Datastore cannot query in transactions unless ancestors for queries are specified. As a result, removeBy and deleteBy name-convention query methods cannot be used inside transactions via either performTransaction or the @Transactional annotation.

Delete queries can have the following return types:

  • An integer type that is the number of entities deleted
  • A collection of entities that were deleted
  • void

Methods can have org.springframework.data.domain.Pageable parameter to control pagination and sorting, or org.springframework.data.domain.Sort parameter to control sorting only. See Spring Data documentation for details.

For returning multiple items in a repository method, we support Java collections as well as org.springframework.data.domain.Page and org.springframework.data.domain.Slice. If a method’s return type is org.springframework.data.domain.Page, the returned object will include the current page, the total number of results, and the total number of pages.

[Note]Note

Methods that return Page execute an additional query to compute total number of pages. Methods that return Slice, on the other hand, don’t execute any additional queries and therefore are much more efficient.
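For example, a sketch of walking through results slice by slice, assuming the findBySymbol method from the convention-based example above:

Slice<Trade> slice = this.tradeRepository.findBySymbol("GOOG", PageRequest.of(0, 10));
// process slice.getContent() here
while (slice.hasNext()) {
	slice = this.tradeRepository.findBySymbol("GOOG", slice.nextPageable());
	// process the next slice.getContent() here
}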

14.5.2 Custom GQL query methods

Custom GQL queries can be mapped to repository methods in one of two ways:

  • namedQueries properties file
  • using the @Query annotation

Query methods with annotation

When using the @Query annotation, the names of the GQL parameter tags correspond to the @Param annotated names of the method parameters:

public interface TraderRepository extends DatastoreRepository<Trader, String> {

  @Query("SELECT * FROM traders WHERE name = @trader_name")
  List<Trader> tradersByName(@Param("trader_name") String traderName);

  @Query("SELECT * FROM  test_entities_ci WHERE id = @id_val")
  TestEntity getOneTestEntity(@Param("id_val") long id);
}

The following parameter types are supported:

  • com.google.cloud.Timestamp
  • com.google.cloud.datastore.Blob
  • com.google.cloud.datastore.Key
  • com.google.cloud.datastore.Cursor
  • java.lang.Boolean
  • java.lang.Double
  • java.lang.Long
  • java.lang.String
  • enum values. These are queried as String values.

With the exception of Cursor, array forms of each of the types are also supported.

If you would like to obtain the count of items of a query, or whether any items are returned by the query, set the count = true or exists = true properties of the @Query annotation, respectively. The return type of the query method in these cases should be an integer type or a boolean type.

Cloud Datastore provides the SELECT __key__ FROM … special column for all kinds; it retrieves the Key of each row. Selecting this special __key__ column is especially useful and efficient for count and exists queries.
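For example, a sketch combining the special __key__ column with the count and exists attributes (the method names are hypothetical):

	@Query(value = "SELECT __key__ FROM trades WHERE action = @action", count = true)
	int countTradesByAction(@Param("action") String action);

	@Query(value = "SELECT __key__ FROM trades WHERE action = @action", exists = true)
	boolean tradeExistsByAction(@Param("action") String action);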

You can also query for non-entity types:

	@Query(value = "SELECT __key__ from test_entities_ci")
	List<Key> getKeys();

	@Query(value = "SELECT __key__ from test_entities_ci limit 1")
	Key getKey();

	@Query("SELECT id FROM test_entities_ci WHERE id <= @id_val")
	List<String> getIds(@Param("id_val") long id);

	@Query("SELECT id FROM test_entities_ci WHERE id <= @id_val limit 1")
	String getOneId(@Param("id_val") long id);

SpEL can be used to provide GQL parameters:

@Query("SELECT * FROM |com.example.Trade| WHERE trades.action = @act
  AND price > :#{#priceRadius * -1} AND price < :#{#priceRadius * 2}")
List<Trade> fetchByActionNamedQuery(@Param("act") String action, @Param("priceRadius") Double r);

Kind names can be directly written in the GQL annotations. Kind names can also be resolved from the @Entity annotation on domain classes.

In this case, the query should refer to kind names with fully qualified class names surrounded by | characters: |fully.qualified.ClassName|. This is useful when SpEL expressions appear in the kind name provided to the @Entity annotation. For example:

@Query("SELECT * FROM |com.example.Trade| WHERE trades.action = @act")
List<Trade> fetchByActionNamedQuery(@Param("act") String action);

Query methods with named queries properties

You can also specify queries with Cloud Datastore parameter tags and SpEL expressions in properties files.

By default, the namedQueriesLocation attribute on @EnableDatastoreRepositories points to the META-INF/datastore-named-queries.properties file. You can specify the query for a method in the properties file by providing the GQL as the value for the "interface.method" property:

Trader.fetchByName=SELECT * FROM traders WHERE name = @tag0

public interface TraderRepository extends DatastoreRepository<Trader, String> {

	// This method uses the query from the properties file instead of one generated based on name.
	List<Trader> fetchByName(@Param("tag0") String traderName);

}

14.5.3 Transactions

These transactions work very similarly to those of DatastoreOperations, but are specific to the repository’s domain type and provide repository functions instead of template functions.

For example, this is a read-write transaction:

@Autowired
DatastoreRepository myRepo;

public String doWorkInsideTransaction() {
  return myRepo.performTransaction(
    transactionDatastoreRepo -> {
      // Work with the single-transaction transactionDatastoreRepo here.
      // This is a DatastoreRepository object.

      return "transaction completed";
    }
  );
}

14.5.4 Projections

Spring Data Cloud Datastore supports projections. You can define projection interfaces based on domain types and add query methods that return them in your repository:

public interface TradeProjection {

	String getAction();

	@Value("#{target.symbol + ' ' + target.action}")
	String getSymbolAndAction();
}

public interface TradeRepository extends DatastoreRepository<Trade, Key> {

	List<Trade> findByTraderId(String traderId);

	List<TradeProjection> findByAction(String action);

	@Query("SELECT action, symbol FROM trades WHERE action = @action")
	List<TradeProjection> findByQuery(String action);
}

Projections can be provided by name-convention-based query methods as well as by custom GQL queries. If using custom GQL queries, you can further restrict the fields retrieved from Cloud Datastore to just those required by the projection. However, custom select statements (those not using SELECT *) require composite indexes containing the selected fields.

Properties of projection types defined using SpEL use the fixed name target for the underlying domain object. As a result, accessing underlying properties takes the form target.<property-name>.

14.5.5 REST Repositories

When running with Spring Boot, repositories can be exposed as REST services by simply adding this dependency to your pom file:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-rest</artifactId>
</dependency>

If you prefer to configure parameters (such as path), you can use @RepositoryRestResource annotation:

@RepositoryRestResource(collectionResourceRel = "trades", path = "trades")
public interface TradeRepository extends DatastoreRepository<Trade, String> {
}

For example, you can retrieve all Trade objects in the repository by using curl http://<server>:<port>/trades, or any specific trade via curl http://<server>:<port>/trades/<trade_id>.

You can also write trades using curl -XPOST -H "Content-Type: application/json" -d @test.json http://<server>:<port>/trades/ where the file test.json holds the JSON representation of a Trade object.

To delete trades, you can use curl -XDELETE http://<server>:<port>/trades/<trade_id>.

14.6 Sample

A Simple Spring Boot Application and more advanced Sample Spring Boot Application are provided to show how to use the Spring Data Cloud Datastore starter and template.

15. Cloud Memorystore for Redis

15.1 Spring Caching

Cloud Memorystore for Redis provides a fully managed in-memory data store service. Cloud Memorystore is compatible with the Redis protocol, allowing easy integration with Spring Caching.

All you have to do is create a Cloud Memorystore instance and use its IP address in the application.properties file as the spring.redis.host property value, as shown below. Everything else is exactly the same as setting up Redis-backed Spring caching.
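For example, a sketch of the relevant application.properties entry (the IP address is a placeholder):

spring.redis.host=10.0.0.3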

[Note]Note

Memorystore instances and your application instances have to be located in the same region.

In short, the following dependencies are needed:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Then you can use the org.springframework.cache.annotation.Cacheable annotation for the methods you’d like to be cached.

@Cacheable("cache1")
public String hello(@PathVariable String name) {
    ....
}

If you are interested in a detailed how-to guide, please check Spring Boot Caching using Cloud Memorystore codelab.

Cloud Memorystore documentation can be found here.

16. Cloud Identity-Aware Proxy (IAP) Authentication

Cloud Identity-Aware Proxy (IAP) provides a security layer over applications deployed to Google Cloud.

The IAP starter uses Spring Security OAuth 2.0 Resource Server functionality to automatically extract user identity from the proxy-injected x-goog-iap-jwt-assertion HTTP header.

The following claims are validated automatically:

  • Issue time
  • Expiration time
  • Issuer
  • Audience

The audience ("aud") validation is automatically configured when the application is running on App Engine Standard or App Engine Flexible. For other runtime environments, a custom audience must be provided through the spring.cloud.gcp.security.iap.audience property. The custom property, if specified, overrides the automatic App Engine audience detection.

[Important]Important

There is no automatic audience string configuration for Compute Engine or Kubernetes Engine. To use the IAP starter on GCE/GKE, find the Audience string per instructions in the Verify the JWT payload guide, and specify it in the spring.cloud.gcp.security.iap.audience property. Otherwise, the application will fail to start with No qualifying bean of type 'org.springframework.cloud.gcp.security.iap.AudienceProvider' available message.
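For example, a sketch of the property for a GCE/GKE deployment (the value is a placeholder; obtain the real audience string per the guide above):

spring.cloud.gcp.security.iap.audience=/projects/<project-number>/global/backendServices/<backend-service-id>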

[Note]Note

If you create a custom WebSecurityConfigurerAdapter, enable extracting user identity by adding .oauth2ResourceServer().jwt() configuration to the HttpSecurity object. If no custom WebSecurityConfigurerAdapter is present, nothing needs to be done because Spring Boot will add this customization by default.
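A minimal sketch of such a custom configuration (the class name is hypothetical):

@EnableWebSecurity
public class IapSecurityConfig extends WebSecurityConfigurerAdapter {

	@Override
	protected void configure(HttpSecurity http) throws Exception {
		http.authorizeRequests().anyRequest().authenticated()
				.and()
				// Enables extracting the user identity from the IAP-provided JWT.
				.oauth2ResourceServer().jwt();
	}
}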

Starter Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-security-iap</artifactId>
</dependency>

Starter Gradle coordinates:

dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-security-iap'
}

16.1 Configuration

The following properties are available.

[Caution]Caution

Modifying registry, algorithm, and header properties might be useful for testing, but the defaults should not be changed in production.

Name | Description | Required | Default
spring.cloud.gcp.security.iap.registry | Link to JWK public key registry. | true | https://www.gstatic.com/iap/verify/public_key-jwk
spring.cloud.gcp.security.iap.algorithm | Encryption algorithm used to sign the JWK token. | true | ES256
spring.cloud.gcp.security.iap.header | Header from which to extract the JWK key. | true | x-goog-iap-jwt-assertion
spring.cloud.gcp.security.iap.issuer | JWK issuer to verify. | true | https://cloud.google.com/iap
spring.cloud.gcp.security.iap.audience | Custom JWK audience to verify. | false on App Engine; true on GCE/GKE |

16.2 Sample

A sample application is available.

17. Google Cloud Vision

The Google Cloud Vision API allows users to leverage machine learning algorithms for processing images, including image classification, face detection, text extraction, and others.

Spring Cloud GCP provides:

  • A convenience starter which automatically configures authentication settings and client objects needed to begin using the Google Cloud Vision API.
  • A Cloud Vision Template which simplifies interactions with the Cloud Vision API.

    • Allows you to easily send images to the API as Spring Resources.
    • Offers convenience methods for common operations, such as extracting the text from an image.

Maven coordinates, using Spring Cloud GCP BOM:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-gcp-starter-vision</artifactId>
</dependency>

Gradle coordinates:

dependencies {
  compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-vision'
}

17.1 Cloud Vision Template

The CloudVisionTemplate offers a simple way to use the Cloud Vision APIs with Spring Resources.

After you add the spring-cloud-gcp-starter-vision dependency to your project, you may @Autowire an instance of CloudVisionTemplate to use in your code.

The CloudVisionTemplate offers the following method for interfacing with Cloud Vision:

public AnnotateImageResponse analyzeImage(Resource imageResource, Feature.Type... featureTypes)

Parameters:

  • Resource imageResource refers to the Spring Resource of the image object you wish to analyze. The Google Cloud Vision documentation provides a list of the image types that they support.
  • Feature.Type... featureTypes refers to a var-arg array of Cloud Vision Features to extract from the image. A feature refers to a kind of image analysis you wish to perform on an image, such as label detection, OCR, or face detection. You may specify multiple features to analyze within one request. A full list of Cloud Vision Features is provided in the Cloud Vision Feature docs.

Returns:

  • AnnotateImageResponse contains the results of all the feature analyses that were specified in the request. For each feature type that you provide in the request, AnnotateImageResponse provides a getter method to get the result of that feature analysis. For example, if you analyzed an image using the LABEL_DETECTION feature, you would retrieve the results from the response using annotateImageResponse.getLabelAnnotationsList().

    AnnotateImageResponse is provided by the Google Cloud Vision libraries; please consult the RPC reference or Javadoc for more details. Additionally, you may consult the Cloud Vision docs to familiarize yourself with the concepts and features of the API.

17.2 Detect Image Labels Example

Image labeling refers to producing labels that describe the contents of an image. Below is a code sample of how this is done using the Cloud Vision Spring Template.

@Autowired
private ResourceLoader resourceLoader;

@Autowired
private CloudVisionTemplate cloudVisionTemplate;

public void processImage() {
  Resource imageResource = this.resourceLoader.getResource("my_image.jpg");
  AnnotateImageResponse response = this.cloudVisionTemplate.analyzeImage(
      imageResource, Type.LABEL_DETECTION);
  System.out.println("Image Classification results: " + response.getLabelAnnotationsList());
}
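The template also offers a convenience method for text extraction; the following is a sketch assuming the extractTextFromImage(Resource) method is available in your version of the starter:

String text = this.cloudVisionTemplate.extractTextFromImage(
		this.resourceLoader.getResource("my_image.jpg")); // returns the OCR text of the image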

17.3 Sample

A Sample Spring Boot Application is provided to show how to use the Cloud Vision starter and template.

18. Cloud Foundry

Spring Cloud GCP provides support for Cloud Foundry’s GCP Service Broker. Our Pub/Sub, Cloud Spanner, Storage, Stackdriver Trace, and Cloud SQL MySQL and PostgreSQL starters are Cloud Foundry aware and retrieve properties such as the project ID and credentials, used in auto-configuration, from the Cloud Foundry environment.

In cases like Pub/Sub’s topic and subscription, or Storage’s bucket name, where those parameters are not used in auto configuration, you can fetch them using the VCAP mapping provided by Spring Boot. For example, to retrieve the provisioned Pub/Sub topic, you can use the vcap.services.mypubsub.credentials.topic_name property from the application environment.

[Note]Note

If the same service is bound to the same application more than once, the auto configuration will not be able to choose among bindings and will not be activated for that service. This includes both MySQL and PostgreSQL bindings to the same app.

[Warning]Warning

In order for the Cloud SQL integration to work in Cloud Foundry, auto-reconfiguration must be disabled. You can do so using the cf set-env <APP> JBP_CONFIG_SPRING_AUTO_RECONFIGURATION '{enabled: false}' command. Otherwise, Cloud Foundry will produce a DataSource with an invalid JDBC URL (i.e., jdbc:mysql://null/null).

19. Kotlin Support

The latest version of the Spring Framework provides first-class support for Kotlin. For Kotlin users of Spring, the Spring Cloud GCP libraries work out-of-the-box and are fully interoperable with Kotlin applications.

For more information on building a Spring application in Kotlin, please consult the Spring Kotlin documentation.

19.1 Prerequisites

Ensure that your Kotlin application is properly set up. Based on your build system, you will need to include the correct Kotlin build plugin in your project: kotlin-maven-plugin for Maven or the kotlin-gradle-plugin for Gradle.

Depending on your application’s needs, you may need to augment your build configuration with compiler plugins, such as the kotlin-spring compiler plugin, which automatically opens Spring-annotated classes so they can be proxied; see the sketch below.
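As a sketch, assuming a Maven build, the kotlin-maven-plugin can be configured with the spring compiler plugin like this (the kotlin.version property is assumed to be defined in your pom):

<plugin>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-maven-plugin</artifactId>
    <configuration>
        <compilerPlugins>
            <!-- Opens Spring-annotated classes and their members for proxying. -->
            <plugin>spring</plugin>
        </compilerPlugins>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-maven-allopen</artifactId>
            <version>${kotlin.version}</version>
        </dependency>
    </dependencies>
</plugin>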

Once your Kotlin project is properly configured, the Spring Cloud GCP libraries will work within your application without any additional setup.

19.2 Sample

A Kotlin sample application is provided to demonstrate a working Maven setup and various Spring Cloud GCP integrations from within Kotlin.