The Spring Cloud GCP project makes the Spring Framework a first-class citizen of Google Cloud Platform (GCP).
Spring Cloud GCP lets you leverage the power and simplicity of the Spring Framework when using Google Cloud Platform services.
The Spring Cloud GCP Bill of Materials (BOM) contains the versions of all the dependencies it uses.
If you’re a Maven user, add the following to your pom.xml file so that you don’t need to specify versions for any Spring Cloud GCP dependencies. Instead, the version of the BOM you’re using determines the versions of the dependencies.
```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-gcp-dependencies</artifactId>
            <version>{project-version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```
The following sections assume you are using the Spring Cloud GCP BOM, so the dependency snippets do not contain versions.
Gradle users can achieve the same kind of BOM experience using Spring’s dependency-management-plugin Gradle plugin. For simplicity, the Gradle dependency snippets in the remainder of this document will also omit their versions.
There are many available resources to get you up to speed with our libraries as quickly as possible.
There are three entries in Spring Initializr for Spring Cloud GCP.
The GCP Support entry contains auto-configuration support for every Spring Cloud GCP integration. Most of the autoconfiguration code is only enabled if other dependencies are added to the classpath.
Spring Cloud GCP Starter | Required dependencies |
---|---|
Config | org.springframework.cloud:spring-cloud-gcp-starter-config |
Cloud Spanner | org.springframework.cloud:spring-cloud-gcp-starter-data-spanner |
Cloud Datastore | org.springframework.cloud:spring-cloud-gcp-starter-data-datastore |
Logging | org.springframework.cloud:spring-cloud-gcp-starter-logging |
SQL - MySql | org.springframework.cloud:spring-cloud-gcp-starter-sql-mysql |
SQL - PostgreSQL | org.springframework.cloud:spring-cloud-gcp-starter-sql-postgresql |
Trace | org.springframework.cloud:spring-cloud-gcp-starter-trace |
Vision | org.springframework.cloud:spring-cloud-gcp-starter-vision |
Security - IAP | org.springframework.cloud:spring-cloud-gcp-starter-security-iap |
The GCP Messaging entry adds the GCP Support entry and all the required dependencies so that the Google Cloud Pub/Sub integrations work out of the box.
There are code samples available that demonstrate the usage of all our integrations.
For example, the Vision API sample shows how to use spring-cloud-gcp-starter-vision
to automatically configure Vision API clients.
In a code challenge, you perform a task step by step, using one integration. There are a number of challenges available on the Google Developers Codelabs page.
A Spring Getting Started guide on messaging with Spring Integration Channel Adapters for Google Cloud Pub/Sub is available from Spring Guides.
Each Spring Cloud GCP module uses `GcpProjectIdProvider` and `CredentialsProvider` to get the GCP project ID and access credentials.
Spring Cloud GCP provides a Spring Boot starter to auto-configure the core components.
Maven coordinates, using Spring Cloud GCP BOM:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter</artifactId>
</dependency>
```
Gradle coordinates:
```groovy
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter'
}
```
`GcpProjectIdProvider` is a functional interface that returns a GCP project ID string.

```java
public interface GcpProjectIdProvider {
    String getProjectId();
}
```
The Spring Cloud GCP starter auto-configures a `GcpProjectIdProvider`. If a `spring.cloud.gcp.project-id` property is specified, the provided `GcpProjectIdProvider` returns that property value.

```properties
spring.cloud.gcp.project-id=my-gcp-project-id
```
Otherwise, the project ID is discovered based on an ordered list of rules:

1. The project ID specified by the `GOOGLE_CLOUD_PROJECT` environment variable
2. The project ID from the credentials file pointed to by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable

`CredentialsProvider` is a functional interface that returns the credentials to authenticate and authorize calls to Google Cloud Client Libraries.
```java
public interface CredentialsProvider {
    Credentials getCredentials() throws IOException;
}
```
The Spring Cloud GCP starter auto-configures a `CredentialsProvider`. It uses the `spring.cloud.gcp.credentials.location` property to locate the OAuth2 private key of a Google service account. Keep in mind this property is a Spring Resource, so the credentials file can be obtained from a number of different locations, such as the file system, classpath, URL, etc. The next example specifies the credentials location property in the file system.

```properties
spring.cloud.gcp.credentials.location=file:/usr/local/key.json
```
Alternatively, you can set the credentials by directly specifying the `spring.cloud.gcp.credentials.encoded-key` property. The value should be the base64-encoded account private key in JSON format.
If credentials aren’t specified through properties, the starter tries to discover credentials from a number of places:

1. Credentials file pointed to by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable
2. Credentials provided by the `gcloud auth application-default login` command

If your app is running on Google App Engine or Google Compute Engine, in most cases you should omit the `spring.cloud.gcp.credentials.location` property and, instead, let the Spring Cloud GCP Starter get the correct credentials for those environments. On App Engine Standard, the App Identity service account credentials are used; on App Engine Flexible, the Flexible service account credentials are used; and on Google Compute Engine, the Compute Engine Default Service Account is used.
By default, the credentials provided by the Spring Cloud GCP Starter contain scopes for every service supported by Spring Cloud GCP.
Service | Scope |
---|---|
Spanner | https://www.googleapis.com/auth/spanner.admin, https://www.googleapis.com/auth/spanner.data |
Datastore | https://www.googleapis.com/auth/datastore |
Pub/Sub | https://www.googleapis.com/auth/pubsub |
Storage (Read Only) | https://www.googleapis.com/auth/devstorage.read_only |
Storage (Read/Write) | https://www.googleapis.com/auth/devstorage.read_write |
Runtime Config | https://www.googleapis.com/auth/cloudruntimeconfig |
Trace (Append) | https://www.googleapis.com/auth/trace.append |
Cloud Platform | https://www.googleapis.com/auth/cloud-platform |
Vision | https://www.googleapis.com/auth/cloud-vision |
The Spring Cloud GCP starter allows you to configure a custom scope list for the provided credentials. To do that, set the `spring.cloud.gcp.credentials.scopes` property to a comma-delimited list of Google OAuth2 scopes for Google Cloud Platform services that the credentials returned by the provided `CredentialsProvider` should support.

```properties
spring.cloud.gcp.credentials.scopes=https://www.googleapis.com/auth/pubsub,https://www.googleapis.com/auth/sqlservice.admin
```
You can also use the `DEFAULT_SCOPES` placeholder as a scope to represent the starter’s default scopes, and append the additional scopes you need to add.

```properties
spring.cloud.gcp.credentials.scopes=DEFAULT_SCOPES,https://www.googleapis.com/auth/cloud-vision
```
`GcpEnvironmentProvider` is a functional interface, auto-configured by the Spring Cloud GCP starter, that returns a `GcpEnvironment` enum value. The provider can help determine programmatically in which GCP environment (App Engine Flexible, App Engine Standard, Kubernetes Engine or Compute Engine) the application is deployed.

```java
public interface GcpEnvironmentProvider {
    GcpEnvironment getCurrentEnvironment();
}
```
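For instance, a component could branch on the detected environment at startup. The following is a minimal sketch; `EnvironmentAwareSetup` is a hypothetical class name, and the enum constant names assume the documented environments:

```java
// Sketch: inspect the auto-configured GcpEnvironmentProvider at startup.
// EnvironmentAwareSetup is a hypothetical component name.
@Component
public class EnvironmentAwareSetup {

    public EnvironmentAwareSetup(GcpEnvironmentProvider environmentProvider) {
        GcpEnvironment environment = environmentProvider.getCurrentEnvironment();
        if (environment == GcpEnvironment.COMPUTE_ENGINE) {
            // apply Compute Engine specific setup here
        }
    }
}
```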
This starter is available from Spring Initializr through the GCP Support entry.
Spring Cloud GCP provides an abstraction layer to publish to and subscribe from Google Cloud Pub/Sub topics and to create, list or delete Google Cloud Pub/Sub topics and subscriptions.
A Spring Boot starter is provided to auto-configure the various required Pub/Sub components.
Maven coordinates, using Spring Cloud GCP BOM:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-pubsub</artifactId>
</dependency>
```
Gradle coordinates:
```groovy
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-pubsub'
}
```
This starter is also available from Spring Initializr through the GCP Messaging entry.
`PubSubOperations` is an abstraction that allows Spring users to use Google Cloud Pub/Sub without depending on any Google Cloud Pub/Sub API semantics. It provides the common set of operations needed to interact with Google Cloud Pub/Sub.
`PubSubTemplate` is the default implementation of `PubSubOperations` and it uses the Google Cloud Java Client for Pub/Sub to interact with Google Cloud Pub/Sub.

`PubSubTemplate` depends on a `PublisherFactory` and a `SubscriberFactory`. The `PublisherFactory` provides a Google Cloud Java Client for Pub/Sub `Publisher`. The `SubscriberFactory` provides the `Subscriber` for asynchronous message pulling, as well as a `SubscriberStub` for synchronous pulling.
The Spring Boot starter for GCP Pub/Sub auto-configures a `PublisherFactory` and `SubscriberFactory` with default settings and uses the `GcpProjectIdProvider` and `CredentialsProvider` auto-configured by the Spring Boot GCP starter.

The `PublisherFactory` implementation provided by Spring Cloud GCP Pub/Sub, `DefaultPublisherFactory`, caches `Publisher` instances by topic name, in order to optimize resource utilization.
The `PubSubOperations` interface is actually a combination of `PubSubPublisherOperations` and `PubSubSubscriberOperations`, with the corresponding `PubSubPublisherTemplate` and `PubSubSubscriberTemplate` implementations, which can be used individually or via the composite `PubSubTemplate`. The rest of the documentation refers to `PubSubTemplate`, but the same applies to `PubSubPublisherTemplate` and `PubSubSubscriberTemplate`, depending on whether we’re talking about publishing or subscribing.
`PubSubTemplate` provides asynchronous methods to publish messages to a Google Cloud Pub/Sub topic. The `publish()` method takes in a topic name to post the message to, a payload of a generic type and, optionally, a map with the message headers.
Here is an example of how to publish a message to a Google Cloud Pub/Sub topic:
```java
public void publishMessage() {
    this.pubSubTemplate.publish("topic", "your message payload", ImmutableMap.of("key1", "val1"));
}
```
By default, the `SimplePubSubMessageConverter` is used to convert payloads of type `byte[]`, `ByteString`, `ByteBuffer`, and `String` to Pub/Sub messages. For serialization and deserialization of POJOs using Jackson JSON, configure a `JacksonPubSubMessageConverter` bean, and the Spring Boot starter for GCP Pub/Sub will automatically wire it into the `PubSubTemplate`.
```java
// Note: The ObjectMapper is used to convert Java POJOs to and from JSON.
// You will have to configure your own instance if you are unable to depend
// on the ObjectMapper provided by Spring Boot starters.
@Bean
public JacksonPubSubMessageConverter jacksonPubSubMessageConverter(ObjectMapper objectMapper) {
    return new JacksonPubSubMessageConverter(objectMapper);
}
```
Alternatively, you can set it directly by calling the `setMessageConverter()` method on the `PubSubTemplate`. Other implementations of `PubSubMessageConverter` can also be configured in the same manner.
Please refer to our Pub/Sub JSON Payload Sample App as a reference for using this functionality.
Google Cloud Pub/Sub allows many subscriptions to be associated with the same topic.

`PubSubTemplate` allows you to listen to subscriptions via the `subscribe()` method. It relies on a `SubscriberFactory` object, whose only task is to generate Google Cloud Pub/Sub `Subscriber` objects. When listening to a subscription, messages are pulled from Google Cloud Pub/Sub asynchronously, at a certain interval.

The Spring Boot starter for Google Cloud Pub/Sub auto-configures a `SubscriberFactory`.
If Pub/Sub message payload conversion is desired, you can use the `subscribeAndConvert()` method, which will use the converter configured in the template.
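As an illustration, a converted subscription might look like the following sketch. The subscription name and the `Person` class are hypothetical, the exact callback type can vary between versions, and a `PubSubMessageConverter` that supports `Person` (such as the Jackson converter above) must be configured:

```java
// Sketch: listen to a subscription and convert payloads to a POJO.
// "mySubscription" and Person are hypothetical names.
pubSubTemplate.subscribeAndConvert("mySubscription", (message) -> {
    Person person = message.getPayload();
    // process the converted payload ...
    message.ack();
}, Person.class);
```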
Google Cloud Pub/Sub supports synchronous pulling of messages from a subscription. This is different from subscribing to a subscription, in the sense that subscribing is an asynchronous task which polls the subscription on a set interval.
The `pullNext()` method allows for a single message to be pulled and automatically acknowledged from a subscription. The `pull()` method pulls a number of messages from a subscription, allowing for the retry settings to be configured. Any messages received by `pull()` are not automatically acknowledged. Instead, since they are of type `AcknowledgeablePubsubMessage`, you can acknowledge them by calling the `ack()` method, or negatively acknowledge them by calling the `nack()` method.
The `pullAndAck()` method does the same as the `pull()` method and, additionally, acknowledges all received messages. The `pullAndConvert()` method does the same as the `pull()` method and, additionally, converts the Pub/Sub binary payload to an object of the desired type, using the converter configured in the template.
To acknowledge multiple messages received from `pull()` or `pullAndConvert()` at once, you can use the `PubSubTemplate.ack()` method. You can also use `PubSubTemplate.nack()` for negatively acknowledging messages. Acknowledging messages in batches with these methods is more efficient than acknowledging them individually, but requires the collection of messages to be from the same project.
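Putting this together, a synchronous pull followed by a batch acknowledgement could look like the following sketch; the subscription name and message limit are placeholder values:

```java
// Sketch: pull up to 10 messages synchronously and ack them in one batch.
// "mySubscription" is a placeholder; the third argument controls whether
// the call returns immediately when no messages are available.
List<AcknowledgeablePubsubMessage> messages =
    pubSubTemplate.pull("mySubscription", 10, true);

// process the message payloads ...

pubSubTemplate.ack(messages);
```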
All `ack()`, `nack()`, and `modifyAckDeadline()` methods, on messages as well as on `PubSubSubscriberTemplate`, are implemented asynchronously and return a `ListenableFuture<Void>` so that you can process the result of the asynchronous execution.
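For example, the returned future can be observed with a callback; this sketch assumes `message` is an `AcknowledgeablePubsubMessage` previously obtained from `pull()`:

```java
// Sketch: react to the completion of an asynchronous ack.
// Assumes "message" is an AcknowledgeablePubsubMessage from pull().
ListenableFuture<Void> ackFuture = message.ack();
ackFuture.addCallback(
    (Void result) -> LOGGER.info("Message acked"),
    (Throwable throwable) -> LOGGER.warn("Ack failed", throwable));
```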
`PubSubTemplate` uses a special subscriber generated by its `SubscriberFactory` to synchronously pull messages.
`PubSubAdmin` is the abstraction provided by Spring Cloud GCP to manage Google Cloud Pub/Sub resources. It allows for the creation, deletion and listing of topics and subscriptions.
`PubSubAdmin` depends on `GcpProjectIdProvider` and either a `CredentialsProvider` or a `TopicAdminClient` and a `SubscriptionAdminClient`. If given a `CredentialsProvider`, it creates a `TopicAdminClient` and a `SubscriptionAdminClient` with the Google Cloud Java Library for Pub/Sub default settings.
The Spring Boot starter for GCP Pub/Sub auto-configures a `PubSubAdmin` object using the `GcpProjectIdProvider` and the `CredentialsProvider` auto-configured by the Spring Boot GCP Core starter.
`PubSubAdmin` implements a method to create topics:

```java
public Topic createTopic(String topicName)
```
Here is an example of how to create a Google Cloud Pub/Sub topic:
```java
public void newTopic() {
    pubSubAdmin.createTopic("topicName");
}
```
`PubSubAdmin` implements a method to delete topics:

```java
public void deleteTopic(String topicName)
```
Here is an example of how to delete a Google Cloud Pub/Sub topic:
```java
public void deleteTopic() {
    pubSubAdmin.deleteTopic("topicName");
}
```
`PubSubAdmin` implements a method to list topics:

```java
public List<Topic> listTopics()
```
Here is an example of how to list every Google Cloud Pub/Sub topic name in a project:
```java
public List<String> listTopics() {
    return pubSubAdmin
        .listTopics()
        .stream()
        .map(Topic::getNameAsTopicName)
        .map(TopicName::getTopic)
        .collect(Collectors.toList());
}
```
`PubSubAdmin` implements a method to create subscriptions to existing topics:

```java
public Subscription createSubscription(String subscriptionName, String topicName, Integer ackDeadline, String pushEndpoint)
```
Here is an example of how to create a Google Cloud Pub/Sub subscription:
```java
public void newSubscription() {
    pubSubAdmin.createSubscription("subscriptionName", "topicName", 10, "http://my.endpoint/push");
}
```
Alternative methods with default settings are provided for ease of use. The default value for `ackDeadline` is 10 seconds. If `pushEndpoint` isn’t specified, the subscription uses message pulling instead.

```java
public Subscription createSubscription(String subscriptionName, String topicName)
public Subscription createSubscription(String subscriptionName, String topicName, Integer ackDeadline)
public Subscription createSubscription(String subscriptionName, String topicName, String pushEndpoint)
```
`PubSubAdmin` implements a method to delete subscriptions:

```java
public void deleteSubscription(String subscriptionName)
```
Here is an example of how to delete a Google Cloud Pub/Sub subscription:
```java
public void deleteSubscription() {
    pubSubAdmin.deleteSubscription("subscriptionName");
}
```
`PubSubAdmin` implements a method to list subscriptions:

```java
public List<Subscription> listSubscriptions()
```
Here is an example of how to list every subscription name in a project:
```java
public List<String> listSubscriptions() {
    return pubSubAdmin
        .listSubscriptions()
        .stream()
        .map(Subscription::getNameAsSubscriptionName)
        .map(SubscriptionName::getSubscription)
        .collect(Collectors.toList());
}
```
The Spring Boot starter for Google Cloud Pub/Sub provides the following configuration options:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.gcp.pubsub.enabled | Enables or disables Pub/Sub auto-configuration | No | true |
spring.cloud.gcp.pubsub.subscriber.executor-threads | Number of threads used by the subscriber executor | No | 4 |
spring.cloud.gcp.pubsub.publisher.executor-threads | Number of threads used by the publisher executor | No | 4 |
spring.cloud.gcp.pubsub.project-id | GCP project ID where the Google Cloud Pub/Sub API is hosted, if different from the one in the Spring Cloud GCP Core Module | No | |
spring.cloud.gcp.pubsub.credentials.location | OAuth2 credentials for authenticating with the Google Cloud Pub/Sub API, if different from the ones in the Spring Cloud GCP Core Module | No | |
spring.cloud.gcp.pubsub.credentials.encoded-key | Base64-encoded contents of OAuth2 account private key for authenticating with the Google Cloud Pub/Sub API, if different from the ones in the Spring Cloud GCP Core Module | No | |
spring.cloud.gcp.pubsub.credentials.scopes | OAuth2 scope for Spring Cloud GCP Pub/Sub credentials | No | |
spring.cloud.gcp.pubsub.subscriber.parallel-pull-count | The number of pull workers | No | The available number of processors |
spring.cloud.gcp.pubsub.subscriber.max-ack-extension-period | The maximum period a message ack deadline will be extended, in seconds | No | 0 |
spring.cloud.gcp.pubsub.subscriber.pull-endpoint | The endpoint for synchronous pulling of messages | No | pubsub.googleapis.com:443 |
spring.cloud.gcp.pubsub.[subscriber,publisher].retry.total-timeout-seconds | TotalTimeout has ultimate control over how long the logic should keep trying the remote call until it gives up completely. The higher the total timeout, the more retries can be attempted. | No | 0 |
spring.cloud.gcp.pubsub.[subscriber,publisher].retry.initial-retry-delay-seconds | InitialRetryDelay controls the delay before the first retry. Subsequent retries will use this value adjusted according to the RetryDelayMultiplier. | No | 0 |
spring.cloud.gcp.pubsub.[subscriber,publisher].retry.retry-delay-multiplier | RetryDelayMultiplier controls the change in retry delay. The retry delay of the previous call is multiplied by the RetryDelayMultiplier to calculate the retry delay for the next call. | No | 1 |
spring.cloud.gcp.pubsub.[subscriber,publisher].retry.max-retry-delay-seconds | MaxRetryDelay puts a limit on the value of the retry delay, so that the RetryDelayMultiplier can’t increase the retry delay higher than this amount. | No | 0 |
spring.cloud.gcp.pubsub.[subscriber,publisher].retry.max-attempts | MaxAttempts defines the maximum number of attempts to perform. If this value is greater than 0, and the number of attempts reaches this limit, the logic will give up retrying even if the total retry time is still lower than TotalTimeout. | No | 0 |
spring.cloud.gcp.pubsub.[subscriber,publisher].retry.jittered | Jitter determines if the delay time should be randomized. | No | true |
spring.cloud.gcp.pubsub.[subscriber,publisher].retry.initial-rpc-timeout-seconds | InitialRpcTimeout controls the timeout for the initial RPC. Subsequent calls will use this value adjusted according to the RpcTimeoutMultiplier. | No | 0 |
spring.cloud.gcp.pubsub.[subscriber,publisher].retry.rpc-timeout-multiplier | RpcTimeoutMultiplier controls the change in RPC timeout. The timeout of the previous call is multiplied by the RpcTimeoutMultiplier to calculate the timeout for the next call. | No | 1 |
spring.cloud.gcp.pubsub.[subscriber,publisher].retry.max-rpc-timeout-seconds | MaxRpcTimeout puts a limit on the value of the RPC timeout, so that the RpcTimeoutMultiplier can’t increase the RPC timeout higher than this amount. | No | 0 |
spring.cloud.gcp.pubsub.[subscriber,publisher.batching].flow-control.max-outstanding-element-count | Maximum number of outstanding elements to keep in memory before enforcing flow control. | No | unlimited |
spring.cloud.gcp.pubsub.[subscriber,publisher.batching].flow-control.max-outstanding-request-bytes | Maximum number of outstanding bytes to keep in memory before enforcing flow control. | No | unlimited |
spring.cloud.gcp.pubsub.[subscriber,publisher.batching].flow-control.limit-exceeded-behavior | The behavior when the specified limits are exceeded. | No | Block |
spring.cloud.gcp.pubsub.publisher.batching.element-count-threshold | The element count threshold to use for batching. | No | unset (threshold does not apply) |
spring.cloud.gcp.pubsub.publisher.batching.request-byte-threshold | The request byte threshold to use for batching. | No | unset (threshold does not apply) |
spring.cloud.gcp.pubsub.publisher.batching.delay-threshold-seconds | The delay threshold to use for batching. After this amount of time has elapsed (counting from the first element added), the elements will be wrapped up in a batch and sent. | No | unset (threshold does not apply) |
spring.cloud.gcp.pubsub.publisher.batching.enabled | Enables batching. | No | false |
A sample application is available.
Spring Resources are an abstraction for a number of low-level resources, such as file system files, classpath files, servlet context-relative files, etc. Spring Cloud GCP adds a new resource type: a Google Cloud Storage (GCS) object.
A Spring Boot starter is provided to auto-configure the various Storage components.
Maven coordinates, using Spring Cloud GCP BOM:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-storage</artifactId>
</dependency>
```
Gradle coordinates:
```groovy
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-storage'
}
```
This starter is also available from Spring Initializr through the GCP Storage entry.
The Spring Resource Abstraction for Google Cloud Storage allows GCS objects to be accessed by their GCS URL using the `@Value` annotation:

```java
@Value("gs://[YOUR_GCS_BUCKET]/[GCS_FILE_NAME]")
private Resource gcsResource;
```
…or the Spring application context:

```java
SpringApplication.run(...).getResource("gs://[YOUR_GCS_BUCKET]/[GCS_FILE_NAME]");
```
This creates a `Resource` object that can be used to read the object, among other possible operations. It is also possible to write to a `Resource`, although a `WritableResource` is required.

```java
@Value("gs://[YOUR_GCS_BUCKET]/[GCS_FILE_NAME]")
private Resource gcsResource;

...

try (OutputStream os = ((WritableResource) gcsResource).getOutputStream()) {
    os.write("foo".getBytes());
}
```
To work with the `Resource` as a Google Cloud Storage resource, cast it to `GoogleStorageResource`. If the resource path refers to an object on Google Cloud Storage (as opposed to a bucket), then the `getBlob` method can be called to obtain a `Blob`. This type represents a GCS file, which has associated metadata, such as content-type, that can be set.
The `createSignedUrl` method can also be used to obtain signed URLs for GCS objects. However, creating signed URLs requires that the resource was created using service account credentials.
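As a sketch, assuming the resource was created with service account credentials, a signed URL could be obtained as follows; the one-day expiration is an arbitrary example value:

```java
// Sketch: cast the Resource to GoogleStorageResource to access GCS-specific
// operations, then create a signed URL with an example one-day expiration.
GoogleStorageResource storageResource = (GoogleStorageResource) gcsResource;
URL signedUrl = storageResource.createSignedUrl(TimeUnit.DAYS, 1);
```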
The Spring Boot Starter for Google Cloud Storage auto-configures the `Storage` bean required by the `spring-cloud-gcp-storage` module, based on the `CredentialsProvider` provided by the Spring Boot GCP starter.
The Spring Boot Starter for Google Cloud Storage provides the following configuration options:
Name | Description | Required | Default value |
---|---|---|---|
spring.cloud.gcp.storage.enabled | Enables the GCP Storage APIs. | No | true |
spring.cloud.gcp.storage.auto-create-files | Creates files and buckets on Google Cloud Storage when writes are made to non-existent files | No | true |
spring.cloud.gcp.storage.credentials.location | OAuth2 credentials for authenticating with the Google Cloud Storage API, if different from the ones in the Spring Cloud GCP Core Module | No | |
spring.cloud.gcp.storage.credentials.encoded-key | Base64-encoded contents of OAuth2 account private key for authenticating with the Google Cloud Storage API, if different from the ones in the Spring Cloud GCP Core Module | No | |
spring.cloud.gcp.storage.credentials.scopes | OAuth2 scope for Spring Cloud GCP Storage credentials | No | https://www.googleapis.com/auth/devstorage.read_write |
A sample application and a codelab are available.
Spring Cloud GCP adds integrations with Spring JDBC so you can run your MySQL or PostgreSQL databases in Google Cloud SQL using Spring JDBC, or other libraries that depend on it like Spring Data JPA.
The Cloud SQL support is provided by Spring Cloud GCP in the form of two Spring Boot starters, one for MySQL and another one for PostgreSQL. The role of the starters is to read configuration from properties and assume default settings, so that the user experience of connecting to MySQL and PostgreSQL is as simple as possible.
Maven coordinates, using Spring Cloud GCP BOM:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
</dependency>
```
Gradle coordinates:
```groovy
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-sql-mysql'
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-sql-postgresql'
}
```
In order to use the Spring Boot Starters for Google Cloud SQL, the Google Cloud SQL API must be enabled in your GCP project.
To do that, go to the API library page of the Google Cloud Console, search for "Cloud SQL API", click the first result and enable the API.
Note: there are several similar "Cloud SQL" results. You must access the "Google Cloud SQL API" one and enable the API from there.
The Spring Boot Starters for Google Cloud SQL provide an auto-configured `DataSource` object. Coupled with Spring JDBC, it provides a `JdbcTemplate` bean that allows for operations such as querying and modifying a database.

```java
public List<Map<String, Object>> listUsers() {
    return jdbcTemplate.queryForList("SELECT * FROM user;");
}
```
You can rely on Spring Boot data source auto-configuration to configure a `DataSource` bean. In other words, properties like the SQL username, `spring.datasource.username`, and password, `spring.datasource.password`, can be used.
There is also some configuration specific to Google Cloud SQL:
Property name | Description | Default value | Unused if specified property(ies) |
---|---|---|---|
spring.cloud.gcp.sql.enabled | Enables or disables Cloud SQL auto-configuration | true | |
spring.cloud.gcp.sql.database-name | Name of the database to connect to. | | |
spring.cloud.gcp.sql.instance-connection-name | A string containing a Google Cloud SQL instance’s project ID, region and name, each separated by a colon. For example, my-project-id:my-region:my-instance-name. | | |
spring.cloud.gcp.sql.credentials.location | File system path to the Google OAuth2 credentials private key file. Used to authenticate and authorize new connections to a Google Cloud SQL instance. | Default credentials provided by the Spring GCP Boot starter | |
spring.cloud.gcp.sql.credentials.encoded-key | Base64-encoded contents of OAuth2 account private key in JSON format. Used to authenticate and authorize new connections to a Google Cloud SQL instance. | Default credentials provided by the Spring GCP Boot starter | |
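Putting the properties together, a minimal Cloud SQL configuration might look like the following; the instance connection name, database name, and credentials are placeholder values:

```properties
# Placeholder values: replace with your instance's project:region:name
spring.cloud.gcp.sql.instance-connection-name=my-project-id:my-region:my-instance-name
spring.cloud.gcp.sql.database-name=my-database
spring.datasource.username=root
spring.datasource.password=password
```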
Based on the previous properties, the Spring Boot starter for Google Cloud SQL creates a `CloudSqlJdbcInfoProvider` object, which is used to obtain an instance’s JDBC URL and driver class name. If you provide your own `CloudSqlJdbcInfoProvider` bean, it is used instead, and the properties related to building the JDBC URL or driver class are ignored.

The `DataSourceProperties` object provided by Spring Boot Autoconfigure is mutated in order to use the JDBC URL and driver class name provided by `CloudSqlJdbcInfoProvider`, unless those values were provided in the properties. It is in the `DataSourceProperties` mutation step that the credentials factory is registered in a system property to be `SqlCredentialFactory`.
`DataSource` creation is delegated to Spring Boot. You can select the type of connection pool (e.g., Tomcat, HikariCP, etc.) by adding its dependency to the classpath. Using the created `DataSource` in conjunction with Spring JDBC provides you with a fully configured and operational `JdbcTemplate` object that you can use to interact with your SQL database. You can connect to your database with as little as the database and instance names.
If you’re not able to connect to a database and see an endless loop of `Connecting to Cloud SQL instance [...] on IP [...]`, it’s likely that exceptions are being thrown and logged at a level lower than your logger’s level. This may be the case with HikariCP if your logger is set to INFO or a higher level. To see what’s going on in the background, add a `logback.xml` file to your application resources folder that looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/base.xml"/>
  <logger name="com.zaxxer.hikari.pool" level="DEBUG"/>
</configuration>
```
If you see a lot of errors like this in a loop and can’t connect to your database, it’s usually a symptom that something isn’t right with the permissions of your credentials, or that the Google Cloud SQL API is not enabled. Verify that the Google Cloud SQL API is enabled in the Cloud Console and that your service account has the necessary IAM roles.
To find out what’s causing the issue, you can enable DEBUG logging level as mentioned above.
We found this exception to be common if your Maven project’s parent is `spring-boot` version `1.5.x`, or in any other circumstance that would cause the version of the `org.postgresql:postgresql` dependency to be an older one (e.g., `9.4.1212.jre7`).

To fix this, re-declare the dependency in its correct version. For example, in Maven:

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.1.1</version>
</dependency>
```
Available sample applications and codelabs:
Spring Cloud GCP provides Spring Integration adapters that allow your applications to use Enterprise Integration Patterns backed up by Google Cloud Platform services.
The channel adapters for Google Cloud Pub/Sub connect your Spring `MessageChannels` to Google Cloud Pub/Sub topics and subscriptions.
This enables messaging between different processes, applications or micro-services backed up by Google Cloud Pub/Sub.
The Spring Integration Channel Adapters for Google Cloud Pub/Sub are included in the `spring-cloud-gcp-pubsub` module.
Maven coordinates, using Spring Cloud GCP BOM:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-pubsub</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-core</artifactId>
</dependency>
```
Gradle coordinates:
```groovy
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-pubsub'
    compile group: 'org.springframework.integration', name: 'spring-integration-core'
}
```
`PubSubInboundChannelAdapter` is the inbound channel adapter for GCP Pub/Sub that listens to a GCP Pub/Sub subscription for new messages. It converts each new message to an internal Spring `Message` and then sends it to the bound output channel.
Google Pub/Sub treats message payloads as byte arrays. So, by default, the inbound channel adapter constructs the Spring `Message` with `byte[]` as the payload. However, you can change the desired payload type by setting the `payloadType` property of the `PubSubInboundChannelAdapter`. The `PubSubInboundChannelAdapter` delegates the conversion to the desired payload type to the `PubSubMessageConverter` configured in the `PubSubTemplate`.
To use the inbound channel adapter, a `PubSubInboundChannelAdapter` must be provided and configured on the user application side.
```java
@Bean
public MessageChannel pubsubInputChannel() {
    return new PublishSubscribeChannel();
}

@Bean
public PubSubInboundChannelAdapter messageChannelAdapter(
        @Qualifier("pubsubInputChannel") MessageChannel inputChannel,
        SubscriberFactory subscriberFactory) {
    PubSubInboundChannelAdapter adapter =
        new PubSubInboundChannelAdapter(subscriberFactory, "subscriptionName");
    adapter.setOutputChannel(inputChannel);
    adapter.setAckMode(AckMode.MANUAL);
    return adapter;
}
```
In the example, we first specify the `MessageChannel` where the adapter is going to write incoming messages to. The `MessageChannel` implementation isn’t important here. Depending on your use case, you might want to use a `MessageChannel` other than `PublishSubscribeChannel`.
Then, we declare a `PubSubInboundChannelAdapter` bean. It requires the channel we just created and a `SubscriberFactory`, which creates `Subscriber` objects from the Google Cloud Java Client for Pub/Sub. The Spring Boot starter for GCP Pub/Sub provides a configured `SubscriberFactory`.
The `PubSubInboundChannelAdapter` supports three acknowledgement modes, with `AckMode.AUTO` being the default value:
Automatic acking (`AckMode.AUTO`)

A message is acked with GCP Pub/Sub if the adapter sent it to the channel and no exceptions were thrown. If a `RuntimeException` is thrown while the message is processed, then the message is nacked.
Automatic acking OK (`AckMode.AUTO_ACK`)

A message is acked with GCP Pub/Sub if the adapter sent it to the channel and no exceptions were thrown. If a `RuntimeException` is thrown while the message is processed, then the message is neither acked nor nacked. This is useful when using the subscription’s ack deadline timeout as a retry delivery backoff mechanism.
Manually acking (`AckMode.MANUAL`)

The adapter attaches a `BasicAcknowledgeablePubsubMessage` object to the `Message` headers. Users can extract the `BasicAcknowledgeablePubsubMessage` using the `GcpPubSubHeaders.ORIGINAL_MESSAGE` key and use it to (n)ack a message.
@Bean
@ServiceActivator(inputChannel = "pubsubInputChannel")
public MessageHandler messageReceiver() {
    return message -> {
        LOGGER.info("Message arrived! Payload: " + new String((byte[]) message.getPayload()));
        BasicAcknowledgeablePubsubMessage originalMessage = message.getHeaders()
                .get(GcpPubSubHeaders.ORIGINAL_MESSAGE, BasicAcknowledgeablePubsubMessage.class);
        originalMessage.ack();
    };
}
PubSubMessageHandler is the outbound channel adapter for GCP Pub/Sub. It listens for new messages on a Spring MessageChannel and uses PubSubTemplate to post them to a GCP Pub/Sub topic.
To construct a Pub/Sub representation of the message, the outbound channel adapter needs to convert the Spring Message payload to the byte array representation expected by Pub/Sub. It delegates this conversion to the PubSubTemplate. To customize the conversion, you can specify a PubSubMessageConverter in the PubSubTemplate that converts the Object payload and headers of the Spring Message to a PubsubMessage.
To use the outbound channel adapter, a PubSubMessageHandler bean must be provided and configured on the user application side.
@Bean
@ServiceActivator(inputChannel = "pubsubOutputChannel")
public MessageHandler messageSender(PubSubTemplate pubsubTemplate) {
    return new PubSubMessageHandler(pubsubTemplate, "topicName");
}
The provided PubSubTemplate contains all the necessary configuration to publish messages to a GCP Pub/Sub topic. PubSubMessageHandler publishes messages asynchronously by default. A publish timeout can be configured for synchronous publishing; if none is provided, the adapter waits indefinitely for a response.
It is possible to set user-defined callbacks for the publish() call in PubSubMessageHandler through the setPublishFutureCallback() method. These are useful to process the message ID in case of success, or the error if any was thrown.
To override the default destination, you can use the GcpPubSubHeaders.DESTINATION header.
@Autowired
private MessageChannel pubsubOutputChannel;

public void handleMessage(Message<?> msg) throws MessagingException {
    final Message<?> message = MessageBuilder
            .withPayload(msg.getPayload())
            .setHeader(GcpPubSubHeaders.TOPIC, "customTopic").build();
    pubsubOutputChannel.send(message);
}
It is also possible to set a SpEL expression for the topic with the setTopicExpression() or setTopicExpressionString() methods.
These channel adapters contain header mappers that allow you to map, or filter out, headers from Spring to Google Cloud Pub/Sub messages, and vice-versa.
By default, the inbound channel adapter maps every header on the Google Cloud Pub/Sub messages to the Spring messages produced by the adapter.
The outbound channel adapter maps every header from Spring messages into Google Cloud Pub/Sub ones, except the ones added by Spring, like headers with the keys "id", "timestamp" and "gcp_pubsub_acknowledgement". In the process, the outbound mapper also converts the header values into strings.
Each adapter declares a setHeaderMapper() method to let you further customize which headers you want to map from Spring to Google Cloud Pub/Sub, and vice-versa. For example, to filter out the headers "foo", "bar" and all headers starting with the prefix "prefix_", you can use setHeaderMapper() along with the PubSubHeaderMapper implementation provided by this module.
PubSubMessageHandler adapter = ...
...
PubSubHeaderMapper headerMapper = new PubSubHeaderMapper();
headerMapper.setOutboundHeaderPatterns("!foo", "!bar", "!prefix_*", "*");
adapter.setHeaderMapper(headerMapper);
Note: the order in which the patterns are declared in PubSubHeaderMapper.setOutboundHeaderPatterns() and setInboundHeaderPatterns() matters; the first matching pattern wins.
In the previous example, the "*" pattern means every header is mapped. However, because it comes last in the list, the previous patterns take precedence.
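The first-match-wins semantics of these patterns can be sketched in plain Java. This is only an illustration of the ordering rule; it is not the actual PubSubHeaderMapper implementation, and the HeaderPatternDemo class and shouldMap method are hypothetical names:

```java
import java.util.List;

public class HeaderPatternDemo {

    /**
     * Decides whether a header should be mapped: the first pattern that
     * matches wins. A leading "!" negates a pattern, "*" matches everything,
     * and a trailing "*" matches by prefix.
     */
    public static boolean shouldMap(String headerName, List<String> patterns) {
        for (String pattern : patterns) {
            boolean negated = pattern.startsWith("!");
            String p = negated ? pattern.substring(1) : pattern;

            boolean matches;
            if (p.equals("*")) {
                matches = true;
            } else if (p.endsWith("*")) {
                matches = headerName.startsWith(p.substring(0, p.length() - 1));
            } else {
                matches = headerName.equals(p);
            }

            if (matches) {
                return !negated; // first matching pattern decides
            }
        }
        return false; // nothing matched: header is filtered out
    }

    public static void main(String[] args) {
        List<String> patterns = List.of("!foo", "!bar", "!prefix_*", "*");
        System.out.println(shouldMap("foo", patterns));      // false
        System.out.println(shouldMap("prefix_x", patterns)); // false
        System.out.println(shouldMap("baz", patterns));      // true
    }
}
```

With the pattern list from the example above, "foo", "bar" and any "prefix_" header hit one of the negated patterns first and are excluded, while every other header falls through to the final "*" and is mapped.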
Available examples:
The channel adapters for Google Cloud Storage allow you to read and write files to Google Cloud Storage through MessageChannels.
Spring Cloud GCP provides two inbound adapters, GcsInboundFileSynchronizingMessageSource and GcsStreamingMessageSource, and one outbound adapter, GcsMessageHandler.
The Spring Integration Channel Adapters for Google Cloud Storage are included in the spring-cloud-gcp-storage module. To use the Storage portion of Spring Integration for Spring Cloud GCP, you must also provide the spring-integration-file dependency, since it isn't pulled in transitively.
Maven coordinates, using Spring Cloud GCP BOM:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-storage</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-file</artifactId>
</dependency>
Gradle coordinates:
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-storage'
    compile group: 'org.springframework.integration', name: 'spring-integration-file'
}
The Google Cloud Storage inbound channel adapter polls a Google Cloud Storage bucket for new files and sends each of them in a Message payload to the MessageChannel specified in the @InboundChannelAdapter annotation.
The files are temporarily stored in a folder in the local file system.
Here is an example of how to configure a Google Cloud Storage inbound channel adapter.
@Bean
@InboundChannelAdapter(channel = "new-file-channel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> synchronizerAdapter(Storage gcs) {
    GcsInboundFileSynchronizer synchronizer = new GcsInboundFileSynchronizer(gcs);
    synchronizer.setRemoteDirectory("your-gcs-bucket");

    GcsInboundFileSynchronizingMessageSource synchAdapter =
            new GcsInboundFileSynchronizingMessageSource(synchronizer);
    synchAdapter.setLocalDirectory(new File("local-directory"));

    return synchAdapter;
}
The inbound streaming channel adapter is similar to the normal inbound channel adapter, except it does not require files to be stored in the file system.
Here is an example of how to configure a Google Cloud Storage inbound streaming channel adapter.
@Bean
@InboundChannelAdapter(channel = "streaming-channel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<InputStream> streamingAdapter(Storage gcs) {
    GcsStreamingMessageSource adapter =
            new GcsStreamingMessageSource(new GcsRemoteFileTemplate(new GcsSessionFactory(gcs)));
    adapter.setRemoteDirectory("your-gcs-bucket");
    return adapter;
}
The outbound channel adapter allows files to be written to Google Cloud Storage.
When it receives a Message containing a payload of type File, it writes that file to the Google Cloud Storage bucket specified in the adapter.
Here is an example of how to configure a Google Cloud Storage outbound channel adapter.
@Bean
@ServiceActivator(inputChannel = "writeFiles")
public MessageHandler outboundChannelAdapter(Storage gcs) {
    GcsMessageHandler outboundChannelAdapter = new GcsMessageHandler(new GcsSessionFactory(gcs));
    outboundChannelAdapter.setRemoteDirectoryExpression(new ValueExpression<>("your-gcs-bucket"));
    return outboundChannelAdapter;
}
A sample application is available.
Spring Cloud GCP provides a Spring Cloud Stream binder to Google Cloud Pub/Sub.
The provided binder relies on the Spring Integration Channel Adapters for Google Cloud Pub/Sub.
Maven coordinates, using Spring Cloud GCP BOM:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-pubsub-stream-binder</artifactId>
</dependency>
Gradle coordinates:
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-pubsub-stream-binder'
}
This binder binds producers to Google Cloud Pub/Sub topics and consumers to subscriptions.
Note: partitioning is currently not supported by this binder.
You can configure the Spring Cloud Stream Binder for Google Cloud Pub/Sub to automatically generate the underlying resources, like the Google Cloud Pub/Sub topics and subscriptions for producers and consumers.
For that, you can use the spring.cloud.stream.gcp.pubsub.bindings.<channelName>.<consumer|producer>.auto-create-resources property, which is turned ON by default. Starting with version 1.1, these and other binder properties can be configured globally for all bindings, e.g. spring.cloud.stream.gcp.pubsub.default.consumer.auto-create-resources.
If you are using Pub/Sub auto-configuration from the Spring Cloud GCP Pub/Sub Starter, you should refer to the configuration section for other Pub/Sub parameters.
Note: to use this binder with a running emulator, configure its host and port via spring.cloud.gcp.pubsub.emulator-host.
If automatic resource creation is turned ON and the topic corresponding to the destination name does not exist, it will be created. For example, for the following configuration, a topic called myEvents would be created.
application.properties.
spring.cloud.stream.bindings.events.destination=myEvents
spring.cloud.stream.gcp.pubsub.bindings.events.producer.auto-create-resources=true
If automatic resource creation is turned ON and the subscription and/or the topic do not exist for a consumer, a subscription and potentially a topic will be created. The topic name will be the same as the destination name, and the subscription name will be the destination name followed by the consumer group name.
Regardless of the auto-create-resources setting, if the consumer group is not specified, an anonymous one will be created with the name anonymous.<destinationName>.<randomUUID>. When the binder shuts down, all Pub/Sub subscriptions created for anonymous consumer groups will be automatically cleaned up.
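The subscription naming conventions described above can be sketched in plain Java. This is an illustration only; the SubscriptionNaming class and its methods are hypothetical helpers, not binder code:

```java
import java.util.UUID;

public class SubscriptionNaming {

    /** Subscription name for a named consumer group: <destinationName>.<consumerGroup>. */
    public static String forGroup(String destination, String group) {
        return destination + "." + group;
    }

    /** Subscription name for an anonymous consumer group: anonymous.<destinationName>.<randomUUID>. */
    public static String forAnonymousGroup(String destination) {
        return "anonymous." + destination + "." + UUID.randomUUID();
    }

    public static void main(String[] args) {
        System.out.println(forGroup("myEvents", "consumerGroup1")); // myEvents.consumerGroup1
        System.out.println(forAnonymousGroup("myEvents"));
    }
}
```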
For example, for the following configuration, a topic named myEvents and a subscription called myEvents.consumerGroup1 would be created. If the consumer group is not specified, a subscription called anonymous.myEvents.a6d83782-c5a3-4861-ac38-e6e2af15a7be would be created and later cleaned up.
Important: if you are manually creating Pub/Sub subscriptions for consumers, make sure that they follow the naming convention of <destinationName>.<consumerGroup>.
application.properties.
spring.cloud.stream.bindings.events.destination=myEvents
spring.cloud.stream.gcp.pubsub.bindings.events.consumer.auto-create-resources=true

# specify consumer group, and avoid anonymous consumer group generation
spring.cloud.stream.bindings.events.group=consumerGroup1
A sample application is available.
Spring Cloud Sleuth is an instrumentation framework for Spring Boot applications. It captures trace information and can forward traces to services like Zipkin for storage and analysis.
Google Cloud Platform provides its own managed distributed tracing service called Stackdriver Trace. Instead of running and maintaining your own Zipkin instance and storage, you can use Stackdriver Trace to store traces, view trace details, generate latency distributions graphs, and generate performance regression reports.
This Spring Cloud GCP starter can forward Spring Cloud Sleuth traces to Stackdriver Trace without an intermediary Zipkin server.
Maven coordinates, using Spring Cloud GCP BOM:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-trace</artifactId>
</dependency>
Gradle coordinates:
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-trace'
}
You must enable the Stackdriver Trace API from the Google Cloud Console in order to capture traces. Navigate to the Stackdriver Trace API for your project and make sure it's enabled.
Note: if you are already using a Zipkin server capturing trace information from multiple platforms/frameworks, you can also use a Stackdriver Zipkin proxy to forward those traces to Stackdriver Trace without modifying existing applications.
Spring Cloud Sleuth uses the Brave tracer to generate traces.
This integration enables Brave to use the StackdriverTracePropagation propagation.
A propagation is responsible for extracting trace context from an entity (e.g., an HTTP servlet request) and injecting trace context into an entity.
A canonical example of the propagation usage is a web server that receives an HTTP request, which triggers other HTTP requests from the server before returning an HTTP response to the original caller.
In the case of StackdriverTracePropagation, trace context is first looked up under the x-cloud-trace-context key (e.g., an HTTP request header). The value of the x-cloud-trace-context key can be formatted in three different ways:
x-cloud-trace-context: TRACE_ID
x-cloud-trace-context: TRACE_ID/SPAN_ID
x-cloud-trace-context: TRACE_ID/SPAN_ID;o=TRACE_TRUE
TRACE_ID is a 32-character hexadecimal value that encodes a 128-bit number. SPAN_ID is an unsigned long. Since Stackdriver Trace doesn't support span joins, a new span ID is always generated, regardless of the one specified in x-cloud-trace-context.
TRACE_TRUE can either be 0 if the entity should be untraced, or 1 if it should be traced. This field forces the decision of whether or not to trace the request; if omitted, the decision is deferred to the sampler.
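A minimal sketch of parsing these three header formats in plain Java follows. This is illustrative only; in the starter, the actual extraction is handled by Brave's StackdriverTracePropagation, and the TraceContextHeader class below is a hypothetical name:

```java
public class TraceContextHeader {

    public final String traceId;
    public final String spanId;   // null when no SPAN_ID segment is present
    public final Boolean sampled; // null when the ;o= option is omitted

    private TraceContextHeader(String traceId, String spanId, Boolean sampled) {
        this.traceId = traceId;
        this.spanId = spanId;
        this.sampled = sampled;
    }

    /** Parses TRACE_ID[/SPAN_ID[;o=TRACE_TRUE]]. */
    public static TraceContextHeader parse(String headerValue) {
        Boolean sampled = null;
        int optionIdx = headerValue.indexOf(";o=");
        if (optionIdx >= 0) {
            sampled = headerValue.substring(optionIdx + 3).equals("1");
            headerValue = headerValue.substring(0, optionIdx);
        }
        int slashIdx = headerValue.indexOf('/');
        if (slashIdx < 0) {
            return new TraceContextHeader(headerValue, null, sampled);
        }
        return new TraceContextHeader(
                headerValue.substring(0, slashIdx),
                headerValue.substring(slashIdx + 1),
                sampled);
    }

    public static void main(String[] args) {
        TraceContextHeader ctx = parse("105445aa7843bc8bf206b12000100000/123;o=1");
        System.out.println(ctx.traceId + " / " + ctx.spanId + " / sampled=" + ctx.sampled);
    }
}
```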
If an x-cloud-trace-context key isn't found, StackdriverTracePropagation falls back to tracing with the X-B3 headers.
The Spring Boot Starter for Stackdriver Trace uses Spring Cloud Sleuth and auto-configures a StackdriverSender that sends Sleuth's trace information to Stackdriver Trace.
All configurations are optional:
Name | Description | Required | Default value
---|---|---|---
spring.cloud.gcp.trace.enabled | Auto-configure Spring Cloud Sleuth to send traces to Stackdriver Trace. | No | true
spring.cloud.gcp.trace.project-id | Overrides the project ID from the Spring Cloud GCP Module | No |
spring.cloud.gcp.trace.credentials.location | Overrides the credentials location from the Spring Cloud GCP Module | No |
spring.cloud.gcp.trace.credentials.encoded-key | Overrides the credentials encoded key from the Spring Cloud GCP Module | No |
spring.cloud.gcp.trace.credentials.scopes | Overrides the credentials scopes from the Spring Cloud GCP Module | No |
spring.cloud.gcp.trace.num-executor-threads | Number of threads used by the Trace executor | No | 4
spring.cloud.gcp.trace.authority | HTTP/2 authority the channel claims to be connecting to. | No |
spring.cloud.gcp.trace.compression | Name of the compression to use in Trace calls | No |
spring.cloud.gcp.trace.deadline-ms | Call deadline in milliseconds | No |
spring.cloud.gcp.trace.max-inbound-size | Maximum size for inbound messages | No |
spring.cloud.gcp.trace.max-outbound-size | Maximum size for outbound messages | No |
spring.cloud.gcp.trace.wait-for-channel-ready | Waits for the channel to be ready in case of a transient failure | No | false
You can use core Spring Cloud Sleuth properties to control Sleuth's sampling rate, among other settings. Read the Sleuth documentation for more information on Sleuth configurations. For example, when you are testing and want to see that traces are coming through, you can set the sampling rate to 100%.
spring.sleuth.sampler.probability=1 # Send 100% of the request traces to Stackdriver.
spring.sleuth.web.skipPattern=(^cleanup.*|.+favicon.*) # Ignore some URL paths.
Spring Cloud GCP Trace does override some Sleuth configurations; for example, it uses StackdriverHttpClientParser and StackdriverHttpServerParser by default to populate Stackdriver-related fields.

Integration with Stackdriver Logging is available through the Stackdriver Logging Support. If the Trace integration is used together with the Logging one, the request logs will be associated with the corresponding traces.
The trace logs can be viewed by going to the Google Cloud Console Trace List, selecting a trace and pressing the Logs → View link in the Details section.
A sample application and a codelab are available.
Maven coordinates, using Spring Cloud GCP BOM:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-logging</artifactId>
</dependency>
Gradle coordinates:
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-logging'
}
Stackdriver Logging is the managed logging service provided by Google Cloud Platform.
This module provides support for associating a web request trace ID with the corresponding log entries. It does so by retrieving the X-B3-TraceId value from the Mapped Diagnostic Context (MDC), which is set by Spring Cloud Sleuth. If Spring Cloud Sleuth isn't used, the configured TraceIdExtractor extracts the desired header value and sets it as the log entry's trace ID. This allows grouping of log messages by request, for example, in the Google Cloud Console Logs viewer.
Note: due to the way logging is set up, the GCP project ID and credentials defined in application.properties are ignored. Instead, set the GOOGLE_CLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS environment variables to the project ID and the credentials private key location, respectively.
For use in Web MVC-based applications, a TraceIdLoggingWebMvcInterceptor is provided that extracts the request trace ID from an HTTP request using a TraceIdExtractor and stores it in a thread-local, which can then be used in a logging appender to add the trace ID metadata to log messages.
Warning: if Spring Cloud GCP Trace is enabled, the logging module disables itself and delegates log correlation to Spring Cloud Sleuth.
A LoggingWebMvcConfigurer configuration class is also provided to help register the TraceIdLoggingWebMvcInterceptor in Spring MVC applications. Applications hosted on the Google Cloud Platform include trace IDs under the x-cloud-trace-context header, which will be included in log entries. However, if Sleuth is used, the trace ID will be picked up from the MDC.
Currently, only Logback is supported, and there are two ways to log to Stackdriver via this library with Logback: through direct API calls and through JSON-formatted console logs.
A Stackdriver appender is available using org/springframework/cloud/gcp/autoconfigure/logging/logback-appender.xml. This appender builds a Stackdriver Logging log entry from a JUL or Logback log entry, adds a trace ID to it and sends it to Stackdriver Logging. The STACKDRIVER_LOG_NAME and STACKDRIVER_LOG_FLUSH_LEVEL environment variables can be used to customize the STACKDRIVER appender.
Your configuration may then look like this:
<configuration>
    <include resource="org/springframework/cloud/gcp/autoconfigure/logging/logback-appender.xml" />

    <root level="INFO">
        <appender-ref ref="STACKDRIVER" />
    </root>
</configuration>
If you want to have more control over the log output, you can further configure the appender. The following properties are available:
Property | Default Value | Description
---|---|---
log | spring.log | The Stackdriver Log name. This can also be set via the STACKDRIVER_LOG_NAME environment variable.
flushLevel | WARN | If a log entry with this level is encountered, trigger a flush of locally buffered log to Stackdriver Logging. This can also be set via the STACKDRIVER_LOG_FLUSH_LEVEL environment variable.
For Logback, an org/springframework/cloud/gcp/autoconfigure/logging/logback-json-appender.xml file is made available for import to make it easier to configure the JSON Logback appender. Your configuration may then look something like this:
<configuration>
    <include resource="org/springframework/cloud/gcp/autoconfigure/logging/logback-json-appender.xml" />

    <root level="INFO">
        <appender-ref ref="CONSOLE_JSON" />
    </root>
</configuration>
If your application is running on Google Kubernetes Engine, Google Compute Engine or Google App Engine Flexible, your console logging is automatically saved to Google Stackdriver Logging. Therefore, you can just include org/springframework/cloud/gcp/autoconfigure/logging/logback-json-appender.xml in your logging configuration, which logs JSON entries to the console. The trace ID will be set correctly.
If you want to have more control over the log output, you can further configure the appender. The following properties are available:
Property | Default Value | Description
---|---|---
projectId | If not set, the default value is determined from the SPRING_CLOUD_GCP_LOGGING_PROJECT_ID environment variable or, failing that, the default GCP project ID provider | This is used to generate the fully qualified Stackdriver trace ID format projects/[PROJECT-ID]/traces/[TRACE-ID]. This format is required to correlate trace between Stackdriver Trace and Stackdriver Logging.
includeTraceId | true | Should the trace ID be included
includeSpanId | true | Should the span ID be included
includeLevel | true | Should the severity be included
includeThreadName | true | Should the thread name be included
includeMDC | true | Should all MDC properties be included. The MDC properties X-B3-TraceId, X-B3-SpanId and X-Span-Export are excluded by default.
includeLoggerName | true | Should the name of the logger be included
includeFormattedMessage | true | Should the formatted log message be included
includeExceptionInMessage | true | Should the stacktrace be appended to the formatted log message. This setting is only evaluated if includeFormattedMessage is true.
includeContextName | true | Should the logging context be included
includeMessage | false | Should the log message with blank placeholders be included
includeException | false | Should the stacktrace be included as its own field
This is an example of such a Logback configuration:
<configuration>
    <property name="projectId" value="${projectId:-${GOOGLE_CLOUD_PROJECT}}"/>

    <appender name="CONSOLE_JSON" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.springframework.cloud.gcp.logging.StackdriverJsonLayout">
                <projectId>${projectId}</projectId>

                <!--<includeTraceId>true</includeTraceId>-->
                <!--<includeSpanId>true</includeSpanId>-->
                <!--<includeLevel>true</includeLevel>-->
                <!--<includeThreadName>true</includeThreadName>-->
                <!--<includeMDC>true</includeMDC>-->
                <!--<includeLoggerName>true</includeLoggerName>-->
                <!--<includeFormattedMessage>true</includeFormattedMessage>-->
                <!--<includeExceptionInMessage>true</includeExceptionInMessage>-->
                <!--<includeContextName>true</includeContextName>-->
                <!--<includeMessage>false</includeMessage>-->
                <!--<includeException>false</includeException>-->
            </layout>
        </encoder>
    </appender>
</configuration>
A Sample Spring Boot Application is provided to show how to use the Cloud logging starter.
Spring Cloud GCP makes it possible to use the Google Runtime Configuration API as a Spring Cloud Config server to remotely store your application configuration data.
The Spring Cloud GCP Config support is provided via its own Spring Boot starter. It enables the use of the Google Runtime Configuration API as a source for Spring Boot configuration properties.
Note: the Google Cloud Runtime Configuration service is in beta status.
Maven coordinates, using Spring Cloud GCP BOM:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-config</artifactId>
</dependency>
Gradle coordinates:
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-config'
}
The following parameters are configurable in Spring Cloud GCP Config:
Name | Description | Required | Default value
---|---|---|---
spring.cloud.gcp.config.enabled | Enables the Config client | No | true
spring.cloud.gcp.config.name | Name of your application | No | Value of the spring.application.name property
spring.cloud.gcp.config.profile | Active profile | No | Value of the spring.profiles.active property
spring.cloud.gcp.config.timeout-millis | Timeout in milliseconds for connecting to the Google Runtime Configuration API | No | 60000
spring.cloud.gcp.config.project-id | GCP project ID where the Google Runtime Configuration API is hosted | No |
spring.cloud.gcp.config.credentials.location | OAuth2 credentials for authenticating with the Google Runtime Configuration API | No |
spring.cloud.gcp.config.credentials.encoded-key | Base64-encoded OAuth2 credentials for authenticating with the Google Runtime Configuration API | No |
spring.cloud.gcp.config.credentials.scopes | OAuth2 scope for Spring Cloud GCP Config credentials | No | https://www.googleapis.com/auth/cloudruntimeconfig
Note: these properties should be specified in a bootstrap.properties (or bootstrap.yml) file, rather than the usual application.properties file.
Note | |
---|---|
Core properties, as described in Spring Cloud GCP Core Module, do not apply to Spring Cloud GCP Config. |
Create a configuration in the Google Runtime Configuration API that is called ${spring.application.name}_${spring.profiles.active}. In other words, if spring.application.name is myapp and spring.profiles.active is prod, the configuration should be called myapp_prod.
In order to do that, you should have the Google Cloud SDK installed, have a Google Cloud project, and run the following commands:
gcloud init # if this is your first Google Cloud SDK run.
gcloud beta runtime-config configs create myapp_prod
gcloud beta runtime-config configs variables set myapp.queue-size 25 --config-name myapp_prod
Configure your bootstrap.properties file with your application's configuration data:
spring.application.name=myapp
spring.profiles.active=prod
Add the @ConfigurationProperties annotation to a Spring-managed bean:
@Component
@ConfigurationProperties("myapp")
public class SampleConfig {

    private int queueSize;

    public int getQueueSize() {
        return this.queueSize;
    }

    public void setQueueSize(int queueSize) {
        this.queueSize = queueSize;
    }
}
When your Spring application starts, the queueSize field value of the SampleConfig bean above will be set to 25.
Spring Cloud provides support for reloading configuration parameters via a POST request to the /actuator/refresh endpoint.
Maven coordinates:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Gradle coordinates:
dependencies {
    compile group: 'org.springframework.boot', name: 'spring-boot-starter-actuator'
}
Add @RefreshScope to your Spring configuration class to have parameters be reloadable at runtime. Add management.endpoints.web.exposure.include=refresh to your application.properties to allow unrestricted access to /actuator/refresh.

Update a property with gcloud:
$ gcloud beta runtime-config configs variables set \
    myapp.queue_size 200 \
    --config-name myapp_prod
Send a POST request to the refresh endpoint:
$ curl -XPOST http://myapp.host.com/actuator/refresh
A sample application and a codelab are available.
Spring Data is an abstraction for storing and retrieving POJOs in numerous storage technologies. Spring Cloud GCP adds Spring Data support for Google Cloud Spanner.
Maven coordinates for this module only, using Spring Cloud GCP BOM:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-data-spanner</artifactId>
</dependency>
Gradle coordinates:
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-data-spanner'
}
We provide a Spring Boot Starter for Spring Data Spanner, with which you can leverage our recommended auto-configuration setup. To use the starter, see the coordinates below.
Maven:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-data-spanner</artifactId>
</dependency>
Gradle:
dependencies {
    compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-data-spanner'
}
This setup takes care of bringing in the latest compatible version of Cloud Java Cloud Spanner libraries as well.
To set up Spring Data Cloud Spanner, you have to configure the Cloud Spanner instance and database to use, along with credentials for your GCP project. You can use the Spring Boot Starter for Spring Data Spanner to autoconfigure Google Cloud Spanner in your Spring application. It contains all the necessary setup that makes it easy to authenticate with your Google Cloud project. The following configuration options are available:
Name | Description | Required | Default value
---|---|---|---
spring.cloud.gcp.spanner.instance-id | Cloud Spanner instance to use | Yes |
spring.cloud.gcp.spanner.database | Cloud Spanner database to use | Yes |
spring.cloud.gcp.spanner.project-id | GCP project ID where the Google Cloud Spanner API is hosted, if different from the one in the Spring Cloud GCP Core Module | No |
spring.cloud.gcp.spanner.credentials.location | OAuth2 credentials for authenticating with the Google Cloud Spanner API, if different from the ones in the Spring Cloud GCP Core Module | No |
spring.cloud.gcp.spanner.credentials.encoded-key | Base64-encoded OAuth2 credentials for authenticating with the Google Cloud Spanner API, if different from the ones in the Spring Cloud GCP Core Module | No |
spring.cloud.gcp.spanner.credentials.scopes | OAuth2 scope for Spring Cloud GCP Cloud Spanner credentials | No | https://www.googleapis.com/auth/spanner.data
spring.cloud.gcp.spanner.createInterleavedTableDdlOnDeleteCascade | If true, generated schema statements for interleaved parent-child tables use ON DELETE CASCADE | No | true
spring.cloud.gcp.spanner.numRpcChannels | Number of gRPC channels used to connect to Cloud Spanner | No | 4 - Determined by Cloud Spanner client library
spring.cloud.gcp.spanner.prefetchChunks | Number of chunks prefetched by Cloud Spanner for read and query | No | 4 - Determined by Cloud Spanner client library
spring.cloud.gcp.spanner.minSessions | Minimum number of sessions maintained in the session pool | No | 0 - Determined by Cloud Spanner client library
spring.cloud.gcp.spanner.maxSessions | Maximum number of sessions the session pool can have | No | 400 - Determined by Cloud Spanner client library
spring.cloud.gcp.spanner.maxIdleSessions | Maximum number of idle sessions the session pool will maintain | No | 0 - Determined by Cloud Spanner client library
spring.cloud.gcp.spanner.writeSessionsFraction | Fraction of sessions to be kept prepared for write transactions | No | 0.2 - Determined by Cloud Spanner client library
spring.cloud.gcp.spanner.keepAliveIntervalMinutes | How long to keep idle sessions alive, in minutes | No | 30 - Determined by Cloud Spanner client library
Spring Data Repositories can be configured via the @EnableSpannerRepositories annotation on your main @Configuration class. With our Spring Boot Starter for Spring Data Cloud Spanner, @EnableSpannerRepositories is automatically added. It is not required to add it to any other class, unless there is a need to override the finer-grained configuration parameters provided by @EnableSpannerRepositories.
Our Spring Boot autoconfiguration makes the following beans available in the Spring application context:

- SpannerTemplate
- SpannerDatabaseAdminTemplate, for generating table schemas from object hierarchies and creating and deleting tables and databases
- An instance of every user-defined repository extending SpannerRepository, CrudRepository or PagingAndSortingRepository, when repositories are enabled
- DatabaseClient from the Google Cloud Java Client for Spanner, for convenience and lower-level API access

Spring Data Cloud Spanner allows you to map domain POJOs to Cloud Spanner tables via annotations:
@Table(name = "traders")
public class Trader {

    @PrimaryKey
    @Column(name = "trader_id")
    String traderId;

    String firstName;

    String lastName;

    @NotMapped
    Double temporaryNumber;
}
Spring Data Cloud Spanner will ignore any property annotated with @NotMapped. These properties will not be written to or read from Spanner.
Simple constructors are supported on POJOs. The constructor arguments can be a subset of the persistent properties. Every constructor argument needs to have the same name and type as a persistent property on the entity and the constructor should set the property from the given argument. Arguments that are not directly set to properties are not supported.
@Table(name = "traders")
public class Trader {

    @PrimaryKey
    @Column(name = "trader_id")
    String traderId;

    String firstName;

    String lastName;

    @NotMapped
    Double temporaryNumber;

    public Trader(String traderId, String firstName) {
        this.traderId = traderId;
        this.firstName = firstName;
    }
}
The @Table annotation can provide the name of the Cloud Spanner table that stores instances of the annotated class, one per row. This annotation is optional, and if it is not given, the name of the table is inferred from the class name with the first character uncapitalized.
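The default inference rule (simple class name with the first character uncapitalized) can be sketched as follows. This is an illustration of the rule only; the TableNameInference class and its method are hypothetical, not Spring Data Cloud Spanner code:

```java
public class TableNameInference {

    /** Default table name: the simple class name with its first character lower-cased. */
    public static String inferTableName(String simpleClassName) {
        return Character.toLowerCase(simpleClassName.charAt(0)) + simpleClassName.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(inferTableName("Trader")); // trader
        System.out.println(inferTableName("Trade"));  // trade
    }
}
```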
In some cases, you might want the @Table table name to be determined dynamically. To do that, you can use Spring Expression Language. For example:
@Table(name = "trades_#{tableNameSuffix}")
public class Trade {
    // ...
}
The table name will be resolved only if the tableNameSuffix value/bean in the Spring application context is defined. For example, if tableNameSuffix has the value "123", the table name will resolve to trades_123.
For a simple table, you may only have a primary key consisting of a single column. Even in that case, the @PrimaryKey annotation is required; it identifies the one or more ID properties corresponding to the primary key. Spanner has first-class support for composite primary keys of multiple columns. You have to annotate all of the POJO fields that the primary key consists of with @PrimaryKey, as below:
@Table(name = "trades")
public class Trade {

    @PrimaryKey(keyOrder = 2)
    @Column(name = "trade_id")
    private String tradeId;

    @PrimaryKey(keyOrder = 1)
    @Column(name = "trader_id")
    private String traderId;

    private String action;

    private Double price;

    private Double shares;

    private String symbol;
}
The keyOrder parameter of @PrimaryKey identifies the properties corresponding to the primary key columns in order, starting with 1 and increasing consecutively. Order is important and must reflect the order defined in the Cloud Spanner schema.
In our example the DDL to create the table and its primary key is as follows:
CREATE TABLE trades (
    trader_id STRING(MAX),
    trade_id STRING(MAX),
    action STRING(15),
    symbol STRING(10),
    price FLOAT64,
    shares FLOAT64
) PRIMARY KEY (trader_id, trade_id)
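The relationship between keyOrder and the composite key can be sketched in plain Java. This is illustrative only; the PrimaryKeyOrdering class and its assembleKey helper are hypothetical and are not part of the library:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class PrimaryKeyOrdering {

    /**
     * Given keyOrder -> key-part value, assembles the key parts in ascending
     * keyOrder, mirroring how the composite primary key above is ordered.
     */
    public static String assembleKey(Map<Integer, String> keyParts) {
        return new TreeMap<>(keyParts).values().stream()
                .collect(Collectors.joining("/"));
    }

    public static void main(String[] args) {
        // trader_id has keyOrder = 1, trade_id has keyOrder = 2,
        // so trader_id comes first even though trade_id is declared first.
        System.out.println(assembleKey(Map.of(2, "trade-55", 1, "trader-9")));
        // trader-9/trade-55
    }
}
```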
Spanner does not have automatic ID generation. For most use-cases, sequential IDs should be used with caution to avoid creating data hotspots in the system. Read Spanner Primary Keys documentation for a better understanding of primary keys and recommended practices.
All accessible properties on POJOs are automatically recognized as Cloud Spanner columns. Column naming is generated by the PropertyNameFieldNamingStrategy, defined by default on the SpannerMappingContext bean.
The @Column annotation optionally provides a different column name than that of the property, as well as some other settings:

- name is the optional name of the column
- spannerTypeMaxLength specifies, for STRING and BYTES columns, the maximum length. This setting is only used when generating DDL schema statements based on domain types.
- nullable specifies if the column is created as NOT NULL. This setting is only used when generating DDL schema statements based on domain types.
- spannerType is the Cloud Spanner column type you can optionally specify. If this is not specified, then a compatible column type is inferred from the Java property type.
- spannerCommitTimestamp is a boolean specifying if this property corresponds to an auto-populated commit timestamp column. Any value set in this property will be ignored when writing to Cloud Spanner.

If an object of type B is embedded as a property of A, then the columns of B will be saved in the same Cloud Spanner table as those of A. If B has primary key columns, those columns will be included in the primary key of A. B can also have embedded properties. Embedding allows reuse of columns between multiple entities, and can be useful for implementing parent-child situations, because Cloud Spanner requires child tables to include the key columns of their parents.
For example:
class X { @PrimaryKey String grandParentId; long age; } class A { @PrimaryKey @Embedded X grandParent; @PrimaryKey(keyOrder = 2) String parentId; String value; } @Table(name = "items") class B { @PrimaryKey @Embedded A parent; @PrimaryKey(keyOrder = 2) String id; @Column(name = "child_value") String value; }
Entities of B
can be stored in a table defined as:
CREATE TABLE items ( grandParentId STRING(MAX), parentId STRING(MAX), id STRING(MAX), value STRING(MAX), child_value STRING(MAX), age INT64 ) PRIMARY KEY (grandParentId, parentId, id)
Note that embedded properties' column names must all be unique.
Spring Data Cloud Spanner supports parent-child relationships using the Cloud Spanner parent-child interleaved table mechanism. Cloud Spanner interleaved tables enforce the one-to-many relationship and provide efficient queries and operations on entities of a single domain parent entity. These relationships can be up to 7 levels deep. Cloud Spanner also provides automatic cascading delete or enforces the deletion of child entities before parents.
While one-to-one and many-to-many relationships can be implemented in Cloud Spanner and Spring Data Cloud Spanner using constructs of interleaved parent-child tables, only the parent-child relationship is natively supported. Cloud Spanner does not support the foreign key constraint, though the parent-child key constraint enforces a similar requirement when used with interleaved tables.
For example, the following Java entities:
@Table(name = "Singers") class Singer { @PrimaryKey long SingerId; String FirstName; String LastName; byte[] SingerInfo; @Interleaved List<Album> albums; } @Table(name = "Albums") class Album { @PrimaryKey long SingerId; @PrimaryKey(keyOrder = 2) long AlbumId; String AlbumTitle; }
These classes can correspond to an existing pair of interleaved tables.
The @Interleaved
annotation may be applied to Collection
properties and the inner type is resolved as the child entity type.
The schema needed to create them can also be generated using the SpannerSchemaUtils
and executed using the SpannerDatabaseAdminTemplate
:
@Autowired SpannerSchemaUtils schemaUtils; @Autowired SpannerDatabaseAdminTemplate databaseAdmin; ... // Get the create statements for all tables in the table structure rooted at Singer List<String> createStrings = this.schemaUtils.getCreateTableDdlStringsForInterleavedHierarchy(Singer.class); // Create the tables and also create the database if necessary this.databaseAdmin.executeDdlStrings(createStrings, true);
The createStrings
list contains table schema statements using column names and types compatible with the provided Java type and any resolved child relationship types contained within based on the configured custom converters.
CREATE TABLE Singers ( SingerId INT64 NOT NULL, FirstName STRING(1024), LastName STRING(1024), SingerInfo BYTES(MAX), ) PRIMARY KEY (SingerId); CREATE TABLE Albums ( SingerId INT64 NOT NULL, AlbumId INT64 NOT NULL, AlbumTitle STRING(MAX), ) PRIMARY KEY (SingerId, AlbumId), INTERLEAVE IN PARENT Singers ON DELETE CASCADE;
The ON DELETE CASCADE
clause indicates that Cloud Spanner will delete all Albums of a singer if the Singer is deleted.
The alternative is ON DELETE NO ACTION
, where a Singer cannot be deleted until all of its Albums have already been deleted.
When using SpannerSchemaUtils
to generate the schema strings, the spring.cloud.gcp.spanner.createInterleavedTableDdlOnDeleteCascade
boolean setting determines if these schema are generated as ON DELETE CASCADE
for true
and ON DELETE NO ACTION
for false
.
Cloud Spanner restricts these relationships to 7 child layers. A table may have multiple child tables.
On updating or inserting an object to Cloud Spanner, all of its referenced children objects are also updated or inserted in the same request, respectively. On read, all of the interleaved child rows are also read.
Spring Data Cloud Spanner natively supports the following types for regular fields but also utilizes custom converters (detailed in following sections) and dozens of pre-defined Spring Data custom converters to handle other common Java types.
Natively supported types:
com.google.cloud.ByteArray
com.google.cloud.Date
com.google.cloud.Timestamp
java.lang.Boolean
, boolean
java.lang.Double
, double
java.lang.Long
, long
java.lang.Integer
, int
java.lang.String
double[]
long[]
boolean[]
java.util.Date
java.util.Instant
java.sql.Date
Spanner supports ARRAY
types for columns.
ARRAY
columns are mapped to List
fields in POJOS.
Example:
List<Double> curve;
The types inside the lists can be any singular property type.
Cloud Spanner queries can construct STRUCT values that appear as columns in the result.
Cloud Spanner requires STRUCT values appear in ARRAYs at the root level: SELECT ARRAY(SELECT STRUCT(1 as val1, 2 as val2)) as pair FROM Users
.
Spring Data Cloud Spanner will attempt to read the column STRUCT values into a property that is an Iterable
of an entity type compatible with the schema of the column STRUCT value.
For the previous array-select example, the following property can be mapped with the constructed ARRAY<STRUCT>
column: List<TwoInts> pair;
where the TwoInts
type is defined:
class TwoInts { int val1; int val2; }
Custom converters can be used to extend the type support for user defined types.
Converters need to implement the org.springframework.core.convert.converter.Converter
interface in both directions.
The user defined type needs to be mapped to one of the basic types supported by Spanner:
com.google.cloud.ByteArray
com.google.cloud.Date
com.google.cloud.Timestamp
java.lang.Boolean
, boolean
java.lang.Double
, double
java.lang.Long
, long
java.lang.String
double[]
long[]
boolean[]
enum types

An instance of both converters needs to be passed to a ConverterAwareMappingSpannerEntityProcessor, which then has to be made available as a @Bean for SpannerEntityProcessor.

For example:
We would like to have a field of type Person
on our Trade
POJO:
@Table(name = "trades") public class Trade { //... Person person; //... }
Where Person is a simple class:
public class Person { public String firstName; public String lastName; }
We have to define the two converters:
public class PersonWriteConverter implements Converter<Person, String> { @Override public String convert(Person person) { return person.firstName + " " + person.lastName; } } public class PersonReadConverter implements Converter<String, Person> { @Override public Person convert(String s) { Person person = new Person(); person.firstName = s.split(" ")[0]; person.lastName = s.split(" ")[1]; return person; } }
That will be configured in our @Configuration
file:
@Configuration public class ConverterConfiguration { @Bean public SpannerEntityProcessor spannerEntityProcessor(SpannerMappingContext spannerMappingContext) { return new ConverterAwareMappingSpannerEntityProcessor(spannerMappingContext, Arrays.asList(new PersonWriteConverter()), Arrays.asList(new PersonReadConverter())); } }
SpannerOperations
and its implementation, SpannerTemplate
, provide the Template pattern familiar to Spring developers.
It provides:
Using the auto-configuration
provided by our Spring Boot Starter for Spanner, your Spring application context will contain a fully configured SpannerTemplate
object that you can easily autowire in your application:
@SpringBootApplication public class SpannerTemplateExample { @Autowired SpannerTemplate spannerTemplate; public void doSomething() { this.spannerTemplate.delete(Trade.class, KeySet.all()); //... Trade t = new Trade(); //... this.spannerTemplate.insert(t); //... List<Trade> tradesByAction = spannerTemplate.findAll(Trade.class); //... } }
The Template API provides convenience methods for:
Reads, configurable by providing SpannerReadOptions and SpannerQueryOptions
Partial reads
Partial writes
Cloud Spanner has SQL support for running read-only queries.
All the query related methods start with query
on SpannerTemplate
.
Using SpannerTemplate
you can execute SQL queries that map to POJOs:
List<Trade> trades = this.spannerTemplate.query(Trade.class, Statement.of("SELECT * FROM trades"));
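Queries can also bind parameters through the Cloud Spanner Statement builder. A sketch, reusing the earlier Trade table and column names:

```java
Statement statement = Statement.newBuilder("SELECT * FROM trades WHERE action = @action")
        .bind("action").to("BUY")
        .build();
List<Trade> buyTrades = this.spannerTemplate.query(Trade.class, statement);
```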
Spanner exposes a Read API for reading a single row or multiple rows in a table or in a secondary index.
Using SpannerTemplate
you can execute reads, for example:
List<Trade> trades = this.spannerTemplate.readAll(Trade.class);
The main benefit of reads over queries is that reading multiple rows of a certain pattern of keys is much easier using the features of the KeySet
class.
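For example, a specific set of rows can be read by their composite keys (the key values below are illustrative; key components are given in the order defined by keyOrder):

```java
KeySet keys = KeySet.newBuilder()
        .addKey(Key.of("trader1", "trade1"))
        .addKey(Key.of("trader2", "trade2"))
        .build();
List<Trade> trades = this.spannerTemplate.read(Trade.class, keys);
```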
All reads and queries are strong reads by default.
A strong read is a read at a current timestamp and is guaranteed to see all data that has been committed up until the start of this read.
A stale read on the other hand is a read at a timestamp in the past.
Cloud Spanner allows you to determine how current the data should be when you read data.
With SpannerTemplate
you can specify the Timestamp
by setting it on SpannerQueryOptions
or SpannerReadOptions
to the appropriate read or query methods:
Reads:
// a read with options: SpannerReadOptions spannerReadOptions = new SpannerReadOptions().setTimestamp(Timestamp.now()); List<Trade> trades = this.spannerTemplate.readAll(Trade.class, spannerReadOptions);
Queries:
// a query with options: SpannerQueryOptions spannerQueryOptions = new SpannerQueryOptions().setTimestamp(Timestamp.now()); List<Trade> trades = this.spannerTemplate.query(Trade.class, Statement.of("SELECT * FROM trades"), spannerQueryOptions);
Reads can use a secondary index via the Template API; for queries, secondary indexes are implicitly available through SQL.
The following shows how to read rows from a table using a secondary index simply by setting index
on SpannerReadOptions
:
SpannerReadOptions spannerReadOptions = new SpannerReadOptions().setIndex("TradesByTrader"); List<Trade> trades = this.spannerTemplate.readAll(Trade.class, spannerReadOptions);
Limits and offsets are only supported by Queries. The following will get only the first two rows of the query:
SpannerQueryOptions spannerQueryOptions = new SpannerQueryOptions().setLimit(2).setOffset(3); List<Trade> trades = this.spannerTemplate.query(Trade.class, Statement.of("SELECT * FROM trades"), spannerQueryOptions);
Note that the above is equivalent to executing SELECT * FROM trades LIMIT 2 OFFSET 3
.
Reads by keys do not support sorting. However, queries on the Template API support sorting through standard SQL and also via Spring Data Sort API:
List<Trade> trades = this.spannerTemplate.queryAll(Trade.class, Sort.by("action"));
If the provided sorted field name is that of a property of the domain type, then the column name corresponding to that property will be used in the query. Otherwise, the given field name is assumed to be the name of the column in the Cloud Spanner table. Sorting on columns of Cloud Spanner types STRING and BYTES can be done while ignoring case:
Sort.by(Order.desc("action").ignoreCase())
Partial read is only possible when using Queries. In case the rows returned by the query have fewer columns than the entity that it will be mapped to, Spring Data will map the returned columns only. This setting also applies to nested structs and their corresponding nested POJO properties.
List<Trade> trades = this.spannerTemplate.query(Trade.class, Statement.of("SELECT action, symbol FROM trades"), new SpannerQueryOptions().setAllowMissingResultSetColumns(true));
If the setting is set to false
, then an exception will be thrown if there are missing columns in the query result.
The write methods of SpannerOperations
accept a POJO and writes all of its properties to Spanner.
The corresponding Spanner table and entity metadata is obtained from the given object’s actual type.
If a POJO was retrieved from Spanner and its primary key properties values were changed and then written or updated, the operation will occur as if against a row with the new primary key values. The row with the original primary key values will not be affected.
The insert
method of SpannerOperations
accepts a POJO and writes all of its properties to Spanner, which means the operation will fail if a row with the POJO’s primary key already exists in the table.
Trade t = new Trade(); this.spannerTemplate.insert(t);
The update
method of SpannerOperations
accepts a POJO and writes all of its properties to Spanner, which means the operation will fail if the POJO’s primary key does not already exist in the table.
// t was retrieved from a previous operation this.spannerTemplate.update(t);
The upsert
method of SpannerOperations
accepts a POJO and writes all of its properties to Spanner using update-or-insert.
// t was retrieved from a previous operation or it's new this.spannerTemplate.upsert(t);
The update methods of SpannerOperations
operate by default on all properties within the given object, but also accept String[]
and Optional<Set<String>>
of column names.
If the Optional
of set of column names is empty, then all columns are written to Spanner.
However, if the Optional contains an empty set, then no columns will be written.
// t was retrieved from a previous operation or it's new this.spannerTemplate.update(t, "symbol", "action");
DML statements can be executed using SpannerOperations.executeDmlStatement
.
Inserts, updates, and deletions can affect any number of rows and entities.
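For example, a sketch of a DML update executed through the template, using the binding syntax from the Cloud Spanner client library:

```java
long rowsAffected = this.spannerTemplate.executeDmlStatement(
        Statement.newBuilder("UPDATE trades SET action = @newAction WHERE action = @oldAction")
                .bind("newAction").to("SELL")
                .bind("oldAction").to("BUY")
                .build());
```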
SpannerOperations
provides methods to run java.util.Function
objects within a single transaction while making available the read and write methods from SpannerOperations
.
Read and write transactions are provided by SpannerOperations
via the performReadWriteTransaction
method:
@Autowired SpannerOperations mySpannerOperations; public String doWorkInsideTransaction() { return mySpannerOperations.performReadWriteTransaction( transactionSpannerOperations -> { // Work with transactionSpannerOperations here. // It is also a SpannerOperations object. return "transaction completed"; } ); }
The performReadWriteTransaction
method accepts a Function
that is provided an instance of a SpannerOperations
object.
The final returned value and type of the function is determined by the user.
You can use this object just as you would a regular SpannerOperations
with a few exceptions:
* It cannot perform sub-transactions via performReadWriteTransaction or performReadOnlyTransaction.

As these read-write transactions are locking, it is recommended that you use performReadOnlyTransaction
if your function does not perform any writes.
The performReadOnlyTransaction
method is used to perform read-only transactions using a SpannerOperations
:
@Autowired SpannerOperations mySpannerOperations; public String doWorkInsideTransaction() { return mySpannerOperations.performReadOnlyTransaction( transactionSpannerOperations -> { // Work with transactionSpannerOperations here. // It is also a SpannerOperations object. return "transaction completed"; } ); }
The performReadOnlyTransaction
method accepts a Function
that is provided an instance of a
SpannerOperations
object.
This method also accepts a ReadOptions
object, but the only attribute used is the timestamp, which determines the snapshot in time at which to perform the reads in the transaction.
If the timestamp is not set in the read options the transaction is run against the current state of the database.
The final returned value and type of the function is determined by the user.
You can use this object just as you would a regular SpannerOperations
with
a few exceptions:

* It cannot perform sub-transactions via performReadWriteTransaction or performReadOnlyTransaction.

Because read-only transactions are non-locking and can be performed on points in time in the past, these are recommended for functions that do not perform write operations.
This feature requires a bean of SpannerTransactionManager
, which is provided when using spring-cloud-gcp-starter-data-spanner
.
SpannerTemplate
and SpannerRepository
support running methods with the @Transactional
[annotation](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative) as transactions.
If a method annotated with @Transactional
calls another method also annotated, then both methods will work within the same transaction.
performReadOnlyTransaction
and performReadWriteTransaction
cannot be used in @Transactional
annotated methods because Cloud Spanner does not support transactions within transactions.
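For example, a sketch of a service method running its template operations in a single transaction (the TradeService class is hypothetical):

```java
@Service
public class TradeService {

    @Autowired
    SpannerTemplate spannerTemplate;

    // All operations in this method run in one Cloud Spanner transaction.
    @Transactional
    public void enterTrade(Trade trade) {
        this.spannerTemplate.insert(trade);
        // Reads here see a consistent snapshot within the same transaction.
        List<Trade> allTrades = this.spannerTemplate.readAll(Trade.class);
    }
}
```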
SpannerTemplate
supports [DML](https://cloud.google.com/spanner/docs/dml-tasks) Statements
.
DML statements can be executed in transactions via performReadWriteTransaction
or using the @Transactional
annotation.
When DML statements are executed outside of transactions, they are executed in [partitioned-mode](https://cloud.google.com/spanner/docs/dml-tasks#partitioned-dml).
Spring Data Repositories are a powerful abstraction that can save you a lot of boilerplate code.
For example:
public interface TraderRepository extends SpannerRepository<Trader, String> { }
Spring Data generates a working implementation of the specified interface, which can be conveniently autowired into an application.
The Trader
type parameter to SpannerRepository
refers to the underlying domain type.
The second type parameter, String
in this case, refers to the type of the key of the domain type.
For POJOs with a composite primary key, this ID type parameter can be any descendant of Object[]
compatible with all primary key properties, any descendant of Iterable
, or com.google.cloud.spanner.Key
.
If the domain POJO type only has a single primary key column, then the primary key property type can be used or the Key
type.
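For example, a sketch of a repository keyed by com.google.cloud.spanner.Key for the composite-key Trade entity, with an illustrative lookup:

```java
public interface TradeRepository extends SpannerRepository<Trade, Key> {
}

// Key components are given in the order defined by keyOrder: trader_id, then trade_id.
Optional<Trade> trade = tradeRepository.findById(Key.of("trader1", "trade1"));
```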
For example in case of Trades, that belong to a Trader, TradeRepository
would look like this:
public interface TradeRepository extends SpannerRepository<Trade, String[]> { }
public class MyApplication { @Autowired SpannerTemplate spannerTemplate; @Autowired TradeRepository tradeRepository; public void demo() { this.tradeRepository.deleteAll(); String traderId = "demo_trader"; Trade t = new Trade(); t.symbol = "ABCD"; t.action = "BUY"; t.traderId = traderId; t.price = 100.0; t.shares = 12345.6; this.spannerTemplate.insert(t); Iterable<Trade> allTrades = this.tradeRepository.findAll(); int count = this.tradeRepository.countByAction("BUY"); } }
CrudRepository
methods work as expected, with one thing Spanner specific: the save
and saveAll
methods work as update-or-insert.
You can also use PagingAndSortingRepository
with Spanner Spring Data.
The sorting and pageable findAll
methods available from this interface operate on the current state of the Spanner database.
As a result, beware that the state of the database (and the results) might change when moving page to page.
The SpannerRepository
extends the PagingAndSortingRepository
, but adds the read-only and the read-write transaction functionality provided by Spanner.
These transactions work very similarly to those of SpannerOperations
, but are specific to the repository’s domain type and provide repository functions instead of template functions.
For example, this is a read-only transaction:
@Autowired SpannerRepository myRepo; public String doWorkInsideTransaction() { return myRepo.performReadOnlyTransaction( transactionSpannerRepo -> { // Work with the single-transaction transactionSpannerRepo here. // This is a SpannerRepository object. return "transaction completed"; } ); }
When creating custom repositories for your own domain types and query methods, you can extend SpannerRepository
to access Cloud Spanner-specific features as well as all features from PagingAndSortingRepository
and CrudRepository
.
SpannerRepository
supports Query Methods.
Described in the following sections, these are methods residing in your custom repository interfaces whose implementations are generated based on their names and annotations.
Query Methods can read, write, and delete entities in Cloud Spanner.
Parameters to these methods can be any Cloud Spanner data type supported directly or via custom configured converters.
Parameters can also be of type Struct
or POJOs.
If a POJO is given as a parameter, it will be converted to a Struct
with the same type-conversion logic as used to create write mutations.
Comparisons using Struct parameters are limited to what is available with Cloud Spanner.
public interface TradeRepository extends SpannerRepository<Trade, String[]> { List<Trade> findByAction(String action); int countByAction(String action); // Named methods are powerful, but can get unwieldy List<Trade> findTop3DistinctByActionAndSymbolIgnoreCaseOrTraderIdOrderBySymbolDesc( String action, String symbol, String traderId); }
In the example above, the query methods in TradeRepository
are generated based on the name of the methods, using the Spring Data Query creation naming convention.
List<Trade> findByAction(String action)
would translate to a SELECT * FROM trades WHERE action = ?
.
The function List<Trade> findTop3DistinctByActionAndSymbolIgnoreCaseOrTraderIdOrderBySymbolDesc(String action, String symbol, String traderId);
will be translated as the equivalent of this SQL query:
SELECT DISTINCT * FROM trades WHERE ACTION = ? AND LOWER(SYMBOL) = LOWER(?) AND TRADER_ID = ? ORDER BY SYMBOL DESC LIMIT 3
The following filter options are supported:
Note that the phrase SymbolIgnoreCase
is translated to LOWER(SYMBOL) = LOWER(?)
indicating a non-case-sensitive matching.
The IgnoreCase
phrase may only be appended to fields that correspond to columns of type STRING or BYTES.
The Spring Data "AllIgnoreCase" phrase appended at the end of the method name is not supported.
The Like
or NotLike
naming conventions:
List<Trade> findBySymbolLike(String symbolFragment);
The param symbolFragment
can contain wildcard characters for string matching such as _
and %
.
The Contains
and NotContains
naming conventions:
List<Trade> findBySymbolContains(String symbolFragment);
The param symbolFragment
is a regular expression that is checked for occurrences.
Delete queries are also supported.
For example, query methods such as deleteByAction
or removeByAction
delete entities found by findByAction
.
The delete operation happens in a single transaction.
Delete queries can have the following return types:
* An integer type that is the number of entities deleted
* A collection of entities that were deleted
* void
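A sketch of how these return types might look on a repository (the method names follow the query-creation naming convention):

```java
public interface TradeRepository extends SpannerRepository<Trade, String[]> {

    // Returns the number of deleted entities.
    long deleteByAction(String action);

    // Returns the entities that were deleted.
    List<Trade> deleteBySymbol(String symbol);

    // Returns nothing.
    void deleteByTraderId(String traderId);
}
```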
A method such as List<Trade> fetchByActionNamedQuery(String action)
does not match the Spring Data query creation naming convention, so we have to map a parametrized Spanner SQL query to it.
The SQL query for the method can be mapped to repository methods in one of two ways:
* by the name of the method in a namedQueries properties file
* using the @Query annotation

The names of the tags of the SQL correspond to the @Param-annotated names of the method parameters.
Custom SQL query methods can accept a single Sort
or Pageable
parameter that is applied on top of any sorting or paging in the SQL:
@Query("SELECT * FROM trades ORDER BY action DESC") List<Trade> sortedTrades(Pageable pageable); @Query("SELECT * FROM trades ORDER BY action DESC LIMIT 1") Trade sortedTopTrade(Pageable pageable);
This can be used:
List<Trade> customSortedTrades = tradeRepository.sortedTrades(PageRequest .of(2, 2, org.springframework.data.domain.Sort.by(Order.asc("id"))));
The results would be sorted by "id" in ascending order.
Your query method can also return non-entity types:
@Query("SELECT COUNT(1) FROM trades WHERE action = @action") int countByActionQuery(String action); @Query("SELECT EXISTS(SELECT 1 FROM trades WHERE action = @action)") boolean existsByActionQuery(String action); @Query("SELECT action FROM trades WHERE action = @action LIMIT 1") String getFirstString(@Param("action") String action); @Query("SELECT action FROM trades WHERE action = @action") List<String> getFirstStringList(@Param("action") String action);
DML statements can also be executed by query methods, but the only possible return value is a long
representing the number of affected rows.
The dmlStatement
boolean setting must be set on @Query
to indicate that the query method is executed as a DML statement.
@Query(value = "DELETE FROM trades WHERE action = @action", dmlStatement = true) long deleteByActionQuery(String action);
By default, the namedQueriesLocation
attribute on @EnableSpannerRepositories
points to the META-INF/spanner-named-queries.properties
file.
You can specify the query for a method in the properties file by providing the SQL as the value for the "interface.method" property:
Trade.fetchByActionNamedQuery=SELECT * FROM trades WHERE trades.action = @tag0
public interface TradeRepository extends SpannerRepository<Trade, String[]> { // This method uses the query from the properties file instead of one generated based on name. List<Trade> fetchByActionNamedQuery(@Param("tag0") String action); }
Using the @Query
annotation:
public interface TradeRepository extends SpannerRepository<Trade, String[]> { @Query("SELECT * FROM trades WHERE trades.action = @tag0") List<Trade> fetchByActionNamedQuery(@Param("tag0") String action); }
Table names can be used directly.
For example, "trades" in the above example.
Alternatively, table names can be resolved from the @Table
annotation on domain classes as well.
In this case, the query should refer to table names with fully qualified class names between :
characters: :fully.qualified.ClassName:
.
A full example would look like:
@Query("SELECT * FROM :com.example.Trade: WHERE trades.action = @tag0")
List<Trade> fetchByActionNamedQuery(String action);
This allows table names evaluated with SpEL to be used in custom queries.
SpEL can also be used to provide SQL parameters:
@Query("SELECT * FROM :com.example.Trade: WHERE trades.action = @tag0
AND price > #{#priceRadius * -1} AND price < #{#priceRadius * 2}")
List<Trade> fetchByActionNamedQuery(String action, Double priceRadius);
Spring Data Spanner supports projections. You can define projection interfaces based on domain types and add query methods that return them in your repository:
public interface TradeProjection { String getAction(); @Value("#{target.symbol + ' ' + target.action}") String getSymbolAndAction(); } public interface TradeRepository extends SpannerRepository<Trade, Key> { List<Trade> findByTraderId(String traderId); List<TradeProjection> findByAction(String action); @Query("SELECT action, symbol FROM trades WHERE action = @action") List<TradeProjection> findByQuery(String action); }
Projections can be provided by name-convention-based query methods as well as by custom SQL queries. If using custom SQL queries, you can further restrict the columns retrieved from Spanner to just those required by the projection to improve performance.
Properties of projection types defined using SpEL use the fixed name target
for the underlying domain object.
As a result, accessing underlying properties takes the form target.<property-name>
.
When running with Spring Boot, repositories can be exposed as REST services by simply adding this dependency to your pom file:
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency>
If you prefer to configure parameters (such as path), you can use @RepositoryRestResource
annotation:
@RepositoryRestResource(collectionResourceRel = "trades", path = "trades") public interface TradeRepository extends SpannerRepository<Trade, String[]> { }
For example, you can retrieve all Trade
objects in the repository by using curl http://<server>:<port>/trades
, or any specific trade via curl http://<server>:<port>/trades/<trader_id>,<trade_id>
.
The separator between your primary key components, trader_id
and trade_id
in this case, is a comma by default, but can be configured to any string not found in your key values by extending the SpannerKeyIdConverter
class:
@Component class MySpecialIdConverter extends SpannerKeyIdConverter { @Override protected String getUrlIdSeparator() { return ":"; } }
You can also write trades using curl -XPOST -H"Content-Type: application/json" -d@test.json http://<server>:<port>/trades/
where the file test.json
holds the JSON representation of a Trade
object.
Databases and tables inside Spanner instances can be created automatically from SpannerPersistentEntity
objects:
@Autowired private SpannerSchemaUtils spannerSchemaUtils; @Autowired private SpannerDatabaseAdminTemplate spannerDatabaseAdminTemplate; public void createTable(SpannerPersistentEntity entity) { if (!spannerDatabaseAdminTemplate.tableExists(entity.tableName())) { // The boolean parameter indicates that the database will be created if it does not exist. spannerDatabaseAdminTemplate.executeDdlStrings(Arrays.asList( spannerSchemaUtils.getCreateTableDDLString(entity.getType())), true); } }
Schemas can be generated for entire object hierarchies with interleaved relationships and composite keys.
A sample application is available.
Spring Data is an abstraction for storing and retrieving POJOs in numerous storage technologies. Spring Cloud GCP adds Spring Data support for Google Cloud Datastore.
Maven coordinates for this module only, using Spring Cloud GCP BOM:
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-gcp-data-datastore</artifactId> </dependency>
Gradle coordinates:
dependencies { compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-data-datastore' }
We provide a Spring Boot Starter for Spring Data Datastore, with which you can use our recommended auto-configuration setup. To use the starter, see the coordinates below.
Maven:
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-gcp-starter-data-datastore</artifactId> </dependency>
Gradle:
dependencies { compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-data-datastore' }
This setup takes care of bringing in the latest compatible version of the Google Cloud Java client for Datastore as well.
To set up Spring Data Cloud Datastore, you have to configure the following:
You can then use the Spring Boot Starter for Spring Data Datastore to autoconfigure Google Cloud Datastore in your Spring application. It contains all the necessary setup that makes it easy to authenticate with your Google Cloud project. The following configuration options are available:
| Name | Description | Required | Default value |
|---|---|---|---|
| spring.cloud.gcp.datastore.enabled | Enables the Cloud Datastore client | No | true |
| spring.cloud.gcp.datastore.project-id | GCP project ID where the Google Cloud Datastore API is hosted, if different from the one in the Spring Cloud GCP Core Module | No | |
| spring.cloud.gcp.datastore.credentials.location | OAuth2 credentials for authenticating with the Google Cloud Datastore API, if different from the ones in the Spring Cloud GCP Core Module | No | |
| spring.cloud.gcp.datastore.credentials.encoded-key | Base64-encoded OAuth2 credentials for authenticating with the Google Cloud Datastore API, if different from the ones in the Spring Cloud GCP Core Module | No | |
| spring.cloud.gcp.datastore.credentials.scopes | OAuth2 scope for Spring Cloud GCP Cloud Datastore credentials | No | |
| spring.cloud.gcp.datastore.namespace | The Cloud Datastore namespace to use | No | the Default namespace of Cloud Datastore in your GCP project |
Spring Data Repositories can be configured via the @EnableDatastoreRepositories
annotation on your main @Configuration
class.
With our Spring Boot Starter for Spring Data Cloud Datastore, @EnableDatastoreRepositories
is automatically added.
It is not required to add it to any other class, unless there is a need to override finer grain configuration parameters provided by @EnableDatastoreRepositories
.
Our Spring Boot autoconfiguration creates the following beans available in the Spring application context:

- DatastoreTemplate
- all user-defined repositories extending CrudRepository, PagingAndSortingRepository, and DatastoreRepository (an extension of PagingAndSortingRepository with additional Cloud Datastore features), when repositories are enabled
- Datastore from the Google Cloud Java Client for Datastore, for convenience and lower-level API access

Spring Data Cloud Datastore allows you to map domain POJOs to Cloud Datastore kinds and entities via annotations:
@Entity(name = "traders") public class Trader { @Id @Field(name = "trader_id") String traderId; String firstName; String lastName; @Transient Double temporaryNumber; }
Spring Data Cloud Datastore will ignore any property annotated with @Transient
.
These properties will not be written to or read from Cloud Datastore.
Simple constructors are supported on POJOs. The constructor arguments can be a subset of the persistent properties. Every constructor argument needs to have the same name and type as a persistent property on the entity and the constructor should set the property from the given argument. Arguments that are not directly set to properties are not supported.
@Entity(name = "traders") public class Trader { @Id @Field(name = "trader_id") String traderId; String firstName; String lastName; @Transient Double temporaryNumber; public Trader(String traderId, String firstName) { this.traderId = traderId; this.firstName = firstName; } }
The @Entity
annotation can provide the name of the Cloud Datastore kind that stores instances of the annotated class, one per row.
@Id
identifies the property corresponding to the ID value.
You must annotate one of your POJO’s fields as the ID value, because every entity in Cloud Datastore requires a single ID value:
@Entity(name = "trades") public class Trade { @Id @Field(name = "trade_id") String tradeId; @Field(name = "trader_id") String traderId; String action; Double price; Double shares; String symbol; }
Datastore can automatically allocate integer ID values.
If a POJO instance with a Long
ID property is written to Cloud Datastore with null
as the ID value, then Spring Data Cloud Datastore will obtain a newly allocated ID value from Cloud Datastore and set that in the POJO for saving.
Because primitive long
ID properties cannot be null
and default to 0
, keys will not be allocated.
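For example, an entity sketched with a boxed Long ID (the Book kind and its fields are hypothetical names, not from the library) will receive an allocated ID when saved with a null ID value:

```java
import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity;
import org.springframework.data.annotation.Id;

// Hypothetical entity: using the boxed Long type lets Cloud Datastore allocate the ID
@Entity(name = "books")
public class Book {
    @Id
    Long id; // leave null before saving to receive an allocated ID value

    String title;
}
```

Had id been a primitive long, it would default to 0 and no key would be allocated.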
All accessible properties on POJOs are automatically recognized as Cloud Datastore fields.
Field naming is generated by the PropertyNameFieldNamingStrategy
by default defined on the DatastoreMappingContext
bean.
The @Field
annotation optionally provides a different field name than that of the property.
Spring Data Cloud Datastore supports the following types for regular fields and elements of collections:
Type | Stored as |
---|---|
com.google.cloud.Timestamp | com.google.cloud.datastore.TimestampValue |
com.google.cloud.datastore.Blob | com.google.cloud.datastore.BlobValue |
com.google.cloud.datastore.LatLng | com.google.cloud.datastore.LatLngValue |
java.lang.Boolean, boolean | com.google.cloud.datastore.BooleanValue |
java.lang.Double, double | com.google.cloud.datastore.DoubleValue |
java.lang.Long, long | com.google.cloud.datastore.LongValue |
java.lang.Integer, int | com.google.cloud.datastore.LongValue |
java.lang.String | com.google.cloud.datastore.StringValue |
com.google.cloud.datastore.Entity | com.google.cloud.datastore.EntityValue |
com.google.cloud.datastore.Key | com.google.cloud.datastore.KeyValue |
byte[] | com.google.cloud.datastore.BlobValue |
Java enum values | com.google.cloud.datastore.StringValue |
In addition, all types that can be converted to the ones listed in the table by
org.springframework.core.convert.support.DefaultConversionService
are supported.
Custom converters can be used to extend the type support for user-defined types.

- The converters need to implement the org.springframework.core.convert.converter.Converter interface in both directions.
- The converters need to be passed to the DatastoreCustomConversions constructor, which then has to be made available as a @Bean for DatastoreCustomConversions.

For example:
We would like to have a field of type Album
on our Singer
POJO and want it to be stored as a string property:
@Entity public class Singer { @Id String singerId; String name; Album album; }
Where Album is a simple class:
public class Album { String albumName; LocalDate date; }
We have to define the two converters:
//Converter to write custom Album type static final Converter<Album, String> ALBUM_STRING_CONVERTER = new Converter<Album, String>() { @Override public String convert(Album album) { return album.getAlbumName() + " " + album.getDate().format(DateTimeFormatter.ISO_DATE); } }; //Converters to read custom Album type static final Converter<String, Album> STRING_ALBUM_CONVERTER = new Converter<String, Album>() { @Override public Album convert(String s) { String[] parts = s.split(" "); return new Album(parts[0], LocalDate.parse(parts[parts.length - 1], DateTimeFormatter.ISO_DATE)); } };
That will be configured in our @Configuration
file:
@Configuration public class ConverterConfiguration { @Bean public DatastoreCustomConversions datastoreCustomConversions() { return new DatastoreCustomConversions( Arrays.asList( ALBUM_STRING_CONVERTER, STRING_ALBUM_CONVERTER)); } }
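The conversion logic itself is plain Java, so it can be exercised outside of Spring. The following standalone sketch mirrors the two converters above with a stand-in Album class (the class and method names here are illustrative, not part of the library):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class AlbumConversionDemo {

    // Minimal stand-in for the Album POJO from the example above
    static class Album {
        final String albumName;
        final LocalDate date;

        Album(String albumName, LocalDate date) {
            this.albumName = albumName;
            this.date = date;
        }
    }

    // Write side: mirrors ALBUM_STRING_CONVERTER
    static String toStringValue(Album album) {
        return album.albumName + " " + album.date.format(DateTimeFormatter.ISO_DATE);
    }

    // Read side: mirrors STRING_ALBUM_CONVERTER
    static Album fromStringValue(String s) {
        String[] parts = s.split(" ");
        return new Album(parts[0],
                LocalDate.parse(parts[parts.length - 1], DateTimeFormatter.ISO_DATE));
    }

    public static void main(String[] args) {
        Album original = new Album("Greatest", LocalDate.of(2018, 5, 1));
        String stored = toStringValue(original); // "Greatest 2018-05-01"
        Album roundTripped = fromStringValue(stored);
        System.out.println(roundTripped.albumName + " " + roundTripped.date);
    }
}
```

Note that, as in the original converter, an album name containing spaces would not survive the round trip; a production converter would need a more robust encoding.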
Arrays and collections (types that implement java.util.Collection
) of supported types are supported.
They are stored as com.google.cloud.datastore.ListValue
.
Elements are converted to Cloud Datastore supported types individually. byte[]
is an exception, it is converted to
com.google.cloud.datastore.Blob
.
Users can provide converters from List<?>
to the custom collection type.
Only a read converter is necessary; the Collection API is used on the write side to convert a collection to the internal list type.
Collection converters need to implement the org.springframework.core.convert.converter.Converter
interface.
Example:
Let’s improve the Singer class from the previous example.
Instead of a field of type Album
, we would like to have a field of type ImmutableSet<Album>
:
@Entity public class Singer { @Id String singerId; String name; ImmutableSet<Album> albums; }
We have to define a read converter only:
static final Converter<List<?>, ImmutableSet<?>> LIST_IMMUTABLE_SET_CONVERTER = new Converter<List<?>, ImmutableSet<?>>() { @Override public ImmutableSet<?> convert(List<?> source) { return ImmutableSet.copyOf(source); } };
And add it to the list of custom converters:
@Configuration public class ConverterConfiguration { @Bean public DatastoreCustomConversions datastoreCustomConversions() { return new DatastoreCustomConversions( Arrays.asList( LIST_IMMUTABLE_SET_CONVERTER, ALBUM_STRING_CONVERTER, STRING_ALBUM_CONVERTER)); } }
There are three ways to represent relationships between entities that are described in this section:

- embedded entities stored directly in the field of the containing entity
- @Descendants annotated properties for one-to-many relationships
- @Reference annotated properties for general relationships without hierarchy

Fields whose types are also annotated with @Entity are converted to EntityValue and stored inside the parent entity.
Here is an example of Cloud Datastore entity containing an embedded entity in JSON:
{ "name" : "Alexander", "age" : 47, "child" : {"name" : "Philip" } }
This corresponds to a simple pair of Java entities:
import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity; import org.springframework.data.annotation.Id; @Entity("parents") public class Parent { @Id String name; Child child; } @Entity public class Child { String name; }
Child
entities are not stored in their own kind.
They are stored in their entirety in the child
field of the parents
kind.
Multiple levels of embedded entities are supported.
Note: Embedded entities don’t need to have an @Id field; it is only required for top-level entities.
Example:
Entities can hold embedded entities that are their own type. We can store trees in Cloud Datastore using this feature:
import org.springframework.cloud.gcp.data.datastore.core.mapping.Embedded; import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity; import org.springframework.data.annotation.Id; @Entity public class EmbeddableTreeNode { @Id long value; EmbeddableTreeNode left; EmbeddableTreeNode right; Map<String, Long> longValues; Map<String, List<Timestamp>> listTimestamps; public EmbeddableTreeNode(long value, EmbeddableTreeNode left, EmbeddableTreeNode right) { this.value = value; this.left = left; this.right = right; } }
Maps will be stored as embedded entities where the key values become the field names in the embedded entity. The value types in these maps can be any regularly supported property type, and the key values will be converted to String using the configured converters.
Also, a collection of entities can be embedded; it will be converted to ListValue
on write.
Example:
Instead of a binary tree from the previous example, we would like to store a general tree
(each node can have an arbitrary number of children) in Cloud Datastore.
To do that, we need to create a field of type List<EmbeddableTreeNode>
:
import org.springframework.cloud.gcp.data.datastore.core.mapping.Embedded; import org.springframework.data.annotation.Id; public class EmbeddableTreeNode { @Id long value; List<EmbeddableTreeNode> children; Map<String, EmbeddableTreeNode> siblingNodes; Map<String, Set<EmbeddableTreeNode>> subNodeGroups; public EmbeddableTreeNode(List<EmbeddableTreeNode> children) { this.children = children; } }
Because Maps are stored as embedded entities, they can themselves hold further embedded entities.
Parent-child relationships are supported via the @Descendants
annotation.
Unlike embedded children, descendants are fully-formed entities residing in their own kinds. The parent entity does not have an extra field to hold the descendant entities. Instead, the relationship is captured in the descendants' keys, which refer to their parent entities:
import org.springframework.cloud.gcp.data.datastore.core.mapping.Descendants; import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity; import org.springframework.data.annotation.Id; @Entity("orders") public class ShoppingOrder { @Id long id; @Descendants List<Item> items; } @Entity("purchased_item") public class Item { @Id Key purchasedItemKey; String name; Timestamp timeAddedToOrder; }
For example, an instance of a GQL key-literal representation for Item
would also contain the parent ShoppingOrder
ID value:
Key(orders, '12345', purchased_item, 'eggs')
The GQL key-literal representation for the parent ShoppingOrder
would be:
Key(orders, '12345')
The Cloud Datastore entities exist separately in their own kinds.
The ShoppingOrder
:
{ "id" : 12345 }
The two items inside that order:
{ "purchasedItemKey" : Key(orders, '12345', purchased_item, 'eggs'), "name" : "eggs", "timeAddedToOrder" : "2014-09-27 12:30:00.45-8:00" } { "purchasedItemKey" : Key(orders, '12345', purchased_item, 'sausage'), "name" : "sausage", "timeAddedToOrder" : "2014-09-28 11:30:00.45-9:00" }
The parent-child relationship structure of objects is stored in Cloud Datastore using Datastore’s ancestor relationships. Because the relationships are defined by the Ancestor mechanism, there is no extra column needed in either the parent or child entity to store this relationship. The relationship link is part of the descendant entity’s key value. These relationships can be many levels deep.
Properties holding child entities must be collection-like, but they can be any of the supported inter-convertible collection-like types that are supported for regular properties such as List
, arrays, Set
, etc…
Child items must have Key
as their ID type because Cloud Datastore stores the ancestor relationship link inside the keys of the children.
Reading or saving an entity automatically causes all subsequent levels of children under that entity to be read or saved, respectively.
If a new child is created and added to a property annotated @Descendants
and the key property is left null, then a new key will be allocated for that child.
The ordering of the retrieved children may not be the same as the ordering in the original property that was saved.
Child entities cannot be moved from the property of one parent to that of another unless the child’s key property is set to null
or a value that contains the new parent as an ancestor.
Since Cloud Datastore entity keys can have multiple parents, it is possible that a child entity appears in the property of multiple parent entities.
Because entity keys are immutable in Cloud Datastore, to change the key of a child you must delete the existing one and re-save it with the new key.
General relationships can be stored using the @Reference
annotation.
import org.springframework.cloud.gcp.data.datastore.core.mapping.Reference; import org.springframework.data.annotation.Id; @Entity public class ShoppingOrder { @Id long id; @Reference List<Item> items; @Reference Item specialSingleItem; } @Entity public class Item { @Id Key purchasedItemKey; String name; Timestamp timeAddedToOrder; }
@Reference
relationships are between fully-formed entities residing in their own kinds.
The relationship between ShoppingOrder
and Item
entities are stored as a Key field inside ShoppingOrder
, which are resolved to the underlying Java entity type by Spring Data Cloud Datastore:
{ "id" : 12345, "specialSingleItem" : Key(item, "milk"), "items" : [ Key(item, "eggs"), Key(item, "sausage") ] }
Reference properties can either be singular or collection-like. These properties correspond to actual columns in the entity and Cloud Datastore Kind that hold the key values of the referenced entities. The referenced entities are full-fledged entities of other Kinds.
Similar to the @Descendants
relationships, reading or writing an entity will recursively read or write all of the referenced entities at all levels.
If referenced entities have null
ID values, then they will be saved as new entities and will have ID values allocated by Cloud Datastore.
There are no requirements for relationships between the key of an entity and the keys that entity holds as references.
The order of collection-like reference properties is not preserved when reading back from Cloud Datastore.
DatastoreOperations
and its implementation, DatastoreTemplate
, provides the Template pattern familiar to Spring developers.
Using the auto-configuration provided by Spring Boot Starter for Datastore, your Spring application context will contain a fully configured DatastoreTemplate
object that you can autowire in your application:
@SpringBootApplication public class DatastoreTemplateExample { @Autowired DatastoreTemplate datastoreTemplate; public void doSomething() { this.datastoreTemplate.deleteAll(Trader.class); //... Trader t = new Trader(); //... this.datastoreTemplate.save(t); //... List<Trader> traders = datastoreTemplate.findAll(Trader.class); //... } }
The Template API provides convenience methods for:
In addition to retrieving entities by their IDs, you can also submit queries.
<T> Iterable<T> query(Query<? extends BaseEntity> query, Class<T> entityClass);
<A, T> Iterable<T> query(Query<A> query, Function<A, T> entityFunc);
Iterable<Key> queryKeys(Query<Key> query);
These methods, respectively, allow querying for:

- entities mapped by a given entity class using all the same mapping and converting features
- arbitrary types produced by a given mapping function
- only the Cloud Datastore keys of the entities found by the query
Cloud Datastore supports reading a single entity or multiple entities in a kind.
Using DatastoreTemplate
you can execute reads, for example:
Trader trader = this.datastoreTemplate.findById("trader1", Trader.class); List<Trader> traders = this.datastoreTemplate.findAllById(ImmutableList.of("trader1", "trader2"), Trader.class); List<Trader> allTraders = this.datastoreTemplate.findAll(Trader.class);
Cloud Datastore executes key-based reads with strong consistency, but queries with eventual consistency.
In the example above the first two reads utilize keys, while the third is executed using a query based on the corresponding Kind of Trader
.
By default, all fields are indexed.
To disable indexing on a particular field, the @Unindexed annotation can be used.
Example:
import org.springframework.cloud.gcp.data.datastore.core.mapping.Unindexed; public class ExampleItem { long indexedField; @Unindexed long unindexedField; }
When using queries directly or via Query Methods, Cloud Datastore requires composite custom indexes if the select statement is not SELECT *
or if there is more than one filtering condition in the WHERE
clause.
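Composite indexes for Cloud Datastore are declared in an index.yaml file and deployed with the gcloud CLI. As a sketch, a query filtering on both action and symbol of a trades kind might require an index like the following (the kind and property names are illustrative):

```yaml
# index.yaml - composite index covering two filter conditions on one kind
indexes:
- kind: trades
  properties:
  - name: action
  - name: symbol
```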
DatastoreRepository
and custom-defined entity repositories implement the Spring Data PagingAndSortingRepository
, which supports offsets and limits using page numbers and page sizes.
Paging and sorting options are also supported in DatastoreTemplate
by supplying a DatastoreQueryOptions
to findAll
.
The write methods of DatastoreOperations
accept a POJO and write all of its properties to Datastore.
The required Datastore kind and entity metadata is obtained from the given object’s actual type.
If a POJO was retrieved from Datastore and its ID value was changed and then written or updated, the operation will occur as if against a row with the new ID value. The entity with the original ID value will not be affected.
Trader t = new Trader(); this.datastoreTemplate.save(t);
The save
method behaves as update-or-insert.
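Because save is an upsert, writing a second instance with the same ID overwrites the stored entity rather than creating a new one. A sketch, assuming the Trader entity from the earlier examples:

```java
Trader t = new Trader();
t.traderId = "demo_trader";
t.firstName = "Alice";
this.datastoreTemplate.save(t); // inserts a new entity

t.firstName = "Alicia";
this.datastoreTemplate.save(t); // updates the same entity in place
```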
Read and write transactions are provided by DatastoreOperations
via the performTransaction
method:
@Autowired DatastoreOperations myDatastoreOperations; public String doWorkInsideTransaction() { return myDatastoreOperations.performTransaction( transactionDatastoreOperations -> { // Work with transactionDatastoreOperations here. // It is also a DatastoreOperations object. return "transaction completed"; } ); }
The performTransaction
method accepts a Function
that is provided an instance of a DatastoreOperations
object.
The final returned value and type of the function is determined by the user.
You can use this object just as you would a regular DatastoreOperations
with an exception:
Because of Cloud Datastore’s consistency guarantees, there are limitations to the operations and relationships among entities used inside transactions.
This feature requires a bean of DatastoreTransactionManager
, which is provided when using spring-cloud-gcp-starter-data-datastore
.
DatastoreTemplate
and DatastoreRepository
support running methods with the @Transactional
annotation as transactions.
If a method annotated with @Transactional
calls another method also annotated, then both methods will work within the same transaction.
performTransaction
cannot be used in @Transactional
annotated methods because Cloud Datastore does not support transactions within transactions.
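A minimal sketch of the declarative style (the service class and method below are hypothetical):

```java
@Service
public class TraderService {

    @Autowired
    private TraderRepository traderRepository;

    // Both operations below run in a single Cloud Datastore transaction
    @Transactional
    public void replaceTrader(Trader oldTrader, Trader newTrader) {
        this.traderRepository.delete(oldTrader);
        this.traderRepository.save(newTrader);
    }
}
```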
You can work with Maps of type Map<String, ?>
instead of with entity objects by directly reading and writing them to and from Cloud Datastore.
Note: This is a different situation than using entity objects that contain Map properties.
The map keys are used as field names for a Datastore entity and map values are converted to Datastore supported types. Only simple types are supported (i.e., collections are not supported). Converters for custom value types can be added (see Section 13.2.10, “Custom types”).
Example:
Map<String, Long> map = new HashMap<>(); map.put("field1", 1L); map.put("field2", 2L); map.put("field3", 3L); Key keyForMap = datastoreTemplate.createKey("kindName", "id"); //write a map datastoreTemplate.writeMap(keyForMap, map); //read a map Map<String, Long> loadedMap = datastoreTemplate.findByIdAsMap(keyForMap, Long.class);
Spring Data Repositories are an abstraction that can reduce boilerplate code.
For example:
public interface TraderRepository extends DatastoreRepository<Trader, String> { }
Spring Data generates a working implementation of the specified interface, which can be autowired into an application.
The Trader
type parameter to DatastoreRepository
refers to the underlying domain type.
The second type parameter, String
in this case, refers to the type of the key of the domain type.
public class MyApplication { @Autowired TraderRepository traderRepository; public void demo() { this.traderRepository.deleteAll(); String traderId = "demo_trader"; Trader t = new Trader(); t.traderId = traderId; this.traderRepository.save(t); Iterable<Trader> allTraders = this.traderRepository.findAll(); long count = this.traderRepository.count(); } }
Repositories allow you to define custom Query Methods (detailed in the following sections) for retrieving, counting, and deleting based on filtering and paging parameters. Filtering parameters can be of types supported by your configured custom converters.
public interface TradeRepository extends DatastoreRepository<Trade, String> { List<Trade> findByAction(String action); int countByAction(String action); boolean existsByAction(String action); List<Trade> findTop3ByActionAndSymbolAndPriceGreaterThanAndPriceLessThanOrEqualOrderBySymbolDesc( String action, String symbol, double priceFloor, double priceCeiling); Page<Trade> findByAction(String action, Pageable pageable); Slice<Trade> findBySymbol(String symbol, Pageable pageable); List<Trade> findBySymbol(String symbol, Sort sort); }
In the example above, the query methods in TradeRepository are generated based on the names of the methods, using the Spring Data Query creation naming convention (https://docs.spring.io/spring-data/data-commons/docs/current/reference/html#repositories.query-methods.query-creation).
Cloud Datastore only supports filter components joined by AND, and the following operations:
equals
greater than or equals
greater than
less than or equals
less than
is null
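As a sketch, each supported operation corresponds to a query-method keyword (the method names below are illustrative):

```java
public interface TradeRepository extends DatastoreRepository<Trade, String> {
    List<Trade> findByAction(String action);                       // equals
    List<Trade> findByPriceGreaterThanEqual(double price);         // greater than or equals
    List<Trade> findByPriceGreaterThan(double price);              // greater than
    List<Trade> findByPriceLessThanEqual(double price);            // less than or equals
    List<Trade> findByPriceLessThan(double price);                 // less than
    List<Trade> findByActionIsNull();                              // is null
    List<Trade> findByActionAndSymbol(String action, String sym);  // filters joined by AND
}
```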
After writing a custom repository interface specifying just the signatures of these methods, implementations are generated for you and can be used with an auto-wired instance of the repository.
Because of Cloud Datastore’s requirement that explicitly selected fields must all appear in a composite index together, find
name-based query methods are run as SELECT *
.
Delete queries are also supported.
For example, query methods such as deleteByAction
or removeByAction
delete entities found by findByAction
.
Delete queries are executed as separate read and delete operations instead of as a single transaction because Cloud Datastore cannot query in transactions unless ancestors for queries are specified.
As a result, removeBy
and deleteBy
name-convention query methods cannot be used inside transactions via either performInTransaction
or @Transactional
annotation.
Delete queries can have the following return types:

- an integer type that is the number of entities deleted
- a collection of the entities that were deleted
- void
Methods can have org.springframework.data.domain.Pageable
parameter to control pagination and sorting, or org.springframework.data.domain.Sort
parameter to control sorting only.
See Spring Data documentation for details.
For returning multiple items in a repository method, we support Java collections as well as org.springframework.data.domain.Page
and org.springframework.data.domain.Slice
.
If a method’s return type is org.springframework.data.domain.Page
, the returned object will include current page, total number of results and total number of pages.
Note | |
---|---|
Methods that return |
Custom GQL queries can be mapped to repository methods in one of two ways:

- namedQueries properties file
- using the @Query annotation

Using the @Query annotation:
The names of the tags of the GQL correspond to the @Param
annotated names of the method parameters.
public interface TraderRepository extends DatastoreRepository<Trader, String> { @Query("SELECT * FROM traders WHERE name = @trader_name") List<Trader> tradersByName(@Param("trader_name") String traderName); @Query("SELECT * FROM test_entities_ci WHERE id = @id_val") TestEntity getOneTestEntity(@Param("id_val") long id); }
The following parameter types are supported:
- com.google.cloud.Timestamp
- com.google.cloud.datastore.Blob
- com.google.cloud.datastore.Key
- com.google.cloud.datastore.Cursor
- java.lang.Boolean
- java.lang.Double
- java.lang.Long
- java.lang.String
- enum values (these are queried as String values)

With the exception of Cursor, array forms of each of the types are also supported.
If you would like to obtain the count of items of a query or if there are any items returned by the query, set the count = true
or exists = true
properties of the @Query
annotation, respectively.
The return type of the query method in these cases should be an integer type or a boolean type.
Cloud Datastore provides the SELECT __key__ FROM … special column for all kinds that retrieves the Key of each row. Selecting this special __key__ column is especially useful and efficient for count and exists queries.
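For example, count and exists methods might be sketched as follows (the kind and method names are illustrative):

```java
public interface TradeRepository extends DatastoreRepository<Trade, String> {

    @Query(value = "SELECT __key__ FROM trades WHERE action = @action", count = true)
    int countTradesByAction(@Param("action") String action);

    @Query(value = "SELECT __key__ FROM trades WHERE action = @action", exists = true)
    boolean tradeWithActionExists(@Param("action") String action);
}
```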
You can also query for non-entity types:
@Query(value = "SELECT __key__ from test_entities_ci") List<Key> getKeys(); @Query(value = "SELECT __key__ from test_entities_ci limit 1") Key getKey(); @Query("SELECT id FROM test_entities_ci WHERE id <= @id_val") List<String> getIds(@Param("id_val") long id); @Query("SELECT id FROM test_entities_ci WHERE id <= @id_val limit 1") String getOneId(@Param("id_val") long id);
SpEL can be used to provide GQL parameters:
@Query("SELECT * FROM |com.example.Trade| WHERE trades.action = @act AND price > :#{#priceRadius * -1} AND price < :#{#priceRadius * 2}") List<Trade> fetchByActionNamedQuery(@Param("act") String action, @Param("priceRadius") Double r);
Kind names can be directly written in the GQL annotations.
Kind names can also be resolved from the @Entity
annotation on domain classes.
In this case, the query should refer to table names with fully qualified class names surrounded by |
characters: |fully.qualified.ClassName|
.
This is useful when SpEL expressions appear in the kind name provided to the @Entity
annotation.
For example:
@Query("SELECT * FROM |com.example.Trade| WHERE trades.action = @act") List<Trade> fetchByActionNamedQuery(@Param("act") String action);
You can also specify queries with Cloud Datastore parameter tags and SpEL expressions in properties files.
By default, the namedQueriesLocation
attribute on @EnableDatastoreRepositories
points to the META-INF/datastore-named-queries.properties
file.
You can specify the query for a method in the properties file by providing the GQL as the value for the "interface.method" property:
Trader.fetchByName=SELECT * FROM traders WHERE name = @tag0
public interface TraderRepository extends DatastoreRepository<Trader, String> { // This method uses the query from the properties file instead of one generated based on name. List<Trader> fetchByName(@Param("tag0") String traderName); }
These transactions work very similarly to those of DatastoreOperations, but are specific to the repository’s domain type and provide repository functions instead of template functions.
For example, this is a read-write transaction:
@Autowired DatastoreRepository myRepo; public String doWorkInsideTransaction() { return myRepo.performTransaction( transactionDatastoreRepo -> { // Work with the single-transaction transactionDatastoreRepo here. // This is a DatastoreRepository object. return "transaction completed"; } ); }
Spring Data Cloud Datastore supports projections. You can define projection interfaces based on domain types and add query methods that return them in your repository:
public interface TradeProjection { String getAction(); @Value("#{target.symbol + ' ' + target.action}") String getSymbolAndAction(); } public interface TradeRepository extends DatastoreRepository<Trade, Key> { List<Trade> findByTraderId(String traderId); List<TradeProjection> findByAction(String action); @Query("SELECT action, symbol FROM trades WHERE action = @action") List<TradeProjection> findByQuery(String action); }
Projections can be provided by name-convention-based query methods as well as by custom GQL queries.
If using custom GQL queries, you can further restrict the fields retrieved from Cloud Datastore to just those required by the projection.
However, custom select statements (those not using SELECT *
) require composite indexes containing the selected fields.
Properties of projection types defined using SpEL use the fixed name target
for the underlying domain object.
As a result, accessing underlying properties take the form target.<property-name>
.
When running with Spring Boot, repositories can be exposed as REST services by simply adding this dependency to your pom file:
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency>
If you prefer to configure parameters (such as path), you can use @RepositoryRestResource
annotation:
@RepositoryRestResource(collectionResourceRel = "trades", path = "trades") public interface TradeRepository extends DatastoreRepository<Trade, String> { }
For example, you can retrieve all Trade
objects in the repository by using curl http://<server>:<port>/trades
, or any specific trade via curl http://<server>:<port>/trades/<trader_id>
.
You can also write trades using curl -XPOST -H"Content-Type: application/json" -d @test.json http://<server>:<port>/trades/
where the file test.json
holds the JSON representation of a Trade
object.
To delete trades, you can use curl -XDELETE http://<server>:<port>/trades/<trader_id>.
A Simple Spring Boot Application and more advanced Sample Spring Boot Application are provided to show how to use the Spring Data Cloud Datastore starter and template.
Cloud Memorystore for Redis provides a fully managed in-memory data store service. Cloud Memorystore is compatible with the Redis protocol, allowing easy integration with Spring Caching.
All you have to do is create a Cloud Memorystore instance and use its IP address in application.properties
file as spring.redis.host
property value.
Everything else is exactly the same as setting up redis-backed Spring caching.
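For instance, if your Memorystore instance were reachable at 10.0.0.3 (a placeholder address), the only Redis-specific configuration needed would be:

```properties
# application.properties - point Spring's Redis support at the Memorystore instance
spring.redis.host=10.0.0.3
```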
Note: Memorystore instances and your application instances have to be located in the same region.
In short, the following dependencies are needed:
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-cache</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-redis</artifactId> </dependency>
And then you can use org.springframework.cache.annotation.Cacheable
annotation for methods you’d like to be cached.
@Cacheable("cache1") public String hello(@PathVariable String name) { .... }
If you are interested in a detailed how-to guide, please check Spring Boot Caching using Cloud Memorystore codelab.
Cloud Memorystore documentation can be found here.
Cloud Identity-Aware Proxy (IAP) provides a security layer over applications deployed to Google Cloud.
The IAP starter uses Spring Security OAuth 2.0 Resource Server functionality to automatically extract user identity from the proxy-injected x-goog-iap-jwt-assertion
HTTP header.
The following claims are validated automatically:
The audience ("aud"
) validation is automatically configured when the application is running on App Engine Standard or App Engine Flexible.
For other runtime environments, a custom audience must be provided through spring.cloud.gcp.security.iap.audience
property.
The custom property, if specified, overrides the automatic App Engine audience detection.
Important: There is no automatic audience string configuration for Compute Engine or Kubernetes Engine. To use the IAP starter on GCE/GKE, find the Audience string per instructions in the Verify the JWT payload guide, and specify it in the spring.cloud.gcp.security.iap.audience property.
Note | |
---|---|
If you create a custom |
Starter Maven coordinates, using Spring Cloud GCP BOM:
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-gcp-starter-security-iap</artifactId> </dependency>
Starter Gradle coordinates:
dependencies { compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-security-iap' }
The following properties are available.
Caution: Modifying registry, algorithm, and header properties might be useful for testing, but the defaults should not be changed in production.
Name | Description | Required | Default |
---|---|---|---|
| Link to JWK public key registry. | true | |
| Encryption algorithm used to sign the JWK token. | true |
|
| Header from which to extract the JWK key. | true |
|
| JWK issuer to verify. | true | |
| Custom JWK audience to verify. | false on App Engine; true on GCE/GKE |
A sample application is available.
The Google Cloud Vision API allows users to leverage machine learning algorithms for processing images, including image classification, face detection, text extraction, and others.
Spring Cloud GCP provides:
A Cloud Vision Template which simplifies interactions with the Cloud Vision API.
Maven coordinates, using Spring Cloud GCP BOM:
<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-gcp-starter-vision</artifactId> </dependency>
Gradle coordinates:
dependencies { compile group: 'org.springframework.cloud', name: 'spring-cloud-gcp-starter-vision' }
The CloudVisionTemplate
offers a simple way to use the Cloud Vision APIs with Spring Resources.
After you add the spring-cloud-gcp-starter-vision
dependency to your project, you may @Autowire
an instance of CloudVisionTemplate
to use in your code.
The CloudVisionTemplate
offers the following method for interfacing with Cloud Vision:
public AnnotateImageResponse analyzeImage(Resource imageResource, Feature.Type… featureTypes)
Parameters:

- Resource imageResource refers to the Spring Resource of the image object you wish to analyze. The Google Cloud Vision documentation provides a list of the image types that it supports.
- Feature.Type… featureTypes refers to a var-arg array of Cloud Vision Features to extract from the image. A feature is a kind of image analysis to perform on an image, such as label detection, text recognition (OCR), facial detection, and so on. You may specify multiple features to analyze within one request. A full list of Cloud Vision Features is provided in the Cloud Vision Feature docs.

Returns:

- AnnotateImageResponse contains the results of all the feature analyses that were specified in the request.
For each feature type that you provide in the request, AnnotateImageResponse
provides a getter method to get the result of that feature analysis.
For example, if you analyzed an image using the LABEL_DETECTION
feature, you would retrieve the results from the response using annotateImageResponse.getLabelAnnotationsList()
.
AnnotateImageResponse
is provided by the Google Cloud Vision libraries; please consult the RPC reference or Javadoc for more details.
Additionally, you may consult the Cloud Vision docs to familiarize yourself with the concepts and features of the API.
Image labeling refers to producing labels that describe the contents of an image. Below is a code sample of how this is done using the Cloud Vision Spring Template.
@Autowired private ResourceLoader resourceLoader;

@Autowired private CloudVisionTemplate cloudVisionTemplate;

public void processImage() {
    Resource imageResource = this.resourceLoader.getResource("my_image.jpg");
    AnnotateImageResponse response = this.cloudVisionTemplate.analyzeImage(
            imageResource, Type.LABEL_DETECTION);
    System.out.println("Image Classification results: "
            + response.getLabelAnnotationsList());
}
A Sample Spring Boot Application is provided to show how to use the Cloud Vision starter and template.
Spring Cloud GCP provides support for Cloud Foundry’s GCP Service Broker. Our Pub/Sub, Cloud Spanner, Storage, Stackdriver Trace, and Cloud SQL (MySQL and PostgreSQL) starters are Cloud Foundry aware and retrieve properties such as the project ID and credentials from the Cloud Foundry environment for use in auto-configuration.
In cases like Pub/Sub’s topic and subscription, or Storage’s bucket name, where those parameters are not used in auto configuration, you can fetch them using the VCAP mapping provided by Spring Boot.
For example, to retrieve the provisioned Pub/Sub topic, you can use the vcap.services.mypubsub.credentials.topic_name
property from the application environment.
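For instance, you could map the bound topic name onto a property of your own application; the mypubsub segment below stands in for whatever name your service instance was bound under:

```properties
# "mypubsub" is a hypothetical bound service instance name.
my.app.topic-name=${vcap.services.mypubsub.credentials.topic_name}
```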
Note: If the same service is bound to the same application more than once, the auto-configuration will not be able to choose among bindings and will not be activated for that service. This includes both MySQL and PostgreSQL bindings to the same app.
Warning: In order for the Cloud SQL integration to work in Cloud Foundry, auto-reconfiguration must be disabled. You can do so using the …
The latest version of the Spring Framework provides first-class support for Kotlin. For Kotlin users of Spring, the Spring Cloud GCP libraries work out-of-the-box and are fully interoperable with Kotlin applications.
For more information on building a Spring application in Kotlin, please consult the Spring Kotlin documentation.
Ensure that your Kotlin application is properly set up. Based on your build system, you will need to include the correct Kotlin build plugin in your project:
Depending on your application’s needs, you may need to augment your build configuration with compiler plugins:
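As a sketch, a Maven-based Spring Boot Kotlin project typically registers the kotlin-maven-plugin together with the all-open compiler plugin's spring preset, so that Spring can proxy Kotlin's final-by-default classes. Versions and surrounding build configuration vary by project, so treat this fragment as illustrative:

```xml
<plugin>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-maven-plugin</artifactId>
    <configuration>
        <args>
            <!-- Strict null-safety for Spring's nullability annotations -->
            <arg>-Xjsr305=strict</arg>
        </args>
        <compilerPlugins>
            <!-- Opens Spring-annotated classes for proxying -->
            <plugin>spring</plugin>
        </compilerPlugins>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.jetbrains.kotlin</groupId>
            <artifactId>kotlin-maven-allopen</artifactId>
            <version>${kotlin.version}</version>
        </dependency>
    </dependencies>
</plugin>
```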
Once your Kotlin project is properly configured, the Spring Cloud GCP libraries will work within your application without any additional setup.
A Kotlin sample application is provided to demonstrate a working Maven setup and various Spring Cloud GCP integrations from within Kotlin.