This topic explains how to build a simple <%=vars.product_name_spring_long%> application.
This is a Customer Service application that uses VMware GemFire to manage Customer interactions. You should already be familiar with Spring Boot and VMware GemFire.
The primary focus of this sample is to demonstrate <%=vars.product_name_spring%>'s auto-configuration feature.
This topic builds on the Simplifying VMware GemFire with Spring Data presentation given by John Blum at the 2017 SpringOne Platform conference. Both the example presented in the talk and this example use Spring Boot, but this topic improves on the presentation's example by using <%=vars.product_name_spring%>.
Application Domain Classes
We will build the Spring Boot Customer Service application from the ground up.
Customer class
Like any sensible application development project, we begin by modeling the data our application needs to manage, namely a Customer
. For this example, the Customer
class is implemented as follows:
Customer class
@Region("Customers")
@EqualsAndHashCode
@ToString(of = "name")
@RequiredArgsConstructor(staticName = "newCustomer")
public class Customer {
@Id @NonNull @Getter
private Long id;
@NonNull @Getter
private String name;
}
The Customer
class uses Project Lombok to simplify the implementation so we can focus on the details we care about. Lombok is useful for testing or prototyping purposes. However, Project Lombok is optional, and we would not recommend it for most production applications.
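For reference, here is a minimal sketch of what the Customer class might look like without Project Lombok, written out by hand; the equals, hashCode, and toString implementations are abbreviated and this variant is not what the sample ships with:

Customer class without Project Lombok (sketch)

``` highlight
import java.util.Objects;

import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.annotation.Region;

@Region("Customers")
public class Customer {

    @Id
    private Long id;

    private String name;

    private Customer(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // Factory method mirroring Lombok's @RequiredArgsConstructor(staticName = "newCustomer")
    public static Customer newCustomer(Long id, String name) {
        return new Customer(id, name);
    }

    public Long getId() {
        return this.id;
    }

    public String getName() {
        return this.name;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof Customer)) return false;
        Customer that = (Customer) obj;
        return Objects.equals(this.id, that.id) && Objects.equals(this.name, that.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(this.id, this.name);
    }

    @Override
    public String toString() {
        return getName();
    }
}
```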
Additionally, the Customer
class is annotated with Spring Data for VMware GemFire’s @Region
annotation. @Region
is a mapping annotation declaring the VMware GemFire cache Region
in which Customer
data will be persisted.
Finally, the org.springframework.data.annotation.Id
annotation was used to designate the Customer.id
field as the identifier for Customer
objects. The identifier is the Key used in the Entry stored in the “Customers” Region. A Region is a distributed version of java.util.Map.
If the @Region
annotation is not explicitly declared, then Spring Data for VMware GemFire uses the simple name of the class, which in this case is “Customer”, to identify the Region
. However, there is another reason we explicitly annotated the Customer
class with @Region
, which we will cover below.
CustomerRepository interface
Next, we create a Data Access Object (DAO) to persist Customers
to VMware GemFire. We create the DAO using Spring Data’s Repository abstraction:
CustomerRepository interface
public interface CustomerRepository extends CrudRepository<Customer, Long> {
Customer findByNameLike(String name);
}
CustomerRepository
is a Spring Data CrudRepository
. CrudRepository
provides basic CRUD (CREATE, READ, UPDATE, and DELETE) data access operations along with the ability to define simple queries on Customers
.
Spring Data for VMware GemFire will create a proxy implementation for your application-specific Repository interfaces, implementing any query methods you may have explicitly defined on the interface in addition to the data access operations provided in the CrudRepository
interface extension.
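As a simple illustration of what the generated proxy gives you, a hypothetical service class (not part of this example) could inject and use the CustomerRepository directly:

Hypothetical CustomerService using the Repository proxy

``` highlight
import java.util.Optional;

import org.springframework.stereotype.Service;

// Hypothetical usage sketch; CustomerService is not part of this sample.
@Service
public class CustomerService {

    private final CustomerRepository customerRepository;

    public CustomerService(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public Customer save(Customer customer) {
        // CrudRepository.save(..) is implemented by the framework-generated proxy
        return this.customerRepository.save(customer);
    }

    public Optional<Customer> findById(Long id) {
        // CrudRepository.findById(..) is also provided by the proxy
        return this.customerRepository.findById(id);
    }

    public Customer findByName(String namePattern) {
        // Derived query method declared on CustomerRepository
        return this.customerRepository.findByNameLike(namePattern);
    }
}
```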
In addition to the base CrudRepository operations, CustomerRepository defines a findByNameLike(:String):Customer query method. The VMware GemFire OQL query is derived from the method declaration.
Note:
Though it is beyond the scope of this document,
Spring Data's Repository infrastructure is capable of
generating data store specific queries (e.g. VMware GemFire OQL) for
Repository interface query method declarations just by
introspecting the method signature. The query methods must conform to
specific conventions. Alternatively, users may use @Query
to annotate query methods to specify the raw query instead (i.e. OQL for
VMware GemFire, SQL for JDBC, possibly HQL for JPA).
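For example, if you preferred not to rely on query derivation, a hedged sketch of the same query declared with Spring Data for VMware GemFire's @Query annotation and raw OQL might look like the following; this is only an alternative sketch, not what the sample uses:

CustomerRepository with an explicit OQL query (sketch)

``` highlight
import org.springframework.data.gemfire.repository.Query;
import org.springframework.data.repository.CrudRepository;

public interface CustomerRepository extends CrudRepository<Customer, Long> {

    // Hand-written OQL equivalent of the derived query; $1 binds to the first method argument
    @Query("SELECT * FROM /Customers c WHERE c.name LIKE $1")
    Customer findByNameLike(String name);
}
```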
CustomerServiceApplication (Spring Boot main class)
Now that we have created the basic domain classes of our Customer Service application, we need a main application class to drive the interactions with Customers:
CustomerServiceApplication class
@SpringBootApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class, clientRegionShortcut = ClientRegionShortcut.LOCAL)
public class CustomerServiceApplication {
public static void main(String[] args) {
new SpringApplicationBuilder(CustomerServiceApplication.class)
.web(WebApplicationType.NONE)
.build()
.run(args);
}
@Bean
ApplicationRunner runner(CustomerRepository customerRepository) {
return args -> {
assertThat(customerRepository.count()).isEqualTo(0);
Customer jonDoe = Customer.newCustomer(1L, "Jon Doe");
System.err.printf("Saving Customer [%s]%n", jonDoe);
jonDoe = customerRepository.save(jonDoe);
assertThat(jonDoe).isNotNull();
assertThat(jonDoe.getId()).isEqualTo(1L);
assertThat(jonDoe.getName()).isEqualTo("Jon Doe");
assertThat(customerRepository.count()).isEqualTo(1);
System.err.println("Querying for Customer [SELECT * FROM /Customers WHERE name LIKE '%Doe']");
Customer queriedJonDoe = customerRepository.findByNameLike("%Doe");
assertThat(queriedJonDoe).isEqualTo(jonDoe);
System.err.printf("Customer was [%s]%n", queriedJonDoe);
};
}
}
The CustomerServiceApplication
class is annotated with @SpringBootApplication
. Therefore, the main class is a proper Spring Boot application equipped with all the features of Spring Boot (e.g. auto-configuration).
Additionally, we use Spring Boot’s SpringApplicationBuilder
in the main
method to configure and bootstrap the Customer Service application.
Then, we declare a Spring Boot ApplicationRunner
bean, which is invoked by Spring Boot after the Spring container (i.e. ApplicationContext
) has been properly initialized and started. Our ApplicationRunner
defines the Customer interactions performed by our Customer Service application.
Specifically, the runner creates a new Customer
object (“Jon Doe”), saves him to the “Customers” Region, and then queries for “Jon Doe” using an OQL query with the predicate: name LIKE '%Doe'
.
Note: % is the wildcard for OQL text searches.
Running the Example
You can run the CustomerServiceApplication
class from your IDE (e.g. IntelliJ IDEA) or from the command-line with the gradlew
command.
To run the CustomerServiceApplication class from inside your IDE, simply create a run profile configuration and run it. To run it from the command-line with gradlew, execute the bootRun task:
$ gradlew :spring-boot:boot:configuration:bootRun
If you wish to adjust the log levels for either VMware GemFire or Spring Boot while running the example, then you can set the log level for the individual Loggers (i.e. org.apache
or org.springframework
) in src/main/resources/logback.xml
:
spring-boot/boot/configuration/src/main/resources/logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
<statusListener class="ch.qos.logback.core.status.NopStatusListener"/>
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d %5p %40.40c:%4L - %m%n</pattern>
</encoder>
</appender>
<logger name="ch.qos.logback" level="${logback.log.level:-ERROR}"/>
<logger name="org.apache" level="${logback.log.level:-ERROR}"/>
<logger name="org.springframework" level="${logback.log.level:-ERROR}"/>
<root level="${logback.log.level:-ERROR}">
<appender-ref ref="console"/>
</root>
</configuration>
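Because each logger's level is resolved from the logback.log.level property (note the ${logback.log.level:-ERROR} placeholders above), you can, for example, raise the log output to DEBUG without editing the file by supplying that property as a JVM system property when running the application:

Overriding the log level with a system property

``` highlight
-Dlogback.log.level=DEBUG
```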
Auto-configuration for VMware GemFire, Take One
Cache instance
To put anything into VMware GemFire you need a cache instance. A cache instance is also required to create Regions
which ultimately store the application's data (state). Again, a Region is just a Key/Value data structure, like java.util.Map, mapping a Key to a Value (an Object). A Region is actually much more than a simple Map since it is distributed. However, since Region implements java.util.Map, it can be treated as such.
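To make the Map analogy concrete, here is a hedged sketch of using the Region API directly (bypassing the Repository); the regionAsMapRunner bean is hypothetical and not part of this example, and it assumes the auto-configured ClientCache can be injected:

Using a Region like a Map (sketch)

``` highlight
// Hypothetical bean (not part of this example) showing a Region used as a java.util.Map.
// Requires imports for org.apache.geode.cache.Region and org.apache.geode.cache.client.ClientCache.
@Bean
ApplicationRunner regionAsMapRunner(ClientCache clientCache) {

    return args -> {

        // Look up the client "Customers" Region and use it like a Map
        Region<Long, Customer> customers = clientCache.getRegion("Customers");

        customers.put(2L, Customer.newCustomer(2L, "Jane Doe"));

        Customer janeDoe = customers.get(2L);

        System.err.printf("Read Customer [%s] directly from the Region%n", janeDoe);
    };
}
```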
Note: A complete discussion of Regions and their concepts is beyond the scope of this document. For more information, see Data Regions in the VMware GemFire product documentation.
<%=vars.product_name_spring%> is opinionated and assumes most VMware GemFire applications will be client applications in VMware GemFire’s client/server topology. Therefore, <%=vars.product_name_spring%> auto-configures a ClientCache
instance by default.
The intrinsic ClientCache
auto-configuration provided by <%=vars.product_name_spring%> can be made apparent by disabling it:
Disabling ClientCache Auto-configuration
@SpringBootApplication(exclude = ClientCacheAutoConfiguration.class)
@EnableEntityDefinedRegions(basePackageClasses = Customer.class, clientRegionShortcut = ClientRegionShortcut.LOCAL)
public class CustomerServiceApplication {
// ...
}
Note the exclude
on the ClientCacheAutoConfiguration.class
.
With the correct log level set, you will see an error message similar to:
Error resulting from no ClientCache instance
16:20:47.543 [main] DEBUG o.s.b.d.LoggingFailureAnalysisReporter - Application failed to start due to an exception
org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'example.app.crm.repo.CustomerRepository' available: expected at least one bean which qualifies as autowire candidate. Dependency annotations: {}
at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoMatchingBeanFound(DefaultListableBeanFactory.java:1509) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1104) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:819) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
...
16:20:47.548 [main] ERROR o.s.b.d.LoggingFailureAnalysisReporter -
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of method runner in example.app.crm.CustomerServiceApplication required a bean of type 'example.app.crm.repo.CustomerRepository' that could not be found.
Essentially, the CustomerRepository could not be injected into the ApplicationRunner bean method of our CustomerServiceApplication class because the CustomerRepository, which depends on the “Customers” Region, could not be created. The CustomerRepository could not be created because the “Customers” Region could not be created. The “Customers” Region could not be created because there was no cache instance (e.g. ClientCache) available to create Regions, resulting in a cascade of failures.
The ClientCache
auto-configuration is equivalent to the following:
Equivalent ClientCache configuration
@SpringBootApplication
@ClientCacheApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class, clientRegionShortcut = ClientRegionShortcut.LOCAL)
public class CustomerServiceApplication {
// ...
}
That is, you would need to explicitly declare the @ClientCacheApplication
annotation if you were not using <%=vars.product_name_spring%>.
Repository instance
We are also using the Spring Data (GemFire) Repository infrastructure in the Customer Service application. This should be evident from our declaration and definition of the application-specific CustomerRepository
interface.
If we disable the Spring Data Repository auto-configuration:
Disabling Spring Data Repositories Auto-configuration
@SpringBootApplication(exclude = RepositoriesAutoConfiguration.class)
@EnableEntityDefinedRegions(basePackageClasses = Customer.class, clientRegionShortcut = ClientRegionShortcut.LOCAL)
public class CustomerServiceApplication {
// ...
}
The application would throw a similar error on startup:
Error resulting from no proxied CustomerRepository instance
17:31:21.231 [main] DEBUG o.s.b.d.LoggingFailureAnalysisReporter - Application failed to start due to an exception
org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'example.app.crm.repo.CustomerRepository' available: expected at least one bean which qualifies as autowire candidate. Dependency annotations: {}
at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoMatchingBeanFound(DefaultListableBeanFactory.java:1509) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1104) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:819) ~[spring-beans-5.0.13.RELEASE.jar:5.0.13.RELEASE]
...
17:31:21.235 [main] ERROR o.s.b.d.LoggingFailureAnalysisReporter -
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of method runner in example.app.crm.CustomerServiceApplication required a bean of type 'example.app.crm.repo.CustomerRepository' that could not be found.
In this case, there was simply no proxy implementation for the CustomerRepository
interface provided by the framework since the auto-configuration was disabled. The ClientCache
and “Customers” Region
do exist in this case, though.
The Spring Data Repository auto-configuration even takes care of locating our application Repository interface definitions for us.
Without auto-configuration, you would need to configure the Repositories explicitly:
Equivalent Spring Data Repositories configuration
@SpringBootApplication(exclude = RepositoriesAutoConfiguration.class)
@EnableEntityDefinedRegions(basePackageClasses = Customer.class, clientRegionShortcut = ClientRegionShortcut.LOCAL)
@EnableGemfireRepositories(basePackageClasses = CustomerRepository.class)
public class CustomerServiceApplication {
// ...
}
That is, you would need to explicitly declare the @EnableGemfireRepositories
annotation and set the basePackages
attribute, or the equivalent, type-safe basePackageClasses
attribute, to the package containing your application Repository interfaces, if you were not using <%=vars.product_name_spring%>.
Entity-defined Regions
So far, the only explicit declaration of configuration in our Customer Service application is the @EnableEntityDefinedRegions
annotation.
As was alluded to above, there was another reason we explicitly declared the @Region
annotation on our Customer
class.
We could have defined the client LOCAL
“Customers” Region using Spring JavaConfig, explicitly:
JavaConfig Bean Definition for the “Customers” Region
@Configuration
class ApplicationConfiguration {
@Bean("Customers")
public ClientRegionFactoryBean<Long, Customer> customersRegion(GemFireCache gemfireCache) {
ClientRegionFactoryBean<Long, Customer> customersRegion = new ClientRegionFactoryBean<>();
customersRegion.setCache(gemfireCache);
customersRegion.setShortcut(ClientRegionShortcut.LOCAL);
return customersRegion;
}
}
Or, even define the “Customers” Region using Spring XML, explicitly:
XML Bean Definition for the “Customers” Region
<gfe:client-region id="Customers" shortcut="LOCAL"/>
But, using Spring Data for VMware GemFire's @EnableEntityDefinedRegions annotation is very convenient and can scan for the Regions (whether client or server (peer) Regions) required by your application based on the entity classes themselves (e.g. Customer):
Annotation-based config for the “Customers” Region
@EnableEntityDefinedRegions(basePackageClasses = Customer.class, clientRegionShortcut = ClientRegionShortcut.LOCAL)
class CustomerServiceApplication { }
The basePackageClasses attribute is an alternative to basePackages and a type-safe way to target the packages (and subpackages) containing the entity classes that your application will persist to VMware GemFire. You only need to choose one class from each top-level package where you want the scan to begin; Spring Data for VMware GemFire uses that class to determine the package in which to start scanning. basePackageClasses accepts an array of Class types, so you can specify multiple independent top-level packages. The annotation also includes the ability to filter types.
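For instance, if your application had entities in more than one top-level package, you could reference one class from each; Order.class below is a made-up, illustrative entity and is not part of this example:

Scanning multiple entity packages (sketch)

``` highlight
@EnableEntityDefinedRegions(
    basePackageClasses = { Customer.class, Order.class },
    clientRegionShortcut = ClientRegionShortcut.LOCAL
)
class CustomerServiceApplication { }
```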
However, the @EnableEntityDefinedRegions
annotation only works when the entity class (e.g. Customer
) is explicitly annotated with the @Region
annotation (e.g. @Region("Customers")
), otherwise it ignores the class.
You will also notice that the data policy type (i.e. clientRegionShortcut, or simply shortcut) is set to LOCAL in our example. Why?
Well, initially we just want to get up and running as quickly as possible, without a lot of ceremony and fuss. By using a client LOCAL
Region to begin with, we are not required to start a cluster of servers for the client to be able to store data.
While client LOCAL
Regions can be useful for some purposes (e.g. local processing, querying and aggregating of data), it is more common for a client to persist data in a cluster of servers, and for that data to be shared by multiple clients (instances) in the application architecture, especially as the application is scaled out to handle demand.
Switching to Client/Server
We continue with our example by switching from a local context to a client/server topology.
If you are rapidly prototyping and developing your application and simply want to lift off the ground quickly, then it is useful to start locally and gradually migrate towards a client/server architecture.
To switch to client/server, all you need to do is remove the clientRegionShortcut
attribute configuration from the @EnableEntityDefinedRegions
annotation declaration:
Client/Server Topology Region Configuration
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
class CustomerServiceApplication { }
The default value for the clientRegionShortcut
attribute is ClientRegionShortcut.PROXY
. This means no data is stored locally. All data is sent from the client to one or more servers in a cluster.
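In other words, omitting the attribute is the same as declaring the default explicitly:

Explicit PROXY Region Configuration (equivalent to the default)

``` highlight
@EnableEntityDefinedRegions(
    basePackageClasses = Customer.class,
    clientRegionShortcut = ClientRegionShortcut.PROXY
)
class CustomerServiceApplication { }
```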
However, if we try to run the application, it will fail:
NoAvailableServersException
Caused by: org.apache.geode.cache.client.NoAvailableServersException
at org.apache.geode.cache.client.internal.pooling.ConnectionManagerImpl.borrowConnection(ConnectionManagerImpl.java:234) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:136) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:115) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:763) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.QueryOp.execute(QueryOp.java:58) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.ServerProxy.query(ServerProxy.java:70) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.query.internal.DefaultQuery.executeOnServer(DefaultQuery.java:456) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:338) ~[geode-core-1.2.1.jar:?]
at org.springframework.data.gemfire.GemfireTemplate.find(GemfireTemplate.java:311) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
at org.springframework.data.gemfire.repository.support.SimpleGemfireRepository.count(SimpleGemfireRepository.java:129) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
...
at example.app.crm.CustomerServiceApplication.lambda$runner$0(CustomerServiceApplication.java:59) ~[classes/:?]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:783) ~[spring-boot-2.0.9.RELEASE.jar:2.0.9.RELEASE]
The client expects a cluster of servers to communicate with, to which it can send data and from which it can access data. No servers or cluster are running yet.
There are several ways to start a cluster. For example, you may use Spring to configure and bootstrap the cluster. For this example, we will use the VMware GemFire Shell tool (gfsh), which is provided with VMware GemFire.
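For reference only, a Spring-configured and bootstrapped server might look like the following sketch; it assumes Spring Data for VMware GemFire's @CacheServerApplication and @EnableLocator annotations, and it is not used by this example:

Spring-bootstrapped server (sketch)

``` highlight
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;
import org.springframework.data.gemfire.config.annotation.EnableLocator;

// Hypothetical, Spring-bootstrapped server; this example uses gfsh instead.
@SpringBootApplication
@CacheServerApplication(name = "SpringBootstrappedServer")
@EnableLocator
public class ServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServerApplication.class, args);
    }
}
```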
You need to install a full distribution of VMware GemFire to make use of the provided tools.
Once VMware GemFire has been successfully installed, you can open a command prompt (terminal) and start the GemFire Shell (gfsh):
Running Gfsh
$ echo $GEMFIRE_HOME
/Users/user1/pivdev/vmware-gemfire-10.0.0
$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ 10.0.0
Monitor and Manage VMware GemFire
gfsh>
You are set to go.
For your convenience, a gfsh
shell script is provided to start a cluster:
Gfsh shell script
# Gfsh shell script to start a simple VMware GemFire/GemFire cluster
start locator --name=LocatorOne --log-level=config
start server --name=ServerOne --log-level=config
Specifically, we are starting one Locator and one Server, both running on the default ports.
Execute the gfsh
shell script using:
Run Gfsh shell script
gfsh>run --file=/path/to/spring-for-gemfire-examples/boot/configuration/src/main/resources/geode/bin/start-simple-cluster.gfsh
1. Executing - start locator --name=LocatorOne --log-level=config
Starting a GemFire Locator in /Users/jblum/pivdev/lab/LocatorOne...
....
Locator in /Users/jblum/pivdev/lab/LocatorOne on 10.99.199.24[10334] as LocatorOne is currently online.
Process ID: 68425
Uptime: 2 seconds
GemFire Version: 1.2.1
Java Version: 1.8.0_192
Log File: /Users/jblum/pivdev/lab/LocatorOne/LocatorOne.log
JVM Arguments: -Dgemfire.log-level=config -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/user1/pivdev/vmware-gemfire-10.0.0/lib/geode-core-1.2.1.jar:/Users/user1/pivdev/vmware-gemfire-10.0.0/lib/geode-dependencies.jar
Successfully connected to: JMX Manager [host=10.99.199.24, port=1099]
Cluster configuration service is up and running.
2. Executing - start server --name=ServerOne --log-level=config
Starting a GemFire Server in /Users/jblum/pivdev/lab/ServerOne...
.....
Server in /Users/jblum/pivdev/lab/ServerOne on 10.99.199.24[40404] as ServerOne is currently online.
Process ID: 68434
Uptime: 2 seconds
GemFire Version: 1.2.1
Java Version: 1.8.0_192
Log File: /Users/jblum/pivdev/lab/ServerOne/ServerOne.log
JVM Arguments: -Dgemfire.default.locators=10.99.199.24[10334] -Dgemfire.use-cluster-configuration=true -Dgemfire.start-dev-rest-api=false -Dgemfire.log-level=config -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/user1/pivdev/vmware-gemfire-10.0.0/lib/geode-core-1.2.1.jar:/Users/user1/pivdev/vmware-gemfire-10.0.0/lib/geode-dependencies.jar
You will need to change the path to the spring-for-gemfire-examples/spring-boot/configuration directory in the run --file=… Gfsh command above, based on where you cloned the spring-boot-for-vmware-gemfire project on your computer.
Now, our simple cluster with a VMware GemFire Locator and (Cache) Server is running. We can verify by listing and describing the members:
List and Describe Members
gfsh>list members
Name | Id
---------- | ---------------------------------------------------
LocatorOne | 10.99.199.24(LocatorOne:68425:locator)<ec><v0>:1024
ServerOne | 10.99.199.24(ServerOne:68434)<v1>:1025
gfsh>describe member --name=ServerOne
Name : ServerOne
Id : 10.99.199.24(ServerOne:68434)<v1>:1025
Host : 10.99.199.24
Regions :
PID : 68434
Groups :
Used Heap : 27M
Max Heap : 3641M
Working Dir : /Users/jblum/pivdev/lab/ServerOne
Log file : /Users/jblum/pivdev/lab/ServerOne/ServerOne.log
Locators : 10.99.199.24[10334]
Cache Server Information
Server Bind : null
Server Port : 40404
Running : true
Client Connections : 0
What happens if we try to run the application now?
RegionNotFoundException
17:42:16.873 [main] ERROR o.s.b.SpringApplication - Application run failed
java.lang.IllegalStateException: Failed to execute ApplicationRunner
...
at example.app.crm.CustomerServiceApplication.main(CustomerServiceApplication.java:51) [classes/:?]
Caused by: org.springframework.dao.DataAccessResourceFailureException: remote server on 10.99.199.24(SpringBasedCacheClientApplication:68473:loner):51142:f9f4573d:SpringBasedCacheClientApplication: While performing a remote query; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on 10.99.199.24(SpringBasedCacheClientApplication:68473:loner):51142:f9f4573d:SpringBasedCacheClientApplication: While performing a remote query
at org.springframework.data.gemfire.GemfireCacheUtils.convertGemfireAccessException(GemfireCacheUtils.java:230) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
at org.springframework.data.gemfire.GemfireAccessor.convertGemFireAccessException(GemfireAccessor.java:91) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
at org.springframework.data.gemfire.GemfireTemplate.find(GemfireTemplate.java:329) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
at org.springframework.data.gemfire.repository.support.SimpleGemfireRepository.count(SimpleGemfireRepository.java:129) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
...
at example.app.crm.CustomerServiceApplication.lambda$runner$0(CustomerServiceApplication.java:59) ~[classes/:?]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:783) ~[spring-boot-2.0.9.RELEASE.jar:2.0.9.RELEASE]
... 3 more
Caused by: org.apache.geode.cache.client.ServerOperationException: remote server on 10.99.199.24(SpringBasedCacheClientApplication:68473:loner):51142:f9f4573d:SpringBasedCacheClientApplication: While performing a remote query
at org.apache.geode.cache.client.internal.AbstractOp.processChunkedResponse(AbstractOp.java:352) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.QueryOp$QueryOpImpl.processResponse(QueryOp.java:170) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:230) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.AbstractOp.attempt(AbstractOp.java:394) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:203) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:275) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:332) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:900) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:158) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:115) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:763) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.QueryOp.execute(QueryOp.java:58) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.ServerProxy.query(ServerProxy.java:70) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.query.internal.DefaultQuery.executeOnServer(DefaultQuery.java:456) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:338) ~[geode-core-1.2.1.jar:?]
at org.springframework.data.gemfire.GemfireTemplate.find(GemfireTemplate.java:311) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
at org.springframework.data.gemfire.repository.support.SimpleGemfireRepository.count(SimpleGemfireRepository.java:129) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
...
at example.app.crm.CustomerServiceApplication.lambda$runner$0(CustomerServiceApplication.java:59) ~[classes/:?]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:783) ~[spring-boot-2.0.9.RELEASE.jar:2.0.9.RELEASE]
... 3 more
Caused by: org.apache.geode.cache.query.RegionNotFoundException: Region not found: /Customers
at org.apache.geode.cache.query.internal.DefaultQuery.checkQueryOnPR(DefaultQuery.java:599) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:348) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:319) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.BaseCommandQuery.processQueryUsingParams(BaseCommandQuery.java:121) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.BaseCommandQuery.processQuery(BaseCommandQuery.java:65) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.command.Query.cmdExecute(Query.java:91) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:165) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:791) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:922) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1180) ~[geode-core-1.2.1.jar:?]
...
The application fails to run because we (deliberately) did not create a corresponding server-side “Customers” Region. In order for a client to send data via a client PROXY Region (a Region with no local state) to a server in a cluster, at least one server in the cluster must have a matching Region by name (i.e. “Customers”).
There are no Regions in the cluster:
List Regions
gfsh>list regions
No Regions Found
Of course, you could create the matching server-side, “Customers” Region using gfsh
:
gfsh>create region --name=Customers --type=PARTITION
But, what if you have hundreds of application domain objects, each requiring a Region for persistence? That is not an unusual requirement in a practical, enterprise-scale application.
While it is not a “convention” in <%=vars.product_name_spring%>, Spring Data for VMware GemFire comes to our rescue. We only need to enable cluster configuration from the client:
Enable Cluster Configuration
@SpringBootApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableClusterConfiguration(useHttp = true)
public class CustomerServiceApplication {
// ...
}
That is, we additionally annotate our Customer Service application class with Spring Data for VMware GemFire’s @EnableClusterConfiguration
annotation. We have also set the useHttp
attribute to true
. This sends the configuration metadata from the client to the cluster via VMware GemFire’s Management REST API.
This is useful when your VMware GemFire cluster may be running behind a firewall, such as on public cloud infrastructure. However, there are other benefits to using HTTP as well. As stated, the client sends configuration metadata to VMware GemFire’s Management REST interface, which is a facade for the server-side Cluster Configuration Service. If another peer (e.g. server) is added to the cluster as a member, then this member will get the same configuration. If the entire cluster goes down, it will have the same configuration when it is restarted.
Spring Data for VMware GemFire is careful not to stomp on existing Regions since those Regions may already have data. Declaring the @EnableClusterConfiguration annotation is a useful development-time feature, but it is recommended that you explicitly define and declare your Regions in production environments, either using gfsh or Spring config.
Note:
It is now possible to replace the Spring Data for VMware GemFire
@EnableClusterConfiguration
annotation with <%=vars.product_name_spring%>'s
@EnableClusterAware
annotation, which has the same effect
of pushing configuration metadata from the client to the server (or
cluster). Additionally, <%=vars.product_name_spring%>'s @EnableClusterAware
annotation makes it unnecessary to explicitly configure the
clientRegionShortcut on the Spring Data for VMware GemFire
@EnableEntityDefinedRegions annotation (or a similar annotation,
e.g. Spring Data for VMware GemFire's @EnableCachingDefinedRegions).
Finally, because the <%=vars.product_name_spring%> @EnableClusterAware annotation is
meta-annotated with Spring Data for VMware GemFire's
@EnableClusterConfiguration annotation, it automatically
configures the useHttp attribute to true.
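For instance, a hedged sketch of the same application class using @EnableClusterAware (in place of @EnableClusterConfiguration and the clientRegionShortcut attribute) might look like:

Cluster-aware configuration (sketch)

``` highlight
@SpringBootApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableClusterAware
public class CustomerServiceApplication {
    // ...
}
```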
Now, we can run our application again, and this time, it works!
Client/Server Run Successful
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.9.RELEASE)
Saving Customer [Customer(name=Jon Doe)]
Querying for Customer [SELECT * FROM /Customers WHERE name LIKE '%Doe']
Customer was [Customer(name=Jon Doe)]
Process finished with exit code 0
In the cluster (server-side), we will also see that the “Customers” Region was created successfully:
List and Describe Regions
gfsh>list regions
List of regions
---------------
Customers
gfsh>describe region --name=/Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 1
| data-policy | PARTITION
We see that the “Customers” Region has a size of 1, containing “Jon Doe”.
We can verify this by querying the “Customers” Region:
Query for all Customers
gfsh>query --query="SELECT customer.name FROM /Customers customer"
Result : true
Limit : 100
Rows : 1
Result
-------
Jon Doe
That was easy!
Auto-configuration for VMware GemFire, Take Two
What may not be apparent in the example up to this point is how the data got from the client to the server. Certainly, our client did send Jon Doe to the server, but our Customer class is not java.io.Serializable. So, how was an instance of Customer serialized and sent from the client to the server over the network (a Socket connection)?
Any object sent over a network, between two Java processes, or streamed to/from disk, must be serializable, no exceptions!
Furthermore, when we started the cluster, we did not include any application domain classes on the classpath of any server in the cluster.
As further evidence, we can adjust our query slightly:
Invalid Query
gfsh>query --query="SELECT * FROM /Customers"
Message : Could not create an instance of a class example.app.crm.model.Customer
Result : false
If you tried to perform a get
, you would hit a similar error:
Region.get(key)
gfsh>get --region=/Customers --key=1 --key-class=java.lang.Long
Message : Could not create an instance of a class example.app.crm.model.Customer
Result : false
So, how was the data sent then? How were we able to access the data stored in the server(s) on the cluster with the OQL query SELECT customer.name FROM /Customers customer
as seen above?
Well, VMware GemFire provides two proprietary serialization formats in addition to Java Serialization: Data Serialization and PDX (Portable Data eXchange).
While Data Serialization is more efficient, PDX is more flexible (i.e. “portable”). PDX enables data to be queried in serialized form and is the format used to support both Java and Native Clients (C++, C#) simultaneously. Therefore, PDX is auto-configured in <%=vars.product_name_spring%> by default.
This is convenient since you may not want to implement java.io.Serializable
for all your application domain model types that you store in VMware GemFire. In other cases, you may not even have control over the types referred to by your application domain model types to make them Serializable
, such as when using a third-party library.
So, <%=vars.product_name_spring%> auto-configures PDX and uses Spring Data for VMware GemFire’s MappingPdxSerializer
as the PdxSerializer
to de/serialize all application domain model types.
If we disable PDX auto-configuration, we will see the effects of trying to serialize a non-serializable type, Customer
.
First, let’s back up a few steps and destroy the server-side “Customers” Region:
Destroy “Customers” Region
gfsh>destroy region --name=/Customers
"/Customers" destroyed successfully.
gfsh>list regions
No Regions Found
Then, we disable PDX auto-configuration:
Disable PDX Auto-configuration
@SpringBootApplication(exclude = PdxSerializationAutoConfiguration.class)
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableClusterConfiguration(useHttp = true)
public class CustomerServiceApplication {
// ...
}
When we re-run the application, we get the error we would expect:
NotSerializableException
Caused by: java.io.NotSerializableException: example.app.crm.model.Customer
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184) ~[?:1.8.0_192]
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348) ~[?:1.8.0_192]
at org.apache.geode.internal.InternalDataSerializer.writeSerializableObject(InternalDataSerializer.java:2248) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2123) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.DataSerializer.writeObject(DataSerializer.java:2936) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.util.BlobHelper.serializeTo(BlobHelper.java:66) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.Message.serializeAndAddPart(Message.java:396) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.Message.addObjPart(Message.java:340) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.Message.addObjPart(Message.java:319) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PutOp$PutOpImpl.<init>(PutOp.java:281) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PutOp.execute(PutOp.java:66) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:162) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.LocalRegion.serverPut(LocalRegion.java:3006) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3115) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:222) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5628) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:151) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5057) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1595) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1582) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:325) ~[geode-core-1.2.1.jar:?]
at org.springframework.data.gemfire.GemfireTemplate.put(GemfireTemplate.java:193) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
at org.springframework.data.gemfire.repository.support.SimpleGemfireRepository.save(SimpleGemfireRepository.java:86) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
...
at example.app.crm.CustomerServiceApplication.lambda$runner$0(CustomerServiceApplication.java:70) ~[spring-samples-boot-configuration-1.0.0.RELEASE.jar]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:783) ~[spring-boot-2.0.9.RELEASE.jar:2.0.9.RELEASE]
...
Our “Customers” Region is recreated, but is empty:
Empty “Customers” Region
gfsh>list regions
List of regions
---------------
Customers
gfsh>describe region --name=/Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 0
| data-policy | PARTITION
So, <%=vars.product_name_spring%> takes care of all your serialization needs without you having to configure serialization or implement java.io.Serializable in all your application domain model types, including any types they refer to, which may not even be possible to change.
If you were not using <%=vars.product_name_spring%>, then you would need to enable PDX serialization explicitly.
The PDX auto-configuration provided by <%=vars.product_name_spring%> is equivalent to:
Equivalent PDX Configuration
@SpringBootApplication
@ClientCacheApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableClusterConfiguration(useHttp = true)
@EnablePdx
public class CustomerServiceApplication {
// ...
}
In addition to the @ClientCacheApplication
annotation, you would need to annotate the CustomerServiceApplication
class with Spring Data for VMware GemFire’s @EnablePdx
annotation, which is responsible for configuring PDX serialization and registering Spring Data for VMware GemFire’s MappingPdxSerializer
.
Securing the Client and Server
The last bit of auto-configuration provided by <%=vars.product_name_spring%> that we will look at in this guide involves Security, and specifically, Authentication/Authorization (Auth) along with Transport Layer Security (TLS) using SSL.
In today’s age, Security is no laughing matter and making sure your applications are secure is a first-class concern. This is why <%=vars.product_name_spring%> takes Security very seriously and attempts to make this as simple as possible.
We will now expand on our example to secure the client and server processes, with both Auth and TLS using SSL, and then see how <%=vars.product_name_spring%> helps us properly configure these concerns, easily and reliably.
Securing the server
First, we must secure the cluster, that is, the Locator and Server.
When using the VMware GemFire API with no help from Spring, you must do the following:
- (Auth) Implement the `org.apache.geode.security.SecurityManager` interface (see the sketch after this list).
- (Auth) Configure your custom `SecurityManager` using the VMware GemFire `security-manager` property in `gemfire.properties`.
- (Auth) Either create a `gfsecurity.properties` file and set the `security-username` and `security-password` properties, or…
- (Auth) Implement the `org.apache.geode.security.AuthInitialize` interface and set the `security-peer-auth-init` property in `gemfire.properties`, as described in Implementing Authentication in the VMware GemFire User Guide.
- (SSL) Create Java KeyStore (JKS) files for both the keystore and truststore used to configure the SSL Socket.
- (SSL) Configure the Java KeyStores using the VMware GemFire `ssl-keystore` and `ssl-truststore` properties in `gemfire.properties`.
- (SSL) If you secured your Java KeyStores (recommended), then you must additionally set the `ssl-keystore-password` and `ssl-truststore-password` properties.
- (SSL) Optionally, configure the VMware GemFire components that should be enabled with SSL using the `ssl-enabled-components` property (e.g. `locator` and `server` for client/server and Locator connections).
- Then launch the cluster and its members, using `gfsh`, in the proper order.
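To give a concrete sense of the first step, below is a deliberately simplified, hypothetical `org.apache.geode.security.SecurityManager` implementation. It accepts a single hard-coded username/password pair; the class name and credentials are illustrative only and this is not the implementation shipped with the sample:

Simplified SecurityManager (sketch)

``` highlight
import java.util.Properties;

import org.apache.geode.security.AuthenticationFailedException;
import org.apache.geode.security.ResourcePermission;
import org.apache.geode.security.SecurityManager;

// Simplified, illustrative SecurityManager; not suitable for production use.
public class SimpleSecurityManager implements SecurityManager {

    @Override
    public Object authenticate(Properties credentials) throws AuthenticationFailedException {

        String username = credentials.getProperty("security-username");
        String password = credentials.getProperty("security-password");

        if ("test".equals(username) && "test".equals(password)) {
            return username; // the authenticated principal
        }

        throw new AuthenticationFailedException("Invalid credentials");
    }

    @Override
    public boolean authorize(Object principal, ResourcePermission permission) {
        return true; // authorize everything in this sketch
    }
}
```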
This is a lot of tedious work and if you get any bit of the configuration wrong, then either your servers will fail to start correctly, or worse, they will not be secure.
Fortunately, this sample provides gfsh
shell scripts to get you going:
Gfsh shell script to start a secure cluster
# Gfsh shell script to start a secure VMware GemFire/GemFire cluster
set variable --name=CLASSPATH --value=${<%=vars.product_name_spring%>_HOME}/vmware-gemfire-extensions/build/libs/vmware-gemfire-extensions-@project-version@.jar
set variable --name=GEMFIRE_PROPERTIES --value=${<%=vars.product_name_spring%>_HOME}/spring-boot/boot/configuration/build/resources/main/geode/config/gemfire.properties
start locator --name=LocatorOne --classpath=${CLASSPATH} --properties-file=${GEMFIRE_PROPERTIES}
connect --user=test --password=test
start server --name=ServerOne --classpath=${CLASSPATH} --properties-file=${GEMFIRE_PROPERTIES}
Note: <%=vars.product_name_spring%> does provide server-side, peer Security auto-configuration support. However, you must then configure and bootstrap your VMware GemFire servers with Spring.

### Securing the client

#### Authentication

If you were to run the client Customer Service application against the now-secured cluster without supplying any credentials, the application would throw the following error on startup:

AuthenticationRequiredException

``` highlight
15:26:10.598 [main] ERROR o.a.g.i.c.GemFireCacheImpl - org.apache.geode.security.AuthenticationRequiredException: No security credentials are provided
15:26:10.607 [main] ERROR o.s.b.SpringApplication - Application run failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'runner' defined in example.app.crm.CustomerServiceApplication: Unsatisfied dependency expressed through method 'runner' parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'customerRepository': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'Customers': Cannot resolve reference to bean 'gemfireCache' while setting bean property 'cache'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'gemfireCache': FactoryBean threw exception on object creation; nested exception is java.lang.RuntimeException: Error occurred when initializing peer cache
....
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-2.0.9.RELEASE.jar:2.0.9.RELEASE]
at example.app.crm.CustomerServiceApplication.main(CustomerServiceApplication.java:51) [main/:?]
....
Caused by: org.apache.geode.security.AuthenticationRequiredException: No security credentials are provided
at org.apache.geode.internal.cache.tier.sockets.HandShake.readMessage(HandShake.java:1396) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.HandShake.handshakeWithServer(HandShake.java:1251) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.ConnectionImpl.connect(ConnectionImpl.java:117) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.ConnectionFactoryImpl.createClientToServerConnection(ConnectionFactoryImpl.java:136) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.QueueManagerImpl.initializeConnections(QueueManagerImpl.java:466) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.QueueManagerImpl.start(QueueManagerImpl.java:303) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PoolImpl.start(PoolImpl.java:343) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PoolImpl.finishCreate(PoolImpl.java:173) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PoolImpl.create(PoolImpl.java:159) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.PoolFactoryImpl.create(PoolFactoryImpl.java:321) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.determineDefaultPool(GemFireCacheImpl.java:2922) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1369) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1195) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:758) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.createClient(GemFireCacheImpl.java:731) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:262) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:212) ~[geode-core-1.2.1.jar:?]
at org.springframework.data.gemfire.client.ClientCacheFactoryBean.createCache(ClientCacheFactoryBean.java:400) ~[spring-data-for-vmware-gemfire-2.0.14.RELEASE.jar:2.0.14.RELEASE]
...
```

Even though <%=vars.product_name_spring%> provides *auto-configuration* support for client Security, and specifically Auth in this case, you still must supply a username and password, minimally.

This is as easy as setting a username/password in Spring Boot `application.properties` using Spring Data for VMware GemFire's well-known and documented properties:

Application security configuration properties

``` highlight
# Security configuration for VMware GemFire using Spring Boot and Spring Data for VMware GemFire properties
spring.boot.data.gemfire.security.ssl.keystore.name=example-trusted-keystore.jks
spring.data.gemfire.security.username=test
spring.data.gemfire.security.password=test
spring.data.gemfire.security.ssl.keystore.password=s3cr3t
spring.data.gemfire.security.ssl.truststore.password=s3cr3t
```

The act of setting a username and password triggers the client Security *auto-configuration* provided by <%=vars.product_name_spring%>. There are many steps to configuring client Security in VMware GemFire properly, as there were on the server. All you need to worry about is supplying the credentials. Easy!

To include the `application-security.properties` file, simply enable the Spring "security" profile in your run configuration when running the `CustomerServiceApplication` class:

Enable Spring "security" Profile

``` highlight
-Dspring.profiles.active=security
```

By doing so, the `application-security.properties` file containing the configured username/password properties is included on application startup and our application is able to authenticate with the cluster successfully.

To illustrate that there is more to configuring Authentication than simply setting a username/password, if you were to disable the client Security *auto-configuration*:

Disabling Client Security Auto-configuration

``` highlight
@SpringBootApplication(exclude = ClientSecurityAutoConfiguration.class)
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableClusterConfiguration(useHttp = true)
public class CustomerServiceApplication {
    // ...
}
```

Then, our application would not be able to authenticate with the cluster, and again, an error would be thrown:

AuthenticationRequiredException

``` highlight
Caused by: org.apache.geode.security.AuthenticationRequiredException: No security credentials are provided
at org.apache.geode.internal.cache.tier.sockets.HandShake.readMessage(HandShake.java:1396) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.tier.sockets.HandShake.handshakeWithServer(HandShake.java:1251) ~[geode-core-1.2.1.jar:?]
....
```

Without the support of <%=vars.product_name_spring%>'s client Security *auto-configuration*, you would need to explicitly enable Security:

Explicitly Enable Security

``` highlight
@SpringBootApplication
@ClientCacheApplication
@EnableSecurity
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableClusterConfiguration(useHttp = true)
public class CustomerServiceApplication {
    // ...
}
```

That is, in addition to the `@ClientCacheApplication` annotation, you would still need to 1) set the username/password properties in Spring Boot `application.properties` and 2) explicitly declare the `@EnableSecurity` annotation. Therefore, <%=vars.product_name_spring%> (with help from Spring Data for VMware GemFire, under-the-hood) does the heavy lifting for you, automatically.

#### TLS with SSL

What about SSL? With either the <%=vars.product_name_spring%> SSL *auto-configuration* disabled:

Disable SSL Auto-configuration

``` highlight
@SpringBootApplication(exclude = SslAutoConfiguration.class)
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableClusterConfiguration(useHttp = true)
public class CustomerServiceApplication {
    // ...
}
```

Or, alternatively, without the explicit Java KeyStore configuration that would otherwise be necessary, such as:

Java KeyStore Configuration for SSL using <%=vars.product_name_spring%>

``` highlight
spring.boot.data.gemfire.security.ssl.keystore.name=myTrustedKeyStore.jks
spring.data.gemfire.security.ssl.keystore.password=s3cr3t
spring.data.gemfire.security.ssl.truststore.password=s3cr3t
```

Or possibly:

Java KeyStore Configuration for SSL using Spring Data for VMware GemFire

``` highlight
spring.data.gemfire.security.ssl.keystore=/file/system/path/to/trusted-keystore.jks
spring.data.gemfire.security.ssl.keystore.password=s3cr3t
spring.data.gemfire.security.ssl.keystore.type=JKS
spring.data.gemfire.security.ssl.truststore=/file/system/path/to/trusted-keystore.jks
spring.data.gemfire.security.ssl.truststore.password=s3cr3t
```

Then, the application will throw the following error:

Connectivity Exception

``` highlight
Caused by: org.apache.geode.security.AuthenticationRequiredException: Server expecting SSL connection
at org.apache.geode.internal.cache.tier.sockets.HandShake.handshakeWithServer(HandShake.java:1222) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.ConnectionImpl.connect(ConnectionImpl.java:117) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.ConnectionFactoryImpl.createClientToServerConnection(ConnectionFactoryImpl.java:136) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.QueueManagerImpl.initializeConnections(QueueManagerImpl.java:466) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PoolImpl.start(PoolImpl.java:343) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.QueueManagerImpl.start(QueueManagerImpl.java:303) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PoolImpl.finishCreate(PoolImpl.java:173) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.internal.PoolImpl.create(PoolImpl.java:159) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.PoolFactoryImpl.create(PoolFactoryImpl.java:321) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.determineDefaultPool(GemFireCacheImpl.java:2922) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1369) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1195) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:758) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.internal.cache.GemFireCacheImpl.createClient(GemFireCacheImpl.java:731) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:262) ~[geode-core-1.2.1.jar:?]
at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:212) ~[geode-core-1.2.1.jar:?]
```

With very minimal to no configuration, <%=vars.product_name_spring%> can automatically configure SSL. In fact, no configuration is required at all if the trusted Java KeyStore file is named "trusted.keystore", is in the root of the classpath, and the JKS file is unsecured, i.e. not protected by a password.

However, if you named your Java KeyStore (JKS) file something other than "trusted.keystore", then you can set the `spring.boot.data.gemfire.security.ssl.keystore.name` property:

Declaring the Java KeyStore filename

``` highlight
spring.boot.data.gemfire.security.ssl.keystore.name=myTrustedKeyStore.jks
```

If your Java KeyStore (JKS) file is secured, then you can specify the password:

Java KeyStore Configuration for SSL using <%=vars.product_name_spring%>

``` highlight
spring.data.gemfire.security.ssl.keystore.password=s3cr3t
spring.data.gemfire.security.ssl.truststore.password=s3cr3t
```

Or, if the Java KeyStore files for SSL are of a completely different variety:

Complete Java KeyStore Configuration for SSL using Spring Data for VMware GemFire

``` highlight
spring.data.gemfire.security.ssl.keystore=/file/system/path/to/trusted-keystore.pks11
spring.data.gemfire.security.ssl.keystore.password=s3cr3t
spring.data.gemfire.security.ssl.keystore.type=PKS11
spring.data.gemfire.security.ssl.truststore=/file/system/path/to/trusted-keystore.pks11
spring.data.gemfire.security.ssl.truststore.password=differentS3cr3t
```

Again, you can customize your configuration as much as needed or let <%=vars.product_name_spring%> handle things by following the defaults.

The <%=vars.product_name_spring%> SSL *auto-configuration* is equivalent to the following in Spring Data for VMware GemFire:

Spring Data for VMware GemFire SSL Configuration

``` highlight
@SpringBootApplication
@ClientCacheApplication
@EnableSsl
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnableClusterConfiguration(useHttp = true)
public class CustomerServiceApplication {
    // ...
}
```

In addition to the `@ClientCacheApplication` annotation, you must additionally declare the `@EnableSsl` annotation along with the `spring.data.gemfire.security.ssl.keystore` and `spring.data.gemfire.security.ssl.truststore` properties in Spring Boot `application.properties`.

In total, it is simply easier to start with the defaults and then customize bits of the configuration as your use case and application requirements grow.

## Conclusion

Hopefully this guide has given you a better understanding of what the *auto-configuration* support provided by <%=vars.product_name_spring%> is doing for you when developing VMware GemFire applications with Spring.

In this guide, we have seen that <%=vars.product_name_spring%> provides *auto-configuration* support for the following Spring Data for VMware GemFire annotations:

- `@ClientCacheApplication`
- `@EnableGemfireRepositories`
- `@EnablePdx`
- `@EnableSecurity`
- `@EnableSsl`

We also presented these additional Spring Data for VMware GemFire annotations, which are not auto-configured by <%=vars.product_name_spring%>:

- `@EnableEntityDefinedRegions`
- `@EnableClusterConfiguration`

They are optional and were shown purely for convenience.
Technically, the only annotation you are required to declare when <%=vars.product_name_spring%> is on the classpath is `@SpringBootApplication`, leaving our Customer Service application declaration as simple as:

Basic CustomerServiceApplication class

``` highlight
@SpringBootApplication
public class CustomerServiceApplication {
    // ...
}
```

That is it! That is all!

However, this guide is by no means complete. It does not cover all the *auto-configuration* provided by <%=vars.product_name_spring%>. <%=vars.product_name_spring%> additionally provides *auto-configuration* for Spring's Cache Abstraction, Continuous Query (CQ), Function Execution & Implementations, `GemfireTemplates` and Spring Session. However, the concepts and effects are similar to what has been presented above. We leave it as an exercise for you to explore and understand the remaining *auto-configuration* bits, using this guide as a reference.