Thursday, November 14, 2024

Modularize Spring Boot micro-service with Spring Modulith - Notes from my exploration . . .

The dictionary meaning of Modularity is - the use of individually distinct functional units, as in assembling an electronic or mechanical system. In other words, it is the degree to which components of a system can be separated. 

In Software Development, achieving modularity involves structuring the components involved into distinct modules or packages. When separating components by concern, Software Developers often fall into the trap of architectural layers: code is structured along familiar technology layers like controller, service, persistence, domain etc., grouping similar kinds of classes into a package whose name describes the logical layer. Structuring code by architectural layers, which gives a layered architectural overview, was once considered good practice and is still quite common, even the norm, in modern application development. With this kind of packaging structure, code within one package or layer gets interwoven with many functional/business use-cases. It also achieves a form of modularity, but one driven by the technical architecture, not by business domain use-cases. The top-level domain or functional view is lost, pushed underneath the architectural layers.

Spring Modulith

Spring Modulith is a fairly new addition to the Spring projects. It not only helps bring in well-structured application modules driven by the domain, but also helps verify their arrangement, and even facilitates creating documentation.

It comes with a few key fundamental arrangement and accessibility conventions, with some limitations. The arrangement rules are only enforced if you have a verification test in place. Even without one, you can still generate documentation, such as a PlantUML diagram showing the modules, their dependencies and boundaries, from your own arrangement.

Some points learned and noted

  • Modulith modules are analogous to Java packages.
  • Without making any modular/structural changes, simply adding the Modulith dependency and a unit test-case to verify the arrangement will detect and report circular references, which is very useful by itself.
  • By default, each direct sub-package of the application's main package is considered an application module; that sub-package is the module's base package.
  • Each module's base package is treated as an API package: all public classes directly under it are available for access and dependency injection from other modules.
  • Sub-packages under a module's base package are treated as internal to that module and are not accessible from any other module, even though Java itself would allow access if the classes in those sub-packages are public.
  • In order to expose a sub-package under a module's base package to other modules, you need to provide a package-info.java file in that sub-package and annotate the package with @NamedInterface in that file. With this, the annotated sub-package, which would otherwise be internal to the module it belongs to, becomes accessible from other application modules.
  • The file package-info.java is Java's standard way of adding package documentation, introduced in Java 5. It can contain a Javadoc comment with Javadoc tags for the package, along with the package declaration and package annotations. Spring Framework's null-safety annotations like @NonNullFields and @NonNullApi can be specified in this file so that they apply to all classes under that package.
  • By default, the name of a module in the generated UML is simply the module's base package name with the first letter uppercased. The name can be customized by annotating the package with @ApplicationModule and specifying the displayName property. This applies only to the module's base package; sub-packages cannot be shown in the generated UML diagram anyway.
  • Additional customizations are possible, but I wouldn't overuse them as that defeats simplicity.
An example of package-info.java in a sub-package made accessible to other modules is described below:

Application main package: com.giri.myapp
Application module base package: event
Sub-package of module event exposed to other modules: publisher
The file package-info.java under com.giri.myapp.event.publisher looks like below: 
package-info.java
@org.springframework.modulith.NamedInterface
package com.giri.myapp.event.publisher;

An example of package-info.java added to a module (base package) with a different module name (EventConsumers) than the default (Consumer) is shown below:
@org.springframework.modulith.ApplicationModule(displayName = "EventConsumers")
package com.giri.myapp.consumer;

Limitations

  • A sub-package exposed as a named interface cannot be shown in the generated diagram.
Documentation can be generated by having a simple test-case as outlined in the documentation. The UML diagram is particularly useful for an architectural overview of the application modules, their dependencies and boundaries. However, if you organized related components into sub-packages of a module's base package and exposed them by adding a package-info.java with @NamedInterface as shown above, it is natural to expect that sub-package to appear in the generated UML as a module, since it is treated like one from the exposure point of view of other modules. But the UML diagram seems restricted to the default application modules, i.e. the direct sub-packages under the project's main package.

I explored the API a little to find out whether there is a way to override this rule, but couldn't find one. Hopefully a future version will consider this kind of expectation and provide an option to include, in the generated documentation, sub-packages exposed via @NamedInterface, since they are visible to and depended on by other modules.
  • Generated visible diagram files should be treated as code to be checked in.
The clickable modules and visual diagrams generated as .puml files are only useful to developers with IDE plugins. A mechanism to integrate them into code documentation files like README.md or a wiki would be more useful to keep the visible architecture up to date with the codebase.
 

TIP

IntelliJ IDEA has a PlantUML Integration plugin available that can be used to view generated .puml files as UML diagrams in the IDE. Follow these steps to generate the Spring Modulith modules diagram and view it in IntelliJ IDEA.
  • Make sure you have graphviz installed.
  • Install IntelliJ IDEA's PlantUML Integration plugin
  • Run test-case: ModularityTest which generates modulith PlantUML files (.puml) under application's target/spring-modulith-docs directory.
  • When you open any .puml file generated in IntelliJ, the plugin shows it as PlantUML diagram.
A sample JUnit test-case (ModularityTest.java) is shown below:
package com.giri.myapp.modularity;

import com.giri.myapp.MyApplication;
import org.junit.jupiter.api.Test;
import org.springframework.modulith.core.ApplicationModules;
import org.springframework.modulith.docs.Documenter;

class ModularityTest {

    ApplicationModules modules = ApplicationModules.of(MyApplication.class);

    /**
     * Test to verify application structure. Rejects cyclic dependencies and access to internal types.
     */
    @Test
    void verifyModularity() {
        System.out.println(modules);
        modules.verify();
    }

    /**
     * Test to generate Application Module Component diagrams under target/spring-modulith-docs.
     */
    @SuppressWarnings("squid:S2699")
    @Test
    void writeDocumentationSnippets() {
        new Documenter(modules)
            .writeModulesAsPlantUml()
            .writeIndividualModulesAsPlantUml();
    }
}

Summary

Structuring by domain use-cases vs. architectural layers should not be a mere personal preference. Viewing an application from the domain aspect, with the familiar technology layers underneath, gives a view that aligns better with the business than viewing it from the technology layers. Spring Modulith helps achieve domain-driven modularity by following simple package-level conventions where modules align with domain concepts.

Achieving better modularity doesn't necessarily mean showing only business-domain modules. Certain technology aspects can also be depicted in the generated UML module diagram; for example, to indicate that the application provides a GraphQL API to its clients, its controllers can be structured into a module named graphql. With the right balanced mix, code can be restructured to show all the main business/domain use-cases along with a few technology-related modules like rest, graphql, events etc. This kind of balanced approach gives a good architectural and structural overview of the application, showing how modules interact with each other, in some cases even showing the high-level flow.

Spring Modulith comes with added support for structural validation, visual documentation of the modular arrangement, modular testability and observability.

Domain-driven modularity brings a more maintainable and understandable structure. However, keep things simple and do not overuse modularity; that defeats the simplicity Software Development is badly in need of. ;)





Friday, September 20, 2024

Java - Gotcha - Sealed interface and mocking in unit tests . . .

Seal(noun)
dictionary meaning - a device or substance that is used to join two things together so as to prevent them from coming apart or to prevent anything from passing between them. 

Prevent anything from passing between them. That's exactly what you sometimes want to put in place. When you have an interface and want to restrict which other interfaces may extend it or which classes may implement it, you need to seal your interface by specifying all those that are permitted to extend or implement it.

Sealed interfaces were introduced in Java 15 as a preview feature and became a standard feature in Java 17. A sealed interface restricts which classes or interfaces can implement or extend it. Classes that implement a sealed interface must be declared final, sealed, or non-sealed. This provides more control over the inheritance hierarchy and helps enforce certain design constraints.
 
To declare a sealed interface, use the sealed keyword followed by the permits clause, which lists the permitted subtypes.

E.g.
public sealed interface Shape permits Circle, Rectangle, Triangle {
    double area();
}

Each permitted subtype must be declared as one of the following:
  • Final: Cannot be extended further.
  • Sealed: Can specify its own permitted subtypes.
  • Non-Sealed: Removes the sealing restriction, allowing any class to extend it.
// Final class
public final class Circle implements Shape { ... }

// Sealed class with its own permitted subtype
public sealed class Rectangle implements Shape permits Square { ... }

// Non-sealed class, additional permitted sub-type
public non-sealed class Square extends Rectangle { ... }

Benefits of Sealed Interfaces

Enhanced Control: Provide more control over the inheritance hierarchy, ensuring that only specific classes can implement the interface.
Improved Maintainability: By restricting the set of permitted subtypes, you can make your codebase easier to understand and maintain.
Better Exhaustiveness Checking: Sealed interfaces improve exhaustiveness checking in switch statements, especially when used with pattern matching (introduced in later Java versions).

The exhaustiveness checking in switch statements is itself a very useful feature: it keeps your code from silently missing a case of the interface type in a switch, which otherwise is prone to bugs. The compiler will not let your code compile until all possible cases are handled in the switch statement, making your code more robust.

Shape aShape;
...
switch (aShape) {
    case Circle circle -> circle.radius();
    case Rectangle rectangle -> { /* do something */ }
    // handle all remaining cases (e.g. Triangle) or provide a default case, otherwise your code fails compilation
    case Triangle triangle -> { /* do something */ }
}

Gotcha - Mockito, mocking sealed interface

Mocking is common in unit testing. If you are writing a unit test for an object A that depends on object B, you are not interested in B itself and can simply mock its behavior. If Mockito is your mocking framework and B happens to be a sealed interface with some permitted implementations, then you will not be able to mock it the way you usually do:

class ATest {
    ...
    @Mock
    private B objB;
    ...
}

Your test fails with the following error when it is run:
org.mockito.exceptions.base.MockitoException:
Mockito cannot mock this class: interface B.

If you're not sure why you're getting this error, please open an issue on GitHub.

Java               : 22
JVM vendor name    : Amazon.com Inc.
JVM vendor version : 22.0.1+8-FR
JVM name           : OpenJDK 64-Bit Server VM
JVM version        : 22.0.1+8-FR
JVM info           : mixed mode, sharing
OS name            : Mac OS X
OS version         : 13.6.6

You are seeing this disclaimer because Mockito is configured to create inlined mocks.
You can learn about inline mocks and their limitations under item #39 of the Mockito class javadoc.

Underlying exception : org.mockito.exceptions.base.MockitoException: Unsupported settings with this type 'B'

Solution
Change the mock to a specific implementation of the interface.
private final B objB = mock(BImpl.class); // sealed interface, specify specific implementation class to be mocked
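A minimal sketch of how the adjusted test could look, assuming BImpl is one of the permitted implementations of the sealed interface B and that A takes B as a constructor argument (all names follow the illustration above):

class ATest {

    // B is sealed, so mock one of its permitted implementations instead of the interface itself
    private final B objB = mock(BImpl.class);

    // the object under test, constructed with the mocked collaborator
    private final A objA = new A(objB);

    @Test
    void method_under_test_uses_mocked_collaborator() {
        // stub objB's behavior with when(...) as usual, then exercise objA and assert
    }
}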

Monday, September 16, 2024

Spring Code TIP-2: Assert expected log message in a test-case . . .

Logging helps provide insight into a running application. It is also useful when investigating issues. Sometimes we would like to log certain configuration details after the application starts up. This can be achieved by implementing the functional interface CommandLineRunner. Any bean that implements this interface runs after the application context is fully loaded.

CommandLineRunner is a functional interface with a single method run(String... args). Beans implementing this interface are executed after the application context is loaded, just before SpringApplication.run() completes.

For example, the following application class defines a CommandLineRunner bean. The bean returns a lambda expression that defines the behavior of the CommandLineRunner. As an alternative to defining a bean, the MyApplication class could also implement the interface and provide an implementation of the run method. As shown below, if the jdbcClient is available, this bean executes a SQL query to get the database version and logs the result. It runs after the application context is fully loaded, so we get to know the version of the database the application is using, which is one very useful piece of information.

@SpringBootApplication
@Slf4j
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @Bean
    public CommandLineRunner initializationChecks(@Autowired(required = false) JdbcClient jdbcClient) {
        return args -> {
            if (jdbcClient != null) {
                log.info("Database check: {}", jdbcClient.sql("SELECT version()").query(String.class).single());
            }
        };
    }
}

Now, say we want to assert this log message in a test-case. That way we ensure that the log message contains the expected version of the Database, and the Database won't get changed/upgraded without a test-case catching it.

Spring Code TIP - test log message

JUnit's @ExtendWith annotation and Spring Boot's OutputCaptureExtension can be leveraged to achieve this.

The following is an integration test, that does this:
@SpringBootTest(useMainMethod = SpringBootTest.UseMainMethod.ALWAYS)
@ExtendWith(OutputCaptureExtension.class)
class MyApplicationIntegrationTest {

    @Autowired
    ApplicationContext applicationContext;

    @Autowired
    MyService myService;

    @Test
    @DisplayName("An integration Smoke Test to ensure that the application context loads, autowiring works, and checks DB version.")
    void smokeTest_context_loads_and_autowiring_works(CapturedOutput output) {
        var otherService = applicationContext.getBean(OtherService.class);
        assertThat(otherService).isInstanceOf(OtherService.class);
        assertThat(myService).isNotNull();

        // asserts the log message written by the CommandLineRunner shown above
        assertThat(output).contains("Database check: PostgreSQL 15.3");
    }
}

The assertion on the captured output is the code in focus, assuming that the database used is PostgreSQL 15.3.

Sunday, September 15, 2024

Spring Code TIP-1: Get code coverage for the main method in Spring Boot application . . .

Setting up a new Spring Boot project is made trivial by the Spring Initializr. IDEs like IntelliJ IDEA have integrated support for this as well. The application generated by the initializr contains three files under src:
    1) The application file (e.g. DemoApplication.java), the Spring Boot main class that bootstraps the project.
    2) A properties file (application.properties), the application configuration file.
    3) A test-case file (DemoApplicationTests.java).

The main application file looks like:
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
The SpringApplication.run(DemoApplication.class, args) performs bootstrapping, creates application context, and runs the application.

The configuration file looks like:
spring.application.name=demo

The test-case looks like:
package com.example.demo;

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class DemoApplicationTests {

    @Test
    void contextLoads() {
    }
}
This is in fact a simple yet very useful test class and I would keep it around. It is an integration test-case that ensures the application context loads successfully; if there are any issues, this test fails. So it can be treated as an integration smoke test for the application. I would rename the test method to smokeTest_contextLoads().
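After the rename, the generated test would read as shown below; its behavior is unchanged:

package com.example.demo;

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class DemoApplicationTests {

    /** Integration smoke test: fails if the application context cannot be loaded. */
    @Test
    void smokeTest_contextLoads() {
    }
}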

Spring Code TIP - Code coverage

However, once you progress with the application and have a code coverage check applied using a build system like Maven with the Surefire, Failsafe and JaCoCo plugins, the main method is left uncovered by this test-case. A little strange!

With a main method in the application accompanied by an integration test-case, one would expect the main method to be invoked by the test-case. The @SpringBootTest annotation has this turned off by default: it calls SpringApplication.run(DemoApplication.class, args) directly rather than the main method. So, in order to have the main method called when the test-case is run, we need to explicitly set a property. Once this is set, the test-case invokes the main method and the main method gets code coverage.

The property is shown below to get the test coverage for the main method.
@SpringBootTest(useMainMethod = SpringBootTest.UseMainMethod.ALWAYS)

Not just for code coverage: you might also want to use the useMainMethod property in scenarios where your application's main method performs additional setup or configuration that your tests need as well.

Summary

The useMainMethod property of the @SpringBootTest annotation allows you to control whether the main method of your application is invoked to start the Spring Boot application context during testing. By default, useMainMethod is set to UseMainMethod.NEVER, but you can set it to UseMainMethod.ALWAYS or UseMainMethod.WHEN_AVAILABLE to ensure that the main method is called during testing.

Wednesday, August 21, 2024

Spring Data JPA limitation with LIMIT fetch . . .

In modern Java Spring-based applications, Spring Data JPA is a quite common way to interface with the database. Domain/business objects carry the persistable state of the business process, and with a few JPA annotations, POJOs can be turned into persistable domain objects. Unlike the Grails framework, whose GORM is built on Hibernate and elevates domain objects by making them persistence aware, Spring Data JPA keeps persistence in a separate abstraction layer called the Repository.

With Spring Data JPA, the Repository is the central interface, and it requires one to be familiar with the Repository abstractions. Queries can be defined as interface methods, and the implementation is provided by the Spring Data JPA framework either 1) derived from method naming conventions or 2) from manually defined queries with the @Query annotation, written in JPQL or native SQL. My first choice is interface method naming by following the conventions. Next is JPQL. I avoid native queries unless there is a strong reason to use them.

JPQL Limitation with LIMIT fetch

One of the limitations I ran into recently with JPQL was limiting query results, say to fetch just one record from the query results. Typically, in native SQL, this is done by adding a LIMIT clause, e.g. LIMIT 1 to fetch only the first result. JPQL lets you specify LIMIT, and it appears to work, but under the covers the LIMIT is applied in memory to the results fetched. In other words, the LIMIT clause doesn't exist in the generated native SQL. So the SQL fetches all the results that match the criteria, a collection of entity objects gets created, and only then is the LIMIT applied to pick one object. The JPQL query does its job as specified, but incurs an expensive query by fetching more records than needed, creating the objects in a collection, and then returning one object to honor the LIMIT 1.

So, an example Repository method annotated like the following returns one object, but fetches all records that match the criteria into memory and returns the first one from the collection.
@Query(""" SELECT msg FROM Message msg WHERE msg.type = :type ORDER BY msg.createdOn DESC LIMIT 1 """) Optional<Message> findLatestByType(MessageType type);

In order to truly fetch the most recent message of a given type the JPQL needs to be optimized to fetch only one record.

With JPQL, the query may need to be rewritten to something like the following without using LIMIT, assuming id is the primary key and is backed by a sequence. This is more performant than using a createdOn auditable column (if there is one), and needs no additional index.
@Query(""" SELECT msg FROM Message msg WHERE msg.id = ( SELECT MAX(m.id) FROM Message m WHERE m.type = :type ) """) Optional<Message> findLatestByType(MessageType type);

The last resort is writing a native query and using LIMIT 1 to fetch one, as sketched below.
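A rough sketch of that native-query fallback; the table and column names (message, type, created_on) are assumptions about the mapping, so adjust them to your schema:

@Query(value = """
        SELECT * FROM message
        WHERE type = :type
        ORDER BY created_on DESC
        LIMIT 1
        """, nativeQuery = true)
Optional<Message> findLatestByTypeNative(@Param("type") String type);

Here the LIMIT is applied by the database itself, so only one row ever comes back; the enum value can be passed as type.name().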



Monday, May 20, 2024

Spring Boot logs in JSON format, assert logged output in a test-case . . .

Development is fun and sometimes frustrating too. Everything comes with some kind of issue or other attached.

Scenario

Lately, I had to switch a Spring Boot application from the out-of-the-box Logback to Log4j2 logging, specifically in JSON format. One of the test-cases I had written tests a feature-flag-based conditional scenario. The conditional code depends on a feature flag exposed as an external property and injected via a @ConfigurationProperties bean into a service; when the flag is disabled, it writes a log message at WARN level to indicate that the feature is disabled. The unit test has a test case for the disabled scenario which also verifies the expected log message by leveraging the Spring-provided OutputCaptureExtension. This extension, registered at the test class or method level with @ExtendWith(OutputCaptureExtension.class), makes the output log available to the test-case for verification.
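For context, the guarded code in the service looks roughly like the sketch below. The flag field, its source, and the message are illustrative; in the real code the value comes from the @ConfigurationProperties bean mentioned above.

@Service
@Slf4j
public class MyService {

    // illustrative flag; in the actual application it is populated from a @ConfigurationProperties bean
    private boolean featureFlag;

    public void method() {
        if (!featureFlag) {
            log.warn("The featureFlag is DISABLED.");
            return;
        }
        // ... handle the event when the feature is enabled ...
    }
}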

When I switched logging to JSON using Log4j2, that test-case failed because the output log was no longer available.

Environment: Java 21, Spring Boot 3.2.5, maven 3.9.6 on macOS Catalina 10.15.7

This post is about a few things learned along the way: Log4j2 JSON logging, Spring Boot's JUnit Jupiter extension to capture System output, and the Log4j2 JSON layout property that lets the output be captured and made available.

JSON logs

To switch Spring Boot application logging to JSON using Log4j2, the following dependencies need to be added to the Maven build file pom.xml. Also, make sure to run mvn dependency:tree and check whether spring-boot-starter-logging shows up. If you have it as a transitive dependency, exclude it from all dependencies that bring it in. In the dependencies below, Spring Modulith brought it in, so I had to exclude it there.

<dependency>
    <groupId>org.springframework.modulith</groupId>
    <artifactId>spring-modulith-starter-core</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<!-- logging -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-layout-template-json</artifactId>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-yaml</artifactId>
</dependency>

Add the following configuration file: src/main/resources/log4j2.yml. Log4j2 supports both XML and YAML configuration:

Configuration:
  name: default
  Appenders:
    Console:
      name: console_appender
      target: SYSTEM_OUT
      follow: true
      JSONLayout:
        compact: true
        objectMessageAsJsonObject: true
        eventEol: true
        stacktraceAsString: true
        properties: true
        KeyValuePair:
          - key: '@timestamp'
            value: '$${date:yyyy-MM-dd HH:mm:ss.SSS}'
  Loggers:
    Root:
      name: root.logger
      level: info
      AppenderRef:
        ref: console_appender

With the above changes, application logs will be in JSON format.

Assert Captured output in a test-case

If you have any test-case that verifies the captured log output, it will now fail.

For instance, I had a test-case like the following which verifies the log message logged by myService.method(). To make it work, the property follow: true, shown in the Console appender configuration above, needs to be added to log4j2.yml. The details about this console appender property are documented in the OutputCaptureExtension Javadoc.

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import org.springframework.boot.test.system.CapturedOutput;
import org.springframework.boot.test.system.OutputCaptureExtension;
import org.springframework.test.util.ReflectionTestUtils;

import static org.assertj.core.api.Assertions.*;
import static org.junit.jupiter.api.Assertions.assertAll;
import static org.mockito.Mockito.verify;

@ExtendWith(MockitoExtension.class)
@ExtendWith(OutputCaptureExtension.class)
class MyFeatureFlagTest {

    @InjectMocks
    private MyService myService;

    @Test
    void call_should_not_handleEvent_when_featureFlag_is_disabled(CapturedOutput output) {
        // given: feature flag disabled
        ReflectionTestUtils.setField(myService, "featureFlag", false);

        // when: method under test is called
        myService.method();

        // verify: the output captured
        assertAll(
            () -> assertThat(output).contains("The featureFlag is DISABLED.")
        );
    }
}

That's it.


Saturday, March 16, 2024

Spring Data JPA - Join Fetch, Entity Graphs - to fetch nested levels of OneToMany related data, and JPQL . . .

One-to-many relationships in databases are quite common. They are also quite cumbersome in terms of how many aspects need to be considered to get them correctly implemented. Just to list a few: the related JPA annotations, relationship keys specified in annotations, fetch modes, fetch types, joins, query types, performance, the N+1 query problem, cartesian products, DISTINCT to eliminate duplicates, indexes etc. With Spring Data JPA and Hibernate as the default implementation provider, a few JPA annotations, Hibernate-specific annotations, JPQL queries and Java collection types all get added to the mix.

Environment: Java 21, Spring Boot 3.2.3, PostgreSQL 15.3, maven 3.9.6 on macOS Catalina 10.15.7

If you have JPA entities related through OneToMany relationships to multiple levels, then some special care is required to fetch data in a performant manner, avoiding the classic N+1 query issue, or even multiple queries. Each query is a roundtrip to the database and adds its own baggage.

Let's take a simple example: a Country has many States, and a State has many Cities. We want to represent this relationship in JPA entities and query it using Spring Data JPA repositories.

The entities with relationships look something like:

public abstract class BaseEntity implements Serializable {
    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(nullable = false, updatable = false)
    @ToString.Exclude
    protected Long id;

    /** For optimistic locking */
    @Version
    protected Long version;

    @CreationTimestamp
    @Column(nullable = false, updatable = false, columnDefinition = "TIMESTAMP WITH TIME ZONE")
    protected OffsetDateTime dateCreated = OffsetDateTime.now();

    @UpdateTimestamp
    @Column(nullable = false, columnDefinition = "TIMESTAMP WITH TIME ZONE")
    protected OffsetDateTime lastUpdated = OffsetDateTime.now();
}

@Entity
public class Country extends BaseEntity {
    private String name;

    @OneToMany(mappedBy = "country", cascade = CascadeType.ALL, orphanRemoval = true)
    @Builder.Default
    @ToString.Exclude
    private Set<State> states = new LinkedHashSet<>();
}

@Entity
public class State extends BaseEntity {
    private String name;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "country_id")
    @ToString.Exclude
    private Country country; // owning side of the relationship

    @OneToMany(mappedBy = "state", cascade = CascadeType.ALL, orphanRemoval = true)
    @Builder.Default
    @ToString.Exclude
    private Set<City> cities = new LinkedHashSet<>();
}

@Entity
public class City extends BaseEntity {
    private String name;

    @ManyToOne
    @JoinColumn(name = "state_id")
    @ToString.Exclude
    private State state;
}

And a repository interface like:
@Repository
public interface CountryRepository extends JpaRepository<Country, Long> {

    Optional<Country> findByName(String name);
}

One way to fetch all related data in a single query is by writing a JPQL query with JOIN FETCH. This involves making sure all @OneToMany annotated properties use Set and not List, not using FetchType.EAGER or FetchMode.JOIN, and writing a JPQL query with the @Query annotation as shown below. Make a note of both DISTINCT and JOIN FETCH. This results in one query which fetches, for a Country, all its States and, for each State, all its Cities. If it is a huge set of records, your best bet is the recommended @EntityGraph approach. Let's say our data is not huge and we want to use JPQL. In this case, the repository method annotated with the JPQL @Query looks like:

@Repository
public interface CountryRepository extends JpaRepository<Country, Long> {

    @Query("""
            SELECT DISTINCT country FROM Country country
            JOIN FETCH country.states state
            JOIN FETCH state.cities city
            WHERE country.name = :name
            """)
    Optional<Country> findByName(String name);
}
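For comparison, a rough sketch of the @EntityGraph alternative mentioned above: the attribute paths tell Spring Data which associations to fetch along with the root entity, and the query itself is derived from the method name (the method name here is illustrative):

@Repository
public interface CountryRepository extends JpaRepository<Country, Long> {

    // Fetches Country -> states -> cities via an ad-hoc entity graph
    @EntityGraph(attributePaths = {"states", "states.cities"})
    Optional<Country> findWithStatesAndCitiesByName(String name);
}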

JPQL

The Java Persistence Query Language (JPQL) is a portable query language for querying persistent entities irrespective of the mechanism used to store them. Typically, in a Java application, the entities are Java classes. Similar to SQL, it provides select, update and delete statements, join operations, aggregations, subqueries etc. Hibernate supports both JPQL and HQL.

Spring Data JPA offers different ways to define query methods in a Repository interface, such as: 1) derived queries, for which the query is derived/generated from the name of the method by following conventions, 2) declared queries, by annotating the query method with the @Query annotation, 3) named queries, etc.

JPQL - Fetch Entities
The JPQL declared query in CountryRepository above, using the @Query annotation, is a typical JPQL query for an entity object (Country). The entity object is mapped to a database table, and the entity query results in a SQL query generated by the underlying JPA implementation such as Hibernate. The query result is database records fetched from the table(s), and the raw data is transformed into entity objects.

JPQL - Fetch Custom objects
JPQL also supports custom objects through constructor expressions. A constructor expression can be specified in JPQL to return a custom object instead of an entity object. Below is an example code snippet in which a lightweight custom Java record is returned instead of the entity.

@Repository
public interface StateRepository extends JpaRepository<State, Long> {

    /**
     * JPQL - Query to fetch specific fields of Entity and return non-entity custom objects
     *
     * @param population the population
     * @return list of light-weight StatePopulation objects
     */
    @Query("""
            SELECT new com.giri.countrystatecity.domain.StatePopulation(state.name, state.population)
            FROM State state
            WHERE state.population > :population
            """)
    List<StatePopulation> findAllStatesByPopulationGreaterThan(Long population);
}
where StatePopulation is simply a record like: public record StatePopulation(String name, Long population) { }

JPQL - Fetch Raw specified column data 
Similar to the custom object approach, raw column data can also be fetched and the required object can then be constructed from the returned data. The following is a code snippet for fetching raw data and constructing a data record object.

/**
 * JPQL - Query to fetch specific fields of Entity and return raw data
 *
 * @param population the population
 * @return list of rows, each a list of the selected column values
 */
@Query("""
        SELECT state.name, state.population
        FROM State state
        WHERE state.population > :population
        """)
List<List<Object>> findAllStatesByPopulationGreaterThanJpqlRaw(Long population);
where the raw data fetched can be converted into the required objects in a service method as shown below:
public List<StatePopulation> getAllByPopulationGreaterThanJpqlRaw(Long population) {
    List<List<Object>> states = stateRepository.findAllStatesByPopulationGreaterThanJpqlRaw(population);
    return states.stream()
            .map(row -> new StatePopulation((String) row.get(0), (Long) row.get(1)))
            .toList();
}

Gotcha

  • If you use List instead of Set, you might bump into the infamous Hibernate concept called a bag and an exception like - MultipleBagFetchException: cannot simultaneously fetch multiple bags - which will force you to read a whole lot of text to find the information you need, digging through search results and StackOverflow without much luck, and eventually breaking your head ;)
  • There are also other ways to tackle this N+1 query problem. Writing a native query in the @Query annotation is another way. I wouldn't go that route as I don't want to get sucked in by the database. I am sure if you take that route, you will have someone around you ready to argue in favor of sub-selects, stored procedures etc. ;). My personal preference is to stay away from diving deep into the database, and avoid the abyss ;)

Sample Spring Boot Application - Country-State-City

Here is the link to a sample Spring Boot 3.2.3 GraphQL application which has both @Query JPQL way and @EntityGraph way of getting the single generated query that is performant in fetching all related data in one roundtrip.


Saturday, March 09, 2024

Spring Boot - Java GraphQL - extended scalar types . . .

This is my first te(a)ch note on GraphQL. I hit a couple of road blocks in the first few days of my hands-on journey with it. Unlike the good old days when books were the primary source of learning and had everything documented, there is no single place to find all the details these days.

Environment: Java 21, Spring Boot 3.2.3, PostgreSQL 15.3, maven 3.9.6 on macOS Catalina 10.15.7

Extended or Custom Scalar types

GraphQL specifies a very limited set of well-defined built-in scalar data types (primitive data types): Int, Float, String, Boolean and ID. GraphQL systems must support these as described in the specification. Everything else is an extended or custom scalar data type.

That, obviously, is a very limited set. All other data types need custom scalar implementations, which basically require coercing values at run-time and converting them to a Java run-time representation. Luckily, the Java ecosystem is so huge that you almost never need to break new ground doing so; you will usually find an open-source library that has tackled it already. graphql-java-extended-scalars is one such library for extended scalar data types in Java GraphQL.

The primitive data type set supported is just not enough. You at least need support for a few other data types used in any Java application, like Long, UUID, DateTime etc. They all need special consideration in your application's GraphQL schema. DateTime takes a very special seat. In fact, anything around dates in Java always scares me. To humans, date and time are the most obvious types in day-to-day life, but not in software systems. Date is the most abused data type of all. Just recollect how many billions of dollars were wasted on this one data type in 1998 and 1999 around the globe. More than two decades after learning that mistake, dates are still not dealt with easily; it is still a complex data type to deal with ;).

To use custom scalar types beyond that limited primitive set, you have to write code that handles serialization, parsing and literal parsing for each additional data type. The graphql-java-extended-scalars library provides implementations for many other data types.

With a Maven dependency added for this library, all you need to do is register each scalar data type with a RuntimeWiringConfigurer as described in the README. If you need to register multiple types, it's a builder, so you can just chain them like:

@Configuration
@Slf4j
public class GraphQLConfiguration {

    /**
     * Custom scalar support for UUID, Long, and DateTime. Registers extended scalar types used in the GraphQL query schema.
     */
    @Bean
    public RuntimeWiringConfigurer runtimeWiringConfigurer() {
        log.info("Registering extended GraphQL scalar types for UUID, Long, DateTime...");
        return wiringBuilder -> wiringBuilder.scalar(ExtendedScalars.UUID)
                .scalar(ExtendedScalars.GraphQLLong)
                .scalar(ExtendedScalars.DateTime);
    }
}

In addition to this, specify these scalar types in your application's schema.graphqls schema specification like:
"Extended scalar types" scalar UUID @specifiedBy(url: "https://tools.ietf.org/html/rfc4122") scalar Long @specifiedBy(url: "https://ibm.github.io/graphql-specs/custom-scalars/long.html") scalar DateTime @specifiedBy(url: "https://scalars.graphql.org/andimarek/date-time.html") ...

You are good to go.

Note that the extended scalar type for Long is named GraphQLLong by this library, but you should use Long in your schema when you declare it as shown above. The @specifiedBy directive is recommended by the GraphQL specification and is also a good practice to follow. Never ignore good practices ;)
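Once registered and declared, these scalars can be used in the schema just like the built-in types; a purely hypothetical type for illustration:

type Message {
    id: UUID!
    sequenceNumber: Long!
    createdOn: DateTime!
}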

Gotcha

Java JPA -  Instant vs. OffsetDateTime

If you are dealing with DateTime, make sure that whatever the Java type used in your code, it complies with GraphQL specification that requires date time offset.

I initially used the Instant type in my JPA BaseEntity class for two properties, createdOn and updatedOn, populated by Hibernate's @CreationTimestamp and @UpdateTimestamp and mapped to the PostgreSQL column type TIMESTAMP WITH TIME ZONE. I switched to OffsetDateTime because Instant is not supported, and never will be by this library, since it does not comply with the specification for DateTime. Java's Instant, Date and LocalDateTime do not include an offset.

OffsetDateTime is an immutable representation of a date-time with an offset. This class stores all date and time fields, to a precision of nanoseconds, as well as the offset from UTC/Greenwich.
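In entity code, the switch amounts to something like the sketch below (field names follow this post's createdOn/updatedOn naming):

import java.time.OffsetDateTime;

import jakarta.persistence.Column;
import jakarta.persistence.MappedSuperclass;
import org.hibernate.annotations.CreationTimestamp;
import org.hibernate.annotations.UpdateTimestamp;

@MappedSuperclass
public abstract class BaseEntity {

    // OffsetDateTime carries the UTC offset that the GraphQL DateTime scalar requires
    @CreationTimestamp
    @Column(nullable = false, updatable = false, columnDefinition = "TIMESTAMP WITH TIME ZONE")
    protected OffsetDateTime createdOn;

    @UpdateTimestamp
    @Column(nullable = false, columnDefinition = "TIMESTAMP WITH TIME ZONE")
    protected OffsetDateTime updatedOn;
}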

TIP

PostgreSQL offers two date-time types: timestamp and timestamptz (an abbreviation of timestamp with time zone).

The following query results tell the date-time story on this year's daylight savings day (Sun Mar 10, 2024). I ran it on my local PostgreSQL 15.3 running in a Docker container.

-- Ran the query on Mar 10, 2024 day light savings day at EST 5:13:13 PM, EDT: 17:13:13
select version();  -- PostgreSQL 15.3 (Debian 15.3-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
show time zone;    -- UTC

SELECT now(),                                                        -- 2024-03-10 21:13:13.956877 +00:00 (timestamp with time zone UTC)
       now() AT TIME ZONE 'EST' AS est,                              -- 2024-03-10 16:13:13.956877 (??)
       now() AT TIME ZONE 'EDT' AS edt,                              -- 2024-03-10 17:13:13.956877 (right)
       now() AT TIME ZONE 'CST' AS cst,                              -- 2024-03-10 15:13:13.956877 (??)
       now() AT TIME ZONE 'CDT' AS cdt,                              -- 2024-03-10 16:13:13.956877 (right)
       now()::timestamp AT TIME ZONE 'EDT' AS timestamp_without_tz,  -- 2024-03-11 01:13:13.956877 +00:00 (wrong)
       now()::timestamptz AT TIME ZONE 'EDT' AS timestamptz;         -- 2024-03-10 17:13:13.956877 (right)

Here is the DbFiddle playground to play with the above query.

That's it in this te(a)ch note, more might come in as I walk forward along this GraphQL path.

Sunday, March 03, 2024

Enums - all the way to persistence (revisited and revised for today's tech stack) . . .

About two years ago I blogged on this combination: Enums - all the way to persistence. Technology is moving at a faster pace than ever before. Java's release cadence is a rapid 6-month cycle, every March and September. Spring Boot catches up with Java and other technologies and moves along at the same pace, every 6 months in May and November. Of course, the PostgreSQL database, Hibernate and even the Maven build system keep moving as well, at their own pace.

The challenge for a Java developer is to keep up with all the moving technologies. As software development requires talented developers with years of experience and knowledge, debates go on about Artificial Intelligence (AI). Some who have moved away from coding strongly feel that generative AI, which is currently capable of generating even code, will replace software developers. I don't believe that, at least at this time. The add-on 'at least at this time' is only a cautious extension to that non-generative human statement. I have tried CodeGPT lately at work, a couple of times when I was stuck with things not working together as described in documentation and blog posts, asking its generative trained intelligence to be my copilot in those development situations. It couldn't really live up to the hype in any way, and I had to go and figure out all those situations myself.

Enums persistence is one such problem where I hit roadblocks again lately, after two years. The only change is newer versions of all of these technologies. It required additional exploration of a few things before arriving at a solution that eventually worked.

Environment: Java 21, Spring Boot 3.2.3, PostgreSQL 15.3, maven 3.9.6 on macOS Catalina 10.15.7

Spring Boot 3.2.3 Data JPA brings in the Hibernate 6.4.4 dependency.

The same persistent model described in my earlier blog post: Enums - all the way to persistence would need the following changes for enums to work.

The DDL script requires an extra PostgreSQL casting as shown below for enums:

-- create enum type genders
CREATE TYPE genders AS ENUM('MALE', 'FEMALE');
CREATE CAST (varchar AS genders) WITH INOUT AS IMPLICIT;

In the Maven pom.xml, the Spring Boot version is 3.2.3 and the hibernate-types-55 dependency is no longer needed.

Changes to domain object Person.java are shown below (the @TypeDef annotation is not required):

...
import jakarta.persistence.EnumType;
import jakarta.persistence.Enumerated;
import org.hibernate.annotations.JdbcTypeCode;
import org.hibernate.type.SqlTypes;
...
    @NotNull
    @Enumerated(EnumType.STRING)
    @JdbcTypeCode(SqlTypes.NAMED_ENUM)
    Gender gender;
...
}
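For completeness, the enum itself remains a plain Java enum whose constant names match the PostgreSQL enum labels created in the DDL above:

public enum Gender {
    MALE,
    FEMALE
}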

The changes look simple after figuring things out and making them work, but finding what works required a bit of exploration ;)


Thursday, February 08, 2024

Spring Boot - log database query and binding parameters . . .

Environment: Java 21, Spring Boot 3.2.2, PostgreSQL 16, maven 3.9.6 on macOS Catalina 10.15.7

In a Spring Boot, Spring Data JPA application with Hibernate as the default JPA implementation, to log the generated SQL along with the values bound to parameters, configure the following properties in the relevant environment-specific application.properties or application.yml file.
E.g. application-test.yml

logging:
  level:
    org.hibernate.SQL: debug
    org.hibernate.orm.jdbc.bind: trace
With the above Hibernate log levels, generated SQL query gets logged followed by binding parameter values.

To have the generated SQL query formatted, add the following JPA property:
spring:
  jpa:
    properties:
      hibernate:
        format_sql: true

Gotcha

Setting the property spring.jpa.show-sql to true is another way to see the generated SQL, but it only gets written to standard out, not to the logs.


Monday, January 15, 2024

Spring Boot - Docker Compose - Run init script . . .

Spring Boot 3.1 enhanced docker-compose support, making it a lot simpler and better suited for local development. With that, we don't need to worry about installing services like a database locally and managing them manually; Docker does that for us, and Spring Boot takes care of starting and stopping the Docker container.
 
This post is about the details explored on how to run an additional init db script with the PostgreSQL service defined in the Docker Compose file of a Spring Boot application.

Environment: Java 21, Spring Boot 3.2.1, PostgreSQL 16, maven 3.9.6 on macOS Catalina 10.15.7

The Scenario

My Spring Boot 3.2.x application uses a specific version of the PostgreSQL database. By leveraging Spring Boot's support for docker-compose in development, I would like to have a new schema and user created, granting the user the required privileges on the schema.

A typical PostgreSQL Service configuration in docker compose file looks like:
docker-compose.yml
version: '3'
services:
  PostgreSQL16:
    image: 'postgres:16.1'
    ports:
      - '54321:5432'
    environment:
      - 'POSTGRES_DB=my_app'
      - 'POSTGRES_USER=postgres'
      - 'POSTGRES_PASSWORD=s3cr3t'

In the above docker compose configuration, we have specified the database name, user, and password through environment variables in the container, and mapped a host port (local port) to the container port (default postgres port). With this, when docker-compose up is run to create and start the container, the my_app database gets created and PostgreSQL will be up and running in the container. The postgres user, created from the value of the POSTGRES_USER environment variable, is the superuser with access to and ownership of all database objects, including the public schema, which is the default schema created.

PostgreSQL 15 changes to public schema

From version 15 onwards, privileges on the public schema are restricted: the CREATE privilege is no longer granted to all users by default, only to the database owner. So it is good to create an application-specific schema and an application-specific database user with all needed privileges granted on that schema. This requires a way to run a one-time initial database script to create the application schema and user. The following shell script is an example of doing so:
init-database.sh
#!/bin/sh
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
  /* Create schema, user and grant permissions */
  CREATE SCHEMA my_app_schema;
  CREATE USER my_app_user_local WITH PASSWORD 'password';
  GRANT ALL PRIVILEGES ON SCHEMA my_app_schema TO my_app_user_local;
EOSQL

In order to run the above db init script when the container is created, reference the shell script under volumes: to mount the init file (./init-database.sh) into the container directory /docker-entrypoint-initdb.d/ as shown below:
docker-compose.yml
version: '3'
services:
  PostgreSQL16:
    image: 'postgres:16.1'
    ports:
      - '54321:5432'
    environment:
      - 'POSTGRES_DB=my_app'
      - 'POSTGRES_USER=postgres'
      - 'POSTGRES_PASSWORD=s3cr3t'
    volumes:
      - ./init-database.sh:/docker-entrypoint-initdb.d/init-database.sh
 
With this, when the Docker container for the PostgreSQL service is created, the init db script gets executed, resulting in the new schema my_app_schema and the user my_app_user_local with privileges granted.

Gotchas

Auto configured datasource properties
When the application is run, the PostgreSQL container is created and run by Spring Boot. Spring Boot also auto-configures the dataSource bean with the url, username, and password properties taken from the docker compose file. The user is the superuser created from the POSTGRES_USER container environment variable. If the application has any database initialization scripts under main/resources, like schema.sql for the initial schema, or Flyway scripts under main/resources/db/migration in a Flyway-enabled application, then all the database tables and other objects created are owned by the superuser, because the datasource uses the superuser to connect to the database.

If you want the database objects like tables and indices created in the application schema instead of public, you may need to specify it as a prefix in your schema.sql, or, for an app with Flyway support, set the spring.flyway.schemas property appropriately, e.g. in application.yml as sketched below.
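For a Flyway-enabled application, that could look something like the following in application.yml. The schema name is this post's example; default-schema is optional and controls where Flyway keeps its own history table:

spring:
  flyway:
    schemas: my_app_schema
    default-schema: my_app_schema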

TIPS

1. With docker-compose managing the database service, if you need to use psql, the terminal-based frontend to PostgreSQL, to connect to the db and run commands, invoke psql like:

$ # list docker containers running
$ docker ps -a
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS                     NAMES
7caf031c31a4   postgres:16.0   "docker-entrypoint.s…"   56 minutes ago   Up 56 minutes   0.0.0.0:5222->5432/tcp    docker-PostgreSQL16-1
45fa1a477ac3   postgres:15.3   "docker-entrypoint.s…"   4 days ago       Up 3 days       0.0.0.0:54321->5432/tcp   docker-compose-postgres15-1

$ # run psql command to connect to PostgreSQL16 db and list users
$ docker exec -it docker-PostgreSQL16-1 psql -U postgres
psql (16.0 (Debian 16.0-1.pgdg120+1))
Type "help" for help.

postgres=# \?
General
  \bind [PARAM]...        set query parameters
  \copyright              show PostgreSQL usage and distribution terms
  \crosstabview [COLUMNS] execute query and display result in crosstab
  \errverbose             show most recent error message at maximum verbosity
  \g [(OPTIONS)] [FILE]   execute query (and send result to file or |pipe);
                          \g with no arguments is equivalent to a semicolon
  \gdesc                  describe result of query, without executing it
  \gexec                  execute query, then execute each value in its result
  \gset [PREFIX]          execute query and store result in psql variables
  \gx [(OPTIONS)] [FILE]  as \g, but forces expanded output mode
  \q                      quit psql
--More--

postgres=# \dn
      List of schemas
  Name  |       Owner
--------+-------------------
 public | pg_database_owner
(1 row)

postgres=# \du
                                  List of roles
     Role name     |                         Attributes
-------------------+------------------------------------------------------------
 my_app_user_local |
 postgres          | Superuser, Create role, Create DB, Replication, Bypass RLS

postgres=# SELECT version();
                                                       version
---------------------------------------------------------------------------------------------------------------------
 PostgreSQL 16.0 (Debian 16.0-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
(1 row)

postgres=# \conninfo
You are connected to database "postgres" as user "postgres" via socket in "/var/run/postgresql" at port "5432".

postgres=# select current_date;
 current_date
--------------
 2024-01-15
(1 row)

postgres=# SHOW search_path;
   search_path
-----------------
 "$user", public
(1 row)

postgres=# \l
                                                   List of databases
     Name     |  Owner   | Encoding | Locale Provider |  Collate   |   Ctype    | ICU Locale | ICU Rules |   Access privileges
--------------+----------+----------+-----------------+------------+------------+------------+-----------+-----------------------
 boot-graalvm | postgres | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           |
 postgres     | postgres | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           |
 template0    | postgres | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/postgres          +
              |          |          |                 |            |            |            |           | postgres=CTc/postgres
 template1    | postgres | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/postgres          +
              |          |          |                 |            |            |            |           | postgres=CTc/postgres
(4 rows)

postgres=# \c boot-graalvm
You are now connected to database "boot-graalvm" as user "postgres".

boot-graalvm=# \dt
               List of relations
 Schema |         Name          | Type  |  Owner
--------+-----------------------+-------+----------
 public | account_holder        | table | postgres
 public | accounts              | table | postgres
 public | addresses             | table | postgres
 public | flyway_schema_history | table | postgres
(4 rows)

boot-graalvm=# \c postgres
You are now connected to database "postgres" as user "postgres".

postgres=# \q


Monday, January 08, 2024

Spring Boot - Check database connectivity after the application starts up . . .

Database integration is much simpler with Spring Boot's non-invasive auto-configuration. A typical Spring Boot application is configured to run in multiple environments, a.k.a. profiles. There are multiple options available for configuring the database: Docker Compose, Testcontainers, explicit profile-based DataSource properties/YAML, externalized DataSource properties through Vault etc. In any case, it is good to have a database connection check in place to make sure the connection looks good once the application boots up and starts to run.

Environment: Java 21, Spring Boot 3.2.1, PostgreSQL 16, maven 3.9.6 on macOS Catalina 10.15.7

The Scenario

The Database is PostgreSQL and we want to run a simple query to make sure that the database connection looks good once the application starts up.

One way to achieve this

One way to achieve this is to execute a simple query after the application starts up. Spring Boot's CommandLineRunner or ApplicationRunner can be leveraged for this; they are a good place to run specific code after the application has started.

Here is a code snippet for this:
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.core.simple.JdbcClient;

@SpringBootApplication
@Slf4j
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @Autowired(required = false)
    JdbcClient jdbcClient;

    @Bean
    public CommandLineRunner commandLineRunner() {
        return args -> {
            if (jdbcClient != null) {
                log.info("Database check: {}", jdbcClient.sql("SELECT version()").query(String.class).single());
            }
        };
    }
}

The CommandLineRunner bean above is the code that gets executed after the application has started. It just logs the result of the executed query, which is nothing but the database version.

An integration test case can also be put in place, as shown below, to make sure that the database connection and version look good. This kind of test case is good to have so that the code is tested against the same db version as production.

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.context.annotation.Import;
import org.springframework.jdbc.core.simple.JdbcClient;
import org.springframework.test.context.ActiveProfiles;

import static org.assertj.core.api.Assertions.*;

/**
 * An integration test to check Database connectivity.
 */
@ActiveProfiles("test")
// We don't want the H2 in-memory database.
// We will provide a custom 'test container' as DataSource, so don't replace it.
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@DataJpaTest
@Import(TestContainersConfiguration.class)
public class DatabaseCheckIT {

    @Autowired
    JdbcClient jdbcClient;

    @Test
    void database_connection_works_and_version_looks_good() {
        assertThat(jdbcClient.sql("SELECT version()").query(String.class).single())
                .contains("16.0");
    }
}

The above test case uses Testcontainers and a test configuration as shown below for unit/integration tests:

import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.context.annotation.Bean;
import org.testcontainers.containers.PostgreSQLContainer;

/**
 * Test Configuration for testcontainers.
 */
@TestConfiguration(proxyBeanMethods = false)
@Slf4j
public class TestContainersConfiguration {

    private static final String POSTGRES_IMAGE_TAG = "postgres:16.0";

    @Bean
    @ServiceConnection
    PostgreSQLContainer postgreSQLContainer() {
        return new PostgreSQLContainer<>(POSTGRES_IMAGE_TAG)
                .withDatabaseName("my-application")
                .withUsername("my-application")
                .withPassword("s3cr3t")
                .withReuse(true);
    }
}

Gotcha

Note that in the main application class, the @Autowired annotation on JdbcClient has its optional required property explicitly set to false. The reason: if there are integration test cases for specific layers (test slices like @GraphQlTest) that do not auto-configure a DataSource, then when the application class is run as part of starting the Spring Boot context, the test runs into an exception because no JdbcClient bean is available for autowiring. With required = false, the jdbcClient property is simply left null in those cases, so a null check is required to safely run the SQL statement.

💡 TIPS

The CommandLineRunner bean in the application class can, in theory, be defined conditionally on some DataSource-related bean/class by annotating it with @ConditionalOnBean or @ConditionalOnClass. However, I couldn't find a way to get it conditionally defined and working for all scenarios. A sketch of the approach is shown below.
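For reference, here is a minimal sketch of what that conditional definition might look like (the configuration class and bean names are hypothetical, not from the application above). Note that the Spring Boot documentation recommends using @ConditionalOnBean only on auto-configuration classes, because the order in which user-defined bean definitions are added is not guaranteed; that may be why this approach does not work reliably in every scenario.

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.simple.JdbcClient;

// Hypothetical configuration: registers the runner only when a JdbcClient bean exists.
@Configuration
public class DatabaseCheckConfig {

    @Bean
    @ConditionalOnBean(JdbcClient.class)
    CommandLineRunner databaseCheckRunner(JdbcClient jdbcClient) {
        // Injecting JdbcClient as a method parameter avoids the null check
        // needed with the optional field injection shown earlier.
        return args -> System.out.println("Database check: "
                + jdbcClient.sql("SELECT version()").query(String.class).single());
    }
}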



Monday, January 01, 2024

Polyglot makes you think better and do better - my musing . . .

Fluency in multiple spoken languages (being a polyglot) makes you think better and communicate even better. In Software Development, polyglot programming makes you a better Software Developer. Being able to code in more than one language makes you think differently and write better code.

No language is superior or best for all use-cases, and polyglot experience is very beneficial: it makes you think better when approaching a problem and shaping a solution. In the programming world, that matters even more than in the world of spoken languages.

Java has undoubtedly been the dominant programming language in the software world longer than any other, and it will probably remain dominant for many more years. I worked in Java for a decade before I moved to Groovy. For several years I enjoyed coding in Groovy and did not want to go back to Java. But life doesn't always go your way, and now I am back to Java. I'd rather say, I am back to Java with Groovy eyes and coding experience ;)

Groovy taught me many things in programming that I wouldn't have learnt, and it changed my object-oriented mindset to think differently, which would not have happened had I stuck only to Java. I notice that many developers who have been coding in just Java for a while still write Java 1.2-style code. Java is evolving faster now, for good, but many Java developers are not evolving at the same pace. Coming back to Java from Groovy, I am not hesitant to use any of the new features Java keeps adding version after version. I wrote production Java 13 code with multi-line text blocks when they were only a preview feature, requiring the --enable-preview flag for compilation and execution. Having experienced even superior multi-line strings in Groovy on the JVM, I just couldn't write code full of escaped quotes and + concatenations. Some developers wouldn't even put spaces between concatenated strings; my eyes get blurry and my mind goes blank when I see such code. Polyglot experience helped me embrace multi-line text blocks even as a preview feature in Java 13.
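As a quick illustration (a standalone sketch, not code from any application discussed here), this is the kind of difference a text block makes:

public class TextBlockDemo {

    public static void main(String[] args) {
        // Before text blocks: escaped quotes, \n, and + on every line
        String jsonOld = "{\n"
                + "  \"name\": \"monthly-report\",\n"
                + "  \"format\": \"csv\"\n"
                + "}";

        // With a text block (preview in Java 13/14, standard since Java 15)
        String json = """
                {
                  "name": "monthly-report",
                  "format": "csv"
                }""";

        // Prints true: both hold the same content
        System.out.println(json.equals(jsonOld));
    }
}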

In recent years, I had to get my hands dirty with a super-rigid family of simple Java applications written in an early-2000s style: main methods, tightly coupled code held together by inheritance, only static member variables throughout the class hierarchy, no sensible distinction between a class and an object, and, worst of all, a blindly followed ritual of manual code changes to be made and checked in after every single run, plus a lot of manual copying of input files before the run and result files after it. Bringing a new Java application member into this family required copying one of the existing applications and changing it to meet the new application's needs, with much of the code inherited from the hierarchy.

When I had to add a new member application to that family, I couldn't follow its legacy copy-and-paste tradition. DRY - Don't Repeat Yourself - is a principle I believe should be taught before programming itself. I still added the new member following all the messy inheritance, because the team was adamant upfront that nothing be refactored. That alone tells you how bad the code smells. At the very least I wanted to change the manual procedures, automate them, and end the practice of changing code for every run; a Java application's main method takes arguments for exactly this reason. In the past I worked for a financial company (a very rigid domain in the software field) and rewrote their bread-and-butter Oracle stored procedures that computed earnings at the end of each month: 10,000 lines of code without a single line of documentation, and the person who wrote it had left the company. Nobody dared to touch the code. People knew how to calculate earnings, but had no clue how it was implemented in the stored procedures. I rewrote the whole thing in Groovy as a simple runnable application with proper command-line support and all possible flexibility in how it could be run. The rewrite came to just a few hundred lines of code, and making it multi-threaded brought the month-end run time down from hours to minutes. That was about a decade ago. If I had done it in Java at that time, it would have taken at least five times the lines of code, with all the noise and boilerplate of dealing with the database.

In my current day-to-day development, Groovy is not an option for production code; it is Java only. But we catch up fast, adopting the latest version of Java in production within a few months of its release. That lets me leverage the most recent syntax improvements, language constructs, and feature additions in every version. In some cases, Java code now looks a little closer to Groovy-like code when the newer language features are used together with frameworks.
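To illustrate (again a standalone sketch, not from any application mentioned here), a record together with var and streams already reads terser and a little more Groovy-like than the classic bean-and-loop style:

import java.util.List;

public class NewerJavaFeel {

    // A record replaces a verbose getter/setter bean in one line
    record Account(String owner, double balance) { }

    public static void main(String[] args) {
        var accounts = List.of(
                new Account("alice", 120.0),
                new Account("bob", 80.0));

        // Local variable type inference and streams instead of explicit types and for-loops
        var total = accounts.stream()
                .mapToDouble(Account::balance)
                .sum();

        System.out.println("Total balance: " + total);
    }
}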

The very first step I took in adding a new application member to the legacy family was to find a good CLI framework for Java. I found Picocli, which is super simple to use: hardly any coding, just annotating the code. I used it, brought change to the family, and paved the path for newly joining members to follow. By leveraging Picocli and main-method arguments, I externalized a few hard-coded values as command-line arguments, which eliminated the need to change constants in the code and check the modified code into version control for every single run. I then automated a few more tasks, like renaming the generated file to meet an expected naming convention, copying it to another source repo, and checking it in.

Groovy's CliBuilder

In my Groovy development days, I used Groovy's CliBuilder, which ships with Groovy. Only a few lines of code make an application super flexible, letting the values passed as arguments at run time drive its internal implementation, processing, or any other such logic. That Groovy experience helped me think better and make the newly added Java application member a very flexible super-kid in the family, by leveraging Java's modern features and frameworks like Picocli.

Java - Picocli

Annotate a class and its fields, and add either the single Picocli source file or the Maven/Gradle dependency. With a couple of hours of exploration and reading the docs, you can add powerful CLI support to your Java application. It becomes runnable for various scenarios, with values passed through different arguments driving its functionality in specific ways. A hypothetical example is sketched below.
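Here is a minimal, hypothetical sketch of what such a command might look like; the command name, options, and report logic are made up for illustration, not taken from the legacy applications described above:

import java.util.concurrent.Callable;
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Hypothetical command: externalizes values that would otherwise be hard-coded constants.
@Command(name = "report-generator", mixinStandardHelpOptions = true,
        description = "Generates the monthly report file.")
public class ReportGeneratorCli implements Callable<Integer> {

    @Option(names = {"-m", "--month"}, required = true,
            description = "Reporting month in yyyy-MM format.")
    String month;

    @Option(names = {"-o", "--output-dir"}, defaultValue = "build/reports",
            description = "Directory where the generated file is written.")
    String outputDir;

    @Override
    public Integer call() {
        // Actual report generation would go here; this just shows the externalized inputs.
        System.out.printf("Generating report for %s into %s%n", month, outputDir);
        return 0;
    }

    public static void main(String[] args) {
        System.exit(new CommandLine(new ReportGeneratorCli()).execute(args));
    }
}

Running it then looks something like java ReportGeneratorCli --month 2024-01 --output-dir /tmp/reports, with --help and --version coming for free from mixinStandardHelpOptions, instead of editing constants and re-checking-in code for every run.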

Conclusion

Code should be written more for developers to read than for machines to execute. After all, a machine can execute any syntactically correct code. There is more to programming than syntax and semantics: READABILITY for humans. Code must first be readable before it is executable.

Change is constant and there is always scope for improvement, but ONLY if you are willing to learn, willing to change, and not afraid to improve ;)

References