Saturday, March 16, 2024

Spring Data JPA - Join Fetch, Entity Graphs - to fetch nested levels of OneToMany related data . . .

One-to-many relationships are quite common in databases. They are also quite cumbersome, given how many aspects need to be considered to implement them correctly. To list just a few: the related JPA annotations, the relationship keys specified in those annotations, fetch modes, fetch types, joins, query types, performance, the N+1 query problem, cartesian products, DISTINCT to eliminate duplicates, indexes, etc. With Spring Data JPA and Hibernate as the default implementation provider, a few JPA annotations, Hibernate-specific annotations, JPQL queries, and Java collection types all get added to the mix.

Environment: Java 21, Spring Boot 3.2.3, PostgreSQL 15.3, maven 3.9.6 on macOS Catalina 10.15.7

If you have JPA entities related through OneToMany relationships two or more levels deep, then some special care is required to fetch data in a performant manner, avoiding the classic N+1 query issue, or even just multiple queries. Each query is a roundtrip to the database and adds its own overhead.

Let's take a simple example: a Country has many States, and a State has many Cities. We want to represent this relationship in JPA entities and query it using Spring Data JPA repositories.

The entities with relationships look something like:

public abstract class BaseEntity implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(nullable = false, updatable = false)
    @ToString.Exclude
    protected Long id;

    /** For optimistic locking */
    @Version
    protected Long version;

    @CreationTimestamp
    @Column(nullable = false, updatable = false, columnDefinition = "TIMESTAMP WITH TIME ZONE")
    protected OffsetDateTime dateCreated = OffsetDateTime.now();

    @UpdateTimestamp
    @Column(nullable = false, columnDefinition = "TIMESTAMP WITH TIME ZONE")
    protected OffsetDateTime lastUpdated = OffsetDateTime.now();
}

@Entity
public class Country extends BaseEntity {

    private String name;

    @OneToMany(mappedBy = "country", cascade = CascadeType.ALL, orphanRemoval = true)
    @Builder.Default
    @ToString.Exclude
    private Set<State> states = new LinkedHashSet<>();
}

@Entity
public class State extends BaseEntity {

    private String name;

    // owning side of the relationship
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "country_id")
    @ToString.Exclude
    private Country country;

    @OneToMany(mappedBy = "state", cascade = CascadeType.ALL, orphanRemoval = true)
    @Builder.Default
    @ToString.Exclude
    private Set<City> cities = new LinkedHashSet<>();
}

@Entity
public class City extends BaseEntity {

    private String name;

    @ManyToOne
    @JoinColumn(name = "state_id")
    @ToString.Exclude
    private State state;
}

And a repository interface like:
@Repository
public interface CountryRepository extends JpaRepository<Country, Long> {

    Optional<Country> findByName(String name);
}

One way to fetch all the related data in a single query is to write a JPQL query with JOIN FETCH. This involves making sure that all @OneToMany annotated properties use Set and not List, not using FetchType.EAGER or FetchMode.JOIN, and writing a JPQL query with the @Query annotation as shown below. Make a note of both DISTINCT and JOIN FETCH. This results in one query which fetches, for a Country, all its States and, for each State, all its Cities. If it is a huge set of records, your best bet is the recommended @EntityGraph. Let's say our data is not huge and we want to use JPQL. In this case, the repository method annotated with a JPQL @Query looks like:

@Repository
public interface CountryRepository extends JpaRepository<Country, Long> {

    @Query("""
            SELECT DISTINCT country from Country country
            JOIN FETCH country.states state
            JOIN FETCH state.cities city
            WHERE country.name = :name
            """)
    Optional<Country> findByName(String name);
}
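For comparison, here is a minimal sketch of the @EntityGraph alternative mentioned above, replacing the JPQL version of the method. The attribute paths assume the entity fields shown earlier; Spring Data derives the query from the method name and fetches the graph in one generated query:

@Repository
public interface CountryRepository extends JpaRepository<Country, Long> {

    // Fetches states and their cities along with the country in a single query
    @EntityGraph(attributePaths = {"states", "states.cities"})
    Optional<Country> findByName(String name);
}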

Gotcha

  • If you use List instead of Set, you might bump into Hibernate's infamous bag concept and an exception like MultipleBagFetchException: cannot simultaneously fetch multiple bags, which will force you to dig through a whole lot of text, googled links, and StackOverflow without much luck, eventually breaking your head ;)
  • There are also other ways to tackle this problem. Writing a native query in the @Query annotation is one. I wouldn't go that route, as I don't want to get sucked into database specifics. I am sure if you take that route, you will have someone around you ready to argue in favor of sub-selects, stored procedures, etc. ;). My personal preference is to stay away from diving deep into the database and avoid the abyss ;)

Sample Spring Boot Application - Country-State-City

Here is the link to a sample Spring Boot 3.2.3 GraphQL application which demonstrates both the @Query JPQL way and the @EntityGraph way of getting a single generated query that performantly fetches all related data in one roundtrip.


Saturday, March 09, 2024

Spring Boot - Java GraphQL - extended scalar types . . .

This is my first te(a)ch note on GraphQL. I hit a couple of roadblocks in the first few days of my hands-on journey with it. Unlike the good old days when books were the primary source of learning and had everything documented, these days there is no single place to find all the details.

Environment: Java 21, Spring Boot 3.2.3, PostgreSQL 15.3, maven 3.9.6 on macOS Catalina 10.15.7

Extended or Custom Scalar types

GraphQL specifies a very limited set of well-defined built-in scalar data types (primitive data types): Int, Float, String, Boolean, and ID. GraphQL systems must support these as described in the specification. Everything else is an extended or custom scalar data type.

That, obviously, is a very limited set. All other data types need custom scalar implementations, which basically require coercing values at run-time and converting them to a Java run-time representation. Luckily, the Java ecosystem is so huge that you almost never need to break new ground in doing so; you will always find a few open-source libraries that have tackled it already for you. GraphQL Java Extended Scalars is one such library for extended scalar data types.

The primitive data type set supported is just not enough. You at least need support for a few other data types used in any Java application, like Long, UUID, DateTime, etc. They all need special consideration in your application's GraphQL schema. DateTime takes a very special seat. In fact, anything around dates in Java always scares me. To humans, date and time are the most obvious types in day-to-day life, but not in software systems. Date is abused more than any other data type. Just recollect how many billions of dollars were wasted on this one data type in 1998 and 1999 around the globe. More than two decades after learning that mistake, Date is still not dealt with easily; it remains a complex data type ;).

To use custom scalar types beyond that limited primitive set, you have to write code that handles serialization, parsing, and literal parsing for each additional data type. The graphql-java-extended-scalars library provides implementations for many such data types.

With a Maven dependency added for this library, all you need to do is register each scalar data type with a RuntimeWiringConfigurer as described in the README. If you need to register multiple types, it's a builder, so you can just chain them like:

import graphql.scalars.ExtendedScalars;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.graphql.execution.RuntimeWiringConfigurer;

@Configuration
@Slf4j
public class GraphQLConfiguration {

    /**
     * Custom scalar support for UUID, Long, and DateTime.
     * Registers extended scalar types used in the GraphQL schema.
     */
    @Bean
    public RuntimeWiringConfigurer runtimeWiringConfigurer() {
        log.info("Registering extended GraphQL scalar types for UUID, Long, DateTime...");
        return wiringBuilder -> wiringBuilder
                .scalar(ExtendedScalars.UUID)
                .scalar(ExtendedScalars.GraphQLLong)
                .scalar(ExtendedScalars.DateTime);
    }
}

In addition to this, specify these scalar types in your application's schema.graphqls schema specification like:
"Extended scalar types" scalar UUID @specifiedBy(url: "https://tools.ietf.org/html/rfc4122") scalar Long @specifiedBy(url: "https://ibm.github.io/graphql-specs/custom-scalars/long.html") scalar DateTime @specifiedBy(url: "https://scalars.graphql.org/andimarek/date-time.html") ...

You are good to go.

Note that the extended scalar type for Long is named GraphQLLong by this library. But you should use Long in your schema when you specify it, as shown above. The @specifiedBy directive is recommended by the GraphQL specification and is also a good practice to follow. Never ignore good practices ;)

Gotcha

Java JPA - Instant vs. OffsetDateTime

If you are dealing with DateTime, make sure that whatever Java type you use in your code complies with the GraphQL specification, which requires a date-time offset.

I initially used the Instant type in my JPA BaseEntity class for two properties, createdOn and updatedOn, annotated with Hibernate's @CreationTimestamp and @UpdateTimestamp and mapped to the PostgreSQL column type TIMESTAMP WITH TIME ZONE. I switched to the OffsetDateTime type because Instant is not supported by this library, and never will be, as it does not comply with the specification for DateTime. Java's Instant, Date, and LocalDateTime do not include an offset.

OffsetDateTime is an immutable representation of a date-time with an offset. This class stores all date and time fields, to a precision of nanoseconds, as well as the offset from UTC/Greenwich.
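For illustration, a minimal sketch of the switched properties, assuming the Hibernate annotations and column definition described above:

@CreationTimestamp
@Column(nullable = false, updatable = false, columnDefinition = "TIMESTAMP WITH TIME ZONE")
protected OffsetDateTime createdOn;   // was Instant; OffsetDateTime carries the required offset

@UpdateTimestamp
@Column(nullable = false, columnDefinition = "TIMESTAMP WITH TIME ZONE")
protected OffsetDateTime updatedOn;   // was Instant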

TIP

PostgreSQL offers two date-time types: timestamp and timestamptz (an abbreviation of timestamp with time zone).

The following query results tell the date-time story on this year's daylight saving time switch day (Sun Mar 10, 2024). I ran it on my local PostgreSQL 15.3 running in a Docker container.

-- Ran the query on Mar 10, 2024, daylight saving switch day, at EST 5:13:13 PM (EDT: 17:13:13)
select version();
-- PostgreSQL 15.3 (Debian 15.3-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit

show time zone;
-- UTC

SELECT now(),                           -- 2024-03-10 21:13:13.956877 +00:00 (timestamp with time zone UTC)
       now() AT TIME ZONE 'EST' AS est, -- 2024-03-10 16:13:13.956877 (??)
       now() AT TIME ZONE 'EDT' AS edt, -- 2024-03-10 17:13:13.956877 (right)
       now() AT TIME ZONE 'CST' AS cst, -- 2024-03-10 15:13:13.956877 (??)
       now() AT TIME ZONE 'CDT' AS cdt, -- 2024-03-10 16:13:13.956877 (right)
       now()::timestamp AT TIME ZONE 'EDT' AS timestamp_without_tz, -- 2024-03-11 01:13:13.956877 +00:00 (wrong)
       now()::timestamptz AT TIME ZONE 'EDT' AS timestamptz;        -- 2024-03-10 17:13:13.956877 (right)

Here is the DbFiddle playground to play with the above query.

That's it in this te(a)ch note, more might come in as I walk forward along this GraphQL path.

Sunday, March 03, 2024

Enums - all the way to persistence (revisited and revised for today's tech stack) . . .

About two years ago I blogged on this combination: Enums - all the way to persistence. Technology is moving at a faster pace than ever before. Java's release cadence is a rapid 6-month cycle, every March and September. Spring Boot keeps up with Java and other technologies and moves along at the same pace, every 6 months in May and November. Of course, the PostgreSQL database, Hibernate, and even the Maven build system keep moving as well, at their own pace.

The challenge for Java developers is to keep up with all these moving technologies. Because software development requires talented developers with years of experience and knowledge, debates go on about Artificial Intelligence (AI). Some who have moved away from coding strongly feel that generative AI, which is currently capable of generating even code, will replace software developers. I don't believe that, at least at this time. The add-on 'at least at this time' is only a cautious extension to that non-generative human statement ;). I have tried CodeGPT lately at work, a couple of times when I was stuck with things not working together as described in documentation and blog posts, asking its generatively trained intelligence to be my copilot in those development situations. It couldn't really live up to the hype in any way, and I had to go figure out all those situations myself.

Enum persistence is one such problem where I hit roadblocks again, two years later. The only change is newer versions of all of these technologies. It required additional exploration of a few things before arriving at a solution that eventually worked.

Environment: Java 21, Spring Boot 3.2.3, PostgreSQL 15.3, maven 3.9.6 on macOS Catalina 10.15.7

Spring Boot 3.2.3 Data JPA brings in Hibernate 6.4.4 as a dependency.

The same persistent model described in my earlier blog post: Enums - all the way to persistence would need the following changes for enums to work.

The DDL script requires an extra PostgreSQL cast for enums, as shown below:

-- create enum type genders
CREATE TYPE genders AS ENUM(
    'MALE',
    'FEMALE'
);

CREATE CAST (varchar AS genders) WITH INOUT AS IMPLICIT;

In the Maven pom.xml, the Spring Boot version is 3.2.3 and the hibernate-types-55 dependency is no longer needed.

Changes to domain object Person.java are shown below (the @TypeDef annotation is not required):

...
import jakarta.persistence.EnumType;
import jakarta.persistence.Enumerated;
import org.hibernate.annotations.JdbcTypeCode;
import org.hibernate.type.SqlTypes;
...

    @NotNull
    @Enumerated(EnumType.STRING)
    @JdbcTypeCode(SqlTypes.NAMED_ENUM)
    Gender gender;
...
}
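For completeness, the enum itself is plain Java. Since EnumType.STRING is used, the constant names must match the database enum labels defined in the DDL above:

public enum Gender {
    MALE,
    FEMALE
}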

The changes look simple after figuring things out and making them work, but finding what works required a bit of exploration ;)


Thursday, February 08, 2024

Spring Boot - log database query and binding parameters . . .

Environment: Java 21, Spring Boot 3.2.2, PostgreSQL 16, maven 3.9.6 on macOS Catalina 10.15.7

In a Spring Boot, Spring Data JPA application with Hibernate as the default JPA implementation, to log the generated SQL along with the values bound to parameters, configure the following properties in the relevant environment-specific application.properties or application.yml file.
E.g. application-test.yml

logging:
  level:
    org.hibernate.SQL: debug
    org.hibernate.orm.jdbc.bind: trace
With the above Hibernate log levels, the generated SQL query gets logged, followed by the bound parameter values.

To have the generated SQL query formatted, add the following JPA property:
spring:
  jpa:
    properties:
      hibernate:
        format_sql: true

Gotcha

Setting the property spring.jpa.show-sql to true is another way to see the generated SQL, but it is only printed to standard out, not to the logs.


Monday, January 15, 2024

Spring Boot - Docker Compose - Run init script . . .

Spring Boot 3.1 enhanced docker-compose support, making it a lot simpler and better suited for local development. With it, we don't need to worry about installing services like a database locally and managing them manually; Docker does that for us, and Spring Boot does the rest of starting and stopping the Docker containers.
 
This post details how to run an additional database init script with the PostgreSQL service defined in the Docker Compose file of a Spring Boot application.

Environment: Java 21, Spring Boot 3.2.1, PostgreSQL 16, maven 3.9.6 on macOS Catalina 10.15.7

The Scenario

My Spring Boot 3.2.x application uses a specific version of the PostgreSQL database. By leveraging Spring Boot's docker-compose support in development, I would like to have a new schema and user created, granting the user the required privileges on the schema.
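As a quick reminder, Spring Boot's docker-compose support comes from a dedicated module. A minimal sketch of the Maven dependency, assuming a Maven build:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-docker-compose</artifactId>
    <optional>true</optional>
</dependency>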

A typical PostgreSQL service configuration in a Docker Compose file looks like:
docker-compose.yml
version: '3'
services:
  PostgreSQL16:
    image: 'postgres:16.1'
    ports:
      - '54321:5432'
    environment:
      - 'POSTGRES_DB=my_app'
      - 'POSTGRES_USER=postgres'
      - 'POSTGRES_PASSWORD=s3cr3t'

In the above Docker Compose configuration, we have specified the database name, user, and password through environment variables in the container, and mapped a host port (local port) to the container port (the default postgres port). With this, when docker-compose up is run to create and start the container, the my_app database gets created and PostgreSQL will be up and running in the container. The postgres user is the superuser, with access to and ownership of all database objects including the public schema. When the Docker container is created for the PostgreSQL16 database service, the value of the POSTGRES_USER environment variable is used to create the superuser, and public is the default schema.

PostgreSQL 15 changes to public schema

From version 15 onwards, privileges on the public schema are restricted: regular users can no longer create objects in it by default. So it is good practice to create an application-specific schema and an application-specific database user, with all needed privileges granted on that schema. This requires a way to run a one-time database init script that creates the application schema and user. The following shell script is an example:
init-database.sh
#!/bin/sh
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    /* Create schema, user and grant permissions */
    CREATE SCHEMA my_app_schema;
    CREATE USER my_app_user_local WITH PASSWORD 'password';
    GRANT ALL PRIVILEGES ON SCHEMA my_app_schema TO my_app_user_local;
EOSQL

To run the above init script when the container is created, reference the shell script under volumes:, mounting the local file (./init-database.sh) into the container's init directory (/docker-entrypoint-initdb.d/) as shown below:
docker-compose.yml
version: '3'
services:
  PostgreSQL16:
    image: 'postgres:16.1'
    ports:
      - '54321:5432'
    environment:
      - 'POSTGRES_DB=my_app'
      - 'POSTGRES_USER=postgres'
      - 'POSTGRES_PASSWORD=s3cr3t'
    volumes:
      - ./init-database.sh:/docker-entrypoint-initdb.d/init-database.sh
 
With this, when the Docker container is created for the PostgreSQL service, the init script gets executed, resulting in the new schema my_app_schema and the user my_app_user_local with privileges granted.

Gotchas

Auto-configured DataSource properties
When the application is run, the PostgreSQL container is created and run by Spring Boot. Spring Boot also auto-configures the DataSource bean with the url, username, and password properties taken from the Docker Compose file. The user is the superuser created from the POSTGRES_USER container environment variable. If the application has any database initialization scripts under main/resources, like schema.sql for the initial schema, or Flyway scripts under main/resources/db/migration in a Flyway-enabled application, all the database tables and other objects created are owned by the superuser, since the DataSource connects to the database as the superuser.

If you want database objects like tables and indices created in the application schema instead of public, you may need to specify the schema as a prefix in your schema.sql, or, for an app with Flyway support, set the property spring.flyway.schemas appropriately, e.g. in application.yml, as sketched below.
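A minimal application.yml sketch for the Flyway case, assuming the schema name from the init script above:

spring:
  flyway:
    # Flyway manages (and creates its history table in) this schema
    schemas: my_app_schema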

TIPS

1. With docker-compose managing the database service, if you need to use psql, the terminal-based front-end to PostgreSQL, to connect to the database and run commands, invoke it like:

$ # list docker containers running
$ docker ps -a
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS                     NAMES
7caf031c31a4   postgres:16.0   "docker-entrypoint.s…"   56 minutes ago   Up 56 minutes   0.0.0.0:5222->5432/tcp    docker-PostgreSQL16-1
45fa1a477ac3   postgres:15.3   "docker-entrypoint.s…"   4 days ago       Up 3 days       0.0.0.0:54321->5432/tcp   docker-compose-postgres15-1

$ # run psql command to connect to PostgreSQL16 db and list users
$ docker exec -it docker-PostgreSQL16-1 psql -U postgres
psql (16.0 (Debian 16.0-1.pgdg120+1))
Type "help" for help.

postgres=# \?
General
  \bind [PARAM]...        set query parameters
  \copyright              show PostgreSQL usage and distribution terms
  \crosstabview [COLUMNS] execute query and display result in crosstab
  \errverbose             show most recent error message at maximum verbosity
  \g [(OPTIONS)] [FILE]   execute query (and send result to file or |pipe);
                          \g with no arguments is equivalent to a semicolon
  \gdesc                  describe result of query, without executing it
  \gexec                  execute query, then execute each value in its result
  \gset [PREFIX]          execute query and store result in psql variables
  \gx [(OPTIONS)] [FILE]  as \g, but forces expanded output mode
  \q                      quit psql
--More--

postgres=# \dn
      List of schemas
  Name  |       Owner
--------+-------------------
 public | pg_database_owner
(1 row)

postgres=# \du
                              List of roles
     Role name     |                         Attributes
-------------------+------------------------------------------------------------
 my_app_user_local |
 postgres          | Superuser, Create role, Create DB, Replication, Bypass RLS

postgres=# SELECT version();
                                                       version
---------------------------------------------------------------------------------------------------------------------
 PostgreSQL 16.0 (Debian 16.0-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
(1 row)

postgres=# \conninfo
You are connected to database "postgres" as user "postgres" via socket in "/var/run/postgresql" at port "5432".

postgres=# select current_date;
 current_date
--------------
 2024-01-15
(1 row)

postgres=# SHOW search_path;
   search_path
-----------------
 "$user", public
(1 row)

postgres=# \l
                                    List of databases
     Name     |  Owner   | Encoding | Locale Provider |  Collate   |   Ctype    | ICU Locale | ICU Rules |   Access privileges
--------------+----------+----------+-----------------+------------+------------+------------+-----------+-----------------------
 boot-graalvm | postgres | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           |
 postgres     | postgres | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           |
 template0    | postgres | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/postgres          +
              |          |          |                 |            |            |            |           | postgres=CTc/postgres
 template1    | postgres | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/postgres          +
              |          |          |                 |            |            |            |           | postgres=CTc/postgres
(4 rows)

postgres=# \c boot-graalvm
You are now connected to database "boot-graalvm" as user "postgres".

boot-graalvm=# \dt
               List of relations
 Schema |         Name          | Type  |  Owner
--------+-----------------------+-------+----------
 public | account_holder        | table | postgres
 public | accounts              | table | postgres
 public | addresses             | table | postgres
 public | flyway_schema_history | table | postgres
(4 rows)

boot-graalvm=# \c postgres
You are now connected to database "postgres" as user "postgres".

postgres=# \q


Monday, January 08, 2024

Spring Boot - Check database connectivity after the application starts up . . .

Database integration is much simpler with Spring Boot's non-invasive auto-configuration. A typical Spring Boot application is configured to run in multiple environments, a.k.a. profiles. However, there are multiple options available when it comes to configuring the database: Docker Compose, Testcontainers, explicit profile-based DataSource properties/yaml, externalized DataSource properties through Vault, etc. In any case, it is good to have a database connection check in place to make sure that the connection looks good once the application boots up and starts to run.

Environment: Java 21, Spring Boot 3.2.1, PostgreSQL 16, maven 3.9.6 on macOS Catalina 10.15.7

The Scenario

The Database is PostgreSQL and we want to run a simple query to make sure that the database connection looks good once the application starts up.

One way to achieve this

One way to achieve this is to execute a simple query after the application starts up. Spring Boot's CommandLineRunner or ApplicationRunner can be leveraged to do this. This is a good place to run specific code after the application has started.

Here is a code snippet for this:
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.core.simple.JdbcClient;

@SpringBootApplication
@Slf4j
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @Autowired(required = false)
    JdbcClient jdbcClient;

    @Bean
    public CommandLineRunner commandLineRunner() {
        return args -> {
            if (jdbcClient != null) {
                log.info("Database check: {}",
                        jdbcClient.sql("SELECT version()").query(String.class).single());
            }
        };
    }
}

The CommandLineRunner bean above is the code that gets executed after the application starts. It just logs the executed query result, which is nothing but the database version.

An integration test case can also be put in place, as shown below, which makes sure that the database connection and version look good. This kind of test case is good to have to ensure the code is tested against the same database version as production.

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.context.annotation.Import;
import org.springframework.jdbc.core.simple.JdbcClient;
import org.springframework.test.context.ActiveProfiles;

import static org.assertj.core.api.Assertions.*;

/**
 * An integration test to check Database connectivity.
 */
@ActiveProfiles("test")
// We don't want the H2 in-memory database.
// We will provide a custom 'test container' as DataSource, so don't replace it.
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@DataJpaTest
@Import(TestContainersConfiguration.class)
public class DatabaseCheckIT {

    @Autowired
    JdbcClient jdbcClient;

    @Test
    void database_connection_works_and_version_looks_good() {
        assertThat(jdbcClient.sql("SELECT version()").query(String.class).single())
                .contains("16.0");
    }
}

The above test case uses Testcontainers and a test configuration as shown below for unit/integration tests:

import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.context.annotation.Bean;
import org.testcontainers.containers.PostgreSQLContainer;

/**
 * Test Configuration for testcontainers.
 */
@TestConfiguration(proxyBeanMethods = false)
@Slf4j
public class TestContainersConfiguration {

    private static final String POSTGRES_IMAGE_TAG = "postgres:16.0";

    @Bean
    @ServiceConnection
    PostgreSQLContainer<?> postgreSQLContainer() {
        return new PostgreSQLContainer<>(POSTGRES_IMAGE_TAG)
                .withDatabaseName("my-application")
                .withUsername("my-application")
                .withPassword("s3cr3t")
                .withReuse(true);
    }
}

Gotcha

Note that in the main application class, the @Autowired annotation on JdbcClient has its optional required property explicitly set to false. The reason: if there are integration tests for specific layers (test slices like @GraphQlTest) that do not auto-configure a DataSource, then when the application class is run as part of starting the Spring Boot test, the test runs into an exception because no JdbcClient bean is available for autowiring. In those cases the jdbcClient property is null, so a null check is required to safely run the SQL statement.
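As an alternative sketch (not from the original post), Spring's ObjectProvider (org.springframework.beans.factory.ObjectProvider) can express the same optional dependency without a nullable field; the bean method receives the provider and only runs the query if a JdbcClient is actually available:

@Bean
public CommandLineRunner commandLineRunner(ObjectProvider<JdbcClient> jdbcClientProvider) {
    // ifAvailable runs only when a JdbcClient bean exists,
    // e.g. it is skipped in @GraphQlTest slices with no DataSource
    return args -> jdbcClientProvider.ifAvailable(jdbcClient ->
            log.info("Database check: {}",
                    jdbcClient.sql("SELECT version()").query(String.class).single()));
}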

💡 TIPS

The CommandLineRunner bean in the application class could, in theory, be conditionally defined on some DataSource-related bean/class by annotating it with @ConditionalOnBean or @ConditionalOnClass. However, I couldn't find a way to get it conditionally defined and working for all scenarios.



Monday, January 01, 2024

Polyglot makes you think better and do better - my musing . . .

Fluency in multiple spoken languages (being a polyglot) makes you think better and communicate even better. In software development, polyglot programming makes you a better software developer. Being able to code in more than one language makes you think differently and write better code.

No language is superior or best for all use-cases. Polyglot experience is very beneficial: it makes you think better when approaching a problem for a solution. In the software programming world, it matters even more than in the spoken world.

Java is undoubtedly the programming language that has been dominant in the software world longer than any other, and it will probably continue to remain dominant for many more years. I worked in Java for a decade before I moved to Groovy. For several years I enjoyed coding in Groovy and did not want to go back to Java. Life doesn't go your way. And now, I am back to Java. I'd rather say, I am back to Java with Groovy eyes and coding experience ;)

Groovy taught me many things in programming which I otherwise wouldn't have learnt, and it changed my object-oriented mindset to think differently, which wouldn't have happened if I had just stuck to Java. I notice a lot that developers who have been coding in just Java for a while still write Java 1.2 code. Java is evolving faster now, for good, but Java developers are not evolving at the same pace with it. Coming back to Java from Groovy, I am not hesitant to use any of the new features that Java keeps adding, version after version, at a fast pace. I wrote production code with multi-line text blocks when they were only a preview feature in Java 13, requiring the --enable-preview flag for compilation and execution. Having experienced even superior multi-line text blocks in Groovy on the JVM, I just couldn't write code with several quotes and pluses. Some developers wouldn't even put spaces between concatenated strings; my eyes get blurry and my mind goes blank when I see such code. Polyglot experience helped me embrace multi-line text blocks even as a preview feature in Java 13.

Once in recent years, I had to get my hands dirty with a super-rigid family of simple Java applications written the early-twenty-first-century way: main methods, tightly coupled code with inheritance, only static member variables in the class hierarchy, no sensible difference between a class and an object, and, worst of all, quite a bit of blindly followed manual code changes to be made and checked in after every single run, plus a lot of manual copying of input files before each run and result files after it. Bringing a new Java application into this family required copying one of the existing applications and changing it to meet the new application's needs, with much of the code inherited from the hierarchy.

When I had to add a new member application to that family, I couldn't follow the family legacy of copy-and-paste tradition. DRY - Don't Repeat Yourself - is the principle that I believe should be taught before even teaching programming. I added a new member following all the messy inheritance, as the family was adamant upfront about not refactoring anything. OK, that tells you how bad the code smells. At least I wanted to change the manual procedures, automate them, and end the practice of changing code for every run; a Java application's main method takes arguments for exactly this reason. I worked for a financial company (a very rigid domain in the software field) in the past and rewrote their bread-and-butter Oracle stored procedures, which computed earnings at the end of each month: 10,000 lines of code without a single line of documentation, and the person who wrote it had left the company. Nobody dared to touch the code. People knew how to calculate earnings, but had no clue how it was implemented in the stored procedures. I rewrote the whole app in Groovy as a simple runnable app with superior command-line support and all possible run-time flexibility. The rewrite took just a few hundred lines of code, and making it multi-threaded brought the month-end run time down from hours to minutes. That was about a decade ago. If I had done it in Java at that time, it would have taken at least five times the lines of the Groovy version, with noise and boilerplate in dealing with the database.

In my current day-to-day development, Groovy is not a choice for production code; only Java. But we catch up fast, using the latest versions of Java in production code a few months after each release. That lets me leverage the most recent syntax improvements, language constructs, and feature enhancements added in every version. In some cases, Java code now looks a little closer to Groovy-like code when newer language features are used together with frameworks.

The very first step I took in adding a new application member to the legacy family was to find a good Java CLI framework. I found Picocli, which is super simple to use: no coding, just annotating code. There you go; I used it, brought a change to the family, and paved the path for newly joining members to follow. This eliminated the need to change hard-coded constants for every run and check the modified code into version control. By leveraging Picocli and main method arguments, I externalized a few hard-coded values as arguments. That eliminated the need to touch code for every single run. Then I automated some more tasks, like renaming the generated file to meet an expected naming convention, copying it to another source repo, and checking it in.

Groovy's CliBuilder

In my Groovy development days, I used Groovy's CliBuilder, which comes with Groovy. Only a few lines of code make an application super flexible, letting arguments drive the internal implementation, processing, or any such logic that depends on values passed at run time. My Groovy experience helped me a lot to think better and make the newly added Java application member a very flexible super-kid in the family, by leveraging Java's modern features and frameworks like Picocli.

Java - Picocli

Annotate a class and its fields, and add either the Picocli class itself or a Maven/Gradle dependency. After a quick couple of hours of exploration and reading the docs, you can add a powerful CLI feature to your Java application in a few minutes. It makes the application runnable for various scenarios, with argument values driving its functionality in specific ways.
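A minimal sketch of a Picocli-based command; the command name, options, and processing logic here are hypothetical, but the annotations and CommandLine API are Picocli's own:

import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

import java.util.concurrent.Callable;

// Hypothetical month-end report command
@Command(name = "monthly-report", mixinStandardHelpOptions = true,
        description = "Generates the month-end report.")
public class MonthlyReport implements Callable<Integer> {

    @Option(names = {"-m", "--month"}, required = true,
            description = "Month to process, e.g. 2024-01")
    private String month;

    @Option(names = {"-o", "--output"}, defaultValue = "report.csv",
            description = "Output file name (default: ${DEFAULT-VALUE})")
    private String output;

    @Override
    public Integer call() {
        // Externalized values arrive here as arguments instead of hard-coded constants
        System.out.printf("Generating %s for %s%n", output, month);
        return 0;
    }

    public static void main(String[] args) {
        System.exit(new CommandLine(new MonthlyReport()).execute(args));
    }
}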

Conclusion

Code should be written more for developers to read than for machines to execute. After all, a machine can execute any syntactically correct code. There is more to programming than syntax and semantics: READABILITY for humans. Code must first be readable before it is executable.

Change is a constant, and there is always scope for improvement, ONLY if you are willing to learn and change, and are not afraid to improve ;)
