
Friday, September 20, 2024

Java - Gotcha - Sealed interface and mocking in unit tests . . .

Seal(noun)
dictionary meaning - a device or substance that is used to join two things together so as to prevent them from coming apart or to prevent anything from passing between them. 

Prevent anything from passing between them. That's exactly what you sometimes want to put in place. When you have an interface and want to restrict which interfaces can extend it or which classes can implement it, you need to seal your interface by specifying all those that are permitted to extend or implement it.

Sealed interfaces were introduced in Java 15 as a preview feature and became a standard feature in Java 17. A sealed interface restricts which classes or interfaces can implement or extend it. Classes that implement a sealed interface must be declared final, sealed, or non-sealed. This provides more control over the inheritance hierarchy and helps enforce certain design constraints.
 
To declare a sealed interface, use the sealed keyword followed by the permits clause, which lists the permitted subtypes.

E.g.
public sealed interface Shape permits Circle, Rectangle, Triangle {
    double area();
}

Each permitted subtype must be declared as one of the following:
  • Final: Cannot be extended further.
  • Sealed: Can specify its own permitted subtypes.
  • Non-Sealed: Removes the sealing restriction, allowing any class to extend it.
// Final class: cannot be extended further
public final class Circle implements Shape { ... }

// Sealed class: specifies its own permitted subtype
public sealed class Rectangle implements Shape permits Square { ... }

// Non-sealed class: removes the sealing restriction for this branch
public non-sealed class Square extends Rectangle { ... }

Benefits of Sealed Interfaces

Enhanced Control: Provide more control over the inheritance hierarchy, ensuring that only specific classes can implement the interface.
Improved Maintainability: By restricting the set of permitted subtypes, you can make your codebase easier to understand and maintain.
Better Exhaustiveness Checking: Sealed interfaces improve exhaustiveness checking in switch statements, especially when used with pattern matching (introduced in later Java versions).

The exhaustiveness checking in switch statements is itself a very useful feature: it prevents your code from missing the handling of a case of the interface type in a switch, which otherwise is prone to bugs. The compiler will not let your code compile until all possible cases are handled in the switch statement, making your code more robust.

Shape aShape;
...
switch (aShape) {
    case Circle circle -> circle.radius();
    case Rectangle rectangle -> ... // do something
    // handle all remaining cases or provide a default case,
    // otherwise your code fails compilation
}

Gotcha - Mockito, mocking sealed interface

Mocking is common in unit testing. If you are writing a unit test for an object A that depends on object B, you are not interested in B and can simply mock its behavior. If Mockito is your mocking framework, and B happens to be a sealed interface with some permitted implementations, then you will not be able to mock it the way you usually do:

class ATest {
    ...
    @Mock
    private B objB;
    ...
}

Your test fails with the following error when it is run:
org.mockito.exceptions.base.MockitoException:
Mockito cannot mock this class: interface B.
If you're not sure why you're getting this error, please open an issue on GitHub.

Java               : 22
JVM vendor name    : Amazon.com Inc.
JVM vendor version : 22.0.1+8-FR
JVM name           : OpenJDK 64-Bit Server VM
JVM version        : 22.0.1+8-FR
JVM info           : mixed mode, sharing
OS name            : Mac OS X
OS version         : 13.6.6

You are seeing this disclaimer because Mockito is configured to create inlined mocks.
You can learn about inline mocks and their limitations under item #39 of the Mockito class javadoc.

Underlying exception : org.mockito.exceptions.base.MockitoException: Unsupported settings with this type 'B'

Solution
Change the mock to a specific implementation of the interface.
private final B objB = mock(BImpl.class); // sealed interface, specify specific implementation class to be mocked
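Putting it together, a minimal sketch might look like the following, where B, BImpl, and ATest are hypothetical names. Note that mocking the final class BImpl relies on Mockito's inline mock maker (the default in recent Mockito versions):

```java
// Hypothetical sealed interface with one permitted implementation
public sealed interface B permits BImpl {
    String doWork();
}

public final class BImpl implements B {
    @Override
    public String doWork() { return "real work"; }
}

// In the unit test for A, mock the concrete permitted class instead of the
// sealed interface itself
class ATest {
    private final B objB = org.mockito.Mockito.mock(BImpl.class);
    // ... inject objB into the object under test and stub as usual
}
```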

Wednesday, August 21, 2024

Spring Data JPA limitation with LIMIT fetch . . .

In modern Java Spring-based applications, Spring Data JPA is quite a common way to interface with the database. Domain/business objects carry the persistable state of the business process, and with a few JPA annotations, POJOs can be enhanced into persistable domain objects. Unlike the Grails framework, which builds on Hibernate, leverages GORM, and elevates and enriches domain objects by making them persistence aware, Spring Data JPA keeps the persistence in a separate abstraction layer called the Repository.

With Spring Data JPA, the Repository is the central interface, and it requires one to be familiar with the Repository abstractions. Queries can be defined as interface methods, and the implementation is provided by the Spring Data JPA framework by 1) deriving it from method naming conventions, or 2) using manually defined queries with the @Query annotation, written in JPQL or native SQL. My first choice is interface method naming following the conventions. Next is JPQL. I avoid native queries unless there is a strong reason for them.
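As an illustration of those two styles, here is a sketch of a hypothetical repository (entity and method names are made up for this example):

```java
// Hypothetical repository showing both query styles for a Message entity
public interface MessageRepository extends JpaRepository<Message, Long> {

    // 1) derived from the method name:
    //    generates WHERE type = ? ORDER BY created_on DESC
    List<Message> findByTypeOrderByCreatedOnDesc(MessageType type);

    // 2) manually defined JPQL via @Query
    @Query("SELECT m FROM Message m WHERE m.type = :type")
    List<Message> findAllByTypeJpql(@Param("type") MessageType type);
}
```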

JPQL Limitation with LIMIT fetch

One of the limitations I ran into recently with JPQL was limiting the query results to a fixed number of records, say one record. In native SQL this is typically done by adding a LIMIT clause, e.g. LIMIT 1 to fetch only the first result. JPQL lets you specify LIMIT, which also works, but under the covers the LIMIT is applied in memory to the fetched results. In other words, the LIMIT clause doesn't exist in the generated native SQL. So the SQL fetches all the results that match the criteria, a collection of entity objects gets created, and then the LIMIT is applied to pick one object. The JPQL query does its job as specified, but incurs an expensive query by fetching more records than needed, creating the objects in a collection, and then returning one object in accordance with LIMIT 1.

So, an example Repository method annotated like the following returns one object, but fetches all records that match the criteria into memory and returns the first one from the collection.
@Query("""
    SELECT msg FROM Message msg
    WHERE msg.type = :type
    ORDER BY msg.createdOn DESC
    LIMIT 1
    """)
Optional<Message> findLatestByType(MessageType type);

In order to truly fetch the most recent message of a given type the JPQL needs to be optimized to fetch only one record.

With JPQL, the query may need to be rewritten something like the following, without using LIMIT, assuming id is the primary key and is sequence-generated. It is more performant, with no additional index to create, than using a createdOn auditable column if there is one.
@Query("""
    SELECT msg FROM Message msg
    WHERE msg.id = (
        SELECT MAX(m.id) FROM Message m WHERE m.type = :type
    )
    """)
Optional<Message> findLatestByType(MessageType type);

The last resort is writing a native query and using LIMIT 1 to fetch a single record.
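Such a native variant might look like the following sketch (repository, table, and column names are illustrative). With nativeQuery = true the LIMIT 1 is part of the SQL sent to the database, so only one row is fetched:

```java
public interface MessageRepository extends JpaRepository<Message, Long> {

    // LIMIT 1 runs in the database, so only one row is fetched and only one
    // entity object is materialized
    @Query(value = """
            SELECT * FROM message
            WHERE type = :type
            ORDER BY created_on DESC
            LIMIT 1
            """, nativeQuery = true)
    Optional<Message> findLatestByTypeNative(@Param("type") String type);
}
```

Spring Data's derived findFirstByTypeOrderByCreatedOnDesc(...) style is also worth comparing, since the First/Top keywords apply a maximum result size on the underlying query.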


Monday, May 20, 2024

Spring Boot logs in JSON format, assert logged output in a test-case . . .

Development is fun and sometimes frustrating too. Everything comes with some kind of issue or other attached.

Scenario

Lately, I had to switch Spring Boot's out-of-the-box Logback logging to Log4j2, specifically in JSON format. One of the test-cases I had written tested a feature-flag-based conditional scenario. The conditional code depends on the feature flag, which is exposed as an external property and injected via a @ConfigurationProperties bean into a service; when the flag is disabled, the code writes a log message at WARN level to indicate that the feature is disabled. The unit test has a test-case for the disabled scenario which also verifies the expected log message by leveraging the Spring-provided OutputCaptureExtension. This extension, when used at the test class or method level like @ExtendWith(OutputCaptureExtension.class), makes the output log available to the test-case for verification.

That test-case failed when I switched logging to JSON using Log4j2, because the output log was unavailable.

Environment: Java 21, Spring Boot 3.2.5, maven 3.9.6 on macOS Catalina 10.15.7

This post is about a few things learned along the way: Log4j2 JSON logging, Spring Boot's JUnit Jupiter extension to capture System output, and the Log4j2 console appender property that lets the output be captured and available.

JSON logs

To switch Spring Boot application logging to JSON using Log4j2, the following dependencies need to be added to the maven build file pom.xml. Also, make sure to run mvn dependency:tree and check whether spring-boot-starter-logging shows up. If you have it as a transitive dependency, exclude it from all dependencies that bring it in. In my case, Spring Modulith brought it in, so I had to exclude it there:

<dependency>
    <groupId>org.springframework.modulith</groupId>
    <artifactId>spring-modulith-starter-core</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<!-- logging -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-layout-template-json</artifactId>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-yaml</artifactId>
</dependency>

Add the following configuration file: src/main/resources/log4j2.yml. Log4j2 supports both XML and YAML:

Configuration:
  name: default
  Appenders:
    Console:
      name: console_appender
      target: SYSTEM_OUT
      follow: true
      JSONLayout:
        compact: true
        objectMessageAsJsonObject: true
        eventEol: true
        stacktraceAsString: true
        properties: true
        KeyValuePair:
          - key: '@timestamp'
            value: '$${date:yyyy-MM-dd HH:mm:ss.SSS}'
  Loggers:
    Root:
      name: root.logger
      level: info
      AppenderRef:
        ref: console_appender

With the above changes, application logs will be in JSON format.

Assert Captured output in a test-case

If you have any test-case that verifies the captured log output, it would now fail.

For instance, I had a test-case like the following, which verified the log message logged by myService.method(). To make it work, the follow: true console appender property needs to be added in the log4j2.yml configuration. The details of this console appender property are documented in the OutputCaptureExtension Javadoc.

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import org.springframework.boot.test.system.CapturedOutput;
import org.springframework.boot.test.system.OutputCaptureExtension;
import org.springframework.test.util.ReflectionTestUtils;

import static org.assertj.core.api.Assertions.*;
import static org.junit.jupiter.api.Assertions.assertAll;
import static org.mockito.Mockito.verify;

@ExtendWith(MockitoExtension.class)
@ExtendWith(OutputCaptureExtension.class)
class MyFeatureFlagTest {

    @InjectMocks
    private MyService myService;

    @Test
    void call_should_not_handleEvent_when_featureFlag_is_disabled(CapturedOutput output) {
        // given: mocked behavior
        ReflectionTestUtils.setField(myService, "featureFlag", false);

        // when: method under test is called
        myService.method();

        // verify: the output captured
        assertAll(
            () -> assertThat(output).contains("The featureFlag is DISABLED.")
        );
    }
}

That's it.

Saturday, March 09, 2024

Spring Boot - Java GraphQL - extended scalar types . . .

This is my first te(a)ch note on GraphQL. I hit a couple of road blocks in my first few days of hands-on work with it. Unlike the good old days, when books were the primary source of learning and had everything documented, there is no single place to find all the details these days.

Environment: Java 21, Spring Boot 3.2.3, PostgreSQL 15.3, maven 3.9.6 on macOS Catalina 10.15.7

Extended or Custom Scalar types

GraphQL specifies a very limited set of well-defined built-in scalar data types (primitive data types): Int, Float, String, Boolean, and ID. GraphQL systems must support these as described in the specification. Everything else is an extended or custom scalar data type.

That, obviously, is a very limited set. All other data types need custom scalar implementations, which basically require coercing values at run-time and converting them to a Java run-time representation. Luckily, the Java ecosystem is so huge that you almost never need to break new ground in doing so. You will always find a few open-source libraries that have tackled it already for you. GraphQL Java Extended Scalars is one such library for extended scalar data types in Java GraphQL.

The primitive data type set supported is just not enough. You at least need support for a few other data types used in any Java application, like Long, UUID, DateTime etc. They all need special consideration in your application's GraphQL schema. The DateTime type takes a very special seat. In fact, anything around dates in Java always scares me. To humans, date and time are the most obvious types in day-to-day life, but not in software systems. Date is the most abused data type of all. Just recollect how many billions of dollars were wasted on this one data type in 1998 and 1999 around the globe. More than two decades after learning that lesson, dates are still not dealt with easily; a date is still a complex data type to deal with ;).

To use custom scalar types beyond that limited primitive set, you have to write code that handles serialization, parsing, and literal parsing for each additional data type. The graphql-java-extended-scalars library provides implementations for many other data types.

With a maven dependency added for this library, all you need to do is register each scalar data type with a RuntimeWiringConfigurer as described in the README. If you need to register multiple types, it's a builder, so you can just chain them like:

@Configuration
@Slf4j
public class GraphQLConfiguration {

    /**
     * Custom scalar support for UUID, Long, and DateTime.
     * Registers extended scalar types used in the GraphQL query schema.
     */
    @Bean
    public RuntimeWiringConfigurer runtimeWiringConfigurer() {
        log.info("Registering extended GraphQL scalar types for UUID, Long, DateTime...");
        return wiringBuilder -> wiringBuilder.scalar(ExtendedScalars.UUID)
                .scalar(ExtendedScalars.GraphQLLong)
                .scalar(ExtendedScalars.DateTime);
    }
}

In addition to this, specify these scalar types in your application's schema.graphqls schema specification like:
"Extended scalar types"
scalar UUID @specifiedBy(url: "https://tools.ietf.org/html/rfc4122")
scalar Long @specifiedBy(url: "https://ibm.github.io/graphql-specs/custom-scalars/long.html")
scalar DateTime @specifiedBy(url: "https://scalars.graphql.org/andimarek/date-time.html")
...

You are good to go.

Note that the extended scalar type for Long is named GraphQLLong by this library, but you should use Long in your schema when you specify it, as shown above. The directive @specifiedBy is recommended by the GraphQL specification and is also a good practice to follow. Never ignore good practices ;)

Gotcha

Java JPA - Instant vs. OffsetDateTime

If you are dealing with DateTime, make sure that whatever the Java type used in your code, it complies with GraphQL specification that requires date time offset.

I initially used the Instant type in my JPA BaseEntity class for two properties, createdOn and updatedOn, populated by the Hibernate-provided @CreationTimestamp and @UpdateTimestamp and mapped to the PostgreSQL column type TIMESTAMP WITH TIME ZONE. I switched to OffsetDateTime because Instant is not supported, and never will be, by this library, since it does not comply with the specification for DateTime. Java's Instant, Date, and LocalDateTime do not include an offset.

OffsetDateTime is an immutable representation of a date-time with an offset. This class stores all date and time fields, to a precision of nanoseconds, as well as the offset from UTC/Greenwich.
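The difference is easy to see in plain Java, independent of JPA or GraphQL. A small sketch (the class name OffsetDemo is just for illustration):

```java
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class OffsetDemo {
    public static void main(String[] args) {
        // A point on the timeline; Instant has no offset field to expose
        Instant instant = Instant.parse("2024-03-10T21:13:13Z");

        // Attach an explicit offset to get an OffsetDateTime (EST = UTC-5)
        OffsetDateTime odt = instant.atOffset(ZoneOffset.ofHours(-5));

        System.out.println(instant);         // 2024-03-10T21:13:13Z
        System.out.println(odt);             // 2024-03-10T16:13:13-05:00
        System.out.println(odt.getOffset()); // -05:00

        // Both represent the same instant in time
        System.out.println(odt.toInstant().equals(instant)); // true
    }
}
```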

TIP

PostgreSQL offers two date time types: timestamp, timestamptz (is abbreviation of timestamp with time zone).

The following query results tell the date-time story on this year's daylight saving time switch day (Sun Mar 10, 2024). I ran it on my local PostgreSQL 15.3 running in a Docker container.

-- Ran the query on Mar 10, 2024, daylight saving switch day, at EST 5:13:13 PM (EDT 17:13:13)
select version();  -- PostgreSQL 15.3 (Debian 15.3-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
show time zone;    -- UTC

SELECT now(),                                  -- 2024-03-10 21:13:13.956877 +00:00 (timestamp with time zone, UTC)
       now() AT TIME ZONE 'EST' AS est,        -- 2024-03-10 16:13:13.956877 (??)
       now() AT TIME ZONE 'EDT' AS edt,        -- 2024-03-10 17:13:13.956877 (right)
       now() AT TIME ZONE 'CST' AS cst,        -- 2024-03-10 15:13:13.956877 (??)
       now() AT TIME ZONE 'CDT' AS cdt,        -- 2024-03-10 16:13:13.956877 (right)
       now()::timestamp AT TIME ZONE 'EDT'
           AS timestamp_without_tz,            -- 2024-03-11 01:13:13.956877 +00:00 (wrong)
       now()::timestamptz AT TIME ZONE 'EDT'
           AS timestamptz;                     -- 2024-03-10 17:13:13.956877 (right)

Here is the DbFiddle playground to play with the above query.

That's it in this te(a)ch note, more might come in as I walk forward along this GraphQL path.

Monday, January 01, 2024

Polyglot makes you think better and do better - my musing . . .

Fluency in multiple spoken languages (Polyglot) always makes you think better and communicate even better. In Software Development, Polyglot programming makes you a better Software Developer. Being able to code in more than one language makes you think different and write better code.

No language is superior or best for all use-cases. Polyglot experience is very beneficial. It makes you think better when approaching a problem for a solution. In software programming world, it matters more than in the normal world.

Java is undoubtedly the programming language that has been dominant in Software world, longer than any other, and probably will continue to remain dominant for many more years. I worked in Java for a decade before I moved to Groovy. For several years I enjoyed coding in Groovy and did not want to go back to Java. Life doesn't go your way. And now, I am back to Java. I'd rather say, I am back to Java with Groovy eyes and coding experience ;)

Groovy taught me many things in programming which I otherwise wouldn't have learnt, and it changed my object-oriented mindset to think differently, which wouldn't have happened if I had just stuck to Java. I notice a lot that Java developers who have been coding only in Java for a while still write Java 1.2 code. Java is evolving faster now, for good, but Java developers are not evolving at the same pace. Coming back to Java from Groovy, I am not hesitant to use any of the new features that Java is adding version after version at a fast pace. I wrote production Java 13 code with multi-line text blocks when they were only a preview feature, requiring the --enable-preview flag for compilation and execution. Having experienced even superior multi-line text blocks in Groovy on the JVM, I just couldn't write code with several "s and +s. Some developers wouldn't even put spaces between concatenated strings. My eyes get blurry and my mind goes blank when I see such code. Polyglot experience helped me embrace multi-line text blocks even as a preview feature in Java 13.

Once in recent years, I had to get my hands dirty with a super-rigid family of simple Java applications written the early-twenty-first-century way: main methods, tightly coupled code with inheritance, only static member variables across the class hierarchy, no sensible difference between a class and an object, and, worst of all, quite a bit of blindly followed manual code changes to be made and checked in after every single run, plus a lot of manual copying of input files before each run and result files after it. Bringing a new Java application member into this family required copying one of the existing applications and making changes to meet the new application's needs, with much of the code inherited from the hierarchy.

When I had to add a new member application to that family, I couldn't follow the family legacy of the copy-and-paste tradition. DRY (Don't Repeat Yourself) is the principle that I believe should be taught before even teaching programming. I added a new member to that family following all the messy inheritance, as the family was super adamant upfront about not refactoring anything. That alone tells you how bad the code smelled. At least I wanted to change the manual procedures, automate them, and end the practice of changing code for every run; a Java application's main method takes arguments for exactly this reason. In the past I worked for a financial company (a very rigid domain in the software field) and rewrote their bread-and-butter Oracle stored procedures that computed earnings at the end of each month: 10,000 lines of code without a single line of documentation, and the person who wrote it had left the company. Nobody dared to touch the code. People knew how to calculate earnings, but had no clue how it was implemented in the stored procedures. I rewrote the whole app in Groovy as a simple runnable Java app with superior command-line support and all possible flexibility to run. The rewrite came to just a few hundred lines of code, and making it multi-threaded brought the month-end run-time down from hours to minutes. That was about a decade ago. If I had had to do this in Java at that time, it would have taken at least five times the lines of code of the Groovy version, with noise and boilerplate in dealing with the database.

In my current day-to-day development, Groovy is not a choice for production code; only Java. But we catch up fast, using the latest versions of Java in production code a few months after a newer version gets released. That lets me leverage the most recent syntax improvements, language constructs, and feature enhancements added in every version. In some cases, Java code now looks a little closer to Groovy-like code when newer language features are used together with frameworks.

The very first step I took in adding a new application member to the legacy family was to find a good CLI Java framework. I found Picocli, which is super simple to use with no real coding, just annotating code. I used it, brought a change into the family, and paved the path for newly joining members to follow. This eliminated the need to change hard-coded constants for every run and check the modified code into version control. By leveraging Picocli and main method arguments, I externalized a few hard-coded values as arguments, which eliminated the need to touch the code for every single run. Then I automated some more tasks, like renaming the generated file to meet an expected naming convention, copying it to another source repo, and checking it in.

Groovy's CliBuilder

In my Groovy development days, I had used Groovy's CliBuilder, which ships with Groovy. Only a few lines of code make the application super flexible for driving the internal implementation, processing, or any such logic that depends on the values passed as arguments when running the application. My Groovy experience helped me a lot to think better, and to make the newly added Java application member a very flexible super-kid in the family by leveraging Java's modern features and frameworks like Picocli.

Java - Picocli

Annotate a class and its fields, and add either the single Picocli class or a maven/gradle dependency. After a quick couple of hours of exploration and reading the docs, you can add a powerful CLI feature to your Java application. It makes the application runnable for various scenarios by passing values through different arguments that can drive its functionality in specific ways.
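As a sketch of what that looks like (the command, option names, and report logic are hypothetical, not from the actual application):

```java
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Hypothetical report-generator command illustrating Picocli annotations
@Command(name = "report", mixinStandardHelpOptions = true, version = "1.0",
         description = "Generates a report for the given month.")
public class ReportApp implements Runnable {

    @Option(names = {"-m", "--month"}, required = true,
            description = "Month to run the report for, e.g. 2024-01")
    private String month;

    @Option(names = {"-o", "--output"}, defaultValue = "report.csv",
            description = "Output file name (default: ${DEFAULT-VALUE})")
    private String output;

    @Override
    public void run() {
        // externalized values arrive here instead of hard-coded constants
        System.out.printf("Generating %s report into %s%n", month, output);
    }

    public static void main(String[] args) {
        int exitCode = new CommandLine(new ReportApp()).execute(args);
        System.exit(exitCode);
    }
}
```

With mixinStandardHelpOptions, the --help and --version options come for free.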

Conclusion

Writing code should be more for developers to read than for machines to execute. After all, a machine can execute any syntactically correct code. There is more to programming than syntax and semantics, which is READABILITY for humans. Code must first be readable before it is executable.

Change is a constant and there is always scope for improvement, ONLY if you are willing to learn, change, and not afraid to improve ;)

Saturday, August 05, 2023

Java bytecode - compiler version options and compatibilities . . .

One of the many strengths of the Java platform is its backward compatibility with the language. As the language keeps evolving and moving forward, the good old syntax is still supported for backward compatibility. However, the compiler provides certain options for specifying version details. The --source and --target are two such compiler (javac) options. From Java 9 onwards, a third option, --release, was added to the mix. Getting a good understanding of these options is not trivial without actually experiencing all three. When compiling the source code of a single class you may not need to specify these options, but in a Java project built with a build system like Maven, one needs to understand these options and their implications.

Environment: Java 20, Spring Boot 2.7.15, maven 3.9.3 on macOS Catalina 10.15.7

The maven-compiler-plugin

The Maven build system uses the maven-compiler-plugin for compiling source code. The plugin documentation talks upfront about the source and target options and highly recommends changing them in the plugin configuration. In order to change these per application/module needs, one needs to look under the hood for understanding.

Various extra Java compiler options can be specified in the maven-compiler-plugin configuration. The version-related Java compiler options --source, --target, and --release can be specified and passed to the compiler during code compilation through the maven-compiler-plugin configuration. This can be done in two different ways in pom.xml:

1. Through maven properties: maven.compiler.source, maven.compiler.target and maven.compiler.release as shown below:

<properties>
    <maven.compiler.source>20</maven.compiler.source>
    <maven.compiler.target>20</maven.compiler.target>
    <maven.compiler.release>20</maven.compiler.release>
</properties>

If these properties are not explicitly defined, the maven compiler plugin defaults to 1.8 for source and target.

2. Through the plugin configuration settings as shown below. Note - for convenience, extra properties are defined and used for source, target, and release, but literal version numbers can be used instead.

...
<properties>
    <java.version>20</java.version>
    <javac.source.version>${java.version}</javac.source.version>
    <javac.target.version>${java.version}</javac.target.version>
    <javac.release.version>${java.version}</javac.release.version>
    <!-- Maven plugins -->
    <maven-compiler-plugin.version>3.11.0</maven-compiler-plugin.version>
</properties>
<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>${maven-compiler-plugin.version}</version>
                <configuration>
                    <source>${javac.source.version}</source>
                    <target>${javac.target.version}</target>
                    <release>${javac.release.version}</release>
                    <compilerArgs>
                        <arg>-Xlint:all</arg>
                    </compilerArgs>
                </configuration>
            </plugin>
        </plugins>
    </pluginManagement>
...

If no special configuration is required, the maven-compiler-plugin doesn't even need to be specified. If it is specified, the above are the two ways to control/change the default 1.8 set by the plugin for these options, which eventually get passed to the Java compiler (javac) during compilation of the sources (both main and test).

Note that from Java 9 onwards, the values of these options are not written like 1.7 or 1.8, but simply as 7 and 8.

Java 20

The Java 20 compiler doesn't support version 7 for source and target anymore; the supported releases are 8 through 20. So if for any reason the maven compiler plugin is explicitly set to 1.7, the build fails with errors saying: Source option 7 is no longer supported. Use 8 or later. and Target option 7 is no longer supported. Use 8 or later.

Now it's time to understand what these options actually tell the compiler, javac. The compiler's help option (javac -help) lists all available options and a brief description about each option. The -source, -target, -release options descriptions are helpful to some extent.

--source <release>, -source <release>
        Provide source compatibility with the specified Java SE release.
        Supported releases: 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
--target <release>, -target <release>
        Generate class files suitable for the specified Java SE release.
        Supported releases: 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
--release <release>
        Compile for the specified Java SE release.
        Supported releases: 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20

To understand these compiler options better, we can compile a simple Java application class with just main method.

HelloJava.java
import java.util.Properties;

public class HelloJava {
    public static void main(String[] args) {
        Properties systemProperties = System.getProperties();
        System.out.println(String.format("Hello Java %s!",
                systemProperties.getProperty("java.vm.specification.version")));
        systemProperties.entrySet().stream()
                .filter(entry -> entry.getKey().toString().startsWith("java"))
                .toList()
                .forEach(entry -> System.out.println(entry.getKey() + "="
                        + systemProperties.getProperty(entry.getKey().toString())));
    }
}
Note - The above class prints the system properties that start with "java" along with their values. It uses the toList() method that Java 16 added to the Stream interface. With this, the expectation is that the code should not be compiled for, or run on, a Java/JVM version lower than 16.

Java compiler version options

Let's compile the class with different Java versions and compiler options.
 
// compile with Java 20: no options specified
$ sdk use java 20.0.2-amzn
$ javac HelloJava.java

// check: major version
$ javap -verbose HelloJava.class | grep major
  major version: 64

// run on Java 20: works
$ java HelloJava
Hello Java 20!

// switch to Java 17 and run: fails with LinkageError
$ sdk use java 17.0.1.12.1-amzn
$ java HelloJava
Error: LinkageError occurred while loading main class HelloJava
	java.lang.UnsupportedClassVersionError: HelloJava has been compiled by a more recent version of the Java Runtime (class file version 64.0), this version of the Java Runtime only recognizes class file versions up to 61.0

// switch to Java 15 and run: fails with LinkageError
$ sdk use java 15.0.2.7.1-amzn
$ java HelloJava
Error: LinkageError occurred while loading main class HelloJava
	java.lang.UnsupportedClassVersionError: HelloJava has been compiled by a more recent version of the Java Runtime (class file version 64.0), this version of the Java Runtime only recognizes class file versions up to 59.0

// switch to Java 8 and run: fails with UnsupportedClassVersionError
$ sdk use java 8.0.352-amzn
$ java HelloJava
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.UnsupportedClassVersionError: HelloJava has been compiled by a more recent version of the Java Runtime (class file version 64.0), this version of the Java Runtime only recognizes class file versions up to 52.0
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:601)

So, when compiled with a specific version of the Java compiler (in this case Java 20, with no version options specified), the code gets compiled for the compiler's default target, which is 20. The generated class cannot be run on JVM versions prior to 20. To find which JVM target the byte-code was generated for, use javap -verbose HelloJava.class | grep major.
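The major version javap reports is simply the Java release plus 44 (Java 8 is 52, Java 17 is 61, Java 20 is 64), and it lives in bytes 6-7 of the class file, right after the 0xCAFEBABE magic number and the minor version. A minimal sketch of both facts (class and method names are my own, for illustration):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassVersion {

    // Class-file major version = Java release + 44 (e.g. 17 -> 61, 20 -> 64)
    static int majorFor(int javaRelease) {
        return javaRelease + 44;
    }

    // Class-file layout starts with: u4 magic, u2 minor_version, u2 major_version
    static int readMajor(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        if (data.readInt() != 0xCAFEBABE) {
            throw new IOException("Not a class file");
        }
        data.readUnsignedShort();          // minor version
        return data.readUnsignedShort();   // major version
    }

    public static void main(String[] args) throws IOException {
        // Inspect the running JDK's own java.lang.Object class file
        try (InputStream in = Object.class.getResourceAsStream("/java/lang/Object.class")) {
            System.out.println("java.lang.Object major version: " + readMajor(in));
        }
        System.out.println("Java 17 -> major " + majorFor(17));
        System.out.println("Java 20 -> major " + majorFor(20));
    }
}
```

Running it on a given JDK prints that JDK's own class-file version, the same number javap shows.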

Now, let's try compiling for target 17 and try to run on different JVM versions.

// compile with Java 20 for target 17: --source and --target options specified
$ sdk use java 20.0.2-amzn
$ javac --source=17 --target=17 HelloJava.java
warning: [options] system modules path not set in conjunction with -source 17
1 warning

// check: major version
$ javap -verbose HelloJava.class | grep major
  major version: 61

// run on Java 20: works
$ java HelloJava
Hello Java 20!

// switch to Java 17 and run: works
$ sdk use java 17.0.1.12.1-amzn
$ java HelloJava
Hello Java 17!

// switch to Java 15 and run: fails with LinkageError
$ sdk use java 15.0.2.7.1-amzn
$ java HelloJava
Error: LinkageError occurred while loading main class HelloJava
	java.lang.UnsupportedClassVersionError: HelloJava has been compiled by a more recent version of the Java Runtime (class file version 61.0), this version of the Java Runtime only recognizes class file versions up to 59.0

// compile with Java 20 for target 15: --source and --target options specified
$ sdk use java 20.0.2-amzn
$ javac --source=15 --target=15 HelloJava.java
warning: [options] system modules path not set in conjunction with -source 15
1 warning

// check: major version
$ javap -verbose HelloJava.class | grep major
  major version: 59

// switch to Java 17 and run: works
$ sdk use java 17.0.1.12.1-amzn
$ java HelloJava
Hello Java 17!

// switch to Java 15 and run: fails with NoSuchMethodError
$ sdk use java 15.0.2.7.1-amzn
$ java HelloJava
Hello Java 15!
Exception in thread "main" java.lang.NoSuchMethodError: 'java.util.List java.util.stream.Stream.toList()'
	at HelloJava.main(HelloJava.java:15)

// switch to Java 8 and run: fails with UnsupportedClassVersionError
$ sdk use java 8.0.352-amzn
$ java HelloJava
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.UnsupportedClassVersionError: HelloJava has been compiled by a more recent version of the Java Runtime (class file version 59.0), this version of the Java Runtime only recognizes class file versions up to 52.0
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:473)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:601)

// compile with Java 20 for target 8: --source and --target options specified
$ sdk use java 20.0.2-amzn
$ javac --source=8 --target=8 HelloJava.java
warning: [options] bootstrap class path not set in conjunction with -source 8
warning: [options] source value 8 is obsolete and will be removed in a future release
warning: [options] target value 8 is obsolete and will be removed in a future release
warning: [options] To suppress warnings about obsolete options, use -Xlint:-options.
4 warnings

// check: major version
$ javap -verbose HelloJava.class | grep major
  major version: 52

// switch to Java 20 and run: works
$ sdk use java 20.0.2-amzn
$ java HelloJava
Hello Java 20!

// switch to Java 17 and run: works
$ sdk use java 17.0.1.12.1-amzn
$ java HelloJava
Hello Java 17!

// switch to Java 15 and run: fails with NoSuchMethodError
$ sdk use java 15.0.2.7.1-amzn
$ java HelloJava
Hello Java 15!
Exception in thread "main" java.lang.NoSuchMethodError: 'java.util.List java.util.stream.Stream.toList()'
	at HelloJava.main(HelloJava.java:15)

So, when compiled for a target version, it cannot be run on prior JVM versions.

Let's try target 7.
$ sdk use java 20.0.2-amzn
$ javac --source=7 --target=7 HelloJava.java
warning: [options] bootstrap class path not set in conjunction with -source 7
error: Source option 7 is no longer supported. Use 8 or later.
error: Target option 7 is no longer supported. Use 8 or later.
Java version 7 is not supported anymore.

Now, let's try the --release option.
// compile with Java 20: --source, --target and --release all specified
$ sdk use java 20.0.2-amzn
$ javac --source=17 --target=17 --release=17 HelloJava.java
error: option --source cannot be used together with --release
error: option --target cannot be used together with --release
Usage: javac <options> <source files>
use --help for a list of possible options

// compile with Java 20 for target 17: only --release option specified
$ sdk use java 20.0.2-amzn
$ javac --release=17 HelloJava.java

// check: major version
$ javap -verbose HelloJava.class | grep major
  major version: 61

// run on Java 20: works
$ sdk use java 20.0.2-amzn
$ java HelloJava
Hello Java 20!

// switch to Java 17 and run: works
$ sdk use java 17.0.1.12.1-amzn
$ java HelloJava
Hello Java 17!

// switch to Java 15 and run: fails with NoSuchMethodError
$ sdk use java 15.0.2.7.1-amzn
$ java HelloJava
Hello Java
Exception in thread "main" java.lang.NoSuchMethodError: 'java.util.List java.util.stream.Stream.toList()'
	at HelloJava.main(HelloJava.java:14)

// compile with Java 20 for target 15: only --release option specified: fails at compile time
$ sdk use java 20.0.2-amzn
$ javac --release=15 HelloJava.java
HelloJava.java:14: error: cannot find symbol
        .toList().stream()
         ^
  symbol:   method toList()
  location: interface Stream<Entry<Object,Object>>
1 error

// compile with Java 20 for target 15: --source and --target options specified: compiles
$ sdk use java 20.0.2-amzn
$ javac --source=15 --target=15 HelloJava.java
warning: [options] system modules path not set in conjunction with -source 15
1 warning

// switch to Java 15 and run: fails with NoSuchMethodError
$ sdk use java 15.0.2.7.1-amzn
$ java HelloJava
Exception in thread "main" java.lang.NoSuchMethodError: 'java.util.List java.util.stream.Stream.toList()'
	at HelloJava.main(HelloJava.java:14)

So, the --release option performs strict compile-time checks to verify the code complies with the target release, and compilation fails if the code uses an API that does not exist in the target version. This ensures the compiled class actually works on the target release (the JVM version the code is released to run on). The --target option, by contrast, does no such API compliance checks at compile time; the code compiles fine but fails at runtime. That makes --release the better option to use.
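The same strict check can be driven programmatically, since javac is available in-process through the standard javax.tools API. A minimal sketch (the class and file names are my own, for illustration; it must run on a JDK, not a JRE, and uses a text block, so Java 15+): Stream.toList() appeared in Java 16, so compiling for --release 16 succeeds while --release 15 is rejected.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class ReleaseCheck {

    // Compiles the given file for the given --release; returns javac's exit code (0 = success)
    static int compileFor(String release, Path file) {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        return javac.run(null, null, null, "--release", release, file.toString());
    }

    // true when the Java-16 API compiles for release 16 but is rejected for release 15
    static boolean releaseCheckWorks() throws Exception {
        String source = """
                import java.util.stream.Stream;
                public class Uses16Api {
                    public static void main(String[] args) {
                        System.out.println(Stream.of(1, 2, 3).toList());
                    }
                }
                """;
        Path dir = Files.createTempDirectory("release-check");
        Path file = dir.resolve("Uses16Api.java");
        Files.writeString(file, source);
        return compileFor("16", file) == 0 && compileFor("15", file) != 0;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("--release catches the unsupported API: " + releaseCheckWorks());
    }
}
```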

Implications of version options

  • No compiler options specified: the code gets compiled with the default target, which is the version of the Java compiler itself.
  • All 3 options --source, --target and --release specified: not allowed; --release cannot be combined with --source or --target.
  • Options --source and --target specified: 1) Both can be the same (e.g. 17, 17). 2) --source can be a lower version (e.g. 15) with --target a higher version (e.g. 17), but not the other way around. If --source is the higher version (e.g. 17) and --target the lower (e.g. 15), compilation fails with: warning: source release 17 requires target release 17.
  • Only option --source: --source can be any supported version, but the default target is the version of the Java compiler being used.
  • Only option --target: --target cannot be lower than the version of the Java compiler being used, because the default source is the compiler's own version. A lower target runs into the "source higher, target lower" case above and fails compilation. E.g. javac --target=19 HelloJava.java with the Java 20 compiler fails with: warning: target release 19 conflicts with default source release 20, and the code doesn't get compiled.
  • Only option --release: strict API checks during compilation ensure the compiled code actually works on the target version. The generated byte-code runs on the specified release and higher, but not on any lower version.
    • Compiling with Java 20 and no --release option (or --release=20) results in major version 64 (Java 20).
    • Compiling with Java 20 and --release=19 results in major version 63 (Java 19).
    • Compiling with Java 20 and --release=17 results in major version 61 (Java 17); the class fails with a LinkageError on versions lower than 17, and works on 17 and higher.

The maven-compiler-plugin variations with these options

Maven, out of the box with no maven-compiler-plugin specified in pom.xml, behaves as follows with its compiler version properties (maven.compiler.source, maven.compiler.target and maven.compiler.release).

<properties>
    <maven.compiler.source>20</maven.compiler.source>
    <maven.compiler.target>20</maven.compiler.target>
    <maven.compiler.release>20</maven.compiler.release>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
  • No version properties specified (no maven.compiler.source, maven.compiler.target or maven.compiler.release): the build fails with compilation errors: Source option 5 is no longer supported. Use 8 or later. and Target option 5 is no longer supported. Use 8 or later.
  • All 3 version properties specified: maven.compiler.release is ignored here; it can hold any junk value. Only the source and target properties matter, and target cannot be lower than source. For instance, source 20 with target 19 fails the build with: warning: source release 20 requires target release 20.
  • Only the source version property specified: target must also be specified. Otherwise, the Maven build fails with: Fatal error compiling: warning: source release 20 requires target release 20
  • Only the target version property specified: source must also be specified. Otherwise, the Maven build fails with: Source option 5 is no longer supported. Use 8 or later.
  • The release property specified: when release is specified and is 8 or later, it takes precedence. Make a special NOTE of this: with release specified, the source and target properties are ignored, so any nonsense value in them still builds, and the code gets compiled for the specified release. The release value itself must be 8 or later.
With the maven-compiler-plugin specified in pom.xml, the following variations apply when these options are set in the plugin configuration (<configuration>):

<properties>
    <maven.compiler.source>20</maven.compiler.source>
    <maven.compiler.target>20</maven.compiler.target>
    <maven.compiler.release>20</maven.compiler.release>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
...
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.11.0</version>
            <configuration>
                <source>20</source>
                <target>20</target>
                <release>20</release>
            </configuration>
        </plugin>
    </plugins>
</build>

With both the set of properties and the maven-compiler-plugin configuration defined as above, the configuration values override the property values. If no <configuration> is specified for the maven-compiler-plugin, the defined properties are used. With the configuration elements <source>, <target>, and <release> taking precedence, the variations are as follows:
  • No <configuration> specified for the plugin, but properties defined: the defined properties are used; release takes precedence over source and target.
  • No properties defined, and no <configuration> specified for the plugin: it defaults to target 1.8.
  • All 3 configuration options specified: the release configuration takes precedence and the code is built for the release version.
  • No properties set, and only the source configuration option specified: target must also be specified. Otherwise, the Maven build fails with: Fatal error compiling: warning: source release 20 requires target release 20
  • No properties set, and only the target configuration option specified: the code gets compiled for the target specified.
  • No properties set, and both target and release configuration options specified: release takes precedence and the code gets compiled for the specified release.
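Given those precedence rules, the least surprising setup is arguably to configure only release in the plugin and drop source/target entirely. A minimal sketch (the version numbers are illustrative):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.11.0</version>
    <configuration>
        <!-- release alone is sufficient; it takes precedence over source/target -->
        <release>17</release>
    </configuration>
</plugin>
```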

Summary

For Java 9 and later, use the --release option.
For versions prior to Java 9, use the --source and --target options.

TIPS

  • Use SDKMAN to install multiple Java versions and easily switch between different versions.
  • If you see the noisy warning: Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection:, then add the compiler argument <arg>-parameters</arg> to the maven-compiler-plugin configuration as shown below:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>${maven-compiler-plugin.version}</version>
    <configuration>
        <source>${javac.source.version}</source>
        <target>${javac.target.version}</target>
        <release>${javac.release.version}</release>
        <compilerArgs>
            <arg>-Xlint:all</arg>
            <arg>-parameters</arg>
        </compilerArgs>
    </configuration>
</plugin>
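What -parameters actually changes: without it, reflection sees synthetic parameter names (arg0, arg1); with it, the real names are kept in the class file and frameworks can resolve them without the deprecated -debug fallback. A quick way to check (the class and method names here are my own, for illustration):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParamNames {

    // Example method whose parameter names we inspect via reflection
    public static int add(int left, int right) {
        return left + right;
    }

    public static void main(String[] args) throws Exception {
        Method add = ParamNames.class.getMethod("add", int.class, int.class);
        for (Parameter p : add.getParameters()) {
            // isNamePresent() is true only when compiled with -parameters;
            // otherwise getName() returns arg0, arg1, ...
            System.out.println(p.getName() + " (real name present: " + p.isNamePresent() + ")");
        }
    }
}
```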


Thursday, September 22, 2022

Enums - all the way to persistence . . .

In any application there is a need for a pre-defined, ordered set of constants. An enum is a data type well suited for such cases. Java has had enums since 1.5. Databases also support enum types, and PostgreSQL has had special support for them since release 8.3. JPA and Spring Data are a good match for persistence in modern Java applications, especially Spring Boot applications.

Environment: Java 17, Spring Boot 2.6.7 on macOS Catalina 10.15.7

Example Scenario - A persistable entity object in a Spring Boot micro-service application with JPA and PostgreSQL DB.

DDL Script
-- create enum type genders
CREATE TYPE genders AS ENUM(
    'MALE',
    'FEMALE'
);

-- create people table
CREATE TABLE people(
    id VARCHAR(36) PRIMARY KEY,
    first_name VARCHAR(50) NOT NULL,
    last_name VARCHAR(50) NOT NULL,
    gender genders NOT NULL
);

-- Unique Constraints
ALTER TABLE people ADD CONSTRAINT people_fname_lname_uk UNIQUE (first_name, last_name);
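As a quick sanity check of the DDL, plain SQL can insert and filter by the enum column directly; PostgreSQL coerces string literals to the enum type (the UUID and names below are illustrative):

```sql
INSERT INTO people (id, first_name, last_name, gender)
VALUES ('8f14e45f-ceea-467f-a8cd-123456789abc', 'Ada', 'Lovelace', 'FEMALE');

-- the literal 'FEMALE' is coerced to the genders enum type
SELECT first_name, gender FROM people WHERE gender = 'FEMALE';
```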

Maven dependencies: pom.xml
...
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>2.6.3</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-validation</artifactId>
    </dependency>
    <dependency>
        <groupId>com.vladmihalcea</groupId>
        <artifactId>hibernate-types-55</artifactId>
        <version>2.16.2</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.20</version>
        <scope>provided</scope>
    </dependency>
    ...
</dependencies>
...

Enum: Gender.java
import lombok.AllArgsConstructor;

@AllArgsConstructor
public enum Gender {
    MALE("Male"),
    FEMALE("Female");

    String genderName;
}

Domain Object: Person.java
import com.vladmihalcea.hibernate.type.basic.PostgreSQLEnumType;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.hibernate.annotations.Type;
import org.hibernate.annotations.TypeDef;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;
import javax.validation.constraints.NotNull;
import java.util.UUID;

@Entity
@Table(
    name = "people",
    uniqueConstraints = {
        @UniqueConstraint(
            columnNames = {"firstName", "lastName"},
            name = "people_fname_lname_uk"
        )
    }
)
@TypeDef(
    name = "pgsql_enum",
    typeClass = PostgreSQLEnumType.class
)
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(length = 36, nullable = false, updatable = false)
    @Type(type = "org.hibernate.type.UUIDCharType")
    private UUID id;

    @NotNull
    String firstName;

    @NotNull
    String lastName;

    @NotNull
    @Enumerated(EnumType.STRING)
    @Type(type = "pgsql_enum")
    Gender gender;
}

JPA Repository: PersonRepository.java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import java.util.UUID;

public interface PersonRepository extends JpaRepository<Person, UUID> {

    Person findByFirstNameAndLastNameAndGender(String firstName, String lastName, Gender gender);

    @Query(value = "SELECT * FROM people WHERE first_name = :firstName AND last_name = :lastName AND gender = CAST(:#{#gender.name()} as genders)",
           nativeQuery = true)
    Person findByFirstNameAndLastNameAndGenderNQ(String firstName, String lastName, Gender gender);
}

JPA Repository Test: PersonRepositoryIT.java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.orm.jpa.TestEntityManager;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.List;
import static org.assertj.core.api.Assertions.assertThat;

@ActiveProfiles("test")
@RunWith(SpringRunner.class)
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class PersonRepositoryIT {

    @Autowired
    TestEntityManager testEntityManager;

    @Autowired
    PersonRepository personRepository;

    @Test
    public void findBy_returns_expected() {
        // given: data in DB
        List.of(
            Person.builder().firstName("Giri").lastName("Potte").gender(Gender.MALE).build(),
            Person.builder().firstName("Boo").lastName("Potte").gender(Gender.MALE).build()
        ).forEach(testEntityManager::persist);
        testEntityManager.flush();

        // expect:
        assertThat(
            personRepository.findByFirstNameAndLastNameAndGender("Giri", "Potte", Gender.MALE)
        ).isNotNull()
         .extracting("firstName", "lastName", "gender")
         .isEqualTo(List.of("Giri", "Potte", Gender.MALE));

        // expect:
        assertThat(
            personRepository.findByFirstNameAndLastNameAndGenderNQ("Boo", "Potte", Gender.MALE)
        ).isNotNull()
         .extracting("firstName", "lastName", "gender")
         .isEqualTo(List.of("Boo", "Potte", Gender.MALE));
    }
}

TIP

On the database side, it's handy to keep a few SQL statements for working with the enum type: create it, drop it, list its values, add a new value, modify an existing value, etc.

I was just fiddling with the PostgreSQL enum type along these lines.
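A few such maintenance statements, sketched for the genders type above (ALTER TYPE ... ADD VALUE and RENAME VALUE are standard PostgreSQL, the latter since version 10; the added value names are illustrative):

```sql
-- list the values of an enum type
SELECT unnest(enum_range(NULL::genders));

-- or via the system catalogs
SELECT e.enumlabel
FROM pg_type t
JOIN pg_enum e ON e.enumtypid = t.oid
WHERE t.typname = 'genders'
ORDER BY e.enumsortorder;

-- add a new value (before PostgreSQL 12 this cannot run inside a transaction block)
ALTER TYPE genders ADD VALUE 'OTHER';

-- rename an existing value (PostgreSQL 10+)
ALTER TYPE genders RENAME VALUE 'OTHER' TO 'UNSPECIFIED';

-- drop the type (only when no table column uses it)
DROP TYPE genders;
```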


Saturday, September 17, 2022

Do more with less - the way to write code, what Groovy taught me . . .

I was not happy at all going back to verbose native Java, leaving behind my years of happy Groovy and Grails development on the JVM. The shrinking market for Groovy pushed me out, but I am not away from it. I still write Groovy code on any given day, at least on the side, as I explore new Java language features along the way. One question always comes to my mind: how did we do this in Groovy, and why didn't we have to worry about these obvious expectations there? The comparison always pleasantly proves that Groovy stood by the literal meaning of its name (groovy: very pleasant) to work with, and that it was a well-thought-out, super consistent language on the JVM right from day one.

Environment: Java 17, Groovy 4.0.2 on macOS Catalina 10.15.7

In my recent Java code, I was dealing with a list of request objects that came in a specific order, and I had to return the response objects in the same order after some internal processing, which included a database query for the list by ids using Spring Data's findAllBy query-derivation convention. In between, I built a map from the request list for faster lookup of a specific object's request details. I wanted the map to retain the request order, and Java's functional API forced me to learn some internals before taking things for granted.

The following is a simple code snippet comparing both Java and Groovy features in the same Groovy script. That's another beauty of Groovy, you can write both Java and Groovy code in one class file.

import java.util.stream.Collectors

// a data item
record Item(Integer id, String name) {}

// groovy
List items = [
    new Item(1, 'One'),
    new Item(2, 'Two'),
    new Item(3, 'Three'),
    new Item(4, 'Four'),
    new Item(5, 'Five'),
    new Item(6, 'Six'),
    new Item(7, 'Seven'),
    new Item(8, 'Eight'),
    new Item(9, 'Nine'),
    new Item(10, 'Ten'),
]
println "Groovy\n======"
println "Items[id:Integer, name:String]: ${items}"
def itemsMap = items.collectEntries { [it.name(), it.id()] }
println "Items map by name (order preserved): ${itemsMap}"
println "Items map keys (order preserved): ${itemsMap.keySet()}"

// java
List itemsJava = List.of(
    new Item(1, "One"),
    new Item(2, "Two"),
    new Item(3, "Three"),
    new Item(4, "Four"),
    new Item(5, "Five"),
    new Item(6, "Six"),
    new Item(7, "Seven"),
    new Item(8, "Eight"),
    new Item(9, "Nine"),
    new Item(10, "Ten")
);
System.out.println("Java\n====");
System.out.println("Items[id:Integer, name:String]:" + itemsJava);
var itemsMapJava = itemsJava.stream()
    .collect(
        Collectors.toMap(
            item -> item.name,
            item -> item.id
        )
    );
System.out.println("Items map by name (order NOT preserved): " + itemsMapJava);
System.out.println("Items map keys (order NOT preserved): " + itemsMapJava.keySet().stream().toList());
var itemsMapOrderPreserved = itemsJava.stream()
    .collect(
        Collectors.toMap(
            item -> item.name,
            item -> item.id,
            (key1, key2) -> key1,   // key conflict resolver
            LinkedHashMap::new      // pass the underlying map implementation you want, to preserve the order; defaults to HashMap
        )
    );
System.out.println("Items map by name (order preserved): " + itemsMapOrderPreserved);
System.out.println("Items map keys (order preserved): " + itemsMapOrderPreserved.keySet().stream().toList());

The output:

Groovy
======
Items[id:Integer, name:String]: [Item[id=1, name=One], Item[id=2, name=Two], Item[id=3, name=Three], Item[id=4, name=Four], Item[id=5, name=Five], Item[id=6, name=Six], Item[id=7, name=Seven], Item[id=8, name=Eight], Item[id=9, name=Nine], Item[id=10, name=Ten]]
Items map by name (order preserved): [One:1, Two:2, Three:3, Four:4, Five:5, Six:6, Seven:7, Eight:8, Nine:9, Ten:10]
Items map keys (order preserved): [One, Two, Three, Four, Five, Six, Seven, Eight, Nine, Ten]

Java
====
Items[id:Integer, name:String]:[Item[id=1, name=One], Item[id=2, name=Two], Item[id=3, name=Three], Item[id=4, name=Four], Item[id=5, name=Five], Item[id=6, name=Six], Item[id=7, name=Seven], Item[id=8, name=Eight], Item[id=9, name=Nine], Item[id=10, name=Ten]]
Items map by name (order NOT preserved): [Eight:8, Five:5, Six:6, One:1, Four:4, Nine:9, Seven:7, Ten:10, Two:2, Three:3]
Items map keys (order NOT preserved): [Eight, Five, Six, One, Four, Nine, Seven, Ten, Two, Three]
Items map by name (order preserved): [One:1, Two:2, Three:3, Four:4, Five:5, Six:6, Seven:7, Eight:8, Nine:9, Ten:10]
Items map keys (order preserved): [One, Two, Three, Four, Five, Six, Seven, Eight, Nine, Ten]

Conclusion

Java has started to evolve, changing faster than it used to and faster than the community can catch up, all for the good. It's slowly getting less verbose. Still, certain obvious expectations are not so obvious in Java.

Groovy shined next to legacy Java, still shines next to modern Java, on the JVM.

Wednesday, October 06, 2021

I still love Groovy . . .

I was joyfully coding in Groovy for several years. I went back to Java two years ago and have not written any production code in Groovy since, but I still write my own productive non-production utilities in Groovy whenever and wherever I can.

I am trying my best to apply the neat things I learned while working on Groovy projects with its ecosystem: the Grails framework, the Gradle build tool, the Spock framework, etc. Within the limitations of the Java, Spring Boot, and Maven development world, I am trying hard to write less verbose, more readable code by leveraging new Java language features, including some of each version's preview features.

Java is evolving at a steady pace now. Better late than never ;). Still, it is far away from what Groovy was 10+ years ago, or from any current modern language, in terms of developer productivity.

I was a bit happy to see some convenient factory methods make their way into Java's collection classes, version after version, since Java 9. I have been happily using one such static factory method, .of(), on List and Map without caring much about their internal implementations.

Environment: Java 16, Groovy 3.0.9 on macOS Catalina 10.15.7

Today, I was happily writing code using the Map.of() method and kept adding elements; I had about a dozen static keys and values to add. IntelliJ was happy along with me. At some point IntelliJ suddenly turned angry (red) at me. The error was not clear: another classic, hard-to-understand Java compilation error. I started to wonder what I did wrong and went back and forth over each element I was adding. I quickly realized I was hitting some limitation: the Java language team chose the lucky number 10 for these convenience factory methods. There are ten overloaded static factory methods named of() that take from one to ten key-value pairs (plus a no-argument of() for an empty map).

I fell in love with it the very first time I used it, as it's a little more concise and readable (not as concise and readable as Groovy, but close), but I quickly ran into its limitations.
  • The Map.of() method, introduced in Java 9, allows creating an immutable map with up to 10 key-value pairs.
  • It returns an immutable map.
So, use it when you are OK with an immutable, small Map of up to 10 elements.
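When a modifiable map is needed anyway, the usual workaround is to copy the immutable literal into a concrete implementation. A small sketch (the class and method names are my own; note the copy's iteration order comes from Map.of, which is unspecified, so for a guaranteed order put the entries into a LinkedHashMap explicitly):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapOfCopies {

    // Copy the immutable Map.of() literal into a modifiable implementation
    static Map<String, Integer> mutableCopy() {
        Map<String, Integer> immutable = Map.of("a", 1, "b", 2, "c", 3);
        Map<String, Integer> mutable = new LinkedHashMap<>(immutable);
        mutable.put("d", 4); // fine: the copy is modifiable
        return mutable;
    }

    public static void main(String[] args) {
        System.out.println(mutableCopy());
    }
}
```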

The following Groovy snippet shows how close Java has gotten to Groovy (still a little more verbose) compared with the old painful-finger-typing way of creating and initializing a Map with a fixed set of elements.

// Groovy
def groovyMap = [
    'a' : [1],
    'b' : [1,2],
    'c' : [1,2,3],
    'd' : [1,2,3,4],
    'e' : [1,2,3,4,5],
    'f' : [1,2,3,4,5,6],
    'g' : [1,2,3,4,5,6,7],
    'h' : [1,2,3,4,5,6,7,8],
    'i' : [1,2,3,4,5,6,7,8,9],
    'j' : [1,2,3,4,5,6,7,8,9,10],
    'k' : [1,2,3,4,5,6,7,8,9,10,11],
    'l' : [1,2,3,4,5,6,7,8,9,10,11,12],
]
println groovyMap

// Java
var javaMap = Map.of(
    "a" , List.of(1),
    "b" , List.of(1,2),
    "c" , List.of(1,2,3),
    "d" , List.of(1,2,3,4),
    "e" , List.of(1,2,3,4,5),
    "f" , List.of(1,2,3,4,5,6),
    "g" , List.of(1,2,3,4,5,6,7),
    "h" , List.of(1,2,3,4,5,6,7,8),
    "i" , List.of(1,2,3,4,5,6,7,8,9),
    "j" , List.of(1,2,3,4,5,6,7,8,9,10),
    "k" , List.of(1,2,3,4,5,6,7,8,9,10,11),
    "l" , List.of(1,2,3,4,5,6,7,8,9,10,11,12)
)
println javaMap

IntelliJ goes unhappy, with error:
Cannot resolve method 'of(java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>, java.lang.String, java.util.List<E>

Java compiler stays unhappy with compilation error:

no suitable method found for of(java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>,java.lang.String,java.util.List<java.lang.Integer>) method java.util.Map.<K,V>of() is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length)) method java.util.Map.<K,V>of(K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length)) method java.util.Map.<K,V>of(K,V,K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length)) method java.util.Map.<K,V>of(K,V,K,V,K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length)) method java.util.Map.<K,V>of(K,V,K,V,K,V,K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length)) method java.util.Map.<K,V>of(K,V,K,V,K,V,K,V,K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length)) method java.util.Map.<K,V>of(K,V,K,V,K,V,K,V,K,V,K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length)) method java.util.Map.<K,V>of(K,V,K,V,K,V,K,V,K,V,K,V,K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length)) method java.util.Map.<K,V>of(K,V,K,V,K,V,K,V,K,V,K,V,K,V,K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists 
differ in length)) method java.util.Map.<K,V>of(K,V,K,V,K,V,K,V,K,V,K,V,K,V,K,V,K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length)) method java.util.Map.<K,V>of(K,V,K,V,K,V,K,V,K,V,K,V,K,V,K,V,K,V,K,V) is not applicable (cannot infer type-variable(s) K,V (actual and formal argument lists differ in length))

When I executed the above snippet in groovyconsole, the Groovy compiler at least gave me a slightly better message, pointing at the line (29: "k" , List.of(1,2,3,4,5,6,7,8,9,10,11),) that failed compilation, which made me think I was exceeding some limitation.

groovy.lang.MissingMethodException: No signature of method: static java.util.Map.of() is applicable for argument types: (String, List12, String, List12, String, ListN, String, ListN, String...) values: [a, [1], b, [1, 2], c, [1, 2, 3], d, [1, 2, 3, 4], e, [1, 2, ...], ...] at ConsoleScript9.run(ConsoleScript9:29)

The workaround: I had to go more verbose; still better than the old painful-finger-typing way. ;)
import static java.util.Map.entry;

// Groovy
def groovyMap = [
    'a' : [1],
    'b' : [1,2],
    'c' : [1,2,3],
    'd' : [1,2,3,4],
    'e' : [1,2,3,4,5],
    'f' : [1,2,3,4,5,6],
    'g' : [1,2,3,4,5,6,7],
    'h' : [1,2,3,4,5,6,7,8],
    'i' : [1,2,3,4,5,6,7,8,9],
    'j' : [1,2,3,4,5,6,7,8,9,10],
    'k' : [1,2,3,4,5,6,7,8,9,10,11],
    'l' : [1,2,3,4,5,6,7,8,9,10,11,12],
]
println groovyMap

// Java
var javaMap = Map.ofEntries(
    entry("a" , List.of(1)),
    entry("b" , List.of(1,2)),
    entry("c" , List.of(1,2,3)),
    entry("d" , List.of(1,2,3,4)),
    entry("e" , List.of(1,2,3,4,5)),
    entry("f" , List.of(1,2,3,4,5,6)),
    entry("g" , List.of(1,2,3,4,5,6,7)),
    entry("h" , List.of(1,2,3,4,5,6,7,8)),
    entry("i" , List.of(1,2,3,4,5,6,7,8,9)),
    entry("j" , List.of(1,2,3,4,5,6,7,8,9,10)),
    entry("k" , List.of(1,2,3,4,5,6,7,8,9,10,11)),
    entry("l" , List.of(1,2,3,4,5,6,7,8,9,10,11,12))
)
println javaMap

NOTE: only Map.of() has this limitation; List.of() and Set.of() do not, because they each provide a varargs overload (of(E... elements)). The varargs counterpart for maps is Map.ofEntries(Map.Entry...).

Gotcha

  • The method Map.of() returns an immutable map, though its signature simply says it returns a Map.
  • In other words, it is an unmodifiable map: keys and values cannot be added, removed, or updated.
  • When operations that would modify the returned Map, like put(), replace(), or remove(), are performed, they result in an UnsupportedOperationException with a null exception message ;)
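The gotcha is easy to demonstrate; the sketch below (class and helper names are my own) confirms that mutating either a Map.of() or a List.of() literal throws UnsupportedOperationException:

```java
import java.util.List;
import java.util.Map;

public class ImmutableGotcha {

    // Returns true if running the action throws UnsupportedOperationException
    static boolean throwsUoe(Runnable action) {
        try {
            action.run();
            return false;
        } catch (UnsupportedOperationException e) {
            return true; // note: e.getMessage() is null here
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> map = Map.of("a", 1);
        List<Integer> list = List.of(1, 2, 3);

        System.out.println(throwsUoe(() -> map.put("b", 2)));
        System.out.println(throwsUoe(() -> map.remove("a")));
        System.out.println(throwsUoe(() -> list.add(4)));
    }
}
```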

Conclusion

I still love Groovy for its simple, less confusing, yet more expressive syntax.
