Thursday, October 22, 2020

UUID support in PostgreSQL . . .

UUID (universally unique identifier) is used as a unique identifier within and across systems. It is widely supported, with generation algorithms implemented and available as libraries/utilities in programming languages. Databases also implement it and expose it in queries via DB functions. There are several versions of UUID (RFC 4122 defines five).

I recently attempted to migrate a Spring Boot application from a MySQL database to PostgreSQL. The application was written in plain-old-JDBC DAO (Data Access Object interface/implementation pattern) style, with hand-coded SQL mixed in and tightly coupled with the Java code. Luckily, there was already a fairly decent number of integration test cases in place. Otherwise, it would have been very challenging and nasty to identify all the SQL statements mixed into the code that needed migration.

In this attempt, I learned that both MySQL and PostgreSQL have functions to generate random UUIDs. However, PostgreSQL versions prior to 13 require a little more effort to get access to such a function. This post shares some details on that. I am not a database guy and will only touch the surface ;)

Environment: Java 13, Spring Boot 2.2.4.RELEASE, PostgreSQL 11.8, MySQL 5.7.31 on macOS Catalina 10.15.6

After migrating schema using AWS Schema Conversion Tool (AWS SCT) and non-transactional data as SQL exports into baseline Flyway scripts, the next step was to identify code changes.

Most of the integration test cases failed with org.springframework.jdbc.BadSqlGrammarException, revealing all the inline SQL statements that needed migration. One such SQL statement used the MySQL uuid() function, which failed with the following error, indicating that uuid() is MySQL specific:

org.postgresql.util.PSQLException: ERROR: function uuid() does not exist
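An aside, not what the post does for the migration: since the application hand-codes its SQL anyway, one database-agnostic alternative is to generate the identifier in Java with java.util.UUID (random, version 4) and bind it as a parameter, avoiding vendor-specific SQL functions altogether. A minimal sketch (class name UuidInCode is illustrative):

```java
import java.util.UUID;

public class UuidInCode {

    // Generates a random (version 4) UUID in application code,
    // independent of MySQL's uuid() or PostgreSQL's gen_random_uuid()
    static String newId() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        String id = newId();
        System.out.println(id + " (version " + UUID.fromString(id).version() + ")");
    }
}
```

The generated value can then be bound into an INSERT with a plain ? placeholder, which works unchanged on both databases.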

MySQL - uuid support

MySQL's uuid() function generates a version 1 UUID, whose algorithm involves the MAC address of the machine and a timestamp.

-- generates a random uuid of version 1
SELECT uuid();
-- 635ef48c-1498-11eb-a38e-4bb3a0084954

PostgreSQL

The PostgreSQL distribution comes with additional supplied modules, but they are not installed by default. Any user account with the CREATE privilege can install a module. The pgcrypto module provides a UUID generation function, gen_random_uuid(), which generates a version 4 UUID, derived entirely from random numbers.

Without the pgcrypto module installed, the SQL:
SELECT gen_random_uuid(); -- generates a random UUID

would result with the following error:
SQL Error [42883]: ERROR: function gen_random_uuid() does not exist
  Hint: No function matches the given name and argument types. You might need to add explicit type casts.

The following SQLs are handy to list all available extensions and to see the extensions already installed.
SELECT * FROM pg_available_extensions ORDER BY name; -- list available extensions
SELECT * FROM pg_extension; -- list installed extensions

To install the pgcrypto extension, use the following command.
This can go into the application's Flyway script to get the extension installed into the database as needed.
CREATE EXTENSION IF NOT EXISTS "pgcrypto"; -- creates extension

Once the pgcrypto extension is installed, the following returns a randomly generated UUID:
SELECT gen_random_uuid(); -- generates a random uuid of version 4
-- 842d3fae-7788-4ecb-b441-7c7e8130b8bf

NOTE
In PostgreSQL 13, this function is made available in core, so there is no need to install the pgcrypto module.

TIP

Finding the version number of a given UUID string is no big deal.
The M in the UUID format xxxxxxxx-xxxx-Mxxx-xxxx-xxxxxxxxxxxx tells the version number.
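In the canonical 36-character form, that M character sits at index 14, so it can be picked out directly; java.util.UUID.version() used in the Groovy script that follows does the same more robustly. A quick plain-Java sketch:

```java
import java.util.UUID;

public class UuidVersionDigit {
    public static void main(String[] args) {
        String uuid = "842d3fae-7788-4ecb-b441-7c7e8130b8bf";
        // xxxxxxxx-xxxx-Mxxx-xxxx-xxxxxxxxxxxx : the M character is at index 14
        char m = uuid.charAt(14);
        System.out.println("version digit: " + m);                         // '4'
        System.out.println("via API: " + UUID.fromString(uuid).version()); // 4
    }
}
```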

Groovy script to find out the version number of a given uuid string:
println "uuid version: ${UUID.fromString('842d3fae-7788-4ecb-b441-7c7e8130b8bf').version()}" // 4, PostgreSQL gen_random_uuid() generated
println "uuid version: ${UUID.fromString('635ef48c-1498-11eb-a38e-4bb3a0084954').version()}" // 1, MySQL uuid() generated




Thursday, October 08, 2020

Make your Spring Boot application's API documentation a complete specification with enhanced Swagger-UI annotations . . .

In a RESTful application, documenting the end-point specification/schema is very important. There are various frameworks in the Java space addressing this problem with different approaches. Obviously, the best way is to generate the API specification from the source code so that it stays up to date and accurate with your source code.

Spring RESTDocs offers a very good solution. It generates API docs from hand-written Asciidoctor templates merged with snippets auto-generated from unit/integration tests, promoting end-point testing to great levels. (Refer to my earlier post on this in a Grails application.)

Swagger UI is another solution that generates visual documentation from the source code. It also generates a testable Swagger UI page for all end-points, along with the Open API specification for each end-point. To get this right and complete, you need to add additional details documenting the API specification/schema, either in a YAML file or by annotating source code: basically the end-point action methods and the objects involved in request/response handling.

Swagger UI is very useful and convenient, not only for knowing the specification details but also for testing the REST APIs, both from the same page. Spring Boot comes with good support for this. I am not going to go into the details of adding Swagger UI support with Open API specification to a Spring Boot application; there are numerous posts on that.

This post is more about leveraging Swagger (OpenAPI 3 implementation) annotations to get a better API specification/schema generated. It also goes into the details of customizing the example end-point request/response JSON so that it shows a sample request with meaningful data rather than default data. Without adding any specific annotations for the API specification, you will get a decent Swagger UI page. However, it is good to add a little more detail and make the specification much cleaner and clearer.

Environment: Java 13, Spring Boot 2.2.4.RELEASE, PostgreSQL 12.1, Maven 3.6.2 on macOS Catalina 10.15.6

Without any additional Swagger annotations

For instance, take a Spring Boot application with a POST end-point method to create a Person, and request objects with no additional Swagger annotations, as shown below:

@RestController
@Slf4j
public class PersonController {
    ...
    @PostMapping(value = "/person",
            produces = { MediaType.APPLICATION_JSON_VALUE, MediaTypes.HAL_JSON_VALUE })
    public ResponseEntity create(@Valid @RequestBody Person person) {
        ...
        return new ResponseEntity<>(newPerson, HttpStatus.OK);
    }
}

@Data
public class Person {
    private String firstName;
    private String lastName;
    private Gender gender;
    private int age;
    private String email;
    private Address address;
}

@Data
public class Address {
    private String address1;
    private String address2;
    private String city;
    private String state;
    private String zip;
}

public enum Gender { FEMALE, MALE }

would result in the Swagger UI shown below:


and request schema details look like:


Note that the example request JSON does not have good data for the fields. When you click the Try it out button to test the API, you have to edit the values of all the fields with good data. Getting a good example request with good sample data generated on the Swagger UI page requires additional Swagger annotations.

With additional Swagger annotations

Enhance the code by adding annotations as shown below:

@RestController
@Slf4j
public class PersonController {
    ...
    @Operation(summary = "Creates a new Person.", tags = { "Person" })
    @ApiResponses(value = {
        @ApiResponse(responseCode = "200", description = "Returns newly created Person."),
        @ApiResponse(responseCode = "403", description = "Authorization key is missing or invalid."),
        @ApiResponse(responseCode = "400", description = "Invalid request.")
    })
    @PostMapping(value = "/person",
            produces = { MediaType.APPLICATION_JSON_VALUE, MediaTypes.HAL_JSON_VALUE })
    public ResponseEntity create(@Valid @RequestBody Person person) {
        ...
        return new ResponseEntity<>(newPerson, HttpStatus.OK);
    }
}

@Data
@Schema(description = "A JSON request object to create Person")
public class Person {

    @NotNull
    @Size(min = 4, max = 128)
    @Schema(example = "John")
    private String firstName;

    @NotNull
    @Size(min = 4, max = 128)
    @Schema(example = "Smith")
    private String lastName;

    @NotNull
    @Schema(type = "enum", example = "MALE")
    private Gender gender;

    @NotNull
    @Min(1)
    @Max(100)
    @Schema(type = "integer", example = "25")
    private int age;

    @NotEmpty
    @Email
    @Schema(example = "john.smith@smith.com")
    private String email;

    @NotNull
    @Valid
    private Address address;
}

@Data
@Schema(
    example = """
        {
          "address1" : "1240 E Diehl Rd.",
          "address2" : "#560",
          "city" : "Naperville",
          "state" : "IL",
          "zip" : "60563"
        }
        """
)
public class Address {

    @NotNull
    @Size(min = 4, max = 128)
    @Schema(example = "1 N Main St.")
    private String address1;

    @Schema(example = "Apt. 100")
    private String address2;

    @NotNull
    @Size(min = 4, max = 128)
    @Schema(example = "Sharon")
    private String city;

    @NotEmpty
    @Size(min = 2, max = 2)
    @Schema(example = "MA")
    private String state;

    @NotEmpty
    @Size(min = 5, max = 5)
    @Schema(example = "02067")
    private String zip;
}

public enum Gender { FEMALE, MALE }


would result in the Swagger UI shown below:


and request schema details look like:


Note that the specification and example are much cleaner, with good data for all elements.

@NotNull, @NotEmpty, etc. - javax Validation Annotations

Also, the javax field constraint annotations used for validation are taken into account very well. For instance, all required fields (annotated with @NotNull or @NotEmpty) are marked as required elements, with the suffix * added to the element name.

Also, any invalid request results in a more meaningful error response. In the example shown below, the required field gender is missing and age has the invalid value 0 in the request:




@Operation - swagger ui annotation

Annotate resource operations (controller methods) with this to add more details. The summary element of the annotation can be leveraged to add a meaningful description of the operation; by default there is no description. The tags element can be leveraged to logically group operations so that they all show up under that tag on the page. If not specified, the default tag value is the hyphenated class name; in the code example above (without annotations), the default tag value is person-controller.

@Operation(summary = "Returns a list of MyDomain", tags = { "MyDomain" })

@ApiResponses - swagger ui annotation

Further enhance the response descriptions by annotating the controller method with @ApiResponses, describing every possible response code as shown below:

@ApiResponses(value = {
    @ApiResponse(responseCode = "200", description = "Returns list of MyDomains."),
    @ApiResponse(responseCode = "403", description = "Authorization key is missing or invalid."),
    @ApiResponse(responseCode = "400", description = "Invalid request.")
})

Also, the class itself can be annotated with @ApiResponses to describe all the common response codes like 400, 401, 404, 500, etc. and keep the annotations DRY. The controller methods can then describe just 200 and any additional specific response codes, and can also override the common response code descriptions annotated at the class level. The following is an example class-level annotation common to all controller methods:
 
@ApiResponses(value = {
    @ApiResponse(
        responseCode = "400",
        description = "Bad Request.",
        content = { @Content(mediaType = "application/json",
                             schema = @Schema(implementation = Errors.class)) }
    ),
    @ApiResponse(
        responseCode = "401",
        description = "Unauthorized. Authorization key is missing or invalid.",
        content = { @Content(schema = @Schema(implementation = Void.class)) }
    ),
    @ApiResponse(
        responseCode = "404",
        description = "Not Found.",
        content = { @Content(schema = @Schema(implementation = Errors.class)) }
    ),
    @ApiResponse(
        responseCode = "500",
        description = "Internal Server Error.",
        content = { @Content(schema = @Schema(implementation = Errors.class)) }
    )
})
public class PersonController {
    ...
    @ApiResponses(value = {
        @ApiResponse(responseCode = "200", description = "Returns newly created Person.")
    })
    @PostMapping(value = "/person",
            produces = { MediaType.APPLICATION_JSON_VALUE, MediaTypes.HAL_JSON_VALUE })
    public ResponseEntity create(@Valid @RequestBody Person person) {
        ...
        return new ResponseEntity<>(newPerson, HttpStatus.OK);
    }
}

For error responses with codes like 400, 404, 500, etc. that return a Spring Errors object for any kind of failure (validation, exceptions, etc.), the implementation class can be specified as shown above. If there is no content for a response, like 401, then Void.class is suitable, which results in no details/schema for that response.

@Schema - swagger ui annotation

Annotate request and response objects with this annotation to describe them. Also, annotate object properties to add the data type and example data, in order to enhance the sample request with more meaningful data.

The type element of this annotation can be used to specify the data type, and the example element to specify an example value. Otherwise, the value defaults to the Java default; for enums, it picks the first one in the list of enumerations.

For String data types, if a specific set of values is expected, the list can be specified as an array of Strings in the allowableValues element. This shows up in the schema for that element as an enumeration of the values allowed.

TIPS

  • The @Schema annotation can be used at the class level to specify a JSON representation of the object, with meaningful data for all the object fields as an example, as shown for the Address object in the code snippet above. When specified at this level, it takes precedence over field-level example data. I used the Java 13 preview feature of multi-line strings (text blocks) in there.
  • http://localhost:8080/swagger-ui.html shows Swagger UI page of your application. It basically redirects to http://localhost:8080/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config.
  • http://localhost:8080/swagger-ui/index.html gives the Swagger UI page for pet-store based on https://petstore.swagger.io/v2/swagger.json. This is enabled by default. I have not found a way to disable this :(
  • If there is a collection property like List in the object, for instance a List<Address> addresses; then you need to annotate it as shown below:
@ArraySchema(schema = @Schema(implementation = Address.class))
List<Address> addresses;
  • Operations can be logically grouped by tags. Each tag can have name and description properties. With annotations, @Operation can only take tag names, not a description; this is a limitation. The @Tag annotation supports both name and description. So, if a couple of operations need to be grouped under one tag name that should also carry a description, one operation/method can use @Tag with name and description, while the other uses @Operation with the tags property. This works: both operations get grouped under the same tag name, and the tag description is shown along with the tag name. So @Tag and @Operation can be mixed and matched across operations/methods for the same tag group.
  • By default, response messages are generated for response codes 200, 400, 403, 404, 405, 406, 500, and 503 in the responses section of the page, even though the method is annotated with @ApiResponses for only response codes 200, 400, and 403. To fix this, add the following springdoc property to application.yml or application.properties appropriately:
springdoc:
  override-with-generic-response: false

Friday, September 18, 2020

Why I had to downgrade my IntelliJ IDEA to an older version . . .

Why do you ever downgrade to older version of an IDE, especially IntelliJ IDEA?

Well, I recently had to.

Environment: Java 13.0.2, Spring Boot 2.2.4.RELEASE, Maven 3.6.2, IntelliJ IDEA ULTIMATE 2020.2.2 on macOS Catalina 10.15.6

Reason for Downgrade

I had a Spring Boot project using JDK 13 preview features like text blocks and enhanced switch expressions. So, initially I had to explicitly select that option for the project, as shown below, for the code to compile in the IDE:


Everything was fine until I updated IntelliJ from version 2020.1.4 to 2020.2.2. With IntelliJ 2020.2, a unit test case failed to run from the IDE with the following error:
java: error: invalid source release: 14

When I checked the project settings, I was puzzled to see the following:

I thought something had gotten messed up, and wanted to change that to the option 13 (Preview) - Switch expressions, text blocks. To my surprise, that option was not even in the list, as shown below:


What the heck is going on? I had to google to find out why I no longer had that option. The IntelliJ Supported Java versions and features page I landed on (https://www.jetbrains.com/help/idea/supported-java-versions.html#2020) indicates, as shown below, that IntelliJ IDEA 2020.2 does not have that feature anymore; 2020.1 has it. Alas!

IntelliJ IDEA Version: 2020.2


IntelliJ IDEA Version: 2020.1


Now I had no option other than downgrading to 2020.1.x version.

JetBrains Toolbox App - handy for keeping and using multiple IntelliJ versions

What if I want to keep both versions and switch between them?
Along the way, I came across the JetBrains Toolbox app and tried it. It has become my new friend, and I have started using it. The following shows the two versions I have installed using the Toolbox. Now I can easily switch to whichever version I want to use.


TIP

Even if you already have an older version of IntelliJ IDEA installed, you may need to reinstall it using the JetBrains Toolbox.


Friday, March 06, 2020

Fly safe within limits with Flyway in a Spring Boot application . . .

Flyway seems more popular than Liquibase in the Java world. Coming back to Java after a few years of joy with Grails and the much more flexible DB migration solution offered by the Grails database-migration plugin (which has Liquibase under the covers), I certainly felt a little limited flying with Java and Flyway within the very first couple of hours of exploring it.

Liquibase offers more flexibility through a ledger: a change-log XML file in which you define the order of your migration scripts. The Grails database-migration plugin enhances migration scripts, typically written in SQL, with added Groovy DSL support. Also, the change-log file can be in Groovy instead of XML. XML was once hot and is legacy now (except for Maven, where it is still modern). The Grails database-migration plugin offers full power in dealing with database migrations, including support for generating a base-level or starting migration script, incremental change scripts, a rollback mechanism, etc. The documentation is also top notch.

With Flyway, you do not have that flexibility of controlling the order of migration scripts through a change-log-like ledger file. You have to follow version-embedded filename (SQL or Java) conventions. It is highly recommended to follow timestamp-based filename versioning. I am yet to explore its Java way of dealing with complex migrations, but I am sure it is not going to be as pleasing as working with database migrations in Grails projects with the expressive nature of Groovy code.
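As an illustration of the timestamp-based naming convention (assuming Flyway's default V<version>__<description>.sql filename pattern; the class and description names here are hypothetical), a script name can be derived like this:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class FlywayScriptName {

    // Builds a Flyway-style versioned filename using a timestamp as the version,
    // e.g. V20201022143015__create_person.sql
    static String migrationName(LocalDateTime when, String description) {
        String version = when.format(DateTimeFormatter.ofPattern("yyyyMMddHHmmss"));
        return "V" + version + "__" + description + ".sql";
    }

    public static void main(String[] args) {
        System.out.println(migrationName(LocalDateTime.of(2020, 10, 22, 14, 30, 15), "create_person"));
    }
}
```

Timestamp versions keep scripts naturally ordered and avoid version-number collisions between developers working on parallel branches.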

There are tons of articles comparing Flyway and Liquibase. This post is not a comparison, but some exploration of Flyway and JPA capabilities, with a Grails database-migration plugin mindset, in a Java-based Spring Boot project with JPA.

Environment: Java 13, Spring Boot 2.2.4.RELEASE, PostgreSQL 12.1, Maven 3.6.2 on macOS High Sierra 10.13.6

Generate BASE DDL

It is tempting to start hand-coding Flyway SQL scripts once your initial domain model is ready with JPA annotations. This is highly error prone and disconnects your JPA-powered domain model from the DB in the process of initializing the DB with the schema and getting it validated against the model. One way to avoid this is to generate the DDL scripts from the model.

I prefer having DDL scripts generated over hand-coding them. JPA has this feature, and Hibernate offers a decent implementation. This gives you a jumpstart with DB migration scripts. You can take the generated script, copy and paste it into a Flyway migration script file, and polish it further. This way, your model gets verified through the generated script taken into the Flyway script and applied to the DB, so any discrepancies between the model and the DB can be avoided later in the game.

In order to get the DDL script generated, you need to make some run-time configuration changes for your local environment (the environment for which you need the DDL generated). There are three ways to do this (at least the ways I've explored).

Option-1: Make changes to your environment properties/yml file as shown below:

bootstrap-local.yml
spring:
  jpa:
    properties:
      hibernate:
        # generating DDL - add me; Hibernate 5.1.0 onwards, the default end-of-SQL-statement delimiter is none in generated DDLs
        hbm2ddl.delimiter: ';'
      # generating DDL - add me
      javax:
        persistence:
          schema-generation:
            scripts:
              action: create
              create-target: create.sql
  flyway:
    # generating DDL - make sure I am turned off
    enabled: false

Run your app with the above changes, and a create.sql file will be generated in the directory you ran the app from. Examine the generated DDL and make any necessary changes before copying it into the Flyway base SQL script.

Revert the changes done to your environment properties/yml file and bring up the application. Flyway should be flying happily taking the base DDL script file and applying it to your database.

Option-2: Set those properties on the Maven command line. (Fine print: for some reason, this option does not work consistently for me. I am not at all happy with the Spring Boot Maven Plugin's documentation; you need to rely on extensive and tireless searching to find out how to get this done :( )

Alternatively, you can simply override those run-time config properties for your local env on the Maven command line and get the DDL generated. This way, you don't have to temporarily change your local run-time config file every time you need to generate DDL and revert it afterwards. An example of running the Maven wrapper command on the root project, when you have a Spring Boot project (myservice-api) as one of the modules, is shown below:

./mvnw -pl myservice-api clean install spring-boot:run -Dspring-boot.run.profiles=local -DskipTests \
-Dspring-boot.run.arguments=\
--spring.flyway.enabled=false,\
--spring.jpa.properties.javax.persistence.schema-generation.scripts.action=create,\
--spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=create.sql,\
--spring.jpa.properties.hibernate.hbm2ddl.delimiter=\;

In the above command, we basically overrode the four run-time config properties shown earlier in the yml file for DDL generation:
  1) disabled Flyway
  2) specified schema-generation type
  3) specified the DDL file name to be generated
  4) specified the delimiter character, the end of statement character for SQL statements generated in the DDL file.

All backslashes (\) are just shell line-continuations, except the very last one, which escapes the end-of-statement delimiter character (;) for the generated DDL script.

If you are lucky, you will have create.sql file generated in the directory you ran this command from. Examine the DDL generated before copying it into Flyway Base SQL script.

Simply bring up your application. Flyway should be flying happily taking the base DDL script file and applying it to your database.

Option-3 (my preferred option): Run with your runnable jar

Have a runnable jar created (typically under the target directory in your module). Simply bring up the application, passing all the properties to override on the command line. This way, you stay away from Maven and all the issues it brings along with it. An example is shown below:

For action create:
java --enable-preview -Dspring.profiles.active=local -jar <path/to/your/jar-file/executable/jar-file.jar> \
  --spring.flyway.enabled=false \
  --spring.jpa.properties.javax.persistence.schema-generation.scripts.action=create \
  --spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=create.sql \
  --spring.jpa.properties.hibernate.hbm2ddl.delimiter=\;

For action update:
java --enable-preview -Dspring.profiles.active=local -jar <path/to/your/jar-file/executable/jar-file.jar> \
  --spring.flyway.enabled=false \
  --spring.jpa.properties.javax.persistence.schema-generation.scripts.action=update \
  --spring.jpa.properties.javax.persistence.schema-generation.scripts.create-target=update.sql \
  --spring.jpa.properties.hibernate.hbm2ddl.delimiter=\;


Again, all backslashes (\) are just shell line-continuations, except the very last one, which escapes the end-of-statement delimiter character (;) for the generated DDL script.

If you want to run it from IntelliJ instead of command-line, setup a Run Configuration as shown below:


Incremental DDL changes

Once you have the base DDL Flyway script applied, there will be changes made to the domain model as your development progresses and the model starts to evolve. As and when your domain model goes through changes, you need to put corresponding Flyway SQL migration scripts in place.

I'VE NOT FOUND A WAY TO GET THIS DONE!

NOTE: Though I have not found an action like update authoritatively documented anywhere, I just tried it, and it does work, generating something, though not very useful. All I did was change the action from create to update and the create-target from create.sql to update.sql.

If your previously generated create.sql/update.sql file is hanging around and you use the same file for incremental changes, it simply gets appended with the resulting incremental DDL statements. That is definitely not what you want, so make sure you delete it or use a different name.

Once you have the incremental DDL script, examine it and copy it into a new Flyway script file. Bring up the app to have Flyway flying again, taking the newly added script with it and applying it to the database.

Leverage JPA Annotations as much as you can in order to generate your DDL accurately

A good database schema design should have all data constraints applied. These include primary key constraints, foreign key constraints, unique constraints, etc. JPA offers annotations that can be leveraged to generate the constraint-creation DDL commands as well.

PRIMARY KEY Constraint
public class MyDomain {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(nullable = false, updatable = false)
    private Long id;
    ...
}

The above JPA annotation generates the following DDL script:

CREATE TABLE my_domain (
    id SERIAL PRIMARY KEY,
    ...
);

When the type is SERIAL, PostgreSQL generates a table-specific sequence, my_domain_id_seq, and with the IDENTITY generation strategy this sequence is used both by the database and by JPA.

UNIQUE KEY Constraint
@Table(
    uniqueConstraints = @UniqueConstraint(
        columnNames = {"prop1", "prop2"},
        name = "my_domain_p1_p2_uk"
    )
)
public class MyDomain {
    ...
    String prop1;
    String prop2;
}

The above JPA annotation generates the following DDL script:
ALTER TABLE my_domain ADD CONSTRAINT my_domain_p1_p2_uk UNIQUE (prop1, prop2);

FOREIGN KEY Constraint
public class MyDomain {
    ...
    @ManyToOne(fetch = FetchType.EAGER, optional = false)
    @JoinColumn(
        name = "my_prop_type_id",
        foreignKey = @ForeignKey(name = "my_domain_mpt_fk"),
        nullable = false,
        insertable = false,
        updatable = false
    )
    private MyPropType myPropType;
    ...
}

The above JPA annotation generates the following DDL script:
ALTER TABLE my_domain ADD CONSTRAINT my_domain_mpt_fk FOREIGN KEY (my_prop_type_id) REFERENCES my_prop_type;

TIPS

Get that missing Semicolon back

Without explicitly setting the property spring.jpa.properties.hibernate.hbm2ddl.delimiter=; the generated DDL statements will not end with a semicolon. If you set it on the command line instead of in the env-specific application yml/properties file, make sure to escape the ; with \ as shown below:
spring.jpa.properties.hibernate.hbm2ddl.delimiter=\;

Turn Flyway on/off

Flyway can be turned on/off by setting the property spring.flyway.enabled=true/false, either in the application yml/properties files or on the command line when mvn/mvnw is run. I am not happy with overriding on the Maven command line, as it eats up my time with stupid errors that I do not want to break my head over anymore; use that option at your own discretion :)

Happy Coding!
Have a limited but safe flight with Flyway and Maven in a Spring Boot application!!

Tuesday, February 25, 2020

Maven multi-module project in IntelliJ IDEA - several ways to try when stuck with an issue . . .

"Nobody uses Maven, Maven uses you." - Venkat Subramaniam

I have heard Venkat say that at least a few times in his presentations on various topics at conferences and Java User Group meetups. I used to laugh along with many others when he would say it after asking, "Who uses Maven here?". I was happy NOT to be one of those raising a hand for that question; I have been using Gradle for a long time. Now, if I attend any of his sessions and happen to hear that question again, I will be one of those raising a hand. "Welcome back!", Maven said, and started using me ;(

I have worked with Gradle-based projects and imported them into IntelliJ numerous times. I have even brought multiple Gradle projects into a multi-project Gradle build under one project, with proper inter-project dependencies. But I never had to break my head over issues for hours.

Environment: Java 13.0.1, Spring Boot 2.2.4.RELEASE, Maven 3.6.2, IntelliJ IDEA ULTIMATE 2019.3 on macOS High Sierra 10.13.6

The issue I ran into lately with a maven multi-module project

I recently had to start a new module in a multi-module Maven project. The new module was a fairly simple Java application (executable jar), with just the spring-boot-starter-batch dependency to start with and some other related dependencies. Suddenly, IntelliJ was unhappy by all means with the newly added module, failing to resolve dependencies and add dependent libraries to the classpath for compilation. The module was just fine running Maven tasks like compile, install, package, etc. from the command line. I was even able to run the app from the executable jar created.

I literally ran out of options dealing with this issue in IntelliJ. Several people seem to have faced a similar issue, and there was no consistent solution that worked for everyone. Some of the solutions that worked for some but not all are:
  1) Invalidating caches and restarting IntelliJ (an option found under File menu item, beware that IntelliJ takes a while after restart to index depending on the size of all your projects imported into IntelliJ)
  2) Deleting the project from the initial welcome pane and importing again
  3) Deleting the project from the initial welcome pane, but instead of importing, opening it and going through steps.
  4) Clearing local maven cache all the way etc.

None of the above options worked for me. Phrases like "That worked for me" and "This worked yesterday, doesn't work anymore" are quite commonly heard. This is a common pattern in software development. I call it a head-breaking pattern ;)

Insights of Maven multi-module project in IntelliJ

A multi-module Maven project contains a root pom.xml along with module-specific pom.xml files. The easiest way to import a Maven project into IntelliJ IDEA is from the welcome pane: click Import Project and select the root pom.xml. If everything goes well, IntelliJ imports the project, resolves each module's dependencies, compiles the code, and reports errors or issues. During this process, it creates <module-name>.iml files under every module, including the root project (the module name is taken from the corresponding module's name property specified in its pom.xml). If you open any of these .iml XML files, they contain dependency details as entries like <orderEntry type="library" name="Maven: ..."/>. Also, from the dependencies, IntelliJ detects the frameworks needed and sets up the proper facets, like Spring, JPA, Web, etc., getting all the needed tool support for the module.

When stuck with an issue, there are several ways to add, delete, or import a specific module in IntelliJ IDEA

When there are issues with any specific module, that particular module can be deleted and added/imported again in a few different ways in IntelliJ.

1) From the Project Tool Window (Did not solve my issue)

Right click the module and select Remove Module as shown below:



and import the removed module by adding the module to the project again as shown below:


2) From Project Settings - New Module (Did not solve my issue)

Press Cmd + ; (⌘;) or go to File > Project Structure and delete module as shown below:


and add the module again by clicking + on the top. A window with two options pops up as shown below:



select New Module, then select Maven and go through steps from New Module Pane on the left as shown below:


Make sure the Module SDK looks good, click Next, select the Parent from the drop-down, enter the same module name for Name:, and notice that the Location: gets updated with the name as entered.

3) From Project Settings - Import Module (This solved my issue)

Press Cmd + ; (⌘;) or go to File > Project Structure and delete module as shown below:


and add the module again by clicking + on the top. A window with two options pops up as shown below:



select Import Module instead of New Module. Select the module's pom.xml from the Finder window that opens and click Open. Then make sure the correct directories of your module are marked under Mark as: Sources (main/java), Tests (test/java), Resources (main/resources), Test Resources (if any), and click OK.

This properly updates IntelliJ's classpath in the module's *.iml file, which gets created under your module's root folder. Once imported, IntelliJ also recognizes frameworks appropriately from the dependencies. In my case it was just the Spring framework/facet. When you right-click on the module in the project structure pane and mouse over the +Add option, you will see the list of frameworks/facets available as shown below:



Summary

We are living in times when a solution to an issue is just a Google search away. But sometimes, something that worked for others won't work for you. This is one such issue, over which I was almost about to bang my head against the wall. After tirelessly exploring the possible ways of deleting and creating modules, I finally found the option that worked for me consistently.

Software Development is not easy and will never become easy. This is a hard fact ;)


Saturday, February 15, 2020

Bank on Lombok in a Spring Boot application . . .

It's been over a decade since my eyes had seen Java boilerplate code like getters, setters, various overloaded constructors, toString(), equals(), hashCode() methods etc. My brain and eyes got used to very quiet and clean code. Now I suddenly realize that I have been quietly (joyfully) coding in Groovy for a long time. I am back to Java, and all that noise is back and has started to bother both my brain and eyes :(

To push all that noise away from your eyesight into compiled Java byte-code, there is this nice Java library called Lombok. Java developers never say NO to another jar file dependency, as the Java world simply loves to have tonnes and tonnes of libraries in projects anyway ;)

Lombok is a neat Java library, both developer and compiler friendly: it saves a lot of time, makes code look less noisy, and increases the life of both your keyboard and your fingers ;). It provides various useful annotations that generate all that boilerplate code into compiled byte-code to please the Java compiler and many Java frameworks. There are many resources and blog posts on Lombok. I am only describing a few annotations that I have explored in the context of Spring Boot with JPA and thought would be useful across many Java projects. I will definitely take Lombok with me into every Java project that I get into.

Environment: Java 13.0.1, Spring Boot 2.2.4.RELEASE, Maven 3.6.2, IntelliJ IDEA ULTIMATE 2019.3 on macOS High Sierra 10.13.6

All you need to start leveraging Lombok in any Java project is a single dependency in your build configuration (Maven/Gradle). That gives you the power to auto-generate all that noise and push it away into byte-code by annotating your code, when the code gets compiled as part of the build process. But IDEs compile code as we write and may need a bit more setup in order for the compiled classes to have all the boilerplate generated into the bytecode.
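For Maven, that single dependency looks like the snippet below (the version shown is illustrative; use the latest available one):

```xml
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <!-- illustrative version; pick the latest release -->
    <version>1.18.12</version>
    <!-- "provided" keeps Lombok out of the runtime artifact: it is only needed at compile time -->
    <scope>provided</scope>
</dependency>
```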

IntelliJ IDEA Support and Setup

IntelliJ IDEA requires the following 2 steps:
  1. Install Lombok plugin.
      Press Cmd + , (⌘,) or go to IntelliJ IDEA > Preferences
      Click Plugins, Search for Lombok and install
  2. Enable Java compiler feature: Annotation Processors.
      Press Cmd + , (⌘,) or go to IntelliJ IDEA > Preferences
      Go to Build, Execution, Deployment > Compiler > Annotation Processors and Check Enable Annotation Processing


Eclipse based IDE Setup

Check this article: Setting up Lombok with Eclipse and IntelliJ

Some Useful Lombok Annotations


@Data

This annotation takes a Java POJO (Plain Old Java Object) nearer to a Groovy POGO (Plain Old Groovy Object) by taking away a lot of boilerplate methods. Typically, domain objects do not contain any logic other than fields/properties to carry data for persistence. JPA entities, or any kind of objects that carry data, are good candidates to leverage this annotation. Annotate a class with it and forget all the getters, setters, toString(), hashCode(), equals() etc.
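To get a feel for what this saves, below is a rough hand-written sketch of what Lombok generates for a small two-field class annotated with @Data (the Person class and its fields are made up for illustration):

```java
import java.util.Objects;

// Hand-written equivalent of:  @Data class Person { private String name; private int age; }
class Person {
    private String name;
    private int age;

    // generated getters and setters
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    // generated structural equality
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person p = (Person) o;
        return age == p.age && Objects.equals(name, p.name);
    }

    @Override
    public int hashCode() { return Objects.hash(name, age); }

    // generated readable representation
    @Override
    public String toString() { return "Person(name=" + name + ", age=" + age + ")"; }
}
```

All of this disappears from source with a single annotation, and two private fields are all that remain.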


@AllArgsConstructor

Annotating a class with this, you don't have to worry about providing a constructor to initialize the required object properties. It is very useful in Spring beans/components like services, where you typically write an all-args constructor that takes all the dependency beans and sets the required dependencies. This is preferred over using @Autowired on fields for various good reasons. In this case, if you add a new dependency to an existing service, you don't have to worry about changing (or missing to change) the constructor.

Also, in enums, if you have extra properties set for each enum instance, you can skip writing and maintaining the constructor that would otherwise be required, by annotating the enum with this.
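For example, an enum with a per-instance property needs a constructor like the one below; with Lombok, @AllArgsConstructor (typically paired with @Getter) lets you delete it. The Status enum here is hypothetical:

```java
// Hand-written version. With Lombok this would shrink to:
//   @AllArgsConstructor @Getter
//   enum Status { ACTIVE("A"), INACTIVE("I"); private final String code; }
enum Status {
    ACTIVE("A"),
    INACTIVE("I");

    private final String code;

    // this constructor is what @AllArgsConstructor generates for you
    Status(String code) { this.code = code; }

    public String getCode() { return code; }
}
```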


@NoArgsConstructor

Usually the no-args constructor, also called the default constructor, comes free, provided by the Java compiler. By writing specific constructors, this freebie is taken away. In those instances, you may still need to provide a no-args constructor for frameworks that need it. This annotation is useful in such cases.

@RequiredArgsConstructor

Useful in a Spring Boot application when you use constructor-based injection rather than field-based injection (@Autowired). Constructor-based injection is preferable to field-based injection anyway, for various good reasons. In this case, you typically declare all required dependent beans as private final fields and provide a constructor that initializes all of those required beans. Spring Boot auto-injects all those beans by calling the constructor.

This annotation is right for this kind of situation: you don't need to write the constructor and maintain it as you add more dependency beans. Also, the moment you add another final required bean dependency, any unit test that used the previously generated constructor to initialize dependencies fails to compile right away.
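In plain Java, the constructor that @RequiredArgsConstructor generates for the final fields looks like this (OrderService and OrderRepository are hypothetical names, and the real service would also carry Spring's @Service annotation):

```java
// Hypothetical dependency bean
class OrderRepository {
    String findOrder(long id) { return "order-" + id; }
}

// With Lombok this class would just carry @RequiredArgsConstructor;
// the constructor below is exactly what the annotation generates
// for every private final field.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    String getOrder(long id) { return repository.findOrder(id); }
}
```

Spring calls this (single) constructor to inject the beans, so no @Autowired is needed.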


@Getter, @Setter

These flexible annotations for the fields/properties of a class reduce the noisy Java bean methods required by many Java frameworks like Hibernate. This by itself is good relief for the eyes!

@Builder

This annotation brings the builder pattern into the bytecode. Oftentimes, simple POJOs contain many properties. Creating an object becomes a bit complex the traditional POJO way of creating the object and then populating its properties by calling setters one by one, which may lead to missing some properties. The builder pattern brings in fluent object creation: a builder method, followed by setters, and at the end a call to a method that builds the object.
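Hand-written, the fluent style that @Builder generates looks roughly like this minimal sketch (the Account class and its fields are hypothetical):

```java
// Hand-written equivalent of annotating Account with @Builder
class Account {
    private final String owner;
    private final double balance;

    // @Builder makes the constructor non-public and routes creation through the builder
    private Account(String owner, double balance) {
        this.owner = owner;
        this.balance = balance;
    }

    static Builder builder() { return new Builder(); }

    String getOwner() { return owner; }
    double getBalance() { return balance; }

    // the generated nested builder class
    static class Builder {
        private String owner;
        private double balance;

        Builder owner(String owner) { this.owner = owner; return this; }
        Builder balance(double balance) { this.balance = balance; return this; }
        Account build() { return new Account(owner, balance); }
    }
}
```

Usage then reads fluently: Account.builder().owner("Alice").balance(100.0).build().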


@SuperBuilder

If you use the builder pattern/support provided to facilitate readable complex object instantiation and have an object hierarchy, you need to annotate your super class(es) with this annotation for all the properties inherited from the super class to be available as builder setter methods. Though it is still listed as an experimental feature, it is very useful and safe to use.


@UtilityClass

It is typical in Java code to write or come across utility classes with just static methods. Code coverage tools like JaCoCo report the class definition line (e.g. public class MyUtilityClass {) as uncovered for these classes, as no instance is ever created. You can fool the tool by just creating an instance, but that is a silly thing to do for coverage. Even if you make the class final and provide a private constructor to fully protect it from instantiation (a typical utility class should be like this anyway), this additional noise will not get any coverage, as there won't be any test for the private constructor. Also, there is NO reason to break your head to get coverage for a private constructor.

So, the best way is to push all that noise out of the code into the byte-code and exempt it from coverage. The annotation @UtilityClass gives you exactly this by making the class final and providing a private constructor in the byte-code. It not only takes away the noisy boilerplate but also improves the coverage, as you tell JaCoCo to ignore Lombok-generated code in the byte-code anyway. Neat!

Code coverage is only a measure of how much of the code is covered by automated tests. But little things like these add up and can bring the total percentage way down in some cases. It's a time saver if all such nasty noise goes away into byte-code without even having to bother about code coverage.

e.g.
/**
 * This class is lean and clean. The annotation takes away boiler-plate code like final with private constructor into bytecode.
 * Also, all public methods are static.
 * Once you write tests for all methods and conditions, you are guaranteed to get 100% coverage.
 */
@UtilityClass
public class MyUtil {

    public final String MY_CONSTANT = "Just a constant!";

    public void m1() { ... }

    public void m2() { ... }
}
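Under the covers, @UtilityClass turns such a class into roughly the following bytecode-level shape, hand-written here in plain Java (the shout method is a hypothetical stand-in for the elided method bodies above):

```java
// What @UtilityClass effectively generates: a final class whose private
// constructor blocks instantiation, with members made static.
final class MyUtilGenerated {
    static final String MY_CONSTANT = "Just a constant!";

    private MyUtilGenerated() {
        throw new UnsupportedOperationException("This is a utility class and cannot be instantiated");
    }

    static String shout(String s) { return s.toUpperCase(); }
}
```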


@NonNull

Java's NullPointerException is a billion dollar mistake. Though Java is a strongly typed language, the weakness lies in the null reference, and the compiler doesn't provide any mechanism to safeguard against it. Kotlin addresses this issue by distinguishing types further into nullable and non-nullable types and enforcing checks at compilation time. This is one of Kotlin's selling and compelling features for Java developers.

Checking each argument of each method for null is so much noise in code. Java 7's Objects.requireNonNull() method may lessen the noise by replacing if (arg != null) {...} else {...} style checks with one statement per argument, but it is still smelly and noisy.

Java SE 8 added another convenience class, java.util.Optional<T>, around this problem, which helps design better APIs by indicating to users whether to expect a null and forcing them to unwrap the Optional object to check for the value. It also provides some convenient methods to make code more readable. However, it is not a solution for replacing every null reference in your codebase.

Lombok's annotation @NonNull comes to the rescue. Every method argument that cannot be null can simply be annotated with it, which eliminates all the noise and makes the code a lot more readable; the intent goes into the method definition. Under the covers, Lombok just wraps the method body with an if-null check similar to the one we would write otherwise. All of that is invisible in source and only visible in bytecode. Using this annotation doesn't take away the responsibility to write tests for these null checks if you use code coverage tools like JaCoCo, which still see the generated if checks in the bytecode. It doesn't make sense to add more boilerplate to unit tests just to cover those generated if-null checks. Fortunately, there is a Lombok setting that can tell JaCoCo to ignore these wrapped if-null checks in the byte-code.
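In plain Java, the wrapper that @NonNull generates looks roughly like this (Greeter and greet are hypothetical names for illustration):

```java
class Greeter {
    // With Lombok this would be:
    //   String greet(@NonNull String name) { return "Hello, " + name; }
    // The generated bytecode is roughly equivalent to:
    String greet(String name) {
        if (name == null) {
            // Lombok's default message format is similar to this
            throw new NullPointerException("name is marked non-null but is null");
        }
        return "Hello, " + name;
    }
}
```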

lombok.nonNull.exceptionType=JDK

@Generated

Though there is no mention of it in the list of annotations in Lombok's stable or experimental features, it's good to know that this one exists, not for developers to use in code, but for tools, to indicate that they should not bother checking the annotated code for coverage. The API doc has enough details on it.

@Slf4j

Last but not least, Lombok comes with variations of logging annotation for all widely used logging implementations in Java.

Logging is an absolutely needed feature in any application. In the Java world, this feature becomes noisy as soon as your number of classes starts to grow beyond one. Every class/object that needs to log must order a logger object from the factory. The factory needs to know the class for which the logger is needed, and that class is typically the very class placing the order. In doing so, every class that needs to log must have a static final logger field initialized by passing the class name to the log factory.

All that factory business was very exciting at the beginning of this millennium. After two decades, there is no reason for all this routine noise from the log factories to be visible in the code. In my opinion, this one annotation alone is a good enough reason for adding Lombok to a Java application. Annotate classes with it and move that factory business into bytecode.
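Hand-written, that per-class factory call is the single field below. It is shown here with java.util.logging, which Lombok's @Log variant targets, since slf4j needs an extra dependency; @Slf4j does the same thing with an org.slf4j.Logger. The PaymentService class is hypothetical:

```java
import java.util.logging.Logger;

// With Lombok's @Log (or @Slf4j for slf4j) this field disappears from
// source and is generated at compile time as a field named "log".
class PaymentService {
    private static final Logger log = Logger.getLogger(PaymentService.class.getName());

    void pay() {
        log.info("processing payment");
    }
}
```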

Tips


@Builder - frameworks that require a no-arg constructor

When a class is annotated with @Builder, be aware that some frameworks, like Jackson used for JSON serialization, require a no-arg constructor. @Builder takes away the default no-arg constructor and forces you to use the builder method to create an object. This will make frameworks that leverage the no-arg constructor fail.

In this case, your best option is to add @NoArgsConstructor and @AllArgsConstructor in addition to @Builder. Both constructor annotations are needed.

@Data, @Getter, @Setter - override specific getter(s) or setter(s)

When a class is annotated with @Data, or with @Getter and @Setter, getter and setter methods are generated. If for any reason a custom/overriding getter or setter is needed for any property, simply provide one the way you would like, following the Java bean style. Lombok won't generate the ones you have provided.

JaCoCo - code coverage

If you have a Java code coverage tool like JaCoCo configured for your project with a high coverage threshold set, you will be disappointed to see the coverage metrics suddenly dropping due to Lombok. This is because JaCoCo works at the bytecode level, considering all methods including constructors, getters, setters, hashCode() etc. that got generated by Lombok. This boilerplate code synthesized by Lombok at compilation time doesn't need code coverage. To tell JaCoCo not to consider Lombok-generated code in the bytecode, create a file named lombok.config at the root of your project with the following properties. Your coverage numbers will come back to normal.

# JaCoCo >= 0.8.0 and Lombok >= 1.16.14 feature.
# This property adds the annotation lombok.@Generated to relevant classes, methods and fields. JaCoCo code-coverage
# identifies, detects and ignores all Lombok-generated boilerplate code: getters, setters, hashCode, builder etc.
lombok.addLombokGeneratedAnnotation = true

# Set Lombok to throw just the JDK NullPointerException (the default anyway) in the wrapped code.
# Also, let JaCoCo honor and not complain about coverage for the if-null method wrapper generated in the byte-code.
lombok.nonNull.exceptionType = JDK

# Stop Lombok from searching for config files further up
config.stopBubbling = true

Summary

Lombok is a pretty neat Java library which not only takes noise away from code into bytecode, but also makes code more readable by showing the intention clearly with annotated code. The minimalist phrase "Less is more" becomes a reality with Lombok's addition to a Java project.

Source code is for Java developers, whereas bytecode is for the Java virtual machine. Noise is noise for humans, but not for machines. Java is evolving and changing fast, but it is still very noisy and verbose. Any little effort made to make code less noisy and more readable goes a long way in the life of any Java project, saving a lot of time for the developers who read the code later. After all, code is written once but read many times by many people over the life of a project.

"Lean and clean" is always beautiful, makes everyone smile and feel better ;)

Make friendship with Lombok, stay healthy, keep your eye-sight better, and your brain calmer!!

References