Java 16 includes a new language feature: Records.
JEP 395 (https://openjdk.java.net/jeps/395) describes the goal as follows: “Enhance the Java programming language with records, which are classes that act as transparent carriers for immutable data. Records can be thought of as nominal tuples.”
Let’s try Java records with JPA and jOOQ.
JPA Constructor Expression
One way to use a projection in JPA queries is the constructor expression. As the name implies, the constructor of the target class is called with the fields from the projection.
select new com.demo.dto.EmployeeDTO(e.name, e.department.name) from Employee e
In this example, we have a DTO called EmployeeDTO whose constructor takes two Strings as parameters.
Before Java 16, we would create a class like this:
import java.util.Objects;

public final class EmployeeDTO {
private final String employeeName;
private final String departmentName;
public EmployeeDTO(String employeeName, String departmentName) {
this.employeeName = employeeName;
this.departmentName = departmentName;
}
public String employeeName() {
return employeeName;
}
public String departmentName() {
return departmentName;
}
@Override
public boolean equals(Object obj) {
if (obj == this) return true;
if (obj == null || obj.getClass() != this.getClass()) return false;
var that = (EmployeeDTO) obj;
return Objects.equals(this.employeeName, that.employeeName) &&
Objects.equals(this.departmentName, that.departmentName);
}
@Override
public int hashCode() {
return Objects.hash(employeeName, departmentName);
}
@Override
public String toString() {
return "EmployeeDTO[" +
"employeeName=" + employeeName + ", " +
"departmentName=" + departmentName + ']';
}
}
Thanks to Java 16 Records this is now much simpler:
public record EmployeeDTO(String employeeName, String departmentName) {
}
This record contains the required constructor as well as the accessor methods for employeeName and departmentName, so it's a perfect fit for JPA's constructor expression!
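As a minimal sketch, assuming an EntityManager named em and the entities from the example above, the query could be executed like this:

List<EmployeeDTO> employees = em.createQuery(
        "select new com.demo.dto.EmployeeDTO(e.name, e.department.name) from Employee e",
        EmployeeDTO.class)
    .getResultList();

// The canonical constructor of the record is invoked for every result row
employees.forEach(dto ->
    System.out.println(dto.employeeName() + " works in " + dto.departmentName()));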
jOOQ SQL Projection
Besides JPA, there is another great solution for accessing relational databases: jOOQ.
With jOOQ, we can write type-safe SQL in Java, and very often we want DTOs as a result. Here, too, Java records shine:
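A sketch, assuming generated jOOQ tables EMPLOYEE and DEPARTMENT (with a DEPARTMENT_ID foreign key column on EMPLOYEE) and a DSLContext named dsl, fetching the projection straight into the record:

// EMPLOYEE, DEPARTMENT and DEPARTMENT_ID are assumed names from the generated meta-model
List<EmployeeDTO> employees = dsl
    .select(EMPLOYEE.NAME, DEPARTMENT.NAME)
    .from(EMPLOYEE)
    .join(DEPARTMENT).on(EMPLOYEE.DEPARTMENT_ID.eq(DEPARTMENT.ID))
    .fetchInto(EmployeeDTO.class);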
Because using SQL through the low-level JDBC (Java Database Connectivity) API is painful and error-prone, the first choice is usually an ORM like JPA/Hibernate.
ORM
Let’s have a look at the definition of ORM on Wikipedia:
Object-relational mapping (ORM, O/RM, and O/R mapping tool) in computer science is a programming technique for converting data between incompatible type systems using object-oriented programming languages. This creates, in effect, a “virtual object database” that can be used from within the programming language.
The idea behind an ORM framework is to hide the database access from the user. Another goal is to add capabilities to the database access layer that do not exist in a relational database, such as inheritance. But this abstraction is leaky and leads to the so-called impedance mismatch:
The Impedance Mismatch
The object-relational impedance mismatch is a set of conceptual and technical difficulties that are often encountered when a relational database management system (RDBMS) is being served by an application program (or multiple application programs) written in an object-oriented programming language or style, particularly because objects or class definitions must be mapped to database tables defined by a relational schema. Source: Wikipedia
The problem with ORM is that, by default, the user does not have full control over the database access, and this can cause several problems. The most common one is poor performance, caused by the fact that developers usually don't dive deep into the details of the framework. This naive approach often leads to far too many SQL statements being executed by the ORM framework.
The fact that it's possible to define parent-child relationships in an ORM raises the question of when and how the children are loaded. By default, this is done lazily. So let's assume that we have customer orders with many items. If we fetch the customer orders, the generated SQL statement will look like this:
select * from customer_order;
The above query returns all customer orders (e.g. with IDs 1, 2, 3, 4). If the program then accesses the children, the ORM framework will produce one query per customer order:
select * from item where customer_order_id = 1;
select * from item where customer_order_id = 2;
select * from item where customer_order_id = 3;
select * from item where customer_order_id = 4;
This is the so-called n+1 select problem, and it can show up in any application that uses an ORM. The ORM usually provides techniques to overcome it, but as mentioned above, developers are often not ORM experts.
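One such technique, sketched here assuming an EntityManager em and a mapped items collection on a CustomerOrder entity, is a JPQL fetch join that loads the orders and their items in a single statement:

// Hypothetical entity names; 'items' is an assumed @OneToMany collection on CustomerOrder
List<CustomerOrder> orders = em.createQuery(
        "select distinct o from CustomerOrder o join fetch o.items",
        CustomerOrder.class)
    .getResultList();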
What are the alternatives?
As you can see, ORM may not be the silver bullet you're looking for. But as mentioned initially, using SQL via plain JDBC can be very painful. Luckily, there are two popular alternatives:
1. MyBatis (formerly iBatis)
2. jOOQ
MyBatis
MyBatis was first released in 2001 under the name iBatis, and the idea behind the framework is to map SQL statements to Java objects. In contrast to an ORM, where the SQL statements are generated by the framework, you have full control over the SQL because you write it yourself.
The following example shows how you write the SQL statement in an annotation (XML is also supported) and that the method returns a Java object rather than a JDBC ResultSet:
import org.apache.ibatis.annotations.Select;

public interface DepartmentMapper {

    // The SQL is written by hand; MyBatis maps the result columns to a Department object
    @Select("SELECT id, name FROM department WHERE name = #{name}")
    Department findByName(String name);
}
The downside of MyBatis is that there is a lot of mapping work to do. To reduce this effort, there is a generator that can help with the task. But the biggest disadvantage of MyBatis is the lack of type safety: SQL statements and mappings are written as plain strings, which may cause problems at runtime because neither the mapping nor the SQL is checked at compile time.
jOOQ
jOOQ is a framework that embraces SQL and makes SQL the primary language to speak to the database from Java in a type-safe and fluent way. jOOQ provides a domain-specific language (DSL). All the artifacts you use with this DSL are generated from the database meta-model.
The difference from MyBatis is that you don't write SQL as plain text; therefore, the compiler can check your SQL statements and you get full code completion in your IDE.
DepartmentDTO department = dsl
.select(DEPARTMENT.ID, DEPARTMENT.NAME)
.from(DEPARTMENT)
.where(DEPARTMENT.NAME.eq("IT"))
.fetchOneInto(DepartmentDTO.class);
As you can see in the above example, you really write SQL! The identifiers in capital letters are constants generated from the database meta-model, and they provide the type safety of the DSL.
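Here dsl is a jOOQ DSLContext, and DepartmentDTO just needs a constructor matching the projection; on Java 16+ it could again simply be a record (a sketch, assuming the id column maps to an Integer and the name to a String):

public record DepartmentDTO(Integer id, String name) {
}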
Should I still use ORM?
As usual, the answer is “it depends”. But because of the impedance mismatch and the fact that you have to be an expert in both ORM and SQL, you should really think twice about whether it's worth investing in this technology. With jOOQ you get a great alternative plus full control over the database access!
What’s Next?
In the next blog post, I will introduce jOOQ as the best way to use SQL in Java applications. Stay tuned!
Often you don't want to map all relationships in an entity model (a detailed blog post about this topic will follow).
Let’s have a look at this class diagram:
There is no mapping between PurchaseOrder and PurchaseOrderItem because there is no use case for this relationship: either we want to read all orders, or we want to display all items of one order, but we never want all orders together with all their items.
In the database, of course, there is still a foreign key column on the PurchaseOrderItem table that stores the PurchaseOrder ID. So we simply map purchaseOrderId in the PurchaseOrderItem entity:
@Entity
public class PurchaseOrderItem {
@Id
@GeneratedValue
private Integer id;
private Integer purchaseOrderId;
@ManyToOne
private Product product;
// Getters/Setters and more ...
}
Now, if we need to join the two tables anyway, for example in an aggregation to get the sum of the item prices, we can do that thanks to the JOIN ON clause added in JPA 2.1:
SELECT NEW entity.PurchaseOrderInfo(p.id, sum(i.product.price))
FROM PurchaseOrder p
JOIN PurchaseOrderItem i ON p.id = i.purchaseOrderId
GROUP BY p.id
This is supported in EclipseLink and Hibernate >= 5.1.
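The target class of the constructor expression could again be a record. A sketch, assuming the package entity used in the query above and a BigDecimal price on Product:

package entity;

import java.math.BigDecimal;

// The canonical constructor must match the arguments of the constructor expression above
public record PurchaseOrderInfo(Integer id, BigDecimal totalPrice) {
}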
Relational databases have tables and columns; object-oriented programming languages have classes and fields, but they also provide inheritance, a concept that does not exist in relational databases.
JPA provides solutions to map inheritance to database tables, but by default JPA doesn't care about inheritance and will not map any fields from a superclass to the database.
You always have to define how inheritance should be handled. JPA distinguishes two scenarios:
Scenario 1: Mapped Superclass
Often you want to define fields and/or behavior in a common superclass, for example a timestamp indicating when the entity was created:
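For example, a superclass like this (a plain sketch, not yet annotated), with Address simply extending it:

import java.sql.Timestamp;

// A plain Java superclass holding the creation timestamp - no JPA mapping yet
public class BaseEntity {
    private Timestamp createdAt;
}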
Unless you define how this should be handled, JPA will not consider the field createdAt when persisting the Address entity.
To tell JPA to include the fields of a superclass in the mapping, you have to use the MappedSuperclass annotation:
@MappedSuperclass
public class BaseEntity {
private Timestamp createdAt;
}
@Entity
public class Address extends BaseEntity {
}
In that case, JPA assumes that the database table Address contains a column with the name createdAt.
Scenario 2: Mapping Inheritance
Let’s assume that we have the following inheritance hierarchy:
The Employee class is abstract and has a field name. FulltimeEmployee doesn't have any fields of its own, but ParttimeEmployee has a field that defines the percentage of time the employee works for the company.
The mapping of class hierarchies is specified through metadata.
There are three basic strategies that are used when mapping a class or class hierarchy to a relational database:
a single table per class hierarchy
a joined subclass strategy, in which fields that are specific to a subclass are mapped to a separate table than the fields that are common to the parent class, and a join is performed to instantiate the subclass.
a table per concrete entity class
An implementation is required to support the single table per class hierarchy inheritance mapping strategy and the joined subclass strategy.
Support for the table per concrete class inheritance mapping strategy is optional in this release. Applications that use this mapping strategy will not be portable.
Support for the combination of inheritance strategies within a single entity inheritance hierarchy is not required by this specification.
Metadata
To activate inheritance we have to add the Inheritance annotation to the base class Employee:
@Inheritance
@Entity
public abstract class Employee {
@Id
private Integer id;
private String name;
}
@Entity
public class FulltimeEmployee extends Employee {
}
@Entity
public class ParttimeEmployee extends Employee {
private int percentage;
}
Database Representation
OK… but what do the database tables look like?
The Inheritance annotation has an attribute strategy of type InheritanceType. There are three inheritance types as described in the spec: SINGLE_TABLE, JOINED, TABLE_PER_CLASS.
SINGLE_TABLE
The default strategy is SINGLE_TABLE and would look like this:
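Since the exact DDL is provider-specific, here is a rough sketch of what the single table could look like (column names, types and lengths are assumptions):

CREATE TABLE Employee (
    id         INTEGER NOT NULL PRIMARY KEY,
    dtype      VARCHAR(31) NOT NULL,   -- discriminator column
    name       VARCHAR(255),
    percentage INTEGER                 -- nullable, only filled for ParttimeEmployee rows
);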
As you can see, all attributes of the whole inheritance hierarchy are flattened into a single table. Additionally, a column dtype was added. (The column name is not defined in the specification and is therefore implementation-dependent.) This discriminator column indicates which class must be instantiated when loading the data from the database.
The advantage of this strategy is that it’s easy to use even if the user directly uses the database and doesn’t know about inheritance in the entity model.
But there is also a disadvantage. Have a look at the column percentage: it has to be nullable. But wait, a ParttimeEmployee must always have a percentage! Making the column NOT NULL is not possible, because the other subtype of Employee, FulltimeEmployee, doesn't have this field.
So if you have mandatory fields in subclasses SINGLE_TABLE is probably not the best strategy.
JOINED
The JOINED strategy will create a table for every class in the hierarchy:
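Again a rough, provider-specific sketch of the three resulting tables:

CREATE TABLE Employee (
    id   INTEGER NOT NULL PRIMARY KEY,
    name VARCHAR(255)
);

CREATE TABLE FulltimeEmployee (
    id INTEGER NOT NULL PRIMARY KEY REFERENCES Employee(id)
);

CREATE TABLE ParttimeEmployee (
    id         INTEGER NOT NULL PRIMARY KEY REFERENCES Employee(id),
    percentage INTEGER NOT NULL
);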
This looks like our class diagram, doesn't it? The id is inherited, which means that for every instance of either FulltimeEmployee or ParttimeEmployee we will have a record in the Employee table, and the primary key of Employee is also the primary key and the foreign key in the FulltimeEmployee or ParttimeEmployee table.
And in contrast to the SINGLE_TABLE strategy, the percentage column in the ParttimeEmployee table is now not nullable!
Perfect, but is there a disadvantage? Yes: the JPA implementation always has to join the tables, and this can be slower than the SINGLE_TABLE strategy. But this depends on the database and the indexes defined on the tables.
Querying the database with plain SQL can also be a bit more complicated because you have to do the same joins that the JPA implementation would do to get all the data.
TABLE_PER_CLASS
The last strategy is called TABLE_PER_CLASS because a table is only created for entity classes that are not abstract. In our example, there is no Employee table because the class Employee is abstract.
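A rough sketch of the two remaining tables (the inherited columns are repeated in every concrete table):

CREATE TABLE FulltimeEmployee (
    id   INTEGER NOT NULL PRIMARY KEY,
    name VARCHAR(255)
);

CREATE TABLE ParttimeEmployee (
    id         INTEGER NOT NULL PRIMARY KEY,
    name       VARCHAR(255),
    percentage INTEGER NOT NULL
);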
At first glance, this strategy looks pretty good, as it also allows the percentage column to be not nullable in the database table. But the disadvantage is that when you query on the superclass level Employee, the JPA implementation has to execute two queries or a union over FulltimeEmployee and ParttimeEmployee, which can lead to performance issues.
Conclusion
If you are using inheritance with JPA, you can choose between a mapped superclass, which puts all fields of the superclass into the database table of each entity, and "real" inheritance with one of the three inheritance strategies.
In my opinion, the TABLE_PER_CLASS strategy is not useful and, given the disadvantage of multiple queries or a union, probably the slowest option.
Choose SINGLE_TABLE when your subclasses have only a few fields, none of which are mandatory. Also choose this option when you use the database not only with JPA but also with plain SQL, because it has the simplest model for querying.
With JPA it’s possible to map Java enums to columns in a database table using the Enumerated annotation.
@Enumerated
private Color color;
An enum can be mapped as an integer or a string, but mapping enums that contain state is not supported.
EnumType.ORDINAL
The default mapping uses an integer that represents the ordinal value of the enum constant. The ordinal value is assigned at compile time based on the position of the constant in the enum declaration.
Let’s have a look at an example enum Color
public enum Color {
RED, GREEN, BLUE
}
An enum is a Java compile-time construct, which means the enum is translated into a regular class.
This is how Color looks after decompiling:
public final class Color extends Enum {
public static Color[] values() {
return (Color[])$VALUES.clone();
}
public static Color valueOf(String name) {
return (Color) Enum.valueOf(Color.class, name);
}
private Color(String s, int i) {
super(s, i);
}
public static final Color RED;
public static final Color GREEN;
public static final Color BLUE;
private static final Color $VALUES[];
static {
RED = new Color("RED", 0);
GREEN = new Color("GREEN", 1);
BLUE = new Color("BLUE", 2);
$VALUES = (new Color[] {
RED, GREEN, BLUE
});
}
}
As you can see, RED has the ordinal value 0, GREEN 1, and BLUE 2.
But imagine what will happen if we change the enum Color like this:
public enum Color {
YELLOW, RED, GREEN, BLUE
}
Now the ordinal values have all changed! This means you have to migrate your database, and if you don't, you will end up with wrong values!
To conclude, using the ordinal value is a dangerous idea, and I don't understand why the JPA expert group decided to make it the default behavior.
So how to solve this issue?
EnumType.STRING
We can define the mapping to use the string representation of the enum instead of the ordinal value:
@Enumerated(EnumType.STRING)
private Color color;
This way, JPA stores RED, GREEN, and BLUE in the column of the database table, which resolves the problem of adding new enum values or changing the order of existing ones.
But if we change the name of an enum value itself, we still have to do a database migration.
The problem with enums is that there is no representation of the allowed values in the database. So your application is responsible for ensuring that only valid values are inserted into the database table.
But what happens when someone adds an invalid value directly to the database table? For example, I added a record with the color BLACK, and now Hibernate complains:
java.lang.IllegalArgumentException: Unknown name value [BLACK] for enum class [Color]
To avoid this problem, we could add a check constraint to the database table:
CREATE TABLE Cars (
color VARCHAR(20)
CHECK (color IN ('RED', 'GREEN', 'BLUE'))
);
This would solve the problem, but do you see what happened? We have code duplication!
RED, GREEN and BLUE are now defined in the enum as well as in the database, and we have to make sure to always change both artifacts: the enum and the table.
Are Enums with JPA an Antipattern?
Bill Karwin writes about the same problem of using enum-like types in database tables in his highly recommended book SQL Antipatterns: Avoiding the Pitfalls of Database Programming, in Chapter 11, "31 Flavors".
I fully agree with his final statement:
Use metadata (in our case enums) when validating against a fixed set of values (that never change).
Use data when validating against a fluid set of values.
Think twice when you use an enum with JPA.