Ask Sawal

Discussion Forum

What is jpa in java example?

5 Answer(s) Available
Answer # 1 #

In this blog post, we’re going to look at how to create a simple JPA application in IntelliJ IDEA Ultimate. JPA allows you to store, access, and manage Java objects in a relational database.

If you want to create a Jakarta Persistence application with the new jakarta namespace, check out this version of the blog or watch this video.

First, we’ll create a new project in IntelliJ IDEA Ultimate by clicking on the New Project button in the Welcome screen. We’ll select Java Enterprise from the left menu, which allows us to take advantage of the enterprise framework support provided in IntelliJ IDEA Ultimate. In this tutorial, I’ll use the latest long-term support (LTS) Java version, which is Java 11. Then, I’ll select Library for my template. I won’t be using any application servers for my persistence application, so I will not configure the application server field. Then, I’ll click Next.

In the next window, we’ll select the libraries required by my application. I want to create a JPA application that uses Hibernate so under Implementations, I will select Hibernate. Then, I’ll click Next.

In the next window, I will set the project name to JPA-App and change the group to my company name, com.jetbrains. Then click Finish. IntelliJ IDEA will create the project and generate some files for us.

In our new project, let’s open our generated pom.xml file. You’ll notice that IntelliJ IDEA generated some dependencies needed for our application based on the frameworks selected when we created our project. In addition to these dependencies, our application will also need the dependencies for the database where we’ll be persisting our data. In this tutorial, I’ll use a light-weight database called HyperSQL. In my pom.xml file, I’ll add my HyperSQL dependency by pressing Alt+Insert for Windows/Linux or ⌘N for macOS. Then, I’ll choose Dependency. A window will popup that I’ll use to search for my HyperSQL dependency by entering hsqldb in my search window. Under org.hsqldb, I will select the latest version to include in my pom.xml file. Then, click Add. The following dependency will be added to the pom.xml file as a result:
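The generated dependency didn’t survive in this copy of the post, but it takes the standard Maven shape below. The version number shown is illustrative; you’d use whichever latest version you selected in the dependency search window:

```xml
<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.7.1</version>
</dependency>
```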

When the dependency is added, I will load my Maven changes by pressing Ctrl+Shift+O on Windows/Linux or ⇧⌘I on macOS. I can also click on the icon that appears in the top right corner of my pom.xml file. Now that our dependencies are all set, let’s start creating the files needed for our persistence application.

We’ll open up our Persistence tool window by going to View -> Tool Windows -> Persistence. The Persistence tool window allows us to create a variety of resources for our persistence applications. You’ll see that IntelliJ IDEA created a persistence.xml configuration file where we’ll configure our managed persistence classes as well as our database. In addition, a default persistence unit is created for us.

Let’s create an Entity which will represent an Employee. We can do so by right-clicking on our default persistence unit clicking New then clicking Entity.

For Create Class, we’ll enter in Employee. For Destination Package, we’ll create a new package called entity. Since the package currently doesn’t exist, it’ll be shown in red. Once we click OK, IntelliJ IDEA will create the new entity package along with our Employee class. Our Employee class will be created with a generated ID along with its setter and getter.

According to the JPA specification, an entity must have a no-arg constructor so we’ll generate it by bringing up the Generate window using Alt+Insert for Windows/Linux or ⌘N for macOS. We’ll choose Constructor from the list. Then click Select None so we can generate a constructor with no arguments. IntelliJ IDEA creates the Employee no-arg constructor.

Now let’s add a couple more variables to our Employee entity. I’ll add a String variable for the Employee’s first name called fName (which isn’t the best variable name but we’ll be changing that later in the tutorial). We’ll also add a String variable for the Employee’s last name called lName.

You’ll notice that the Employee Entity has some gutter icons.

The gutter icon on the Entity class declaration allows you to navigate to the Persistence Tool Window. There are also gutter icons for your entity’s persistent fields. IntelliJ IDEA will distinguish ID fields with a small key icon. You’ll notice that the ID field has two gutter icons, one for field access and one for property access.

Let’s go ahead and generate the setters and getters for my new fields. I’ll bring up the Generate menu (Alt+Insert for Windows/Linux or ⌘N for macOS) and select Getter and Setter. I’ll press Ctrl to select both variables and click OK. IntelliJ IDEA generates the getters and setters for both variables.

Here is what my Employee class looks like so far:
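The class listing was lost in this copy, but based on the steps above (a generated ID with accessors, a no-arg constructor, and the fName/lName fields) it would look roughly like this. The javax namespace and int ID type are assumptions; a project generated with the jakarta namespace would import jakarta.persistence instead:

```java
package entity;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Employee {

    @Id
    @GeneratedValue
    private int id;

    // Deliberately poor names; renamed later in the tutorial
    private String fName;
    private String lName;

    // JPA requires a public or protected no-arg constructor
    public Employee() {
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getFName() { return fName; }
    public void setFName(String fName) { this.fName = fName; }

    public String getLName() { return lName; }
    public void setLName(String lName) { this.lName = lName; }
}
```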

Now that our Employee entity is complete, let’s create our Main class where we’ll create an Employee object and persist it to a database. In the Project Window, we’ll select the java folder and bring up the New menu by pressing Alt+Insert for Windows/Linux or ⌘N for macOS. Choose Java Class and then type in our class name, Main.

In our new class, let’s add a main method. I’ll type in main and press Enter, letting IntelliJ IDEA complete the declaration for me. Now, I’ll create the Employee object that we’ll persist to our database. We’ll create an Employee object then set its first and last name.

Now, the first step to persisting my employee is to create an EntityManagerFactory (you’ll notice that if you type Emf, IntelliJ IDEA will bring up the EntityManagerFactory class that we can select). IntelliJ IDEA will also suggest a variable name that you can use. We’ll create the EntityManagerFactory by calling the Persistence.createEntityManagerFactory("default") method with default as our persistence unit name.

Next, we’ll create the EntityManager by calling the EntityManagerFactory.createEntityManager() method. Once we do, we can now begin a transaction by calling the EntityManager’s getTransaction().begin(). Then, we can persist our Employee object that we created earlier by calling the EntityManager’s persist method. Now that this is done, we can cleanup our resources. We’ll commit our transaction and close our EntityManager and EntityManagerFactory.

The final Main class should look similar to the following:
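The listing is missing here, but following the steps described, the Main class would be close to the sketch below. The first name Dalia is mentioned later in the tutorial; the last name is a placeholder:

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import entity.Employee;

public class Main {
    public static void main(String[] args) {
        Employee employee = new Employee();
        employee.setFName("Dalia");
        employee.setLName("Doe"); // placeholder last name

        // "default" must match the persistence-unit name in persistence.xml
        EntityManagerFactory entityManagerFactory =
                Persistence.createEntityManagerFactory("default");
        EntityManager entityManager = entityManagerFactory.createEntityManager();

        entityManager.getTransaction().begin();
        entityManager.persist(employee);
        entityManager.getTransaction().commit();

        // Clean up resources
        entityManager.close();
        entityManagerFactory.close();
    }
}
```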

At this point, we’re almost ready to persist our data. The main step missing is setting up the database where our data will be persisted. You’ll remember earlier in the tutorial we mentioned that we’ll be using HyperSQL as our database. So, let’s go ahead and setup our database.

Let’s navigate to our persistence.xml configuration file from the Persistence tool or Project window (under src/main/resources/META-INF/persistence.xml). In the persistence.xml file, you’ll notice that our Employee entity has already been configured as a managed persistence class in our default persistence unit. Let’s add the JPA properties required to configure our HyperSQL database.

You’ll see that as soon as you start typing <, IntelliJ IDEA brings up suggestions for all the elements that can go inside <persistence-unit>. I’ll choose <properties> and press Enter. Then I’ll start typing < again and select <property>, which will insert a <property> element with the name and value attributes.

For my first property, I want to specify the HyperSQL JDBC driver. I’ll set the first property name attribute to javax.persistence.jdbc.driver and value attribute to org.hsqldb.jdbcDriver.

Then, I’ll add another property element to configure the database URL. I’ll set the property name attribute to javax.persistence.jdbc.url. For the value, I want my program to create an embedded HyperSQL database for me when it runs. So I will specify my URL to create an embedded database in my project’s target directory and call it myDB. I’ll also set the shutdown property to true so that the database will close with the last connection. I can do this by specifying a value of jdbc:hsqldb:file:target/myDB;shutdown=true.

Next, I’ll add two property elements to configure the database user and password. For the user, I’ll set the property name attribute to javax.persistence.jdbc.user and value attribute to user. For the password, I’ll set the property name attribute to javax.persistence.jdbc.password and value attribute to password.

Finally, I’ll add another property that will result in my entity’s table being generated for me in the database. I’ll set the property name attribute to hibernate.hbm2ddl.auto and value attribute to update. This property results in the Employee table getting created for me in the database.

The final persistence.xml file should look like this:
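The file itself didn’t survive in this copy; reconstructed from the properties described above, it would look roughly like this (the schema header follows the standard javax-era JPA 2.2 namespace, which is an assumption about the generated project):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_2.xsd"
             version="2.2">
    <persistence-unit name="default">
        <class>entity.Employee</class>
        <properties>
            <property name="javax.persistence.jdbc.driver" value="org.hsqldb.jdbcDriver"/>
            <property name="javax.persistence.jdbc.url" value="jdbc:hsqldb:file:target/myDB;shutdown=true"/>
            <property name="javax.persistence.jdbc.user" value="user"/>
            <property name="javax.persistence.jdbc.password" value="password"/>
            <property name="hibernate.hbm2ddl.auto" value="update"/>
        </properties>
    </persistence-unit>
</persistence>
```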

Now that we’ve finished configuring our database, let’s go back to our Main class and run our application. We can run our application by pressing Ctrl+Shift+F10 for Windows/Linux or ^⇧R for macOS.

Once our application runs, a HyperSQL database will be created as well as an Employee table. Then our Employee object will be persisted to our database table. At this point, you might want to view the tables that were created for you in the database. To do that, let’s configure IntelliJ IDEA to connect to the HyperSQL database that was created by our application.

We’ll need to copy the database URL we specified in the persistence.xml file (jdbc:hsqldb:file:target/myDB;shutdown=true). Remember that specifying this URL in your persistence.xml file resulted in our application creating an embedded database called myDB. We will now connect to this database and see the data that was persisted into our database.

Open the Database tool window by going to View -> Tool Windows -> Database. Click on the + button and choose Data source from URL.

Then, paste in the database URL (jdbc:hsqldb:file:target/myDB;shutdown=true). IntelliJ IDEA will detect that it’s a HyperSQL URL. Click OK.

Let’s finish configuring the database. I’ll set the database configuration name to myDB. For my User and Password fields, I’ll enter in the user and password I set in my persistence.xml file (user, password). If you have a warning about missing HyperSQL drivers, click on the Download missing driver files.

Under the Options tab, I will enable the Auto-disconnect after setting and set it to disconnect after 3 seconds. This setting will disconnect the database in IntelliJ IDEA and release all locks allowing my application’s process to continually connect and write to the database. Then I will test my connection and make sure my configuration is valid then I’ll click OK.

In the Database Window, you will now see the myDB database. The database contains an EMPLOYEE table under the default PUBLIC schema. When we double-click on the EMPLOYEE table, we see that our Employee entity is persisted into an Employee table with an ID, FNAME and LNAME column. We also see the data from the Employee object we created in our Main class.

Now that we have our database configured, let’s connect this datasource to our persistence unit by right-clicking on the default persistence unit in the Persistence tool window and clicking Assign Data Sources… then selecting our myDB database from the drop-down menu. This step is required for the IntelliJ IDEA code completion that we’ll see in the next section.

I want to persist another Employee object, so I’ll go back to my Main class. Since my first Employee (Dalia) is already persisted, I can take advantage of the code I already wrote and simply replace the first and last name with other names:

I also want to make some adjustments to my Employee object so I’ll navigate to my Employee class. I want to rename my variables and give them better names. I’ll select the fName variable and press Shift+F6. I’ll choose Include Accessors and rename my field from fName to firstName. I’ll do the same for the lName variable and replace lName with lastName.

Now, let’s try to rerun our application so we can persist our second Employee.

You’ll notice that after the application runs, we get an error saying that the firstName object was not found.

This error happens because the first time the application ran, it created an Employee table using the fName variable name as its column name. After refactoring the variable to firstName, the application tries to persist the second Employee into the existing table using the new firstName variable which doesn’t match the database table.

We can either drop the EMPLOYEE table and create a new table with the new variable name or, if we have existing data in our table and we can’t drop the table, we can add an @Column annotation that maps the Entity’s variable to the database column. Since I don’t want to lose my existing data, I’ll add the @Column annotation to my Entity’s variables. On the firstName variable declaration, I’ll start typing in the @Column annotation then select the name attribute from the list of suggestions. Then, I will press Ctrl+Space for code completion. IntelliJ IDEA will bring up the current EMPLOYEE table column names. I will choose the FNAME column name.

Note: the database code completion is only available after you assign your persistence unit to your data source, which we did in the last section.

We’ll similarly add the @Column annotation to the lastName variable.

If your Employee entity is generated with property access (the @Id is on the getId() method instead of the id field), place your @Column annotations on your getFirstName() and getLastName() methods instead of your fields.
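Putting the steps above together, the field-access version of the renamed fields would look like this sketch (column names taken from the table the application created on its first run):

```java
import javax.persistence.Column;

// Inside the Employee entity class:

@Column(name = "FNAME")
private String firstName;

@Column(name = "LNAME")
private String lastName;
```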

Now, let’s try to re-run our application.

The application runs successfully. We can verify that the second employee was persisted to the database by going to the Database view and viewing our EMPLOYEE table.

In this tutorial we created a JPA application that persisted an Employee entity into a database. The application used for this tutorial is available on GitHub.

Abhiraj Maaney
TIP INSERTER
Answer # 2 #

By itself, JPA is not a tool or framework; rather, it defines a set of concepts that guide implementers. While JPA's object-relational mapping (ORM) model was originally based on Hibernate, it has since evolved. Likewise, while JPA was originally intended for use with relational databases, some JPA implementations have been extended for use with NoSQL datastores. A popular framework that supports JPA with NoSQL is EclipseLink, the reference implementation for JPA 3.

The core idea behind JPA, as opposed to JDBC, is that for the most part, JPA lets you avoid the need to “think relationally.” In JPA, you define your persistence rules in the realm of Java code and objects, whereas JDBC requires you to manually translate from code to relational tables and back again.

Popular JPA implementations like Hibernate and EclipseLink now support JPA 3. Migrating from JPA 2 to JPA 3 involves some namespace changes, but otherwise the changes are under-the-hood performance gains.

Because of their intertwined history, Hibernate and JPA are frequently conflated. However, like the Java Servlet specification, JPA has spawned many compatible tools and frameworks. Hibernate is just one of many JPA tools.

Developed by Gavin King and first released in early 2002, Hibernate is an ORM library for Java. King developed Hibernate as an alternative to entity beans for persistence. The framework was so popular, and so needed at the time, that many of its ideas were adopted and codified in the first JPA specification.

Today, Hibernate ORM is one of the most mature JPA implementations, and still a popular option for ORM in Java. The latest release as of this writing, Hibernate ORM 6, implements the Jakarta Persistence 3 specification (the successor to JPA 2.2). Additional Hibernate tools include Hibernate Search, Hibernate Validator, and Hibernate OGM, which supports domain-model persistence for NoSQL.

While they differ in execution, every JPA implementation provides some kind of ORM layer. In order to understand JPA and JPA-compatible tools, you need to have a good grasp on ORM.

Object-relational mapping is a task, one that developers have good reason to avoid doing manually. A framework like Hibernate ORM or EclipseLink codifies that task into a library or framework, an ORM layer. As part of the application architecture, the ORM layer is responsible for managing the conversion of software objects to interact with the tables and columns in a relational database. In Java, the ORM layer converts Java classes and objects so that they can be stored and managed in a relational database.

By default, the name of the object being persisted becomes the name of the table, and fields become columns. Once the table is set up, each table row corresponds to an object in the application. Object mapping is configurable, but defaults tend to work, and by sticking with defaults, you avoid having to maintain configuration metadata.

Figure 1 illustrates the role of JPA and the ORM layer in application development.

When you set up a new project to use JPA, you will need to configure the datastore and JPA provider. You'll configure a datastore connector to connect to your chosen database (SQL or NoSQL). You'll also include and configure the JPA provider, which is a framework such as Hibernate or EclipseLink. While you can configure JPA manually, many developers choose to use Spring's out-of-the-box support. We'll take a look at both manual and Spring-based JPA installation and setup shortly.

From a programming perspective, the ORM layer is an adapter layer: it adapts the language of object graphs to the language of SQL and relational tables. The ORM layer allows object-oriented developers to build software that persists data without ever leaving the object-oriented paradigm.

When you use JPA, you create a map from the datastore to your application's data model objects. Instead of defining how objects are saved and retrieved, you define the mapping between objects and your database, then invoke JPA to persist them. If you're using a relational database, much of the actual connection between your application code and the database will then be handled by JDBC.

As a specification, JPA provides metadata annotations, which you use to define the mapping between objects and the database. Each JPA implementation provides its own engine for JPA annotations. The JPA spec also provides the EntityManager, which is the key point of contact with the JPA system (wherein your business logic code tells the system what to do with the mapped objects).

To make all of this more concrete, consider Listing 1, which is a simple data class for modeling a musician.
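Listing 1 was lost in this copy of the article. Based on the fields named in the surrounding text (name, mainInstrument, performances), a reconstruction would look like the following; the Instrument and Performance types are assumed supporting classes:

```java
import java.util.ArrayList;
import java.util.List;

public class Musician {
    private Long id;
    private String name;
    private Instrument mainInstrument;            // relation to another class
    private List<Performance> performances = new ArrayList<>();

    public Musician(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    // remaining getters and setters omitted for brevity
}
```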

The Musician class in Listing 1 is used to hold data. It can contain primitive data such as the name field. It can also hold relations to other classes such as mainInstrument and performances.

Musician's reason for being is to contain data. This type of class is sometimes known as a DTO, or data transfer object. DTOs are a common feature of software development. While they hold many kinds of data, they do not contain any business logic. Persisting data objects is a ubiquitous challenge in software development.

One way to save an instance of the Musician class to a relational database would be to use the JDBC library. JDBC is a layer of abstraction that lets an application issue SQL commands without thinking about the underlying database implementation.

Listing 2 shows how you could persist the Musician class using JDBC.
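Listing 2 is missing here; a hedged reconstruction consistent with the description that follows (a georgeHarrison object whose fields supply a parameterized SQL insert) is sketched below. The JDBC URL and credentials are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class MusicianJdbcExample {
    public static void persist(Musician georgeHarrison) {
        // Placeholder connection settings
        String jdbcUrl = "jdbc:hsqldb:mem:musicdb";
        String insertSql = "INSERT INTO MUSICIAN (ID, NAME) VALUES (?, ?)";

        try (Connection conn = DriverManager.getConnection(jdbcUrl, "user", "password");
             PreparedStatement stmt = conn.prepareStatement(insertSql)) {
            // The object's fields supply the values of the SQL insert
            stmt.setLong(1, georgeHarrison.getId());
            stmt.setString(2, georgeHarrison.getName());
            stmt.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
```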

The code in Listing 2 is fairly self-documenting. The georgeHarrison object could come from anywhere (front-end submit, external service, etc.), and has its ID and name fields set. The fields on the object are then used to supply the values of an SQL insert statement. (The PreparedStatement class is part of JDBC, offering a way to safely apply values to an SQL query.)

While JDBC provides the control that comes with manual configuration, it is cumbersome compared to JPA. To modify the database, you first need to create an SQL query that maps from your Java object to the tables in a relational database. You then have to modify the SQL whenever an object signature changes. With JDBC, maintaining the SQL becomes a task in itself.

Now consider Listing 3, where we persist the Musician class using JPA.
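The listing itself didn’t survive; a sketch of the JPA version follows. Note that the standard EntityManager method for this is persist() (there is no save() on the JPA EntityManager; that name comes from Spring Data and older Hibernate APIs):

```java
// No hand-written SQL: JPA generates the INSERT from the mapping.
Musician georgeHarrison = new Musician(0L, "George Harrison");

entityManager.getTransaction().begin();
entityManager.persist(georgeHarrison);
entityManager.getTransaction().commit();
```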

Listing 3 replaces the manual SQL from Listing 2 with a single line, entityManager.persist(), which instructs JPA to persist the object. From then on, the framework handles the SQL conversion, so you never have to leave the object-oriented paradigm.

The magic in Listing 3 is the result of a configuration, which is created using JPA's annotations. Developers use annotations to inform JPA which objects should be persisted, and how they should be persisted.

Listing 4 shows the Musician class with a single JPA annotation.
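The listing is missing in this copy; it would simply add the annotation to the class from Listing 1:

```java
import javax.persistence.Entity;

@Entity
public class Musician {
    private Long id;
    private String name;
    // remaining fields, constructor, and accessors as in Listing 1
}
```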

Persistent objects are sometimes called entities. Attaching @Entity to a class like Musician informs JPA that this class and its objects should be persisted.

Like most modern frameworks, JPA embraces coding by convention (also known as convention over configuration), in which the framework provides a default configuration based on industry best practices. As one example, a class named Musician would be mapped by default to a database table called Musician.

The conventional configuration is a timesaver, and in many cases it works well enough. It is also possible to customize your JPA configuration. As an example, you could use JPA's @Table annotation to specify the table where the Musician class should be stored.

Listing 5 tells JPA to persist the entity (the Musician class) to the Musician table.
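A reconstruction of the lost listing, using the @Table annotation as described:

```java
import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table(name = "Musician")  // explicit table name; matches the default here
public class Musician {
    // fields and accessors as before
}
```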

In JPA, the primary key is the field used to uniquely identify each object in the database. The primary key is useful for referencing and relating objects to other entities. Whenever you store an object in a table, you will also specify the field to use as its primary key.

In Listing 6, we tell JPA what field to use as Musician's primary key.
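The listing didn’t survive; it would mark the id field with @Id:

```java
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Musician {
    @Id
    private Long id;  // primary key; value assumed to be assigned by the database
    // fields and accessors as before
}
```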

In this case, we've used JPA's @Id annotation to specify the id field as Musician's primary key. By default, this configuration assumes the primary key will be set by the database; for instance, when the field is set to auto-increment on the table.

JPA supports other strategies for generating an object's primary key. It also has annotations for changing individual field names. In general, JPA is flexible enough to adapt to any persistence mapping you might need.

Once you've mapped a class to a database table and established its primary key, you have everything you need to create, retrieve, delete, and update that class in the database. Calling entityManager.save() will create or update the specified class, depending on whether the primary-key field is null or applies to en existing entity. Calling entityManager.remove() will delete the specified class.

Simply persisting an object with a primitive field is only half the equation. JPA also lets you manage entities in relation to one another. Four kinds of entity relationships are possible in both tables and objects: one-to-one, one-to-many, many-to-one, and many-to-many.

Tsuyoshi Montaner
Taxi Dancer
Answer # 3 #

A few general themes permeate most of the recommended approaches outlined in this article:

With the general themes listed above in mind, we move on to some recommended practices for effective JPA-based applications.

In an ideal world, the default configuration settings would always be exactly what we wanted. Our use of “configuration by exception” would not require any exceptions to be configured. We can approach this ideal world by minimizing the frequency and severity of our deviations from the assumed configuration. Although there is nothing inherently wrong about providing specific exceptions to the default configuration settings, doing so requires more effort on our part to denote and maintain the metadata describing the exceptions to the default configuration.

For many organizations, it makes the most sense to use annotations in the code during development, because in-code configuration is significantly more convenient for developers. For some of these organizations, it may be preferable to use external XML files during deployment and production, especially if the deployment team is different from the development team.

JPA enables XML-based configuration data to be used as an alternative to annotations, but it is even more powerful to use the XML configuration approach to override the annotations. Using the override process enables developers to take advantage of annotations during source code development while allowing these in-code annotations to be overridden outside the code at production time.

As I discussed in significantly greater detail in my OTN article “Better JPA, Better JAXB, and Better Annotations Processing with Java SE 6,” Java SE 6 provides built-in annotation processing that can be used to build the mapping XML file from the in-code annotations. This approach is appropriate for organizations whose development staff benefits from in-code annotations but whose deployment staff benefits from external configuration.

For configuration that is likely to change often for various deployments of the software, external configuration makes the most sense. For configuration that is fairly static across multiple deployments, in-code annotations can be defined once and there is no need to change them often.

There are some configuration settings that must be expressed in the XML configuration files rather than via in-code annotations. One example of such configuration is the definition of default entity listeners that cover all entities within a persistence unit.

Another situation in which external configuration should be used instead of in-code annotations is for vendor-specific settings. Placing implementation-specific settings in external configuration files keeps the code portable and clean. Generally, JPA vendor-specific properties should be declared with name/value properties in the persistence.xml file rather than within source code.

SQL statements that are specific to a particular database can also be placed outside the source code, in the XML descriptor file. If database-specific SQL statements must be used, it is best to specify them as native named queries and annotate them in XML for the general persistence unit rather than in a particular entity’s Java source code file.

JPA 1.0 specification co-lead Mike Keith covered many of the trade-offs associated with an XML metadata strategy (XML strategy) versus an in-source metadata strategy (annotations strategy) in the OTN column “To Annotate or Not” (see “Additional Resources”).

I prefer to specify object-relational mapping by annotating entity fields directly, rather than annotating get/set methods (properties), for several reasons. No single reason overwhelmingly favors specifying persistence via fields rather than via properties, but the combined benefits of field persistence specification make it the more attractive approach.

Because persistence is all about storing, updating, and retrieving the data itself, it seems cleaner to denote the persistence directly on the data rather than indirectly via the get and set methods. There is also no need to remember to mark the getter but not the setter for persistence. It is also cleaner to mark a field as transient to indicate that it should not be persisted than to mark a get() method as transient. By using fields rather than properties, you don’t need to ensure that the get and set methods follow the JavaBeans naming conventions related to the underlying fields. I prefer the ability to look at a class’s data members and determine each member’s name, each member’s datatype, comments related to each data member, and each member’s persistence information all in one location.

The order of business logic and persistence in get/set methods is not guaranteed. Developers can leave business logic out of these methods, but if fields are annotated instead, it does not matter if business logic is added to the get or set methods later.

A developer may want methods that manipulate or retrieve more than one attribute at a time or that do not have “get” or “set” in their names. With field annotations, the developer has the freedom to write and name these methods as desired without the need to place the @Transient annotation or the “transient” keyword in front of methods not directly related to persistence.

I prefer using @EmbeddedId to designate composite keys, for three main reasons:

1. Use of @EmbeddedId is consistent with use of the @Embedded annotation on embedded Java classes that are not primary keys.

2. @EmbeddedId enables me to represent the composite key as a single key in my entity rather than making me annotate multiple data members in my entity with the @Id annotation.

3. The @EmbeddedId approach provides encapsulation of any @Column or other column mapping on the primary key columns in a single Java class. This is better than forcing the containing entity to handle object-relational mapping details for each column in the composite key.

In short, I prefer the @EmbeddedId approach for composite primary keys because of the grouping of primary-key-related details within the single @Embeddable class. This also makes it simple to access the primary key class as a single, cohesive unit rather than as individual pieces inside an entity.
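The approach described above can be sketched as follows. The class and column names here are hypothetical, invented for illustration; note how all primary-key mapping details live in the single @Embeddable class:

```java
import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Embeddable;
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;

// EmployeeId.java: the composite key as one cohesive unit
@Embeddable
public class EmployeeId implements Serializable {
    @Column(name = "COMPANY_ID")
    private long companyId;

    @Column(name = "BADGE_NUMBER")
    private long badgeNumber;

    // Composite-key classes should also override equals() and hashCode()
}

// Employee.java (separate file): the entity sees a single key field
@Entity
class Employee {
    @EmbeddedId
    private EmployeeId id;
}
```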

The ideal approach is to use a single-value key, because this generally requires the least extra effort on the part of the JPA developer.

The JPA specification warns against using approximate types, specifically floating types (float and double). In general, my preference with JPA is to use surrogate primary keys that are integers, whenever possible.

The following general guidelines for maintaining portable JPA code are based on warnings in the JPA specification regarding features that are optional or undefined.

The JPA specification allows implementations to have columns from one table reference non-primary-key columns of another table, but JPA implementations are not required to support this. Therefore, for applications portable across different JPA implementations, it is best to relate tables via references from one table to the primary key column(s) of the other table. I prefer this as a general database principle anyway.

Even if your JPA provider does implement the optional “table per concrete class” inheritance mapping strategy, it is best to avoid this if you need JPA provider portability.

It is also best to use a single inheritance mapping strategy within a given Java entity class hierarchy, because support for mixing multiple mapping inheritance strategies within a single class hierarchy is not required of JPA implementations.

Beyond what is discussed here, the JPA specification points out additional issues to keep in mind when developing portable JPA-based applications. In general, anything cited in the specification as optional, undefined, or ambiguous or specifically called out as nonportable across JPA implementations should be avoided unless absolutely necessary. In many of these cases, the exceptions to portability are not difficult to avoid. (A good resource regarding portable JPA applications is the 2007 JavaOne conference presentation “Java Persistence API: Portability Do’s and Don’ts.” Another good resource on portable JPA applications is the article “Portable Persistence Using the EJB 3.0 Java Persistence API.” Both of these are listed under “Additional Resources.”)

There are occasions when a JPA implementation might provide nonstandard features (“extensions”) that are highly useful.

Although it is generally desirable for applications to be as standards-based as possible to improve the ability to migrate them between various implementations of the standard, this does not mean that implementation-specific features should never be used. Instead, the costs and benefits of using an all-standards approach should be compared with the costs and benefits of employing the vendor-specific features.

A number of issues should be weighed when deciding whether to use features and extensions specific to a particular JPA implementation.

One example of such a trade-off decision is the use of Oracle TopLink Essentials’ logging mechanism in JPA-based applications, a provider-specific feature I am comfortable using.

Another example of a highly useful but provider-specific function is the use of the second-level cache, which is often vital to acceptable performance. Information on the reference implementation’s extensions is available in the “TopLink Essentials JPA Extensions Reference” (see “Additional Resources”).

Using features specific to a certain database is riskier than using features specific to a certain JPA provider, because the database specifics are not typically handled as elegantly as the JPA provider specifics are.

Maintain database independence with standard JPA query language statements. A red flag for database-specific SQL statements in your code is the use of the @NamedNativeQuery and @NamedNativeQueries annotations (or their corresponding XML descriptor elements). Similarly, EntityManager.createNativeQuery method calls also indicate dependence on a specific database.
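A portable alternative, sketched here with a hypothetical `Customer` entity, is a named query written in the JPA query language rather than in native SQL:

```java
import javax.persistence.*;

// JPQL named query: no database-specific SQL, and the entity-name prefix
// keeps the query's persistence-unit-wide name unique.
@Entity
@NamedQuery(name = "Customer.findByLastName",
            query = "SELECT c FROM Customer c WHERE c.lastName = :lastName")
public class Customer {
    @Id
    private Long id;
    private String lastName;
}
```

The query is then obtained with `em.createNamedQuery("Customer.findByLastName")` and runs unchanged on any database the JPA provider supports.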

Even when use of vendor-specific features is warranted, you can take steps to clearly identify these vendor specifics and isolate them from the standardized database access code.

I prefer to place database-specific queries in the XML deployment descriptors (one or more named-native-query elements) in an attempt to keep my actual code (including annotations) as free from vendor-specific code as possible. This enables me to isolate proprietary database code to the external descriptors rather than mingling it with my standards-oriented code. I also prefer to include my named-native-query XML elements as subelements of the root element of the object-relational mapping file(s), rather than as subelements of any particular entity.

Native named queries are scoped to the entire persistence unit, even when a particular native named query is defined in a particular entity’s Java class, so it is also considered a best practice to include some other unique identifier in each native named query’s name. If you place the native named queries together under the root element in the external XML mapping file, it is easier to spot naming collisions. A disadvantage of this approach is that it is less obvious which entity a query returns, but you can address this by including the returned entity’s name as part of the native named query’s name.
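A sketch of such a mapping file, with hypothetical query and class names, placing a named-native-query directly under the root entity-mappings element and embedding the returned entity’s name in the query name:

```xml
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                 version="1.0">
    <!-- Database-specific SQL isolated in the external descriptor;
         "Customer." prefix identifies the returned entity, "Native"
         suffix flags the nonportable SQL. -->
    <named-native-query name="Customer.findTopSpendersNative"
                        result-class="com.example.Customer">
        <query>SELECT * FROM CUSTOMERS ORDER BY TOTAL_SPENT DESC</query>
    </named-native-query>
</entity-mappings>
```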

An advantage of finally having a consistent API that works with both standard and enterprise Java is that we can write Java-based persistence layers that work with both standard and enterprise applications.

Effective layering can ensure that our JPA-based data access object (DAO) code is usable in both standard and enterprise Java contexts. To achieve this, entity classes and DAO layers should not perform transaction handling, because doing so would conflict with transactions provided by an enterprise Java application server. In standard Java environments, transaction handling must instead be pushed out of the entity classes to the client.

The figure below demonstrates how the same entity class can be used for both standard and enterprise Java environments. Structuring the entities this way requires only a small amount of effort but allows a high degree of reuse of the entity class.

Moving the access of the entity manager back to the layer specific to its host environment and moving the transaction handling back to the same layer specific to the host environment enable the JPA entity classes and the database access layer to be reusable in standard Java environments, in Java EE Web containers, and in Java EE EJB application servers.
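A minimal sketch of this layering, with hypothetical class and persistence-unit names: the DAO receives its EntityManager from the host environment and stays free of transaction handling.

```java
import javax.persistence.*;

// Reusable in both Java SE and Java EE: the DAO neither creates entity
// managers nor demarcates transactions.
public class CustomerDao {
    private final EntityManager em;

    public CustomerDao(EntityManager em) {
        this.em = em;
    }

    public Customer find(Long id) {
        return em.find(Customer.class, id);
    }
}

// A Java SE client owns the entity manager and transaction boundaries:
//
//   EntityManagerFactory emf =
//       Persistence.createEntityManagerFactory("myUnit");
//   EntityManager em = emf.createEntityManager();
//   em.getTransaction().begin();
//   Customer c = new CustomerDao(em).find(1L);
//   em.getTransaction().commit();
//
// In Java EE, the same DAO would instead be handed a container-managed
// EntityManager (e.g., injected via @PersistenceContext into a session
// bean), with the container demarcating transactions.
```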

Although the graphic above shows JPA transactions and entity manager handling within the Web tier, I typically prefer to keep this functionality in a stateless session bean that is accessible from the Web tier. The graphic shows JPA in the Web tier simply to demonstrate how transactions and entity manager handling can be separated from common JPA and DAO code.

It is important to note that different types of entity managers (application managed and container managed) should not be used at the same time or interchangeably. Also, a single entity manager should not be used across concurrent transactions.

Because the Java Persistence API is intended for database access, it generally should not be used in application tiers other than the business tier. Placing JPA code in the presentation layer typically renders it unusable by any other Java-based application outside of that layer and diminishes one of the key advantages of JPA.

Although comments and annotations can be useful for describing what code needs to do or is expected to do, it is even better when the code can speak for itself.

The “transient” keyword has been a built-in part of the Java programming language for many years and provides a mechanism, in standard Java code, to express that a field should not be persisted. I prefer to denote this exception from persistence with this keyword rather than with the @Transient annotation or XML entry. The one exception is when an entity class needs to have a field be serializable but not persistable. In such a situation, the @Transient annotation (or XML equivalent) is the only appropriate choice.
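The keyword’s effect is easy to see with standard Java serialization, which skips transient fields just as a JPA provider skips them for persistence (the class and field names below are invented for the demonstration):

```java
import java.io.*;

public class TransientDemo {
    // "transient" marks a field as not part of persistent state, both for
    // Java serialization and for JPA persistence.
    static class Account implements Serializable {
        String username = "alice";                // serialized
        transient String cachedToken = "secret";  // skipped
    }

    static Account roundTrip(Account a) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(a);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Account) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Account copy = roundTrip(new Account());
        System.out.println("username=" + copy.username);       // alice
        System.out.println("cachedToken=" + copy.cachedToken); // null
    }
}
```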

Naming conventions enable developers to more easily read, maintain, and enhance other developers’ code. Use of JPA-related naming conventions complements use of JPA-related defaults.

Naming conventions can supply a unique label to each named query, ensuring that all named queries within a given persistence context have unique names. The easiest way to do this is to prefix the name of each named query with the name of the entity class most closely associated with it. The JPA blueprints recommend similar use of naming conventions for named queries and other aspects of JPA-based code.

Because you will be using J2SE 5 (or a later version) with your JPA-based applications, you can apply J2SE 5’s features in your own code.

Generics enable JPA developers to specify one-to-many and many-to-many relationships between entities without the need to express the targetEntity attribute. The code then describes itself, rather than expressing the same information through annotation attributes or external XML configuration.
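Sketch (hypothetical `Customer`/`Order` entities): the collection’s type parameter tells the provider the target entity, so no targetEntity attribute is needed:

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity
public class Customer {
    @Id
    private Long id;

    // Target entity inferred from List<Order> -- self-describing.
    @OneToMany(mappedBy = "customer")
    private List<Order> orders = new ArrayList<Order>();

    // Pre-generics equivalent the text contrasts with:
    // @OneToMany(targetEntity = Order.class, mappedBy = "customer")
    // private List orders;
}
```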

An enum cannot itself be an entity, but it can serve as a data member of a persisted entity, providing type safety and restricting the field to a finite range of values.
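With JPA, such a field is typically annotated @Enumerated(EnumType.STRING) so that the enum constant’s name is stored; that storage amounts to a name()/valueOf() round trip, shown here with an invented OrderStatus enum:

```java
public class EnumDemo {
    // A type-safe, finite range of values for an entity data member.
    enum OrderStatus { NEW, SHIPPED, DELIVERED }

    public static void main(String[] args) {
        // EnumType.STRING storage stores the constant's name and restores
        // it type-safely on read.
        String stored = OrderStatus.SHIPPED.name();          // "SHIPPED"
        OrderStatus restored = OrderStatus.valueOf(stored);  // SHIPPED
        System.out.println(stored + " -> " + restored);
    }
}
```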

The focus of this article has been on developing highly maintainable JPA-based applications, but the JPA specification provides many useful “hooks” for tweaking performance without necessarily making the JPA code less portable. JPA providers are supposed to ignore any provider properties in the persistence.xml file with an unrecognized property name. Query hints are ignored by JPA providers to which they do not apply, but I still prefer to place them in external XML rather than in the Java code.
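For illustration, a persistence.xml fragment with TopLink Essentials tuning properties (the property names and values here are examples, not recommendations); a provider that does not recognize them simply ignores them:

```xml
<persistence-unit name="myUnit">
    <properties>
        <!-- Recognized (and applied) only by TopLink Essentials;
             other providers must ignore unrecognized property names. -->
        <property name="toplink.jdbc.read-connections.min" value="4"/>
        <property name="toplink.logging.level" value="FINE"/>
    </properties>
</persistence-unit>
```

Query hints can similarly be declared externally, as hint subelements of named-query elements in the object-relational mapping file, rather than in the Java code.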

These JPA provider “hooks” should be used when necessary to improve performance, but I prefer to keep the code free from them and declare as many of these as possible in external XML descriptor files rather than in the code.

Major integrated development environments (IDEs) now bundle several JPA-related tools. JDeveloper offers a wizard that can easily create JPA-based entity classes with appropriate annotations directly from specified database tables. With just a couple more clicks, the JDeveloper user can similarly create a stateless session bean to act as a facade for these newly created entity beans. NetBeans 6.0 offers similar JPA wizards, and the Eclipse Dali project supports JPA tools for the Eclipse IDE.

It is likely that many more highly useful JPA-related tools will continue to emerge.

A developer can use Spring to write JPA-based applications that can be easily run in standard Java environments, web containers, and full application server EJB containers with no changes necessary to the source code. This is accomplished via the Spring container’s ability to inject datasources configured outside of the code and to support transactions via aspect-oriented programming also configured outside of the code. The Spring framework enables JPA developers to isolate specifics of handling JPA in the various environments (Java SE standalone, Java EE web containers, and Java EE EJB containers) in external configuration files, leaving transparent JPA-based code.

Another feature Spring 2.0 offers JPA developers is the @Repository annotation, which is helpful in assessing database-specific issues underlying a JPA PersistenceException.

Finally, the Spring framework provides a convenient mechanism for referencing some of the JPA provider extensions that are common across the JPA providers.

The article “Using the Java Persistence API with Spring 2.0” (see “Additional Resources”) has more information on use of the Spring Framework with JPA.

The most effective use of JPA in a Java EE environment results from following effective EJB practices. Appropriate use of EJB 3.0 features such as dependency injection ensures that JPA use in Java EE environments is most effective. Another example of applying EJB best practices to JPA is use of a stateless session bean as a facade to JPA entities in Java EE applications.

The Java Persistence API is closely related to many other technologies, and much can be learned from the identified best practices for those technologies, including relational databases, SQL, object-relational mapping (ORM) tools, and JDBC. For example, the JDBC best practice of using named or positional parameters with a PreparedStatement is mirrored in JPA, which supports named and positional parameters in the JPA Query Language for security and potential performance benefits.
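For example (assuming a `Customer` entity and an open EntityManager `em`), the JPA analog of a PreparedStatement placeholder is a named or positional parameter:

```java
// Named parameter: the value is bound by the provider rather than
// concatenated into the query string, avoiding SQL injection and
// aiding statement reuse.
Query q = em.createQuery(
    "SELECT c FROM Customer c WHERE c.lastName = :lastName");
q.setParameter("lastName", requestedName);
List results = q.getResultList();

// Positional equivalent: "... WHERE c.lastName = ?1" with
// q.setParameter(1, requestedName).
```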

The Java Persistence API specification is fairly readable. It explains which JPA features are optional for implementations and are therefore not portable. The specification also discusses some likely trends or future directions that may be taken and warns against doing anything that might conflict with the anticipated change or addition.

Mike Keith has pointed out that “distilling and properly explaining the usage of features in correct contexts” is not part of the specification’s mandate. Therefore, a more efficient approach to gaining familiarity with the JPA standard is to access references that clearly explain correct and appropriate application of JPA. Several of these resources are included in the “Additional Resources” section.

The Java Persistence API blueprints documentation is another good source of information on practices for effective use of JPA.

Oracle has been a major player in developing the Java Persistence API, having co-led the JPA 1.0 expert group, and has been involved heavily with the JPA reference implementations. Besides providing the JPA 1.0 reference implementation (Oracle TopLink Essentials), Oracle is leading the EclipseLink project to provide the JPA 2.0 reference implementation. Oracle also provides Oracle TopLink 11g as a JPA implementation with more features than those that come with the reference implementation.

OTN also features a large set of JPA resources. Two of my favorite resources available there are “JPA Annotation Reference” and the OTN Technical Article “Taking JPA for a Test Drive”.

Additional and more complex JPA best practices and techniques will continue to be identified as more experience with JPA is gained. Online resources can be especially timely and relevant. Blogs vary in terms of quality, but the better JPA-oriented blogs provide invaluable tips and insight into JPA development. There are several blogs about JPA that I have consistently found to be useful in my JPA work. Some of these are listed in “Additional Resources.”

The Java Persistence API provides Java SE and Java EE developers with a single, standardized mechanism for database persistence. The practices outlined in this article help them develop JPA-based code that realizes the advantages provided by the JPA specification.

EclipseLink www.eclipse.org/eclipselink/

“Java Persistence API: Best Practices and Tips” (2007 JavaOne conference)

“Java Persistence API: Portability Do’s and Don’ts” (2007 JavaOne conference)

“Java Persistence 2.0” (2007 JavaOne conference)

Rathorde Yumi
TREATER HELPER
Answer # 4 #
  • 4.1. Project and Entity. Create a Java project "de.
  • 4.2. Persistence Unit. Create a directory "META-INF" in your "src" folder and create the file "persistence.
  • 4.3. Test your installation. Create the following Main class which will create a new entry every time it is called.
Dileep Navakanth
Computer Customer Support Specialist
Answer # 5 #

The Java Persistence API is a collection of classes and methods for persistently storing large amounts of data in a relational database.

Yoshihiro Crombie
Wedding Planner