Most of the time when I want to talk to a database through JPA I use Spring and it is pretty simple to organise things like this:

    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
      <property name="persistenceXmlLocation" value="classpath:sandbox-persistence.xml" />
      <property name="persistenceUnitName" value="nz.co.senanque.madura.sandbox" />
      <property name="dataSource" ref="dataSource" />
      <property name="jpaVendorAdapter">
        <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
          <property name="showSql" value="true" />
          <property name="generateDdl" value="true" />
          <property name="databasePlatform" value="org.hibernate.dialect.H2Dialect" />
        </bean>
      </property>
      <property name="jpaProperties">
        <map>
          <entry key="hibernate.dialect" value="org.hibernate.dialect.H2Dialect" />
          <entry key="hibernate.format_sql" value="true" />
          <entry key="hibernate.connection.autocommit" value="false" />
        </map>
      </property>
    </bean>

Okay, make that fairly simple, and I have left out the data source and transaction manager beans. Part of the reason this gets complicated is that there are a lot of layers involved: the Java code that calls it, then Spring, then JPA, Hibernate, JDBC and finally the actual database.
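The payoff is that the calling code never sees most of those layers; it just injects an EntityManager and lets Spring manage the rest. A minimal sketch of what that looks like (the DAO class here is made up for illustration):

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    import org.springframework.stereotype.Repository;
    import org.springframework.transaction.annotation.Transactional;

    @Repository
    public class SandboxDao {

        // Spring wires this from the entityManagerFactory bean defined above
        @PersistenceContext
        private EntityManager entityManager;

        @Transactional
        public <T> T find(Class<T> entityClass, Object id) {
            return entityManager.find(entityClass, id);
        }
    }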

The place I’m working at just now tends towards a CDI approach rather than Spring, so the above isn’t the obvious thing to do. Instead I should just make use of the more ‘native’ JPA facilities. And I do that, but they usually assume a container, and I’m doing integration testing, running from JUnit, i.e. outside a container.

And it is quite simple, or it would be. I have a jar file built by another project that contains the entity classes and the persistence.xml file. I can use this code to get the entity manager:

    EntityManagerFactory factory =
        Persistence.createEntityManagerFactory("punit", properties);
    EntityManager entityManager = factory.createEntityManager();

Once I have an EntityManager I can operate on the database. Unlike in the Spring approach I don’t have to specify the location of the persistence.xml file; I can rely on JPA finding it for me as long as I have hibernate-entitymanager on my classpath. I can put the specific connection details in the properties when I create the factory. My properties file looks like this:

    javax.persistence.jdbc.url=[some url]
    javax.persistence.jdbc.user=whatever
    javax.persistence.jdbc.password=whatever
    hibernate.show_sql=false
    hibernate.archive.autodetection=class, hbm
    hibernate.connection.driver_class=oracle.jdbc.driver.OracleDriver
    hibernate.dialect=org.hibernate.dialect.Oracle10gDialect
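Putting that together, the whole standalone setup fits in a few lines. Here is a rough sketch of how I wire it up in a test; the properties file name (jpa.properties) and the class name are just placeholders for illustration:

    import java.io.InputStream;
    import java.util.Properties;

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class StandaloneJpaTestSetup {

        public static void main(String[] args) throws Exception {
            // Load the JDBC/Hibernate settings shown above from the classpath
            // (the file name is illustrative)
            Properties properties = new Properties();
            try (InputStream in = StandaloneJpaTestSetup.class
                    .getResourceAsStream("/jpa.properties")) {
                properties.load(in);
            }

            // "punit" must match the persistence-unit name in the jar's persistence.xml
            EntityManagerFactory factory =
                    Persistence.createEntityManagerFactory("punit", properties);
            EntityManager entityManager = factory.createEntityManager();
            try {
                entityManager.getTransaction().begin();
                // ... work with the entities from the dependent jar here ...
                entityManager.getTransaction().commit();
            } finally {
                entityManager.close();
                factory.close();
            }
        }
    }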

So that’s quite simple really.

Not so fast.

The persistence.xml file I’m working with specifies a jta-data-source, which is a JNDI data source. JNDI? Normally you don’t get JNDI outside a container, but for testing you can use SimpleJNDI and add a jndi.properties file to your classpath containing this:

    java.naming.factory.initial=org.osjava.sj.SimpleContextFactory
    org.osjava.sj.root=target/test-classes/config
    org.osjava.jndi.delimiter=/
    org.osjava.sj.jndi.shared=true

As long as you’ve included the SimpleJNDI jar in your classpath this will load, and it will look for data source definitions in the specified root directory. I’m using a subdirectory of target, and my Maven build copies the files there; my source file is src/test/resources/config/jdbc.properties and contains:

    java:/jdbc/xyz.type=javax.sql.DataSource
    java:/jdbc/xyz.driver=oracle.jdbc.driver.OracleDriver
    java:/jdbc/xyz.url=[some url]
    java:/jdbc/xyz.user=whatever
    java:/jdbc/xyz.password=whatever
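A quick way to check that SimpleJNDI is resolving the data source, before JPA gets involved at all, is a plain lookup using the same JNDI name the persistence.xml refers to. A minimal sketch:

    import java.sql.Connection;

    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class JndiSanityCheck {

        public static void main(String[] args) throws Exception {
            // InitialContext picks up jndi.properties from the classpath,
            // so the lookup goes through SimpleJNDI and the config directory
            InitialContext context = new InitialContext();
            DataSource dataSource = (DataSource) context.lookup("java:/jdbc/xyz");
            try (Connection connection = dataSource.getConnection()) {
                System.out.println("got a connection: " + !connection.isClosed());
            }
        }
    }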

I also needed the following in my original properties file:

    hibernate.transaction.jta.platform=org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform

The JNDI datasource in persistence.xml is, of course, java:/jdbc/xyz. I could probably have saved the duplication of definitions by using (I think)

    javax.persistence.jdbc.datasource=java:/jdbc/xyz

But I didn’t try that; here’s why.

That configuration did actually work. It opened the database and successfully ran a query. But after several database operations it stopped working and told me it couldn’t get a connection, and I realised it was opening a fresh connection for every operation and never reusing them after releasing them. I needed a pool, and while I’m sure there is a way to organise one, probably even a simple way, the SimpleJNDI docs weren’t clear enough for me and I’d already spent a lot of time getting this far.

So I cut my losses, went back to the persistence.xml file and added another persistence unit without the jta-data-source, specifying transaction-type="RESOURCE_LOCAL". That worked perfectly: no problems with pooling, and I didn’t need JNDI or the extra properties files, just the first one. Good.
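What that extra unit amounted to was roughly this (the unit name is my own illustration); with RESOURCE_LOCAL and no jta-data-source the connection details come straight from the properties passed to createEntityManagerFactory:

    <persistence-unit name="punit-test" transaction-type="RESOURCE_LOCAL">
      <!-- no jta-data-source: the JDBC url, user and password come from the
           properties passed to Persistence.createEntityManagerFactory() -->
    </persistence-unit>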

But when I built the application and tried to deploy it, I found it refused to deploy because some of the JPA references in it don’t specify which persistence unit; they assume there is only one. When there is more than one it naturally fails to deploy. Fair enough. But I was hoping to minimise changes to that other project, ideally no changes at all, so I didn’t want to hunt through the code to change the JPA references.

Note that there is no way to specify a default persistence unit, which would have helped.

The next solution was to add a separate persistence.xml file, under a different name, containing just the persistence unit I wanted. Easy! If you’re using EclipseLink you can specify it like this:

    eclipselink.persistencexml=META-INF/persistenceTEST.xml

Naturally this file won’t be noticed when the application deploys; deployment only cares about ‘persistence.xml’, so all would be well.

Except I am not using EclipseLink; I’m using Hibernate, which doesn’t have any equivalent of this. I am stuck with the standard name.

Okay, I can make my own persistence.xml and put it in my local META-INF directory and, since my own project files are scanned before the dependent jar files, that will work, right? Well, yes and no.

JPA will look for entities defined in the same jar as the persistence.xml, and those entities are not defined in my project; they are defined in the dependent jar. I actually tried this first. There is a way to make it work: I can define all the entity classes explicitly in my own persistence.xml, as sketched below. There are about (ugh) 500, and they change because people like adding new tables to this thing. So this is the worst solution, but it is also the best because it does actually work.
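For reference, the custom META-INF/persistence.xml in my test project ends up looking roughly like this; the unit name and entity class names here are placeholders, not the real ones:

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
      <persistence-unit name="punit" transaction-type="RESOURCE_LOCAL">
        <!-- the entities live in the dependent jar, so every one has to be listed by hand -->
        <class>com.example.model.Customer</class>
        <class>com.example.model.Invoice</class>
        <!-- ...and roughly 500 more... -->
      </persistence-unit>
    </persistence>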

To make life a little easier I temporarily restored the second persistence unit in the original persistence.xml file (the one that made it refuse to deploy). Then I ran a simple test case that included this code:

    for (EntityType<?> entityType : entityManager.getMetamodel().getEntities()) {
        System.out.println(entityType.getJavaType().getName());
    }

It’s a bit quick and dirty, but it gets me the current list of entities, which I can then add to my own custom persistence.xml file. After that I had to go back and remove the second persistence unit from the real persistence.xml file, and then I was done.
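If I were doing it again I’d print the lines ready to paste straight into that persistence.xml, which is only a small tweak to the loop above (not what I actually ran):

    for (EntityType<?> entityType : entityManager.getMetamodel().getEntities()) {
        System.out.println("<class>" + entityType.getJavaType().getName() + "</class>");
    }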

Maybe I could have got the pooling working in the SimpleJNDI solution; it felt likely, but I was running out of time on this one and needed something that worked. I might have switched from Hibernate to EclipseLink too, but that would have taken more time than I had.

One thing I will be doing is raising a ticket about those JPA references that don’t specify a persistence unit; fixing that would have helped. And I will make sure I always specify the unit myself in future code.
