Transactions in general
As we have seen, the standard pattern for executing a use case is to get a Session from the SessionFactory, get a Transaction from the Session, interact with the database via the Session, commit the Transaction and close the Session. Details of the exception handling have been given above. This is fine for normal, fully serialised database transactions, but there are two situations in which it is not fine:
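As a concrete reminder, the pattern can be sketched as follows. HibernateUtil is an assumed helper class holding the application's single SessionFactory (a common convention, not part of Hibernate itself):

```java
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.hibernate.Transaction;

// Sketch of the standard pattern described above. HibernateUtil is an
// assumed helper holding the application's SessionFactory.
public class UseCase {
    public void execute() {
        Session session = HibernateUtil.getSessionFactory().openSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            // ... interact with the database via the session ...
            tx.commit();
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();  // exception handling as described above
            throw e;
        } finally {
            session.close();
        }
    }
}
```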

In general there are four standard transaction isolation levels:

  • Read Uncommitted
  • Read Committed
  • Repeatable Read
  • Serializable

They indicate different locking strategies used within transactions and affect just how isolated one transaction really is from other transactions running simultaneously. Thus Serializable means that two transactions run as if one had completely finished (committed or rolled back) before the other started. In practice, providing this level of isolation requires considerable resources and causes problems with the scalability of applications. For this reason, most applications use a weaker form of isolation and use other strategies to overcome the problems that can then arise. (In my view this is the right approach and will serve your application well.)
The most common isolation level, and the default obtained with a PostgreSQL JDBC connection, is Read Committed. This ensures that one transaction cannot see any value written by another transaction that has not yet committed. Therefore we do not have to worry about the other transaction rolling back and thereby forcing us to roll back this transaction.

Situation 1. With this level, the isolation problems that can occur are:

  • Lost Updates: Transaction (tx) 1 reads a row, tx 2 reads the same row, tx 1 writes the row and commits, tx 2 writes the row and commits; the value written by tx 1 is lost. This can be dealt with using versioning (see below).
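If a particular connection needs a different level, it can be requested through the standard JDBC API. A sketch (the URL and credentials are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: inspecting and changing the isolation level on a JDBC
// connection. The URL and credentials are placeholders.
public class IsolationCheck {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        // PostgreSQL's default is Read Committed:
        System.out.println(conn.getTransactionIsolation()
                == Connection.TRANSACTION_READ_COMMITTED);
        // Request a stricter level for this connection if needed:
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        conn.close();
    }
}
```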
  • Unrepeatable Reads: tx 1 reads a row, tx 2 writes the row and commits, tx 1 reads the same row again and gets a different result from last time. Hibernate's session cache, together with the use of versioning, handles this situation.
  • Phantom Reads: tx 1 executes a select query, tx 2 inserts or deletes rows in the database and commits, tx 1 executes the same query again and finds a different set of rows from last time. There is very little you can do about this except to be aware of the issue when you design your transactions.
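The lost-update scenario, and the way a version check catches it, can be sketched in plain Java. This is only a simulation (VersionedRow and its methods are illustrative names); Hibernate performs the equivalent check for you, as described below:

```java
// A plain-Java simulation of a lost update caught by a version check.
// This is NOT Hibernate API; it only illustrates the mechanism.
class VersionedRow {
    long version = 0;       // plays the role of the "version" column
    String value = "initial";

    // Attempt an update carrying the version read earlier; fail if the
    // row has been updated by someone else in the meantime.
    synchronized boolean update(long expectedVersion, String newValue) {
        if (version != expectedVersion) {
            return false;   // stale: Hibernate would throw an exception here
        }
        value = newValue;
        version++;          // bump the version on every successful write
        return true;
    }
}

public class LostUpdateDemo {
    public static void main(String[] args) {
        VersionedRow row = new VersionedRow();
        long tx1Version = row.version;  // tx 1 reads the row
        long tx2Version = row.version;  // tx 2 reads the same row

        boolean tx1Ok = row.update(tx1Version, "written by tx 1"); // succeeds
        boolean tx2Ok = row.update(tx2Version, "written by tx 2"); // stale, fails

        System.out.println(tx1Ok + " " + tx2Ok + " " + row.value);
        // → true false written by tx 1 : tx 2's overwrite is detected, not lost silently
    }
}
```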

Situation 2. When long transactions are required. Here the problem is usually that some user interaction, which could take a considerable period of time, is required in a use case with database accesses taking place both before and after the user interaction. The issue is that we should not keep a database transaction open for a long period of time (i.e., for more than a fraction of a second), whereas a user interaction could take from minutes to hours. The solution is to break the long transaction up into two (or more) database transactions, and to use detached objects from the first transaction to carry the necessary information to the presentation layer. These objects get modified, outside any transaction, as part of the user interaction and then reattached to the second transaction to cause the necessary updates.
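The two-database-transaction approach can be sketched as follows using the Hibernate API. Item, itemId, newPrice and HibernateUtil are illustrative names, not taken from the text:

```java
import org.hibernate.Session;
import org.hibernate.Transaction;

// Sketch of a long transaction split into two database transactions,
// with a detached object carrying state across the user interaction.
public class LongTransactionSketch {
    public void run(Long itemId, double newPrice) {
        // First database transaction: load the object.
        Session s1 = HibernateUtil.getSessionFactory().openSession();
        Transaction tx1 = s1.beginTransaction();
        Item item = (Item) s1.get(Item.class, itemId);
        tx1.commit();
        s1.close();                  // item is now detached

        item.setPrice(newPrice);     // modified outside any transaction,
                                     // during the (possibly long) user interaction

        // Second database transaction: re-attach and update.
        Session s2 = HibernateUtil.getSessionFactory().openSession();
        Transaction tx2 = s2.beginTransaction();
        s2.update(item);             // re-attachment; versioning applies here
        tx2.commit();
        s2.close();
    }
}
```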
If we left it at that, then all the isolation problems described above could occur, as well as the nastier one of a Dirty Read. Consider a long transaction and a normal transaction: the first contains two database transactions, tx1a and tx1b (perhaps the first books a flight and the second books a hotel). The other transaction, tx2, might order meals for the passengers on the flight (not very realistic, but you get the idea). Now tx1a could update a row and commit, and tx2 could read it. Then tx1b decides to roll back (perhaps there were no hotels available). This only rolls back tx1b itself, because tx1a is committed and cannot be rolled back; but since this is part of a long transaction, the programmer has written explicit code to undo the effects of tx1a when tx1b rolls back. (Such an undo operation is normally called a compensating transaction: it may not really undo the original transaction, it might only make up for it in some way, such as authorizing a reimbursement or a voucher if a promised booking cannot be honoured.) Now tx2 has read data from a long transaction that has since been undone, and has committed. To be correct, the effects of this second transaction should be undone as well, but there is no way of knowing this.
I believe the safe solution here is to allow database writes only in the last database transaction of a long transaction, and to use versioning there. Under certain application-specific circumstances more relaxed strategies can be taken, but the above rule of thumb is safe and simple and should be departed from only after very careful analysis.

The idea of versioning is very simple and particularly easy to handle with the support Hibernate provides:
Add a version instance variable, with corresponding accessor and mutator methods, to your objects. An int or long is recommended, although some people prefer a Timestamp or Calendar. The latter two have slightly worse performance and are not absolutely guaranteed to work correctly (two updates might be made so close together in time that they get the same Timestamp value, although some operating systems ensure that this cannot happen), but they have the advantage that you can easily see exactly when the update was made.
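Such a class might look like this (Item is an illustrative name; the version field should be left for Hibernate to manage):

```java
// Sketch of an entity class with a version field. The class name and
// the other properties are illustrative, not taken from the text.
public class Item {
    private Long id;
    private long version;   // managed by Hibernate; do not set it yourself
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public long getVersion() { return version; }
    public void setVersion(long version) { this.version = version; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```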
Declare the version instance variable in your mapping file for the class. This requires the following element, added to the mapping file immediately after your id element (assuming your instance variable is called “version” and you want it in a database column also called “version”):

  <version name="version" column="version"/>
Now whenever you make an object dirty in memory, Hibernate will update its version (in memory). Whenever the object gets flushed to disk (e.g., at the end of a transaction, or because you call session.update or session.saveOrUpdate to re-attach a detached object), Hibernate will throw a StaleObjectStateException if the version number of the object on disk is not the same as it was when the object was loaded. By catching that exception, the programmer can decide what to do about the conflict (e.g., report back to the user that the choice they have just made is, in fact, no longer available and ask them to make another).
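Handling the conflict might be sketched like this (item and HibernateUtil are illustrative names):

```java
import org.hibernate.Session;
import org.hibernate.StaleObjectStateException;
import org.hibernate.Transaction;

// Sketch: catching the version conflict when re-attaching a detached,
// modified object.
public class ConflictHandling {
    public void save(Item item) {
        Session session = HibernateUtil.getSessionFactory().openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.update(item);  // re-attach the detached, modified object
            tx.commit();           // flush compares versions with the database row
        } catch (StaleObjectStateException e) {
            tx.rollback();
            // Someone else updated the row since it was loaded: report back
            // to the user and ask them to make another choice.
        } finally {
            session.close();
        }
    }
}
```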