Isolation Levels: Well Explained



READ COMMITTED

Specifies that shared locks are held while the data is being read to avoid dirty reads, but the data can be changed before the end of the transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default.

READ UNCOMMITTED

Implements dirty read, or isolation level 0 locking, which means that no shared locks are issued and no exclusive locks are honored. When this option is set, it is possible to read uncommitted or dirty data; values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT statements in a transaction. This is the least restrictive of the four isolation levels.

REPEATABLE READ

Locks are placed on all data that is used in a query, preventing other users from updating the data, but new phantom rows can be inserted into the data set by another user and are included in later reads in the current transaction. Because concurrency is lower than the default isolation level, use this option only when necessary.

SERIALIZABLE

Places a range lock on the data set, preventing other users from updating or inserting rows into the data set until the transaction is complete. This is the most restrictive of the four isolation levels. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.
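These four levels map directly to constants on java.sql.Connection in JDBC. A minimal sketch (standard JDBC API; no database connection is needed just to inspect the constants):

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // With a live connection you would call, e.g.:
        //   conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        // The four ANSI levels, from least to most restrictive:
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```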

Now, what are phantom reads?

A phantom read occurs when, within the same transaction, two identical queries are executed and the rows returned by the second differ from those returned by the first. Phantom reads can occur when range locks are not acquired by the SELECT.

So how do we handle this in Hibernate?

Transaction summary: usually, you use read-committed isolation for database transactions, together with optimistic concurrency control (version and timestamp checking) for long application transactions. Hibernate greatly simplifies the implementation of application transactions because it manages version numbers and timestamps for you.

In Hibernate we can use:

  • LockMode.PESSIMISTIC_FORCE_INCREMENT: the transaction immediately increments the entity version.
  • LockMode.OPTIMISTIC_FORCE_INCREMENT: optimistically assumes that the transaction will not experience contention for entities, and increments the version only at commit.
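The difference between the two force-increment modes can be sketched in plain Java. This is an illustration of the semantics only, not the Hibernate API; VersionedEntity is a hypothetical class:

```java
// Hypothetical sketch of FORCE_INCREMENT semantics, not Hibernate code.
class VersionedEntity {
    long version = 0;

    // PESSIMISTIC_FORCE_INCREMENT: bump the version immediately on lock,
    // so concurrent transactions fail their version check right away.
    void lockPessimisticForceIncrement() {
        version++;
    }

    // OPTIMISTIC_FORCE_INCREMENT: remember the version at read time...
    long readVersion() {
        return version;
    }

    // ...then verify it at commit, and only then bump it; contention is
    // assumed to be rare.
    boolean commitOptimisticForceIncrement(long versionSeenAtRead) {
        if (version != versionSeenAtRead) {
            return false; // somebody else changed the entity: conflict
        }
        version++;
        return true;
    }
}
```

With PESSIMISTIC_FORCE_INCREMENT the bump happens up front; with OPTIMISTIC_FORCE_INCREMENT the check-and-bump is deferred to commit.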

Don’t Use Pessimistic Locking: Try Optimistic Locking First



A collision is said to occur when two activities, which may or may not be full-fledged transactions, attempt to change the same entity within a system of record.  There are three fundamental ways (Celko 1999) that two activities can interfere with one another:

  1. Dirty read.  Activity 1 (A1) reads an entity from the system of record and then updates the system of record but does not commit the change (for example, the change hasn’t been finalized). Activity 2 (A2) reads the entity, unknowingly making a copy of the uncommitted version. A1 rolls back (aborts) the changes, restoring the entity to the original state that A1 found it in. A2 now has a version of the entity that was never committed and therefore is not considered to have actually existed.
  2. Non-repeatable read.  A1 reads an entity from the system of record, making a copy of it. A2 deletes the entity from the system of record. A1 now has a copy of an entity that does not officially exist.
  3. Phantom read.  A1 retrieves a collection of entities from the system of record, making copies of them, based on some sort of search criteria such as “all customers with first name Bill.” A2 then creates new entities which would have met the search criteria (for example, inserts “Bill Klassen” into the database), saving them to the system of record. If A1 reapplies the search criteria it gets a different result set.

Locking Strategies:

2.1 Pessimistic Locking

Pessimistic locking is an approach where an entity is locked in the database for the entire time that it is in application memory (often in the form of an object).  A lock either limits or prevents other users from working with the entity in the database.  A write lock indicates that the holder of the lock intends to update the entity and disallows anyone from reading, updating, or deleting the entity.  A read lock indicates that the holder of the lock does not want the entity to change while they hold the lock, allowing others to read the entity but not update or delete it.  The scope of a lock might be the entire database, a table, a collection of rows, or a single row; these types of locks are called database locks, table locks, page locks, and row locks respectively.

The advantages of pessimistic locking are that it is easy to implement and guarantees that your changes to the database are made consistently and safely.  The primary disadvantage is that this approach isn’t scalable.  When a system has many users, or when transactions involve a greater number of entities, or when transactions are long lived, the chance of having to wait for a lock to be released increases.  This limits the practical number of simultaneous users that your system can support.

2.2 Optimistic Locking

With multi-user systems it is quite common to be in a situation where collisions are infrequent.  Although the two of us are working with Customer objects, you’re working with the Wayne Miller object while I work with the John Berg object and therefore we won’t collide.  When this is the case optimistic locking becomes a viable concurrency control strategy.  The idea is that you accept the fact that collisions occur infrequently, and instead of trying to prevent them you simply choose to detect them and then resolve the collision when it does occur.

Figure 1 depicts the logic for updating an object when optimistic locking is used.  The application reads the object into memory.  To do this a read lock is obtained on the data, the data is read into memory, and the lock is released.  At this point in time the row(s) may be marked to facilitate detection of a collision (more on this later).  The application then manipulates the object until the point that it needs to be updated.  The application then obtains a write lock on the data and reads the original source back so as to determine if there’s been a collision.  The application determines that there has not been a collision so it updates the data and unlocks it.  Had a collision been detected, e.g. the data had been updated by another process after it had originally been read into memory, then the collision would need to be resolved.
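The read-mark-check-update flow described above can be sketched with a small in-memory versioned store (hypothetical names; a real implementation would do the check in SQL, e.g. UPDATE ... WHERE id = ? AND version = ?):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory store illustrating optimistic collision detection.
class VersionedStore {
    static class Row {
        String data;
        long version;
        Row(String data, long version) { this.data = data; this.version = version; }
    }

    private final Map<String, Row> rows = new HashMap<String, Row>();

    void insert(String key, String data) {
        rows.put(key, new Row(data, 0));
    }

    // Reading hands back a copy, as an application working on a detached
    // object would; the version marks the row for later collision detection.
    Row read(String key) {
        Row r = rows.get(key);
        return new Row(r.data, r.version);
    }

    // Returns false when another writer updated the row since it was read,
    // i.e. a collision was detected and must be resolved by the caller.
    boolean update(String key, String newData, long versionSeenAtRead) {
        Row current = rows.get(key);
        if (current.version != versionSeenAtRead) {
            return false;
        }
        current.data = newData;
        current.version++;
        return true;
    }
}
```

The second update attempt with the same stale version fails, which is exactly the collision the application then has to resolve.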

In Hibernate, the building blocks for this are versioning and the session cache.

Distributed Transactions – Two phase commit (2PC)



Basic algorithm

Commit request phase (or voting phase)

  1. The coordinator sends a query to commit message to all cohorts and waits until it has received a reply from all cohorts.
  2. The cohorts execute the transaction up to the point where they will be asked to commit. They each write an entry to their undo log and an entry to their redo log.
  3. Each cohort replies with an agreement message (cohort votes Yes to commit), if the cohort’s actions succeeded, or an abort message (cohort votes No, not to commit), if the cohort experiences a failure that will make it impossible to commit.
Commit phase (or completion phase)

Success

If the coordinator received an agreement message from all cohorts during the commit-request phase:

  1. The coordinator sends a commit message to all the cohorts.
  2. Each cohort completes the operation, and releases all the locks and resources held during the transaction.
  3. Each cohort sends an acknowledgment to the coordinator.
  4. The coordinator completes the transaction when acknowledgments have been received.

Failure

If any cohort votes No during the commit-request phase (coordinator will send an abort message):

  1. The coordinator sends a rollback message to all the cohorts.
  2. Each cohort undoes the transaction using the undo log, and releases the resources and locks held during the transaction.
  3. Each cohort sends an acknowledgement to the coordinator.
  4. The coordinator undoes the transaction when all acknowledgements have been received.
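The voting and completion phases above can be condensed into a small simulation (hypothetical classes; timeouts and crash recovery are deliberately omitted):

```java
import java.util.List;

// Minimal two-phase-commit simulation (no timeouts, no crash recovery).
class TwoPhaseCommit {
    interface Cohort {
        boolean prepare();   // vote Yes (true) or No (false)
        void commit();
        void rollback();
    }

    // The coordinator: phase 1 collects votes, phase 2 completes or aborts.
    static boolean run(List<Cohort> cohorts) {
        boolean allAgreed = true;
        for (Cohort c : cohorts) {          // commit-request (voting) phase
            if (!c.prepare()) {
                allAgreed = false;
                break;
            }
        }
        for (Cohort c : cohorts) {          // commit (completion) phase
            if (allAgreed) c.commit();
            else c.rollback();
        }
        return allAgreed;
    }
}
```

A single No vote in phase 1 causes every cohort to be rolled back in phase 2.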

Disadvantages

The greatest disadvantage of the two-phase commit protocol is that it is a blocking protocol. A node will block while it is waiting for a message. Other processes competing for resource locks held by the blocked processes will have to wait for the locks to be released. A single node will continue to wait even if all other sites have failed. If the coordinator fails permanently, some cohorts will never resolve their transactions, causing resources to be tied up forever. The algorithm can block indefinitely in the following way:

If a cohort has sent an agreement message to the coordinator, it will block until a commit or rollback is received. If the coordinator is permanently down, the cohort will block indefinitely, unless it can obtain the global commit/abort decision from some other cohort.

When the coordinator has sent “Query-to-commit” to the cohorts, it will block until all cohorts have sent their local decision. However, if a cohort is permanently down, the coordinator will not block indefinitely: Since the coordinator is the one to decide whether the decision is ‘commit’ or ‘abort’ permanent blocking can be avoided by introducing a timeout: If the coordinator has not received all awaited messages when the timeout is over it will decide for ‘abort’. This conservative behaviour of the protocol is another disadvantage: It is biased to the abort case rather than the complete case.

Dirty Read in Hibernate and Versioning



Transactions in general
As we have seen, the standard pattern for executing a use case is to get a Session from the SessionFactory, get a Transaction from the Session, interact with the database via the Session, commit the Transaction and close the Session. Details of the exception handling have been given above. This is fine for normal, fully serialised database transactions but there are two situations when it is not fine:

In general there are 4 standard transaction isolation levels:

  • Read Uncommitted
  • Read Committed
  • Repeatable Read
  • Serializable

They indicate different levels of locking strategy used within transactions and affect just how isolated one transaction really is from other transactions running simultaneously. Thus Serializable means that two transactions run as if one had completely finished (committed or rolled back) before the other had started. In practice, providing this level of isolation requires considerable resources and causes problems with scalability of applications. For this reason, most applications use a weaker form of isolation and use other strategies to overcome the consequent problems that can arise. (In my view, this is the right trade-off and will serve your application well.)
The most common isolation level used, and the default obtained with a PostgreSQL JDBC connection, is Read Committed. This ensures that one transaction cannot see any value written by another transaction that has not yet committed. Therefore we don’t have to worry about the other transaction rolling back, which would force us to roll back this transaction as well. With this level, the isolation problems that can occur are:
Situation 1. Lost updates: tx1 reads a row, tx2 reads the same row, tx1 writes the row and commits, tx2 writes the row and commits; the value written by tx1 is lost. This can be dealt with using versioning (see below).

  • Unrepeatable reads: transaction (tx) 1 reads a row, tx2 writes the row and commits, tx1 reads the same row and gets a different result from last time. Hibernate’s session cache, combined with versioning, handles this situation.
  • Phantom reads: tx1 executes a SELECT query; tx2 inserts or deletes rows in the database and commits; tx1 executes the same query again and finds a different set of rows from last time. There is very little you can do about this except to be aware of the issue when you design your transactions.

Situation 2. When long transactions are required. Here the problem is usually that some user interaction, which could take a considerable period of time, is required in a use case with database accesses taking place both before and after the user interaction. The issue is that we should not keep a database transaction open for a long period of time (i.e., for more than a fraction of a second), whereas a user interaction could take from minutes to hours. The solution is to break the long transaction up into two (or more) database transactions, and to use detached objects from the first transaction to carry the necessary information to the presentation layer. These objects get modified, outside any transaction, as part of the user interaction and then reattached to the second transaction to cause the necessary updates.
If we left it at that, then all the isolation problems described above could occur, as well as the nastier one of a Dirty Read: Consider a long transaction and a normal transaction: the first contains two database transactions tx1a and tx1b (perhaps the first is to book a flight, the second to book a hotel). The other transaction, tx2 might be to order meals for the passengers on the flight (okay; not very realistic but you get the idea). Now tx1a could update a row and commit and tx2 could read it. Now tx1b decides to roll back (perhaps there were no hotels available). This only rolls back tx1b itself, because tx1a is committed and cannot be rolled back, but since this is part of a long transaction, the programmer has written explicit code to undo the effects of tx1a when tx1b rolls back (such an undo operation is normally called a compensating transaction: it may not even really undo the original transaction, it might only make up for it in some way such as authorizing a reimbursement or a voucher if a promised booking cannot be honoured). Now tx2 has executed, read data from a long transaction, which has since been undone, and has committed. To be correct, the effects of this second transaction should be undone as well but there is no way of knowing this.
I believe the safe solution here is to allow database writes only in the last database transaction of a long transaction, and to use versioning there. Under certain application-specific circumstances more relaxed strategies can be taken, but the above rule of thumb is safe and simple and should only be ignored after very careful analysis.

Versioning
The idea of versioning is very simple and particularly easy to handle with the support Hibernate provides:
Add a version instance variable, and corresponding accessor and mutator methods, to your objects. An int or long is recommended, although some people prefer a TimeStamp or Calendar. The latter two have slightly worse performance and are not absolutely guaranteed to work correctly (one might end up with two updates made so close together in time that they have the same TimeStamp value, although some operating systems ensure that this cannot be the case), but they have the advantage that you can easily see exactly when the update was made.
Declare the version instance variable in your mapping file for the class. This requires the following element added to the mapping file immediately after your id element (assuming your instance variable is called “version” and you want it in a column in the database called “version”):
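For reference, that element looks like the following in the hbm.xml mapping file (assuming a long property named "version" mapped to a column also named "version"):

```xml
<version name="version" column="version" type="long"/>
```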

Now whenever you make an object dirty in memory, Hibernate will update its version (in memory). Whenever the object gets flushed to disk (e.g., at the end of a transaction, or because you call session.update or session.saveOrUpdate to re-attach a detached object), Hibernate will throw a StaleObjectStateException if the version number of the object on disk is not the same as it was when the object was loaded. By catching that exception, the programmer can then decide what to do about the conflict (e.g., report back to the user that the choice he/she has just made is, in fact, no longer available and could they please make another one).

Testing Rules: Let’s Play



Difference between Stub and Mock

Stubbing a method is all about replacing the method with code that returns a specified result (or perhaps raises a specified exception). Mocking a method is all about asserting that a method has been called (perhaps with particular parameters).

Using Mockito, two things let you test the code: spy and mock.

On a mock you set expectations, and you do this only on mock objects; the point of creating a mock is to substitute a dependency.

The object you actually want to test obviously cannot be mocked, so to stub individual methods on it we use a spy.

See the example below:

TicketAction ticketAction;
TicketDAO mockTicketDao;
User mockUser;

@Before
public void setupAction() {
    ticketAction = spy(new TicketAction());

    // assign the field (the original declared a shadowing local variable,
    // which would have left the field null in the test method)
    mockUser = mock(User.class);
    when(mockUser.getUsername()).thenReturn("some.email.id");

    doReturn(mockUser).when(ticketAction).getUser();

    mockTicketDao = mock(TicketDAO.class);
    ticketAction.setTicketDAO(mockTicketDao);
}

@Test
public void listTicketShouldLoadTicketListForCurrentUser() {
    ArrayList currentUserTickets = new ArrayList();
    when(mockTicketDao.getUserTickets(mockUser.getUsername())).thenReturn(currentUserTickets);

    assertEquals(TicketAction.SUCCESS, ticketAction.listTickets());
    assertSame(currentUserTickets, ticketAction.getTicketList());
}

JUnit 4 Test cases for Spring dependency


The base class for the test cases

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {
    "classpath:spring-jndi.xml"
})
@TestExecutionListeners({
    Log4JSetupListener.class,
    JndiSetupListener.class,
    DependencyInjectionTestExecutionListener.class,
    DirtiesContextTestExecutionListener.class,
    TransactionalTestExecutionListener.class
})
public class SpringAppTestCase extends Assert implements ApplicationContextAware {

    protected Logger log = Logger.getLogger(getClass().getName());

    protected JndiTemplate jndiTemplate = new JndiTemplate();

    @Autowired
    protected JdbcTemplate jdbcTemplate;

    protected ApplicationContext applicationContext;

    public void setApplicationContext(ApplicationContext ac) {
        this.applicationContext = ac;
    }

    @Test
    public void doit() {
        // place test logic here
    }
}

How to implement Hibernate Many to Many Relationships



“Many-to-many” example

This is a many-to-many relationship table design: a STOCK can have more than one CATEGORY, and a CATEGORY can belong to more than one STOCK; the relationship is linked through a third table called STOCK_CATEGORY.
[Diagram: many-to-many relationship between STOCK, STOCK_CATEGORY and CATEGORY]


Stock.java

public class Stock implements java.io.Serializable {
   private Set<StockCategory> stockCategories = new HashSet<StockCategory>(0);
   ...
   public Set<StockCategory> getStockCategories() {
	return this.stockCategories;
   }

   public void setStockCategories(Set<StockCategory> stockCategories) {
	this.stockCategories = stockCategories;
   }

Category.java

public class Category implements java.io.Serializable {
   private Set<StockCategory> stockCategories = new HashSet<StockCategory>(0);
   ...
   public Set<StockCategory> getStockCategories() {
	return this.stockCategories;
   }

   public void setStockCategories(Set<StockCategory> stockCategories) {
	this.stockCategories = stockCategories;
   }
}

StockCategory.java

public class StockCategory implements java.io.Serializable {
   private Integer stockCategoryId;
   private Stock stock;
   private Category category;
   ...

Now let’s implement all of this with Hibernate annotations.

Stock.java

@Entity
@Table(name = "stock", catalog = "mkyong", uniqueConstraints = {
        @UniqueConstraint(columnNames = "STOCK_NAME"),
        @UniqueConstraint(columnNames = "STOCK_CODE") })
public class Stock implements java.io.Serializable {
    private Integer stockId;
    private Set<StockCategory> stockCategories = new HashSet<StockCategory>(0);

    @Id
    @GeneratedValue(strategy = IDENTITY)
    @Column(name = "STOCK_ID", unique = true, nullable = false)
    public Integer getStockId() {
        return this.stockId;
    }

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "stock")
    public Set<StockCategory> getStockCategories() {
        return this.stockCategories;
    }

    public void setStockCategories(Set<StockCategory> stockCategories) {
        this.stockCategories = stockCategories;
    }
}

Category.java

@Entity
@Table(name = "category", catalog = "mkyong")
public class Category implements java.io.Serializable {
    private Integer categoryId;
    private Set<StockCategory> stockCategories = new HashSet<StockCategory>(0);

    @Id
    @GeneratedValue(strategy = IDENTITY)
    @Column(name = "CATEGORY_ID", unique = true, nullable = false)
    public Integer getCategoryId() {
        return this.categoryId;
    }

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "category")
    public Set<StockCategory> getStockCategories() {
        return this.stockCategories;
    }

    public void setStockCategories(Set<StockCategory> stockCategories) {
        this.stockCategories = stockCategories;
    }
}

StockCategory.java

@Entity
@Table(name = "stock_category", catalog = "mkyong",
        uniqueConstraints = @UniqueConstraint(columnNames = {
                "STOCK_ID", "CATEGORY_ID" }))
public class StockCategory implements java.io.Serializable {
    private Integer stockCategoryId;
    private Stock stock;
    private Category category;

    @Id
    @GeneratedValue(strategy = IDENTITY)
    @Column(name = "STOCK_CATEGORY_ID", unique = true, nullable = false)
    public Integer getStockCategoryId() {
        return this.stockCategoryId;
    }

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "STOCK_ID", nullable = false)
    public Stock getStock() {
        return this.stock;
    }

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "CATEGORY_ID", nullable = false)
    public Category getCategory() {
        return this.category;
    }
}

Java Concurrency Best Practices


Using synchronization in Java.

Use a semaphore; a basic example of signaling with a semaphore follows.

Using Semaphores for Signaling

Create the threads that share a semaphore (note that this is the custom Semaphore class defined below, not java.util.concurrent.Semaphore):

Semaphore semaphore = new Semaphore();

SendingThread sender = new SendingThread(semaphore);
ReceivingThread receiver = new ReceivingThread(semaphore);

receiver.start();
sender.start();

Then create your own semaphore:

public class Semaphore {
  private boolean signal = false;

  // Sets the signal and wakes up a thread blocked in release().
  public synchronized void take() {
    this.signal = true;
    this.notify();
  }

  // Blocks until take() has been called, then clears the signal.
  public synchronized void release() throws InterruptedException {
    while (!this.signal) wait();
    this.signal = false;
  }
}

Now its usage in a thread (the class must extend Thread, since start() is called on it above):

public class SendingThread extends Thread {
  private final Semaphore semaphore;

  public SendingThread(Semaphore semaphore) {
    this.semaphore = semaphore;
  }

  public void run() {
    while (true) {
      // do something, then signal
      this.semaphore.take();
    }
  }
}
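The ReceivingThread used in the setup snippet is not shown above; here is a minimal sketch. The custom Semaphore class is repeated so the example is self-contained:

```java
// Self-contained sketch: the custom Semaphore from above, plus the
// ReceivingThread that was referenced but not shown.
class Semaphore {
    private boolean signal = false;

    // Sets the signal and wakes up a thread blocked in release().
    public synchronized void take() {
        this.signal = true;
        this.notify();
    }

    // Blocks until take() has been called, then clears the signal.
    public synchronized void release() throws InterruptedException {
        while (!this.signal) wait();
        this.signal = false;
    }
}

class ReceivingThread extends Thread {
    private final Semaphore semaphore;

    public ReceivingThread(Semaphore semaphore) {
        this.semaphore = semaphore;
    }

    @Override
    public void run() {
        try {
            // block until the sender signals, then do something
            this.semaphore.release();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Starting the receiver first is safe: if the sender signals before the receiver waits, the latched signal flag ensures the signal is not lost.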


Annotated Hibernate – Spring Integration : step by step



Create a simple class which should be mapped to a database table. We will be using @ManyToOne and @OneToMany mappings in this example.

import java.io.Serializable;
import java.util.Date;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class MyTable implements Serializable
{
  private Integer id;
  private String title;
  private Artist artist;
  private String label;
  private Date releaseDate;
  private List<Track> tracks;
  private Record record;

  @Id
  @GeneratedValue(strategy=GenerationType.AUTO)
  public Integer getId()
  {
    return id;
  }

  @ManyToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE })
  public Artist getArtist()
  {
    return artist;
  }

  @OneToMany(cascade=CascadeType.ALL,
             fetch=FetchType.EAGER)
  public List<Track> getTracks()
  {
    return tracks;
  }

  @ManyToOne( cascade = {CascadeType.PERSIST, CascadeType.MERGE} )
  public Record getRecord()
  {
    return record;
  }

}

Second, the class-table mapping of the Track class used above:

import java.io.Serializable;
import javax.persistence.Entity;

@Entity
public class Track implements Serializable
{
  private Integer id;
  private String title;
  private int lengthInSeconds;
  private Record record;
  /*
   .... standard getters/setters/equals/hashCode/toString
  */
}

Finally, here’s the Artist:

import java.io.Serializable;
import javax.persistence.Entity;

@Entity
public class Artist implements Serializable
{
  private Integer id;
  private String name;
  private String genre;
  /*
   .... standard getters/setters/equals/hashCode/toString
  */
}



Now we use Spring’s Hibernate support to implement the DAO class and integrate it with the Hibernate session.

/**
 * Implementation of MusicDAO implemented by extending Spring's HibernateDaoSupport
 * @author roblambert
 */
public class MusicDAOHibernateImpl extends HibernateDaoSupport implements MusicDAO
{
  public Collection<Artist> getArtists()
  {
    return getHibernateTemplate().loadAll(Artist.class);
  }

  public Artist getArtistById(Integer id)
  {
    Artist artist = (Artist) getHibernateTemplate().load(Artist.class, id);
    return artist;
  }

  public Artist saveArtist(Artist artist)
  {
    getHibernateTemplate().saveOrUpdate(artist);
    return artist;
  }

  public Record getRecordById(Integer id)
  {
    Record record = (Record) getHibernateTemplate().load(Record.class, id);
    return record;
  }

  public Record saveRecord(Record record)
  {
    getHibernateTemplate().saveOrUpdate(record);
    return record;
  } 

  public Track getTrackById(Integer id)
  {
    Track track = (Track) getHibernateTemplate().load(Track.class, id);
    return track;
  }

  public List<Record> searchRecordsByTitle(String title)
  {
    return getHibernateTemplate().findByNamedQuery("record.titleLike", "%"+title+"%");
  }
}

Now we will use Spring’s AnnotationSessionFactoryBean to pick up the annotated classes. This is what makes the system aware of the persistence mappings declared with the annotations.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
  <bean id="sessionFactory"
        class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="hibernateProperties">

      <props>
<!--YOUR HIBERNATE PROPERTIES HERE...-->
      </props>
    </property>
    <property name="annotatedClasses">
      <list>

        <value>com.zabada.springrecipes.hibernatejpa.entity.Track</value>
        <value>com.zabada.springrecipes.hibernatejpa.entity.Record</value>
        <value>com.zabada.springrecipes.hibernatejpa.entity.Artist</value>
      </list>

    </property>
  </bean>

  <!--sessionFactory will get autowired-->
  <bean id="hibernateInterceptor"
        class="org.springframework.orm.hibernate3.HibernateInterceptor"
        autowire="byName" />

  <!--sessionFactory will get autowired-->

  <bean id="musicDaoTarget"
        class="com.zabada.springrecipes.hibernatejpa.dao.MusicDAOHibernateImpl"
        autowire="byName" />

  <bean id="musicDAO" class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="proxyInterfaces">
      <value>com.zabada.springrecipes.hibernatejpa.dao.MusicDAO</value>
    </property>

    <property name="interceptorNames">
      <list>
        <value>hibernateInterceptor</value>
        <value>musicDaoTarget</value>
      </list>

    </property>
  </bean>

</beans>


Now another example, a named query:

@Entity
@NamedQuery(name="record.titleLike",
            query="select r from Record r where r.title like ?")
public class Record implements Serializable
{
  //...
}

Hibernate playing with annotations


@Entity
@Access(AccessType.FIELD)
public class Disk
{
  @Id
  protected Long id;

  // current solution, mapping using @Where
  @OneToMany(mappedBy = "disk")
  @OrderBy("listOrder")
  @Where(clause = "parent_id is null")
  protected List<Directory> directory;

  // proposed alternate mapping using a (hypothetical) @Criterion annotation,
  // shown commented out because two fields with the same name cannot coexist:
  // @OneToMany(mappedBy = "disk")
  // @OrderBy("listOrder")
  // @Criterion(isNull = "parent")
  // protected List<Directory> directory;
}

@Entity
@Access(AccessType.FIELD)
public class Directory
{
  @Id
  protected Long id;
  protected Integer listOrder;
  protected String name;

  @ManyToOne(fetch = FetchType.LAZY)
  protected Disk disk;

  @ManyToOne(fetch = FetchType.LAZY)
  protected Directory parent;

  @OneToMany(mappedBy = "parent")
  @OrderBy("listOrder")
  protected List<Directory> children;
}

Attended Agile Conference NCR 2011



Last Friday, I attended the AGILE conference in Dwarka.

Out of 10 sessions, I liked two the most.

  1. Agile, from Rebel to Regular: What does that mean for the future of Agile? by Olav Maassen (Xebia)

He explained very well that the old waterfall model is not all bad, and his conclusion was that we must keep trying new techniques. He gave the example that a company using Agile is no guarantee that a project will succeed. “Keep on inventing” was his closing line.

  2. Test Driven Design by Jonas Auken

What I learnt here is to start coding by writing the tests first. I learnt a new way of writing code; I already knew JUnit and had written many test cases.

Install Tomcat GoDaddy Certificate


For Tomcat, use the GoDaddy Apache certificate.

Install it using the following commands:

openssl pkcs12 -export -chain -CAfile gd_bundle.crt -in <domain.com>.crt -inkey <domain.com>.key -out keystore.tomcat -name tomcat -passout pass:<password>

/usr/lib/jvm/java-6-sun-1.6.0.22/bin/keytool -list -v -storetype pkcs12 -keystore keystore.tomcat


Configure Tomcat with the following settings:

 

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/etc/apache2/ssl/keystore.tomcat"
           keystorePass="secret"
           keystoreType="PKCS12"
           clientAuth="false" sslProtocol="TLS" />