KickBorn – Iron Man's next robot


It is time to turn your phone hands-free with Kickborn; one could call it the "J.A.R.V.I.S." from Iron Man.

Kickborn is now fully hands-free. It listens for your voice in the background, without internet, and activates on a command like "hello white", which is the default wake phrase.

You may have the following questions in mind:

Q1 – What is the battery performance?

A1 – Kickborn's hands-free mode is highly battery friendly while it listens in the background.

Q2 – What can Kickborn do?

A2 – Any time your phone is around, just say "hello white" and you can do a whole bunch of things, like "Call John", "Send text to Robert", "Remind me tomorrow about laundry at 10 pm", "Play Swift music", or ask "Which is the biggest planet?" or "Who is the president of America?"

Q3 – Is there a risk that Kickborn records the audio and misuses it?

A3 – Kickborn never stores audio on disk or in the cloud. It only listens for the voice command; everything else is ignored.

Q4 – Where can I find the latest version of Kickborn?

A4 – It is on the Play Store.

Sample Puppet module to install the JBoss RPM



Simply pasting a working Puppet module here to install the JBoss 7 RPMs:

# Puppet class jbosseap to install the JBoss EAP RPMs and run
# the JBoss server as a service
class jbosseap (
 # Bind/multicast addresses, presumably consumed by the
 # standalone-full-ha.xml.erb template below
 $jboss_bind_address            = '',
 $jboss_bind_address_management = '',
 $jboss_multicast_address       = ''
) {

 class jboss_rpm_install {
  # Required RPM packages
  $jboss_core_rpm = [
   'jbossas-appclient',
   'jbossas-bundles',
   'jbossas-core',
   'jbossas-hornetq-native',
   'jbossas-jbossweb-native',
   'jbossas-modules-eap',
   'jbossas-product-eap',
   'jbossas-standalone',
   'jbossas-welcome-content-eap',
  ]
  package { $jboss_core_rpm: ensure => 'installed' }
 }

 class jboss_config {
  require jboss_rpm_install

  # Ensure the JBoss config directory (created by the RPM)
  file { '/etc/jbossas':
   ensure => directory,
   owner  => 'jboss',
   group  => 'jboss',
  }

  # Copy the master standalone template XML
  file { '/usr/share/jbossas/standalone/configuration/standalone.xml':
   ensure  => file,
   owner   => 'jboss',
   content => template('jbosseap/standalone-full-ha.xml.erb'),
   mode    => '0600',
  }

  # Ensure the JBoss user (created by the RPM)
  user { 'jboss':
   ensure     => present,
   managehome => true,
   gid        => 'jboss',
   comment    => 'Ensure JBoss User',
  }
 }

 include jboss_rpm_install
 include jboss_config

 # Start the JBoss service
 service { 'jbossas':
  ensure  => running,
  enable  => true,
  require => Class['jboss_config'],
 }
}

 

Application server client to connect to JBoss Remote Clustered EJB


Just putting down the raw steps to guide you through the configuration part:

First, copy "jboss-ejb-client.properties" to the classpath; it defines the JBoss cluster topology. Alternatively, load the properties manually into the EJB context on the client side.

 

To load them manually, use the code below:

import java.io.*;
import java.util.Properties;

import org.jboss.ejb.client.*;
import org.jboss.ejb.client.remoting.ConfigBasedEJBClientContextSelector;

// EJB_CLIENT_PROPERTY_FILE_NAME, EventSenderException, getPropertyVal and the
// REMOTE_HDS_* constants are defined elsewhere in the client code.
final Properties clientProps = new Properties();
try {
    InputStream inputStream = new FileInputStream(new File(
            System.getProperty("PropertiesDir"), EJB_CLIENT_PROPERTY_FILE_NAME));
    clientProps.load(inputStream);
} catch (IOException e) {
    throw new EventSenderException("Could not find EJB client configuration properties at "
            + EJB_CLIENT_PROPERTY_FILE_NAME + " in directory " + System.getProperty("PropertiesDir"));
}

// Build an EJB client context from the properties and register it globally
final EJBClientConfiguration clientConfiguration = new PropertiesBasedEJBClientConfiguration(clientProps);
final ConfigBasedEJBClientContextSelector selector = new ConfigBasedEJBClientContextSelector(clientConfiguration);
EJBClientContext.setSelector(selector);

// Locate the remote stateless bean and create a client-side proxy for it
StatelessEJBLocator<RemoteService> statelessEJBLocator = new StatelessEJBLocator<RemoteService>(
        RemoteService.class, getPropertyVal(REMOTE_HDS_APPLICATION_NAME),
        getPropertyVal(REMOTE_HDS_MODULE_NAME), getPropertyVal(REMOTE_HDS_BEAN_NAME), "");
remoteService = EJBClient.createProxy(statelessEJBLocator);


 

And the sample properties file should look like this:

### JBoss EJB client properties file defining the JBoss host and port properties
remote.connections=default
remote.connection.default.host=10.51.5.99
remote.connection.default.port=4447
remote.connection.default.username=hds
remote.connection.default.password=hdshds
remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false

# Declare the cluster(s)
remote.clusters=ejb
remote.cluster.ejb.username=hds
remote.cluster.ejb.password=hdshds

# Connection configuration(s) for the "ejb" cluster (these are just examples; you might have to add a username and
# password or a callback handler, depending on the security configuration of the cluster node servers).
remote.cluster.ejb.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.cluster.ejb.connect.options.org.xnio.Options.SSL_ENABLED=false
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false

# Remote EJB client properties
#
# Application name of the remote hds. It is the name of the .ear deployed with the remote EJB in JBoss.
# E.g. if the name of the deployed ear is 'hds.ear', then the value should be 'hds'.
remote.hds.application.name=hds
# Module name of the remote hds. It is the module name or jar name in which the remote service implementation is deployed.
# E.g. if the jar name is 'service-service.jar', then this value should be 'service-service'.
remote.hds.module.name=service-service-1.0.0-SNAPSHOT
# The simple name of the implementation of the remote interface RemoteService.
# E.g. if 'RemoteServiceImpl' provides the implementation, then this value should be 'RemoteServiceImpl'.
remote.hds.bean.name=RemoteServiceImpl


Should work smoothly. The steps are quite raw, but I hope they will help someone, or maybe help me later 😉

Hadoop – Reading continued…



Why use Hadoop?

One typical drive from 1990 could store 1,370 MB of data and had a transfer speed of 4.4 MB/s, so you could read all the data from a full drive in around five minutes. Over 20 years later, one terabyte drives are the norm, but the transfer speed is around 100 MB/s, so it takes more than two and a half hours to read all the data off the disk.

This is a long time to read all data on a single drive and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once. Imagine if we had 100 drives, each holding one hundredth of the data. Working in parallel, we could read the data in under two minutes.

The approach taken by MapReduce may seem like a brute-force approach. The premise is that the entire dataset or at least a good portion of it is processed for each query. But this is its power. MapReduce is a batch query processor, and the ability to run an ad hoc query against your whole dataset and get the results in a reasonable time is transformative. It changes the way you think about data, and unlocks data that was previously archived on tape or disk. It gives people the opportunity to innovate with data. Questions that took too long to get answered before can now be answered, which in turn leads to new questions and new insights.

For updating the majority of a database, a B-Tree is less efficient than MapReduce, which uses Sort/Merge to rebuild the database.

An RDBMS is good for point queries or updates, where the dataset has been indexed to deliver low-latency retrieval and update times of a relatively small amount of data. MapReduce suits applications where the data is written once, and read many times, whereas a relational database is good for datasets that are continually updated.

MapReduce works well on unstructured or semistructured data, since it is designed to interpret the data at processing time. In other words, the input keys and values for MapReduce are not an intrinsic property of the data, but they are chosen by the person analyzing the data.

A web server log is a good example of a set of records that is not normalized (for example, the client hostnames are specified in full each time, even though the same client may appear many times), and this is one reason that logfiles of all kinds are particularly well-suited to analysis with MapReduce.

MapReduce is a linearly scalable programming model. The programmer writes two functions, a map function and a reduce function, each of which defines a mapping from one set of key-value pairs to another. These functions are oblivious to the size of the data or of the cluster that they are operating on, so they can be used unchanged for a small dataset and for a massive one. More important, if you double the size of the input data, a job will take twice as long to run. But if you also double the size of the cluster, the job will run as fast as the original one. This is not generally true of SQL queries.

In April 2008, Hadoop broke a world record to become the fastest system to sort a terabyte of data. Running on a 910-node cluster, Hadoop sorted one terabyte in 209 seconds (just under 3½ minutes), beating the previous year's winner of 297 seconds. In November of the same year, Google reported that its MapReduce implementation sorted one terabyte in 68 seconds. As the first edition of this book was going to press (May 2009), it was announced that a team at Yahoo! used Hadoop to sort one terabyte in 62 seconds.

Data Flow

There are two types of nodes that control the job execution process: a jobtracker and a number of tasktrackers. The jobtracker coordinates all the jobs run on the system by scheduling tasks to run on tasktrackers. Tasktrackers run tasks and send progress reports to the jobtracker, which keeps a record of the overall progress of each job. If a task fails, the jobtracker can reschedule it on a different tasktracker. Hadoop divides the input to a MapReduce job into fixed-size pieces called input splits, or just splits. Hadoop creates one map task for each split, which runs the user-defined map function for each record in the split.

Map tasks write their output to the local disk, not to HDFS. Why is this? Map output is intermediate output: it’s processed by reduce tasks to produce the final output, and once the job is complete the map output can be thrown away. So storing it in HDFS, with replication, would be overkill. If the node running the map task fails before the map output has been consumed by the reduce task, then Hadoop will automatically rerun the map task on another node to re-create the map output.

A typical Hadoop installation has the following setup:

  • A single instance of a TaskTracker runs on each slave node, as a separate JVM process.
  • A single instance of the DataNode daemon runs on each slave node, as a separate JVM process.
  • One or more task instances run on each slave node; each task instance runs as a separate JVM process. The number of task instances can be controlled by configuration; typically a high-end machine is configured to run more task instances.

Differences between HDFS and NAS

    • In HDFS, data blocks are distributed across the local drives of all machines in the cluster, whereas in NAS data is stored on dedicated hardware.
    • HDFS is designed to work with the MapReduce system, since computation is moved to the data. NAS is not suitable for MapReduce, since data is stored separately from the computation.
    • HDFS runs on a cluster of machines and provides redundancy using a replication protocol, whereas NAS is served by a single machine and therefore does not provide data redundancy.

Worth reading the MapReduce introduction posted here: http://developer.yahoo.com/hadoop/tutorial/module4.html#dataflow

MapReduce main concept:

Map – process the raw data and emit the output as key = value pairs.

Reduce – collect all the values for the same key and produce the final output.

Map and Reduce work together to do the processing in parallel. Awesome concept.
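
To make this concrete, here is a minimal sketch of the classic WordCount mapper and reducer using Hadoop's org.apache.hadoop.mapreduce API (an illustration only, not a tuned job):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: take a raw line of text and emit (word, 1) pairs
public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}

// Reduce: collect all counts for the same word and emit the total
class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }
        context.write(key, new IntWritable(sum));
    }
}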


JavaScript – First open source HTML5 SIP client



Now one can do real-time chat and voice using only JavaScript, because the JavaScript client has gained the capability to connect to any SIP or IMS network from your preferred browser to make and receive audio/video calls and instant messages.

Currently it is supported by any web browser that supports WebRTC (preferably Chrome), but it definitely opens doors for other browsers and platforms. I like it. It's getting easier now.

Supported features with this client are:

  • Audio / Video call
  • Instant messaging
  • Presence
  • Call Hold / Resume
  • Explicit Call transfer
  • Multi-line and multi-account

For more information check http://code.google.com/p/sipml5/

Java Enterprise > Performance to Scaling


First I want to discuss the classic Java performance problems:

1. Memory leaks – keeping strong references in memory, or increased GC time
2. Resource starvation
3. Locks in the database – over-synchronization in the code
4. Related threading problems
5. Network-related problems
6. Scalability problems

Finding the issue in a non-performing or non-scalable application takes 80% of the time; fixing it takes the remaining 20%.

Practices to find the issues in the code:

1. Monitor the system/application.
2. Always performance-test your code after each release to production.
3. Keep historical data across releases so that if performance drops, you can find the culprit release.
4. Modularize the components – very, very important. In fact, it is the key if you want a scalable application.
5. Monitor the application – Hyperic HQ is one monitoring tool, covering a wide variety of data: JVM, JMS, and OS statistics.
6. Take thread dumps: CTRL-BREAK on Windows, kill -3 on Unix/Linux.

 

Most of the time it is the database queries that take time. It is quite necessary that you query data on indexed columns only. Secondly, it is good to avoid querying live data: create a job that builds a replica and run your statistics queries against that copy. This way the DB queries will not block the live system.

JSF 2 Command button not working



There is an issue with the command button inside a ui:repeat tag.

If a command button or link is nested in a ui:repeat tag, it will not work unless the bean/list being iterated is in view scope or a wider scope.

The solution is to put the iterated list in a bean with view scope or higher, as in the sketch below.
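
As a minimal sketch (bean, list, and method names are hypothetical), a view-scoped managed bean holding the iterated list might look like this:

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

@ManagedBean
@ViewScoped
public class ItemsBean implements Serializable {

    // The same list instance survives the postback, so the ui:repeat
    // cursor can be restored and the button's action is found
    private List<String> items = Arrays.asList("one", "two", "three");

    public List<String> getItems() {
        return items;
    }

    public String select(String item) {
        System.out.println("Selected: " + item);
        return null; // stay on the same view
    }
}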

It is a fundamental part of the JSF lifecycle that the first phase is the Restore View phase, and that is the same with or without Facelets.

JSF iteration controls generally maintain a cursor to do their work, and the only way that the cursor can function properly is if the referenced datamodel is in a non-transient scope. For JSF1, that means Session Scope or higher. For JSF2, View Scope or higher.

Took me six hours to fix, so I am putting it here.

Transitioning from developer to architect.


“Architects That Aren’t Allowed To Code”

This is difficult for my brain to digest. Of course, I was expected to design the software and discuss it with the team, and I was naturally expected to review the implementation against the design and standards.
What I wasn't able to do was get involved in the implementation.

In the past, I simply corrected anything I didn't like myself. Now I have to convince someone else to change it, or simply let it go. In this position I must have the knowledge to convince others about what is wrong and what is right, within the available time and resources.

Also, in this role we have to be quite sure what exactly is being used in the application and why. The architect needs to consider the whole system and how it interacts with the outside world. The system must be scalable to future changes, business requirements, and user load.

In the end, titles are not important. What is important is the role we perform. Architect is a role more than a title.

JSF quick notes


JSF is a request-driven MVC web framework based on a component-driven UI design model, using XML files called view templates or Facelets views. Requests are processed by the FacesServlet, which loads the appropriate view template, builds a component tree, processes events, and renders the response (typically HTML) to the client. The state of UI components (and some other objects) is saved at the end of each request (called state saving; components marked transient are excluded), and restored upon the next creation of that view. Several types of state saving are available, including client-side and server-side state saving. Out of the box, JSF 1.x uses JavaServer Pages (JSP) for its display technology, but it can also accommodate other technologies (such as XUL and Facelets). JSF 2 uses Facelets by default for this purpose. Facelets is a more efficient, simpler, and more powerful view description language (VDL).

  • Managed beans: a dependency injection system (easily interfaced with CDI, Spring, or Guice), also called "backing beans" or "page beans".
  • A template-based component system for rapid composite component creation, without the need for Java classes.
  • Built-in Ajax support using <f:ajax> (since JSF 2.0).
  • Built-in support for bookmarking & page-load actions.
  • Integration with the Unified Expression Language (EL), which is core to the function of JSF; views may access managed bean fields and methods via EL, e.g. #{bean.property}.
  • A default set of HTML and web-application-specific UI components.
  • A server-side event model for dispatching events and attaching listeners to core system functionality, such as "before render response" or "after validation".
  • State management, supporting "request", "session", "application", "flash", and "view" scoped Java beans.
  • Two XML-based tag libraries (core and html) for expressing a JavaServer Faces interface within a view template (usable with both JSP and Facelets).

One of the biggest mistakes you can make is to assume that Facelets is merely a replacement for Tiles. Facelets is much more than that: it’s a new way of thinking about JSF.

JavaServer Faces is a server-based UI component framework with many features over other technologies:

> Developers work with UI component tags, not bare HTML.
> Developers do not need to parse the request & response.
> JSF has different render kits, like HTMLRenderer, SVGRenderer (can be used for phones), and WMLRenderer.
> It is a Java EE standard.
> A JSF page is represented as a tree of UI components, called the view.
> Developers can easily create custom components.
> Built-in Ajax support.

Lifecycle of JSF

> Reconstruct the component tree from its previous state
> Parse and convert the request values
> Validations
> Update the model and handle events (invoke application)
> Render the view

AJAX

Default values of the f:ajax attributes:
– render: none
– execute: @this (space-separated ids; @form is also widely used)
– event: action for buttons and links, valueChange for text fields, text areas, select menus, and list boxes
– onevent: none

COMPOSITE COMPONENTS

You may want to reuse a list, so make it a component.

A composite component file has two sections: composite:interface (declare attributes here) and composite:implementation (create the output here).

Composite component with attributes:

– composite:interface. Reminder: required is not properly enforced in Mojarra 2.0.2 and earlier.

– composite:implementation:
<h2>Message 1: "#{cc.attrs.msg1}"</h2>
<h2>Message 2: "#{cc.attrs.msg2}"</h2>

– Main page passes the msg1 and msg2 attribute values.

– Result:
Message 1: "Hello, world"
Message 2: "Hi"

Before: ActionListener was used
– For buttons; the form was automatically submitted when clicked.
• Now: ValueChangeListener is used
– For comboboxes, listboxes, radio buttons, checkboxes, textfields, etc.
• Differences from ActionListener:
– The form is not automatically submitted.
– You need JavaScript to submit the form: onclick="submit()" or onchange="submit()".
– Event incompatibility between Firefox and IE:
  • Firefox, Netscape, and Opera fire the onchange event when the combobox selection changes, a radio button is selected, or a checkbox is checked/unchecked.
  • Internet Explorer fires the event when the selection changes and another GUI control receives the input focus.
– onclick generally works consistently in current browsers; older IE versions behave differently. Test on multiple browsers!

Event listeners are used to handle events that affect only the user interface:
– They typically fire before beans are populated and validation is performed.
  • Use immediate="true" to designate this behavior.
– The form is redisplayed after the listeners fire.
  • No navigation rules apply.
• Action listeners
– Attached to buttons, hypertext links, or image maps.
– Automatically submit the form.
• Value change listeners
– Attached to radio buttons, comboboxes, list boxes, checkboxes, textfields, etc.
– Require onclick="submit()" or onchange="submit()" to submit the form.
• Test carefully on all expected browsers.
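
For reference, on the Java side a value change listener is just a bean method taking a ValueChangeEvent (bean and method names here are hypothetical):

import javax.faces.event.ValueChangeEvent;

public class SettingsBean {

    private String level;

    // Referenced from the view as valueChangeListener="#{settingsBean.levelChanged}";
    // the component still needs onclick/onchange="submit()" to post the form
    public void levelChanged(ValueChangeEvent event) {
        level = (String) event.getNewValue();
    }

    public String getLevel() { return level; }
    public void setLevel(String level) { this.level = level; }
}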

Page Templating:

Avoiding repetition in Facelets pages:
– OOP provides good reuse for Java code, but what about Facelets code? We also want to avoid repeating nearly-identical code there; it is very common to have multiple pages that share the same basic layout and the same general look.

Define a template file:
– Content that will appear in all clients is entered directly.
– Content that can be replaced in client files is marked with ui:insert (with default values in case the client does not supply content).

Define a client file that uses the template:
– Use ui:composition to specify the template file used.
– Use ui:define to override content for each replaceable section in the template (marked in the template with ui:insert).

Design Everything > Composition over Inheritance > Has-a, not Is-a


Take the example of a duck.

A duck can fly and quack in different ways. Use a Duck abstract class plus FlyBehavior and QuackBehavior interfaces with their various implementations, including a do-nothing implementation.

Implementations can then be plugged into the Duck subclasses dynamically.

In this design, a Duck HAS-A flyBehavior and a quackBehavior.

// Strategy pattern: Duck HAS-A FlyBehavior and HAS-A QuackBehavior
public abstract class Duck {

    FlyBehavior flyBehavior;
    QuackBehavior quackBehavior;

    public void performFly() {
        flyBehavior.fly();
    }

    public void performQuack() {
        quackBehavior.quack();
    }

    // behaviors can be swapped at runtime
    public void setFlyBehavior(FlyBehavior flyBehavior) {
        this.flyBehavior = flyBehavior;
    }

    public void setQuackBehavior(QuackBehavior quackBehavior) {
        this.quackBehavior = quackBehavior;
    }
}
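
For completeness, a minimal sketch of the behavior interfaces and a do-nothing implementation (class names are illustrative):

public interface FlyBehavior {
    void fly();
}

public interface QuackBehavior {
    void quack();
}

// A "do nothing" implementation, e.g. for a rubber duck that cannot fly
public class FlyNoWay implements FlyBehavior {
    public void fly() {
        // intentionally does nothing
    }
}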

Design Points to Remember 

  • Remember: you have to code to interfaces, not to concrete classes. Understand this point.
  • Next, check whether your object needs to notify someone when it changes. Not here; but take the example of weather data: when it changes we need to inform the displays, or subscribers. There we would integrate the Observer pattern.
  • Classes should be open for extension but closed for modification.
  • Replace if-else chains with an abstract method, and let the implementation plus configuration decide.
  • Divide the design into creator classes and product classes. Now we have different creators and products: create an interface for the creator and then concrete classes from it, and similarly for the products.
  • During code review or refactoring there should be no "new A()", where A is any class; use a factory for instantiating any class, otherwise treat it as hard-coding. This pushes you toward OOP: you create a common interface for all the related classes, and anywhere you wrote "new A()" you switch to the interface and let a factory hand you the instance of the specific type (see the sketch after this list). To debate this: we write String s = ""; all the time, but whenever we do, we should know we are making an exception for valid reasons, e.g. we know String is immutable.
  • The number one rule of good design is that you talk only in the language of interfaces, and an interface should depend only on interfaces and abstract classes. Wherever you need an implementation, use it through an abstraction, such as an abstract class.
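
A minimal sketch of that factory idea, with hypothetical class names:

public interface Shape {
    void draw();
}

class Circle implements Shape {
    public void draw() { System.out.println("circle"); }
}

class Square implements Shape {
    public void draw() { System.out.println("square"); }
}

// The factory hides the "new": callers depend only on the Shape interface,
// and the conditional is centralized here instead of scattered through the code
public class ShapeFactory {
    public static Shape create(String type) {
        if ("circle".equals(type)) {
            return new Circle();
        } else if ("square".equals(type)) {
            return new Square();
        }
        throw new IllegalArgumentException("Unknown shape: " + type);
    }
}

// Usage: Shape s = ShapeFactory.create("circle");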

GlassFish and EJB3 > finding it awesome



GlassFish is an open source application server for developing and deploying Java EE applications and web services. This server is compliant with Java Enterprise Edition 5 (Java EE 5), and is in fact the reference implementation of Java EE 5.

The GlassFish server offers tremendous support for JMS messaging, by offering a fully integrated JMS provider. The Java Message Service API is implemented by integrating Sun Java System Message Queue software into GlassFish, providing transparent JMS messaging support.

Few Terms to clear out :

What is a domain in GlassFish: a domain is (statically) an administrative namespace. It is a boundary within which all GlassFish entities are controlled by an administrator.

Refer image : http://blogs.sun.com/bloggerkedar/resource/as-domains.jpg

What are node agents: node agents are responsible for communication between your domain admin server and server instances or clusters. You create a node agent by executing asadmin create-node-agent <nodeagentname>. If you don't specify a node agent name, the hostname of your system will be used. If you create your node agent on a different server, you also have to specify host/port information for your domain admin server by passing --host <hostname> --port <portnumber> to asadmin.

JMS Support in GlassFish

GlassFish supports two JMS resources: the connection factory and destination resources.

A connection factory is an object used by JMS clients to create a connection to a JMS provider. Connection factories can be of three types:

  • Connection factory: Used by point-to-point as well as publish-subscribe messaging models.
  • Queue connection factory: Used by the point-to-point messaging model.
  • Topic connection factory: Used by the publish-subscribe messaging model.

A destination is the object used by JMS message producers to post the message to and the source from which JMS message consumers consume the message. Supported destination types are:

  • Queue: The queue is the destination for point-to-point communication.
  • Topic: The topic is the destination for publish-subscribe communication.

Following are some of the JMS connection features supported by GlassFish.

Connection Pool

A GlassFish server automatically pools JMS connections. The user can set connection pool properties using the GlassFish admin console or asadmin commands. Connection pool details are configured while creating connection factories. Some of the connection pool parameters supported by GlassFish are:

  1. Initial and minimum pool size: Indicates the number of initial connections in the pool. This is also the minimum number of connections set for the pool.
  2. Maximum Pool Size: The maximum number of connections available in the pool.
  3. Pool resize quantity: The number of connections to be removed when the pool reaches idle timeout.
  4. Idle timeout: The maximum time that a connection may remain idle in the pool.
  5. Max wait time: The maximum wait time before a connection timeout is sent.
  6. Failure action: In case of a failure, the connection can be closed and reconnected.
  7. Transaction support: The level of transaction support. Supported transaction types are “local transaction,” “XA transaction,” and “no transaction.”
  8. Connection validation: If this property is selected, connections will be validated before being passed on to the application.

Connection Failover

This feature enables the application server to reconnect to the message broker if the connection is lost. If reconnection is enabled and if the primary message broker goes down, then the application server will try to reconnect to another available broker. The user can configure the number of retries and the time interval between retries.

Accessing JMS Resources from Application

In GlassFish, the connection factory and destination can be accessed in two ways: by using Java Naming and Directory Interface (JNDI) lookup or using annotations.

JNDI Lookup

JMS clients can use the JNDI API to look up connection factories and message destinations.

InitialContext jndi = new InitialContext();

// Look up the queue connection factory
QueueConnectionFactory qFactory =
        (QueueConnectionFactory) jndi.lookup("webTrackerConnFactory");

// Look up the queue
Queue queue = (Queue) jndi.lookup("webTrackerQueue");
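
Continuing from that lookup, sending a message could look roughly like this (a sketch, not the article's exact code; JMSException handling omitted):

// qFactory and queue come from the JNDI lookup above
QueueConnection connection = qFactory.createQueueConnection();
QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
QueueSender sender = session.createSender(queue);
TextMessage message = session.createTextMessage("page=/home;user=guest");
sender.send(message);
connection.close();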
Annotations

Annotations are a declarative style of programming introduced in Java SE 5.0. Annotations are like metatags that can be applied to classes, constructors, methods, variables, etc.

The annotation @Resource is used for looking up connection factories and destinations. In a web application, if this annotation is placed on a variable, then the servlet container will inject the requested resource; i.e., the annotated variable will be pre-populated with an appropriate value before serving the request.

@Resource(name = "connFactory", mappedName = "webTrackerConnFactory")
private QueueConnectionFactory qFactory;

@Resource(name = "jmsQueue", mappedName = "webTrackerQueue")
private Queue queue;
Put Messaging to Work

So far we have discussed how JMS and MDB work together to facilitate asynchronous messaging. We have also seen GlassFish's capabilities and the JMS support it provides. Now we will see how we can make these technologies work with the help of a real-time use case called "Web Access Tracker."

Web Access Tracker helps site administrators or business users monitor user request statistics such as the number of visitors in a day, frequently accessed pages, page/service request time, request serving time, etc. In the following section, I will explain a solution to collect the web access information for each page accessed by the user, without compromising the performance of the actual application. You can download the full source code of this example, as well as its deployable binary file, from the link in the Resources section.

Solution

A JMS-based messaging solution has been selected to capture and process web access data with the help of other technologies like Java Architecture for XML Binding (JAXB), Java Server Faces (JSF), servlet filters, etc. GlassFish provides support for JMS messaging and also for deploying the application.

The demo application uses a servlet filter to intercept user requests and extract the required information from the request header. This information will then be posted to a message queue as a JMS message. Messaging is handled asynchronously through a message-driven bean, which is the consumer of the JMS messages. The data collected from each message is persisted to an XML data store. JAXB is the technology of choice for accessing and storing data in the XML files. The user interfaces are developed using JSF.
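
As a rough sketch of what that message-driven consumer might look like (bean and queue names here are illustrative, not the article's actual source):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Consumes web-access events posted to the queue by the servlet filter
@MessageDriven(mappedName = "webTrackerQueue", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class WebTrackerMDB implements MessageListener {

    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String payload = ((TextMessage) message).getText();
                // persist the access record, e.g. via JAXB into the XML store
                System.out.println("Received access record: " + payload);
            }
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}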

Refer: http://today.java.net/pub/a/today/2008/01/22/jms-messaging-using-glassfish.html