
Object caching and replication in WebSphere (cluster)


This cache holds Java objects for use in a distributed environment. For example, objects may be stored by one application server and then retrieved by other application servers in the same Data Replication Service (DRS) cluster. However, this cache is only available in the Enterprise edition of WebSphere Application Server.

These cache instances are retrieved by a JNDI name that is configured on the cache instance resource (which is similar to a JDBC resource). The cache can even be configured so that objects are persistent: flushed to disk when the server is stopped and loaded again upon restart. Individual entries can be designated as non-shared, push (sent to all servers when they are cached), or pull (only their names are sent; values are retrieved only when "pulled" from other servers).
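
For example, here is a minimal sketch of how an application could store entries with the different sharing policies, assuming the instance is bound at the JNDI name services/cache/objcache (the name used by the test servlet later in this post) and using the put overload and EntryInfo constants from the com.ibm.websphere.cache API:

import javax.naming.InitialContext;

import com.ibm.websphere.cache.DistributedMap;
import com.ibm.websphere.cache.EntryInfo;

public class SharingPolicyExample {

 public static void storeWithPolicies() throws Exception {
  // Look up the map bound at the JNDI name of the configured object cache instance.
  DistributedMap map = (DistributedMap) new InitialContext()
    .lookup("services/cache/objcache");

  // put(key, value, priority, timeToLive, sharingPolicy, dependencyIds)
  // priority 1, time-to-live of 300 seconds, no dependency IDs
  map.put("pushedKey", "pushedValue", 1, 300, EntryInfo.SHARED_PUSH, null); // value replicated as soon as it is stored
  map.put("pulledKey", "pulledValue", 1, 300, EntryInfo.SHARED_PULL, null); // only the key is announced; value fetched on demand
  map.put("localKey", "localValue", 1, 300, EntryInfo.NOT_SHARED, null);    // never replicated to other members
 }
}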

The following steps are mostly the same whether you are working with a standalone server or a cluster.

Create a replication domain if one does not already exist. It is created automatically if “memory-to-memory” session replication is selected during the cluster creation process.


Check that all the cluster members have been added to the domain.


Create an object cache instance.
In the admin console, go to:
Resources -> Cache Instances -> Object Cache Instances






You might find a default WebSphere Dynamic Cache instance already created and bound into the global JNDI namespace with the name "services/cache/distributedmap". You can either use this default instance or create a new one.
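
If you do use the default instance, the lookup is the same as for a custom instance, just with the well-known JNDI name. A minimal sketch:

import javax.naming.InitialContext;

import com.ibm.websphere.cache.DistributedMap;

public class DefaultCacheLookup {

 public static DistributedMap lookupDefaultMap() throws Exception {
  // The default dynamic cache instance is bound at this JNDI name.
  return (DistributedMap) new InitialContext()
    .lookup("services/cache/distributedmap");
 }
}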


Additional cache instances can also be created using a properties file named cacheinstances.properties, with the following format:
cache.instance.0=/services/cache/instance_one
cache.instance.0.cacheSize=1000
cache.instance.0.enableDiskOffload=true
cache.instance.0.diskOffloadLocation=${WAS_INSTALL_ROOT}/temp
cache.instance.1=/services/cache/instance_two
cache.instance.1.cacheSize=1500
cache.instance.1.enableDiskOffload=true
cache.instance.1.diskOffloadLocation=C:/disk

This cacheinstances.properties file must be located on either the application server's or the application's classpath.

Add the cache service to each individual cluster member.



Do the same for all members in the cluster, then restart the cluster.


Test the setup using the following servlet:

import java.io.IOException;
import java.io.PrintWriter;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.ibm.websphere.cache.DistributedMap;

/**
 * Servlet implementation class ObjCacheServlet
 */
public class ObjCacheServlet extends HttpServlet {
 private static final long serialVersionUID = 1L;


 public ObjCacheServlet() {
  super();
 }

 /**
  * Stores, retrieves, or invalidates a cache entry depending on the query
  * parameters: ?obj=&key= stores, ?key= retrieves, ?key=&invalid=true invalidates.
  */
 protected void doGet(HttpServletRequest request,
   HttpServletResponse response) throws ServletException, IOException {
  String stringObj = request.getParameter("obj");
  String stringKey = request.getParameter("key");
  String invalidate = request.getParameter("invalid");
  PrintWriter writer = response.getWriter();

  if (stringObj != null && stringKey != null) {
   putCache(stringKey, stringObj);
   writer.println("Storing object:" + stringObj + " with key:"
     + stringKey);
  } 
  
  if(invalidate !=null &&  stringKey != null){
   invalidateCache(stringKey);
   writer.println("Invalidated object with key:"
     + stringKey);
  }

  if(stringObj == null && stringKey != null){
   writer.println("Retrieved object:" + getCache(stringKey) + " for key:"
     + stringKey);
  } 
  
  
  writer.println("Local port:" + request.getLocalPort());
  String memberName = (String) getServletContext().getAttribute(
    "com.ibm.websphere.servlet.application.host");
  writer.println("Cluster member:" + memberName);

  writer.flush();
  writer.close();
 }

 protected void doPost(HttpServletRequest request,
   HttpServletResponse response) throws ServletException, IOException {
  // Delegate POST requests to the same handling as GET.
  doGet(request, response);
 }

 /**
  * Looks up the DistributedMap bound at the JNDI name configured on the
  * object cache instance ("services/cache/objcache" in this example).
  */
 private static DistributedMap getDm() throws Exception {
  Context ctx;
  DistributedMap dm = null;

  try {
   ctx = new InitialContext();
   dm = (DistributedMap) ctx.lookup("services/cache/objcache");
   if (dm == null) {
    System.out.println("dm is null");
   }
  } catch (NamingException e) {
   e.printStackTrace(System.out);
  }

  return dm;
 }

 public static void putCache(Object key, Object obj) {
  System.out.println("putCache");
  if ((obj == null) || (key == null)) {
   // log error
   return;
  }

  try {
   String keyAsString = ((String) key).toLowerCase();

   getDm().put(keyAsString, obj);
   System.out.println("Storing obj with key:" + key);
  } catch (Exception e) {
   e.printStackTrace(System.out);
  }
 }

 public static Object getCache(Object key) {
  System.out.println("getCache");
  Object obj = null;
  try {
   String keyAsString = ((String) key).toLowerCase();
   obj = getDm().get(keyAsString);
  } catch (Exception e) {
   e.printStackTrace(System.out);
   obj = null;
  }

  return obj;
 }

 public static void invalidateCache(Object key) {
  System.out.println("invalidateCache");
  try {
   // Use the same lowercased key form that putCache() and getCache() use.
   String keyAsString = ((String) key).toLowerCase();
   getDm().invalidate(keyAsString);

  } catch (Exception e) {
   e.printStackTrace(System.out);
  }
 }
}

Now hit the URLs and try the different cases.

http://localhost/TestSR/ObjCacheServlet?obj=firstobj&key=1

Storing object:firstobj with key:1
Local port:9088
Cluster member:MEMBER_002

http://localhost/TestSR/ObjCacheServlet?key=1

Retrieved object:firstobj for key:1
Local port:9091
Cluster member:MEMBER_003

http://localhost/TestSR/ObjCacheServlet?key=1

Retrieved object:firstobj for key:1
Local port:9087
Cluster member:MEMBER_001

http://localhost/TestSR/ObjCacheServlet?key=1&invalid=true

Invalidated object with key:1
Retrieved object:null for key:1
Local port:9088
Cluster member:MEMBER_002

http://localhost/TestSR/ObjCacheServlet?key=1

Retrieved object:null for key:1
Local port:9091
Cluster member:MEMBER_003
