Showing posts with label simple little things of programming. Show all posts

Thursday, February 23, 2012

U limit to extend

Sitting in the year 2012, it is needless to say that multithreaded software applications are not optional; they are the norm. Systems today are even more complex, with distributed architectures underneath, clustered services, and multiple system and user threads. Recently, three of Tachyon's development team members had to scratch their heads for many hours to debug a problem none of us had ever cared to look at before. The Aha moment came with a shared smile, making it worthwhile to blog about and share with the rest of the Gang.
We were testing Tachyon on a system that we got as part of a system rotation policy. Tachyon is a pure Java application using a few of Oracle's middle-tier products; we have created a system-agnostic deployable artifact and a rich console to manage our processes and nodes. With 1 terabyte of memory on these machines we were already smiling like a kid with an ice cream in hand in winter. Then we started seeing an OutOfMemoryError when the fourth node was started. This just couldn't be possible, we thought - each node has just 1GB of heap allocated; there is no way it could exhaust 1TB of memory. "free -m" confirmed our assumption: no swapping and plenty of memory available to grow. And then we happened to run 'ulimit' (..the command I derived my blog title from):
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 28138
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


It turns out that among the three running nodes the total number of threads started was already 900+, and the fourth node didn't have enough user processes available to start. What led us down a totally wrong path was the "OutOfMemory" error - maybe an overused term in this scenario. Eventually we found that under Linux, threads are counted as processes, so any limit on the number of processes also applies to threads. A heavily threaded app can quickly run out of threads.
There is a security implication too. As we researched the topic more we came across the "Bash fork() bomb - :(){ :|:& :};:". This is a bash function that calls itself recursively and is often used by Unix administrators to test process limits. An "unlimited" max user processes setting could also be misused to carry out a Denial of Service attack: exhaust the total number of threads, and applications running on that system can no longer start any new ones.
The following command can be used to find out how many threads a user has already started:
$ ps -L -u <username> | wc -l
If processes are started by a user account, this command, along with "ulimit -a", is a useful tool to figure out how many more processes can still be started - a key mechanism for system provisioning.
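As a rough sketch of that provisioning check (assuming Linux with procps `ps`; the script layout is mine, not part of our actual tooling):

```shell
#!/bin/bash
# Compare this user's thread (lightweight process) count against the
# per-user process limit. On Linux each thread counts toward
# "max user processes" (ulimit -u).
user=${USER:-$(id -un)}
threads=$(ps -L -u "$user" --no-headers | wc -l)
limit=$(ulimit -u)
echo "threads in use: $threads"
echo "nproc limit:    $limit"
if [ "$limit" != "unlimited" ]; then
    echo "headroom:       $((limit - threads))"
fi
```

Run as any user; when the headroom approaches zero, new threads fail with the misleading OutOfMemoryError described above.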
In the end, for the time being, "ulimit -u unlimited" was good enough for us to continue our testing.
Enjoy!

Wednesday, January 18, 2012

Measuring a Toddler's weight and an Object's size in heap

...And the common problem is that they both need some warm-up phase. Toddlers are notorious for not standing still, and if you have one you know the first reading is almost always wrong. But for these naughty kids there is a way to measure it - hold them in your arms, measure your weight with the kid, then measure yours alone and subtract. The same model can be applied when Objects need to be measured for their size in a heap. Why? Because flakiness does exist in JVMs, and based on sheer observation the first reading is almost always wrong. So, not being part of a JVM team, how am I supposed to measure an Object's size fairly accurately? By reducing the flakiness. By measuring it multiple times. By minimizing the effect of chaos.
Problem: To provide an API that can measure the size of an Object passed to it

Solution: Clean up the JVM, measure the heap size, create multiple instances of the Object, create strong references to these instances, clean up the JVM again and measure the heap. Take the difference and divide by the total number of Objects used.

If you are looking to measure the size of Objects in a Coherence cache then look into Coherence's MemoryCalculator APIs. The solution here uses Coherence's PoF framework for serializing and deserializing non-Serializable Objects.

1. We need to create multiple instances of the passed Object so that strong references can be maintained to them and GC cannot collect them. So how do we make multiple instances of the passed Object? There are multiple options -
  • If the Object implements Serializable (or any Serializable type) - serialize the Object into a byte array, then reconstruct new instances from that byte array.
  • If the Object is Cloneable - clone the Object to create a new instance.
  • If the Object is neither Serializable nor Cloneable - Oracle Coherence provides a mechanism to serialize a non-serializable Object called Portable Object Format. PoF, as it is commonly called, allows programmers to write external serializers for an Object that does not implement the Serializable interface. These serializers, and the Objects they serialize and deserialize, can be defined in a POF configuration and loaded via the Coherence system property tangosol.pof.config. Once this is done the Object is ready to be serialized.
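For the first option, a minimal sketch of creating a fresh instance via serialization might look like this (the `deepCopy` helper name is mine, not any standard API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;

public class DeepCopy {

    // Round-trip an object through a byte array to get a new,
    // structurally equal instance (option 1 above).
    static Object deepCopy(Serializable obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ArrayList<String> original = new ArrayList<>();
        original.add("measure me");
        Object copy = deepCopy(original);
        // A distinct instance with equal contents
        System.out.println(copy != original && copy.equals(original)); // prints "true"
    }
}
```

Each call yields a brand-new object graph, which is exactly what the strong-reference array in the measurement loop needs.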

For the third option Coherence provides a utility in ExternalizableHelper to convert the Object into a Binary:


ConfigurablePofContext pofContext = new ConfigurablePofContext ("my-pof-config.xml");
then:
Binary binObj = ExternalizableHelper.toBinary (objToBeMeasured, pofContext);

This binObj can then be used to create new instances to make multiple strong references:
long beforeSize = 0;
Object[] objects = new Object[1000];
Runtime runTime = Runtime.getRuntime();

for (int i = -1; i < 1000; ++i) {
    Object o = ExternalizableHelper.fromBinary(binObj, pofContext);
    // -- Reject the very first instance as a warm-up
    if (i >= 0) {
        objects[i] = o;
    } else {
        o = null;
        // -- Execute GC (see the loop below), then take the baseline
        beforeSize = runTime.totalMemory() - runTime.freeMemory();
    }
}

Execute GC again, then:
long afterSize = runTime.totalMemory() - runTime.freeMemory();

The estimated size of one Object is then (afterSize - beforeSize) / 1000. I also found a very good implementation of "Execute Garbage Collection" in an article on javaworld.com, which I am reproducing here:


for (int i = 0; i < 4; ++i) {
    long m1 = runTime.totalMemory() - runTime.freeMemory();
    long m2 = Long.MAX_VALUE;
    for (int j = 0; (m1 < m2) && (j < 500); ++j) {
        runTime.runFinalization();
        runTime.gc();
        Thread.yield();
        m2 = m1;
        m1 = runTime.totalMemory() - runTime.freeMemory();
    }
}
Watching your baby's and your Object's weight is critical, and now you know why. Enjoy!

Thursday, June 30, 2011

Enums, PropertyChangeListener and State machine

State machines are key structures for building stateful applications over stateless messages. Following is a simple unidirectional state machine used as a compute engine, with different processors triggered at different states of the machine. The machine has three states - Begin, Work and Finish.

When a Session is started the machine sets itself in the Begin state. Each next() then switches the machine from the current state to the next. Processor(s) can be registered at different states of the machine.

The states are Begin -> Work -> Finish -> Null (Sink) and are defined via an Enum:
public enum State {
  BEGIN,
  WORK,
  FINISH;
}

Let's start with a Session (also the state machine)

public class Session 
{

  /**
   * Create a Session state machine with three uni-directional states.
   * BEGIN -> WORK -> FINISH -> Null
   */
  private enum SessionState 
  {

    BEGIN() {
        @Override
        public SessionState next() {
            return WORK;
        }
    },

    WORK() {
        @Override
        public SessionState next() {
            return FINISH;
        }
    },

    FINISH() {
        @Override
        public SessionState next() {
            return null;
        }
    };

    public abstract SessionState next();

   }

private SessionState STATE = SessionState.BEGIN;

// -- Dedicated monitor: STATE is reassigned (and eventually null),
// -- so it is not a safe object to synchronize on
private final Object stateLock = new Object();

private String sessionId;

private PropertyChangeSupport propertyChangeSupport;

private Session() {
    propertyChangeSupport = new PropertyChangeSupport(this);
}

public static Session newSession() {
    return new Session();
}

void next() {
    synchronized (stateLock) {
        SessionState old = STATE;
        if (old == null) {
            return;
        }
        STATE = old.next();
        if (STATE != null) {
            propertyChangeSupport.firePropertyChange(STATE.name(), old, STATE);
        }
    }
}

boolean isValid() {
    return STATE != null;
}

String sessionId() {
    switch (STATE) {
        case BEGIN:
            if (sessionId == null) {
                sessionId = UUID.randomUUID().toString();
            }
            return sessionId;
        case WORK:
            if (sessionId == null || sessionId.trim().equals("")) {
                throw new IllegalStateException("Session Id can not be null in WORK");
            }
            return sessionId;
        case FINISH:
            if (sessionId == null || sessionId.trim().equals("")) {
                throw new IllegalStateException("Session Id can not be null in  FINISH");
            }
            return sessionId;
        default:
            throw new IllegalStateException("Session has not yet started");
    }
}

void addStateChangeListener(State state, PropertyChangeListener listener) {
    propertyChangeSupport.addPropertyChangeListener(state.name(), listener);
}

}


Let's build some processing units - one for the Work state and the other for the Finish state


public class WorkSessionListener implements PropertyChangeListener {

    private final String sessionId;

    public WorkSessionListener(String sessionId) {
        this.sessionId = sessionId;
    }

    public void propertyChange(PropertyChangeEvent evt) {
        System.out.println("[" + sessionId + " - Working]=" + evt.getOldValue() + " -> " + evt.getNewValue());
    }
}

public class FinishSessionListener implements PropertyChangeListener {

    private final String sessionId;

    public FinishSessionListener(String sessionId) {
        this.sessionId = sessionId;
    }

    public void propertyChange(PropertyChangeEvent evt) {
        System.out.println("[" + sessionId + " - Finish]=" + evt.getOldValue() + " -> " + evt.getNewValue());
    }
}


Now let's test it


import org.junit.Test;

public class SessionStateChangeTest {
@Test
public void testSessionStateChange() {
    final Session session = Session.newSession();
    final Session session2 = Session.newSession();

    session.addStateChangeListener(State.WORK, new WorkSessionListener(session.sessionId()));
    session.addStateChangeListener(State.FINISH, new FinishSessionListener(session.sessionId()));


    session2.addStateChangeListener(State.WORK, new WorkSessionListener(session2.sessionId()));

    Thread t1 = new Thread(new Runnable() {
        public void run() {
            while (session.isValid()) {
                session.next();
            }
        }
    });

    Thread t2 = new Thread(new Runnable() {
        public void run() {
            while (session2.isValid()) {
                session2.next();
            }
        }
    });

    t1.start();
    t2.start();

    try {
        t1.join();
        t2.join();
    } catch (InterruptedException exp) {

    }
}

}


We should see two BEGIN -> WORK messages and one WORK -> FINISH message across the two session ids.

Enjoy!

Tuesday, May 17, 2011

My view on DDD

In any DDD practice there are two concerns: the core domain and the core challenges. Core challenges like performance, throughput, scalability, latency, etc. drive the selection of the most efficient model that projects the core domain.

Tuesday, March 15, 2011

To find which class contains a method in a given jar

Okay.. I vaguely remembered a method that existed somewhere that could do what I wanted, but I could not recall the class it was in, and all I had was the jar. So I wrote this:


#!/bin/bash
for i in `jar tvf $1 | grep class | awk '{print $8}' | sed -e "s/\//./g" | sed -e "s/\.class//g"`;
do
    echo $i;
    javap -classpath $1 $i | grep $2;
done

Enjoy!

Thursday, September 23, 2010

Returning empty collection or a null?

Should a null value be preferred for an expected collection, or an empty Collection object? Let's see through a program:


import java.util.Collection;
import java.util.Vector;

import static java.lang.System.out;

public class E {

    public void isNull(Collection col) {
        long t = System.nanoTime();
        if (col == null) {
        }
        out.println("Time to check null: " + (System.nanoTime() - t));
    }

    public void isEmpty(Collection col) {
        long t = System.nanoTime();
        if (col.isEmpty()) {
        }
        out.println("Time to check empty: " + (System.nanoTime() - t));
    }

    public static void main(String args[]) {
        E e = new E();
        e.isNull((Collection) null);
        e.isEmpty(new Vector());
    }
}
Running this on my laptop with Java 6 produced:
Time to check null:  45234
Time to check empty: 10750

Something to think about..
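Beyond the timing micro-benchmark, the usual argument for returning an empty collection is that callers can skip the null check entirely; a small sketch (the `pendingOrders` method and its data are illustrative only):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Orders {

    // Returning an empty, immutable list lets callers iterate
    // without a null check.
    static List<String> pendingOrders(boolean anyPending) {
        if (!anyPending) {
            return Collections.emptyList();
        }
        return Arrays.asList("order-1", "order-2");
    }

    public static void main(String[] args) {
        int count = 0;
        for (String o : pendingOrders(false)) { // safe even when empty
            count++;
        }
        System.out.println(count); // prints "0"
    }
}
```

Collections.emptyList() returns a shared immutable instance, so there is no per-call allocation cost either.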

Sunday, August 22, 2010

Software development process - which one to use?

If you came to this blog hoping to find a concrete answer then, before you get disappointed, let me tell you - I don't know. I don't know, like many others, including those who claim they do. A software development process is like a system of political governance: one system is not made for everyone. Before choosing one, do not read why one fails but why one succeeds. If the waterfall process were that bad it would never have been used. Democracy is popular not because it is the best system and everyone who follows it is always happy. It is popular because it gives you the best safety net when things start to go wrong. Software development is along the same lines.

If you get hold of five of the best engineers, believe me, you would succeed with any process. If you have an idea and you are the only one developing it full time, no process is the best process. The process becomes critical when you have a varying mix of members, some strong in certain areas and others elsewhere. How efficient are your business owners? Do they just come to the office to flash their shiny cars, or do they really spend time with your team to delegate the real requirements? Do you produce documents because you think they ought to be produced, or because you think they are a good source of concrete referential descriptions? Have you thought about whether just a blog or a wiki could replace the documents in your files?

If Agility works for you then use it. If 15-minute morning standup meetings with a whiteboard full of yellow stickies sound ridiculous then you don't have to have them. If pairing up accelerates the deliverable then yes, do it. If one room with a big table makes your team more productive then invest in that instead. It really is about what works for you. Don't let articles, blogs and books drive you. Think about what really worked for you in the past and how that can be modified to adapt to new challenges. Process names exist to publish books; read them as a novel, not necessarily to become a character in one.

Thursday, May 06, 2010

Ask the problem first

There is something called "A Problem Statement". No, this is not a 20-page business requirement document. It's a single (at most two) line statement that describes what a customer is trying to solve. Believe me, the ones who cannot give it don't know what they are doing. With repetitive refinement these are some typical problems you get:

  • To speed up our company's web-site's responses.
  • To stop our application getting bogged down under heavy load.
  • To exercise stocks under xx Milli-seconds.
  • To bring the application back up with x% of data loss with in 't' minutes.
If you look at them, these problem statements did not include any implementation details or even functionality details. They were all business problems.

Why a Problem Statement is important?

Nah.. I am not a qualified MBA, but I will tell you a story. A business analyst comes to Engineering and asks for a cassette player for a car so that the customer can play his songs. Engineering builds it and the player gets integrated into the car. The analyst is happy. The customer is happy and the analyst moves on (gets a promotion). But was he/she right? No. Because the customer never asked for a cassette player. What he really meant was "I need a way to play my favorite music while I drive". So what happens? Cassette players get outdated and we build CD players. In a few years we build MP3 players, and so on. And if we are not cautious you could end up driving a car with a cassette player, a CD player and an MP3 player. Believe me, it's a bad car.

So what's wrong?
The baggage is wrong. Engineering ends up supporting features that have no revenue potential. The better solution: you gave the customer a cassette player in the '80s because that was what was technologically possible then. In the '90s your car only offers a CD player, and so on. And in turn you assist a parallel industry or professional services that provide gadgets to convert your cassettes to MP3s. The continuous refactoring of your product is as important as providing solutions to the problems. So never lose the Problem Statement. It's not the cassette but the music that customers are really paying for.

Monday, March 15, 2010

Development problems I hate to get into

Here are my pet peeves when it comes to Software development:

  1. Clicking New Project and selecting type 'X' in your Eclipse IDE.
  2. Deploying unending Apache libraries.
  3. PowerPoint-driven architectural directions without self-contained working prototypes.
  4. Helpless component dependencies and NoClassDefFoundError.
  5. Having to change standard J2SE modules to work on a specific platform.
  6. URLs in response to an RFH that you have already been banging your head on.

Wednesday, February 11, 2009

Issues and non issues of Distributed computing

By distributed computing I refer to platforms, infrastructure or technology that involve multiple JVMs performing a cohesive operation. So what are the core issues and non-issues to weigh before we select the right platform?

  • Non-issue: Moving an object from one system (JVM) to another
    • How compact is the object over the wire?
  • Non-issue: Synchronizing changes from one JVM to another
    • How fast is the synchronization?
    • How many JVMs need to be synchronized?
    • Is the synchronization synchronous or asynchronous?
  • Non-issue: Continuous availability of the application
    • What state does the failure of one server leave the data in?
    • What happens to in-flight transactions upon failure?
    • What state is the persistent storage in?
    • Upon recovery, what happens to the new node?
    • How is the past affecting the present for the recovered node?
    • How is a user request handled if it was submitted before the node's failure but processing had not yet completed?
  • Non-issue: Managing common state across multiple services
    • Is that a single point of failure?
    • What is the transaction isolation?
  • Non-issue: A feature list
    • How extensible is it?
    • Are the integration points well defined?
There are plenty of products that address the non-issues but fail miserably at the real questions underneath them. So before selecting a distributed computing platform, look not only at the supported features but also at how they are implemented. Read more on Coherence

Sunday, February 01, 2009

JPA Annotations in Domain Model

Why I don't like them?

  1. They corrupt the domain object. A domain object doesn't have to know that it is meant for persistence to a relational database.
  2. Vis-a-vis Coherence, a CacheStore that is meant to interact with an external data source is not necessarily meant for a relational database. A CacheStore could hook up a database, a directory server or even another Coherence cluster. @OneToOne makes no sense if the external data source is a Coherence cluster.
  3. Having a separate OR mapping is much cleaner.

Sunday, January 18, 2009

Implementing JMS Queue on top of Oracle Coherence

In this series about building JMS on top of the reliable and fast Oracle Coherence data grid, I added the functionality of a JMS Queue. Projects like ezMQ reiterate the point that the Coherence data grid should be perceived as a highly available System of Record, not a mere cache provider. Building a JMS Queue is a little tricky compared to implementing a JMS Topic on top of Oracle Coherence. The reason is Coherence's inherent behavior of broadcasting cache events to all MapListeners. The solution revolves around the following method:

private void dispatchQueueEvent(MapEvent mapEvent) {
    EventListener[] eList =
        m_listenerSupport.getListeners(AlwaysFilter.INSTANCE).listeners();
    int size = eList.length;
    MapListener mListener = (MapListener) eList[Base.getRandom().nextInt(size)];
    mapEvent.dispatch(mListener);
}
The method collects all the Listeners registered on that cache node, picks one from the list at random and dispatches the MapEvent to it. The second component is a custom NamedCache that extends Coherence's WrapperNamedCache. The key method is its addMapListener():
public void addMapListener(MapListener listener, Filter filter, boolean fLite) {
    if (singleListener == null) {
        singleListener = new InternalListener();
    }
    m_listenerSupport.addListener(listener, AlwaysFilter.INSTANCE, false);
    super.addMapListener(singleListener, filter, fLite);
}
And then, at the end, an EntryProcessor that makes sure that even if Listeners are registered across the cluster, one and only one of them receives the message. This is done by setting an event-dispatch state that every thread checks before dispatching the event. The class is pretty simple as well:

private class MLSEntryProcessor implements InvocableMap.EntryProcessor, Serializable {

    private MapEvent mapEvent;

    public MLSEntryProcessor(MapEvent mapEvent) {
        this.mapEvent = mapEvent;
    }

    public Object process(InvocableMap.Entry entry) {
        String state = (String) entry.getValue();
        if (state == null) {
            try {
                dispatchQueueEvent(mapEvent);
                entry.setValue(STATE.DISPATCHED.name(), true);
            } catch (Exception exp) {
                exp.printStackTrace();
            }
        }
        return null;
    }

    public Map processAll(Set set) {
        return Collections.EMPTY_MAP;
    }
}
More details and more source code are provided at http://sites.google.com/site/miscellaneouscomponents/Home/ezmq
Enjoy!

Saturday, January 10, 2009

Integrating Oracle Coherence with Twitter

Months ago I wrote a program to integrate Calendar with Twitter. This time I integrated Twitter with the Coherence data grid. Programmatically this is a no-brainer - data is put into a Coherence cache, and a cache listener publishes the data (message) to Twitter. I am a big fan of Twitter: simple interface, revolutionary idea and an awesome channel. So what I did was expand my implementation of a JMS Subscriber for Oracle Coherence and add an interface to tweet the JMS Message. Read more about ezMQ here.... Following is sample code that is a subscriber of a Coherence Topic and a publisher to Twitter:

public class Subscriber implements MessageListener {

    private String un = "<your_twitter_account_id>";
    private String pw = "<your_twitter_password>";

    public Subscriber() {
    }

    private void twitter(String message) throws MalformedURLException, IOException {
        String credentials =
            new BASE64Encoder().encode((un + ":" + pw).getBytes());
        URL url = new URL("http://twitter.com/statuses/update.xml");
        URLConnection uC = url.openConnection();
        uC.setDoOutput(true);
        uC.setRequestProperty("Authorization", "Basic " + credentials);
        OutputStreamWriter wR = new OutputStreamWriter(uC.getOutputStream());
        wR.write("status=" + message);
        wR.flush();
        wR.close();

        // -- Get the response back
        BufferedReader bR =
            new BufferedReader(new InputStreamReader(uC.getInputStream()));
        String line = null;
        while ((line = bR.readLine()) != null) {
            System.out.println(line);
        }
        bR.close();
    }

    public static void main(String[] args) throws Exception {
        Subscriber s = new Subscriber();
        InitialContext ctx = new InitialContext();

        // -- Create the factory
        TopicConnectionFactory factory =
            (TopicConnectionFactory) ctx.lookup("TopicConnectionFactory");

        // -- Connecting to Proxy
        TopicConnection connection = factory.createTopicConnection();

        // -- This is a NamedCache
        Topic topic = (Topic) ctx.lookup("Topic");

        TopicSession subSession =
            connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        TopicSubscriber subscriber = subSession.createSubscriber(topic);
        subscriber.setMessageListener(s);

        System.out.println("Click to end");
        System.in.read();
    }

    public void onMessage(Message message) {
        try {
            TextMessage tMsg = (TextMessage) message;
            String text = tMsg.getText();
            // -- Send the message to Twitter
            twitter(text);
        } catch (JMSException e) {
            e.printStackTrace();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Enjoy!

Friday, December 12, 2008

Softer part of Software Architecture

Or the harder part, depending on the way you look at it. Almost 90% of applications do the following in some form or the other:

  1. Bring a message (data) from one system to another.
  2. Transform the data into a form that your system recognizes.
  3. Process the data.
  4. Make it available for reference.
  5. Send it to another system.
  6. Archive it.
Critical things to consider:
  • How many components do we introduce between each of these steps?
  • How much latency does each component introduce?
  • How many jars/libraries would you need to get this system to work?
  • Can I upgrade one layer without affecting others?
  • How many systems do you need to configure?
  • If one system is unavailable can others survive?
  • How platform independent is one system from another?
  • Can we have single script/single click deployment process?
  • How many people would you need, and with what different skill sets?

Saturday, December 06, 2008

Why using static block for Singleton will not work in all cases

Probably everyone has used the Singleton pattern, and following is a typical implementation:


public class Single {
    // -- volatile is required for double-checked locking to be safe (Java 5+)
    private static volatile Single single;

    // -- Rule#1: Make sure no one outside the class can call new
    private Single() {
    }

    // -- Rule#2: Provide another channel to get an instance
    public static Single getInstance() {
        if (single == null) {
            synchronized (Single.class) {
                if (single == null) {
                    single = new Single();
                }
            }
        }
        return single;
    }
}
I came across another implementation that, to be frank, I never thought was used to implement singletons.

public class Single {
    private static Single sing;

    static {
        sing = new Single();
    }

    private Single() {
    }

    public static Single getInstance() {
        return sing;
    }
}
Even though it's a tricky way, bluntly, this is a wrong implementation for one simple reason: it does not work if the class is loaded lazily (Class.forName("Single")) and the constructor needs a parameter - the static block runs at class initialization, before any parameter can be supplied. So I would rather continue to stay with the good old implementation.
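For completeness, the initialization-on-demand holder idiom gives lazy, thread-safe creation without double-checked locking (though, like the static block, it cannot take constructor parameters); a sketch, with class names of my own choosing:

```java
public class LazySingle {

    private LazySingle() {
    }

    // The JVM initializes Holder (and thus INSTANCE) only on the first
    // call to getInstance(); class initialization is thread-safe per the JLS.
    private static class Holder {
        static final LazySingle INSTANCE = new LazySingle();
    }

    public static LazySingle getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        System.out.println(LazySingle.getInstance() == LazySingle.getInstance()); // prints "true"
    }
}
```

No volatile and no synchronized block needed, which is why this idiom is often preferred over double-checked locking.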

Saturday, November 15, 2008

Timer service in Oracle Coherence

Wouldn't it be nice to have a Timer Service that runs inside a Coherence cluster and can execute pre-established jobs at a certain time of day, every day? Such a component can be written as a separate application or configured inside Coherence. It is not available off the shelf, but I have uploaded a simple Quartz-based implementation to the Miscellaneous Components website. Please read more...

Friday, November 07, 2008

Transfer of Power pattern

This is a little Java trick that came out of the following Problem Statement: how to call a pre-defined method of an invoking class if it is defined in it? Or, invocation by precedence.
If I have three classes, Invoker1, Invoker2 and MyUtility, and both Invokers call a method of MyUtility, is it possible for MyUtility to call a method of the Invoker, if defined, instead of its own? I call it the transfer-of-power pattern - an Invoker class dictates what to call not by condition or hierarchy but by where the definition lives.

// -- MyUtility.java
public class MyUtility {
    private void preInvocation() {
        ...
    }

    public void doSomething() {
        System.out.println("I am doing something");
    }
}

// -- Invoker1.java
public class Invoker1 {
    public void invoke() {
        new MyUtility().doSomething();
    }

    public void preInvocation() {
        System.out.println("Invoker1's preInvocation called");
    }
}

// -- Invoker2.java
public class Invoker2 {
    public void invoke() {
        new MyUtility().doSomething();
    }
}
The way this works is: if a preInvocation method is defined in the invoking class, MyUtility should call it instead of its own. A workflow is defined in a way that does not require tight integration with an object hierarchy.

How to do it? Define MyUtility in the following way:

// -- MyUtility.java
public class MyUtility {
    private void preInvocation() {
        System.out.println("Pre-Invocation of MyUtility");
    }

    public void doSomething() {
        Throwable t = new Throwable();
        StackTraceElement[] elements = t.getStackTrace();
        String callerClassName = elements[1].getClassName();
        try {
            Class clz = Class.forName(callerClassName);
            Method method = clz.getDeclaredMethod("preInvocation");
            // -- Note: this invokes on a fresh instance of the caller class
            method.invoke(clz.newInstance());
        } catch (Exception exp) {
            // -- Any Exception and call its own.
            preInvocation();
        }
    }
}
# Run Invoker1 => Invoker1's preInvocation called
# Run Invoker2 => Pre-Invocation of MyUtility
Have fun!

Tuesday, September 23, 2008

Replacing proxy information on the fly

This is another neat trick up Coherence's sleeve. Let's say you have an *Extend client that connects to a set of Coherence proxies. If the proxies for some reason are down and none is available, and you want to connect to another cluster that you know is up (like a DR site), how could you connect to it without restarting the application or using a DNS switch? Following is a way to replace the Extend configuration on the fly:


public void changeConfigOnTheFly(String host, String port) {
    ConfigurableCacheFactory factory = CacheFactory.getConfigurableCacheFactory();
    XmlElement cc = new SimpleElement("cache-config");
    XmlElement rcs = cc.addElement("caching-schemes")
                       .addElement("remote-cache-scheme");
    rcs.addElement("service-name")
       .setString("ExtendTcpCacheService");
    XmlElement xE = rcs.addElement("initiator-config")
                       .addElement("tcp-initiator")
                       .addElement("remote-addresses");
    XmlElement rAs = xE.addElement("socket-address");
    XmlElement add = rAs.addElement("address");
    add.setString(host);
    // -- Named portElem to avoid shadowing the "port" parameter
    XmlElement portElem = rAs.addElement("port");
    portElem.setString(port);

    // -- Now set the new XmlElement
    factory.setConfig(cc);

    // -- And of course you need to restart the Cache Service thread
    factory.ensureService("ExtendTcpCacheService").stop();
    factory.ensureService("ExtendTcpCacheService").start();
}

Yay! And you can keep your *Extend cache configuration simple and tidy by replacing the proxy configuration on the fly. Something is missing in this configuration though - the cache mappings. Actually, you do not need them. As long as cache schemes are defined, caches too can be created on the fly:

CacheService cS = (CacheService)CacheFactory.getService("ExtendTcpCacheService");
NamedCache sCache = cS.ensureCache("MyCache", null);

Don't you just love Oracle Coherence?

Saturday, September 06, 2008

Bringing Database application to Grid Computing in one day

So you have a database application with tables and relations. The application is performing poorly and not scaling, and it is time to move. But it is so critical that a rewrite should be undertaken only with precise planning. This is a common scenario at numerous clients. "Is there a way I can use the Coherence grid without too much programming effort while my IT staff learns the new technology?" Yes! As long as you know what a Map is. Follow these steps and you are ready to Griditize your application in no time:


    Set up the tools:
  1. Download jDeveloper 10g (or the latest version) from Oracle.

  2. If AspectJ support is not already enabled in jDev, download the AspectJ plugin from this link and drop the jar in $JDEV_HOME/jdev/extensions.


    Create Java objects from database tables:
  3. Start jDeveloper and go to the Connections tab. If the Connections tab does not appear, click the "View" menu and select Connection Navigator.

  4. Right-click on Database connection and create a new connection. Test the connection for success.

  5. Create a new jDeveloper project and give it a name.

  6. Right-click on the project and select Project Properties. Under Technology Scope, add Toplink. Don't worry, we are not mandating the use of Toplink, even though we highly recommend it when you need database persistence and data loading (cache stores etc.). Click OK.

  7. Right-click on the project and select New. In the window that appears, select the Toplink option under Business Tier, then the first option, "Java Objects from Tables". Give the Toplink map a name and select the database platform and the database connection you created in step 4. Select the appropriate schema and import tables. Select the tables that you want to create Java POJOs from, moving them from the Available area to the Selected area. Click Next and provide a Java package name, say pkgname. Note this package name; you will have to use it later. After finishing the wizard you will see a bunch of Java classes in your project. You are halfway there. Go and grab a cup of coffee.


    Kill some time:
    While you are finishing your coffee, read this: Aspect programming with Coherence

    Make the Java POJOs Grid ready:
  8. Create a new empty Java class under the <pkgname>.root package. I would name it EzRoot (the motto being: if it is not EZ (easy) it is useless). Copy the ELHelper class from the blog you just read.

  9. Create a new aspect named EzRootAspect with the code copied from TestAspect.aj as shown in the earlier blog. Replace all "Test." strings with "EzRoot." and add the following right below the aspect declaration:
    declare parents: <pkgname>.* extends EzRoot;
    Don't forget to import EzRoot, and make sure the package names of ELHelper and EzRootAspect are correct.

  10. Right-click on the project and select Rebuild. You can also create a new deployment profile and build a jar file. The compiled jar contains Coherence Grid-ready Java classes. Each Java class represents a table from your data model and can now also be used directly with Coherence.
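Once the jar is built, the generated POJOs can be used through Coherence's NamedCache, which is just a java.util.Map. Here is a rough sketch of what that usage looks like; the Customer class is a hypothetical stand-in for a generated POJO, and a plain HashMap stands in for the cache so the snippet runs anywhere (with Coherence you would obtain the Map via CacheFactory.getCache(...) instead):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical POJO of the kind Toplink generates from a CUSTOMERS table.
class Customer {
    private final long id;
    private final String name;

    Customer(long id, String name) { this.id = id; this.name = name; }

    long getId()     { return id; }
    String getName() { return name; }
}

public class GridSketch {
    public static void main(String[] args) {
        // With Coherence this would be:
        //   NamedCache customers = CacheFactory.getCache("CustomerCache");
        // NamedCache extends java.util.Map, so the usage below stays identical.
        Map<Long, Customer> customers = new HashMap<>();

        Customer c = new Customer(42L, "Acme Corp");
        customers.put(c.getId(), c);

        Customer back = customers.get(42L);
        System.out.println(back.getName()); // prints "Acme Corp"
    }
}
```

The point is that once your application talks to a Map interface, swapping the HashMap for a distributed NamedCache is a configuration change, not a rewrite.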


In ten steps you have cut a week's work down to half an hour. The rest is your creativity. What's next? Let's see... I would love to give you a way to generate, quick and dirty, a Coherence cache configuration so you can go live in one day ;). Maybe some seasoned domain modeler can tell me the best way to tell the difference between an Entity and a Value Object without searching for an "ID" attribute. That way I can make an intelligent decision about how many caches should be created. The fun begins now...

Saturday, August 30, 2008

Extending Model Driven Design beyond code

Whoever is familiar with Eric Evans should also be familiar with the following diagram:
The diagram is pretty much the essence of good design. Even though it is mostly referenced when working on a single application and deciding how its components should be created, it also has the potential to be extended to a higher-level view for enterprise solutions. Besides the importance of the Entities themselves, two interesting pieces of the diagram are "Services" and "Repositories". They are not just sets of APIs but extend to the infrastructure as well.


  • What about Services? How does a business create services? How does it manage them? How can these services be orchestrated and combined to produce a workflow? SOA combined with BPEL is what fills in the Services gap.

  • Now what about Repositories and aggregation? Again, it is not just an API: how would you manage gigabytes and gigabytes of data in a fault-tolerant and high-performance way? And how do you maintain the integrity of your Entities with this data management system? Oracle Coherence offers a perfect solution to address the infrastructure behind Repositories and aggregation.
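The Repository half of the diagram can be sketched as a thin interface over a Map, with Coherence supplying the distributed Map behind it. A minimal sketch (the Trade and TradeRepository names are illustrative, not from any framework; a HashMap stands in for the cache so the snippet is self-contained):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// A domain Entity with an identity.
class Trade {
    final String tradeId;
    final double notional;
    Trade(String tradeId, double notional) { this.tradeId = tradeId; this.notional = notional; }
}

// The Repository from the diagram: just an interface, no infrastructure leaking through.
interface TradeRepository {
    void save(Trade t);
    Optional<Trade> findById(String tradeId);
}

// A Map-backed implementation. With Coherence, the Map would be a NamedCache
// obtained via CacheFactory.getCache(...), and the rest stays identical.
class MapTradeRepository implements TradeRepository {
    private final Map<String, Trade> store = new HashMap<>();
    public void save(Trade t) { store.put(t.tradeId, t); }
    public Optional<Trade> findById(String id) { return Optional.ofNullable(store.get(id)); }
}

public class RepositorySketch {
    public static void main(String[] args) {
        TradeRepository repo = new MapTradeRepository();
        repo.save(new Trade("T-1", 1_000_000));
        System.out.println(repo.findById("T-1").isPresent()); // prints "true"
    }
}
```

The domain layer only ever sees TradeRepository; whether the store is a local Map or a partitioned grid is an infrastructure decision made elsewhere.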

The concept of an Entity, an object with a unique identity, matches perfectly what a Map is designed for. And Coherence is built for very large data sets behind the simple Map interface of NamedCache. The question is what to manage in this Map: Entities or Value Objects? Entities do have IDs. What about Value Objects? Why not? Objects already have a behavioral identity in their hashCode(). Isn't there a difference between an ID and being Identifiable? Last part: Factories and layered architecture. Factories can very easily hide behind services and unique interfaces for each individual application. I can't see any centralized concept for Factories... Can you?
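One hedged way to draw that Entity/Value Object line in code, without searching for an "ID" attribute: an Entity's equals()/hashCode() hang off its identity alone, while a Value Object's hang off all of its state. A minimal sketch (the Order and Money classes are my own illustrations):

```java
import java.util.Objects;

// Entity: identity is the ID. Two orders with the same ID are the same order
// even when their state differs.
class Order {
    final long id;
    String status;

    Order(long id, String status) { this.id = id; this.status = status; }

    @Override public boolean equals(Object o) {
        return o instanceof Order && ((Order) o).id == id;
    }
    @Override public int hashCode() { return Long.hashCode(id); }
}

// Value Object: no ID. Equality is purely structural.
class Money {
    final long cents;
    final String currency;

    Money(long cents, String currency) { this.cents = cents; this.currency = currency; }

    @Override public boolean equals(Object o) {
        return o instanceof Money
            && ((Money) o).cents == cents
            && ((Money) o).currency.equals(currency);
    }
    @Override public int hashCode() { return Objects.hash(cents, currency); }
}

public class IdentitySketch {
    public static void main(String[] args) {
        // Same ID, different state: still the same Entity.
        System.out.println(new Order(7, "OPEN").equals(new Order(7, "SHIPPED"))); // true
        // Same state: the same Value.
        System.out.println(new Money(500, "USD").equals(new Money(500, "USD")));  // true
    }
}
```

Either kind can be keyed in a NamedCache; the difference is only whether the key is the ID or the value's own hashCode-backed identity.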