Wednesday, December 30, 2009

I don't know which cache this Object goes to

This came up during a sluggish discussion at a customer's site, after a developer jokingly remarked that he never knew which cache his object went to.

Problem Statement: How to make Coherence objects destination aware?

One way is to annotate the objects that are to be cached. So let's create a simple annotation:

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.PACKAGE})
public @interface Cache {
    String name();
    int ttl() default -1;
    String pk() default "getId";
}
This annotation tells us three things: the name of the cache, the TTL (default -1), and how to find the primary key of the object. The annotation only complements coherence-cache-config.xml; it does not replace it (unlike JPA annotations), so you still need to define all cache names in the configuration file. The next step is to create a sample domain object annotated with @Cache:

@Cache(name = "CacheDestination", ttl=-1, pk="getId")
public class Domain implements Serializable {
    private Key id;
    private int age;
    private String name;

    public Domain() {
    }

    public void setId(Key id) {
        this.id = id;
    }

    public Key getId() {
        return id;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public int getAge() {
        return age;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    @Override
    public String toString() {
        return id + ":" + age + ":" + name;
    }
}
Unfortunately this destination-aware class will not do anything unless whatever puts the object in the cache actually extracts the destination. The Cache annotation also tells us which method to call to find the object's primary key, in this sample getId() (remember JPA?). Developers who do not want to "know" which cache an object goes to need an abstraction layer.

Quite a few customers I have worked with try to create an abstraction layer on top of Coherence so that all caching logic is centralized in one place. Coherence APIs are already highly abstracted. NamedCache is probably the most important interface in all of the Coherence APIs, and it already extends java.util.Map, so an abstraction layer that returns an instance of NamedCache is good enough for Map-centric applications. The problem is that NamedCache is not just a Map: it also provides queryability on top of the data structure, along with events, transactions and process-invocation features that are missing from the vanilla Map contract. Most likely an abstraction layer will miss some of these native features of NamedCache; even JCACHE (JSR 107) dictates only a subset of them.

The challenge is that NamedCache has no knowledge of objects being destination aware or, in plain English, does not understand the Cache annotation, unless I find the product team in a mood generous enough to entertain the idea. So we need to build one ourselves. Let's create a ReducedMap:
public interface ReducedMap {
    public Object put(Object value) throws NoSuchMethodException,
            IllegalAccessException, InvocationTargetException,
            OperationNotSupportedException;
}
and implement it in a custom Map, using the Java delegation pattern to pass cache invocations on to Coherence.
public class AnnoMap extends AbstractMap implements ReducedMap {
    private volatile NamedCache nCache;

    public AnnoMap() {
    }

    public int size() {
        return (nCache == null) ? 0 : nCache.size();
    }

    public boolean isEmpty() {
        // An AnnoMap whose cache has not been resolved yet holds nothing
        return (nCache == null) ? true : nCache.isEmpty();
    }

    public Set entrySet() {
        return (nCache == null) ? Collections.EMPTY_SET : nCache.entrySet();
    }

    public boolean containsKey(Object key) {
        if (nCache == null) {
            Class clz = key.getClass();
            Cache c = (Cache) clz.getAnnotation(Cache.class);
            if (c != null) {
                String name = (c.name() == null) ? clz.getName() : c.name();
                nCache = CacheFactory.getCache(name);
            }
        }
        return (nCache == null) ? false : nCache.containsKey(key);
    }

    public Object put(Object value) throws NoSuchMethodException,
            IllegalAccessException, InvocationTargetException,
            OperationNotSupportedException {
        Class clz = value.getClass();
        Cache c = (Cache) clz.getAnnotation(Cache.class);
        if (c != null) {
            String name = (c.name() == null) ? clz.getName() : c.name();
            nCache = CacheFactory.getCache(name);
            Method m = clz.getMethod(c.pk(), new Class[0]);
            Object key = m.invoke(value, new Object[0]);
            return nCache.put(key, value, c.ttl());
        }
        throw new OperationNotSupportedException("Class not annotated");
    }

    public Object get(Object key) {
        if (nCache == null) {
            Class clz = key.getClass();
            Cache c = (Cache) clz.getAnnotation(Cache.class);
            if (c != null) {
                String name = (c.name() == null) ? clz.getName() : c.name();
                nCache = CacheFactory.getCache(name);
            }
        }
        return (nCache == null) ? null : nCache.get(key);
    }

    public Object put(Object key, Object value) {
        Class clz = value.getClass();
        Cache c = (Cache) clz.getAnnotation(Cache.class);
        if (c == null) {
            throw new UnsupportedOperationException("Value not annotated");
        }
        String name = (c.name() == null) ? clz.getName() : c.name();
        nCache = CacheFactory.getCache(name);
        return nCache.put(key, value);
    }
}
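The cache-name resolution that AnnoMap repeats in containsKey(), get() and put() could be factored into a single helper. Here is a minimal, self-contained sketch; the annotation is repeated (with only its name() attribute) so the snippet compiles on its own, and the helper name cacheNameFor is mine, not from the post:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class CacheNames {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface Cache {
        String name() default "";
    }

    // Resolve the destination cache name: the annotation's name if given,
    // otherwise fall back to the fully qualified class name.
    public static String cacheNameFor(Class<?> clz) {
        Cache c = clz.getAnnotation(Cache.class);
        if (c == null) {
            throw new UnsupportedOperationException("Class not annotated");
        }
        return c.name().isEmpty() ? clz.getName() : c.name();
    }

    @Cache(name = "CacheDestination")
    static class Domain {
    }

    @Cache
    static class Unnamed {
    }
}
```

With this in place, each AnnoMap method would reduce to one line of resolution before delegating to CacheFactory.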
As you can see, AnnoMap is very limited in its capability, but if you take this route, this Map is probably the place to enhance. Let's run a quick test:
public static void main(String[] args) throws NoSuchMethodException,
        IllegalAccessException, InvocationTargetException,
        OperationNotSupportedException {
    Domain d = new Domain();
    // -- KeyObject will be similar to the Domain object, with the same
    // -- cache destination defined in its Cache annotation.

    Key key = new KeyObject(...);
    d.setId(key);
    d.setXXX(..);
    AnnoMap map = new AnnoMap();
    map.put(d);

    System.out.println(map.get(key));
}
Enjoy!

Saturday, December 26, 2009

Thursday, December 17, 2009

After a long time, one more sher

On this side a dried-up river, on that side a flood of water;
tell me now, O God, if I must swim, which way do I swim?
On this side a broken boat, on that side no trace of the boatman;
don't stay silent now, tell me, if I must sit, which side do I sit?
I stand alone at a crossroads on sinking ground;
tell me now, O God, if I must go, which way do I go?

Tuesday, November 24, 2009

Thanks Congress for giving BJP what they wanted... An Issue

Burdens of past should not drag the wishes of future. Ram Temple is a similar issue that has become a playground to gain political mileage. Its in the news again and of course thanks to Congress for giving BJP an issue, once again. You would be politically naive to believe BJP leadership had no knowledge of plans of bringing the disputed structure down while in power in the state where it happened. Level of involvement of its leaders could be debatable but their ignorance is something that can not be believed. It can not be believed in a similar way as if the structure was a center of any relevance in the middle of a Temple town, the birth place of Ram. The disputed structure was less of a religious importance to Muslims in India than being a relic of the history. It neither represented Islamic culture nor an icon of introduction of Islam in India other than a thumping sign of Victory of a Muslim King over the local Hindu rulers. The place also called Masjid-i-Janmasthan was a place where till late 19th century Hindus and Muslims prayed together peacefully. Today more than a few centuries later we have a constitution to respect and abide by that is key to the survival of Indian democracy and is also the life line of our individual identities. BJP who championed the cause of reconstruction of Ram temple, after losing a series of recent political battles and thoroughly rejected by people sees this issue as a life source of its own existence. This is what BJP wanted and so does the Samajwadi Party who got rejected by its own traditional Muslim electorate in the recent elections. BJP is not for Hindus, so not SP for Muslims and Congress is no champion of secularism either. So it is now up to the common mass to scream and let them know that one of the most pious places of Hindu religion can't be used to spill blood of any fellow Indians irrespective of their religious beliefs or its bloody past.

Sunday, November 15, 2009

The clouds from above


What can I say, seeing these clouds this way now?
From below they look like moist eyes, from above like my dreams.
This long soft sheet of golden crimson flowers
lies spread like a shade above, as if wishing to cover us.
Or perhaps it is a river in stormy flow,
and my dreams are its water, turning to clouds to rain forever.

Friday, November 13, 2009

My Name with ditaa

Sunday, November 01, 2009

Functional Programming with Coherence

If you come across the following, don't be surprised. It's called functional programming.

object FP {
  def doSomething(callback: () => Unit) {
    callback()
  }
  def letsPrint() {
    println("Ha ha!")
  }
  def main(args: Array[String]) {
    doSomething(letsPrint)
  }
}
Functions in Scala are treated as objects that can be passed around. The concept of an object has still not changed: it is a placeholder for application state. So in the functional programming model, functions are state, and if that is legal, why not manage them in a state repository like Oracle Coherence?
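For Java readers, the same callback-passing idea can be sketched with an interface standing in for Scala's function type (the Callback and CallbackDemo names here are mine, not from the post):

```java
public class CallbackDemo {
    // Stand-in for Scala's () => Unit function type
    interface Callback {
        void run();
    }

    // Accepts behavior as a value, exactly like doSomething in the Scala code
    static void doSomething(Callback callback) {
        callback.run();
    }

    public static void main(String[] args) {
        doSomething(new Callback() {
            public void run() {
                System.out.println("Ha ha!");
            }
        });
    }
}
```

The anonymous class is the pre-lambda Java way of treating a function as an object; the point is that behavior, not just data, is being passed around.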

Problem Statement: How to execute new EntryProcessors without having to deploy them in the cluster, while continuing to achieve 100% uptime?

In plain English, it means: how can I push new EntryProcessor(s) and execute them without having to bounce the cluster to deploy the new class? The solution lies in the functional programming concept of passing functions to the executor, but this time in a slightly different way. Let's again begin with a cache configuration:
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>EPFeeder</cache-name>
      <scheme-name>feeder-scheme</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>EPCache</cache-name>
      <scheme-name>ep-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <class-scheme>
      <scheme-name>ep-scheme</scheme-name>
      <class-name>DynaEPCache</class-name>
      <init-params>
        <init-param>
          <param-type>string</param-type>
          <param-value>{cache-name}</param-value>
        </init-param>
        <init-param>
          <param-type>string</param-type>
          <param-value>coherence-cache-config.xml</param-value>
        </init-param>
      </init-params>
    </class-scheme>
    <distributed-scheme>
      <scheme-name>feeder-scheme</scheme-name>
      <service-name>EPFeederScheme</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme>
              <high-units>100MB</high-units>
              <listener>
                <class-scheme>
                  <class-name>EPListener</class-name>
                </class-scheme>
              </listener>
            </local-scheme>
          </internal-cache-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <proxy-scheme>
      ...
    </proxy-scheme>

    <invocation-scheme>
      <scheme-name>InvocationService</scheme-name>
      <service-name>InvocationService</service-name>
      <thread-count>2</thread-count>
      <autostart>true</autostart>
    </invocation-scheme>
  </caching-schemes>
</cache-config>

The EPFeeder cache is where we store the functions (implementations of the EntryProcessor's process() method), keyed by the EP's class name. EPCache is a demo cache where your data resides and against which you would execute new EPs. The next step is to see what the client code looks like:
public class DynaCUtilTest extends TestCase {
    ....
    private String getEPImpl() {
        StringBuffer sBuffer = new StringBuffer();
        sBuffer.append("public Object process (Entry entry) {");
        sBuffer.append("System.out.println(\"In process\");");
        sBuffer.append("System.out.println(\"Key:\" + entry.getKey());");
        sBuffer.append("System.out.println(\"Value:\" + entry.getValue());");
        sBuffer.append("return null;");
        sBuffer.append("}");
        return sBuffer.toString();
    }

    public void testCreateEP() {
        String impl = getEPImpl();
        String clzName = "EPClass_v1";
        NamedCache eCache = CacheFactory.getCache("EPFeeder");
        eCache.put(clzName, impl);

        NamedCache nCache = CacheFactory.getCache("EPCache");
        nCache.invoke("A", new EPClass_v1());
    }
}
getEPImpl() returns the implementation that we will replace the process() method with as we feed new EP classes; in this case the first version is named EPClass_v1. So what happens next?
When a new implementation is put in the EPFeeder cache, a backing map listener picks up the event and creates the new class (EntryProcessor) on all the cluster members dynamically, using an invocation service. This step achieves the 100% uptime for Coherence. The backing map listener (EPListener) looks something like this:

public class EPListener extends MultiplexingMapListener {

    public EPListener() {
    }

    protected void onMapEvent(MapEvent mapEvent) {
        String key = ...;
        String impl = ...;
        InvocationService iS =
            (InvocationService) CacheFactory.getService("InvocationService");
        Invocable inv = new EPCreator(key, impl);
        // -- Create a new Class on all nodes
        Set set = CacheFactory.getCluster().getMemberSet();
        iS.query(inv, set);
    }
}
The core of this listener is the magic EPCreator Invocable, but before we look at EPCreator, let's see the EPInterface:
import com.tangosol.util.InvocableMap;

public interface EPInterface extends InvocableMap.EntryProcessor {
}
The most critical piece of the puzzle is the Invocable and how it does its magic. EPCreator executes the following in its run() method. Let's put it in its own util class (DynaCUtil):
public static Class createEP(String clzName, String impl) {
    ClassPool pool = ClassPool.getDefault();
    // importPackage() takes a package name; the generated source can then
    // refer to InvocableMap.EntryProcessor and InvocableMap.Entry directly
    pool.importPackage("com.tangosol.util");
    Class clz = null;
    CtClass eClass = null;
    boolean shouldCreate = false;
    try {
        eClass = pool.get(clzName);
    } catch (NotFoundException e) {
        shouldCreate = true;
    }
    if (shouldCreate) {
        eClass = pool.makeClass(clzName);
        try {
            eClass.setInterfaces(new CtClass[] { pool.get("EPInterface") });
            eClass.addConstructor(CtNewConstructor.defaultConstructor(eClass));
        } catch (NotFoundException e) {
            e.printStackTrace();
        } catch (CannotCompileException e) {
            e.printStackTrace();
        }
        try {
            eClass.addMethod(CtNewMethod.make(impl, eClass));
            StringBuffer sBuffer = new StringBuffer();
            sBuffer.append("public java.util.Map processAll(java.util.Set set) {");
            sBuffer.append("System.out.println(\"In processAll\");");
            sBuffer.append("return java.util.Collections.EMPTY_MAP;");
            sBuffer.append("}");
            eClass.addMethod(CtNewMethod.make(sBuffer.toString(), eClass));
        } catch (CannotCompileException e) {
            e.printStackTrace();
        }
        try {
            clz = eClass.toClass();
        } catch (CannotCompileException e) {
            e.printStackTrace();
        }
    }
    return clz;
}

What the heck was that? The Invocable uses Javassist to create a new EntryProcessor on the fly. The last question is: since Coherence is a self-healing system where nodes can join and leave at any time, how do we make sure new EP classes are available to the new nodes? The answer is a custom NamedCache, which is also the last piece of the puzzle. The class would look something like the following:

public class DynaEPCache extends WrapperNamedCache {

    public Object invoke(Object oKey, InvocableMap.EntryProcessor agent) {
        String name = agent.getClass().getName();
        createEP(name, (String) CacheFactory.getCache("EPFeeder").get(name));
        return super.invoke(oKey, agent);
    }

    public Map invokeAll(Collection collKeys, EntryProcessor agent) {
        ....
    }

    private void createEP(String name, String impl) {
        if (impl == null) {
            throw new RuntimeException("EntryProcessor not created yet!");
        }
        DynaCUtil.createEP(name, impl);
    }
}

A much more advanced implementation is sitting on my laptop, pieces of which I will soon upload to http://sites.google.com/site/miscellaneouscomponents/Home. In the meantime, just enjoy!

Why should I pay a single penny for Windows 7 upgrade from Vista?

I bought a Sony Vaio with the Vista Home Premium edition about a year ago. It looked good with its new Apple-style interface, and I brought it home. It did not take me long to find that this was the worst OS Microsoft has ever produced. What irked me most was discovering that I could not VPN into my work network, as it simply failed to connect. With no Vista support offered by my company I was stuck, and had to continue with my other, very bulky Dell with XP. This weekend I had a chance to look at Windows(7) at a local BestBuy. It did not feel even a little different from Vista. Now Microsoft is asking for a $129 upgrade fee for it. Why? When did I ever get to use my Vista properly in the first place? You sell me a buggy, practically non-working OS and then ask for a fee to get it fixed, if they even have? Isn't this holding Vista users hostage? Unless we shell out another $129 on top of an already costly purchase, we have no other option. For me, Windows(7) had already failed in the first week of its launch.

Saturday, October 31, 2009

Why measuring exact size in memory could be a futile exercise?

Coherence being an in-memory data grid, it is important to provision the hardware right. Many factors play different roles: total RAM on the box, avoiding paging, providing linear scalability without running into Out of Memory errors, and so on. Now the problem is how to measure how many additional nodes (cache servers), and as a result how many new boxes, one would need when we have to scale out. Also, if indexes are created, how do we measure the additional space required, and how do we do it right?
There are two ways to measure things: like measuring gold, and like measuring onions. Onions are always approximate. Coherence data sizing is like measuring onions. It is not that you cannot measure it like gold, accurate and precise, but in most cases that is not needed. Why? Because of the dynamic, auto-provisioning nature of the cluster, and because memory gets cheaper by the day. It is much easier to approximate the size and add new nodes or boxes to the cluster than to be a mathematician and calculate the size in bytes. If you are an operations person you need quick and almost-correct formulas. If you are a Coherence enthusiast you might already know them: on a 32-bit machine, about 1.2GB of RAM is needed to run a JVM with a 1GB heap. Of that 1GB heap, only about 375MB is available for primary data storage in a distributed scheme with one backup count, keeping roughly 30% of scratch space per JVM to keep the GC profile in check, and so on. What about indexes? That's easy too: account for about 30% overhead for each index added, and watch how many indexes are added, as it is easy to exceed the size of the data itself. Are these numbers accurate? Nope, and they are not meant to be. Are they simple? Yes, and close to correct. After all, when it comes to provisioning a system like Coherence, it is okay to just measure it like onions.
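The onion math above can be put into a back-of-the-envelope calculator. The rules of thumb (375MB of primary storage per 1GB-heap JVM, 1.2GB of RAM per such JVM on 32-bit, 30% overhead per index) come from the post; the class and method names, and the sample inputs, are mine:

```java
public class OnionMath {
    // Rules of thumb from the post, not precise measurements
    static final double PRIMARY_MB_PER_JVM = 375.0; // usable primary storage per 1GB-heap JVM
    static final double RAM_GB_PER_JVM = 1.2;       // RAM per 1GB-heap JVM on 32-bit
    static final double INDEX_OVERHEAD = 0.30;      // extra space per index

    // How many 1GB-heap cache servers for a given amount of primary data?
    static int jvmsNeeded(double dataMB, int indexCount) {
        double effectiveMB = dataMB * (1.0 + INDEX_OVERHEAD * indexCount);
        return (int) Math.ceil(effectiveMB / PRIMARY_MB_PER_JVM);
    }

    // And how much physical RAM do those JVMs consume?
    static double ramNeededGB(int jvms) {
        return jvms * RAM_GB_PER_JVM;
    }

    public static void main(String[] args) {
        int jvms = jvmsNeeded(3000, 2); // 3GB of primary data, two indexes
        System.out.println(jvms + " JVMs, ~" + ramNeededGB(jvms) + "GB RAM");
    }
}
```

For 3GB of data with two indexes this comes to 13 JVMs: approximate, like onions, but enough to size an order.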

A love letter from Nigeria, again!

I just received this letter from a dead uncle I never knew I had, who had apparently migrated to Nigeria. A few more uncles like these and I will be the richest person on this Nigerian Earth!

I am Mr David Lewis. a Foreign Transfer Manager working with ZENITH BANK of Nigeria.I just started working with ZENITH. and I came across your unpaid fund File stamped hold due to you have not come for the claim.

The most annoying thing is that they won't tell you the truth that on no account will they ever release the fund to you,instead they allow you spend money unnecessarily, or allow the government confiscate your fund, I do not intend to work here all the days of my life, I can release this fund to you if you can certify me of my security.

I needed to do this because you need to know the statues of your Funds and cause for the delay,Please this is like a Mafia setting in Nigeria, you may not understand it because you are not a Nigerian.. The only thing needed to release this fund is the Change Of Ownership which will be tendered to this bank Zenith to prove to them that you have come for the claim of your fund left in your name and the INTERNAL REVENUE SERVICE (IRS) for clearance of the transferred amount in your account or in any means you will like to receive your fund.

Once the Change Of Ownership is obtained from the Federal High Court here in Nigeria funds will immediately reflect in your bank within 10 Minutes,the document is all that is needed to complete this transaction.

I have the Deposit Certificate for your own proof and the Next Of Kin application form to fill out.

Note that the actual funds is valued at $25 MILLION USD and the president made a compensation fund release for all unpaid beneficiary valued at $15 million usd.
Listed below are the mafia's and banks behind the non release of your funds that i managed to sneak out for your kind persual.

1) Prof. Charles soludo
2) Chief Joseph Sanusi
3) Dr. R. Rasheed
4) Barrister Awele Ugorji
5) Mr Roland Ngwa
6) Barrister Ucheuzo Williams
6) Mr. Ernest Chukwudi Obi
7) Dr. Patrick Aziza
Deputy Governor - Policy / Board Member
8) Mr. Tunde Lemo
Deputy Governor - Financial Sector Surveillance / Board Member
9) Mrs. W. D. A. Mshelia
Deputy Governor - Corporate Services / Board Members
10) Mrs. Okonjo Iweala

Do get in touch with me immediately with my direct number to conclude this final transaction immediately,and also send to me your convenient tel/fax numbers for easy communications

Regards,
Mr . David Lewis.

This much snow in Denver in October

Friday, October 30, 2009

Level(3) not accessible

This sign in Denver International Airport's elevator always reminds me of Level(3), a network company.

Tuesday, October 27, 2009

BJP must have done something real bad to be rejected like this

So it looks like when it comes to elections, the story BJP is hearing is the same again and again. There must be something new brewing in the electorate's mind these days. Aren't these the same people who got carried away by religious sentiment just a few years back? So what has changed since? We still go to churches, gurudwaras, temples and mosques. Aren't we Hindus or Muslims any more? So why are we rejecting BJP and beating it like this?

There is a reason, and the reason is that BJP cannot seem to see the new realities of India, and if it can see them, it cannot seem to find a leadership that can promise and deliver new frontiers for the nation. Otherwise Mumbai, where one of the worst security lapses occurred and caught the state and federal (Congress) governments totally off guard, would still not have rejected the nationalist BJP/Shiv Sena combine. Congress, with its astute young leadership, seems to have struck a chord with this new India. They are struggling to solve problems but still seem sincere about resolving them. It is the truth in their tone which is now making them the "party with a difference".

BJP has failed, and it has failed in numerous ways. LK Advani has become the Kapil Dev of politics: sitting in the leadership role for as long as possible, not delivering, and not allowing new blood to take over either. Kapil was sensible, as he knew how far to stretch and when to throw in the towel; Advani seems to struggle with it. RSS, which knows how to disassociate itself from BJP, is only pretending to be a separate entity. It is not. If RSS had an answer, BJP would not be in this quagmire. The disciplined party workers of BJP were never so disciplined after all; infighting has taken over their regional units, and the list goes on and on. By now BJP should see that, in the shape it stands in today, it has been outright rejected.

When it comes to Congress, the truth is we only consider voting them out when we get disenchanted with their leaders, not because we love the other option. And if they continue to do what they are doing, I see no reason why there will be any new challenge for them even in future elections.

Sunday, October 25, 2009

Coherence - Back to Basics

One question that keeps cropping up is when to configure an application as a cluster member and when to connect over *Extend. The answer has always been pretty simple: if application stability cannot be guaranteed, then it cannot be considered a cluster member. There is more to it, though. You must ask the following questions before opting for either configuration strategy:

  • If an application is written in either C++ or .NET, then there is no choice but to configure it as an *Extend client.
  • If the interface to Coherence is Java (including JNI), then the choices run a little deeper.
Let's talk about it in more detail (the content is not a dictation to its readers but assistance to put them on the right track).

When to configure as a cluster member?
  1. Applications deployed inside a container (like an application server) are typically configured as storage-disabled cluster members. The reason is that application servers themselves, being "server-side" components, are considered fairly stable.
  2. Applications demanding "extreme" performance are configured as cluster members.
  3. *Extend introduces an additional network hop, and if that extra couple of milliseconds is not acceptable, configure the application as a cluster member.
  4. If proxy-reconnect is an issue.
When to *Extend?
I am a big proponent of the *Extend configuration because it provides a framework that we are already so used to. With the enhancements in Coherence v3.5 and later to proxy services and the serialization mechanism, the overhead on the proxy has been reduced considerably, and this opens up some nice architectural options.
  1. If the application is a desktop application where reliability cannot be guaranteed.
  2. If network over which the application is accessing Coherence resources is slow or unreliable.
  3. If application is written in .NET or C++
Besides these there are some advanced considerations:
  1. If the application is already very TCP/IP centric. New services can be developed on top of the proxy, as is demonstrated here, here and here.
  2. If an application arbiter is needed, or something like a gatekeeper.
  3. With *Extend, the proxy layer can be scaled independently of the storage nodes. The *Extend configuration allows three layers of scalability: first the application clients, second the proxy layer itself, and third the core storage members (non-proxy cluster members). The storage member layer is about data scalability; the proxy layer is about request scalability.
  4. In line with the popular Adapter pattern typically found in BPEL designs: application-specific adapters can be plugged into Coherence using the proxy's TCP/IP service, making application clients even more agnostic of the Coherence infrastructure beyond the Map implementation.
  5. If a new component is added that introduces features not available out of the box from Coherence, it is much safer to deploy it on proxy nodes. Always remember we do not have access to either the underlying protocol (TCMP) or the core Coherence component layer; any feature that simulates either of the two must be performed by the proxy.
  6. Anyone still remember SOA? Even though SOA does not dictate the transport layer, it is much cleaner to have well-defined services running and accessing Coherence resources over *Extend. If the services are deployed inside a container, then revisit (1), (2) and (3).
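For completeness, the client side of an *Extend setup is just a remote-cache-scheme pointing at the proxy addresses. A minimal sketch, with hostname and port as placeholders you would substitute for your own proxies:

```xml
<remote-cache-scheme>
  <scheme-name>extend-scheme</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>proxy1.example.com</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
```

Listing more than one socket-address gives the client a set of proxies to fail over between, which is where the independent scalability of the proxy layer mentioned above comes from.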
Enjoy!

Thursday, October 22, 2009

A Biker's spectrum

Sunday, October 18, 2009

Another sher today

Don't ask why my home was never built up in the sky;
I feared the stars might break away and fall into my courtyard.
And where would they go then, those who flee from the light?
So I sat down, a poet, in some corner of the earth.

Tuesday, October 13, 2009

Monday, October 12, 2009

A sher

Come back, for even today I remember you;
my tongue may stay silent, but in my eyes you will find the pain.
Even my days now pass dark in memories of you;
you will still find the wicks of the lamps warm.
If you were never to return, you could at least have said so;
why leave your fragrance behind in the winds?
And if you ever find me sitting somewhere along your path,
will you at least leave your memory behind with me?

Saturday, October 10, 2009

Season's first snow

Friday, October 09, 2009

Why I don't watch football and what the heck it has anything to do with Obama's Nobel peace prize?

I am interested in things I feel I could do with practice and perseverance. I may not succeed professionally in cricket or poker or tennis or any other sport, but at least I feel that after 10 or 20 years of regular, dedicated practice I could stand confidently. Football is the only sport where I am certain that even if I spend the rest of my life practising, I can never become a muscular giant who can physically push, and who is willing to hit and jump on others. I can never become that, and that is the only reason my interest in the game is dampened. Football has shut its door on me.

Now, prizes are something similar. If I am put in the right direction by my mentors, and I study hard and experiment and read every book this world has ever written, then maybe, just maybe, on a billion-to-one shot I could get a Nobel Prize in Chemistry. But for Peace? No way. Even if I spend the rest of my life in war zones persuading people not to take up arms and kill others, or risk my own life to save another, or spread the power of spirituality, or do anything else I could to make this world a better place to live, I cannot get a Nobel Peace Prize. Unlike in the past, this prize is now reserved for presidents and those who have a chance of becoming one. Obama had that chance. Not because he made this world any more peaceful, but because he became a president by ousting a school of thought that initiated one of the bloodiest wars of recent times. War still rages in Iraq and Afghanistan, Israel and the Palestinians are still locking horns, Iran has become stronger, the war has now officially escalated into interior Pakistan, and the world is still no safer than it was eight years ago. It seems Peace has changed its definition.

Someone once told me that the inner strength of your faith can be channeled to achieve peace. By not raising a finger against those who are hitting you on the head, you can achieve peace. By looking with love straight into the eyes of those who have drawn guns to kill, you will achieve peace. And that man, Gandhi, never got the Nobel Peace Prize. Either he was above its stature, or the meaning of peace was different then. Congratulations to Obama that the five-person committee saw in you a new-age Gandhi. But like football, the Nobel Peace Prize too is out of my reach, not because I cannot learn the rules but because it is now judged on an unacceptable scale.

Monday, October 05, 2009

Integrating LDAP with Coherence

Please read Securing a Coherence Cache as a precursor to this post. That link talks about how to externalize the configuration of a cache security provider, which can be configured in the Coherence cache configuration. The security provider class that implements the SecurityProvider interface has to implement a single method, checkAccess(Subject). The Subject is passed by the "cache client" and is authenticated/authorized in the security provider. Since I started Oracle Coherence consulting, the need has come up time and again to integrate Coherence with an LDAP provider, so that application/user accounts can be controlled as to what access they have to which cache, with the accounts themselves managed in a directory server. So let's think about it again and see if we can streamline this solution and build something generic.

Problem Statement: To setup Coherence cache in such a way that discrete cache access can be set up driven by Enterprise directory.

Let's think about the architectural decision points:
  • The LDAP server is an external data source. Use a CacheLoader.
  • Avoid accessing LDAP on every cache request. Cache user authentications (an Admin cache).
  • Protect the Admin cache that manages the user authentications so that it allows no general access. Protect the protector.
  • Protect the proxy from *Extend access. Cluster members have inherent trust; use authorized-hosts for them.
  • Use JAAS.
  • Manage authorization locally but authentication centrally.
How about a quick activity diagram?

Authentication
User authentication has to happen only once (typically once in 24 hours). This is not such a bad cost to incur once a day, as user accounts do not change, and if they change, they change very infrequently. Authentication information can be cached once an account is verified against a directory server. We also need to make sure that the cache managing account-authentication information is inaccessible to any unauthorized user or application. Now, how to do it?
  • Create an Admin cache.
  • Plug a custom CacheLoader that interacts with an external Directory server.
  • Build the cache key from the cache name and the user credentials; the cache value is Boolean.TRUE or Boolean.FALSE.
  • Using the <entitled> XmlElement, configure a DisAllowSecurityProvider as its security-provider.
  • DisAllowSecurityProvider denies all requests to this cache except those made by a chosen few. Scroll down for its implementation.
So what would such an Admin cache configuration look like?
<distributed-scheme>
  <scheme-name>admin-distributed-scheme</scheme-name>
  <service-name>AdminDistributedService</service-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme>
          <high-units>200KB</high-units>
          <unit-calculator>BINARY</unit-calculator>
          <!-- cached authentications expire after 24 hours -->
          <expiry-delay>24h</expiry-delay>
        </local-scheme>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <class-name>LDAPCacheLoader</class-name>
          <init-params>
            <init-param>
              <param-type>string</param-type>
              <param-value>ldap.server.com</param-value>
            </init-param>
            <init-param>
              <param-type>int</param-type>
              <param-value>389</param-value>
            </init-param>
          </init-params>
        </class-scheme>
      </cachestore-scheme>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
  <entitled>
    <security-provider>DisAllowSecurityProvider</security-provider>
  </entitled>
</distributed-scheme>

So the Admin cache is size-limited, its entries expire 24 hours after the first authentication, and it allows no direct access. How could that be? LDAPCacheLoader's load() method can be very simple: the cache key passed in carries a "username$password" string that is parsed and authenticated against a Directory server using LDAP APIs. If authentication succeeds, return Boolean.TRUE; otherwise Boolean.FALSE. So how is this load() invoked, and from where?
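To make that concrete, here is a minimal sketch of such a loader, assuming the composite key layout used in this post ("username$password", suffixed with "$$" and the cache name) and a plain JNDI simple bind against the configured Directory server. The class is illustrative; a real implementation would implement Coherence's CacheLoader contract (for example by extending AbstractCacheLoader), and would typically bind with a full DN rather than a bare username:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

// Illustrative sketch; a real implementation would extend
// com.tangosol.net.cache.AbstractCacheLoader and override load(Object).
public class LDAPCacheLoader {
    private final String host;
    private final int port;

    public LDAPCacheLoader(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Key layout assumed here: "username$password$$cacheName"
    public Object load(Object key) {
        String[] parts = parseKey((String) key);
        return Boolean.valueOf(authenticate(parts[0], parts[1]));
    }

    // Splits the composite key into { username, password }
    static String[] parseKey(String key) {
        String credentials = key.split("\\$\\$")[0]; // drop the "$$cacheName" suffix
        int sep = credentials.indexOf('$');
        return new String[] {
            credentials.substring(0, sep), credentials.substring(sep + 1)
        };
    }

    // Attempts a JNDI simple bind; success means the credentials are valid
    boolean authenticate(String user, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://" + host + ":" + port);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, user); // a full DN in practice
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            new InitialDirContext(env).close();
            return true;
        } catch (NamingException e) {
            return false;
        }
    }
}
```

The two init-params in the configuration above map to the host and port constructor arguments.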

Default Security Provider
Caching user authentications is a luxury that can be centralized. Applications deal with two aspects of cache security, authentication and authorization, and these can be split into two classes. Combined with cached authentication, let's write an abstract default security provider. Any security provider that extends it gets the "performance" for free.

public abstract class DefaultSecurityProvider implements SecurityProvider {

    private final NamedCache nCache = CacheFactory.getCache("USER_CRED");
    protected String cacheName;

    public DefaultSecurityProvider(String cacheName) {
        this.cacheName = cacheName;
    }

    public boolean checkAccess(Subject subject) {
        // The Principal name carries the "username$password" pair
        String user_pw = subject.getPrincipals().iterator().next().getName();
        String userName = getUserName(user_pw);
        // A cache miss here triggers the LDAPCacheLoader via read-through
        Boolean isPresent = (Boolean) nCache.get(user_pw + "$$" + cacheName);
        boolean isAuth = false;
        if (isPresent != null && isPresent.booleanValue()) {
            isAuth = authorize(userName);
        }
        return isAuth;
    }

    // Strips the password off the "username$password" pair
    private static String getUserName(String user_pw) {
        int sep = user_pw.indexOf('$');
        return (sep < 0) ? user_pw : user_pw.substring(0, sep);
    }

    public abstract boolean authorize(String userName);
}
Authorization
Like authentication, authorization should be relatively inexpensive too. There are two approaches. One is to store authorization attributes in the Directory server as well. That is perfectly doable, but authorization belongs to Coherence or the application and should be "owned" by it; central governance should apply to authentication, not authorization. So let's find an inexpensive way... how about a Java Permission object driven by a policy file? Let's write one:
grant Principal CustomPrincipal "Principal1" {
permission java.util.PropertyPermission "Cache1", "read, write";
.. More can be added here...
};
grant Principal CustomPrincipal "Principal2" {
permission java.util.PropertyPermission "Cache2", "read, write";
... More can be added here...
};
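Each grant above ultimately becomes a java.util.PropertyPermission, and access boils down to Permission.implies(). A quick, self-contained illustration (the cache names are the hypothetical ones from the policy sketch):

```java
import java.util.PropertyPermission;

public class PermissionDemo {
    public static void main(String[] args) {
        // What the policy grants Principal1 on Cache1
        PropertyPermission granted = new PropertyPermission("Cache1", "read,write");

        // What a cache operation would ask for
        PropertyPermission writeReq = new PropertyPermission("Cache1", "write");
        PropertyPermission otherCache = new PropertyPermission("Cache2", "write");

        System.out.println(granted.implies(writeReq));   // true: Cache1 write is covered
        System.out.println(granted.implies(otherCache)); // false: no grant for Cache2
    }
}
```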

What about the custom security provider?

public class MyCustomSP extends DefaultSecurityProvider {

    private final String cacheName;

    public MyCustomSP(String cacheName) {
        super(cacheName);
        this.cacheName = cacheName;
    }

    public boolean authorize(final String user) {
        if (user == null) {
            System.out.println("Auth not in USER_CRED cache");
            return false;
        }
        try {
            // Ask the policy whether the current Subject may write to this cache
            PropertyPermission fp =
                new PropertyPermission(cacheName, "write");
            new SecurityManager().checkPermission(fp);
            return true;
        } catch (SecurityException exp) {
            ...
        }
        return false;
    }
}
In this implementation, if a User Principal has the "write" permission then it gets access. But each of NamedCache's behaviors can be classified as either a read or a write, and the invoked method, along with its classification, can be passed down to checkAccess(). Instead of hard-coding "write" for every access, each NamedCache method can then get fine-grained authorization. And of course you reserve the right to create your own Permission object with its own set of Actions and use that instead.
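One way to sketch that read/write classification is a simple lookup table. This helper is hypothetical and its method list is illustrative, not exhaustive:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper: maps a NamedCache method name to the
// permission action it should be checked against.
public class CacheActionClassifier {
    private static final Set<String> MUTATORS = new HashSet<>(Arrays.asList(
        "put", "putAll", "remove", "clear", "invoke", "invokeAll", "lock", "unlock"));

    public static String actionFor(String methodName) {
        return MUTATORS.contains(methodName) ? "write" : "read";
    }
}
```

A security provider could then check new PropertyPermission(cacheName, actionFor(method)) instead of hard-coding "write".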

I am not done yet!
In the activity diagram there is a logical concept of a Gatekeeper. Who is it, and how does it work? The gatekeeper is a combination of a custom NamedCache (EntitledNamedCache) and a SecurityProvider called DisAllowSecurityProvider. EntitledNamedCache is auto-magically configured for caches that have the <entitled> element defined (read Securing a Coherence Cache for more information), while DisAllowSecurityProvider is configured on the Admin cache (USER_CRED) that stores the authentication info.

What does DisAllowSecurityProvider do?
public class DisAllowSecurityProvider implements SecurityProvider {
public DisAllowSecurityProvider() {
}

public DisAllowSecurityProvider(String cacheName) {
}

public boolean checkAccess(Subject subject) {
    // Frame 0 is this method; frame 3 is expected to be the external caller.
    // The fixed indexes are fragile and tied to the call path from the
    // entitled cache wrapper.
    StackTraceElement[] elements = new Throwable().getStackTrace();
    StackTraceElement e3 = elements[3];
    StackTraceElement e0 = elements[0];

    try {
        // Allow only calls that originate from a SecurityProvider
        return SecurityProvider.class.isAssignableFrom(Class.forName(e0.getClassName())) ||
               SecurityProvider.class.isAssignableFrom(Class.forName(e3.getClassName()));
    } catch (ClassNotFoundException f) {
        Base.log(f);
        return false;
    }
}
}
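The stack-walking idea can be demonstrated in isolation. This stripped-down sketch (hypothetical names; it scans the first few frames rather than relying on the fixed indexes 0 and 3) shows a method that grants access only when some caller on the stack implements a trusted marker interface:

```java
// Hypothetical demo of the stack-inspection technique used by
// DisAllowSecurityProvider: access is granted only if a recent stack
// frame belongs to a class implementing a trusted marker interface.
public class CallerCheckDemo {

    public interface TrustedCaller {}

    // Scans the top 'depth' frames for a class assignable to TrustedCaller
    public static boolean calledByTrusted(int depth) {
        StackTraceElement[] frames = new Throwable().getStackTrace();
        for (int i = 0; i < Math.min(depth, frames.length); i++) {
            try {
                if (TrustedCaller.class.isAssignableFrom(
                        Class.forName(frames[i].getClassName()))) {
                    return true;
                }
            } catch (ClassNotFoundException ignored) {
                // synthetic or hidden frames; keep scanning
            }
        }
        return false;
    }

    public static class Trusted implements TrustedCaller {
        public boolean tryAccess() { return calledByTrusted(5); }
    }

    public static class Untrusted {
        public boolean tryAccess() { return calledByTrusted(5); }
    }
}
```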
So here you go: a decently flexible Coherence cache security implementation. Enjoy!

**One of my colleagues, Steve Brockman, asked if it was possible to extend the security to the other cluster nodes besides the proxy nodes. The solution is a little different but easy to build. Here are the steps:
  1. Copy coherence-cache-config.xml to, say, alt-cache-config.xml
  2. Open alt-cache-config.xml in an editor and remove all the <entitled> sections from the configuration.
  3. Edit ExtendedCacheFactory and look for FILE_CFG_CACHE in the file. The next line is where the class sets the cache configuration name. Hardcode the param-value to alt-cache-config.xml (or be more creative, but make sure it resolves to alt-cache-config.xml).
  4. Deploy alt-cache-config.xml on all the cluster nodes.
  5. Set -Dtangosol.coherence.override=proxy-override.xml on all cluster nodes.

Saturday, October 03, 2009

Bollywood still has in it

Movies have become my pastime. I'm not a first-day junkie, but that's how I refresh my mind. After a series of disappointments I had stopped watching too many Hindi (Bollywood) movies. Once in a while I watched one that was different, but nothing that touched me for long; getting three in a row was something unheard of. Over the last two weekends it happened, and I am impressed. First I watched Baabar, and this was one movie that gave me dark dreams. The relatively heavy violence blended well with the plot and gave me dreams of shooting others with blood on my hands. I was impressed. Second, I watched Kaminey. Shahid Kapoor in a double role, playing low-level operators of the Mumbai underworld, turned out to be a realistic portrayal. Good acting covered for a few patches, and the songs were not bad either. I have always had a soft corner for dark movies; having watched pretty much all of De Niro and Al Pacino, I also loved movies like Satya, Shool and Sehar. So maybe two in a row was not that surprising. And then the hat-trick came about with Fashion, another Bollywood creation, based on life in Mumbai's modeling industry. From the beginning I held my breath, expecting the plot to falter. It did not. The movie had no gunshots and none of those things, but I felt I was watching a Hollywood creation. Three impressive films in a row have certainly shaken my perception of Hindi cinema. Good going. I am back!

Friday, September 18, 2009

Once again, a sher (couplet)


The one estranged from himself, where should he go, where can he go?
Searching for voices, which way should he turn, where can he go?
The one whose own face still does not appear in the mirror,
who peers at himself in the river, where should he go, where can he go?

Tuesday, September 08, 2009

I am very disappointed with Apple and AT&T

People who know me know that I get very emotionally attached to things and am an impulsive buyer. My stuff, my work, the software I use, my tables, my chairs, and, yes, my phone. With everyone around me brandishing their iPhones, I decided to buy one. I went to my company's corporate website and punched out an order for a new 16GB iPhone 3GS with an upgraded AT&T plan. I paid $399 for this device when everyone around me paid $199. I was excited, and as mentioned earlier my impulsiveness took over. Since I got this device I have been on tech support calls pretty much every week for one reason or another. For me, Apple is a name for perfection, and it should have acted that way. Sadly, my expectations were misplaced. First, the internet did not work unless I was connected to WiFi; then I had trouble receiving my company's email; then it refused to discover a Bluetooth headset that even a $50 Samsung phone could find. And then, after a tech support person promised someone would call me on Tuesday about the excess $200 I paid, nobody called. In the evening I ended up talking to someone a little more helpful, who created a new case and scheduled a call for the next day, as the "customer relations office" was closed by the time I called. On the other issue, after a series of tries I was asked to physically go to a "Mac Genius" store to find out what the issue with my Bluetooth was. After a quick test the Genius asked me to bring the device back the next day so he could replace it with a new set, since I had to back up my data and apps. Beyond some cool-looking geeks helping customers and sleek Apple stores, I don't find this company to be any different. Screwed-up support, painful waits on the phone, unfulfilled promises, faulty devices and messed-up billing: I don't think that justifies twice the dollars it asks for its products.
Because when I bought the iPhone at the cost of losing choice, the moment Apple asked me to get my phone service from AT&T, both AT&T and Apple took on a collective responsibility to give me top-notch service. They both seem to be failing.

Sunday, September 06, 2009

Once again, a sher (couplet)

When I see myself in the mirror, astonishment comes over me:
has this face changed, or is it my own gaze that has turned?
These shadows under the eyes seem to tell some new story,
or are they old memories, only now come to rest on the eyelids?
My loose hair looks like clouds that have already rained away;
pooled in the hollows of my eyes, is this water fit to drink?

Friday, August 28, 2009

I am glad BJP lost

Because if they don't know how to run a party, how could they have run a country? This party has to reinvent itself. I see a new opposition emerging by the next elections, most probably under Nitish Kumar with these breakaway BJP liberals. Under Advani and Rajnath, the BJP has dragged itself back to its dark ages. Except once, I have always voted for the BJP, but I won't even go near what they are today. Even Mayawati knows how to run a political party better than these guys do. This lauh purush is a joke. He is a living example of why age should not be a criterion for leadership. He rode on Vajpayee's charisma and conveniently projected himself as his heir when he has never been able to prove his administrative abilities. What a disaster he has been for the BJP. If the RSS can't find the BJP a new set of leaders who are genuinely rooted in its beliefs, then they are better off gone. Sonia Gandhi, who was targeted by this "party with a difference" as naive, a foreigner, dumb and what not, has kicked their butt for a second time in a row. Leaders prove their mettle by demonstrating results, not by some BS talk. Their loss has indeed saved India.

Tuesday, August 25, 2009

Blaming the heroes

India is in a row over Jaswant Singh's book on Jinnah, the founding leader of Pakistan. He praised Jinnah for his secular credentials, an assessment that later got a supportive argument from K. Sudarshan, the retired RSS chief himself. While the BJP battles internal unrest, bitten by two consecutive defeats in national polls, this new political upheaval becomes important. First, it came out of the BJP, a political avatar of the Hindu Mahasabha, which has its roots planted in denial of the two-nation theory that many blame Jinnah for. Second, the leaders who made and supported this argument have been associated with the BJP for more than a few decades. This episode is also important because it challenges the history being taught in schools on both sides of the border. As we see it today, partition was probably a good thing, at least for India. The cancer that has spread across the world, emanating from a region of Pakistan, has victimized India, but not to the extent it would have if India and Pakistan were one big nation. It is also unfortunate that heroes are judged on decisions they made generations ago. A nation is mature when it knows, admits and corrects the blunders of past leaders and moves on. As it turns out, Jinnah changed into what he was not and never believed in because of the indifferent attitude of some leaders of the freedom movement of the time. While Gandhi saw an opportunity to garner Muslim support for freedom from British rule in the Khilafat movement of Turkey, Jinnah took a more pragmatic approach and opposed it on the grounds that the Caliphs had no association with Indian Muslims. It was not because he wanted British rule, but because the movement had no Indian roots. That position should be saluted. Personalities collided and decisions were made on egos. As much as one theory failed and the other succeeded, neither side's leaders were responsible for either the successes or the failures.
A decision not to buy an umbrella cannot be turned into proof of vision just because it did not rain. Every human has a Ram and a Ravana inside him, and he is judged by which one he brings out, and when. While Jinnah was a hurt soul and Nehru a stubborn visionary, Patel had to save what was left to be saved. They all had to play a role in history, and they did, never to be forgotten. Now what? Pakistan is a reality and India is a reality; only the truth is not real. It is time to move on as the past fades into oblivion.

Friday, August 21, 2009

Don't dare to call it a clunker

Almost everyone remembers his or her first car. The car we buy right after learning how to drive, in most cases with our own money. The car we loved to sneak a peek at late at night, when we pretended not to be asleep just to have a reason to go and look at it. The car we took to drive our loved ones around. The car we washed with our own hands and loved doing it. The vehicle that drove us to school, or to our first job, or to our first girlfriend's house. The car that made us proud when there was nothing else that did. A dent on it was a dent on our heart. The car we thought we would pass on to our grandkids. Yes, the car that couldn't keep up with the greed of the oil companies. The car that started to cough as if it aged with me. Yes, the car that I poured my dreams into. Don't dare to call it a clunker.

Pushing changed data to Coherence

Problem Statement - How to capture data changed in an external data source and invalidate the Cache?
I attended a customer call where we discussed this same problem, and I think I can be more explanatory in a blog than on a call. Before we consider any solution, one thing must be understood: Coherence is a data source. Not a relational data source, not a directory interface, but a data source of a different kind. So the problem becomes more generic: how do we synchronize two discrete data sources? The solution revolves around the same ideas as synchronizing an LDAP server with a relational database. Have you done that?
Following are four architectural lines of attack. Depending on how much stale data an application can tolerate, some, all or none of these solutions may work, so make your own judicious decision.

  • Invalidate the data from within.
  • Source of data change propagates the change.
  • Let an external mechanism do it.
  • Whoever changed the database should also change the Cache.
Invalidating the Cache Entries from Within
  • [Time to live] Coherence cache entries have a time-to-live (TTL) attribute that defines how long an entry should live in the cache. Based on data access frequency and expected cache hits, an appropriate expiration time can be set. If, over an hour, 90% of cache hits are expected to take place in the first 15 minutes after an entry is put, and hits are less frequent later on, then a TTL of 15 minutes gives you a good invalidation parameter. A typical use case: load the data, process it, and invalidate it as soon as processing is done. This helps if the same entry is accessed multiple times during the processing.
  • [Refresh-ahead factor] This is based on a wonderful analogy of serving fries at McDonald's: if we ask for fries from a batch that is about to run out, we get the last pieces, but it triggers an asynchronous load of more fries from the oven. An appropriate refresh-ahead-factor can be set in the cache configuration that triggers an asynchronous cache load (using a CacheLoader component; a read-through pattern is a must) if data is accessed in the last portion of its expiry window, as defined by the factor. So if data changed in an external data source, it is refreshed in the cache, and the next access gets the latest. Refresh-ahead assumes a continuous stream of data access with comparatively fewer database (or other external data source) updates behind the scenes.
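For reference, a sketch of a read-write backing map wired for refresh-ahead; the layout follows the standard Coherence cache configuration elements, and the loader class name is a placeholder:

```xml
<read-write-backing-map-scheme>
  <internal-cache-scheme>
    <local-scheme>
      <!-- entries expire 15 minutes after being loaded -->
      <expiry-delay>15m</expiry-delay>
    </local-scheme>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <!-- placeholder read-through loader -->
      <class-name>com.example.MyCacheLoader</class-name>
    </class-scheme>
  </cachestore-scheme>
  <!-- a get() in the last half of the expiry window (after 7.5 minutes
       here) returns the current value and schedules an async reload -->
  <refresh-ahead-factor>0.5</refresh-ahead-factor>
</read-write-backing-map-scheme>
```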
Propagation by the data source where the change occurred
  • If you are already on Oracle 11g, the Database Change Notification (DCN) mechanism can be used: applications register directly with the database for change events and, after receiving them, propagate the changes to Coherence.
  • Responsibility of change event propagation lies with the owner where change occurred.
An External Agent
  • Oracle Data Integrator has a Changed Data Capture feature that can be used to push changes from a data base to Coherence.
  • Or a simple DB adapter: an external application polls for data changes at a regular frequency, captures the changed data set, and propagates the changes to the Coherence cache.
  • Oracle's BPEL PM has an inbuilt DBAdapter that can propagate the change to Coherence using an embedded Java Activity.
  • Simple, and could be lightweight. Polling can be heavy, though, and needs to be considered when provisioning database load.
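Whatever transport the external agent uses, the polling approach boils down to snapshot diffing: compare the previously seen data set with the current one and push only the differences. A minimal, framework-free sketch (the class name and shape are illustrative):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Illustrative snapshot-diffing core of a polling change-capture agent.
public class ChangeDetector {

    /**
     * Returns keys whose values are new or different in the current
     * snapshot, plus keys that disappeared (deletes).
     */
    public static <K, V> Set<K> changedKeys(Map<K, V> previous, Map<K, V> current) {
        Set<K> changed = new HashSet<>();
        for (Map.Entry<K, V> e : current.entrySet()) {
            if (!Objects.equals(previous.get(e.getKey()), e.getValue())) {
                changed.add(e.getKey());
            }
        }
        for (K k : previous.keySet()) {
            if (!current.containsKey(k)) {
                changed.add(k);
            }
        }
        return changed;
    }
}
```

The returned key set is what would be pushed to (or invalidated in) the Coherence cache.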
After all, you did it
  • Coherence supports three heterogeneous platforms: Java, C++ and .NET. It is very likely that the application that changed the data in the external data source runs on one of these three, and it can be extended to propagate the same changes to Coherence as well. Transactional success should be checked, to avoid propagating data into the cache that did not commit in the database.
  • ORM like Toplink has SessionEventListener that can retrieve changed data set upon a database commit and this SessionEventListener can propagate the changes to Coherence cache(s).

Saturday, August 08, 2009

One more sher (couplet)

Kites fly in the winds like my memories,
leaving the string in my hands so I do not forget them.
They carry their colors miles away into the skies,
leaving only the glass-coated string tied to my fingers, to cut them loose.
I run this way and that, stumbling, to save one,
trusting that I will reel it back in as soon as evening falls.

Monday, July 27, 2009

Why Continental should apologize

We are never short of controversies. India's former President Dr. Abdul Kalam was frisked by Continental security staff at Delhi airport prior to boarding a flight to the US. Since then, there have been articles both supporting and opposing Continental's right to do so. First of all, he is not just another "Government appointed VIP"; he is an ex-President of India. I don't think American ex-Presidents go through security checks, in America or outside. He is not some minister whose identity you would need an ID card to establish; he should have had security clearance already. Even if Continental had the legal right to frisk anyone they want, they should have been careful about whom they picked. And even if legally they are not required to, Continental should apologize, as a moral responsibility, to probably the best President India has had since the 1960s.

Wednesday, July 22, 2009

Custom Events in Coherence

Coherence supports three types of Events:
  • MapEvent: generated by CRUD operations on the cache.
  • MemberEvent: generated when a cluster Member joins or leaves a CacheService.
  • ServiceEvent: a generic event framework that can be dispatched by the cache services (MemberEvents are of type ServiceEvent).
I wrote a Quartz-based Timer process in Coherence late last year by creating a custom cache scheme, "timer-scheme". It was natural to think in terms of generating events when a Job is scheduled to run and when it has completed. These events help clients know when a Coherence task has been scheduled and completed. The biggest challenge: how to add a new TimerEvent type to Coherence?
Problem Statement: How to dispatch custom TimerEvent from Timer scheme

Ideally, a new service has to be added to the Coherence components package. That is a core layer, and without the right tools it is not possible to change; the component IDE is not publicly available. So what's the hack?

First, create a TimerService class:


public class TimerService implements Service {
   private Collection listeners = new ArrayList();
   public void addServiceListener (...) {...}
   public void removeServiceListener (...) {...}
}
Second, write a TimerListener interface:


public interface TimerListener extends ServiceListener {}
Then a TimerEvent like:


public class TimerEvent extends ServiceEvent {
   protected int m_nId;
   protected Service m_service;

   public TimerEvent (Service service, int nId) {
     super (service, nId);
     m_service = service;
     m_nId = nId;
   }
   ...
   public void dispatch(TimerListener listener) {
     switch (getId()) {
        case TimerEvent.SERVICE_STARTED:
             listener.serviceStarted(this);
        break;

        case TimerEvent.SERVICE_STOPPED:
             listener.serviceStopped(this);
        break;
     }
   }
   ...
}
Now the trick: without creating a TimerService component, how do I use it? Here is the hack: create a custom ConfigurableCacheFactory and return a singleton TimerService from its ensureService() method. The class looks like:


public class ExtendedConfigurableCacheFactory
   extends DefaultConfigurableCacheFactory {

   /**
    * Default Constructor
    */
   public ExtendedConfigurableCacheFactory() {
     super();
   }

   /**
    * Constructor loads the cache configuration from a given path and the
    * classloader to use.
    *
    * @param path
    * @param loader
    */
   public ExtendedConfigurableCacheFactory(String path, ClassLoader loader) {
      super(path, loader);
   }

   /**
    * Constructor loads the cache configuration from a given path using the
    * default classloader.
    * @param path
    */
   public ExtendedConfigurableCacheFactory(String path) {
      super(path);
   }

   /**
    * Constructor to load the coherence cache configuration
    * @param xmlConfig
    */
   public ExtendedConfigurableCacheFactory(XmlElement xmlConfig) {
      super(xmlConfig);
   }
   ...
   public Service ensureService(String serviceName) {
      if (serviceName.equals("DistributedTimerService")) {
          return SingletonTimerService.TIMER_SERVICE;
      } else {
          return super.ensureService (serviceName);
      }
  }

  private static class SingletonTimerService {
     protected static final TimerService TIMER_SERVICE = new TimerService();
  }
}
Now the last leg of the problem: how to manage these TimerEvents and dispatch them? A few well-positioned EntryProcessors can do this. First, an EP that updates the state of the Timer task:


public class TimerStatusUpdateProcessor extends AbstractProcessor {
   /**
    * Status of Job
    */
   public static enum Status {
      SCHEDULED,
      RUNNING,
      COMPLETED,
      ;
   }

   private Status status;

   /**
    * Constructor to pass the Job's status to be set
    * @param status
    */
   public TimerStatusUpdateProcessor(Status status) {
     this.status = status;
   }

   /**
    * Sets the status of the Job's status
    * @param entry
    * @return
    */
   public Object process(InvocableMap.Entry entry) {
      entry.setValue(status);
      return null;
   }
}

Next, a class CoherenceJob that all Quartz Timer Jobs have to extend. CoherenceJob makes sure that only one Job instance runs across the entire cluster. A sneak preview of CoherenceJob:


public abstract class CoherenceJob implements Job, Serializable {
   public CoherenceJob() {
   }

   public void execute(JobExecutionContext context) {

     // -- Only run if the Job is scheduled
     NamedCache nCache =
        CacheFactory.getCache(CoherenceTrigger.CACHE_NAME);

     Member member =
        nCache.getCacheService().getCluster().getLocalMember();
     nCache.invoke(context.getJobDetail().getFullName(),
           new JobExecutor(context, member));
   }

   public abstract void process(JobExecutionContext context);

   private class JobExecutor extends AbstractProcessor implements Serializable {

    private transient JobExecutionContext context;
    private Member member;

    public JobExecutor(JobExecutionContext context, Member member) {
      this.context = context;
      this.member = member;
    }

    public Object process(InvocableMap.Entry entry) {
      String configName = System.getProperty("tangosol.coherence.cacheconfig");
      if (configName == null) {
        configName = "coherence-cache-config.xml";
      }
      InvocationService iS =
          (InvocationService) new DefaultConfigurableCacheFactory(configName).
            ensureService("JobInvocationService");
            iS.query(new JobProcessor(context), Collections.singleton(member));
      return null;
    }
}

private class JobProcessor implements Invocable {

    private JobExecutionContext context;
    private transient InvocationService iS;

    public JobProcessor(JobExecutionContext context) {
      this.context = context;
    }

    public void init(InvocationService invocationService) {
      iS = invocationService;
    }

    public void run() {
     System.out.println("Processing the real work");
     process(context);
    }

    public Object getResult() {
     return null;
    }
  }
}
So what about the Client? Here you go:


public class TimerTest extends TestCase {
   public TimerTest(String sTestName) {
     super(sTestName);
   }

   public static void main(String args[]) {
     Service service = CacheFactory.getConfigurableCacheFactory().
                             ensureService("DistributedTimerService");
     service.addServiceListener(new MyTimerListener ());
 
     try {
      System.in.read();
     } catch (Exception exp) {
        exp.printStackTrace();
     }
     System.exit(1);
   }

   protected void setUp() throws Exception {
     super.setUp();
   }

   protected void tearDown() throws Exception {
     super.tearDown();
   }

   private static class MyTimerListener implements TimerListener {

     public void serviceStarting(ServiceEvent serviceEvent) {
     System.out.println("Service Starting: " + serviceEvent.getId());
   }

   public void serviceStarted(ServiceEvent serviceEvent) {
     System.out.println("Service Started: " + serviceEvent.getId());    
   }

   public void serviceStopping(ServiceEvent serviceEvent) {}

   public void serviceStopped(ServiceEvent serviceEvent) {
     System.out.println("Service Stopped: " + serviceEvent.getId());
   }
  }
}
The problem with this last TimerService is that only instances that have listeners registered in their own JVM will dispatch events; with this solution, TimerListeners are only registered at the client. As the TimerService is a single instance per classloader, there has to be a mechanism to propagate changes on one node to the other nodes. One solution is to piggyback ServiceEvents on top of MapEvents. With this change the TimerService becomes:


public class TimerService implements Service, MapListener {

 private NamedCache nCache = null;
 private Collection listeners = new ArrayList();

 public TimerService() {}

 public void addServiceListener(ServiceListener serviceListener) {
     nCache = CacheFactory.getCache(CoherenceTrigger.CACHE_NAME);
     nCache.addMapListener(this);
     nCache.getCacheService().addServiceListener(serviceListener);
     listeners.add((TimerListener)serviceListener);
 }

 public void removeServiceListener(ServiceListener serviceListener) {
     nCache = CacheFactory.getCache(CoherenceTrigger.CACHE_NAME);
     nCache.getCacheService().removeServiceListener(serviceListener);
     listeners.remove(serviceListener);
 }
...
public void entryInserted(MapEvent mapEvent) {
     notifyOtherTimerServices(mapEvent);
}

public void entryUpdated(MapEvent mapEvent) {
    notifyOtherTimerServices(mapEvent);
}

public void entryDeleted(MapEvent mapEvent) {}

private void notifyOtherTimerServices(MapEvent mapEvent) {
    String newValue =
      ((TimerStatusUpdateProcessor.Status)mapEvent.getNewValue()).name();

    String oldValue = "";
    if (mapEvent.getOldValue() != null) {
      oldValue =
       ((TimerStatusUpdateProcessor.Status)mapEvent.getOldValue()).name();
    }
    if (!(oldValue.equals("") && newValue.equals("COMPLETED"))) {
      // Compare enum names with equals(), not reference equality
      int s =
        newValue.equals(TimerStatusUpdateProcessor.Status.SCHEDULED.name()) ?
         TimerEvent.SERVICE_STARTED : TimerEvent.SERVICE_STOPPED;
      TimerEvent tE = new TimerEvent(this, s);
      Collection listeners = this.getListeners();
      for (TimerListener listener : listeners) {
        tE.dispatch(listener);
      }
    } 
  }
}
The rest is left to your creativity. The entire project can be downloaded from Here... Enjoy!

Monday, July 13, 2009

GM Ad

Our drive to success is as smooth as our vehicles drive. We are the New General Motors.

Wednesday, June 17, 2009

Tricks with CacheFactory

Coherence's com.tangosol.net.CacheFactory is one of the most powerful tools available: it provides a console interface to the data grid. The CacheFactory console can be used to initialize new caches; insert, delete or update data; find the size of a cache; and check whether a cache contains a certain key. The console uses Java reflection to invoke methods. Typing help displays all the commands that can be executed. Some are pretty obvious, like put, get, size, cache <cache_name> etc., but there are many that are hidden and not so obvious. Following are some useful commands:
&keySet - returns a set of all the keys in the cache
&containsKey 1 - returns true or false depending on whether the cache contains the key 1
filter EQ1 Equals toString MyValue - creates an EqualsFilter named EQ1
list <cache_name> EQ1 - returns the set of entries whose toString on the cache entry equals MyValue

And the fanciest of all (execute it after initializing a cache):

&getNamedCache.getCacheService.getBackingMapManager.getCacheFactory.getConfig
Prints the cache configuration currently loaded by this node.

&getNamedCache.getCacheService.getBackingMapManager.getCacheFactory.getConfig.findElement /caching-schemes/proxy-scheme
Prints the proxy-scheme XmlElement defined in the cache configuration currently loaded by this node.

invoke:Management "#1 cache foo; &getCacheService.getService.getStorage bar" - on node 1, finds the internal storage of the cache bar via the service it shares with the cache foo
Enjoy!

Wednesday, June 10, 2009

Windows Ad

We don't price ourselves to be just different. It's a smart investment in your future.

Sunday, May 24, 2009

Yes Deletion is not write behind

There are a few critical words that Coherence, or any similar distributed mechanism, is built around. Coherency, Consistency and Availability are a few. If you ask anyone in a room a question and, irrespective of whom you ask, you get the same reply, you have a coherent system. They don't have to speak the truth, but as long as they all give the same answer it is a coherent system. Consistency reaches across the boundaries of a single data source. If the answer from a Coherence cluster is the same as what, say, a relational database reports at a given time, then these two systems are consistent. Availability is when applications are able to find the data irrespective of failures and unavailability. A Coherence cache, like any database, supports entry insert, update and delete operations. While the cache itself is coherent, entry insertion and updates are about data availability, whereas delete is a peculiar use case and is about data consistency. Coherence write-behind is a mechanism that allows asynchronous/delayed persistence of changes in the cache to an external data source like a database. While inserts and updates can be delayed, entry deletion is not write-behind. Why?
Applications quite often confuse data eviction with data deletion. A rough analogy for eviction is someone moving from Phoenix, AZ to LA. If someone lives in Arizona and decides to leave, that is similar to data eviction: an application looking at AZ data would not find him, but there is still a way to find him if need be (barring the fact that, unlike in real life, the same object now lives in two systems). Deletion is like that person dying. You cannot die in Arizona and live in Los Angeles; if that's the case, the LAPD probably has to get involved ;). Data deletion has to be conservative and needs to make sure it is consistent across all the data sources. This is why the Coherence write-behind mechanism makes entry deletion a synchronous process even when a delay is set. The good thing is that the API contract only goes as far as calling the erase() method of the configured CacheStore. So, even though it is not recommended, if your application mandates that all CRUD operations have equal priority, the erase() process can still offload the deletion steps, using some timer service inside erase(), to a separate thread that deletes the data after a write-behind delay.
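For illustration only, here is what that last (not recommended) idea might look like in plain Java, with a ConcurrentHashMap standing in for the external database and all names hypothetical: erase() returns immediately, while the actual delete runs on a timer thread after the configured delay.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical store that mimics write-behind semantics for deletes:
// erase() schedules the removal instead of performing it synchronously.
public class DelayedEraseStore {
    // Stand-in for the external database the CacheStore would write to.
    private final Map<Object, Object> database = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // do not keep the JVM alive for the timer
                return t;
            });
    private final long delayMillis;

    public DelayedEraseStore(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    public void store(Object key, Object value) {
        database.put(key, value);
    }

    // Offload the delete to the timer thread; the caller returns immediately.
    public void erase(Object key) {
        timer.schedule(() -> database.remove(key), delayMillis, TimeUnit.MILLISECONDS);
    }

    public boolean contains(Object key) {
        return database.containsKey(key);
    }
}
```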

Sunday, May 17, 2009

A Sher (a verse)

Distant voices pass by the shadow of my ears,
nothing can be heard; they only stir a faint footfall.
The calls stay silent, as if flowing from someone's eyes,
calling, pulling me toward themselves, these threads lost like the winds.
Come close now, they say, these distances have become unbearable,
but what can I do? Longing binds my feet like shackles of serpents.
Where do I go, which way do I go, which way, where?
These paths cut at my feet, calling me near, like thorns.

Friday, May 15, 2009

What BJP does not understand again and again

So what I had predicted came true - a win for the UPA and a rejection of the Left and the Third Front. For the BJP it must hurt: two consecutive defeats, with people rejecting its leader and its policies. The BJP and NDA will surely analyze what went wrong, but it does not matter if they do not learn from it. They did not learn the last time either. If they want to get their act together, they have to go back and do a major overhaul of what they actually stand for and how they see their future. The following are simple reasons why the BJP failed, and it is ironic that they couldn't see them.

  1. Personal attacks on leaders do not work. People have rejected them in the past and they will in the future.
  2. Indians have already accepted Sonia Gandhi as an Indian; projecting her as an outsider does not strike a chord with the common masses.
  3. An absolute absence of young leaders in the 35-45 age group. A wave of Omar Abdullas, Sachin Pilots, Jyotir Adityas and Rahul Gandhis is coming to crush you.
  4. Absence of any credible presence in the South or of strong alliances. Banking on post-election alliances will not work.
  5. Retaining the states. They keep losing some key northern states. The states they have won are because of how the state governments have performed, not because of Advani's appeal.
  6. L.K. Advani is not an accepted leader of common Indians, and he was rejected in the 2009 elections. The sooner he retires, the better.
  7. Growth is what will bring them back. There has to be a BJP in every state winning local elections. Grass-roots presence is the key to revival.
  8. Up until 1991 the BJP was seen as a party not hungry for power. If they have to win, they should not project themselves as an alternative to Congress but as a party with concrete beliefs on the issues.
  9. The BJP is a company that cannot build itself by acquisitions (alliances) alone; it needs to generate products.

A tale of two cell phones - why reliability is important

I loved my Palm Treo when I bought it. At that time the iPhone was not available, and a smart phone with a touch screen was all I wanted. And then I broke its screen (thanks for tossing a TV remote on the bed). I was sad but resolved to fix it instead of buying a new one. Thanks to eBay I bought a Palm touch screen and replaced it. While doing so I dropped its mic, and for the sake of not taking any more risks I decided to buy a Bluetooth headset so I could continue using the phone. I so loved the phone that I even configured Twitter on it. And then one day it decided to breach my trust. The phone decided to divorce the Bluetooth headset. There was a problem with the headset as well, but even after getting a replacement it just did not work. I so wanted an iPhone. Given the price, and thanks to the economy, I decided to buy a $50 3G Samsung flip phone instead. Moving back to a non-touch-screen phone was hard. 3G was good, but my plan did not include unrestricted data, so watching ESPN on the phone was not something I could do. It was a good enough phone, and except for a few visits to the Apple website to look at the iPhone, I decided to settle down. It was like, after a failed relationship, on the rebound and unable to get any attention from that darn rich beauty, deciding to get married to somebody else. The Samsung phone just worked, with nice handy features. I stayed happy until the day I needed reliability more than features. I had to be on a conference call all day, for 12 hours, so I plugged the phone into a charger hoping the battery would last while I talked. It did not. It failed the reliability test: the battery died in the middle of the call. The charger should have kept the phone going while it was busy, or at least that's what I assumed. Then my old Palm saved the day. I flipped the SIM, hooked up a cheap Walmart wired headset, and I was ready to go.
The best thing: while I talked, the charger continued to charge the battery, and for the entire session the battery signal stayed green. Features, Bluetooth and the touch screen were not on my mind. When it comes to saving the day, it's the battery that matters.

Wednesday, May 06, 2009

Mac Ad

It took humans thousands of years to Mechanize.. Isn't it time to Mac-anize?

Monday, April 27, 2009

My Sis' reasons why BJP is better

Nuclear deal:
The stage was set by Atal. Anyone could have done it. It was done because the US was willing to do it. Congress did not do anything special that the BJP could not have done.
Kandahar episode:
Did anyone ask Congress what they would have done if Rahul Baba had been on the same plane?
Mumbai terror attack:
No one is supposed to talk about CST Station, where 57 people died. Everyone is talking about the Taj, which was at the center of international attention. Are the lives of common people cheaper? What was their preparedness? How did a few terrorists walk free for so many hours and hold the entire nation hostage for three days? Why was the response so late?
Kasab/Afjal Guru:
Why has Afjal not been hanged yet when the Supreme Court has already convicted him? Why is his plea still pending? Why has the home ministry not forwarded anything to the President's office yet? About Kasab - he was captured on video. If he is found to be 17, would they keep him in a juvenile detention center?
Zero leadership:
India needs strong leadership. South Asia is in a mess. India needs a leader like Indira Gandhi, an internationally aggressive personality, not a weak person who seeks permission from a family to do anything.
Rahul as PM - You kidding me?
Congress is trying to project Rahul or Priyanka as a Prime Ministerial candidate. This cannot be accepted.
National progress:
The BJP and Congress are the only two parties who would not slow down national progress; it is only about the speed of it. The Third Front is a mess. They don't fight on issues but do seat calculations. The Third Front contains stronger allies of Congress, and they impede our progress.

Wednesday, April 22, 2009

Push Replication - You know it alright

So you have heard about it alright here, here, here, this one too, here and even here, and of course it has been the hero of numerous talks at NYSIG, UKSIG and more. I have had opportunities to work on multi-cluster Coherence solutions at many client locations and to be part of a project one of whose members is an active contributor to Push Replication. So what new do I have to bring to the table? A working example for Incubator junkies, for one. If you have followed Brian Oliver's write-up at the Push Replication page, I am sure you already know how to set this up and get running in just a few minutes. So I thought I would spice it up a little. Let's build the following:
Setting up Active-Active clusters
Active/Active is pretty similar to how we set up Active/Passive clusters, but it needs some special classes. First, make sure SafePublishingCacheStore is configured in the cache config, and when we register a publisher we use SafeLocalCachePublisher instead of LocalCachePublisher.
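As a sketch of where SafePublishingCacheStore plugs in, the cache-store wiring follows the usual read-write-backing-map-scheme shape. The scheme names and the package below are assumptions for illustration; check the class's actual package in your Incubator release:

```xml
<!-- Sketch only: scheme names and the package are assumptions -->
<read-write-backing-map-scheme>
  <scheme-name>publishing-scheme</scheme-name>
  <internal-cache-scheme>
    <local-scheme/>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.oracle.coherence.patterns.pushreplication.SafePublishingCacheStore</class-name>
      <init-params>
        <init-param>
          <param-type>java.lang.String</param-type>
          <param-value>{cache-name}</param-value>
        </init-param>
      </init-params>
    </class-scheme>
  </cachestore-scheme>
</read-write-backing-map-scheme>
```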

Making use of introduce element in cache configuration

Start using the introduce tag while writing a Coherence cache config. This is one of the very useful features introduced in the Coherence Common incubator pattern. The introduce tag allows re-use of common cache configurations and is as simple as:
<cache-config>
<introduce-cache-config file="coherence-pushreplicationpattern-cache-config.xml"/>
</cache-config>


Dynamic subscription of subscriber clusters
One architecture that Push Replication supports, and possibly the most popular one as well, is a hub-and-spoke model. In the hub-and-spoke model not only do the spoke clusters know about the hub, but the hub knows about all the clusters on the spokes as well, at least at deployment time. This "knowledge" of the other clusters takes the shape of a set of remote-cache-schemes. Recently I came across a requirement where the number of spoke clusters was not known at deployment time. This ever-expanding set of subscriber clusters introduces new challenges to Push Replication deployments. Coherence is all about 100% up time, and stopping the hub every time a new subscriber cluster joins is not a preferable deployment. So let's look at how new clusters can dynamically join the hub, so that the hub does not need to know about the subscriber but the subscriber knows about the hub.

Let's start with some cache configurations and how they would look in a production environment. The following samples are part of a proof of concept, and there is scope for a few tweaks.

Coherence Cache Configuration on the hub
<?xml version="1.0" encoding="windows-1252" ?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
 <introduce-cache-config file="coherence-pushreplicationpattern-cache-config.xml" />
  <caching-scheme-mapping>
  </caching-scheme-mapping>
 <caching-schemes>
     <proxy-scheme>
         <scheme-name>proxy-scheme</scheme-name>
         <service-name>ProxyCacheScheme</service-name>
         <acceptor-config>
             <tcp-acceptor>
                 <local-address>
                     <address>localhost</address>
                     <port>20000</port>
                     <reusable>true</reusable>
                 </local-address>
                 <keep-alive-enabled>true</keep-alive-enabled>
             </tcp-acceptor>
          </acceptor-config>
         <autostart>true</autostart>
     </proxy-scheme>            
 </caching-schemes>
</cache-config>
Hmm... the only configuration the hub cache config has is a proxy-scheme. The hub does not know anything about who the subscribers will be. I will explain later how that is done; scroll down.

coherence-pushreplicationpattern-cache-config.xml
It's pretty much the same as what you see when you download the push replication project; just replace PublishingCacheStore with SafePublishingCacheStore.

Subscriber Cluster Cache Configuration
The cache configuration deployed on the subscriber cluster looks a little more complete, as the subscriber knows which hub it has to connect to. The configuration looks like:
<cache-config>
 <introduce-cache-config file="coherence-pushreplicationpattern-cache-config.xml" />
 
 <caching-scheme-mapping>
 </caching-scheme-mapping>
 <caching-schemes>
     <proxy-scheme>
         <scheme-name>proxy-scheme</scheme-name>
         <service-name>ProxyCacheScheme</service-name>
         <acceptor-config>
             <tcp-acceptor>
                 <local-address>
                     <address>localhost</address>
                     <port>9099</port>
                     <reusable>true</reusable>
                 </local-address>
                 <keep-alive-enabled>true</keep-alive-enabled>
             </tcp-acceptor>            
         </acceptor-config>
         <autostart>true</autostart>        
     </proxy-scheme>    
 
     <remote-invocation-scheme>
         <scheme-name>RemoteSiteInvocationService</scheme-name>
         <service-name>RemoteSiteInvocationService</service-name>
         <initiator-config>
             <tcp-initiator>
                 <remote-addresses>
                     <socket-address>
                         <address>localhost</address>
                         <port>20000</port>
                     </socket-address>
                 </remote-addresses>
             </tcp-initiator>
         </initiator-config>    
     </remote-invocation-scheme>
 
     <invocation-scheme>
         <scheme-name>invocation-scheme</scheme-name>
         <service-name>InvocationService</service-name>
         <autostart>true</autostart>
     </invocation-scheme>
 </caching-schemes>

</cache-config>


Dynamic Registration of Subscriber Cluster
The following example is a very scaled-down sample tuned to run on a single machine. Change it as needed. Also, the following class should be made an MBean so that it can be executed from a JMX console.
Run:
java -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.clusteraddress=<subscriber-multicast-ip> -Dtangosol.coherence.cacheconfig=subscriber-cache-config.xml PushReplicationClient <proxy-port> <subscriber-name>
public class PushReplicationClient implements Serializable {

 public PushReplicationClient() {
 }

 public static void main(String[] args) {
     PushReplicationClient pC = new PushReplicationClient();
     String cacheName = "publishing-cache";
     String remoteServiceName = args[1];
     // Touching the cache ensures the local cache service is started
     NamedCache nCache = CacheFactory.getCache(cacheName);
     PublisherRegistrationTask rTask =
                          pC.new PublisherRegistrationTask(cacheName,
                                        remoteServiceName, remoteServiceName);
     InvocationService iS =
         (InvocationService) CacheFactory.getService("RemoteSiteInvocationService");
     // 1. Add this subscriber's remote-invocation-scheme to the hub's configuration
     iS.query(pC.new SubscriberTask(remoteServiceName, Integer.parseInt(args[0])),
              null);
     // 2. Register a publisher on the hub so hub changes are pushed to this cluster
     iS.query(rTask, null);
     Member sM = CacheFactory.getCluster().getOldestMember();
     InvocationService isLocal =
         (InvocationService) CacheFactory.getService("InvocationService");
     // 3. Register a publisher locally so local changes are pushed back to the hub
     rTask = pC.new PublisherRegistrationTask(cacheName,
                          "RemoteSiteInvocationService", remoteServiceName);
     isLocal.execute(rTask, new HashSet(Collections.singletonList(sM)), null);

 }

 private class PublisherRegistrationTask implements Invocable {

     private String cacheName;
     private String serviceName;
     private String publisherName;

     public PublisherRegistrationTask(String cacheName, String serviceName,
                                      String publisherName) {
         this.cacheName = cacheName;
         this.serviceName = serviceName;
         this.publisherName = publisherName;
     }

     public void init(InvocationService invocationService) {

     }

     public void run() {
         PushReplicationManager pM =
             DefaultPushReplicationManager.getInstance();
         BatchPublisher batchPublisher =
             new RemoteInvocationPublisher(serviceName,
                                           new BatchPublisherAdapter(
                                           new SafeLocalCachePublisher(cacheName)),
                                           true, 10000, 100, 10000, 5);
         pM.registerBatchPublisher(cacheName, publisherName,
                                   batchPublisher);
     }

     public Object getResult() {
         return null;
     }

 }

 private class SubscriberTask implements Invocable {

     private String serviceName;
     private int port;

     public SubscriberTask(String serviceName, int port) {
         this.serviceName = serviceName;
         this.port = port;
     }

     public void init(InvocationService invocationService) {

     }

     public void run() {
         ConfigurableCacheFactory factory =
             CacheFactory.getConfigurableCacheFactory();
         XmlElement root = factory.getConfig();
         XmlElement cS = root.findElement("caching-schemes");
         XmlElement riS = cS.addElement("remote-invocation-scheme");
         riS.addElement("scheme-name").setString(serviceName);
         riS.addElement("service-name").setString(serviceName);
         XmlElement iC = riS.addElement("initiator-config");
         XmlElement tI = iC.addElement("tcp-initiator");
         XmlElement rA = tI.addElement("remote-addresses");
         XmlElement sA = rA.addElement("socket-address");
         sA.addElement("address").setString("localhost");
         sA.addElement("port").setInt(port);
         factory.setConfig(root);
         System.out.println(cS);
     }

     public Object getResult() {
         return null;
     }

 }

}

There are three parts to this class:
  1. It updates the cache configuration deployed on the hub to register itself.
  2. It registers a publisher on the hub so that changes made in the hub are pushed to this subscriber cluster.
  3. It registers a publisher in the local cluster so that the replication is in Active-Active mode: changes made in the subscriber cluster are also pushed to the hub's cache and thereafter to the other subscriber clusters.
Execute this program for each subscriber cluster that needs to join the hub, changing the subscriber cache configuration accordingly. While I was developing this sample I found an issue in SafeLocalCachePublisher: it was missing a default constructor. A JIRA has been opened and it will be fixed in the next Incubator release. In the meantime, download the push replication source code and add a default constructor to SafeLocalCachePublisher. So that's pretty much it: geographically distributed, dynamically subscribed multi-clusters running in a hub-and-spoke architecture in less than 10 minutes, and then staying up 100% of the time. A complete project with an MBean can be downloaded from the Dynamic Push Replication Subscription page. Enjoy!