Sunday, January 30, 2011

TM1 TI Analytical Modeling - back to the 1990's OOPS - 2


TM1 itself introduces a pattern for cube development through clear demarcations between data definition, parameter definition, association of values to parameters from data sets, and finally loading the data within boundaries defined by parameters and rules. This simple but generic and powerful pattern can help create any type of multi-dimensional structure rapidly. It is one of the capabilities TM1 brings to data analysis, and it can be used in literally any industry.
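The four-stage build pattern above maps naturally onto the Gang-of-Four Template Method. Here is a hedged Python sketch of that idea; the stage names follow the text, while the classes, methods, and sample data are purely illustrative, not TM1 objects or APIs.

```python
class CubeBuild:
    """Skeleton of a cube build: the stage order is fixed; each
    concrete build overrides only the stages it needs (Template Method)."""

    def run(self, rows):
        data = self.define_data(rows)          # 1. data definition
        params = self.define_parameters(data)  # 2. parameter definition
        assoc = self.associate(data, params)   # 3. associate values to parameters
        return self.load(assoc, params)        # 4. load within parameter boundaries

    def define_data(self, rows):
        return rows

    def define_parameters(self, data):
        raise NotImplementedError

    def associate(self, data, params):
        raise NotImplementedError

    def load(self, assoc, params):
        return assoc


class SalesCubeBuild(CubeBuild):
    """Hypothetical concrete build: axes are region and month."""

    def define_parameters(self, data):
        return {
            "region": sorted({r["region"] for r in data}),
            "month": sorted({r["month"] for r in data}),
        }

    def associate(self, data, params):
        cube = {}
        for r in data:
            key = (r["region"], r["month"])
            cube[key] = cube.get(key, 0) + r["amount"]
        return cube


rows = [
    {"region": "EU", "month": "Jan", "amount": 10},
    {"region": "EU", "month": "Jan", "amount": 5},
    {"region": "US", "month": "Feb", "amount": 7},
]
cube = SalesCubeBuild().run(rows)
# cube[("EU", "Jan")] == 15
```

Fixing the stage order in the base class while letting each build vary the stage contents is exactly the kind of reuse the TI pattern encourages but cannot enforce on its own.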
However, when we think of an enterprise solution for analytics, where OOAD is required for simplification, consistency, reproducibility, and reduced duplication (increased reuse), this tight regulation poses a serious challenge, especially in the absence of programming-language capabilities like functions, arrays, or vectors. When similar activities need to be done repeatedly, time and again, enforcing these guidelines is quite a challenge.
For example, creating a dimension and populating its elements is a generic task that needs to be performed time and again. A simple function would have made life very easy, but instead we have to work around the limitation by creating a new TI process for this simple task. Even if such a process is created to accept parameters, team members can still go ahead and create dimensions their own way without adhering to the standards. This creates a lot of variation in architecture management, not only in TM1 but in most analytics projects involving tools like TM1, Cognos, Essbase, or Spotfire.
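To make the point concrete, here is a hedged Python sketch of the reusable "create and populate a dimension" routine the text wishes TI offered as a plain function. `Dimension` and `create_dimension` are hypothetical stand-ins, not TM1 API calls; the point is a single shared entry point that every team member uses instead of hand-rolling a new TI process each time.

```python
class Dimension:
    """Minimal stand-in for a TM1 dimension: a named, ordered element list."""

    def __init__(self, name):
        self.name = name
        self.elements = []

    def add_element(self, element):
        # Skip duplicates so repeated loads stay idempotent.
        if element not in self.elements:
            self.elements.append(element)


def create_dimension(name, element_source):
    """One shared, parameterized entry point: every dimension in the
    solution is built the same way, from any iterable element source."""
    dim = Dimension(name)
    for element in element_source:
        dim.add_element(element)
    return dim


months = create_dimension("Month", ["Jan", "Feb", "Mar", "Jan"])
print(months.elements)  # ['Jan', 'Feb', 'Mar'] - duplicate filtered out
```

In TI terms, the equivalent would be one parameterized process that the whole team agrees to call; the sketch only shows what the contract of that process would look like.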
Besides this, another challenge in coming up with an enterprise architecture solution is the level at which objects are considered. While the definition of an object in TM1 is identical to that in any high-level programming language, the unit from which all objects are defined is a row of data in the input to a TI process, unlike higher-level programming languages where a row of data from a table represents an instance of an object. Cube creation requires various types of analytical operations to be performed on collections of records, and handling these operations brings enormous challenges to how the enterprise solution is modeled. Hence, I consider that programming in TM1 takes us back to the 1990's, when design patterns were the way out. I am trying to see if we have any new flavors of structural, creational, or behavioral patterns that differ from what the founding fathers of OOAD put up. My respect for Erich Gamma and the Gang of Four increases as I can use combinations of their findings to achieve most of what I want, even though I have to implement them without an array, vector, or function to assist with or simplify my task.
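The contrast drawn above can be illustrated with a short, hedged Python sketch: in a high-level language a row of input data becomes an object instance, and analytical operations run over the whole collection at once, rather than one row at a time inside a TI data loop. The `Record` class and the aggregation helper are illustrative assumptions only.

```python
from collections import defaultdict


class Record:
    """One input row as an object instance - the abstraction TI lacks."""

    def __init__(self, product, qty, price):
        self.product = product
        self.qty = qty
        self.price = price

    def revenue(self):
        return self.qty * self.price


def revenue_by_product(records):
    """A collection-level analytical operation: aggregate across all
    rows at once, instead of handling each row in isolation."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec.product] += rec.revenue()
    return dict(totals)


records = [Record("A", 2, 3.0), Record("A", 1, 3.0), Record("B", 4, 1.5)]
print(revenue_by_product(records))  # {'A': 9.0, 'B': 6.0}
```

In TI, both the per-row object and the collection-level aggregate have to be simulated within the process's row-at-a-time data loop, which is where the modeling difficulty described above comes from.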

Tuesday, January 25, 2011

TM1 TI Analytical Modeling - back to the 1990's OOPS - 1


Analytics is now a highly pursued field across the corporate world, as most companies are trying to make use of the vast data they have accumulated to perform predictive analysis, make informed decisions, or improve their business processes by understanding the data trail those processes leave. Working with data is a very different ball game, especially when it comes to analytical modeling and developing integrated enterprise analytical reporting solutions around diverse organizational functions. Modern tools like Excel, Essbase, TM1, Cognos, and Spotfire all offer remarkable capabilities for various types of data analysis. Like most products, these are commercial solutions and provide tools to develop relations and translations across data segments in their own style.

Working with TM1 is, however, a different flavor, especially the Turbo Integrator (TI). One of the major issues with this tool, despite its powerful capabilities, is that TI is not as capable as an object-oriented programming language like Java or C++, yet it is expected to deliver OO results on data. Some of the basic capabilities of any programming language, like functions or arrays, are not available. This makes it very complicated to conceptualize enterprise solutions that are expected to be reusable. The situation is quite similar to the 1990's, when programming languages were not very powerful and enterprise solution architecture was not as mature as it is today. But the answer to implementing reusable, scalable, object-oriented enterprise solutions is the same as in the 1990's: 'Design Patterns', or simply put, similar solutions to similar problems.

A common pattern that I used in a recent enterprise solution implementation involves aligning/organizing the candidate data set on a co-ordinate system whose axes have discrete points, and then using the simple properties of co-ordinate systems on this model. Such an organization allows determining the characteristics of the elements in question and creating a modular solution. The nature of the TI process itself can be used effectively to create code modules that behave like functions driven by arguments. Since these patterns are not specific to a particular industry or to company-specific data, pattern-based enterprise solutions can be developed with TM1 for any industry-specific problem. Such a style of architecture is important in the current business environment, where there is huge demand for intelligent/analytical reports to beat the market or improve business performance and productivity. It is not easily appreciated that these tools are extremely costly to implement and manage, because of the proximity of the output to senior management and because the dynamics of the implementation are so closely tied to the data. Hence, pattern-based solutions become important for scaling and reproducing output.
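A hedged Python sketch of the co-ordinate-system pattern described above: each axis holds discrete points, every data element is addressed by a co-ordinate tuple, and simple co-ordinate properties (membership validation, projection onto one axis) drive the model. All class and method names here are illustrative assumptions, not part of TM1.

```python
class DiscreteAxis:
    """One axis of the model, holding an ordered set of discrete points."""

    def __init__(self, name, points):
        self.name = name
        self.points = list(points)


class CoordinateModel:
    """Data elements addressed by tuples of discrete axis points."""

    def __init__(self, axes):
        self.axes = axes
        self.values = {}

    def put(self, coord, value):
        # Membership property: every component must lie on its axis.
        for axis, point in zip(self.axes, coord):
            if point not in axis.points:
                raise ValueError(f"{point!r} is not on axis {axis.name!r}")
        self.values[coord] = value

    def slice(self, axis_name, point):
        """Projection property: restrict the model to one discrete
        point of one axis - the co-ordinate analogue of a cube slice."""
        i = [a.name for a in self.axes].index(axis_name)
        return {c: v for c, v in self.values.items() if c[i] == point}


region = DiscreteAxis("region", ["EU", "US"])
month = DiscreteAxis("month", ["Jan", "Feb"])
model = CoordinateModel([region, month])
model.put(("EU", "Jan"), 15)
model.put(("US", "Jan"), 7)
print(model.slice("month", "Jan"))  # {('EU', 'Jan'): 15, ('US', 'Jan'): 7}
```

Because the validation and projection logic depend only on the axes, not on any particular industry's data, the same module can be reused for any dimensional layout, which is the reuse argument the pattern is making.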