Wednesday, November 26, 2014

Some Messages Stuck in the ActiveMQ Queue

In one of our products we are using Apache ActiveMQ 5.5.0 with Spring 3.0.7. I have two publishers pushing messages to a common queue and two consumers listening on that same queue.
Last week I encountered a strange issue where some messages got stuck in the queue (I could see the messages in the ACTIVEMQ_MSGS table, since I am using a persistent queue).
The strange part was that only a few messages were getting stuck while others were still getting processed just fine.

I looked into the logs and started thinking that I was hitting some ActiveMQ bug (possibly https://issues.apache.org/jira/browse/AMQ-3966). But I continued my diagnosis, and today, after spending about two days on this issue, I realized that it was NOT an ActiveMQ bug but a bug in my own code.

Here is what was happening: for some of the messages the code was making an HTTP call to a REST API. Those calls never completed and just blocked. Since the consumer had 10 threads to handle messages, even when one thread got stuck the others were still working fine. But slowly these threads too got stuck, as they received similar messages and tried to make the same HTTP call to the same REST API. And finally the consumer stopped processing messages altogether.
Interestingly, this behavior was happening on only one of the consumers; the other consumer was able to call the REST API successfully.

From the ActiveMQ broker's point of view both consumers were up and running, so it kept on sending half of the messages to the first consumer, and hence all those messages piled up in the ACTIVEMQ_MSGS table.

Restarting the consumer resolved the issue because the threads were then re-created, and by that time the issue with the API calls had also been resolved.

So, the learning from this issue is: I should have set a read timeout on the HttpClient making the REST API call. That way the stuck thread would have thrown a read-timeout error and been freed to process the next message.
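For example, with Apache HttpClient 4.3+ the timeouts can be configured like this (a minimal sketch; the endpoint URL is hypothetical):

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class RestCallWithTimeout {
    public static void main(String[] args) throws Exception {
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(5000)  // give up if the connection cannot be established in 5s
                .setSocketTimeout(10000)  // read timeout: give up if no data arrives for 10s
                .build();
        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .build()) {
            client.execute(new HttpGet("http://example.com/rest/api")); // hypothetical endpoint
        }
    }
}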

Hope this will help someone.

Friday, October 31, 2014

Enable Conversation using Session attributes in Spring

In a Spring framework project we use form objects saved as session attributes to achieve a conversational style of creating and editing business objects.
But recently I realized that this functionality does not work if you have more than one tab open in the same browser. The reason is simple: if you are editing an object in the first tab and start editing another object in a second tab, the session attribute gets replaced by the second object. And now if you save the first object, it is actually the second object that gets updated with the information of the first.

This happens because Spring saves the objects into the session under the same attribute name, so the object saved later replaces any other object. And when a POST request is made from an already loaded UI, it will always update the object that was saved last.

There is a very easy solution to the above issue. We can extend the class "DefaultSessionAttributeStore" and override just one method, "getAttributeNameInSession(WebRequest request, String attributeName)", as shown below:
  
  @Override
  protected String getAttributeNameInSession(WebRequest request, String attributeName) {
    // Prefer the conversation id sent as a request parameter (the hidden input);
    // fall back to a request-scoped attribute set by the controller.
    String cid = request.getParameter(attributeName + "_cid");
    if (cid == null) {
      Object attr = request.getAttribute(attributeName + "_cid", WebRequest.SCOPE_REQUEST);
      cid = (attr == null) ? null : attr.toString();
    }
    if (cid != null && !"".equals(cid)) {
      return super.getAttributeNameInSession(request, attributeName + "_" + cid);
    }
    return super.getAttributeNameInSession(request, attributeName);
  }


This class should also implement the interface "InitializingBean" and the method "afterPropertiesSet()", as shown below ("annotationMethodHandlerAdapter" is the injected AnnotationMethodHandlerAdapter bean):
  @Override
  public void afterPropertiesSet() throws Exception {
    annotationMethodHandlerAdapter.setSessionAttributeStore(this);
  }

This makes sure that the custom session attribute store is registered with the annotation handler adapter.

Now, whenever you save the form into the model map, make sure that you also add another attribute with the name "{your form name}_cid" and the unique id of the form as its value.
And in the JSP add a hidden input which will be sent along with the POST request:
<input name="{your form name}_cid" type="hidden" value="<c:out value='${your form unique id}'/>" />
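On the controller side this could look like the following sketch (a hypothetical example; entity, helper and view names are made up):

import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class UserController {

    @RequestMapping("/user/edit/{id}")
    public String edit(@PathVariable Long id, ModelMap model) {
        UserForm form = loadForm(id);           // hypothetical helper that loads the form object
        model.addAttribute("userForm", form);   // the form saved as a session attribute
        model.addAttribute("userForm_cid", id); // conversation id, echoed back by the hidden input
        return "user/edit";
    }

    private UserForm loadForm(Long id) {
        return new UserForm(); // placeholder
    }

    static class UserForm { }
}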

That's it! You can now edit different entities in different tabs under the same browser session.

Please add a comment if you have any questions.

Thanks,
Manish

Monday, October 20, 2014

Resolving "Could not resolve view with name ... in servlet with name ..."

I was creating a web application using Spring with Apache Tiles, and my application did not work after I enabled the TilesView and TilesConfigurer. My configuration was as shown below:
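It was essentially the standard Spring 3 + Tiles 2 wiring, as in this sketch (the definition file path is illustrative):

<bean id="tilesConfigurer"
      class="org.springframework.web.servlet.view.tiles2.TilesConfigurer">
  <property name="definitions">
    <list>
      <value>/WEB-INF/tiles-def.xml</value>
    </list>
  </property>
</bean>

<bean id="tilesViewResolver"
      class="org.springframework.web.servlet.view.UrlBasedViewResolver">
  <property name="viewClass" value="org.springframework.web.servlet.view.tiles2.TilesView"/>
</bean>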



I was getting the error below:
SEVERE: Servlet.service() for servlet [onecode] in context with path [/onecode] threw exception [Could not resolve view with name 'search' in servlet with name 'onecode'] with root cause
javax.servlet.ServletException: Could not resolve view with name 'search' in servlet with name 'onecode'
    at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1200)
    at org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1005)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:952)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:870)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
    at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
    at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)


From the error it seemed that control was not even reaching Tiles, because I did not see any Tiles class in the stack trace. I checked all the configuration files multiple times and everything looked just fine.
After spending some time struggling to find the issue, I noticed that I had a typo in the "tiles-def.xml" file. One of the definitions was extending a base definition, and I had typed the name of the extended definition incorrectly. The issue was resolved as soon as I corrected the base definition name.
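For illustration, the mistake was of this shape (the definition names here are made up):

<definition name="baseLayout" template="/WEB-INF/layouts/main.jsp">
  <put-attribute name="title" value="OneCode"/>
</definition>

<!-- Typo: extends "baseLayot" instead of "baseLayout",
     so the "search" view silently fails to resolve -->
<definition name="search" extends="baseLayot">
  <put-attribute name="body" value="/WEB-INF/views/search.jsp"/>
</definition>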

~Manish

Friday, September 5, 2014

Jail breaking Cognos: Fix ClickJacking in Cognos

All of us who work with Cognos know how difficult it can sometimes be to fix or customize a simple request from the end user, for example modifying prompt behaviors.
I’ve faced my share of fancy requests, like hiding the prompt name from the drop-down or reloading a prompt without refreshing the page.
Thanks to experts like “Cognos Paul” and groups like Ironside and Cognoise, we know how to get these done.

This time I got a request a little more complex than UI customizations. In the security testing for our application, the testing team reported that Cognos is susceptible to clickjacking (the classic countermeasure being frame busting).
IBM replied with “you can configure your Web-Server to set X-FRAME-OPTIONS that disables framing”.
But this works only with the latest browsers; if the victim is using a really old browser, like Mozilla 3.20 in our case, then it doesn’t.

There is a simple solution to this, provided by OWASP.
Add the following code to the landing page of your application.
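The OWASP legacy frame-breaking snippet hides the page by default and reveals it only when it is not framed; it looks essentially like this (reproduced from the OWASP recommendation, not from our exact files):

<style id="antiClickjack">body { display: none !important; }</style>
<script type="text/javascript">
  if (self === top) {
    // Not framed: reveal the page
    var antiClickjack = document.getElementById("antiClickjack");
    antiClickjack.parentNode.removeChild(antiClickjack);
  } else {
    // Framed: bust out of the frame
    top.location = self.location;
  }
</script>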


Now comes the second part of the problem. In our application we are using Cognos LDAP authentication and hence don’t have a customized login page.
So the task was to find where to add this piece of code in Cognos so that the entire portal is protected from this issue.

With some effort I figured out that there are two pieces to the puzzle. First, Cognos generates the HTML from its XSL files at runtime, so we can’t directly paste the code into the HTML.
Second, there are some JS files which are loaded for every page, as required by the portal.

So I found that for the login page the HTML is generated from the “render.xsl” file, and for the landing/portal pages from “framework.xsl”.
Then I added the above code to these two files like this:

For Login page:


For Portal pages:

You have to do it in two places because “framework.xsl” didn’t allow us to add the HTML “style” element, which therefore has to be added to “presentation.xsl”.


For “framework.xsl



If done correctly, whenever a webpage tries to load your portal in an iframe, the portal will bust out of that frame and the URL will change to the actual portal URL.

Hope it helps.


Thursday, August 7, 2014

Incorrect position of form object in spring controller

Today I wasted some time finding out why my Spring controller method was erroring out before even entering the method.
My failing method is:
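It was of this shape (a reconstruction; the entity, mapping and the other arguments are illustrative):

@RequestMapping(value = "/user/save", method = RequestMethod.POST)
public String saveUser(@ModelAttribute @Valid User user,  // model attribute first...
                       HttpServletRequest request,
                       Model model,
                       BindingResult result) {            // ...BindingResult far away from it
    if (result.hasErrors()) {
        return "user/edit";
    }
    return "redirect:/user/list";
}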


But I knew I was doing things right. I debugged the application and then searched the net, but could not find why it was failing. Suddenly I wondered whether it was the position of the "User" model attribute that was causing the issue. So I changed the method as below:
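Along these lines (the same reconstruction, with the argument moved):

@RequestMapping(value = "/user/save", method = RequestMethod.POST)
public String saveUser(HttpServletRequest request,
                       Model model,
                       @ModelAttribute @Valid User user,  // now second to last...
                       BindingResult result) {            // ...immediately followed by BindingResult
    if (result.hasErrors()) {
        return "user/edit";
    }
    return "redirect:/user/list";
}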


Notice that I moved the method argument "@ModelAttribute @Valid User user" from first to second to last. And voila, it worked! In hindsight this matches a documented Spring MVC rule: a BindingResult (or Errors) parameter must immediately follow the model attribute it binds to, otherwise binding and validation failures are thrown as an exception before the method is even entered.

FYI: I am using Spring 3.2.

Thursday, July 24, 2014

Learning MongoDB

I spent some time learning some basic stuff about MongoDB. I started searching the internet and found some very helpful links.
Try the MongoDB online (interactive tutorial): http://try.mongodb.org/
Little MongoDB book (PDF) : http://openmymind.net/mongodb.pdf

After going through the above links I gained some confidence, and I downloaded MongoDB from http://www.mongodb.org/downloads. Installing the db was very simple, and within minutes I had mongo up and running on my MacBook.

Then I thought of creating a sample Spring application to learn Spring integration with mongo. I followed the documentation at http://docs.spring.io/autorepo/docs/spring-data-mongodb/1.4.3.RELEASE/reference/html/index.html.

I created an application which fetches all users from the database. The code for the sample application is available at https://github.com/itsmanishagarwal/SpringMongoDb.
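The heart of such an application is just a repository interface; a minimal sketch with Spring Data MongoDB (document class and field names are illustrative, not the exact GitHub code):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.repository.MongoRepository;

class User {
    @Id String id;
    String name;
}

public interface UserRepository extends MongoRepository<User, String> {
    // findAll() is inherited from MongoRepository and returns every document
    // of the mapped collection: List<User> users = userRepository.findAll();
}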

Anybody can download the code and use it as per their needs.

~Manish

Why should we allow only a single session per user?

Some time back Mishra, Mittal and I were discussing some technical issues, and Mishra mentioned that he'd had a requirement to allow only one session per user. He implemented it without any problem, but then we started wondering about the possible reasons why people want this requirement. After some thinking, discussing and googling we came up with the reasons below:
  1. Security. If we allow only one session per user, then on creation of a second session we can alert the user that there is already an active session and offer a way to kill the previous one. If the user is unaware of the previous session, this serves as a warning that his/her credentials may have been compromised, and the user can change them.
  2. Licensing. Some products are priced per number of users using the product. Preventing multiple sessions per user prevents misuse of the license.
  3. Product implementation. This is very specific to the product's requirements. If the application maintains some kind of per-user working state, then multiple sessions can mess it up.
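As an aside, in Spring Security this can be enforced declaratively; a sketch, assuming the security namespace is already configured:

<http>
  <session-management>
    <concurrency-control max-sessions="1" error-if-maximum-exceeded="false"/>
  </session-management>
</http>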
I will update this post if I find more reasons.

~Manish

Wednesday, June 11, 2014

Unable to add any directory into the watched directory chain in Virgo Tomcat Server 3.0.3

Recently I had a requirement to add a directory to the watched directory chain in Virgo Tomcat Server (VTS 3.0.3). The requirement was to check whether certain jars are present in a particular directory and, if they are, add that directory programmatically (via a shell script) to the watched directory chain.
So, I created a shell script which looked like this:

if [ -f /usr/local/vts/external_jars/util* ] ; then
echo "Found the util jar under /usr/local/vts/external_jars"
sed -i '
/usr.watchDirectory=repository\/usr/ a\
external_jars.type=watched \
external_jars.watchDirectory=external_jars
' $VTS_HOME/config/org.eclipse.virgo.repository.properties
sed -i "s/usr,/usr,external_jars,/" $VTS_HOME/config/org.eclipse.virgo.repository.properties
fi
When I ran the script it worked fine and the configuration file got updated, but when I restarted my Virgo server it did not pick up my "external_jars" directory in the chain of watched directories.
I compared the changes with the other watched directories but could not find any clue.

Then, after struggling a lot, I realized that there was an extra space after the value of the property "external_jars.type" (note the trailing space before the backslash in the first script). As soon as I removed the extra space, it worked.

The corrected script is:

if [ -f /usr/local/vts/external_jars/util* ] ; then
echo "Found the util jar under /usr/local/vts/external_jars"
sed -i '
/usr.watchDirectory=repository\/usr/ a\
external_jars.type=watched\
external_jars.watchDirectory=external_jars
' $VTS_HOME/config/org.eclipse.virgo.repository.properties
sed -i "s/usr,/usr,external_jars,/" $VTS_HOME/config/org.eclipse.virgo.repository.properties
fi
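For reference, after the corrected script runs, org.eclipse.virgo.repository.properties ends up with entries along these lines (a sketch; the exact chain contents depend on your installation), plus "external_jars" spliced into the existing chain property right after "usr":

external_jars.type=watched
external_jars.watchDirectory=external_jars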

Just thought of sharing this info, so I blogged it.

Saturday, March 29, 2014

How to quickly set up an ActiveMQ broker, publisher and subscriber

Last week I was working on an issue related to ActiveMQ messaging. During my debugging, the most painful part was having to start the entire application and then execute the test scenario just to test some functionality or feature in ActiveMQ. After spending some time I realized that to speed up my debugging and analysis I had to create a separate application/program which I could start and stop quickly after making changes. So, I created two programs:
ActiveMQPublisherTest: This program starts the ActiveMQ broker and then pushes messages into a queue.
ActiveMQSubscriberTest: This program listens to the ActiveMQ broker started by "ActiveMQPublisherTest" and receives the events published by it.
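The core of the publisher side is only a few lines; a condensed sketch (not the exact GitHub code, and with the broker URL and queue name made up):

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class ActiveMQPublisherSketch {
    public static void main(String[] args) throws Exception {
        // Start an embedded broker
        BrokerService broker = new BrokerService();
        broker.addConnector("tcp://localhost:61616");
        broker.start();

        // Connect and push a message into a queue
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
        producer.send(session.createTextMessage("hello"));
        connection.close();
    }
}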

I have published the entire source code on GitHub at: https://github.com/itsmanishagarwal/ActiveMQTest

To use these programs you just need to change the XML files to point to your database.

Any suggestions or feedback is welcome.

~Manish

Thursday, March 20, 2014

How to verify if a file which belongs to an RPM is modified?

Recently I struggled to find a way to verify whether a file which belongs to an RPM has been modified. After searching a bit on Google I found that there is an option in the "rpm" tool to verify all the files of a package, but there is no direct way to find out whether one particular file was modified. So, I decided to create a function to help me do that.


function isFileModified {
  FILE=$1
  # "rpm -Vf" verifies every file of the package that owns $FILE; a "5" in the
  # first column means that file's MD5 digest differs from the packaged one.
  if rpm -Vf $FILE | grep $FILE | awk -F" " '{print $1}' | grep -e ".*5.*" >/dev/null 2>&1; then
    return 0
  else
    return 1
  fi
}

Explanation:
rpm -Vf $FILE : lists the verification status of every file in the RPM package that owns $FILE, showing the ones that differ from the package.
grep $FILE : checks whether the file in question is in that list of modified files.
awk -F" " '{print $1}' : extracts the verification-flags column for the file.
grep -e ".*5.*" : checks whether the MD5 digest flag ("5") is set.

So, the function returns 0 if the file's MD5 digest has changed since the RPM installed it; otherwise it returns 1.
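Usage is then straightforward (the path is just an example):

if isFileModified /etc/my.cnf; then
  echo "/etc/my.cnf was modified after installation"
fi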

Thanks,
Agry

Tuesday, October 2, 2012

Override the properties in wro.properties

Sometimes there is a need to override some properties in a local environment to facilitate faster development. If you have wired up wro4j through Spring, then it is very easy to override any property in the wro.properties file.

Just replace the wroProperties bean in applicationContext.xml with the code below, and Spring will look for the property file in the other locations as well.
<bean id="wroProperties"
      class="org.springframework.beans.factory.config.PropertiesFactoryBean">
  <property name="ignoreResourceNotFound" value="true"></property>
  <property name="locations">
    <list>
      <value>file:${catalina.home}/conf/wro.properties</value>
      <value>file:${catalina.home}/wro.properties</value>
      <value>file:${user.home}/wro.properties</value>
    </list>
  </property>
</bean>

Property "<property name="ignoreResourceNotFound" value="true"></property>" ensures that bean creation will not fail even if the wro.properties file is missing. And property file at location mentioned latter overrides the property file before it. Means, if there is a property file at user home then it will override all the property file at other locations.

In my development setup I have placed a wro.properties in the user home and set the managerFactoryClassName property to my custom class which disables minimization.
(To disable minimization check my blog at: http://msquare-tech.blogspot.in/2012/10/disable-minimizing-resources-when-using.html)

~Manish


Disable minimizing the resources when using wro4j


After implementing wro4j in my application the performance of the pages improved, but there was one problem: because all the resources are now minified, it became difficult to debug JavaScript issues from Firebug.
I resolved the issue with the following steps:

1. Extend the "DefaultGroupExtractor" class and override just one method:

@Override
public boolean isMinimized(HttpServletRequest request) {
  // Never minimize the resources
  return false;
}
2. Extend the "BaseWroManagerFactory" class and return the new group extractor created in step 1:

@Override
protected GroupExtractor newGroupExtractor() {
  // Return the custom extractor created in step 1
  return new CustomDefaultGroupExtractor();
}
3. In the wro.properties file set the manager factory class as below:
managerFactoryClassName=com.vmops.web.optimizer.CustomWroManagerFactory

4. Restart the server. Now you will see that no resources are minimized.

~Manish

Wednesday, June 27, 2012

Implement wro4j in five steps.


WRO4J (Web Resource Optimizer for Java) is an awesome open-source resource optimizer. I recently implemented it in my application, so here are the steps I followed and the issues I faced.

Tools used: Maven

1. Add the maven dependency for wro4j in your pom.xml as follows:
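(A sketch; the version shown is an assumption of one current at the time:)

<dependency>
  <groupId>ro.isdc.wro4j</groupId>
  <artifactId>wro4j-core</artifactId>
  <version>1.4.6</version>
</dependency>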

2. Add a filter in web.xml as follows:
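(A sketch of the standard wro4j filter wiring:)

<filter>
  <filter-name>WebResourceOptimizer</filter-name>
  <filter-class>ro.isdc.wro.http.WroFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>WebResourceOptimizer</filter-name>
  <url-pattern>/wro/*</url-pattern>
</filter-mapping>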

3. Under WEB-INF create a file wro.xml with content as follows:
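(A sketch matching the group described below; the resource paths are illustrative:)

<groups xmlns="http://www.isdc.ro/wro">
  <group name="all">
    <js>/js/*.js</js>
    <css>/css/*.css</css>
  </group>
</groups>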

This will create a js and a css file at runtime by combining all the js and css resources under the group "all", returned as all.js and all.css respectively.

4. Under the same WEB-INF folder create another file wro.properties with content as follows:
debug=true
disableCache=true
gzipResources=true
jmxEnabled=false
preProcessors=semicolonAppender
postProcessors=jsMin,cssCompressor,cssMin

5. Open any existing JSP page and add the js call as follows:
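(a sketch, assuming the /wro/* filter mapping above:)

<script type="text/javascript" src="/wro/all.js"></script>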
and for css add the call as:
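(again a sketch:)

<link rel="stylesheet" type="text/css" href="/wro/all.css"/>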
That's it! Now start your server and open your page.

I faced one issue while implementing wro4j, and it was due to a transitive dependency of the wro4j jars. This version of wro4j requires commons-io 2.1, but because of some other dependency an older version of commons-io got loaded. I did not get any error but was getting empty results when calling /wro/all.js and /wro/all.css. So, be careful.

~Manish


Thursday, November 17, 2011

How to Install CTools on Windows

Last week I read a post from ‘Slawomir Chodnicki’ on “Creating Dashboards with CDE”. Thanks to him for a really useful post.

I wanted to install the CTools in my Pentaho environment. But the issue was that I was on Windows and the CTools installer is made for Unix/Linux environments.

Pedro Alves published a blog on how to install it on Unix, and Yogaraj Khanal has used Cygwin to run it on Windows.

I am not a big fan of Cygwin, and due to IT restrictions it was a lengthy process to get Cygwin installed on my machine, so I tried to install CDF/CDE/CDA without the script. It’s not that difficult actually. You just need to follow the steps from the shell script “ctools-installer.sh”.

Steps:

1. Get the CDF/CDE/CDA files from the locations mentioned under the ‘downloadCDF/CDA/CDE’ functions.

2. Put them under a temporary directory.

3. Remove any existing CDE/CDF/CDA files.

4. Unzip them as per the details given under the ‘installCDF/CDE/CDA’ functions.

5. Copy them to their respective locations.

Things to remember:

1. You have to first unzip the dist.zip files. This will give you another set of zips and jars.

List of files:

cda-samples-TRUNK-20111013.zip
cda-TRUNK-20111013.zip                      → CDA
cda-TRUNK-SNAPSHOT.jar

pentaho-cdf-dd-solution-TRUNK-20111028.zip
pentaho-cdf-dd-TRUNK-20111028.zip           → CDE (dashboard editor)
pentaho-cdf-dd-TRUNK-SNAPSHOT.jar

pentaho-cdf-TRUNK-SNAPSHOT.jar
pentaho-cdf-TRUNK-SNAPSHOT.zip              → CDF

2. Put the jars into the lib directory of the Tomcat installation that runs the Pentaho BI Server. In my case it was “tomcat\webapps\pentaho\WEB-INF\lib”.

3. The zip files which you get from step 1 contain two sets of directory structures: one holds the configuration files for CDF/CDE/CDA and the other holds samples.

4. Samples need to be placed under ‘/biserver/pentaho-solutions’ and configuration files under ‘/biserver/pentaho-solutions/system’.

It took me 20 minutes max to set this up. I started the BI server and lived “happily ever after”.

It’s no big deal, but it might save you some time and an unnecessary Cygwin setup.

Friday, November 4, 2011

Kettle Trials: Using Modified JavaScript Value to Generate Surrogate Key

 

In the last post I explained how I implemented a surrogate key for dimension table updates using SSIS.

I tried to do the same thing with Kettle, and it was no surprise how easy it was. Kettle gives you so many options to do a task.

There are many ways to implement it. I’ve implemented it using “Add Sequence” and “Modified JavaScript”.

1. The logic is to first get the last used surrogate value. This can be done in two ways: we can either use a database sequence (if the DB supports it) or use Kettle’s own sequence generator. In this example I’ve used the DB sequence, as I wanted this example to be similar to the SSIS exercise I did, although the latter should be the better option as it has performance benefits.

2. Next, pass this value to a Kettle variable using the “Set Variable” step.

3. Save this transformation; it will act as a configuration transformation for the main transformation.

4. Create a new transformation which will do the actual loading.

5. Get the staging data using an input step (I’ve used a flat-file input to keep things simple).

6. Then I used the “Modified JavaScript” step.

In this step, first get the variable value which we set in step 2, then increment it for each row, and we’re done.
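The per-row script boils down to something like this (a sketch; "MAX_ID" is an assumed name for the variable set by the configuration transformation):

// Inside the "Modified Java Script Value" step, executed once per row.
// getVariable and getProcessCount come from the step's Special Functions.
var surrogateKey = str2num(getVariable("MAX_ID", "0"), "#") + getProcessCount("r");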

7. Now just pass the output of the above step to an output step (a DB table or a flat file).

Another alternative could have been to pick the max value from the primary key column of the table and assign it to the variable, but that can be a performance issue with larger tables.

Issues which I faced:

How do we use variables in the “Modified JavaScript” step? There is a nice method, “getVariable”, available under the “Special Functions” category, with which we can reference the variables used in the transformation. This is the thing I like most about Kettle: it is an ETL tool with a programmer’s mind-set rather than a mere drag-n-drop tool.

Another issue was with formatting. In the JavaScript I had converted a few variables from String to Number and vice versa without passing the optional formatting parameter.

So, a thing to note: always pass the format explicitly, otherwise Kettle will format the variables as per its own logic.

So I was expecting “1234” and I got “1234.00” or “000000001234”.

str2num(MAX_ID, "#")

num2str(strVarValue, "#")


I kept getting the error “Sequence Not Found” when I validated the transformation, although the sequence was there. I tried the SQL editor available inside Kettle and even it verified that the sequence was there. It was really annoying. I restarted the database and cleared the DB’s cache, but nothing worked.

The actual issue was that I had created my DB schema in a database with multiple schemas in it, and while creating the connection in Kettle I had filtered on the schema.

But it seems Kettle and DB2 aren’t so friendly, and that’s why the error.

The solution to this issue is to create an OS user with the same name as the schema and then create the user objects in the schema over a connection under this new user’s credentials.

(The original post included screenshots of the job, the configuration transformation with its “Set Variable” and “Add Sequence” steps, the main transformation, and the “Modified JavaScript” step.)

Wednesday, November 2, 2011

SSIS Trials : Implementing Surrogate Key in Dimensions

Yesterday I finally decided to try my hand at Microsoft’s ETL tool, SSIS. I have worked with various ETL tools, and this was the one I had never used. It was part of a PoC (Proof of Concept) I had to do. The objective was to read a text file and load it into a DB, a fairly standard dimension-loading procedure: extract the attributes from the flat file, transform them, and then load them into the DB.

The interesting part was the creation of the surrogate key, or primary key. This is a really good way to analyze the capability of your ETL tool.

Informatica gives you many options: you can use the “Sequence Generator” transformation, a global variable, or a database sequence (if your DB supports it).

Pentaho-Kettle allows you to use variables and a “Database Sequence” to deal with this situation.

In SSIS you can do it with the help of an “Execute SQL Task” and a “Script Component” transformation. The logic is simple:

1. Pick data from a source (a flat file in my case).

2. Get the max value of the surrogate key from the database table.

3. Pass it to a solution variable.

4. Use a VB script which assigns the value to a script variable and increments it for each row of data (see the sketch after this list).

5. Concatenate the resultset using a Derived Column transformation.
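The per-row script is tiny; a sketch of the Script Component in the VB.NET of that era (the variable and column names are made up):

' SSIS Script Component (sketch). "MaxID" is the package variable filled by
' the Execute SQL Task; "SurrogateKey" is an output column added under
' "Inputs and Outputs".
Public Class ScriptMain
    Inherits UserComponent

    Private nextId As Integer

    Public Overrides Sub PreExecute()
        ' Seed the counter from the package variable
        nextId = Me.Variables.MaxID
    End Sub

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
        ' Assign a new surrogate key to every incoming row
        nextId += 1
        Row.SurrogateKey = nextId
    End Sub
End Class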

I googled for the script (to increment the variable for each row) and found this blog really useful.

There were a few hiccups which were actually quite frustrating and took me almost two hours to finally solve.

Firstly, you should always match the datatype of the variable with the return type of the database resultset.

Suppose your query returns a SUM and the variable is of type String; you might get this error:

[Execute SQL Task] Error: An error occurred while assigning a value to variable "Max_ID": "The type of the value being assigned to variable "User::Max_ID" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object. ".

The next thing to take care of is the resultset mapping under the Execute SQL Task editor. When you’re using an ODBC data source you should set the Resultset Name to 1, otherwise you might get this error:

“[Execute SQL Task] Error: An error occurred while assigning a value to variable "Max_Seq_ID": "Result column index 0 is not valid.".”

Then I tried changing it to the column name which I had used in the query, and then it gives this error (I’ve connected to a DB2 database with ODBC):

[Execute SQL Task] Error: An error occurred while assigning a value to variable "Max_Seq_ID": "Value does not fall within the expected range.".

If you change the Resultset Name to 1 with the ODBC connection, you’ll get:

“[Execute SQL Task] Error: An error occurred while assigning a value to variable "Max_Seq_ID": "[IBM][CLI Driver] CLI0122E Program type out of range. SQLSTATE=HY003".”

Then I found out that we need to use “1” as the Resultset Name when using ODBC, and “0” with ADO.NET.

If you use an alias name from the query as the Resultset Name with ADO.NET:

[Execute SQL Task] Error: An error occurred while assigning a value to variable "Max_Seq_ID": "Result binding by name "dasdas" is not supported for this connection type. ".


So many errors for one simple solution. I don’t want to sound critical: Informatica is easy in regards to transformation usability, but so is its price.

Anyway, once the Execute SQL Task was configured properly, I added a Data Flow Task to the solution and then added a Script Component with output to a Derived Column and then to a flat file (you can use a database table as well). In the Script Component editor, make the following changes:


1. Add the variable we used in the Execute SQL Task to ReadOnlyVariables.

2. Add an output column under the “Inputs and Outputs” section with the correct datatype and encoding.

3. Design the script with proper type casting between variables and columns.

4. Set the Pre-Compile flag to False, or you might get this error:

“The script component is configured to pre-compile the script, but binary code is not found. Please visit the IDE in Script Component Editor by clicking Design Script button to cause binary code to be generated.”

I used a Derived Column transformation to add the base columns from the flat file to the surrogate key generated by the Script Component.


 

Thursday, September 29, 2011

How to Synchronize your data to Backup server


Last year I realized the importance of data backups when one of my colleagues lost all of his critical data due to a hard disk failure and, as usual, he didn’t have any backup. So then I thought that it’s better to be safe than sorry. It occurred to me that my IT department had already provided me quota on backup servers, which I had never utilized. IT provided a batch file (as most of the machines in my organization are on Windows), which goes like this:

xcopy c:\data\*.* h:\c-data /c /e /h /y
xcopy D:\data\*.* h:\d-data /c /e /h /y
@echo Backup is complete
@pause

I somehow didn’t like this logic and modified it as per my convenience.

echo Close Outlook
@pause
echo remove stale data
echo Y | rmdir Old\c-data /S
echo Y | rmdir Old\d-data /S
@pause

echo Move Last backup to Old location
move c-data Old\
move d-data Old\
mkdir c-data
mkdir d-data

echo Start backup
echo D | xcopy D:\Dev H:\d-data /c /e /h /y  >backup.log
echo D | xcopy "c:\Documents and Settings\manish.mishra\My Documents" "H:\c-data\My Documents" /c /E /h /y >>backup.log
echo D | xcopy "c:\Documents and Settings\manish.mishra\Desktop" H:\c-data\Desktop /c /e /h /y >>backup.log
@echo Backup is complete
@pause

I used this for a few weeks, but it was still not an optimized method because it copied all the data irrespective of whether it had changed or not. It should be incremental.

So I Googled it and found a useful tool from Microsoft for this purpose.

It’s called “SyncToy”. I downloaded and installed it. It’s pretty simple to use and has a very intuitive GUI. I didn’t bother to read all the FAQs and help manuals.

1. So first you need to “Create New Folder Pair”


2. Next, select the folder you want backed up as the “Left Folder”.


3. Then select the backup server location as the “Right Folder”.

4. Next comes the interesting part: “SyncToy” gives you an option to select what kind of synchronization you’d like to perform.


a. Synchronize: This option checks both the left and right folders for adds/updates/deletes/renames and synchronizes both of them. This option can be useful in case you want to use SyncToy as a development repository, when you’re working on the same project but from multiple machines.

b. Echo: This option only compares left to right, not vice versa. (I chose this one as I don’t need to synchronize from the backup server to local.)

c. Contribute: This one is similar to Echo, with the only difference being deletes: files which are deleted on the left won’t be deleted on the right.

You can change these options later as well.

5. Give this synchronization pair a meaningful name so that you can identify it later.


6. After this you’ll see the pair listed on the home screen of the tool, and you can select which pair you want to run.

7. Selecting the pair will give you other options as well, which you don’t get at the time of creating the pair.


8. You can also click the Preview button, which will show you a report displaying all the modified content.


9. Click on Run and you’re done.

Monday, August 15, 2011

Prompts with Multi-Select and Conditional Cascading

Another issue with Cognos:
Today I learnt a new caveat in Cognos Report Studio. I had a business case where I needed to create two prompts, one cascading onto the other.

But the trick was that the cascading had to be conditional. For example: for a particular value (let’s say ‘YYY’) of Prompt1, Prompt2 should display; otherwise it should stay blank. And to make it more complicated, Prompt1 was multi-select.
I am no expert with Cognos, so I started with my usual instincts and implemented the situation as:
1. Created Prompt1 with the multi-select option.
2. Created Prompt2 without the cascade.
3. Added a filter to the query of Prompt2: ?filter1? like ‘YYY’

Being a database guy, I thought this should work, and to an extent it did. The issue was that it worked fine only when I selected ‘YYY’ as the first selection, with the rest of the values after it.



The problem arises when ‘YYY’ is not the first selection: then the second prompt remains blank.



I couldn’t understand what the reason was. I changed the filter to ?filter1? like ‘%YYY%’ but still no result.

Then I thought that Cognos might not treat LIKE as databases do, so I tried CONTAINS, but still the same.
Then I tried the IN function, and it works like a charm: ‘YYY’ IN ?filter1?. Presumably the multi-select parameter expands to a list of values, which IN can test membership of, whereas LIKE expects a single string to compare against.

It’s nothing great but just a minor issue which might spoil your Friday evenings.

Hope this helps.

I’ll try to find the exact difference between “LIKE” and “CONTAINS”.