Thursday, September 22, 2016

ODI Decrypt password

The Java code below can be used to decrypt passwords stored in an Oracle Data Integrator repository.



 import com.sunopsis.dwg.DwgObject;

 public class OdiDecrypt {
   @SuppressWarnings("deprecation")
   public static void main(String[] args) {
     // Encrypted password string as stored in the ODI repository
     String strMasterPassEnc = "gHyXQ5WaJua6RFCRmP1l";
     // snpsDecypher is deprecated but still decrypts repository-stored passwords
     String strMasterPass = DwgObject.snpsDecypher(strMasterPassEnc);
     System.out.println(strMasterPass);
   }
 }





Required .jars:

apache-commons-lang.jar
odi-core.jar, available under $ORACLE_HOME/oracledi.common/odi/lib/
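The class can then be compiled and run from a shell roughly as follows. This is a sketch only: the jar locations and the source file name are taken from the text and class above, but adjust the paths to your own installation.

```shell
# Sketch: compile and run the decryptor with the two required jars on the
# classpath. Paths are assumptions -- adjust to your ODI install. Requires the
# ODI environment, so this will not run outside it.
CP="$ORACLE_HOME/oracledi.common/odi/lib/odi-core.jar:/path/to/apache-commons-lang.jar"
javac -cp "$CP" OdiDecrypt.java
java -cp "$CP:." OdiDecrypt
```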

Thursday, September 15, 2016

ODI 12C Studio installation

Oracle Data Integrator Studio is a developer's interface for configuring and managing ODI. To install ODI, download the latest Disk 1 and Disk 2 files from OTN and extract them to a local folder.



Make sure you have Java 8 installed on your system. If you have an older version of Java and would like to update to a newer one, use the commands below from the command prompt.

Click Start and type cmd. When the cmd.exe icon appears, right-click it and select Run as administrator. To add or update system environment variables permanently:

setx -m JAVA_HOME "C:\Program Files\Java\jdk1.8.0"
setx -m PATH "%PATH%;%JAVA_HOME%\bin";

Open a command prompt as administrator and execute the installer jar file as below.

Select Standalone Installation

Click Finish to complete installation.

Open the studio from the windows start menu.

Connecting to the Master Repository


If you have installed any previous version of Oracle Data Integrator on the same computer you are currently using, you may be asked whether or not you want to import preferences and settings from those previous installations into ODI Studio. The tasks and descriptions in this section assume that no previous versions of Oracle Data Integrator exist on your computer.

To connect to the master repository:
  1. From the ODI Studio menu, select File, then select New.
    On the New gallery screen, select Create a New ODI Repository Login, then click OK.
  2. On the Oracle Data Integrator Login screen, click the plus sign (+) icon to create a new login. On the Repository Connection Information screen:
    • Oracle Data Integrator Connection section:
      • Login Name: Specify a custom login name.
      • User: Specify SUPERVISOR (all CAPS).
      • Password: Specify the Supervisor password that was specified on the Custom Variables screen in RCU.
    • Database Connection (Master Repository) section
      • User: Specify the schema user name for the Master repository. This should be prefix_ODI_REPO as specified on the Select Components screen in RCU.
      • Password: Specify the schema password for the Master repository. This was specified on the Schema Passwords screen in RCU.
      • Driver List: Select the appropriate driver for your database from the drop-down list.
      • URL: Specify the connection URL. Click on the magnifying glass icon for more information about the connection details for your driver.
    • In the Work Repository section, select Master Repository Only.

  3. Click Test to test the connection, and fix any errors. After the test is successful, click OK to create the connection.
  4. Specify and confirm a new wallet password on the New Wallet Password screen.
  5. After you have successfully created a new login, you are returned to ODI Studio.
    Select Connect to Repository and, when prompted, provide your new wallet password.
    After providing your wallet password, the Oracle Data Integrator Login screen appears. Provide the following information to log in:
    1. In the drop-down menu in the Login Name field, select the name of the new login you just created.
    2. Specify SUPERVISOR as the user name.
    3. Provide the password for the Supervisor user.

Sunday, August 7, 2016

MapReduce FileAlreadyExistsException - Output file already exists in HDFS

The exception below occurs because your output directory already exists in the HDFS file system.


 Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:/C:/HadoopWS/outfile already exists  
     at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)  
     at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)  
     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)  


You have to delete the output directory after running the job once. This can be done on the command line using the command below:

$ hdfs dfs -rm -r /pathToDirectory

If you would like to do this through Java, the code snippet below can be used. It deletes the output folder before every run of the job.

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Path output = new Path(outPath);
FileSystem hdfs = FileSystem.get(conf);  // conf is the job's Configuration
if (hdfs.exists(output)) {
    hdfs.delete(output, true);  // true = delete recursively
}



Another workaround is to pass the output directory through the command line, as below.

$ yarn jar {name_of_the_jar_file.jar} {fully_qualified_main_class} {hdfs_input_path} {output_directory_path}

If you would like to create a new output directory every time, the code below can be used.


 String timeStamp = new SimpleDateFormat("yyyy.MM.dd.HH.mm.ss", Locale.US).format(new Timestamp(System.currentTimeMillis()));
 FileOutputFormat.setOutputPath(job, new Path("/MyDir/" + timeStamp));
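To see what the snippet above generates without needing a Hadoop classpath, here is a plain-JDK illustration. The helper class and method names are hypothetical, used only for this example:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

// Hypothetical helper, for illustration only: builds the same
// "/MyDir/yyyy.MM.dd.HH.mm.ss" style output path as the snippet above.
public class TimestampedDir {
    public static String outputDir(String base, Date when) {
        String ts = new SimpleDateFormat("yyyy.MM.dd.HH.mm.ss", Locale.US).format(when);
        return base + "/" + ts;
    }

    public static void main(String[] args) {
        // Prints something like /MyDir/2016.08.07.14.03.22
        System.out.println(outputDir("/MyDir", new Date()));
    }
}
```

Because the path is unique per run, each job writes to a fresh directory and the FileAlreadyExistsException cannot occur, at the cost of accumulating old output directories that need periodic cleanup.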

Tuesday, August 2, 2016

Hadoop - Find The Largest Top 10 Directories in HDFS

Sometimes it is necessary to know which files or directories are eating up all your disk space. The script below uses standard Unix/Linux commands to find the ten largest files or directories on HDFS.


 echo -e "calculating the size to determine top 10 directories on HDFS......"  
 for dir in `hadoop fs -ls /|awk '{print $8}'`;do hadoop fs -du $dir/* 2>/dev/null;done|sort -nk1|tail -10 > /tmp/size.txt  
 echo "| ---------------------------     | -------    | ------------ | ---------   | ----------   ------ |" > /tmp/tmp  
 echo "| Dir_on_HDFS | Size_in_MB | User | Group | Last_modified Time |" >> /tmp/tmp  
 echo "| ---------------------------     | -------    | ------------ | ---------   | ----------   ------ |" >> /tmp/tmp  
 while read line;  
 do  
     size=`echo $line|cut -d' ' -f1`  
     size_mb=$(( $size/1048576 ))  
     path=`echo $line|cut -d' ' -f2`  #(Use -f3 if running on cloudera)  
     dirname=`echo $path|rev|cut -d'/' -f1|rev`  
     parent_dir=`echo $path|rev|cut -d'/' -f2-|rev`  
     fs_out=`hadoop fs -ls $parent_dir|grep -w $dirname`  
     user=`echo $fs_out|grep $dirname|awk '{print $3}'`  
     group=`echo $fs_out|grep $dirname|awk '{print $4}'`  
     last_mod=`echo $fs_out|grep $dirname|awk '{print $6,$7}'`  
     echo "| $path | $size_mb | $user | $group | $last_mod |" >> /tmp/tmp  
 done < /tmp/size.txt  
 cat /tmp/tmp | column -t  

Wednesday, November 4, 2015

Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode

This error can be seen when you try to execute the command below.

$ bin/hdfs namenode -format

The most common reason for this is that the HADOOP_PREFIX environment variable is not set, or at least not set to the correct path. To set it for the current session, execute the following, or set it permanently in your profile:

$ export HADOOP_PREFIX=/path_to_hadoop_location

For example,

$ export HADOOP_PREFIX=/u01/bigdata/hadoop
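To set it permanently in the profile, the export line can be appended there. This is a sketch assuming a Bash login shell using ~/.bash_profile; the profile file may differ on your system, and the path is the example value from above.

```shell
# Append the export to the user's profile so it survives new sessions.
# Path is the example value from above -- adjust to your install.
echo 'export HADOOP_PREFIX=/u01/bigdata/hadoop' >> ~/.bash_profile
. ~/.bash_profile
echo "$HADOOP_PREFIX"
```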

Once the value is set, running the command again should show start-up messages like the following.

-bash-4.2$ bin/hdfs namenode -format
15/10/20 14:13:07 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost/10.xx.xx.xx
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.1
STARTUP_MSG:   classpath = /u01/bigdata/hadoop/etc/hadoop:/u01/bigdata/hadoop/share/hadoop/common/
..................................................................................

Wednesday, August 12, 2015

OHS 11g Webgate for OAM 11gR2

Install Oracle HTTP Server 11g

Oracle HTTP Server is available as a web server component in Oracle Web Tier. Download Oracle Web Tier 11g from Oracle. Create a non-root user, extract the installer contents from the downloaded Oracle Web Tier zip file, and execute runInstaller.




Click Next. If you wish to install software updates, enter your credentials.




Select the Install and Configure option and click Next.



Be sure you have all the required prerequisites and then click Next.



Create a new Middleware home 



Enter your details to receive security updates.



Select Oracle HTTP Server




Specify Component Details 


Depending on your configuration, select either the Auto Port Configuration option or the Specify Ports Using Configuration File option.



Verify the installation summary and click Install

Installing Oracle HTTP Server 11g Webgate

Start the Installer by executing  ./runInstaller -jreLoc <WebTier_Home>/jdk


Click Next to continue.

Specify the Middleware Home and Oracle Home locations.



Click Install to begin the installation.




Click Finish to dismiss the installer.




Post-Installation Steps


Move to the directory <Webgate_Home>/webgate/ohs/tools/deployWebGate under your Webgate Oracle Home, and run the following command to copy the required agent files from the Webgate_Home directory to the Webgate Instance location.


For example,

-bash-4.1$ ./deployWebGateInstance.sh -w /u02/app/ssodxbstage/oracle/ohs3/instances/ohs_instance3/config/OHS/ohs3 -oh /u02/app/ssodxbstage/oracle/Oracle_OAMWebGate1

Copying files from WebGate Oracle Home to WebGate Instancedir

Run the following commands to ensure that the LD_LIBRARY_PATH variable includes the OHS library directory:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/u02/app/ssodxbstage/oracle/ohs3/lib
cd /u02/app/ssodxbstage/oracle/Oracle_OAMWebGate1/webgate/ohs/tools/setup/InstallTools

On the command line, run the following command to copy apache_webgate.template from the Webgate_Home directory to the Webgate Instance location (renamed to webgate.conf), and to update the httpd.conf file with a line that includes webgate.conf:

./EditHttpConf -w <Webgate_Instance_Directory> [-oh <Webgate_Oracle_Home>] [-o <output_file>]

-bash-4.1$ ./EditHttpConf -w /u02/app/ssodxbstage/oracle/ohs3/instances/ohs_instance3/config/OHS/ohs3 -oh /u02/app/ssodxbstage/oracle/Oracle_OAMWebGate1
The web server configuration file was successfully updated
/u02/app/ssodxbstage/oracle/ohs3/instances/ohs_instance3/config/OHS/ohs3/httpd.conf has been backed up as /u02/app/ssodxbstage/oracle/ohs3/instances/ohs_instance3/config/OHS/ohs3/httpd.conf.ORIG

Tuesday, August 11, 2015

Configure SSO for multiple EBS instances

Enterprises often need to configure SSO for multiple EBS instances, whether Dev, UAT, and Prod instances or multiple production environments, using the same access manager. In such cases multiple instances can be secured using one application domain, SSO agent, and WebGate.

Adding Policies to an existing WebGate and Application Domain



Follow the steps below to add the required policies for additional Oracle E-Business Suite integration to an existing WebGate and Application Domain.

  •     Change directories to <RREG_Home>/input.
  •     Create a new file named EBS_OAM_PolicyUpdate.xml, or use an existing one, to serve as a parameter file for the oamreg tool. Below is a sample.
 <?xml version="1.0" encoding="UTF-8"?>  
 <PolicyRegRequest>  
   <serverAddress>{protocol}://{oam_admin_server_host}:{oam_admin_server_port}</serverAddress>  
   <hostIdentifier>{Identifier for your existing WebGate}</hostIdentifier>  
   <applicationDomainName>{Identifier for your existing WebGate}</applicationDomainName>  
 </PolicyRegRequest>  


  • Replace {protocol} with either http, or https if the component has been SSL enabled.
  • Replace {oam_admin_server_host} with the fully qualified name for your OAM host.
  • Replace {oam_admin_server_port} with the weblogic administration server port (the SSL port if the Admin Server has been SSL enabled).
  • Replace {Identifier for your existing WebGate} within both the <hostIdentifier> and <applicationDomainName> elements with the Identifier for your existing WebGate.
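A filled-in version of the parameter file might look like the following; the host name, port, and WebGate identifier are made-up example values, not values from any real environment.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<PolicyRegRequest>
  <!-- Example values only: substitute your own OAM admin host/port and WebGate identifier -->
  <serverAddress>http://oamadmin.example.com:7001</serverAddress>
  <hostIdentifier>EBS_WebGate</hostIdentifier>
  <applicationDomainName>EBS_WebGate</applicationDomainName>
</PolicyRegRequest>
```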

Create a new file named ebs.oam.conf to serve as the URIs file for the oamreg tool. Change directories to <RREG_Home> and run the following command to add the new policies.


     ./bin/oamreg.sh policyUpdate input/EBS_OAM_PolicyUpdate.xml

When prompted for the admin username and password, enter the credentials for your Oracle Access Manager administrator, by default the user "weblogic".

When prompted "Do you want to import an URIs file?(y/n)", enter "y".

Enter the full path for the URIs file that you just created as <RREG_Home>/input/ebs.oam.conf.

The script should complete successfully with a request summary. Log in to the OAM console and check that the URIs were added for the new instance.

Configuring Access gate for multiple EBS Instances


The AccessGate can be deployed on dedicated managed servers, for example eag_server1 protecting ebs_instance1 and eag_server2 protecting ebs_instance2, or on the same WebLogic server with different context roots. A unique name needs to be used for each application deployment, for example ebsauth_myEBS1 and ebsauth_myEBS2. The deployment for each Oracle E-Business Suite environment is also performed from a separate file system directory, for example <MW_HOME>/appsutil/accessgate/ebsauth_myEBS1 and <MW_HOME>/appsutil/accessgate/ebsauth_myEBS2. Each Oracle E-Business Suite AccessGate application is tied to a single Apps DataSource configuration during deployment.

The entry below is required on the OHS with the WebGate for redirecting to the corresponding AccessGate.

   <Location /ebsauth_myEBS1>  
    SetHandler weblogic-handler  
    WebLogicHost eaghost.example.com  
    WebLogicPort 8099  
   </Location>  
   <Location /ebsauth_myEBS2>  
    SetHandler weblogic-handler  
    WebLogicHost eaghost.example.com  
    WebLogicPort 8099  
   </Location>  


Cleanup for Logout from Oracle E-Business Suite



On the WebTier, locate the file oacleanup.html that you copied during Oracle E-Business Suite AccessGate installation to the /public subdirectory of your htdocs root directory, for example $ORACLE_INSTANCE/config/OHS/ohs1/htdocs/public/oacleanup.html.

Edit the file and replace CONTEXT_ROOT with the value of the context root for any deployment of Oracle E-Business Suite AccessGate protected by this WebGate. For example:

<script type="text/javascript" src='/ebsauth_myEBS/ssologout_callback?mode=cleanup'></script>

Search for the following function and add a callback for each additional AccessGate deployment.

 function doLoad()
 {
   logoutHandler.addCallback('/ebsauth_myEBS/ssologout_callback');
   logoutHandler.addCallback('http://webgatehost2.example.com:7780/ebsauth_myEBS2/ssologout_callback');
 }

Friday, March 13, 2015

Oracle SOA JVM settings

Upgrading the JVM to the latest version is often required for tuning performance or for upgrading products such as SOA from 11g to 12c, which requires a higher JVM version. Even though the latest versions offer the best performance, they might not be certified with the product for support. By keeping proper backups of the files, we can always fall back to the certified JVM version, reproduce the issue, and work with support in case the latest versions are not certified. These steps apply when starting the servers using the Node Manager from the admin console.


Upgrading JRockit for SOA


Below are the steps required to upgrade JRockit to a newer version, which in our case will be from JRockit R28.2.5 to JRockit R28.3.5. All Java SE downloads are available on MOS (Doc ID 1439822.1). Since we don't want to replace the system's Java version, we downloaded and extracted the file to a location with write access (for example /u05/java/jrockit-jdk1.6.0_91). Navigate to the bin folder and run ./java -version to confirm the version of the newly installed JVM.


  • Navigate to $DOMAIN_HOME/bin folder and edit the setDomainEnv.sh file and set the new java home for the variable BEA_JAVA_HOME.
  • Another file that needs to be updated is commEnv.sh in the $WL_HOME/common/bin location.
  • If required change the nodemanager.properties file located at  $WL_HOME/common/nodemanager.
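The edits in the first two steps amount to pointing the relevant variables at the new install. A sketch follows, using the example path from above; the exact variables and their placement can differ between WebLogic versions, so treat these as assumptions and verify against your own files.

```shell
# setDomainEnv.sh ($DOMAIN_HOME/bin): point BEA_JAVA_HOME at the new JRockit.
BEA_JAVA_HOME="/u05/java/jrockit-jdk1.6.0_91"

# commEnv.sh ($WL_HOME/common/bin): keep the JRockit vendor ("Oracle" in the
# stock commEnv.sh) and update the Java home to match.
JAVA_VENDOR="Oracle"
JAVA_HOME="${BEA_JAVA_HOME}"
```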



Confirm the changes have taken effect from the node manager and the server logs as below.

Switching the JVM from JRockit to HotSpot


Since JRockit is being converged with HotSpot (as "HotRockit") and there are no more JRockit updates after JDK 6, we can also look into options like switching the JVM from one vendor to the other. Below are the steps required to switch from JRockit to HotSpot.

Download the latest JRE from Doc ID 1439822.1 and upload the tar to the location where it needs to be installed and run the below command to extract and install.

gzip -dc server-jre-8uversion-solaris-sparcv9.tar.gz | tar xf -


Edit the file commEnv.sh in the $WL_HOME/common/bin location. The variable JAVA_VENDOR needs to be set to Sun, and JAVA_HOME needs to point to the new Java home. It is very important to set the Java vendor, since many parameters are set based on the vendor type.
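A sketch of the corresponding commEnv.sh change follows; the JDK path is an example value, not a real location.

```shell
# commEnv.sh ($WL_HOME/common/bin): switch the vendor to HotSpot ("Sun")
# and point JAVA_HOME at the new JDK. The path below is an example only.
JAVA_VENDOR="Sun"
JAVA_HOME="/u05/java/jdk1.8.0"
```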




Navigate to the $DOMAIN_HOME/bin folder, edit the setDomainEnv.sh file, and set the new Java home in the variable SUN_JAVA_HOME.




Also, any VM-specific parameters that are set as arguments along with the memory arguments will need to be removed. For example, the parameter -Xgcprio:pausetime is specific to JRockit.

Verify from the server logs that the JVM has been changed.




Setting Specific memory in SOA domain


Usually there are many managed servers running in a SOA domain, and it is often required to set a specific JVM heap size for each server. This can be achieved by changing setSOADomainEnv.sh in the $DOMAIN_HOME/bin location as below.

if [ "${SERVER_NAME}" = "wls_soa1" ] || [ "${SERVER_NAME}" = "wls_soa2" ]; then
  DEFAULT_MEM_ARGS="-Xms1536m -Xmx1536m"
  PORT_MEM_ARGS="-Xms4096m -Xmx4096m"

elif [ "${SERVER_NAME}" = "" ] || [ "${SERVER_NAME}" = "AdminServer" ]; then
  DEFAULT_MEM_ARGS="-Xms1536m -Xmx1536m"
  PORT_MEM_ARGS="-Xms1536m -Xmx1536m"

elif [ "${SERVER_NAME}" = "wls_osb1" ] || [ "${SERVER_NAME}" = "wls_osb2" ]; then
  DEFAULT_MEM_ARGS="-Xms1536m -Xmx1536m"
  PORT_MEM_ARGS="-Xms1536m -Xmx1536m"

elif [ "${SERVER_NAME}" = "wls_wsm1" ] || [ "${SERVER_NAME}" = "wls_wsm2" ]; then
  DEFAULT_MEM_ARGS="-Xms768m -Xmx768m"
  PORT_MEM_ARGS="-Xms768m -Xmx768m"

else
  DEFAULT_MEM_ARGS="-Xms768m -Xmx768m"
  PORT_MEM_ARGS="-Xms768m -Xmx768m"

fi


The same pattern can also be used to assign a specific monitoring port to each server when monitoring the JVM remotely from tools like Java Mission Control.
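For example, per-server JMX arguments could be added in the same if/elif blocks. This is a sketch only: the EXTRA_JAVA_PROPERTIES variable name, the port numbers, and the disabled authentication are assumptions for illustration, and remote JMX should be secured in any real environment.

```shell
# Sketch: per-server JMX port for Java Mission Control (example ports;
# authentication/SSL disabled here only for illustration).
if [ "${SERVER_NAME}" = "wls_soa1" ]; then
  EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dcom.sun.management.jmxremote.port=7091 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
elif [ "${SERVER_NAME}" = "wls_soa2" ]; then
  EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dcom.sun.management.jmxremote.port=7092 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
fi
```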

But always setting a larger heap size is not a good idea, for many reasons, and capacity planning needs to be done before changing these values. For better performance there should be about 50 percent free heap space for the process to run. The heap can be monitored locally or remotely using tools like Java Mission Control; usage should come in below 50 percent after a full garbage collection. Below is a sample screenshot.

Any customization to the files mentioned above will be overridden if the configuration wizard is run again, and will need to be redone. As an option, these customizations can be maintained in a separate file (say setCustomEnv.sh) that is included from setDomainEnv.sh, so they are not impacted by an upgrade and can easily be added back.
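That pattern can be sketched as follows; the file name, the hook point, and the sample override are assumptions, not a fixed convention.

```shell
# setCustomEnv.sh ($DOMAIN_HOME/bin): holds local overrides, e.g. memory
# settings (values below are examples only).
CUSTOM_MEM_ARGS="-Xms1536m -Xmx1536m"

# Then, in setDomainEnv.sh, source the file if present. The include line must
# be re-added after a wizard re-run, but the custom file itself is untouched:
if [ -f "${DOMAIN_HOME}/bin/setCustomEnv.sh" ] ; then
  . "${DOMAIN_HOME}/bin/setCustomEnv.sh"
fi
```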