Monday, 24 April 2017

Check a Port on CentOS 7 or Red Hat 7

Check the firewall settings
[root@gukapi]# firewall-cmd --list-ports
9020/tcp 80/tcp 443/tcp
[root@gukapi]# firewall-cmd --list-all
  target: default
  icmp-block-inversion: no
  services: ssh dhcpv6-client gukapi
  ports: 9020/tcp 80/tcp 443/tcp
  masquerade: no
  rich rules:

Check whether the port is listening.

[root@gukapi services]# netstat -anp | grep 9020
[root@gukapi services]#
[root@gukapi services]# ps -ef | grep dotnet
root     10000  8127  0 15:08 pts/1    00:00:00 grep --color=auto dotnet

Start the application

[root@gukapi]# dotnet WhatsApp.API.d
WhatsApp.API.deps.json  WhatsApp.API.dll
[root@gukapi]# dotnet WhatsApp.API.dll
Hosting environment: Production
Content root path: /root/dotname/gukapi/
Now listening on: http://localhost:9020
Application started. Press Ctrl+C to shut down.
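The app above runs in the foreground and stops when you press Ctrl+C or log out. One common way to keep it running is nohup; a minimal sketch, reusing the path and dll name from the session above:

```shell
cd /root/dotname/gukapi/
# Run detached from the terminal; stdout/stderr go to dotnet.log.
nohup dotnet WhatsApp.API.dll > dotnet.log 2>&1 &
```

For anything beyond a quick test, a systemd unit is the more robust choice, but nohup is enough to verify the port stays up after logout.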

Check the port again. We can also test with the telnet command.

 --- Using netstat
[root@gukapi services]# cd /root/dotname/gukapi/
[root@gukapi]# netstat -anp | grep 9020
tcp        0      0*               LISTEN      10018/dotnet
tcp6       0      0 ::1:9020                :::*                    LISTEN      10018/dotnet

  --- Using Telnet
[root@gukapi]# telnet localhost 9020
Trying ::1...
Connected to localhost.
Escape character is '^]'.
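If the port had not appeared in the firewall list, it could be opened with firewall-cmd. A minimal sketch (port 9020 as used above; the --permanent/--reload pair is one common way to make the change survive restarts):

```shell
# Add TCP port 9020 to the permanent configuration of the default zone.
firewall-cmd --add-port=9020/tcp --permanent
# Re-read the permanent configuration so the rule takes effect now.
firewall-cmd --reload
# Confirm the port is listed.
firewall-cmd --list-ports
```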

Wednesday, 12 November 2014

Replace Keyword in CLOB Column

We got a really interesting scenario for our company's busiest website. We wanted to replace the word 'GCSE's' with 'GCSEs'. The interesting thing is that the text is stored in CLOB columns of a few tables. Somehow we managed to find the tables and their CLOB columns.

We then used the following two procedures to replace the text. The good thing about this script is that it works even where the column content is longer than 32k. In this scenario we call the SEARCH_N_REPLACE_CLOB procedure from the SEARCH_PRO procedure.

@@@@@@@@@@@@@@@ SEARCH_N_REPLACE_CLOB  procedure @@@@@@@@@@@@@@@@@@@

create or replace
PROCEDURE search_n_replace_clob (dest_lob IN OUT CLOB, search_str VARCHAR2, replace_str VARCHAR2)
IS
  temp_clob        CLOB;
  end_offset       INTEGER := 1;
  start_offset     INTEGER := 1;
  occurence        NUMBER  := 1;
  replace_str_len  NUMBER  := LENGTH(replace_str);
  temp_clob_len    NUMBER  := 0;
  dest_lob_len     NUMBER  := 0;
BEGIN
  IF dest_lob IS NULL THEN
    RAISE_APPLICATION_ERROR(-20001, 'Destination LOB is empty');
  END IF;

  IF DBMS_LOB.ISOPEN(dest_lob) = 0 THEN
    DBMS_LOB.OPEN(dest_lob, DBMS_LOB.LOB_READWRITE);
  END IF;

  -- Build the result in a temporary CLOB, then copy it back at the end.
  DBMS_LOB.CREATETEMPORARY(temp_clob, TRUE);

  LOOP
    end_offset := DBMS_LOB.INSTR(dest_lob, search_str, 1, occurence);

    IF end_offset = 0 THEN
      -- No more matches: append the remainder of the source and stop.
      temp_clob_len := DBMS_LOB.GETLENGTH(temp_clob);
      dest_lob_len  := DBMS_LOB.GETLENGTH(dest_lob) - start_offset + 1;

      IF dest_lob_len > 0 THEN
        DBMS_LOB.COPY(temp_clob, dest_lob, dest_lob_len, temp_clob_len + 1, start_offset);
      END IF;
      EXIT;
    END IF;

    temp_clob_len := DBMS_LOB.GETLENGTH(temp_clob);

    -- Copy the text between the previous match and this one.
    IF (end_offset - start_offset) > 0 THEN
      DBMS_LOB.COPY(temp_clob, dest_lob, (end_offset - start_offset), temp_clob_len + 1, start_offset);
    END IF;

    start_offset := end_offset + LENGTH(search_str);
    occurence := occurence + 1;

    -- Append the replacement string in place of the match.
    IF replace_str IS NOT NULL THEN
      DBMS_LOB.WRITEAPPEND(temp_clob, replace_str_len, replace_str);
    END IF;
  END LOOP;

  -- If the replacement is shorter (or NULL), trim the destination first.
  IF LENGTH(search_str) > LENGTH(replace_str) OR LENGTH(replace_str) IS NULL THEN
    DBMS_LOB.TRIM(dest_lob, DBMS_LOB.GETLENGTH(temp_clob));
  END IF;

  IF DBMS_LOB.GETLENGTH(temp_clob) > 0 THEN
    DBMS_LOB.COPY(dest_lob, temp_clob, DBMS_LOB.GETLENGTH(temp_clob), 1, 1);
  END IF;

  DBMS_LOB.FREETEMPORARY(temp_clob);
END;
/

@@@@@@@@@@@@@@@@ SEARCH_PRO Procedure @@@@@@@@@@@@@@@@@@@@@@@@@@

create or replace
PROCEDURE search_pro
IS
  l_amt    NUMBER DEFAULT 99900000;
  l_length NUMBER := 0;
  t_clob   CLOB;
  -- Use one cursor per run; the first is active here as an example,
  -- the others are kept for reference.
  cursor c1 is select C from test FOR UPDATE;
  --cursor c1 is select CUSTOM_VAL from test2  where custom_val like '%GCSE''s%' FOR UPDATE;
  --cursor c1 is select CUSTOM_VAL from HEM.SQ_AST_ATTR_VAL where custom_val like '%GCSE''s%' FOR UPDATE;
  --cursor c1 is select value from HEM.SQ_AST_MDATA_VAL where value like '%GCSE''s%' FOR UPDATE;
  --cursor c1 is select XML from HEM.SQ_FNB_IDX where XML like '%GCSE''s%' FOR UPDATE;
  --cursor c1 is select value from HEM.SQ_SCH_IDX where value like '%GCSE''s%' FOR UPDATE;
BEGIN
  OPEN c1;
  LOOP
    FETCH C1 INTO t_clob;
    EXIT WHEN c1%NOTFOUND;

    l_length := DBMS_LOB.GETLENGTH(t_clob);

    -- Only touch rows that fit within the sanity limit.
    if (l_amt > l_length) then
      search_n_replace_clob(t_clob, 'GCSE''s', 'GCSEs');
      -- dbms_output.put_line( to_char(t_clob) );
    end if;
  END LOOP;
  CLOSE c1;
  COMMIT;
END;
/

@@@@@@@@@@@@@@@@@@ EXECUTE PL/SQL Block  @@@@@@@@@@@@@@@@@@@

set serveroutput on

DECLARE
  dest_lob CLOB;
BEGIN
  --SELECT c INTO dest_lob FROM test WHERE key = 4 FOR UPDATE;
  --search_n_replace_clob(dest_lob, 'GCSE''s', 'GCSEs');
  search_pro;
END;
/


Thursday, 6 November 2014

RMAN: check all the backups in one go

Do the following from RMAN while connected to the target. If the preview fails then so will the duplication.

Do the following so we will know your RMAN configuration. 

To get the correct backup times, make sure NLS_DATE_FORMAT is set.

If using csh:

setenv NLS_DATE_FORMAT "DD-MON-YYYY HH24:MI:SS"

If using ksh use:

NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'; export NLS_DATE_FORMAT


spool log to preview.log; 
set echo on; 

show all; 

list incarnation of database; 

run { 

set until time "to_date('25-09-2014 20:00:00','DD-MM-YYYY HH24:MI:SS')"; 
allocate channel d1 type='sbt_tape'; 
restore preview database controlfile spfile;} 

list backup summary; 

list backup of database;

list backup of archivelog all;

list backup of controlfile; 

list archivelog all; 

report schema; 

list copy of database; 

-- Upload preview.log 

RMAN 11GR2 : DUPLICATE Without Target And Recovery Catalog Connection ( Doc ID 874352.1 ) 

Thursday, 17 July 2014

Search Text in Varchar2 Column

prodb]$ cat find_value.sql

DECLARE
  match_count INTEGER;

-- Type the owner of the tables you are looking at
  v_owner VARCHAR2(255) :='MATRIX';

-- Type the data type you are looking at (in CAPITALS)
  v_data_type VARCHAR2(255) :='VARCHAR2';

-- Type the string you are looking for
  v_search_string VARCHAR2(4000) :='GCSE';

BEGIN
  FOR t IN (SELECT table_name, column_name FROM all_tab_cols where owner=v_owner and data_type = v_data_type  ) LOOP
-- dbms_output.put_line(t.table_name);
    EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM '||t.table_name||' WHERE '||t.column_name||' = :1'
    INTO match_count
    USING v_search_string;
  --  dbms_output.put_line(t.table_name);
    IF match_count > 0 THEN
      dbms_output.put_line( t.table_name ||' '||t.column_name||' '||match_count );
    END IF;
  END LOOP;
END;
/


Wednesday, 9 April 2014

Configure SHMMAX and SHMALL in Linux

SHMMAX and SHMALL are two key shared memory parameters that directly impact the way Oracle creates an SGA. Shared memory is part of the Unix IPC (Inter-Process Communication) system maintained by the kernel, where multiple processes share a single chunk of memory to communicate with each other.

While trying to create an SGA during database startup, Oracle chooses one of three memory management models: a) one-segment, b) contiguous multi-segment, or c) non-contiguous multi-segment. Which model is adopted depends on the size of the SGA and the values defined for the shared memory parameters in the Linux kernel, most importantly SHMMAX.

So what are these parameters - SHMMAX and SHMALL?

SHMMAX is the maximum size of a single shared memory segment set in “bytes”.

silicon:~ #  cat /proc/sys/kernel/shmmax


SHMALL is the total size of shared memory segments system-wide, set in "pages".

silicon:~ #  cat /proc/sys/kernel/shmall


The key thing to note here is that the value of SHMMAX is set in "bytes" but the value of SHMALL is set in "pages".

What’s the optimal value for SHMALL?

As SHMALL is the total size of shared memory segments system-wide, it should always be less than the physical memory on the system and greater than the sum of the SGAs of all the Oracle databases on the server. Once the sum of SGAs hits this limit, i.e. the value of shmall, any attempt to start a new database (or even an existing database with a resized SGA) will fail with the "out of memory" error below. This is because there are no more shared memory pages that Linux can allocate for the SGA.

ORA-27102: out of memory

Linux-x86_64 Error: 28: No space left on device.

So above can happen for two reasons. Either the value of shmall is not set to an optimal value or you have reached the threshold on this server.

Setting SHMALL to an optimal value is straightforward. All you need to know is how much physical memory (excluding cache/swap) the system has, how much of it should be set aside for the Linux kernel, and how much should be dedicated to the Oracle databases.

For example, say the physical memory of a system is 6GB, out of which you want to set aside 1GB for the Linux kernel and OS operations, and dedicate the remaining 5GB to Oracle databases. Here's how you get the value for SHMALL.

Convert this 5GB to bytes and divide by the page size. Remember, SHMALL should be set in "pages", not "bytes".

So here goes the calculation.

Determine the page size first. In my case it's 4096, which is the recommended default in most cases and can be kept the same.

silicon:~ # getconf PAGE_SIZE



silicon:~ # cat /proc/sys/kernel/shmmni

Convert 5GB into bytes and divide by the page size; I used bc to do the math.

silicon:~ # echo "( 5 * 1024 * 1024 * 1024 ) / 4096 " | bc -l
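The same arithmetic can also be done with plain shell arithmetic. A small sketch using the figures from the example above (5GB dedicated to Oracle, 4096-byte pages):

```shell
# SHMALL is set in pages: bytes dedicated to Oracle divided by the page size.
ORACLE_BYTES=$(( 5 * 1024 * 1024 * 1024 ))   # 5GB for Oracle databases
PAGE_SIZE=4096                               # from `getconf PAGE_SIZE`
SHMALL=$(( ORACLE_BYTES / PAGE_SIZE ))
echo "$SHMALL"                               # 1310720
```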


Reset shmall and load it dynamically into kernel

silicon:~ # echo "1310720" > /proc/sys/kernel/shmall
silicon:~ # sysctl -p

Verify if the value has been taken into effect.

silicon:~ # sysctl -a | grep shmall
kernel.shmall = 1310720

Another way to look this up is

silicon:~ # ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096                          /* SHMMNI  */
max seg size (kbytes) = 524288                  /* SHMMAX  */
max total shared memory (kbytes) = 5242880      /* SHMALL  */
min seg size (bytes) = 1

To keep the value effective after every reboot, add the following line to /etc/sysctl.conf

echo "kernel.shmall = 1310720" >> /etc/sysctl.conf

Also verify that sysctl.conf is enabled and will be read during boot.

silicon:~ # chkconfig boot.sysctl
boot.sysctl  on

If it returns "off", it means it's disabled. Turn it on by running

silicon:~ # chkconfig boot.sysctl on
boot.sysctl  on

What’s the optimal value for SHMMAX?

Oracle makes use of one of the three memory management models to create the SGA during database startup, and it tries them in the following sequence. First Oracle attempts the one-segment model; if this fails, it proceeds with the contiguous multi-segment model; and if that fails too, it goes with the last option, the non-contiguous multi-segment model.

So during startup it looks at the shmmax parameter and compares it with the initialization parameter *.sga_target. If shmmax > *.sga_target, then Oracle goes with the one-segment model, where the entire SGA is created within a single shared memory segment.

The one-segment attempt fails if the SGA size (*.sga_target) > shmmax; Oracle then proceeds with the second option, the contiguous multi-segment model. Contiguous allocations, as the name indicates, are a set of shared memory segments that are contiguous in memory, and if Oracle can find such a set, the entire SGA is created to fit within it.

But if it cannot find a set of contiguous allocations, the last of the three options is chosen: non-contiguous multi-segment allocation, in which Oracle has to grab free memory segments fragmented between used spaces.

So if you know the maximum SGA size of any database on the server stays below 1GB, you can set shmmax to 1GB. But if the SGA sizes of different databases range between 512MB and 2GB, set shmmax to 2GB, and so on.

Like SHMALL, SHMMAX can be defined by one of these methods.

Dynamically reset and reload it into the kernel:

silicon:~ #  echo "536870912" >  /proc/sys/kernel/shmmax

silicon:~ #  sysctl -p           -- Dynamically reload the parameters.

Or use sysctl to reset and reload:

silicon:~ #  sysctl -w kernel.shmmax=536870912

To set it permanently so it's effective across reboots:

silicon:~ #  echo "kernel.shmmax=536870912" >>  /etc/sysctl.conf

The install doc for 11g recommends setting shmmax to "4GB – 1 byte" or half the size of physical memory, whichever is lower. I believe "4GB – 1 byte" relates to the limitation on 32-bit (x86) systems, where the virtual address space of a user process can only be a little less than 4GB. As there's no such limitation on 64-bit (x86_64) systems, you can define SGAs larger than 4GB. But the idea here is to let Oracle use the efficient one-segment model, and for this shmmax should stay higher than the SGA size of any individual database on the system.
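The "lower of 4GB – 1 byte or half of physical memory" rule can be sketched in shell. The 6GB physical memory figure is carried over from the earlier example and hard-coded here purely for illustration:

```shell
# shmmax = min(4GB - 1 byte, physical memory / 2), per the 11g install doc.
PHYS_BYTES=$(( 6 * 1024 * 1024 * 1024 ))          # the 6GB box from the example
FOUR_GB_MINUS_1=$(( 4 * 1024 * 1024 * 1024 - 1 ))
HALF_PHYS=$(( PHYS_BYTES / 2 ))
if [ "$HALF_PHYS" -lt "$FOUR_GB_MINUS_1" ]; then
  SHMMAX=$HALF_PHYS
else
  SHMMAX=$FOUR_GB_MINUS_1
fi
echo "$SHMMAX"                                    # 3221225472 (half of 6GB)
```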

Friday, 7 February 2014

Troubleshooting gc block lost and Poor Network Performance in a RAC Environment (Doc ID 563566.1)


In Oracle RAC environments, RDBMS gathers global cache work load statistics which are reported in STATSPACK, AWRs and GRID CONTROL. Global cache lost blocks statistics ("gc cr block lost" and/or "gc current block lost") for each node in the cluster as well as aggregate statistics for the cluster represent a problem or inefficiencies in packet processing for the interconnect traffic. These statistics should be monitored and evaluated regularly to guarantee efficient interconnect Global Cache and Enqueue Service (GCS/GES) and cluster processing. Any block loss indicates a problem in network packet processing and should be investigated.

The vast majority of escalations attributed to RDBMS global cache lost blocks can be directly related to faulty or mis-configured interconnects. This document serves as a guide for evaluating and investigating common (and sometimes obvious) causes.

Even though much of the discussion focuses on performance issues, these problems can also cause a node/instance eviction. Oracle Clusterware and Oracle RAC instances rely on heartbeats for node membership. If network heartbeats are consistently dropped, an instance/node eviction may occur. The symptoms below are therefore also relevant to node/instance evictions.


  • "gc cr block lost" / "gc current block lost" in top 5 or significant wait event

  • SQL traces report multiple gc cr requests / gc current requests / gc cr multiblock requests with long and uniform elapsed times
  • Poor application performance / throughput
  • Packet send/receive errors as displayed in ifconfig or vendor supplied utility
  • Netstat reports errors/retransmits/reassembly failures
  • Node failures and node integration failures
  • Abnormal CPU consumption attributed to network processing
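Several of these symptoms can be checked directly at the OS level. A hedged sketch of typical commands (the interface name eth1 is a placeholder for your interconnect NIC, and the exact counter names vary by distribution, so treat the greps as a starting point):

```shell
# Interface-level send/receive errors and drops on the interconnect NIC.
ifconfig eth1 | grep -i -E 'errors|dropped'
# UDP statistics: RAC interconnect traffic is normally UDP.
netstat -su
# IP reassembly failures: a classic sign of fragmented interconnect packets.
netstat -s | grep -i -E 'reassembl|fragment'
```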

Troubleshooting 11.2 Clusterware Node Evictions

Troubleshooting 11.2 Clusterware Node Evictions (Note 1050693.1)

Starting with 11.2.0.2, a node eviction may not actually reboot the machine.  This is called a rebootless restart.

To identify which process initiated a reboot, review the following important files:

  • Clusterware alert log in /log/alertnodename
  • The cssdagent log(s) in /log//agent/ohasd/oracssdagent_root
  • The cssdmonitor log(s) in /log//agent/ohasd/oracssdmonitor_root
  • The ocssd log(s) in /log//cssd
  • The lastgasp log(s) in /etc/oracle/lastgasp or /var/opt/oracle/lastgasp
  • IPD/OS or OS Watcher data.  IPD/OS is an old name for the Cluster Health Monitor.  The names can be used interchangeably, although Oracle now calls the tool Cluster Health Monitor
  • 'opatch lsinventory -detail' output for the GRID home
  • Message files in /var/log/messages
Common Causes of eviction:

OCSSD eviction: 1) Network failure or latency issues between nodes.  It takes 30 consecutive missed check-ins to cause a node eviction.  2) Problems writing to or reading from the voting disk.  3) A member kill escalation: for example, the LMON process may request CSS to remove an instance from the cluster via the instance eviction mechanism.  If this times out, it can escalate to a node eviction.

CSSDAGENT or CSSDMONITOR eviction: 1) An OS scheduler problem, as a result of the OS being locked up or excessive load on the server, such as CPU utilization at 100%.  2) The CSS process is hung.  3) An Oracle bug.