Tuesday, October 4, 2016

ARCHIVE LOG FILE MANAGEMENT


Archived log files are used for media recovery.
Before the content of an online redo log file is overwritten, if we take a backup of that content into the backup destination, that backup file is called an archive log file.

REDO LOG FILES ARE OF 3 TYPES:
**Online redo log files
**Archive redo log files
**Standby redo log files
As a DBA, to generate archive log files we need to put the database in archivelog mode.
By default the database will be in noarchivelog mode.
To put the database into archivelog mode use the below steps.
SQL>shut immediate;
Put the database into mount state.
SQL>startup mount;
----------------------------------------------
SQL>alter database archivelog;
---------------------------------------------
SQL>alter database open;
SQL>select log_mode from v$database;
By the above commands we are instructing Oracle to start generating archive log files for every log switch.
If we don't specify an archive destination, the default destination
$ORACLE_HOME/dbs/arch will be considered.
To specify it explicitly, as a DBA use the parameter
LOG_ARCHIVE_DEST in the pfile or spfile.
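For example, a minimal pfile entry might look like this (the path is illustrative):

log_archive_dest='/u01/app/oracle/arch'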
When a log switch occurs, Oracle performs the below 2 things:
**Generates the checkpoint event
**Generates archive log files

TO KNOW WHETHER THE DATABASE IS IN ARCHIVELOG MODE OR NOT
SQL>select log_mode from v$database;
SQL>archive log list;
SQL>select archiver from v$instance;
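For illustration, on a database in archivelog mode the output of archive log list looks roughly like this (the destination and sequence numbers are examples):

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/arch
Oldest online log sequence     10
Next log sequence to archive   12
Current log sequence           12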

NOTE:
In real time all production databases must be in archivelog mode.
MOUNT STAGE:
It is also called the maintenance stage.

TO KNOW WHETHER THE ARC BG PROCESS HAS STARTED OR NOT AT OS LEVEL
# ps -ef | grep arc0

When you open the database, Oracle automatically starts the arc0 bg process. The responsibility of this process is to wake up once for every log switch, read the entire content of the oldest online redo log (by sequence number) and copy it into the archive destination as a single file.
We can increase the count of arch bg processes for faster copying of redo log group content to the archive destination.
By default Oracle starts 2 arc bg processes in the case of 10g;
in 11g and 12c, Oracle starts 4 arch bg processes by default.
We can dynamically increase the count of arch bg processes by specifying a higher value for the parameter below.
--------------------------------------------------
LOG_ARCHIVE_MAX_PROCESSES
---------------------------------------------------
LOG_ARCHIVE_MAX_PROCESSES=2

The default value of this parameter is 2 in 10g, where we can specify a maximum value of 10, whereas in 11g and 12c you can specify a maximum value of 30
(the processes are named arc0 to arc9, then arca to arct).
SQL>alter system set log_archive_max_processes=4 scope=spfile;
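A quick check to verify the change:

SQL>show parameter log_archive_max_processes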

TO CHANGE THE LOG_ARCHIVE DESTINATION
SQL>alter system set log_archive_dest='E/arc' scope=both;

SQL>archive log list;
SQL>select group#,status,sequence#,archived from v$log;
SQL>alter system switch logfile;
You can check the files created in the E/arc directory.
SQL>show parameter log_archive_start
            TYPE        VALUE
            boolean     false
From 10g onwards the archiving process is automatic because
Oracle (10g, 11g, 12c) always assumes the
log_archive_start parameter as TRUE.

SQL>show parameter log_archive_format
The naming convention of the archive log files is decided by
the parameter log_archive_format.
The universally accepted naming convention for
archive log files is %t_%s_%r.arc (%t_%s_%r.dbf is the default)
(%t = thread number, %s = sequence number, %r = resetlogs id).

SQL>alter system set log_archive_format='%t_%s_%r.arc' scope=spfile;
SQL>select group#,status,sequence#,archived from v$log;
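With the log_archive_format set above, a generated archive log file name would look like this (the numbers are illustrative):

1_25_914242106.arc   (thread 1, sequence 25, resetlogs id 914242106)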

DATAFILES & TABLESPACES

                               DATAFILE MANAGEMENT

                              -----------------------------------------------
A datafile is a file that corresponds to a tablespace. One
datafile can be used by only one tablespace, but one tablespace can
have more than one datafile. An Oracle database consists of a
number of physical files called datafiles.
Datafiles are mandatory files inside a database.
Datafiles are used to store end-user (or business) data.
The universally accepted naming convention for datafiles is .dbf.
Datafiles are physical files.
Datafiles are made of Oracle blocks, where
each block size can be 2k, 4k, 8k, 16k or 32k; the default is 8k.
The block size is determined by the parameter db_block_size.
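A quick check of the current value:

SQL>show parameter db_block_size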
For better manageability of datafiles, Oracle introduced the
tablespace concept.
A tablespace is a logical entity backed by physical structures.
A tablespace is a collection of physical datafiles and logical tables.
End users always store data in tables;
Oracle internally stores that table data into datafile blocks
physically.
A tablespace is a collection of one or more datafiles.

Tablespaces are divided into 2 types:

MANDATORY TABLESPACES
NON-MANDATORY TABLESPACES

1>MANDATORY TABLESPACES:

-------------------------------------------
These tablespaces are mandatory because without these
tablespaces we cannot create the database.
We need to specify them during database creation.
In 9i only system is mandatory; in 10g, system and sysaux;
in 11g, system, sysaux and the undo tablespace.

SYSTEM TABLESPACE:

----------------------------------
This will be the first tablespace created during
database creation.
Whichever datafile is specified first in the create database
command will be assigned to the system tablespace.
We can never change this tablespace's name.
The system tablespace contains the data dictionary tables.

TO KNOW THE TABLESPACES CURRENTLY EXISTING IN THE DATABASE:

SQL>select  tablespace_name,status from dba_tablespaces;

By default the tablespace status is online, i.e. its datafiles are open.

By default a tablespace is created in READ WRITE mode.
If a tablespace is in READ WRITE mode, all the tables and datafiles
present under the tablespace will be in READ WRITE mode.
----------------------------------------------------------
SQL>select name,enabled from v$datafile;
----------------------------------------------------------
------------------------------
SYSAUX TABLESPACE:
--------------------------
It is a new mandatory tablespace from 10g.
It is an auxiliary tablespace for the system tablespace, introduced
to reduce the burden on the system tablespace.
From 10g, all newly introduced metadata tables are
automatically stored in the sysaux tablespace.
To know the information on all the new features which are supported
by the sysaux tablespace,
we need to query           V$SYSAUX_OCCUPANTS
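An illustrative query against this view:

SQL>select occupant_name,schema_name,space_usage_kbytes from v$sysaux_occupants;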
We cannot drop the sysaux tablespace.
The sysaux tablespace cannot be made read only; as long as it exists
inside an open database it is always in READ WRITE mode.
We cannot rename it.
The sysaux tablespace can be made offline.


---------------------------------------------

UNDO TABLESPACE:
------------------------------------
From 11g it is a mandatory tablespace.
It is basically used to support rollback operations; it holds undo data only.


2>NON-MANDATORY TABLESPACES:

------------------------------------------
These tablespaces hold end-user data in terms of tables.
We can create these tablespaces while creating the database or even
after creating the database.


SQL>create tablespace tablespacename datafile '<path>' size <size>;
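For instance, a hypothetical tablespace (the name, path and size are illustrative):

SQL>create tablespace userdata datafile '/u01/oradata/userdata01.dbf' size 100m;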


SYNTAX TO ADD DATAFILE:

------------------------------------------
SQL>alter tablespace tablespacename add datafile '<path>' size <size>;
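Continuing the illustrative example above:

SQL>alter tablespace userdata add datafile '/u01/oradata/userdata02.dbf' size 100m;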

RENAMING TABLESPACE:

SQL>alter tablespace existingtablespacename rename to newname;
NOTE: To rename a tablespace, its status must be online.

When the tablespace is empty, use this syntax:

SQL>drop tablespace tablespacename;

SQL>drop tablespace tablespacename including contents;

The tablespace and its contents are deleted, but the physical datafiles are not removed.

SQL>drop tablespace tablespacename including contents and datafiles;

The physical datafiles are also deleted.

REDO LOG FILES

                   REDO LOG FILE MANAGEMENT

Whenever any concurrent user issues a commit or rollback, Oracle immediately invokes the LGWR bg process to flush the entire content of the redo log buffer to the redo log files.

Redo log files hold the redo records which are moved from the redo log buffer.

While the log writer copies the data from the redo log buffer to the redo log files, the committing sessions wait on a wait event called 'log file sync'.

Redo log files are used for instance recovery.

While the database is opening, the SMON bg process cross-checks the start SCN value with the stop SCN value; if both do not match, SMON initiates instance recovery.

As a part of instance recovery, SMON reads and applies all redo records generated between the previous checkpoint SCN and the current SCN on top of the corresponding datafile blocks; after that it opens the database. Redo log files are also used for media recovery.


Redo log file structure:
------------------------------
A redo log file is made up of OS blocks.
The first two blocks inside a redo log file are denoted as the redo log file header.
Every redo log file in a database will have its own header.
The first block of the header contains information like the OS block size, the total number of blocks allocated to that file, and the OS name.
The second block of the header contains information like the database name, database id, thread number (1), start SCN, start time, end SCN, end time etc.
All the remaining blocks inside a redo log file are used for holding the redo records.
If a redo record is smaller than a pair of OS blocks, it can be accommodated perfectly in that pair; otherwise, if the redo record size is very high, it can span across multiple OS blocks.
Structure of a redo record:
-------------------------------
Every redo record will have its own header; the first 20 bytes of space are reserved for the header.
Every redo record holds the below information:
thread number, SCN number and redo byte address (RBA).
The thread number is always 1 (in a single-instance database).
The SCN number indicates at what time the record was generated.
The RBA specifies the address of the redo record.
The RBA is a combination of log sequence number (LSN) + OS block number + offset.
LOG SEQUENCE NUMBER:
----------------------------------------
When we start the database for the first time the log sequence number will be 1; for every log switch the LSN keeps incrementing by 1.
The log sequence number is used to identify a group of redo records inside a database.
The LSN plays a major role in case of recoveries.
There are 3 types of log switches:
1>Automatic
2>Manual
3>Time-based log switch
Automatic:
When a redo log file is completely full, Oracle automatically switches to the next redo log file.

Manual:
As a DBA we can issue the below command to force an immediate log switch.
SQL>alter system switch logfile;

Time-based logswitch:
To perform a log switch based on time we use the dynamic parameter archive_lag_target.
SQL>alter system set archive_lag_target=1800 scope=both;
This parameter guarantees a log switch within the time (in seconds) specified by it.
In real time a log switch then occurs every 30 minutes, i.e. 48 log switches in a day.
The log writer cannot write into corrupted redo log files.
If it is unable to write, the database will go into a hang state. To overcome this problem we multiplex the redo log files.
If you maintain more than one redo log file, even if one redo log file gets corrupted Oracle can still operate with the remaining good files.
To bring all redo log files under a single unit we use the concept called a redo log group.
A redo log group is a logical entity which is a collection of one or more redo log files.
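As a sketch, a new multiplexed group with two members on different disks could be added like this (the group number, paths and size here are illustrative assumptions):

SQL>alter database add logfile group 4 ('/u01/oradata/redo04a.log','/u02/oradata/redo04b.log') size 50m;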







Friday, September 30, 2016

RMAN Restore from Production Server to UAT or DEV Server

First collect the refresh details. For example:
  IP ADDRESS                        DBNAME
  PROD --> 192.168.0.9              PRODDB
  TEST --> 192.168.0.15             PRODDB

STEPS:-       
                               PRODUCTIONS
      1)      Take Backup from Production with RMAN Backup
      2)      Copy the Parameter file and Password file from Production to Test environment Through SCP Command
      3)      Move the backups from Production to Test environment

                TESTING  

     1)      Make the directories adump, dpdump and cdump
     2)      Edit password and parameter file
     3)      Startup nomount
     4)      Connect rman utility
     5)      Restore controlfile from backup path
     6)      Alter database mount
     7)      Run catalog start with in the RMAN utility
     8)      Restore database and  Recover database
     9)      Open resetlogs

                               Screen Shots
PRODUCTIONS

     1)      Take Backup from Production with RMAN Backup 
I have set the backup path to '/ftdatagb/rman/%u.bkp' (see the command sketch at the end of this production section).


2) Copy the Parameter file and Password file from Production to Test environment   Through SCP Command 

3) Move the backups from Production to Test environment 

                      Now I am done on the production server side.
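A minimal sketch of the production-side commands, assuming the backup path shown above and the test-server IP from the refresh details (the init.ora and password file names follow the usual $ORACLE_HOME/dbs naming for the DB name PRODDB):

RMAN> backup database format '/ftdatagb/rman/%u.bkp';
RMAN> backup archivelog all format '/ftdatagb/rman/%u.bkp';
RMAN> backup current controlfile format '/ftdatagb/rman/ctl_%u.bkp';
$ scp $ORACLE_HOME/dbs/initPRODDB.ora oracle@192.168.0.15:/tmp/
$ scp $ORACLE_HOME/dbs/orapwPRODDB oracle@192.168.0.15:/tmp/
$ scp /ftdatagb/rman/*.bkp oracle@192.168.0.15:/ftdatagb/rman/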



                                              TESTING  
       1)      Make a directory like /ebdata/FTPDB
Under FTPDB create three directories: adump, dpdump and cdump
        2)      Edit password and parameter file
 
         You must set the two parameters db_file_name_convert and log_file_name_convert (see the sketch at the end of this section)
        3)      Startup nomount
        4)      Connect to the RMAN utility
   5)      Restore the controlfile from the backup path
    6)      Alter database mount
     7)      Run catalog start with in the RMAN utility
           After cataloging the backup location, check whether all the datafiles are there or not

         8)      Restore database and  Recover database

                Finally I got an error again.
           Solution for error ORA-00349: rename the logfiles

           9)      Open resetlogs

            Solution for this error: ORA-00312
             Solution for error ORA-39700:
            Run the script $ORACLE_HOME/rdbms/admin/catupgrd.sql
            SQL> shut immediate
            SQL> startup
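Putting the test-side steps together, a rough sketch of the commands; the convert paths, backup piece name and backup location here are illustrative assumptions, not the exact values from the screenshots:

# pfile entries (the source path is an assumed production path)
*.db_file_name_convert='/proddata/PRODDB','/ebdata/FTPDB'
*.log_file_name_convert='/proddata/PRODDB','/ebdata/FTPDB'

SQL> startup nomount
$ rman target /
RMAN> restore controlfile from '/ftdatagb/rman/ctl_0abcde.bkp';
RMAN> alter database mount;
RMAN> catalog start with '/ftdatagb/rman/';
RMAN> restore database;
RMAN> recover database;
SQL> alter database open resetlogs;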
    
                                             Completed task ………

Wednesday, July 27, 2016

Installing Oracle Database software is one of our regular activities as DBAs.

One of the pre-requisites we implement is to set up appropriate kernel parameters at the operating system level on UNIX platforms. But it would be really awesome to understand the reason behind using those parameters, since incorrect values of these parameters can lead to performance issues in the database as well.
In the Oracle installation documents, the list of parameters to set and their respective values is clearly advised.


List of Parameters:

Below is the list of parameters Oracle advises in the documents to set up on a Linux 64-bit environment. We will take this set of parameters in this blog and understand them in detail.

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 4294967295

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576
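These values typically live in /etc/sysctl.conf; a quick sketch of applying and verifying them with standard Linux commands:

# vi /etc/sysctl.conf        <-- add the parameter lines above
# /sbin/sysctl -p            <-- apply the changes without a reboot
# /sbin/sysctl -a | grep shm <-- verify the shared memory values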

Categories:

These parameters can be categorized into 3 sections as the first part of the name says.

fs – File handles: All possible limitations in handling files.
kernel – Kernel specifics: Limitations on resource usage at kernel level like Memory, CPU etc…
net – Network specifics: Limitations on network usage.
Let us explore:

1.fs:

fs.aio-max-nr – This parameter defines the maximum number of asynchronous I/O calls that the system can handle on the server, while aio-nr shows the number of calls the system has at that moment.

If this parameter value is insufficient for Oracle Database, then the possible error that you see in alert log will be:

ORA-27090: Unable to reserve kernel resources for asynchronous disk I/O

fs.file-max – This parameter defines the maximum number of file handles, meaning how many opened files the system can support at any instant.

It is recommended to have a minimum value of 256 for every 4MB of RAM you have. So for 8GB RAM: 8GB / 4MB = 2048, and 2048 * 256 = 524288.

So if you are growing the RAM on the server, consider re-checking this parameter.

2.kernel:

SHMMNI, SHMALL, SHMMAX – Before we describe each one of these: all of them define the limitations on using shared memory on the server. With respect to UNIX, shared memory is just memory segments shared between multiple application processes on the server, and the Oracle SGA is one of them.

SHMMNI – It sets the maximum number of shared memory segments that the server can handle. As Oracle recommends the value be at least 4096, this means we cannot find more than 4096 shared memory segments at any instant on the server. Note that the SHMMNI value is a plain count.

SHMALL – It defines the total amount of shared memory PAGES that can be used system-wide. To make use of all the physical memory without over-committing it, this value should be less than or equal to the total physical memory size expressed in pages. For DBAs, it means that the sum of all SGA sizes on the server should be less than or equal to the SHMALL value. Note that the SHMALL value is a number of pages.

SHMMAX – It defines the maximum size of one shared memory segment that the server can allocate. Note that the SHMMAX value is in bytes. Oracle recommends that this value be greater than half of the physical memory on the server.

Case study:

Let us run through a case study to understand these parameter effects better.

Consider you have a server with 8GB of physical memory (RAM). Let's define the best possible SHMMNI, SHMALL and SHMMAX values for this system.

SHMMNI – No change; it should be 4096. It must be increased only if you have more than roughly a quarter of that number (1024) of Oracle databases running on the server, which we never recommend.

SHMALL – By default the page size on Linux is 4KB. The total size of RAM is 8GB. Let us leave at least 1GB of RAM for the Linux kernel to run, with which 7GB can be used for Oracle databases. Now the value of SHMALL can be:

(7*1024*1024)KB / 4KB = 1835008 pages
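You can confirm the 4KB page-size assumption on your own server with a standard Linux command:

$ getconf PAGE_SIZE
4096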

SHMMAX – If you want the maximum size of an SGA on this server to be 5GB, then this parameter value should be 5*1024*1024*1024 = 5368709120 bytes. This, in turn, says that you should not have any database with more than 5GB of SGA. But you can have multiple databases, each with 5GB of SGA or even less. This is why Oracle recommends having this value at more than half of the memory, to utilize it for the SGA(s).

By chance, if your SGA size is more than 5GB, say 7GB, then 2 shared memory segments will be allocated to the SGA, one of 5GB and one of 2GB, which doesn't perform well.

3.net:

net.ipv4.ip_local_port_range – This parameter defines the range of port numbers that the system can use for programs which want to connect to the server without a specific port number.

Now it makes sense if you have come across somebody advising you not to use port numbers beyond 9000 for the listener. Also, just look back at the documents on OEM installation: Oracle uses and advises default port numbers less than 9000, as I have observed.

net.core.rmem – These parameters define the default and maximum RECEIVE socket memory through TCP.

net.core.wmem – These parameters define the default and maximum SEND socket memory through TCP.

Oracle recommends setting these values because, by default, Linux is not tuned to transfer or receive large files over TCP. These parameters are pretty important to set considering the amount of data that flows between the database and the application: it can be BLOB, CLOB or Data Guard redo transfers and so on!

“Watch the video below and see how tricky the effect of the SHMMAX parameter value can be.” You will like it.

Conclusion:

If you are creating a new Oracle database instance, don't just check the free physical memory on the server; also make sure your SHMALL, SHMMNI and SHMMAX parameters are re-configured accordingly.
When the data transfer between the application and the database is growing, run through the network parameters and see if the receive and send socket limits are the reason behind network delays.
As your database grows, so will its data files. Don't just make sure the DB_FILES parameter is set to support the number of data files; verify that the kernel parameters on file handles are also configured accordingly.

How to Prepare for DBA Interview



Preparation for DBA Interview

Nobody can give us a complete list, or even most, of the interview questions, because an interview is like a school or college exam: we don't know exactly which questions will come.


For a school or college exam we need to go through every book related to the exam topics; likewise, for any interview (not only DBA) we need to prepare ourselves and go through each and every topic related to Oracle Database.

So now the question changes from "Oracle DBA interview questions" to "How do I prepare myself for a DBA interview?"

And now, here, we explain how we can prepare ourselves for a DBA interview.

1. First, understand that we are going for a DBA interview, meaning Database Administrator.

“A person who is responsible for handling and maintaining a database is called a DBA.”
This means the DBA position is very hard, tough, sensitive and confidential.

We can divide an Oracle DBA interview into four major parts.
1. Oracle Database Administration
2. Oracle Database Backup & Recovery Administration
3. Oracle Database Performance Tuning
4. Oracle Database Advanced administration


Oracle Database Administration
1. How to install Oracle Database Software on Different platforms?
The above point includes Oracle Database Server or Client installation on different flavors like LINUX/SOLARIS/UNIX/WINDOWS etc., so whenever you are going for an interview you must go through each installation guide.

2. How to Create Database?
There are two options to create a database:
DBCA or MANUALLY

3. How to create tablespace?

4. How to configure Oracle Client to connect to Oracle Server?
For the above point we need to create a TNS entry for the database client through the NETCA tool.

5. Database User Management
Grant specific permissions on the database according to user needs: user profile creation, role management, etc.

6. Database Security
For the above point we need to set up database auditing and track user logon/logoff information.


Oracle Database Backup & Recovery Administration
1. What is your backup strategy for database backup?
It depends on many factors:
1. Database size: is it in MB, GB or TB?
2. Database type: is it OLTP or DSS?
3. Database availability: is it 24x7?

So whenever we design a backup strategy for a database, keep the above points in mind, because Oracle expert Tom Kyte said: “We can FIX everything except a BAD DATABASE BACKUP”.

2. What is your backup method for database backup?
It also depends on conditions:
1. Backup type: COLD backup or HOT backup
2. Backup method: RMAN or USER MANAGED
3. Backup interval: every day, every week or every month

Oracle Corp. recommends using the RMAN (server-managed) backup method for database backup, but we can also use USER MANAGED backup where applicable.
NOTE: Always RUN production databases in ARCHIVELOG mode.

After setting a proper backup strategy, practice database recovery, because the interviewer will surely ask questions about the recovery side.

“Database recovery depends on database backup.”
There are many different types of recovery, so we can divide the recovery area as follows.

How many types of recovery can we perform?
1. When datafile is lost
2. when controlfile is lost
3. when online redolog file is lost

The above three files are the most important files for the database; if we lose them we need to perform recovery.
1. We can perform recovery at DATABASE level when complete database is lost.
2. We can perform recovery at TABLESPACE level when tablespace is lost.
3. We can perform recovery at DATAFILE level when datafile is lost.
4. We can perform recovery at DATA BLOCK level when any block is corrupted.
5. We can multiplex the CONTROLFILE and REDOLOG files to protect against loss.

And suppose we still lose all controlfiles or redolog files; then recovery depends on many conditions.

NOTE: Before going for an interview you must go through the Backup and Recovery User's Guide.

Oracle Database Performance Tuning Administration
This is the third big area where most of the DBA questions come from.

For database tuning we need years of experience, but we can start from the bottom.
1. What are database activities?
2. What are the database memory areas, i.e. the SGA and PGA sizes?
3. What are the database background processes like DBWR, LGWR, CKPT, SMON, PMON etc.?

For the above questions Oracle provides some diagnostic tools:
1. STATSPACK or AWR report
2. AUTOTRACE
3. SQL trace
4. Tkprof

The above are some tools used to find out the bottlenecks behind slow database performance, so we must know the below things about these tools:
1. How to use them?
2. How to understand their output?
3. How to use these tools to solve database performance issues?

The above questions are simple but the answers are difficult, because the answers require practical, years-long experience with databases.
Note: Search on Google or visit the OTN site to get started.

Oracle Database Advanced Administration
Oracle Advanced Features
1. Data Guard
2. Partitioning
3. Stream
4. RAC system
5. ASM
6. Others

The interviewer may ask questions on the above points, but it also depends on the DBA position, because the above points are included for senior-level positions.

The end of the story is…
For an interview we need good knowledge about Oracle DB and experience, because if we read thousands of interview-related questions without good knowledge or experience, we may clear the interview but we can't become a good DBA.