SQL Server Warning Fatal Error 7105: Reasons And Solutions

MS SQL Server supports Large Object (LOB) data types for storing large amounts of data. These data types include Binary LOB (BLOB), Character LOB (CLOB), and Double-byte Character LOB (DBCLOB). They use a storage structure that is distinct from normal data types. In certain situations, Microsoft SQL Server is unable to access LOB data referenced by a database page. Users then receive SQL error 7105 while accessing this data, and the process terminates. This is a critical situation, and the resulting data loss may require the SQL database to be repaired.

Consequences of SQL Server Error 7105

Users get “Msg 7105” when a LOB value referenced by a SQL database page row cannot be accessed; SQL Server raises an error message reporting the failure.

Because of the high severity of the error, SQL Server terminates the connection. The same error message also appears in the Windows Application Event Log and the SQL Server ERRORLOG, along with an Event ID.

Causes Behind Error 7105 in SQL Server

The following reasons can trigger the error:

  • Corruption inside the LOB page structure of the database.
  • The failing query uses the NOLOCK or READ UNCOMMITTED isolation query hint.
  • A problem inside the SQL Server engine itself, which causes the database query to fail.

Solutions to Resolve Error Message

The following workarounds can be used to resolve the error:

  • Run DBCC CHECKDB on the SQL Server database
  • DBCC CHECKDB is a built-in utility for checking and repairing a SQL Server database. Run it first without any repair clause to assess the level of damage, then run it again with the recommended repair clause to fix SQL Server fatal error 7105 (see the T-SQL sketch below). Keep in mind that the repair option may cause data loss, which is often not acceptable.

  • Restore from backup file
  • If the above method does not resolve the error and a clean, healthy backup is available, restore the database from the backup file to regain access to the data stored in the SQL database. The restore procedure is as follows:

    • Restore the selected database by right-clicking it and choosing Tasks >> Restore >> Database.
    • Select the backup file and click OK.
    • The execution progress is displayed on the screen; wait until the process completes.
    • Once the execution finishes, a message confirming the successful restore is displayed. Click OK.

By performing the above manual steps, users can restore the SQL Server database. If no backup is available, a third-party solution can be used to fix SQL fatal error 7105.
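
A minimal T-SQL sketch of the DBCC CHECKDB approach described above (the database name is a placeholder, and REPAIR_ALLOW_DATA_LOSS is only an example; use whatever repair level CHECKDB itself recommends):

-- Check the damage level first, without any repair clause.
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS;

-- If a repair is recommended, switch to single-user mode and run the
-- recommended repair clause (REPAIR_ALLOW_DATA_LOSS can discard data).
ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('YourDatabase', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE YourDatabase SET MULTI_USER;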

Effortless Solution to Resolve SQL Server Error 7105

The most convenient solution for fixing this error is an MS SQL data recovery tool. It provides a safe and secure way to troubleshoot the error using the steps below:

Step 1: Launch the software, select the MDF file, and click Open.

Step 2: Once the files are added, the application previews the complete recovered database objects from the MDF and NDF files, such as tables, stored procedures, triggers, and views.

Step 3: Finally, choose between the two export options, SQL Server Database and SQL Server Compatible Scripts, to save the database.

At the end, the user gets a database file free of SQL error 7105.

Conclusion

Various server errors can block access to a SQL Server database. Error 7105 in particular occurs when large object data referenced by a database page cannot be accessed. This post covered the best available solutions to overcome the issue: start by restoring from a backup of the SQL database if one is available; otherwise, use one of the other solutions described above to resolve SQL Server error 7105.

How to drop all tables in a MySQL database using a single command

Command:-
mysqldump -u[user] -p[password] --add-drop-table --no-data [DATABASE] | grep ^DROP | mysql -u[user] -p[password] [DATABASE]

Here test is the database name. I'm going to drop all the tables from this database.
Example:-
mysqldump -uroot -predhat --add-drop-table --no-data test | grep ^DROP | mysql -uroot -predhat test

Know How To Resolve SQL Server Error 3271?

While backing up a database or while a restore is in progress, many SQL users encounter SQL error 3271. This type of error occurs while the system is performing an I/O operation.

The error message itself reports a nonrecoverable I/O error. In this article, we discuss the major causes of this error and the possible solutions to fix it.

Causes of Microsoft SQL Server Error 3271

Several major causes are responsible for this error. They are discussed in this section:

  • Lack of Storage Space
  • The error occurs mainly because there is not enough space on the storage media, so the backup being created cannot fit on the disk. The error message also includes additional text indicating that the storage device is full, which leads to this I/O error, for example:

    “A nonrecoverable I/O error occurred on file ‘%ls:’ %ls”

  • VSS Writer of SQL Server
  • Another major reason for the error is an issue with VSS. A Volume Shadow Copy Service (VSS) snapshot is a point-in-time copy taken through the Windows server. An error raised by any VSS writer can cause the backup operation to fail, and in turn the entire backup process. Any problem created by the SQL Server VSS writer can therefore result in SQL Server error 3271.

  • SQL Database Corruption
  • Another cause of the error is corruption in the SQL database the user is trying to back up. If the user tries to back up a corrupted SQL database, an I/O error is likely to occur, which leads to error 3271.

A user should check the SQL Server error logs to determine the exact reason for the error. Once identified, the correct measure can be applied to fix the error and prevent it from recurring.
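
As a quick sketch, the current error log can also be searched directly from T-SQL with the sp_readerrorlog procedure (undocumented but widely used; the search string '3271' is just an example):

-- Read the current SQL Server error log (log 0, type 1) and filter for 3271.
EXEC master.dbo.sp_readerrorlog 0, 1, '3271';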

How To Fix SQL Server Error 3271?

If the reason for the error is one of those mentioned above, the user should follow the measures discussed below:

  • Increase Space On Disk
    1. If the reason for the error is a lack of storage space, first check how much space is available on the disk on which the backup is to be created (see the sketch below).
    2. If no space is available, free up some space on the disk or use external storage to save the backup.
  • Resolve VSS Writer Issue
    1. If the reason for Microsoft SQL error 3271 is the VSS writer, review the instances by running a SQL instance check, because a problematic SQL instance can prevent a snapshot of the database from being taken.
    2. After identifying the instance, stop it and run the backup process without it.
    3. If the backup process then completes successfully, it is clear that this particular instance was responsible for the error. Check the SQL error log again to determine why that instance was misbehaving.
    4. If the problematic instance cannot be identified, stop all the instances on the server and then run the backup process.

Note: When you stop all the available instances, make sure the SQL Server VSS writer is not in use while the backup is being taken.

With the help of the above manual procedures, a user can fix the error and run the backup process successfully.
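
For the disk-space check mentioned in the first point above, a hedged T-SQL sketch (xp_fixeddrives is an undocumented but commonly used procedure; the database name is a placeholder):

-- Free space in MB per fixed drive on the server.
EXEC master..xp_fixeddrives;

-- Approximate size of the database that is to be backed up.
USE YourDatabase;
EXEC sp_spaceused;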

Conclusion

While taking a backup of a SQL database, a user may face an I/O error known as Microsoft SQL Server error 3271. The cause of the error needs to be determined first; then, depending on the cause, the corresponding measure can be taken to resolve it. This post discussed manual solutions for each cause. However, if the error occurs while restoring a backup, the backup file itself may be corrupt; in that situation, a third-party SQL backup recovery tool can help repair the corrupt backup file and fix the error.

Steps to Rename Physical Database Files in SQL Server

Users can manually rename the physical files using Transact-SQL commands as follows:

First, we will create a database “new_db” using a SQL query; its physical file will be named “new_db.mdf”. Our aim is to rename the SQL Server physical file from “new_db.mdf” to “old_db.mdf”. Follow the steps below to rename the file manually:

  1. First, run the following command to create a new database named “new_db”.
  2. CREATE DATABASE [new_db]
    CONTAINMENT = NONE
    ON PRIMARY
    ( NAME = 'new_db', FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\new_db.mdf', SIZE = 5MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB )

    Thus, a new database is created with the physical file “new_db.mdf”. The following steps rename the SQL Server physical file.

  3. Now, locate the physical location of the file on your system using the following command:
  4. USE new_db
    GO
    SELECT file_id, name as [logical_file_name], physical_name
    FROM sys.database_files
  5. Next, the database must be brought OFFLINE, because files cannot be renamed while the database is online. Run the following commands to take the database offline:
  6. USE [master];
    GO
    --Disconnect all existing session.
    ALTER DATABASE new_db SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    --Change database in to OFFLINE mode.
    ALTER DATABASE new_db SET OFFLINE
  7. Since the database is now offline, browse to the file location found with the sys.database_files query above and rename the file to the desired name, e.g., old_db.mdf.
  8. Now, update the system catalog with the new name of the physical file.
  9. ALTER DATABASE new_db MODIFY FILE (NAME = 'new_db', FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\old_db.mdf')
    GO

    This updates the physical file name in the system catalog.

  10. Run the following command to bring the database in ONLINE mode:
  11. ALTER DATABASE new_db SET ONLINE
    GO
    ALTER DATABASE new_db SET MULTI_USER
    GO

    The database physical file has now been renamed to “old_db.mdf” and the database is back online. Users can now work normally on the database and perform any operation.
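
    To confirm the change, the sys.database_files query from the earlier step can be re-run; the physical_name column should now point to old_db.mdf:

    USE new_db
    GO
    SELECT file_id, name AS [logical_file_name], physical_name
    FROM sys.database_files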

Conclusion

A user may need to rename a physical file in SQL Server for any number of reasons. To avoid confusion between the physical files and the database, users are advised to rename SQL Server physical database files using T-SQL. All the detailed steps are given in the section above.

Queries regarding MySQL Procedures


To find out all the procedures of a database:-

SHOW PROCEDURE STATUS WHERE Db = 'thirstydb';


To set a new definer:-
mysql> UPDATE `mysql`.`proc` p SET definer = 'thirstydev@%' WHERE definer='thirstylive@localhost' AND db='devthirstydb';
Query OK, 0 rows affected (0.02 sec)
Rows matched: 0  Changed: 0  Warnings: 0

mysql> UPDATE `mysql`.`proc` p SET definer = 'thirstydev@%' WHERE definer='thirstylive@%' AND db='devthirstydb';
Query OK, 85 rows affected (0.04 sec)
Rows matched: 85  Changed: 85  Warnings: 0

To select all the procedures and functions of a particular db:-
select name FROM mysql.proc WHERE db = 'devthirstydb' AND type = 'PROCEDURE';

select name FROM mysql.proc WHERE db = 'devthirstydb' AND type = 'FUNCTION';
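
As a side note, the same listing can also be obtained from information_schema, which is handy when direct access to the mysql.proc table is restricted (database name as above):

SELECT ROUTINE_NAME, ROUTINE_TYPE FROM information_schema.ROUTINES WHERE ROUTINE_SCHEMA = 'devthirstydb';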


To delete all the procedures and functions of a particular db:-
DELETE FROM mysql.proc WHERE db = 'devthirstydb' AND type = 'PROCEDURE';

DELETE FROM mysql.proc WHERE db = 'devthirstydb' AND type = 'FUNCTION';




Mysql procedures:-

To find out all the procedures:-

mysql> show procedure status;
+--------------+---------------------------------------+-----------+---------------+---------------------+---------------------+---------------+---------+----------------------+----------------------+--------------------+
| Db           | Name                                  | Type      | Definer       | Modified            | Created             | Security_type | Comment | character_set_client | collation_connection | Database Collation |
+--------------+---------------------------------------+-----------+---------------+---------------------+---------------------+---------------+---------+----------------------+----------------------+--------------------+
| thirstydb    | advanced_vendor_filtering             | PROCEDURE | thirstylive@% | 2016-08-12 05:13:27 | 2016-08-12 05:13:27 | DEFINER       |         | utf8                 | utf8_general_ci      | latin1_swedish_ci  |
| thirstydb    | advanced_vendor_filtering_my_bookings | PROCEDURE | thirstylive@% | 2016-08-12 05:22:29 | 2016-08-12 05:22:29 | DEFINER       |         | utf8                 | utf8_general_ci      | latin1_swedish_ci  |
| thirstydb    | advanced_vendor_filtering_temp        | PROCEDURE | thirstylive@% | 2016-08-08 08:29:11 | 2016-08-08 08:29:11 | DEFINER       |         | utf8                 | utf8_general_ci      | latin1_swedish_ci  |
| thirstydb    | all_food_categories                   | PROCEDURE | thirstylive@% | 2016-08-08 08:29:20 | 2016-08-08 08:29:20 | DEFINER       |         | utf8                 | utf8_general_ci      | latin1_swedish_ci  |
| thirstydb    | apikeyValidity                        | PROCEDURE | thirstylive@% | 2016-08-08 08:29:29 | 2016-08-08 08:29:29 | DEFINER       |         | utf8                 | utf8_general_ci      | latin1_swedish_ci  |
| thirstydb    | apikeyValidity_vendor                 | PROCEDURE | thirstylive@% | 2016-08-08 08:29:41 | 2016-08-08 08:29:41 | DEFINER       |         | utf8                 | utf8_general_ci      | latin1_swedish_ci  |
| thirstydb    | directory                             | PROCEDURE | thirstylive@% | 2016-08-19 03:00:51 | 2016-08-19 03:00:51 | DEFINER       |         | utf8


To find out a procedure by a certain definer name and dbname:-
mysql> use mysql
mysql> select name from  proc where definer = 'thirstydev@%' AND db='devthirstydb';

+---------------------------------------+
| name                                  |
+---------------------------------------+
| advanced_vendor_filtering             |
| advanced_vendor_filtering_my_bookings |
| advanced_vendor_filtering_temp        |
| all_food_categories                   |
| apikeyValidity                        |
| apikeyValidity_vendor                 |
| base_url                              |
| directory                             |
| facebook_login  


To call a procedure:-
mysql> call search_result(1);
Empty set (0.02 sec)

Query OK, 0 rows affected (0.02 sec)


To view a procedure:-

mysql> show create procedure user_login;
+------------+----------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+----------------------+--------------------+
| Procedure  | sql_mode | Create Procedure                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | character_set_client | collation_connection | Database Collation |
+------------+----------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+----------------------+--------------------+
| user_login |          | CREATE DEFINER=`thirstylive`@`%` PROCEDURE `user_login`(email_in VARCHAR(255), password_in VARCHAR(255),
api_key_in VARCHAR(50), base_url_in TEXT)
BEGIN

        DECLARE email_verified_declared INT(2);
        DECLARE user_id_declared BIGINT;

        SELECT email_verified INTO email_verified_declared FROM user_profile WHERE email = email_in AND `password` = password_in;
        SELECT user_id INTO user_id_declared FROM user_profile WHERE email = email_in AND `password` = password_in;

        IF email_verified_declared = 0 THEN

                SELECT 'unverified_email' AS  col_name;

        ELSEIF user_id_declared IS NULL THEN

                SELECT 'invalid_credentials' AS col_name;

        ELSE

                UPDATE user_profile SET api_key = api_key_in WHERE user_id = user_id_declared;
                CALL user_profile(user_id_declared, base_url_in);

        END IF;


END | utf8                 | utf8_general_ci      | latin1_swedish_ci  |
+------------+----------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------------

How to find out the highest table size in SQL Server 2014?

SELECT
    t.NAME AS TableName,
    i.name as indexName,
    sum(p.rows) as RowCounts,
    sum(a.total_pages) as TotalPages,
    sum(a.used_pages) as UsedPages,
    sum(a.data_pages) as DataPages,
    (sum(a.total_pages) * 8) / 1024 as TotalSpaceMB,
    (sum(a.used_pages) * 8) / 1024 as UsedSpaceMB,
    (sum(a.data_pages) * 8) / 1024 as DataSpaceMB
FROM
    sys.tables t
INNER JOIN    
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
WHERE
    t.NAME NOT LIKE 'dt%' AND
    i.OBJECT_ID > 255 AND  
    i.index_id <= 1
GROUP BY
    t.NAME, i.object_id, i.index_id, i.name
ORDER BY
    TotalSpaceMB desc

Mysql error-log monitoring using Shell script with mail alert

#!/bin/bash
SERNAME=xyz.kmi.com
SERPUBIP=192.168.72.50
SERVER=`hostname`
WEEKDAY=`date '+%w%H%M'`
DATE_VAR=`date '+%Y_%m_%d'`

Error_log_loc=/var/log
egrep 'Error' $Error_log_loc/mysqld.log |sort -u > $Error_log_loc/mysqld_ALERTLOG.txt
cat $Error_log_loc/mysqld.log >> $Error_log_loc/mysqld_archived.log

cat /dev/null > $Error_log_loc/mysqld.log

if [ -s "$Error_log_loc/mysqld_ALERTLOG.txt" ] ; then
cat $Error_log_loc/mysqld_ALERTLOG.txt | mail -s "URGENT -ERROR in MySQL Error Log File for $SERNAME ($SERPUBIP) at `date` " ssdas@gmail.com
fi

# Archive log rotation: datestamp and compress on selected days of the month (4, 11, 18, 25)

sysdate=`date | awk '{ print $3}'`
if [[ $sysdate -eq 4 || $sysdate -eq 11 || $sysdate -eq 18 || $sysdate -eq 25 ]] ; then
mv $Error_log_loc/mysqld_archived.log $Error_log_loc/mysqld_archived_ALERTLOG_${DATE_VAR}.log
gzip $Error_log_loc/mysqld_archived_ALERTLOG_${DATE_VAR}.log
fi
exit 0
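
To run this check periodically, schedule the script in crontab, e.g. every 15 minutes (the script path below is only an assumed example):

*/15 * * * * /backups/mysqld_errorlog_monitor.sh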

Batch script to take database users backup in sql server

SQL Server Version:- SQL 2014

Create a batch file which will be scheduled on task scheduler.

set backuplogfilename=%date:~-7,2%-%date:~-10,2%-%date:~-4,4%-0%time:~1,1%%time:~3,2%%time:~6,2%
SQLCMD.EXE -S localhost -U sa -P "sa@123" -i "E:\users.sql"  >> "E:\users_%backuplogfilename%.log"
-- save and exit

Now schedule the above .bat file in task scheduler


Create a sql file lets say users.sql
Content of users.sql

USE master
GO
IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL
  DROP PROCEDURE sp_hexadecimal
GO
CREATE PROCEDURE sp_hexadecimal
    @binvalue varbinary(256),
    @hexvalue varchar (514) OUTPUT
AS
DECLARE @charvalue varchar (514)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
  DECLARE @tempint int
  DECLARE @firstint int
  DECLARE @secondint int
  SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue,@i,1))
  SELECT @firstint = FLOOR(@tempint/16)
  SELECT @secondint = @tempint - (@firstint*16)
  SELECT @charvalue = @charvalue +
    SUBSTRING(@hexstring, @firstint+1, 1) +
    SUBSTRING(@hexstring, @secondint+1, 1)
  SELECT @i = @i + 1
END

SELECT @hexvalue = @charvalue
GO

IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
  DROP PROCEDURE sp_help_revlogin
GO
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL AS
DECLARE @name sysname
DECLARE @type varchar (1)
DECLARE @hasaccess int
DECLARE @denylogin int
DECLARE @is_disabled int
DECLARE @PWD_varbinary  varbinary (256)
DECLARE @PWD_string  varchar (514)
DECLARE @SID_varbinary varbinary (85)
DECLARE @SID_string varchar (514)
DECLARE @tmpstr  varchar (1024)
DECLARE @is_policy_checked varchar (3)
DECLARE @is_expiration_checked varchar (3)

DECLARE @defaultdb sysname

IF (@login_name IS NULL)
  DECLARE login_curs CURSOR FOR

      SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin FROM
sys.server_principals p LEFT JOIN sys.syslogins l
      ON ( l.name = p.name ) WHERE p.type IN ( 'S', 'G', 'U' ) AND p.name <> 'sa'
ELSE
  DECLARE login_curs CURSOR FOR


      SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin FROM
sys.server_principals p LEFT JOIN sys.syslogins l
      ON ( l.name = p.name ) WHERE p.type IN ( 'S', 'G', 'U' ) AND p.name = @login_name
OPEN login_curs

FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb, @hasaccess, @denylogin
IF (@@fetch_status = -1)
BEGIN
  PRINT 'No login(s) found.'
  CLOSE login_curs
  DEALLOCATE login_curs
  RETURN -1
END
SET @tmpstr = '/* sp_help_revlogin script '
PRINT @tmpstr
SET @tmpstr = '** Generated ' + CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
WHILE (@@fetch_status <> -1)
BEGIN
  IF (@@fetch_status <> -2)
  BEGIN
    PRINT ''
    SET @tmpstr = '-- Login: ' + @name
    PRINT @tmpstr
    IF (@type IN ( 'G', 'U'))
    BEGIN -- NT authenticated account/group

      SET @tmpstr = 'CREATE LOGIN ' + QUOTENAME( @name ) + ' FROM WINDOWS WITH DEFAULT_DATABASE = [' + @defaultdb + ']'
    END
    ELSE BEGIN -- SQL Server authentication
        -- obtain password and sid
            SET @PWD_varbinary = CAST( LOGINPROPERTY( @name, 'PasswordHash' ) AS varbinary (256) )
        EXEC sp_hexadecimal @PWD_varbinary, @PWD_string OUT
        EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT

        -- obtain password policy state
        SELECT @is_policy_checked = CASE is_policy_checked WHEN 1 THEN 'ON' WHEN 0 THEN 'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name
        SELECT @is_expiration_checked = CASE is_expiration_checked WHEN 1 THEN 'ON' WHEN 0 THEN 'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name

            SET @tmpstr = 'CREATE LOGIN ' + QUOTENAME( @name ) + ' WITH PASSWORD = ' + @PWD_string + ' HASHED, SID = ' + @SID_string + ', DEFAULT_DATABASE = [' + @defaultdb + ']'

        IF ( @is_policy_checked IS NOT NULL )
        BEGIN
          SET @tmpstr = @tmpstr + ', CHECK_POLICY = ' + @is_policy_checked
        END
        IF ( @is_expiration_checked IS NOT NULL )
        BEGIN
          SET @tmpstr = @tmpstr + ', CHECK_EXPIRATION = ' + @is_expiration_checked
        END
    END
    IF (@denylogin = 1)
    BEGIN -- login is denied access
      SET @tmpstr = @tmpstr + '; DENY CONNECT SQL TO ' + QUOTENAME( @name )
    END
    ELSE IF (@hasaccess = 0)
    BEGIN -- login exists but does not have access
      SET @tmpstr = @tmpstr + '; REVOKE CONNECT SQL TO ' + QUOTENAME( @name )
    END
    IF (@is_disabled = 1)
    BEGIN -- login is disabled
      SET @tmpstr = @tmpstr + '; ALTER LOGIN ' + QUOTENAME( @name ) + ' DISABLE'
    END
    PRINT @tmpstr
  END

  FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb, @hasaccess, @denylogin
   END
CLOSE login_curs
DEALLOCATE login_curs
RETURN 0
GO

exec dbo.sp_help_revlogin
go

How to install single node hadoop cluster on Centos 6

What is hadoop?
Hadoop is an open-source framework for storing and processing Big Data in a distributed environment. It contains two modules:
one is MapReduce and the other is the Hadoop Distributed File System (HDFS).


•MapReduce: It is a parallel programming model for processing large amounts of structured, semi-structured, and
unstructured data on large clusters of commodity hardware.

•HDFS: The Hadoop Distributed File System is the storage layer of the Hadoop framework, used to store the datasets. It
provides a fault-tolerant file system that runs on commodity hardware.

Hostname:- server1.soumya.com
OS:- Centos 6

Step 1: Install Java
Download the Java archive:

[root@server1 ~]# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u101-b13/jdk-8u101-linux-x64.tar.gz"
[root@server1 ~]# tar zxvf jdk-8u101-linux-x64.tar.gz

Step 2:- Install Java with Alternatives

After extracting the archive, use the alternatives command to install it. The alternatives command is available in the chkconfig
package.

[root@server1 jdk1.8.0_101]# alternatives --install /usr/bin/java java /u01/jdk1.8.0_101/bin/java 2
[root@server1 jdk1.8.0_101]# alternatives --config java

There are 4 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
   1           /usr/lib/jvm/jre-1.5.0-gcj/bin/java
*+ 2           /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java
   3           /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
   4           /u01/jdk1.8.0_101/bin/java

Enter to keep the current selection[+], or type selection number: 2

Now Java 8 has been installed. It is recommended to also set up the javac and jar command paths using alternatives:

[root@server1 jdk1.8.0_101]# alternatives --install /usr/bin/jar jar /u01/jdk1.8.0_101/bin/jar 4
[root@server1 jdk1.8.0_101]# alternatives --install /usr/bin/javac javac /u01/jdk1.8.0_101/bin/javac 4
[root@server1 jdk1.8.0_101]# alternatives --set jar /u01/jdk1.8.0_101/bin/jar
[root@server1 jdk1.8.0_101]# alternatives --set javac /u01/jdk1.8.0_101/bin/javac

Now check the java version:-
[root@server1 alternatives]# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

Step 3:-
Now configure the environment variables:-

# export JAVA_HOME=/u01/jdk1.8.0_101
Setup JRE_HOME Variable
# export JRE_HOME=/u01/jdk1.8.0_101/jre
Setup PATH Variable
# export PATH=$PATH:/u01/jdk1.8.0_101/bin:/u01/jdk1.8.0_101/jre/bin

Add the following variables to the ~/.bash_profile file so they are loaded automatically at login.

[root@server1] vi ~/.bash_profile

# Java environment variables
export JAVA_HOME=/u01/jdk1.8.0_101
export JRE_HOME=/u01/jdk1.8.0_101/jre
export PATH=$PATH:/u01/jdk1.8.0_101/bin:/u01/jdk1.8.0_101/jre/bin

:wq (--save & exit)

Step 4:- Create Hadoop user

[root@server1 ~]# adduser hadoop
[root@server1 ~]# passwd hadoop

Now, after creating the user account, generate an SSH key so that the hadoop user can SSH into its own account.

[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
c5:3e:25:c0:92:23:d0:17:fa:56:72:4c:79:72:4c:fe hadoop@server1.soumya.com
The key's randomart image is:
+--[ RSA 2048]----+
|  .o  .+o+.      |
|    o.=o+++      |
|    .o.o++= .    |
|     . + o +     |
|      o S o E    |
|     .     .     |
|                 |
|                 |
|                 |
+-----------------+
[hadoop@server1 ~]$
[hadoop@server1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@server1 ~]$ chmod 0600 ~/.ssh/authorized_keys

Check the connectivity:-
[hadoop@server1 ~]$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 0b:59:e4:8b:b1:e6:12:3a:38:4f:ba:74:ef:8a:ad:46.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
[hadoop@server1 ~]$ exit
logout
Connection to localhost closed.
[hadoop@server1 ~]$

Step 5:- Download hadoop 2.6.0
[hadoop@server1 ~]$ wget http://apache.claz.org/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
[hadoop@server1 ~]$ tar -zxvf hadoop-2.6.0.tar.gz
[hadoop@server1 u01]# mv hadoop-2.6.0 /home/hadoop/hadoop

Step 6:- Edit the .bash_profile file as the hadoop user and add the following lines.
[hadoop@server1 ~]$ vi /home/hadoop/.bash_profile
export PATH
#Java Env Variables
export JAVA_HOME=/u01/jdk1.8.0_101
export JRE_HOME=/u01/jdk1.8.0_101/jre
export PATH=$PATH:/u01/jdk1.8.0_101/bin:/u01/jdk1.8.0_101/jre/bin


#Hadoop Env Variables
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

:wq(-save & exit)

Now apply the changes to the current running environment:
[hadoop@server1 ~]$ . /home/hadoop/.bash_profile

Now edit the following file and change the java path

[hadoop@server1 u01]# vi $HADOOP_HOME/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/u01/jdk1.8.0_101

:wq

Now Edit hadoop configuration files and add the following lines.
[hadoop@server1 u01]# cd $HADOOP_HOME/etc/hadoop

[hadoop@server1 u01]# vi core-site.xml
<configuration>
<property>
  <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
</property>
</configuration>

[hadoop@server1 u01]# vi hdfs-site.xml
<configuration>
<property>
 <name>dfs.replication</name>
 <value>1</value>
</property>

<property>
  <name>dfs.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>

:wq


[hadoop@server1 u01]# vi mapred-site.xml
<configuration>
 <property>
  <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>
</configuration>


:wq


[hadoop@server1 u01]# vi yarn-site.xml
<configuration>
 <property>
  <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
 </property>
</configuration>

:wq

Step 7:- Now format the namenode using following command.
[hadoop@server1 u01]# hdfs namenode -format

Sample output:

16/09/09 14:56:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = server1.soumya.com/192.168.2.12
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
...
...
16/09/09 14:56:25 INFO common.Storage: Storage directory /home/hadoop/hadoopdata/hdfs/namenode has been successfully formatted.
16/09/09 14:56:25 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/09/09 14:56:25 INFO util.ExitUtil: Exiting with status 0
16/09/09 14:56:25 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at server1.soumya.com/192.168.2.12
************************************************************/


Step 8:-
Now start the Hadoop Cluster
[hadoop@server1 sbin]$ cd $HADOOP_HOME/sbin/
Now run the start-dfs.sh script:

[hadoop@server1 sbin]$ start-dfs.sh
16/09/09 15:07:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
hadoop@localhost's password:
localhost: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-server1.soumya.com.out
hadoop@localhost's password:
localhost: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-server1.soumya.com.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 0b:59:e4:8b:b1:e6:12:3a:38:4f:ba:74:ef:8a:ad:46.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
hadoop@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-server1.soumya.com.out
16/09/09 15:08:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Now run the start-yarn.sh script:

[hadoop@server1 sbin]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-server1.soumya.com.out
hadoop@localhost's password:
localhost: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-server1.soumya.com.out

Step 9:-
Check the hadoop services from browser

http://server1.soumya.com:50070/

To access information about the cluster and all applications:

http://server1.soumya.com:8088/

To get information about secondary namenode.
http://server1.soumya.com:50090/


Step 10:-Test Hadoop Single Node Setup

[hadoop@server1 sbin]$ hdfs dfs -mkdir /user
16/09/09 15:36:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@server1 sbin]$ hdfs dfs -mkdir /user/soumya
16/09/09 15:36:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
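
As an additional sanity check (not part of the original steps), the running Hadoop daemons can be listed with jps and the new HDFS directory verified with an ls; the exact output will vary by setup:

[hadoop@server1 sbin]$ jps
[hadoop@server1 sbin]$ hdfs dfs -ls /user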

Monitoring percona Error-log using shell script

vi /backups/alertlog_monotor.sh


#!/bin/bash
SERNAME=srv1.xyz.com
SERPUBIP=192.168.72.50
SERVER=`hostname`                      #### Sets the server name for the email
WEEKDAY=`date '+%w%H%M'`               #### Sets the number value of the day of the week
DATE_VAR=`date '+%Y_%m_%d'`

Error_log_loc=/data/mysql
egrep 'Error' $Error_log_loc/$SERNAME.err |sort -u >  $Error_log_loc/${SERNAME}_ALERTLOG.txt
cat $Error_log_loc/$SERNAME.err >> $Error_log_loc/${SERNAME}_Archived_ALERTLOG.txt

cat /dev/null > $Error_log_loc/$SERNAME.err

if [ -s "$Error_log_loc/${SERNAME}_ALERTLOG.txt" ] ; then
cat $Error_log_loc/${SERNAME}_ALERTLOG.txt | mail -s "URGENT -ERROR in Alert Log File for $SERNAME ($SERPUBIP) at `date` " ssdas@24x7-itsupport.com
fi

# Weekly alert log datestamp and compress (Sunday 00:15)

if [[ "$WEEKDAY" == "00015" ]]; then
        mv $Error_log_loc/${SERNAME}_Archived_ALERTLOG.txt $Error_log_loc/${SERNAME}_Archived_ALERTLOG_${DATE_VAR}.log
        gzip $Error_log_loc/${SERNAME}_Archived_ALERTLOG_${DATE_VAR}.log
fi

exit 0

:wq (save & exit)

Now schedule the above script in crontab to run every 15 minutes.

crontab -l

*/15 * * * * /backups/alertlog_monotor.sh

Oracle Alert log monitoring using shell script

vi /backups/alertlog_monotor.sh

#!/bin/bash
export ORACLE_SID=kmioradb
export ORAENV_ASK=NO
. /home/oracle/.bash_profile
SERNAME=oradb.kminfosystems.com
SERPUBIP=192.168.72.50
SERVER=`hostname`                      #### Sets the server name for the email
WEEKDAY=`date '+%w%H%M'`               #### Sets the number value of the day of the week
DATE_VAR=`date '+%Y_%m_%d'`

Alert_log_loc=/u01/app/oracle/diag/rdbms/kmioradb/kmioradb/trace/
# Check for the existence of ORA- in the alert log and email/page on error
egrep 'ORA-|error|TNS' $Alert_log_loc/alert_$ORACLE_SID.log |sort -u >  $Alert_log_loc/${ORACLE_SID}_OUT_ALERTLOG.txt  #### Output file with ORA- errors
cat $Alert_log_loc/alert_$ORACLE_SID.log >> $Alert_log_loc/archived_alert_$ORACLE_SID.log
cat /dev/null >  $Alert_log_loc/alert_$ORACLE_SID.log

if [ -s "$Alert_log_loc/${ORACLE_SID}_OUT_ALERTLOG.txt" ] ; then
cat $Alert_log_loc/${ORACLE_SID}_OUT_ALERTLOG.txt | mail -s "URGENT -ERROR in Oracle Alert Log File for $SERNAME ($SERPUBIP) at `date` " soumya@gmail.com
fi

# Weekly alert log datestamp and compress (Sunday 00:15)

if [[ "$WEEKDAY" == "00015" ]]; then
        mv $Alert_log_loc/archived_alert_${ORACLE_SID}.log $Alert_log_loc/archived_alert_${ORACLE_SID}_${DATE_VAR}.log
        gzip $Alert_log_loc/archived_alert_${ORACLE_SID}_${DATE_VAR}.log
fi

exit 0

:wq (save & exit)


Now schedule the above script in crontab to run every 15 minutes.

crontab -l

*/15 * * * * /backups/alertlog_monotor.sh

How to migrate mysql users from one server to another?

Step 1: Get a List of MySQL Users from the source server

[root@db01 tmp]# mysql -B -N -uroot -p -e "SELECT CONCAT('\'', user,'\'@\'', host, '\'') FROM user WHERE user != 'debian-sys-maint' AND user != 'root' AND user != ''" mysql > /tmp/mysql_all_users.txt
Enter password: redhat
[root@db01 tmp]# cat /tmp/mysql_all_users.txt
'CitLivehappy'@'%'
'brightonlands'@'%'
'brochure'@'%'
'carver'@'%'
'CitLivehappy'@'localhost'
'brightonlands'@'localhost'
'brochure'@'localhost'
'carver'@'localhost'


Step 2:- Obtain a List of User Privileges

Fire the next command in the terminal to get the privileges:
[root@db01 tmp]# cd /tmp/
[root@db01 tmp]# while read line; do mysql -B -N -uroot -predhat -e "SHOW GRANTS FOR $line"; done < mysql_all_users.txt > mysql_all_users_sql.sql

The above command reads each line in mysql_all_users.txt, fetches the privileges for that user, and writes the output to the mysql_all_users_sql.sql file.
This file contains the SQL statements needed to recreate the users along with their privileges.

[root@db01 tmp]# cat mysql_all_users_sql.sql
GRANT USAGE ON *.* TO 'CitLivehappy'@'%' IDENTIFIED BY PASSWORD '*1983DE33EA67561F7C46D591CC6F15FD7BF1E'
GRANT ALL PRIVILEGES ON `Cit-Live-bewealthynhappy`.* TO 'CitLivehappy'@'%'
GRANT ALL PRIVILEGES ON `cit_cvbflive`.* TO 'CitLivehappy'@'%'
GRANT USAGE ON *.* TO 'brightonlands'@'%' IDENTIFIED BY PASSWORD '*69DA4F2138FA03F66C29ACDCD7FED678FA'
GRANT ALL PRIVILEGES ON `Cit-Live-brightonlandscaping`.* TO 'brightonlands'@'%'
GRANT USAGE ON *.* TO 'brochure'@'%' IDENTIFIED BY PASSWORD '*EABF42F3B310313FD230662388030CA895BEB'
GRANT ALL PRIVILEGES ON `Cit-Sub-e-brochure`.* TO 'brochure'@'%'
GRANT USAGE ON *.* TO 'carver'@'%' IDENTIFIED BY PASSWORD '*682687FC5D596EB5B7CF4002EDC1ADBE3'
GRANT ALL PRIVILEGES ON `Cit-Live-carverwelding-blog`.* TO 'carver'@'%'
GRANT ALL PRIVILEGES ON `Cit-Live-carverwelding`.* TO 'carver'@'%'
GRANT ALL PRIVILEGES ON `Cit-Live-carverengindia-blog`.* TO 'carver'@'%'
GRANT ALL PRIVILEGES ON `Cit-Live-carverengindia`.* TO 'carver'@'%'

We can see that mysql_all_users_sql.sql contains all the user information along with the privileges, so the script is almost ready.
The only thing the script is missing is a ";" at the end of every SQL statement.

We will add a ";" at the end of each SQL statement:
[root@db01 tmp]# sed -i 's/$/;/' mysql_all_users_sql.sql

[root@db01 tmp]# cat mysql_all_users_sql.sql
GRANT USAGE ON *.* TO 'CitLivehappy'@'%' IDENTIFIED BY PASSWORD '*1983DE33EA6756C9BE6591CC6F15FD7BF1E';
GRANT ALL PRIVILEGES ON `Cit-Live-bewealthynhappy`.* TO 'CitLivehappy'@'%';
GRANT ALL PRIVILEGES ON `cit_cvbflive`.* TO 'CitLivehappy'@'%';
GRANT USAGE ON *.* TO 'brightonlands'@'%' IDENTIFIED BY PASSWORD '*69DA4F2138FA03F66C29ACDCD6E99678FA';
GRANT ALL PRIVILEGES ON `Cit-Live-brightonlandscaping`.* TO 'brightonlands'@'%';
GRANT USAGE ON *.* TO 'brochure'@'%' IDENTIFIED BY PASSWORD '*EABF42F3B310313FD23CA750C895BEB';
GRANT ALL PRIVILEGES ON `Cit-Sub-e-brochure`.* TO 'brochure'@'%';
GRANT USAGE ON *.* TO 'carver'@'%' IDENTIFIED BY PASSWORD '*682687FC5D596EB5B7CF4002EDC1BBE3';
GRANT ALL PRIVILEGES ON `Cit-Live-carverwelding-blog`.* TO 'carver'@'%';
GRANT ALL PRIVILEGES ON `Cit-Live-carverwelding`.* TO 'carver'@'%';
GRANT ALL PRIVILEGES ON `Cit-Live-carverengindia-blog`.* TO 'carver'@'%';
GRANT ALL PRIVILEGES ON `Cit-Live-carverengindia`.* TO 'carver'@'%';

Now add "flush privileges" to reload the grant tables.
[root@server1 tmp]# echo "flush privileges;" >> mysql_all_users_sql.sql

Step 3: Now let's import the SQL file into the new server.

Transfer the SQL file to the destination (new) server and run the command below.

[root@db02 tmp]# mysql -u root -ppassword < mysql_all_users_sql.sql
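
To verify the import, the user list can be checked on the destination server (a quick sketch; credentials assumed as above):

[root@db02 tmp]# mysql -u root -ppassword -e "SELECT user, host FROM mysql.user;"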


How to Install MongoDB on Ubuntu 14.04 ?


Step #1: Set Up the Package Database
First we’ll import the MongoDB public key used by the package management system:

# sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

Then we’ll create a list file for MongoDB:

# echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list

Now reload the package database:

# sudo apt-get update

Step #2: Install the Latest Stable Version of MongoDB
At this point, installing MongoDB is as simple as running just one command:

sudo apt-get install -y mongodb-org

If you’d like MongoDB to auto-update with apt-get, then you’re done with the installation. However, it’s also possible to ‘pin’ the version of MongoDB you just installed to prevent apt-get from auto-updating it.

Step #3: Get MongoDB Running

Start-Up MongoDB:-
# sudo service mongod start

Check MongoDB Service Status
#sudo service mongod status

Summary List of Status Statistics (Continuous)

#mongostat

Summary List of Status Statistics (5 Rows, Summarized Every 2 Seconds)

#mongostat --rowcount 5 2

Enter the MongoDB Command Line

#mongo

Step 4:-
Now we can configure basic settings of the MongoDB database server.
# vi /etc/mongod.conf
logappend=true
logpath=/var/log/mongodb/mongod.log
port=27017
dbpath=/var/lib/mongo
smallfiles = true
# Change this list to open access to MongoDB for specific IPs
bind_ip = 127.0.0.1,159.203.180.48
:wq

Register a Connected Server using 2012 SQL Server Management Studio


By registering a server, you can save the connection information for servers that you access frequently. A server can be registered before connecting, or at the time of connection,
from Object Explorer.


To register a connected server:-
Open SQL Server Management Studio 2012.
In Object Explorer, right-click a server to which you are already connected, and then click Register.

* Server name
Enter the name you want to use for the registered server. Registering a local or remote server using SQL Server Management Studio lets you store the server connection information
for future connections. This field defaults to the server name entered when you were connecting to the server. You can retain this server name or enter another easy-to-use
name for the server.

* Authentication
There are two types of authentication available:
a. Windows authentication
b. SQL Server authentication

Choose either of the above as the authentication method.

* Server description
Enter an optional description of the server. The maximum number of characters allowed is 250.

* Save
Click to save the information you have entered and create a registered server.





Different SQL Server Roles


Server Roles:-

The Server Roles page lists all possible roles that can be assigned to the new login. The following options are available:
bulkadmin:-
Members of the bulkadmin fixed server role can run the BULK INSERT statement.

dbcreator:-
Members of the dbcreator fixed server role can create, alter, drop, and restore any database.

diskadmin:-
Members of the diskadmin fixed server role can manage disk files.

processadmin:-
Members of the processadmin fixed server role can terminate processes running in an instance of the Database Engine.

public:-
All SQL Server users, groups, and roles belong to the public fixed server role by default.

securityadmin:-
Members of the securityadmin fixed server role manage logins and their properties. They can GRANT, DENY, and REVOKE server-level permissions. They can also GRANT, DENY, and REVOKE
database-level permissions. Additionally, they can reset passwords for SQL Server logins.

serveradmin:-
Members of the serveradmin fixed server role can change server-wide configuration options and shut down the server.

setupadmin :-
Members of the setupadmin fixed server role can add and remove linked servers, and they can execute some system stored procedures.

sysadmin :-
Members of the sysadmin fixed server role can perform any activity in the Database Engine.



Database-Level Roles:-

db_owner :- Members of the db_owner fixed database role can perform all configuration and maintenance activities on the database,
and can also drop the database.

db_securityadmin:- Members of the db_securityadmin fixed database role can modify role membership and manage permissions. Adding principals
to this role could enable unintended privilege escalation.

db_accessadmin :- Members of the db_accessadmin fixed database role can add or remove access to the database for Windows logins, Windows groups,
and SQL Server logins.

db_backupoperator :- Members of the db_backupoperator fixed database role can back up the database.

db_ddladmin :- Members of the db_ddladmin fixed database role can run any Data Definition Language (DDL) command in a database.

db_datawriter:- Members of the db_datawriter fixed database role can add, delete, or change data in all user tables.

db_datareader :- Members of the db_datareader fixed database role can read all data from all user tables.

db_denydatawriter :- Members of the db_denydatawriter fixed database role cannot add, modify, or delete any data in the user
tables within a database.

db_denydatareader :- Members of the db_denydatareader fixed database role cannot read any data in the user tables within a database.
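
As a hedged illustration of how these roles are assigned (the login, user, and database names below are hypothetical), membership can be granted with T-SQL in SQL Server 2012 and later:

-- Add a login to a fixed server role.
ALTER SERVER ROLE serveradmin ADD MEMBER [some_login];

-- Add a database user to a fixed database role.
USE SomeDatabase;
ALTER ROLE db_datareader ADD MEMBER [some_user];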

How to setup MariaDB (Master-Slave) Replication on CentOS/RHEL 6x & Ubuntu


Our Testing Environment Consists of the Following Machines (Both are CentOS 6) :

Master: 192.168.72.220 ser1.kmi.com
Slave: 192.168.72.221 ser2.kmi.com

Step: 1. Bind Hosts File (Both Server) :

# vi /etc/hosts

192.168.72.220 ser1.kmi.com ser1
192.168.72.221 ser2.kmi.com ser2

-- Save & Quit (:wq)

Step: 2. Stop Iptables & Disable SELinux (Both Servers) :

For CentOS:

# service iptables stop
# chkconfig iptables off

# vi /etc/sysconfig/selinux

SELINUX=disabled

-- Save & Quit (:wq)

For Ubuntu:

# service ufw stop

Step: 3. Restart Both Servers :

# init 6

Step: 4. Install NTP :

# yum -y install ntp
# service ntpd restart
# chkconfig ntpd on
# ntpdate pool.ntp.org

Step: 5. Create MariaDB.repo on Both Server :

For CentOS:

# rpm --import http://yum.mariadb.org/RPM-GPG-KEY-MariaDB

# vi /etc/yum.repos.d/MariaDB.repo

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

-- Save & Quit (:wq)

For Ubuntu :

# apt-get install software-properties-common
# apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
# add-apt-repository 'deb [arch=amd64,i386] http://mirrors.neusoft.edu.cn/mariadb/repo/10.1/ubuntu trusty main'

Step: 6. Install MariaDB on Both Servers :

For CentOS:

# yum -y install MariaDB-server MariaDB-client

For Ubuntu:

# apt-get update
# apt-get install mariadb-server

Step: 7. Start MariaDB Service & Set MariaDB Root Password on Both Master & Slave Server :

# service mysqld start
# chkconfig mysqld on

# mysql_secure_installation

Step: 8. Check MariaDB Version on Both Node :

# mysql -V
mysql  Ver 15.1 Distrib 10.1.10-MariaDB, for Linux (x86_64) using readline 5.1

Step: 9. Configuring MariaDB Server on Master Node :

# mv /etc/my.cnf /etc/my.bak
# vi /etc/my.cnf

[client]
port            = 3306
socket          = /var/lib/mysql/mysql.sock

[mysqld]
server_id=1
replicate-do-db=employees
bind-address=0.0.0.0
log-bin=mysql-bin
binlog_format=mixed
port            = 3306
socket          = /var/lib/mysql/mysql.sock
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

-- Save & Quit (:wq)

Step: 10. Restart the MariaDB Service :

# service mysqld restart

Step: 11. Log on to MariaDB Master Node & Create a User Called "Slave" & Give Appropriate Permission :

# mysql -u root -proot_password

MySQL [(none)]> STOP SLAVE;
MySQL [(none)]> GRANT REPLICATION SLAVE ON *.* TO slave IDENTIFIED BY 'SlavePassword' WITH GRANT OPTION;
MySQL [(none)]> FLUSH PRIVILEGES;
MySQL [(none)]> FLUSH TABLES WITH READ LOCK;
MySQL [(none)]> SHOW MASTER STATUS;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      181 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

MariaDB [(none)]> exit

Step: 12. Now Take Backup of Full Database :

# mysqldump -u root -proot_password employees > /tmp/employees.sql

Step: 13. After Taken Dump, Log on to MariaDB & Unlock Tables :

# mysql -u root -proot_password

MySQL [(none)]> UNLOCK TABLES;
MySQL [(none)]> exit;

Step: 14. Copy the Dump File to the Slave Node :

# scp -r /tmp/employees.sql root@192.168.72.221:/tmp/

Step: 15. Restore the Dump File on Slave Node :

# mysql -u root -predhat

MySQL [(none)]> create database employees;
MySQL [(none)]> exit;

# mysql -u root -proot_password employees < /tmp/employees.sql

Step: 16. Configuring MariaDB on Slave Node :

Note: The server_id needs to be a different integer than 1, since 1 was used on the master:

# mv /etc/my.cnf /etc/my.bak
# vi /etc/my.cnf

[client]
port            = 3306
socket          = /var/lib/mysql/mysql.sock

[mysqld]
server_id=2
replicate-do-db=employees
bind-address=0.0.0.0
log-bin=mysql-bin
binlog_format=mixed
port            = 3306
socket          = /var/lib/mysql/mysql.sock
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

-- Save & Quit (:wq)

Step: 17. Restart the MariaDB Server :

# service mysqld restart

Step: 18. Setting Up MariaDB Replication on Slave Node :

Note: Pay special attention to the "MASTER_LOG_FILE" & "MASTER_LOG_POS" variables, which should match the values returned by "SHOW MASTER STATUS" in Step 11 above.

# mysql -u root -proot_password

MySQL [(none)]> STOP SLAVE;
MySQL [(none)]> CHANGE MASTER TO
  MASTER_HOST='192.168.72.220',
  MASTER_USER='slave',
  MASTER_PASSWORD='SlavePassword',
  MASTER_PORT=3306,
  MASTER_LOG_FILE='mysql-bin.000002',
  MASTER_LOG_POS=181,
  MASTER_CONNECT_RETRY=10;
 
Step: 19. Start the Slave & Check its Status :

MySQL [(none)]> START SLAVE;
MySQL [(none)]> SHOW SLAVE STATUS\G

Note: If the SHOW SLAVE STATUS\G command returns any errors, use them to troubleshoot and then run START SLAVE; to test again. The key fields to check are shown below.
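
In a healthy setup both replication threads should be running; the lines to look for in the SHOW SLAVE STATUS\G output are:

             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0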

Step: 20. Test MariaDB Database Replication :

Add some Record to the "employees" Database in the Master Node :

MySQL [(none)]> use employees;
MySQL [(none)]> CREATE TABLE Guests1 (
id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
firstname VARCHAR(30) NOT NULL,
lastname VARCHAR(30) NOT NULL,
email VARCHAR(50),
reg_date TIMESTAMP
);

Verify that this Change was Replicated in the Slave Node :

MySQL [(none)]> USE employees;
MySQL [(none)]> show tables;

As you can see, Replication is Working Correctly from Master to Slave.

Done...!!!

Invisible index on oracle 11g


Database version:- oracle 11.2.0

Invisible indexes are indexes that exist and are maintained by Oracle but are "invisible" to the cost-based optimizer (CBO). Specific sessions can be set to see these
invisible indexes as necessary. An invisible index is an alternative to dropping an index or making it unusable. The feature is also useful when certain modules of an application
require a specific index without affecting the rest of the application.

First, create a simple table and associated index.
SQL> conn soumya/soumya
SQL> set autotrace on
SQL> set linesize 200

SQL> create table soumya_test  (Rollno number(5),name varchar2(20));
Table created.

SQL> create index soumya_test_ind on soumya_test(Rollno);
Index created.

Now insert data into the table.
SQL> begin
  2  for x in 1..1000 loop
  3  insert into soumya_test ( Rollno , name)
  4  values ( x , default);
  5  end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.


By default, indexes are created as "VISIBLE" .
SQL> SELECT table_name,index_name, visibility FROM user_indexes WHERE index_name = 'SOUMYA_TEST_IND';

TABLE_NAME                     INDEX_NAME                     VISIBILITY
------------------------------ ------------------------------ ----------
SOUMYA_TEST                    SOUMYA_TEST_IND                VISIBLE

SQL>

As the index is visible, it can be considered and used by the CBO .


Above, we checked the visibility of the index through user_indexes.
We will now check how the query is processed by the Oracle server.

SQL> explain plan for
  2  select * from soumya_test where Rollno in (555,726,959);

Explained.


SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2411823278

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                 |     3 |    75 |     0   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR             |                 |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| SOUMYA_TEST     |     3 |    75 |     0   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | SOUMYA_TEST_IND |     4 |       |     0   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------


PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("ROLLNO"=555 OR "ROLLNO"=726 OR "ROLLNO"=959)

Note
-----
   - dynamic sampling used for this statement (level=2)

19 rows selected.


Now alter the visibility of the index SOUMYA_TEST_IND.
SQL> alter index SOUMYA_TEST_IND invisible;

Index altered.

Check the index status.
SQL> select index_name,visibility from user_indexes
     where table_name='SOUMYA_TEST';

INDEX_NAME                     VISIBILITY
------------------------------ ----------
SOUMYA_TEST_IND                INVISIBLE


As we can see, the index is INVISIBLE now, so Oracle won't use it.
Now we check the explain plan for the same query again and compare the results.

SQL> explain plan for
  2  select * from soumya_test where Rollno in (555,726,959);

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3654503979

---------------------------------------------------------------------------------
| Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |             |     3 |    75 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| SOUMYA_TEST |     3 |    75 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

   1 - filter("ROLLNO"=555 OR "ROLLNO"=726 OR "ROLLNO"=959)

Note
-----
   - dynamic sampling used for this statement (level=2)

17 rows selected.


In the above explain plan we see that Oracle didn't use the index as it did in the previous query; it performed a full table scan instead.

The index could be changed back at any time with ALTER INDEX SOUMYA_TEST_IND VISIBLE; for this demonstration, however, we leave it INVISIBLE.

Specific sessions can be altered so that they can "see" and use these invisible indexes.
SQL> select index_name,visibility from user_indexes
  2       where table_name='SOUMYA_TEST';

INDEX_NAME                     VISIBILITY
------------------------------ ----------
SOUMYA_TEST_IND                INVISIBLE

SQL> show parameter OPTIMIZER_USE_INVISIBLE_INDEXES

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_use_invisible_indexes      boolean     FALSE

SQL> ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = true;

Session altered.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3654503979

---------------------------------------------------------------------------------
| Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |             |     3 |    75 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| SOUMYA_TEST |     3 |    75 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

   1 - filter("ROLLNO"=555 OR "ROLLNO"=726 OR "ROLLNO"=959)

Note
-----
   - dynamic sampling used for this statement (level=2)

17 rows selected.


The output above is still the old plan: DBMS_XPLAN.DISPLAY simply shows the last statement that was explained, and we have not re-run EXPLAIN PLAN since changing the session parameter. So we explain the query again.

SQL> explain plan for
     select * from soumya_test where Rollno in (555,726,959);
Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2411823278

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                 |     3 |    75 |     0   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR             |                 |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| SOUMYA_TEST     |     3 |    75 |     0   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | SOUMYA_TEST_IND |     4 |       |     0   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------


PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("ROLLNO"=555 OR "ROLLNO"=726 OR "ROLLNO"=959)

Note
-----
   - dynamic sampling used for this statement (level=2)

19 rows selected.


Now we see the invisible index is being used.

How to configure mail on SQL Server 2012

Configure mail on SQL Server 2012
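
Database Mail can be configured either through the Database Mail wizard in Management Studio (Management > Database Mail > Configure Database Mail) or directly in T-SQL. A minimal T-SQL sketch follows; the account, profile, SMTP server and e-mail addresses are placeholders, not values from an actual setup:

-- Enable the Database Mail extended stored procedures
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;
GO

-- Create a mail account and profile (names, server and addresses are placeholders)
EXEC msdb.dbo.sysmail_add_account_sp
     @account_name    = 'DBA_Account',
     @email_address   = 'dba@example.com',
     @display_name    = 'SQL Server Mail',
     @mailserver_name = 'smtp.example.com';

EXEC msdb.dbo.sysmail_add_profile_sp
     @profile_name = 'DBA_Profile';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
     @profile_name    = 'DBA_Profile',
     @account_name    = 'DBA_Account',
     @sequence_number = 1;
GO

-- Send a test mail to confirm the configuration
EXEC msdb.dbo.sp_send_dbmail
     @profile_name = 'DBA_Profile',
     @recipients   = 'dba@example.com',
     @subject      = 'Database Mail test',
     @body         = 'Database Mail is configured.';
GO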




How to drop undo tablespace in oracle 11g?



SQL> select file_name,tablespace_name from dba_data_files;

FILE_NAME                                     TABLESPACE_NAME
--------------------------------------------- --------------------
/u01/app/oracle/oradata/prim/users01.dbf      USERS
/u01/app/oracle/oradata/prim/undotbs01.dbf    UNDOTBS1
/u01/app/oracle/oradata/prim/sysaux01.dbf     SYSAUX
/u01/app/oracle/oradata/prim/system01.dbf     SYSTEM
/u01/app/oracle/oradata/prim/example01.dbf    EXAMPLE


SQL>  drop tablespace UNDOTBS1 including contents;
 drop tablespace UNDOTBS1 including contents
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

The error shows that the undo tablespace is currently in use.
To drop the current undo tablespace, we first need to create another undo tablespace and make it the default.

SQL> create undo tablespace undotbs2 datafile '/u01/app/oracle/oradata/prim/undotbs02.dbf' size 50M reuse autoextend on maxsize 500M;

Tablespace created.

SQL> alter system set undo_tablespace=undotbs2 scope=both;

System altered.
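
Before dropping the old undo tablespace, it can also be worth confirming that none of its undo segments are still serving active transactions; if they are, the drop can keep failing until those transactions complete. A quick check (a sketch using the standard dictionary view):

SQL> select segment_name, status
     from dba_rollback_segs
     where tablespace_name = 'UNDOTBS1';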

Now try to drop the old undo tablespace.

SQL> drop tablespace UNDOTBS1 including contents;

Tablespace dropped.

SQL> select file_name,tablespace_name from dba_data_files;

FILE_NAME                                     TABLESPACE_NAME
--------------------------------------------- --------------------
/u01/app/oracle/oradata/prim/users01.dbf      USERS
/u01/app/oracle/oradata/prim/undotbs02.dbf    UNDOTBS2
/u01/app/oracle/oradata/prim/sysaux01.dbf     SYSAUX
/u01/app/oracle/oradata/prim/system01.dbf     SYSTEM
/u01/app/oracle/oradata/prim/example01.dbf    EXAMPLE
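
Note that DROP TABLESPACE ... INCLUDING CONTENTS leaves the old datafile (undotbs01.dbf) on disk. If you also want Oracle to remove the file at the OS level, the AND DATAFILES clause can be used instead, for example:

SQL> drop tablespace UNDOTBS1 including contents and datafiles;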

How to create read only users in SQL Server 2012


Database Version: SQL Server 2012
Step 1.
Connect to your database server.
Expand Security > Logins > New login.

Step 2.
Enter the login name, select SQL Server authentication, and provide a password.
On this window you may or may not select the option "Enforce password policy".

Step 3.
From the User Mapping page, select the database you want to give access to.
Tick the role membership boxes next to public and db_datareader.
Confirm by clicking OK.

Done. The read only user is created!
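
The same read-only user can also be created in T-SQL; a minimal sketch (the login name, password and database name below are placeholders):

-- Create a SQL-authenticated login (CHECK_POLICY mirrors the optional "enforce password policy" choice)
CREATE LOGIN readonly_user WITH PASSWORD = 'StrongP@ssw0rd1', CHECK_POLICY = OFF;
GO
USE YourDatabase;  -- replace with the database you want to give access to
GO
CREATE USER readonly_user FOR LOGIN readonly_user;
ALTER ROLE db_datareader ADD MEMBER readonly_user;
GO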

How to Reset SA Password in Sql Server 2012

I had a scenario where I had forgotten the SA password and Windows authentication was not working.
So I used the method below to reset the SA password and log in to SQL Server.

Step 1:-
Change SQL SA password from a command prompt

Go to the command prompt on the server and type osql -L to list the available servers:
C:\Users\Administrator>osql -L

Servers:
localhost

Step 2:-
Copy full name of SQL Server and type: OSQL -S <insert_servername_here> -E
C:\Users\Administrator>osql -S localhost -E

Step 3:-
Execute the following query: sp_password NULL, '<insert_new_password_here>', 'sa'
1> sp_password NULL, 'soumya@123#','sa'
2> GO

Done, the sa password has been reset.
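
Note: sp_password is deprecated in newer versions; the same reset can also be done with ALTER LOGIN from the same OSQL session (the new password below is just a placeholder):

1> ALTER LOGIN sa WITH PASSWORD = 'New_StrongP@ssw0rd1';
2> GO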

Last Login Time for nonsys user in oracle 12c

SQL*Plus shows the Last Successful Login Time for non-system users in Oracle 12c.

In Oracle 12c a new security feature has been added which lets us check when a non-system user last logged in.

[oracle@server3 ~]$ sqlplus soumya/soumya

SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 15 08:35:28 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Mon Feb 15 2016 08:35:14 +05:30

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


The feature can be disabled by doing the following:-
[oracle@server3 ~]$ sqlplus -nologintime soumya/soumya

SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 15 08:45:09 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


The actual information is basically stored in column SPARE6 of SYS.USER$ and can be queried in the column LAST_LOGIN of DBA_USERS.

[oracle@server3 ~]$ sqlplus / as sysdba
SYS@ORA12C> col username FOR a15
SYS@ORA12C> col last_login FOR a25
SYS@ORA12C> ALTER SESSION SET NLS_TIMESTAMP_TZ_FORMAT='DD.MM.YYYY HH24:MI:SS';

Session altered.

SYS@ORA12C> SELECT username,last_login FROM dba_users WHERE username='SOUMYA';

USERNAME        LAST_LOGIN
--------------- -------------------------
SOUMYA          15.02.2016 08:45:09

SYS@ORA12C> col name FOR a15
SYS@ORA12C> col spare6 FOR a40
SYS@ORA12C> ALTER SESSION SET NLS_DATE_FORMAT='DD.MM.YYYY HH24:MI:SS';

Session altered.

SYS@ORA12C> SELECT name,spare6 FROM USER$ WHERE name='SOUMYA';

NAME            SPARE6
--------------- -------------------
SOUMYA          15.02.2016 03:15:09


[oracle@server3 ~]$ sqlplus soumya/soumya

SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 15 08:52:32 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Mon Feb 15 2016 08:45:09 +05:30

Expdp Backup script with retention for Windows

Oracle Version:- 11g

Make sure all the directories mentioned in the script are present. Just copy the lines mentioned below and save them as a .bat file, then schedule it in Task Scheduler
as per your requirement (a sample schtasks command is shown after the script). Also make sure 7-Zip is installed on the machine, as it is used for the compression step.
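
The expdp command in the script also relies on an Oracle directory object named BACKUP. If it does not exist yet, it can be created along these lines (the path is an assumption matching the BACKUPPATH used in the script):

SQL> create or replace directory BACKUP as 'C:\app\Oracle11G\Datapump';
SQL> grant read, write on directory BACKUP to system;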

@ECHO OFF
REM Build date/time stamps used in the file and folder names
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%b-%%a)
For /f "tokens=1-2 delims=/:" %%a in ("%TIME%") do (set mytime=%%a%%b)
set mytime=%mytime: =0%

REM Oracle environment for this database (prepend Oracle bin so system tools like forfiles stay on the PATH)
set ORACLE_SID=orcl
set ORACLE_HOME=C:\app\Oracle11G\product\11.2.0\dbhome_1
set ORACLE_BASE=C:\app\Oracle11G\
set PATH=%ORACLE_HOME%\bin;%PATH%
set BACKUPPATH=C:\app\Oracle11G\Datapump

REM Retention: remove backup folders older than 7 days, then create today's dated folder
forfiles /p "C:\app\Oracle11G\Datapump" /m * /d -7 /c "cmd /c rd /s /q @path"
cd %BACKUPPATH%
mkdir "C:\app\Oracle11G\Datapump\%mydate%_%mytime%"

REM Full export, compress the dump with 7-Zip, move archive and log into the dated folder, remove the raw dump
%ORACLE_HOME%\BIN\expdp system/system directory=BACKUP Full=Y dumpfile=backup_%mydate%_%mytime%.dmp logfile=log_%mydate%_%mytime%.log exclude=statistics
cd %BACKUPPATH%
"C:\Program Files\7-Zip\7z.exe" a -mx9 C:\app\Oracle11G\Datapump\Backup_%mydate%_%mytime%.7z C:\app\Oracle11G\Datapump\backup_%mydate%_%mytime%.dmp
move C:\app\Oracle11G\Datapump\Backup_%mydate%_%mytime%.7z C:\app\Oracle11G\Datapump\%mydate%_%mytime%\
move C:\app\Oracle11G\Datapump\log_%mydate%_%mytime%.log C:\app\Oracle11G\Datapump\%mydate%_%mytime%\
del backup_%mydate%_%mytime%.dmp
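
To schedule the batch file, a schtasks command such as the following can be used (the task name, script path and start time are placeholders; adjust them to your requirement):

schtasks /create /tn "Expdp_Daily_Backup" /tr "C:\scripts\expdp_backup.bat" /sc DAILY /st 02:00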

Snapshot database in SQL Server

Database Version : SQL Server 2012

What is a database snapshot ?

Database snapshots are an Enterprise-only feature which was introduced in SQL Server 2005. A database snapshot is a read-only view of what the source database looked like at the time the
snapshot was created.

How does a snapshot work:-

a. When you create a snapshot, a sparse file is created for each data file of the source database.
b. When data is modified in the source database for the first time, the old value of the modified data is copied to the sparse file.
c. If the same data is subsequently changed again, those changes will be ignored and not copied to the snapshot.

How to create a snapshot?

USE master
GO
-- Create a source Database
CREATE DATABASE sourcedb
GO
USE sourcedb
GO
-- Populate sourcedb Database with some Table
CREATE TABLE Employee (ID INT, Value VARCHAR(10))
INSERT INTO Employee VALUES(1, 'First');
INSERT INTO Employee VALUES(2, 'Second');
INSERT INTO Employee VALUES(3, 'Third');
INSERT INTO Employee VALUES(4, 'Fourth');
GO
-- Now Create Snapshot Database
CREATE DATABASE SnapshotDB ON
(Name ='sourcedb',
FileName='E:\snapshotdb.ss1')
AS SNAPSHOT OF sourcedb;
GO
-- Select from source and Snapshot Database
SELECT * FROM sourcedb.dbo.Employee;
SELECT * FROM SnapshotDB.dbo.Employee;
GO

ID Value
1 First
2 Second
3 Third
4 Fourth

ID Value
1 First
2 Second
3 Third
4 Fourth



Now let's do some DML activity on the source db :-
-- Delete from sourcedb Database
DELETE FROM sourcedb.dbo.employee;
GO

Re-running the two SELECT statements from above now returns:

ID Value


ID Value
1 First
2 Second
3 Third
4 Fourth


So we can see the rows have been deleted from the sourcedb, but the snapshot db is totally intact.

Now, in case we want to undo all the changes we made, we can revert the database from the SnapshotDB which was created before doing the activity.

-- Restore Data from Snapshotdb Database
USE master
GO
RESTORE DATABASE sourcedb
FROM DATABASE_SNAPSHOT = 'Snapshotdb';
GO
-- Select from sourcedb and Snapshotdb Database
SELECT * FROM sourcedb.dbo.employee;
SELECT * FROM Snapshotdb.dbo.employee;
GO

ID Value
1 First
2 Second
3 Third
4 Fourth

ID Value
1 First
2 Second
3 Third
4 Fourth


So we see all the data is restored to the point at which the SnapshotDB was created.
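
Once a snapshot is no longer needed, it can be dropped like any other database, for example:

-- Remove the snapshot (name as created above)
DROP DATABASE SnapshotDB;
GO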

Benefits of Database Snapshots:-

1. Snapshots can be used for reporting purposes.
2. Maintaining historical data for report generation.
3. Using a snapshot of a mirror database that you are maintaining for availability purposes to offload reporting.
4. In the event of a user error on a source database, you can revert the source database to the state it was in when a given database snapshot was created. For example, before doing a
large DML operation, create a snapshot of the database beforehand so the change can be reverted if something goes wrong.