Configure Data Guard using a physical standby database

I recently decided to test a rollback strategy that we planned to use when upgrading our databases to Oracle 11g. We were running Oracle 10g at the time, with the following Data Guard configuration:

  • Primary database
  • Physical standby DB 1
  • Physical standby DB 2

NOTE: The operating system we use is Windows Server 2003 SP2. While these instructions are written for the Windows environment, they also apply to other environments; the only differences are the Windows-specific steps, such as creating Windows services.

The plan for upgrading Data Guard

The plan was to upgrade both standby DBs along with the primary to 11g. As part of the upgrade, we needed a failback plan so that if the upgrade failed, we could restore the primary and standby databases to a point in time before the upgrade took place. We didn’t want to rebuild the standby databases if the upgrade failed, because that would be very time consuming and require a lot more work. The rollback plan for the primary database was relatively simple:

  • Shut down the database
  • Take a “snap” of the LUNs at the SAN level
  • Bring up the database and perform the upgrade
  • Drop snap if the upgrade was successful
  • Revert to the snap if the upgrade failed

Initial questions

The problem we faced with the standby databases was that we were unable to snap the LUNs on the SAN they were running on, because the SAN technology was older and we didn’t have any licenses for it. We also had some questions about the Data Guard configuration and how it would behave:

  • If we roll back the primary database, how can we roll back the physical standby databases if they applied logs during the upgrade?
  • Can we stop the standby databases from applying logs, roll back the primary database to the snap version and start sending logs again?
  • Will the physical standby database continue where it left off if we delete the logs created while the upgrade was in progress?
  • If the primary database was on archive log sequence 100 at the time of the snap, and 50 logs are created during the upgrade process, will it continue from sequence 100 again after we snap back?

We made some educated guesses, but the questions above should give you an idea of the things we weren’t sure about before completing the tests. The best way to prove something is to actually test it, so I set out to test our theory. What was the theory? Well, we thought (I mean hoped…) that if we suspended log shipping while the upgrade was running (by deferring the log archive destinations), rolled the primary database back to the snap version of the disks, removed all the logs produced while the upgrade was running, and then re-enabled the log archive destinations, the physical standby databases would pick up where they left off, none the wiser about the failed upgrade on the primary database.

What we are going to test

Here’s a list of exactly what I’ll go through in the example below:

  • Configuring Data Guard on 10g (although this also applies to 11g Data Guard)
  • In preparation, backing up your primary database in order to restore it and create your physical standby databases
  • The restore process to set up your physical standby databases
  • Running log shipping from the primary to the standby databases
  • Configuring the rollback strategy above
  • Simulating a failed upgrade attempt
  • Reverting to the snap version of the database (at SAN level)
  • A test to see if it all worked and everything is as it was before the upgrade

SAN Vs Flashback Database – Personal View …

I should point out that I worked with the storage team for this test to get a snapshot of the disk(s) before the upgrade work started. I personally think this is a much better solution for this kind of work than using Oracle’s flashback technology, for several reasons:

  1. Flashback needs a lot of space to save changes
  2. I have had problems using Oracle flashback technology during an upgrade that corrupted my guaranteed restore point. Very bad!
  3. The SAN snapshot is very fast, clean and does not require any configuration changes from Oracle’s point of view – just a clean shutdown of the database while the snapshot is being taken

So, in this test, I’ll show you how to create a standby database from your primary database, with all the commands I needed to use. Hopefully it will all be self-explanatory to an experienced Oracle DBA. If not, ask questions at the bottom of the page in the comments section or send me an email and I’ll do my best to help.

Step-by-Step Guide to Creating and Configuring Data Guard Physical Standby

On your primary database, reduce the size of the database as much as possible to make the backup, copy, and restore to the standby database faster. I often use this little script to do it dynamically for a specific tablespace. It will try to shrink each data file by 500MB, but you can configure that to whatever you want, and run it for every tablespace by removing the “where tablespace_name =” clause from the statement.

  spool shrink.out
  set serverout on
  begin
    for i in (select 'alter database datafile ' || file_id || ' resize ' || TRUNC(bytes - 512000000) || ';'
              as cmd from dba_data_files where tablespace_name = 'DW3_L1_X128K') loop
      dbms_output.put_line(i.cmd);
    end loop;
  end;
  /
  spool off

  @shrink.out

RMAN Level0 and Archive Log Mode

The first stage is to create a level 0 RMAN backup that you will later use to restore and create your physical standby database. I assume you don’t want to take the database down to complete the backup, so hopefully your database is already running in ARCHIVELOG mode? You can check with this query:

  SELECT log_mode
  FROM v$database;

If your primary database is not in ARCHIVELOG mode, you can either make a cold backup, i.e. while the database is not available, or put the database in ARCHIVELOG mode. To do this, configure your parameter log_archive_dest_1 and enable it as follows:
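The original commands are missing here, so below is a minimal sketch of switching a database into ARCHIVELOG mode. The destination path is an example of my own, so substitute your own location:

```sql
-- Point the first archive destination at a local directory (example path)
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=D:\ORADATA\TEST\ARCH' SCOPE=both;

-- ARCHIVELOG mode can only be changed while the database is mounted
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```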


Now check that the archived redo logs appear in the location specified by the log_archive_dest_1 parameter.

Force database logging

If you are creating a physical standby database, it is a requirement that there are no unlogged transactions in your database, such as direct-path loads and other operations that do not generate REDO. Because your standby is an identical physical copy of the primary, all transactions must be applied in the correct order. To enforce this, you need to run the following command:
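The command, which is missing above, is the standard one you run once on the primary:

```sql
-- Force all transactions to generate REDO, even direct-path operations
ALTER DATABASE FORCE LOGGING;
```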


OK, so now we are working in ARCHIVELOG mode and all transactions are logged in the REDO stream, and we can take the level 0 backup using the following commands:

  rman target sys/<pwd>@<SID>

  run
  {
  backup as compressed backupset incremental level 0 database plus archivelog;
  }

How long the backup takes depends on the size of your database. If you use the normal compression level for RMAN, your backup is approximately 20% of the size of your database.

Once the level 0 backup is complete, you need to copy the files to the standby host to restore it to create the physical standby database.

Initialization parameter configuration

Before we perform the restore for the physical standby database, we need to configure some additional parameters on the primary database so that it can act as the primary in the Data Guard configuration. I have listed below the parameters I had to change to configure the primary database for Data Guard:

NOTE: I also configured some of the parameters that are only needed if the primary database becomes the standby database, in other words if a failover or switchover scenario should occur.

NOTE: In the examples below, TEST is the primary database and TESTDG is the physical standby database

NOTE: The file locations for the Data Guard database differ only by the name of the database. For example, the path ‘D:\ORADATA\TEST\’ will become ‘D:\ORADATA\TESTDG\’.

  alter system set log_archive_config = 'DG_CONFIG=(TEST,TESTDG)' scope=both;
  -- Declares the two databases that make up the Data Guard configuration

  alter system set log_archive_dest_state_1 = ENABLE scope=both;
  -- Enables the first archive log destination

  alter system set log_archive_dest_2 = 'SERVICE=TESTDG ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TESTDG' scope=both;
  -- Configures archived REDO log shipping to the Data Guard DB (TESTDG) while the database runs in the primary role

  alter system set log_archive_dest_state_2 = DEFER scope=both;
  -- Defers the second archive log destination for now; we enable it once the standby is in place

  alter system set remote_login_passwordfile = EXCLUSIVE scope=spfile;
  -- This value must be the same in both databases for the logs to be shipped

  alter system set log_archive_format = '%t_%s_%r.arc' scope=spfile;
  -- Configures the format of the archived redo log file names

  alter system set log_archive_max_processes = 4 scope=both;
  -- The maximum number of archiver processes available to ship logs.
  -- Adding more can increase CPU usage if a large number of logs need to be shipped

  alter system set fal_server = TESTDG scope=both;
  -- Sets the Fetch Archive Log server to TESTDG, the standby database
  -- Only required for when the primary switches to standby

  alter system set db_file_name_convert = 'TESTDG','TEST' scope=spfile;
  -- Converts all data file paths with TESTDG in the name, replacing it with TEST
  -- Only required for when the primary switches to standby

  alter system set log_file_name_convert = 'TESTDG','TEST' scope=spfile;
  -- Converts all log file paths with TESTDG in the name, replacing it with TEST
  -- Only required for when the primary switches to standby

  alter system set standby_file_management = AUTO scope=both;
  -- Ensures that when data files are added or dropped on the primary, they are also added to and removed from the Data Guard database

Standby Control File and PFILE Creation

Now that the initialization parameters are configured in the primary database, it is time to create a standby control file and the Data Guard database parameter file.

  ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'C:\temp\TEST.ctl';

  CREATE PFILE='C:\temp\initTEST.ora' FROM SPFILE='G:\Oracle\oradata\TEST\admin\pfile\spfileTEST.ora';

  -- Create a parameter file for the standby database
  -- I always explicitly define the PFILE and SPFILE locations to avoid problems from using the wrong file
  -- Copy this file along with the standby control file to the Data Guard DB server, ready for the restore

Now take the parameter file initTEST.ora and change the settings so that it can be used for the physical standby database.

  DB_NAME=TEST
  -- DB_NAME remains the same as the primary DB; I just wanted to point this out to you

  DB_UNIQUE_NAME=TESTDG
  -- Change this to reflect the name of your Data Guard database; I'm using TESTDG here

  CONTROL_FILES='D:\ORADATA\TESTDG\CONTROL01.CTL','D:\ORADATA\TESTDG\CONTROL02.CTL','D:\ORADATA\TESTDG\CONTROL03.CTL'
  -- Must be at least two; I'm using three, and the only difference in the path is that I changed TEST to TESTDG

  DB_FILE_NAME_CONVERT='TEST','TESTDG'
  -- Same configuration as before, but converting primary data file names to Data Guard names

  LOG_FILE_NAME_CONVERT='TEST','TESTDG'
  -- As before, the log file names in the path are changed to reflect their location on disk

  LOG_ARCHIVE_DEST_1='LOCATION=O:\flash_recover_TEST\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TESTDG'
  -- This is where the archived REDO logs are placed in both the standby and primary roles

  LOG_ARCHIVE_DEST_2='SERVICE=TEST ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TEST'
  -- This specifies the parameters used when the database runs in the primary role
  -- Only required for when the standby switches to primary

  LOG_ARCHIVE_DEST_STATE_1=ENABLE
  LOG_ARCHIVE_DEST_STATE_2=ENABLE
  -- Make sure both archive destinations are turned ON

  FAL_SERVER=TEST
  -- This refers to the name of the primary database

OK, so we have configured all the initialization parameters for both the primary and the physical standby database, and we have added the appropriate parameters in case we ever decide to fail over or switch the primary and standby roles in the Data Guard configuration.

Windows Service, Listener, TNSNames, etc.

The next step is to configure the Windows service on the server, the listeners, TNSNames files, password files, and create a SPFILE for the standby database. Let’s see how we’re going to do that.

1. Add TNSNames entries to existing Oracle clients on primary and standby database servers

Keeping the TEST and TESTDG database names we used in the earlier parts of this article, here are the TNSNames entries for those DBs:

  TEST.DEV.INT.COM =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = TESTListener)(PORT = 1521))
      )
      (CONNECT_DATA =
        (SERVICE_NAME = TEST.DEV.INT.COM)
        (SERVER = DEDICATED)
      )
    )

  TESTDG.DEV.INT.COM =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = TESTDGListener)(PORT = 1521))
      )
      (CONNECT_DATA =
        (SERVICE_NAME = TESTDG.DEV.INT.COM)
        (SERVER = DEDICATED)
      )
    )

NOTE: I am using a DNS entry for the HOST parameter. If you use DNS, make sure that the DNS name is resolved correctly using the NSLOOKUP command from both servers.

NOTE: You must ensure that SQLNet traffic is allowed in both directions for log transmission to occur.

2. Add listener entries on the Data Guard DB server and the primary DB server

You can create new listeners if you don’t already have any, or you can add the entries to an existing listener configuration; it is up to you. These are the entries I have for the TEST and TESTDG databases:

  -- Data Guard listener configuration

  11GDGLIST =
    (DESCRIPTION_LIST =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = <host>)(PORT = 1521))
      )
    )

  SID_LIST_11GDGLIST =
    (SID_LIST =
      (SID_DESC =
        (GLOBAL_DBNAME = TESTDG.DEV.INT.COM)
        (ORACLE_HOME = D:\oracle\product\11.2.0\dbhome_11203)
        (SID_NAME = TEST)
      )
    )

  -- Primary DB listener configuration

  11GLIST =
    (DESCRIPTION_LIST =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = <host>)(PORT = 1521))
      )
    )

  SID_LIST_11GLIST =
    (SID_LIST =
      (SID_DESC =
        (GLOBAL_DBNAME = TEST.DEV.INT.COM)
        (ORACLE_HOME = D:\oracle\product\11.2.0\dbhome_11203)
        (SID_NAME = TEST)
      )
    )

NOTE: Hopefully you noticed that in the configuration for the Data Guard listener, SID_NAME = TEST and not TESTDG. This is because, if you remember, the actual DB_NAME is the same as the primary (TEST), but the DB_UNIQUE_NAME is TESTDG.
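If you want to double-check that distinction on a running instance, a quick query (my addition, not part of the original steps) is:

```sql
-- NAME is the DB_NAME; DB_UNIQUE_NAME distinguishes the Data Guard members
SELECT name, db_unique_name FROM v$database;
```

On the standby this should return TEST and TESTDG respectively.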

3. Create a Windows service on the Data Guard DB server

We will use the ORADIM utility to create the Windows service on the Data Guard DB server:

  D:\oracle\product\11.2.0\dbhome_11203\bin\oradim -NEW -SID TESTDG -STARTMODE m

NOTE: I specify the full path to the oradim utility because the server has multiple Oracle homes and I want to avoid any possible confusion as to which executable is used. It is always safest to explicitly define which executable/file/etc. you want to use; then you know exactly what you are going to get.

NOTE: You can change the settings for this Windows service at any time in the registry (Start -> Run -> regedit, then HKEY_LOCAL_MACHINE -> SOFTWARE -> Oracle -> your_oracle_home_name)

4. Create a password file for Data Guard DB

  D:\oracle\product\11.2.0\dbhome_11203\bin\orapwd file='D:\ORADATA\TESTDG\pwdTEST.ora' password=change_on_install

5. Create an SPFILE for the Data Guard Database

  SET ORACLE_SID=TEST
  SQLPLUS "sys/change_on_install as sysdba"
  STARTUP NOMOUNT PFILE='D:\ORADATA\TESTDG\ADMIN\PFILE\initTEST.ora'
  CREATE SPFILE='D:\ORADATA\TESTDG\ADMIN\PFILE\SPFILETEST.ora' FROM PFILE='D:\ORADATA\TESTDG\ADMIN\PFILE\initTEST.ora';

Almost there… We are making good progress. We’ve done a lot of configuration; now it’s time to find out whether it’s actually correct! The next stage is to restore the level 0 RMAN backup that was taken from the primary database and copied over.

  SET NLS_DATE_FORMAT=YYYY-MM-DD:HH24:MI:SS

  RMAN TARGET sys/change_on_install@TESTDG

  ALTER DATABASE MOUNT;
  -- Remember that you have already started the instance NOMOUNT

  CATALOG START WITH 'O:\Oracle\flash_recover_TESTDG\flashback\TEST\BACKUPSET12_05_29';
  -- Catalog the RMAN level 0 files so that the database is aware of their existence

  SELECT 'set newname for datafile ''' || file_name || ''' to ''' || replace(file_name, 'TEST', 'TESTDG') || ''';' as cmd
  FROM dba_data_files;
  -- Command used to generate the file names for the set newname commands
  -- Run it against the primary database and use the output in the RUN block below

  run
  {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT 'O:\Oracle\flash_recover_TEST\flashback\TEST\BACKUPSET12_05_29\';
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK FORMAT 'O:\Oracle\flash_recover_TEST\flashback\TEST\BACKUPSET12_05_29\';
  set newname for datafile 'D:\ORADATA\TEST\DATA\SYSTEM01.DBF' to 'D:\ORADATA\TESTDG\DATA\SYSTEM01.DBF';
  set newname for datafile 'D:\ORADATA\TEST\DATA\UNDOTBS01.DBF' to 'D:\ORADATA\TESTDG\DATA\UNDOTBS01.DBF';
  set newname for datafile 'D:\ORADATA\TEST\DATA\SYSAUX01.DBF' to 'D:\ORADATA\TESTDG\DATA\SYSAUX01.DBF';
  set newname for datafile 'D:\ORADATA\TEST\DATA\USERS01.DBF' to 'D:\ORADATA\TESTDG\DATA\USERS01.DBF';
  RESTORE DATABASE FROM TAG 'TAG20120529T110507';
  SWITCH DATAFILE ALL;
  }

  -- This RUN block allocates two channels to use for the restore operation.
  -- You can use more channels if you have more horsepower!
  -- The "set newname" commands come from the dynamic SQL script above: copy and paste
  -- The RESTORE command specifies the TAG of the level 0 RMAN backup
  -- The commands then RESTORE the database and SWITCH all data file names according to the set newname commands

That’s it! The restore will take some time to complete; the larger the database, the longer it takes.

Now it is time to enable REDO log shipping from the primary database and start managed recovery on the standby:

  ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=both;
  -- On the primary database

  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
  -- On the standby DB

Hopefully you’ve configured everything correctly and it will work the first time… For the majority of cases where it doesn’t, here are some suggestions to help you:

Possible configuration problems with Data Guard

  Error 1017 received logging on to the standby
  ------------------------------------------------------------
  Check that the primary and standby are using a password file
  and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
  and that the SYS password is same in the password files.
  returning error ORA-16191
  ------------------------------------------------------------

Make sure you have done the following:

1. Added TNS entries on both the primary and standby database servers. If you have multiple Oracle homes, make sure you’ve changed the right one

2. Check that entries have been added for the listeners on both DB servers

3. Test a SQL*Plus login from the primary database to the standby and vice versa

4. Copy the primary database’s password file and use it for the standby database

5. Check the network connection to the standby host on the correct port (default 1521)
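For points 3 and 5, a quick way to check from either server (my addition; tnsping and sqlplus ship with the Oracle installation, and <pwd> is a placeholder) is:

```
tnsping TESTDG
sqlplus sys/<pwd>@TESTDG as sysdba
```

If tnsping resolves and returns OK but the login fails with ORA-01017, the password file is the likely culprit.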

At this point you should have the Data Guard database configured and logs shipping from the primary to the standby database — check this first by switching the log file a few times. You can use this query to check whether the logs are being applied on the physical standby:

  set lines 120 pages 1000
  alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';

  SELECT SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME
  FROM V$ARCHIVED_LOG
  WHERE FIRST_TIME > SYSDATE - 3
  ORDER BY SEQUENCE#;

The next stage of the experiment is to create snaps, simulate a failed upgrade, and test the rollback. That’s what I’m going to walk you through now.

Rollback testing with snaps

1. Switch the log file on the primary
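The command is missing here; it would be the usual one, run on the primary:

```sql
ALTER SYSTEM SWITCH LOGFILE;
```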


2. Make sure it is applied on the standby

  set lines 120 pages 1000
  alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';

  SELECT SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME
  FROM V$ARCHIVED_LOG
  WHERE FIRST_TIME > SYSDATE - 3
  ORDER BY SEQUENCE#;

3. DEFER archivelog dest 2

  ALTER SYSTEM SET log_archive_dest_state_2 = DEFER SCOPE=both;

4. Shut down the primary database

  SHUTDOWN IMMEDIATE;

5. Snap the primary DB drive(s)

Ask a storage engineer to do this

6. Bring up the primary database

  STARTUP

7. Run some sample scripts/DB work. This could be anything, but I’m using a simple example here to put a few transactions through the database.

  create table rj_test (
    id   number(10),
    name varchar2(25));

  begin
    for i in 1..100000 loop
      insert into rj_test values (1, 'Rob');
    end loop;
  end;
  /


8. Shut down the database (faking that a rollback is required)
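As before, the missing command is a clean shutdown:

```sql
SHUTDOWN IMMEDIATE;
```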


9. Revert to the snapped disks

Ask the storage engineers to revert the drives to the snap

*** While the database is down, delete the archived logs created during the upgrade ***

Failure to do so may cause a conflict when the database tries to create new archive logs with the same names. You also don’t want these archived redo logs to be shipped to your standby database, so move/delete them just to be safe.

10. Open the database
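The command is missing here; since the snap was taken after a clean shutdown, a normal startup should be all that is needed:

```sql
STARTUP
```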


11. ENABLE archivelog dest 2

  ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=both;

12. Restart managed recovery (redo apply) by running this command on the Data Guard database
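The command is missing here; presumably it is the usual managed-recovery statement used earlier:

```sql
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```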


13. Verify that archive logs are being sent and applied on the standby database

  set lines 120 pages 1000
  alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';

  SELECT SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME
  FROM V$ARCHIVED_LOG
  WHERE FIRST_TIME > SYSDATE - 3
  ORDER BY SEQUENCE#;
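As an additional check (my addition), you can also watch the standby’s background processes to confirm that the managed recovery process is applying redo:

```sql
-- MRP0 is the managed recovery process applying redo on the standby
SELECT process, status, sequence#
FROM v$managed_standby;
```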

Now that you have completed all these steps, you should hopefully see the archived redo logs being resent to your physical standby database. And most importantly, this test confirmed that my theory was correct: you can use snaps to roll back a failed upgrade on the primary database and resume the physical standby databases from where they left off before the upgrade.

I would also like to point out that if you want to perform a long-running upgrade of your primary database without suspending log shipping, you can instead snap the disks of the physical standby databases at the same time. Done that way, you would have to restore the Data Guard database if the upgrade failed; but if it succeeded, the standby would already be in sync with the primary, with no backlog of REDO logs to catch up on.

Which option you decide to take is up to you and depends on your specific requirements and what you consider acceptable and necessary.

Source by Steven R Matthews