
Sunday, March 18, 2018

EXADATA Non-Rolling Patching


QFSDP (Quarterly Full Stack Download Patch)

The patch label encodes the release date as YYMMDD; for example, 160709 corresponds to the July 2016 release.

This patch includes:

- BP (Bundle Patch) -- for the GI and RDBMS homes on the compute nodes. Apply it with the OPatch utility; the OPatch version should be the same across all compute nodes (GI/RDBMS). Running catbundle.sql requires downtime; datapatch logs in to the database and runs the necessary scripts (catbundle.sql, etc.).

- YUM update (ISO patching) -- operating-system and firmware update for the compute nodes, applied with dbnodeupdate.sh. During the update the compute node reboots several times (typically two, sometimes up to five, depending on what has to be updated), and the full reboot cycle can take around an hour. Errors reported after the last reboot may be known issues that can be ignored; verify against the patch README. Patch all the compute nodes first, then proceed with cell server patching.

- OJVM patch -- for the compute nodes; required only if the customer uses Java (OJVM) in the database.

- IB switch patch -- for the InfiniBand switches. It is applied with the patchmgr utility, which in this procedure is invoked from the ILOM console; you log in to the ILOM console using the spsh utility.

- Cell patch -- for the cell (storage) servers, applied with the patchmgr utility.
Run the exachk utility first to verify cell status and surface any hardware issues; fix any problems before patching.
As prerequisites for rolling patching, the ASM power limit should be set to a minimum of 4, disk_repair_time should be at least 8 hours (the default is 3.6 hours), and the ADVM compatibility should match the GI version (see the sketch after this list). Cell patching is invoked from a compute node; on that node the cell group file (referenced as /root/cell_group in the commands below) must contain an entry for every cell server, otherwise the cell patch will not be applied to all of them. Reboot the cell servers one by one to flush out any boot-up issues, run the patchmgr precheck, and then apply the patch with patchmgr; it picks up the cell server IPs from the cell group file and patches the cells one by one. Oracle quotes about 4 hours per cell server; in practice the patching across the cell servers often completes in about 3.5 hours, depending largely on the ASM resync operation. After patching, the flash cache is dropped, so recreate it or you will face performance issues.

- PDU patch -- for the power distribution units (each rack has two PDUs). This patch is released roughly once a year, so you will only see it in the QFSDP occasionally. It is applied using patchmgr.
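The rolling prerequisites mentioned in the cell patch item above (ASM power limit, disk_repair_time, ADVM compatibility) are set against the ASM instance, as shown in the sketch below. The disk group names DATAC1/RECOC1 and the values are illustrative; adjust them to your environment:

# On one compute node, against the +ASM instance (illustrative values)
sqlplus / as sysasm <<'EOF'
-- raise rebalance power so resync after patching finishes faster
ALTER SYSTEM SET asm_power_limit = 4;
-- check the current repair time (default 3.6h), then raise it to 8 hours
SELECT dg.name, a.value
  FROM v$asm_diskgroup dg, v$asm_attribute a
 WHERE dg.group_number = a.group_number
   AND a.name = 'disk_repair_time';
ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'disk_repair_time' = '8.0h';
ALTER DISKGROUP RECOC1 SET ATTRIBUTE 'disk_repair_time' = '8.0h';
-- set ADVM compatibility to match the GI version (example value)
ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'compatible.advm' = '11.2.0.4.0';
EOF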

High level steps
===============

Cell server patch
------------------
Check for any critical alert and status of disks using DCLI
clear the cell critical alert if any
Check the ssh equivalence for both compute and cell node
Check the compute node and cell node uptime; if the uptime is more than 128 days it is recommended to reboot
Reboot the cell and compute node
Make all griddisk inactive and shutdown cell services
Unzip the QFSDP patches
Apply the cell patch 
Check imageinfo once the cell patch has completed, then activate the grid disks


Bundle Patch (BP): Grid and RDBMS patching
---------------- -------------------------
Log on to each compute node; the Bundle Patch can be applied in parallel across nodes
Apply the JDBC patch on the GI home only


ISO Patching (Compute Node Patching)
-----------------------------------
Make sure that all NFS and ZFS file systems are unmounted and commented out in /etc/fstab
Run the precheck, which reports any conflicting RPMs that need to be removed
Take the file system backup, reboot the node, and update the image
Bring up the clusterware stack and enable CRS

IB5 critical workaround
--------------------------
IB switches are not upgraded in every QFSDP release; they are upgraded only when your switch is below a specific version and a critical fix is being delivered
As the root user, locate the IB switches using the command below
ibswitches
then ssh to the switch
take the spsh console and run the commands shown in the technical steps section below



Technical steps
===============


GI_HOME:/oracle_crs/product/11.2.0.4/crs_1
ORACLE_HOME: /oracle/product/11.2.0.4/db_1

Patch location: /oracle/depot/JULY2016-QFSDP/

drwxr-xr-x 2 oracle dba 4096 Sep 16 15:17 16486998
drwxr-xr-x 2 oracle dba 4096 Sep 16 15:18 23727132
drwxr-xr-x 3 oracle dba 4096 Sep 20 09:18 23274210


Cell Patching
==============
Check for any critical alert and status of disks using DCLI
------------------------------------------------------------

dcli -g /root/cell_group -l root "cellcli -e list alerthistory where endTime=null and alertShortName=Hardware and alertType=stateful and severity=critical" ---> 

Run these checks before you stop the cluster:
dcli -g /root/cell_group -l root "cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome"
dcli -g /root/cell_group -l root "cellcli -e list cell attributes cellsrvStatus,msStatus,rsStatus detail"

clear the cell critical alert if any
-------------------------------------

dcli -g /root/cell_group -l root "cellcli -e DROP ALERTHISTORY ALL"

Check the ssh equivalence for both compute and cell node
--------------------------------------------------------

dcli -g /root/cell_group -l root "hostname -i"
dcli -g /root/dbs_group -l root "hostname -i"


Check the compute node and cell node uptime; if the uptime is more than 128 days it is recommended to reboot
--------------------------------------------------------------------------------------------------------------
dcli -g /root/cell_group -l root "uptime"
dcli -g /root/dbs_group -l root "uptime"

Stop and disable the CRS
-------------------------
dcli -g /root/dbs_group -l root "/oracle_crs/product/11.2.0.4/crs_1/bin/crsctl check crs"
dcli -g /root/dbs_group -l root "/oracle_crs/product/11.2.0.4/crs_1/bin/crsctl stop crs -f"
dcli -g /root/dbs_group -l root "/oracle_crs/product/11.2.0.4/crs_1/bin/crsctl disable crs"


Reboot the cell and compute node
---------------------------------

dcli -g /root/dbs_group -l root "shutdown -F -r now"
dcli -g /root/cell_group -l root "shutdown -F -r now"

Make all griddisk inactive and shutdown cell services
------------------------------------------------------
dcli -g  /root/cell_group -l root "cellcli -e alter griddisk all inactive"
dcli -g /root/cell_group -l root "cellcli -e alter cell shutdown services all"

Unzip the QFSDP patches
------------------------

Apply the cell patch using the below commands
-------------------------------------------------

./patchmgr -cells /root/cell_group -reset_force
./patchmgr -cells /root/cell_group -cleanup
./patchmgr -cells /root/cell_group -patch_check_prereq
./patchmgr -cells /root/cell_group -patch

Check the imageinfo once the cell patch has completed
-------------------------------------------------
dcli -g /root/cell_group -l root imageinfo
dcli -g /root/cell_group -l root "cellcli -e alter griddisk all active"

Bundle Patch (BP): Grid and RDBMS patching
===========================================
Log on to each compute node; the Bundle Patch can be applied in parallel across nodes

GI_HOME:/oracle_crs/product/11.2.0.4/crs_1
ORACLE_HOME: /oracle/product/11.2.0.4/db_1

% /oracle_crs/product/11.2.0.4/crs_1/OPatch/opatch version
% /oracle/product/11.2.0.4/db_1/OPatch/opatch version

% /oracle/product/11.2.0.4/db_1/OPatch/opatch lspatches -oh /oracle/product/11.2.0.4/db_1
% /oracle_crs/product/11.2.0.4/crs_1/OPatch/opatch lsinventory -detail -oh /oracle_crs/product/11.2.0.4/crs_1
% /oracle/product/11.2.0.4/db_1/OPatch/opatch lsinventory -detail -oh /oracle/product/11.2.0.4/db_1

% unzip p23274515_112040_Linux-x86-64.zip
# chown -R oracle:oinstall /u01/app/oracle/patches/23274515

export ORACLE_HOME=/oracle_crs/product/11.2.0.4/crs_1

/oracle_crs/product/11.2.0.4/crs_1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/23274515/23061511
/oracle_crs/product/11.2.0.4/crs_1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/23274515/23054319
/oracle_crs/product/11.2.0.4/crs_1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/23274515/22502505

/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/23274515/23061511
/oracle/product/11.2.0.4/db_1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/23274515/23054319/custom/server/23054319



# /oracle_crs/product/11.2.0.4/crs_1/crs/install/rootcrs.pl -unlock

/oracle_crs/product/11.2.0.4/crs_1/OPatch/opatch napply -oh /oracle_crs/product/11.2.0.4/crs_1 -local <UNZIPPED_PATCH_LOCATION>/23274515/23061511
/oracle_crs/product/11.2.0.4/crs_1/OPatch/opatch napply -oh /oracle_crs/product/11.2.0.4/crs_1 -local <UNZIPPED_PATCH_LOCATION>/23274515/23054319
/oracle_crs/product/11.2.0.4/crs_1/OPatch/opatch napply -oh /oracle_crs/product/11.2.0.4/crs_1 -local <UNZIPPED_PATCH_LOCATION>/23274515/22502505

Apply JDBC patch on GI only
$ cd <PATCH_TOP_DIR>/23727132

/oracle_crs/product/11.2.0.4/crs_1/OPatch/opatch apply -local



export ORACLE_HOME=/oracle/product/11.2.0.4/db_1

/u01/patches/23274515/23054319/custom/server/23054319/custom/scripts/prepatch.sh -dbhome /oracle/product/11.2.0.4/db_1
/oracle/product/11.2.0.4/db_1/OPatch/opatch napply -oh /oracle/product/11.2.0.4/db_1 -local <UNZIPPED_PATCH_LOCATION>/23274515/23061511
/oracle/product/11.2.0.4/db_1/OPatch/opatch napply -oh /oracle/product/11.2.0.4/db_1 -local <UNZIPPED_PATCH_LOCATION>/23274515/23054319/custom/server/23054319
/u01/patches/23274515/23054319/custom/server/23054319/custom/scripts/postpatch.sh -dbhome /oracle/product/11.2.0.4/db_1

Make sure that cell patching is completed and the grid disks are active before running the commands below
======================================================================================================
/oracle_crs/product/11.2.0.4/crs_1/rdbms/install/rootadd_rdbms.sh
/oracle_crs/product/11.2.0.4/crs_1/crs/install/rootcrs.pl -patch
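
Once rootcrs.pl -patch has relocked the GI home and brought the stack up, re-enable CRS and load the SQL portion of the Bundle Patch into each database. A minimal sketch, assuming the 11.2.0.4 homes shown above and the 11.2-style catbundle step mentioned earlier (on 12c homes you would run datapatch from $ORACLE_HOME/OPatch instead):

dcli -g /root/dbs_group -l root "/oracle_crs/product/11.2.0.4/crs_1/bin/crsctl enable crs"
dcli -g /root/dbs_group -l root "/oracle_crs/product/11.2.0.4/crs_1/bin/crsctl check crs"

# Then, for each database, from one node only:
export ORACLE_HOME=/oracle/product/11.2.0.4/db_1
cd $ORACLE_HOME/rdbms/admin
sqlplus / as sysdba @catbundle.sql exa apply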


ISO Patching (Compute Node Patching)
===================================
Stop the clusterware
dcli -g /root/dbs_group -l root "/oracle_crs/product/11.2.0.4/crs_1/bin/crsctl check crs"
dcli -g /root/dbs_group -l root "/oracle_crs/product/11.2.0.4/crs_1/bin/crsctl stop crs -f"

Make sure that all NFS and ZFS file systems are unmounted and commented out in /etc/fstab
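
A small illustration of that check; the mount point below is a placeholder for whatever NFS/ZFS shares exist in your environment:

# Show any NFS mounts still active on the node
mount -t nfs
# Unmount each one (example mount point) and comment out the matching lines in /etc/fstab
umount /zfs_backup
vi /etc/fstab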

Unzip p16486998_121232_Linux-x86-64.zip and copy the patch to all compute nodes

The command below runs the precheck and reports any conflicting RPMs that need to be removed

./dbnodeupdate.sh -u -l /ora01/patches/23274210/Infrastructure/12.1.2.3.2/ExadataDatabaseServer_OL6/p23564643_121232_Linux-x86-64.zip -v -N 

The command below takes the file system backup, reboots the node, and updates the image
./dbnodeupdate.sh -u -l /ora01/patches/23274210/Infrastructure/12.1.2.3.2/ExadataDatabaseServer_OL6/p23564643_121232_Linux-x86-64.zip

The command below brings up the clusterware stack and enables CRS
./dbnodeupdate.sh -c

IB5 critical workaround
=========================
IB switches are not upgraded in every QFSDP release; they are upgraded only when your switch is below a specific version and a critical fix is being delivered

As the root user, locate the IB switches using the command below
ibswitches
then ssh to the switch
take the spsh console and run the commands below

-> set /SP/services/http secureredirect=disabled
-> set /SP/services/http servicestate=disabled
-> set /SP/services/https servicestate=disabled
-> exit
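
For releases where the QFSDP does carry an IB switch firmware upgrade, the upgrade itself is typically driven with patchmgr from a database node against a list of switch hostnames. A rough sketch; the list file name and switch names are examples, and the exact flags should be taken from the patch README:

# List the switches visible on the fabric, then put one switch hostname per line in a file
ibswitches
cat > ibswitches.lst <<EOF
exa01sw-ib2
exa01sw-ib3
EOF
# Pre-check first, then upgrade
./patchmgr -ibswitches ibswitches.lst -upgrade -ibswitch_precheck
./patchmgr -ibswitches ibswitches.lst -upgrade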




Monday, September 19, 2016

How to enable Write-Back Flash Cache in EXADATA?

Two methods are available:
1. Rolling Method - Assuming that RDBMS & ASM instances are UP and enabling Write-Back Flash Cache in One Cell Server at a time
2. Non-Rolling Method - Assuming that RDBMS & ASM instances are DOWN while enabling Write-Back Flash Cache

Note: Before performing the steps below, perform the following checks as root from one of the compute nodes:

Check all griddisk “asmdeactivationoutcome” and “asmmodestatus” to ensure that all griddisks on all cells are “Yes” and “ONLINE” respectively.

# dcli -g cell_group -l root cellcli -e list griddisk attributes asmdeactivationoutcome, asmmodestatus

Check that all of the flash caches are in the “normal” state and that no flash disks are in a degraded or critical state, and confirm the current flash cache mode:

# dcli -g cell_group -l root cellcli -e list flashcache detail
# dcli -g cell_group -l root cellcli -e list cell attributes flashcachemode
exadata01cell01: WriteThrough
exadata01cell02: WriteThrough
exadata01cell03: WriteThrough

1.     Rolling Method:

(Assuming that RDBMS & ASM instances are UP and enabling Write-Back Flash Cache in One Cell Server at a time)

Login to Cell Server:

Step 1. Drop the flash cache on that cell

# cellcli -e drop flashcache
Flash cache exadata01cell01_FLASHCACHE successfully dropped

Step 2. Check with ASM that the grid disks can safely go OFFLINE. The following command should return 'Yes' for the grid disks being listed:

# cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
         DATAC1_CD_00_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_01_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_02_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_03_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_04_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_05_exadata01cell01   OFFLINE  Yes
         DBFS_DG_CD_02_exadata01cell01  OFFLINE  Yes
         DBFS_DG_CD_03_exadata01cell01  OFFLINE  Yes
         DBFS_DG_CD_04_exadata01cell01  OFFLINE  Yes
         DBFS_DG_CD_05_exadata01cell01  OFFLINE  Yes
         RECOC1_CD_00_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_01_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_02_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_03_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_04_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_05_exadata01cell01   OFFLINE  Yes

Step 3. Inactivate the griddisk on the cell
# cellcli -e alter griddisk all inactive

Step 4. Shut down cellsrv service
# cellcli -e alter cell shutdown services cellsrv 

Stopping CELLSRV services...
The SHUTDOWN of CELLSRV services was successful.

Step 5. Set the cell flashcache mode to writeback 
# cellcli -e "alter cell flashCacheMode=writeback"

Cell exadata01cell01 successfully altered

Step 6. Restart the cellsrv service 
# cellcli -e alter cell startup services cellsrv 
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
  
Step 7. Reactivate the griddisks on the cell
# cellcli -e alter griddisk all active
GridDisk DATAC1_CD_00_exadata01cell03 successfully altered
GridDisk DATAC1_CD_01_exadata01cell03 successfully altered
GridDisk DATAC1_CD_02_exadata01cell03 successfully altered
GridDisk DATAC1_CD_03_exadata01cell03 successfully altered
GridDisk DATAC1_CD_04_exadata01cell03 successfully altered
GridDisk DATAC1_CD_05_exadata01cell03 successfully altered
GridDisk DBFS_DG_CD_02_exadata01cell03 successfully altered
GridDisk DBFS_DG_CD_03_exadata01cell03 successfully altered
GridDisk DBFS_DG_CD_04_exadata01cell03 successfully altered
GridDisk DBFS_DG_CD_05_exadata01cell03 successfully altered
GridDisk RECOC1_CD_00_exadata01cell03 successfully altered
GridDisk RECOC1_CD_01_exadata01cell03 successfully altered
GridDisk RECOC1_CD_02_exadata01cell03 successfully altered
GridDisk RECOC1_CD_03_exadata01cell03 successfully altered
GridDisk RECOC1_CD_04_exadata01cell03 successfully altered
GridDisk RECOC1_CD_05_exadata01cell03 successfully altered

Step 8. Verify all grid disks have been successfully put online using the following command:
# cellcli -e list griddisk attributes name, asmmodestatus, asmdeactivationoutcome

        DATAC1_CD_00_exadata01cell02   ONLINE         Yes
         DATAC1_CD_01_exadata01cell02   ONLINE         Yes
         DATAC1_CD_02_exadata01cell02   ONLINE         Yes
         DATAC1_CD_03_exadata01cell02   ONLINE         Yes
         DATAC1_CD_04_exadata01cell02   ONLINE         Yes
         DATAC1_CD_05_exadata01cell02   ONLINE         Yes
         DBFS_DG_CD_02_exadata01cell02  ONLINE         Yes
         DBFS_DG_CD_03_exadata01cell02  ONLINE         Yes
         DBFS_DG_CD_04_exadata01cell02  ONLINE         Yes
         DBFS_DG_CD_05_exadata01cell02  ONLINE         Yes
         RECOC1_CD_00_exadata01cell02   ONLINE         Yes
         RECOC1_CD_01_exadata01cell02   ONLINE         Yes
         RECOC1_CD_02_exadata01cell02   ONLINE         Yes
         RECOC1_CD_03_exadata01cell02   ONLINE         Yes
         RECOC1_CD_04_exadata01cell02   ONLINE         Yes
         RECOC1_CD_05_exadata01cell02   ONLINE         Yes

Step 9. Recreate the flash cache 
# cellcli -e create flashcache all 

Flash cache exadata01cell01_FLASHCACHE successfully created

If the flash disk is used for flash cache, then the effective cache size increases. If the flash disk is used for grid disks, then the grid disks are re-created on the new flash disk. If those grid disks were part of an Oracle ASM disk group, then they are added back to the disk group, and the data is rebalanced on them based on the disk group redundancy and the ASM_POWER_LIMIT parameter.

Step 10. Check the status of the cell to confirm that it's now in WriteBack mode:

# cellcli -e list cell detail | grep flashCacheMode 
flashCacheMode:         WriteBack                            


Step 11. Repeat the same steps on each remaining cell through the final cell. However, before taking another storage server offline, execute the following and make sure 'asmdeactivationoutcome' displays YES:

# cellcli -e list griddisk attributes name,asmmodestatus, asmdeactivationoutcome
         DATAC1_CD_00_exadata01cell01   ONLINE  Yes
         DATAC1_CD_01_exadata01cell01   ONLINE  Yes
         DATAC1_CD_02_exadata01cell01   ONLINE  Yes
         DATAC1_CD_03_exadata01cell01   ONLINE  Yes
         DATAC1_CD_04_exadata01cell01   ONLINE  Yes
         DATAC1_CD_05_exadata01cell01   ONLINE  Yes
         DBFS_DG_CD_02_exadata01cell01  ONLINE  Yes
         DBFS_DG_CD_03_exadata01cell01  ONLINE  Yes
         DBFS_DG_CD_04_exadata01cell01  ONLINE  Yes
         DBFS_DG_CD_05_exadata01cell01  ONLINE  Yes
         RECOC1_CD_00_exadata01cell01   ONLINE  Yes
         RECOC1_CD_01_exadata01cell01   ONLINE  Yes
         RECOC1_CD_02_exadata01cell01   ONLINE  Yes
         RECOC1_CD_03_exadata01cell01   ONLINE  Yes
         RECOC1_CD_04_exadata01cell01   ONLINE  Yes
         RECOC1_CD_05_exadata01cell01   ONLINE  Yes
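
Resync can take some time. Before taking the next cell offline, you can simply poll the cell you just patched until every grid disk is back ONLINE; a minimal shell sketch (run on that cell):

# Loop until no grid disk on this cell reports OFFLINE or SYNCING
while cellcli -e "list griddisk attributes name,asmmodestatus" | grep -Eq 'OFFLINE|SYNCING'; do
    sleep 60
done
echo "All grid disks are ONLINE on this cell"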

After changing the flash cache mode on all cells, verify that it is now WriteBack on every cell:
# dcli -g ~/cell_group -l root cellcli -e "list cell attributes flashcachemode"
exadata01cell01: WriteBack
exadata01cell02: WriteBack
exadata01cell03: WriteBack
  
2.     Non-Rolling Method:

(Assuming that RDBMS & ASM instances are DOWN while enabling Write-Back Flash Cache)

Step 1. Drop the flash cache on each cell
# cellcli -e drop flashcache 

Step 2. Shut down cellsrv service

# cellcli -e alter cell shutdown services cellsrv 

Step 3. Set the cell flashcache mode to writeback 

# cellcli -e "alter cell flashCacheMode=writeback" 

Step 4. Restart the cellsrv service 

# cellcli -e alter cell startup services cellsrv 

Step 5. Recreate the flash cache 

# cellcli -e create flashcache all


Write-Back Flash Cache Not Required for DiskGroup:

Note: Write-Back Flash Cache can be disabled for disk groups (such as RECO) that do not require this feature. This can save space in the flash cache.
The CACHINGPOLICY attribute is used to change the flash cache policy of a grid disk.

Before changing the cache policy from default to none, ensure there is no cached data in flash cache for the grid disk:

CellCLI> create griddisk all harddisk prefix=RECO, size=1006, cachingPolicy="none";

OR

CELLCLI>ALTER GRIDDISK grid_disk_name FLUSH;
CELLCLI>ALTER GRIDDISK grid_disk_name CACHINGPOLICY="none";


Flushing the data from Flash Cache to Disk – Manual Method:

Data that has not yet been synchronized to the grid disk can be flushed using the FLUSH option.
CELLCLI>ALTER GRIDDISK grid_disk_name FLUSH

Use the following command to check the progress of this activity:

CELLCLI>LIST GRIDDISK ATTRIBUTES name, flushstatus, flusherr


Reinstating WriteThrough FlashCache:

1.   To reinstate Writethrough caching, FlashCache must first be flushed
2.   FlashCache must then be dropped and cellsrv stopped.

Step 1. CELLCLI> alter flashcache all flush
Step 2. CELLCLI> drop flashcache
Step 3. CELLCLI> alter cell shutdown services cellsrv
Step 4. CELLCLI> alter cell flashCacheMode = WriteThrough
Step 5. CELLCLI> alter cell startup services cellsrv
Step 6. CELLCLI> create flashcache all

Monitoring Flash Cache Usage:

CELLCLI> list metricdefinition attributes name, description where name like '.*_DIRTY'




CD_BY_FC_DIRTY    - Number of unflushed bytes cached in FLASHCACHE on a cell disk
FC_BY_DIRTY       - Number of unflushed bytes in FlashCache
FC_BY_STALE_DIRTY - Number of unflushed bytes in FlashCache that cannot be flushed because the cached disks are not accessible
GD_BY_FC_DIRTY    - Number of unflushed bytes cached in FLASHCACHE for a grid disk

Sunday, September 18, 2016

Data/query Processing in EXADATA

Please read the data points

EXADATA - Importance of Cellinit.ora and Cellip.ora files

Cellinit.ora and Cellip.ora

After Oracle Exadata Storage Server is configured, the database server host must be configured with the cellinit.ora and the cellip.ora files to use the cell. 

The files are located in the /etc/oracle/cell/network-config directory of the database server host. These configuration files contain IP addresses, not host names.

cellinit.ora - This file contains the database IP addresses.
cellip.ora -    This file contains the storage cell IP addresses.

Example:

A quarter-rack Exadata machine contains 2 compute (DB) nodes and 3 cell (storage) servers. The IP assignments below show the configuration.

192.168.50.23 and 192.168.50.24 belong to the compute nodes.
192.168.51.27, 192.168.51.28 and 192.168.51.29 belong to the cell servers.
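
For the layout above, the two files would look roughly like this on one of the compute nodes (the /24 prefix is an assumption; check your own /etc/oracle/cell/network-config files):

# cellinit.ora - the database server's own InfiniBand IP
ipaddress1=192.168.50.23/24

# cellip.ora - one entry per storage cell
cell="192.168.51.27"
cell="192.168.51.28"
cell="192.168.51.29"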

How to change system account password in EXADATA?

Run the commands below from the first node of the respective servers, as the root user, to effect the change on all nodes of the corresponding servers.

For “root” user:
=================
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "echo <random pwd>| passwd root --stdin"

Verify the root account password change across all nodes using the below command:

dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "chage -l root"

For “oracle” user:
===================
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "echo <random pwd>| passwd oracle --stdin"

Verify the oracle account password change across all nodes using the below command:

dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "chage -l oracle"

Note: Replace the <random pwd> string with the actual password to be set.