Environment:
- Exadata X9M-2
- Virtual environment with KVM
- Oracle Home in use: 19.22.0.0
- Oracle Home to remove: 12.1.0.2
- OS: Oracle Linux 7.9
- VM Name: ex2-itouglab01
STEP 10: Save XML file
You can save the XML file as a backup of the current configuration, but this is not strictly necessary: each time you load an XML configuration, an automatic backup is created with the following naming format:
Itoug-ex2-itouglab_<YYYY-MM-DD>_<HH24MISS>.xml
In any case, you can take an explicit backup using the SAVE command:
oedacli> SAVE FILE NAME=/EXAVMIMAGES/onecommand/Itoug-ex2-itouglab_DEPLOY_20240318.xml
STEP 11: DEPLOY activity:
You could run the deploy directly from the interactive prompt, but this is not the best approach: a connection failure would interrupt the session, and the root password method does not work in this example (SSH key authentication is used instead, see below).
To complete this task, create a file with all the commands previously defined (usually saved in the linux-x64.Conf directory), with the deploy phase at the end:
# cat DeleteDBHome.ex2itouglab.cmd
LOAD FILE NAME=/EXAVMIMAGES/onecommand/Itoug-ex2-ex2itouglab.xml
LIST DATABASEHOMES
LIST CLUSTERS
LIST XMLACTIONS
RESET ACTIONS
DELETE DATABASEHOME where CLUSTERNAME='ex2itouglab' DBHOMELOC='/u01/app/oracle/product/12.1.0.2.220719/dbhome_1'
SAVE ACTION
MERGE ACTIONS
LIST XMLACTIONS
SAVE FILE NAME=/EXAVMIMAGES/onecommand/Itoug-ex2-ex2itouglab_DELETE_DBHOME_DEPLOY_20250114.xml
DEPLOY ACTIONS
Executing oedacli --help, you can see all the available options:
# ./oedacli --help
Usage:
oedacli [ -h ] [ -l ] [ -j ] [ -q ] [ -f commandfile ] [ -c configfile [ -e immediatecommand ]]
-h, --help
Display help.
-l, --enhanced-logging
Enable verbose logging
-j, --json-output
LIST command output will be in json format.
-q, --quiet-mode
For LIST commands, return only data, no on-screen status.
-f, --command-file commandfile
A file containing commands to be executed.
-c, --config-file configfile
The name of the OEDA xml file to process
If not specified, load the file using the LOAD FILE command in the cli.
-e, --immediate-command immediatecommand
An immediate command to run, typically a LIST command.
If specified, must be the last parameter in the list
--sshkeys
Enable SSH Key login to remote nodes
--enablesu
Run command with SU using root user for grid/oracle user
--enablersa
Use RSA keys for SSH. If not specified, ECDSA keys are used by default.
--exitonerror
Exit OEDACLI session on error
-v, --version
reports the version of OEDACLI
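For example, the -c and -e options can be combined to run a single command against a configuration file without entering the interactive prompt (an illustrative invocation, using the same XML file as in this example):
# ./oedacli -c /EXAVMIMAGES/onecommand/Itoug-ex2-ex2itouglab.xml -e "LIST DATABASEHOMES"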
BEWARE!!!
Make sure that no users have open sessions or processes in the directory that will be removed by the deinstall process! Otherwise you will have to unmount it manually, as you will read in the log:
Unmounting file system..
File System /u01/app/oracle/product/12.1.0.2.220719/dbhome_1 is currently being accessed and following are the details. Please unmount this file system on ex2itouglab manually.
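A simple way to verify this in advance is a standard lsof/fuser check on the home directory from the VM (generic Linux commands, not part of the OEDA procedure; no output means nobody is using it):
# fuser -vm /u01/app/oracle/product/12.1.0.2.220719/dbhome_1
# lsof +D /u01/app/oracle/product/12.1.0.2.220719/dbhome_1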
Now execute in nohup mode the oedacli using input parameters to use the ssh connection and the command file previously defined:
# nohup ./oedacli -f /EXAVMIMAGES/onecommand/linux-x64.Conf/DeleteDBHome.ex2itouglab.cmd --exitonerror --sshkeys --enablersa > /EXAVMIMAGES/onecommand/linux-x64.Conf/DeleteDBHome.ex2itouglab.log &
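While the deploy runs, you can follow its progress with a standard tail on the log file defined above:
# tail -f /EXAVMIMAGES/onecommand/linux-x64.Conf/DeleteDBHome.ex2itouglab.log
The log content will look like the following: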
SUCCESS - file loaded OK
Customer : Itoug - On Line Games Stage and Prod
version : "CloneInstall"
cluster :
id : "Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_id"
databaseHomeName : "itouglab_home_121"
databaseSwOwner : "0594e31d-1e80-2c36-b847-89a63c721963"
databaseVersion : "12.1.0.2.220719"
databaseHomeLoc : "/u01/app/oracle/product/12.1.0.2.220719/dbhome_1"
inventoryLocation : "/u01/app/oraInventory"
installType : "rac_database"
language : "all_langs"
machines :
machine :
domainGroup :
machine :
id : "ex2_713bcc3f-b4fa-4011-8108-98b70836a6aa_compute_Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_vm01_id"
domainGroup :
machine :
id : "ex2_3dec114a-9eeb-42f8-a038-b2224d00ef8d_compute_Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_vm01_id"
basedir : "/u01/app/oracle"
id : "DbHome_6f71f8a3-a1e0-1607-c66f-3d551f58c320_id"
version : "CloneInstall"
cluster :
id : "Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_id"
databaseHomeName : "itouglab_home_1922"
databaseSwOwner : "0594e31d-1e80-2c36-b847-89a63c721963"
databaseVersion : "19.22.0.0.240116"
databaseHomeLoc : "/u01/app/oracle/product/19.22.0.0/dbhome_1"
inventoryLocation : "/u01/app/oraInventory"
language : "all_langs"
machines :
machine :
domainGroup :
machine :
id : "ex2_713bcc3f-b4fa-4011-8108-98b70836a6aa_compute_Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_vm01_id"
domainGroup :
machine :
id : "ex2_3dec114a-9eeb-42f8-a038-b2224d00ef8d_compute_Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_vm01_id"
patches :
patch :
basedir : "/u01/app/oracle"
useZfs : "false"
id : "Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_databaseHome1"
version : "CloneInstall"
clusterName : "ex2itouglab"
clusterOwner : "710c8ec9-4b4d-2e51-4845-ee47a2bec251"
clusterVersion : "19.22.0.0.240116"
clusterHome : "/u01/app/19.0.0.0/grid"
inventoryLocation : "/u01/app/oraInventory"
asmScopedSecurity : "true"
clusterVips :
clusterVip :
vipName : "ex2-itouglab01-vip"
domainName : "farm.Itoug-italia.local"
vipIpAddress : "10.20.6.133"
machines :
machine :
domainGroup :
machine :
id : "ex2_713bcc3f-b4fa-4011-8108-98b70836a6aa_compute_Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_vm01_id"
id : "ex2_713bcc3f-b4fa-4011-8108-98b70836a6aa_compute_Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_vm01_id_vip"
vipName : "ex2-itouglab02-vip"
domainName : "farm.Itoug-italia.local"
vipIpAddress : "10.20.6.134"
machines :
machine :
domainGroup :
machine :
id : "ex2_3dec114a-9eeb-42f8-a038-b2224d00ef8d_compute_Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_vm01_id"
id : "ex2_3dec114a-9eeb-42f8-a038-b2224d00ef8d_compute_Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_vm01_id_vip"
customerName : "Itoug"
application : "Bwin Production"
scanIps :
scanIp :
clusterScans :
clusterScan :
id : "Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_id_scan_client"
diskGroups :
diskGroup :
id : "b44bb2cc-41c2-99fe-d3d8-35a5f1d5709a"
id : "3a22d93c-16bd-6a79-21f1-7a6d095b8ccb"
id : "5a21e8cc-c59c-61a2-aa15-6b61517a4a1a"
basedir : "/u01/app/grid"
language : "all_langs"
patches :
patch :
id : "Cluster-c570fead1-8ea0-b75d-6303-d553d84200e7_id"
processMerge
processMergeActions
Merging Action : DELETE DATABASEHOME where CLUSTERNAME='ex2itouglab' DBHOMELOC='/u01/app/oracle/product/12.1.0.2.220719/dbhome_1'
Merging DELETE DATABASEHOME
Action Validated and Merged OK
Action ID=1 merged=true deployed=false
ID=1,CMDID=1,CMD="DELETE DATABASEHOME where CLUSTERNAME='ex2itouglab' DBHOMELOC='/u01/app/oracle/product/12.1.0.2.220719/dbhome_1'"
File : /EXAVMIMAGES/onecommand/Itoug-ex2-ex2itouglab_DELETE_DBHOME_DEPLOY_20250114.xml saved OK
Deploying Action ID : 1 DELETE DATABASEHOME where CLUSTERNAME='ex2itouglab' DBHOMELOC='/u01/app/oracle/product/12.1.0.2.220719/dbhome_1'
Deploying DELETE DATABASEHOME
Validating Oracle home..
Deinstalling database home itouglab_home_121
|\\\\
|||||
/||||
Unmounting file system..
Unmounting file system /u01/app/oracle/product/12.1.0.2.220719/dbhome_1 on ex2-itouglab01-dbadm.farm.Itoug-italia.local
Unmounting file system /u01/app/oracle/product/12.1.0.2.220719/dbhome_1 on ex2-itouglab02-dbadm.farm.Itoug-italia.local
Updating /etc/fstab entries...
Completed deleting additional Oracle Home on Cluster ex2itouglab
Done...
Done [Elapsed = 40316 mS [0.0 minutes] Tue Jan 14 17:19:02 CET 2025]]
Usually this activity is quite quick (a few seconds), but the next part certainly requires more time because of the manual activities.
STEP 12: Check KVM image info:
Connect to the VM where you removed the Oracle Home and check the KVM image information in the /etc/fstab map:
# cat /etc/fstab
/dev/VGExaDbDisk.grid19.18.0.0.230117.img/LVDBDisk /u01/app/19.0.0.0/grid xfs defaults 0 0
/dev/VGExaDbDisk.db12.1.0.2.220719_4.img/LVDBDisk /u01/app/oracle/product/12.1.0.2.220719/dbhome_1 xfs defaults 0 0
/dev/VGExaDbDisk.db19.22.0.0.240116_4.img/LVDBDisk /u01/app/oracle/product/19.22.0.0/dbhome_1 xfs defaults 1 1
/dev/VGExaDbDisk.u01.20.img/LVDBDisk /u01 xfs defaults 0 0
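You can also confirm that the filesystem of the removed home is no longer mounted (a generic check; no output means it is not mounted):
# mount | grep 12.1.0.2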
Check Inventory
If everything is OK in the previous step, you can now check whether the inventory has been updated successfully:
$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2024, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>12.2.0.7.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGiHome19180" LOC="/u01/app/19.0.0.0/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="itouglab_home_1918" LOC="/u01/app/oracle/product/19.18.0.0/dbhome_1" TYPE="O" IDX="2"/>
<HOME NAME="agent13c1" LOC="/u01/app/oracle/agent/agent_13.5.0.0.0" TYPE="O" IDX="4"/>
<HOME NAME="itouglab_home_1922" LOC="/u01/app/oracle/product/19.22.0.0/dbhome_1" TYPE="O" IDX="5"/>
<HOME NAME="ebprd_home_122" LOC="/u01/app/oracle/product/12.2.0.1.220118/dbhome_1" TYPE="O" IDX="3" REMOVED="T"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
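The inventory is local to each node, so it is worth repeating the same check on the other cluster node as well (host name as used in this example):
$ ssh ex2-itouglab02-dbadm.farm.Itoug-italia.local cat /u01/app/oraInventory/ContentsXML/inventory.xml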
STEP 13: Remove disk from VM
After the database home deletion you have to remove the disk from the virtual machine: it has been unmounted, but not removed.
List Logical volumes Summary:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LVDbHome VGExaDb -wi-ao---- 4.00g
LVDbKdump VGExaDb -wi-ao---- 20.00g
LVDbSwap1 VGExaDb -wi-ao---- 16.00g
LVDbSys1 VGExaDb -wi-ao---- 15.00g
LVDbSys2 VGExaDb -wi-a----- 15.00g
LVDbTmp VGExaDb -wi-ao---- 3.00g
LVDbVar1 VGExaDb -wi-ao---- 2.00g
LVDbVar2 VGExaDb -wi-a----- 2.00g
LVDbVarLog VGExaDb -wi-ao---- 18.00g
LVDbVarLogAudit VGExaDb -wi-ao---- 1.00g
LVDbVdEX2ITOUGLAB01DBADMACFS2 VGExaDb -wi-ao---- 128.00m
LVDbVdEX2ITOUGLAB01DBADMDATAC2 VGExaDb -wi-ao---- 128.00m
LVDbVdEX2ITOUGLAB01DBADMRECOC2 VGExaDb -wi-ao---- 128.00m
LVDoNotRemoveOrUse VGExaDb -wi-a----- 2.00g
LVDBDisk VGExaDbDisk.db12.1.0.2.220719_4.img -wi-a----- 50.00g
LVDBDisk VGExaDbDisk.db19.22.0.0.240116_4.img -wi-ao---- 50.00g
LVDBDisk VGExaDbDisk.grid19.18.0.0.230117.img -wi-ao---- 50.00g
LVDBDisk VGExaDbDisk.u01.20.img -wi-ao---- 150.00g
Identify the details of the one you want to remove (note in the output above that its LV shows attributes -wi-a-----, i.e. active but no longer open, since the filesystem has been unmounted):
# lvdisplay |grep db12|grep Path
LV Path /dev/VGExaDbDisk.db12.1.0.2.220719_4.img/LVDBDisk
Check the active volume groups:
# vgdisplay -A|grep img
VG Name VGExaDbDisk.db19.22.0.0.240116_4.img
VG Name VGExaDbDisk.u01.20.img
VG Name VGExaDbDisk.grid19.18.0.0.230117.img
VG Name VGExaDbDisk.db19.26.0.0.250121_4.img
VG Name VGExaDbDisk.db19.18.0.0.230117_3.img
VG Name VGExaDbDisk.db12.2.0.1.220118_4.img
Deactivate the volume group:
# vgchange -an VGExaDbDisk.db12.1.0.2.220719_4.img
0 logical volume(s) in volume group "VGExaDbDisk.db12.1.0.2.220719_4.img" now active
Check that the volume group is correctly deactivated:
# vgdisplay -A|grep img
VG Name VGExaDbDisk.audit.img
VG Name VGExaDbDisk.db19.18.0.0.230117_3.img
VG Name VGExaDbDisk.diag01.img
VG Name VGExaDbDisk.grid19.18.0.0.230117.img
VG Name VGExaDbDisk.u01.20.img
Remove the logical volume:
# lvremove /dev/VGExaDbDisk.db12.1.0.2.220719_4.img/LVDBDisk
Do you really want to remove active logical volume VGExaDbDisk.db12.1.0.2.220719_4.img/LVDBDisk? [y/n]: y
Logical volume "LVDBDisk" successfully removed
List volume groups:
# vgs
VG #PV #LV #SN Attr VSize VFree
VGExaDb 1 14 0 wz--n- <100.55g 2.17g
VGExaDbDisk.db12.1.0.2.220719_4.img 1 0 0 wz--n- <52.00g <52.00g
VGExaDbDisk.db19.22.0.0.240116_4.img 1 1 0 wz--n- <52.00g <2.00g
VGExaDbDisk.grid19.18.0.0.230117.img 1 1 0 wz--n- <52.00g <2.00g
VGExaDbDisk.u01.20.img 2 1 0 wz--n- 153.99g 3.99g
List the physical volumes (to save their info before removing the volume group):
# pvdisplay|grep db12 -B2 -A8
--- Physical volume ---
PV Name /dev/sdg1
VG Name VGExaDbDisk.db12.1.0.2.220719_4.img
PV Size <52.00 GiB / not usable 3.95 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 13311
Free PE 13311
Allocated PE 0
PV UUID rJA7TB-KdjQ-4u2w-iQIc-UeXM-AXSQ-tqyfa3
Remove volume group:
# vgremove VGExaDbDisk.db12.1.0.2.220719_4.img
Volume group "VGExaDbDisk.db12.1.0.2.220719_4.img" successfully removed
Check that the physical volume is no longer assigned to a volume group:
# pvdisplay|grep /dev/sdg1 -B1 -A9
"/dev/sdg1" is a new physical volume of "<52.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdg1
VG Name
PV Size <52.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID rJA7TB-KdjQ-4u2w-iQIc-UeXM-AXSQ-tqyfa3
Remove the physical volume:
# pvremove /dev/sdg1
Labels on physical volume "/dev/sdg1" successfully wiped.
Check that the logical volume no longer appears (the underlying disk device will remain visible until the image is detached from the KVM host in the next step):
# lsblk|grep db12 -B2
STEP 14: Remove disk from the KVM host (Dom0)
Check the VM block devices from the KVM host:
[root@ex2dbadm01 linux-x64]# virsh domblklist ex2-itouglab01-dbadm.farm.eurobet-italia.local
Target Source
-------------------------------------------------------------------------------------------------------------------------
sda /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/System.img
sdb /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/grid19.18.0.0.230117.img
sdc /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/db19.18.0.0.230117_3.img
sdd /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/db12.1.0.2.220719_4.img
sde /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/u01.img
sdg /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/diag.img
sdh /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/u01_1.img
sdi /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/db-klone-Linux-x86-64-19000240116.50.img
or:
# vm_maker --list --disk-image --domain ex2-itouglab01-dbadm.farm.eurobet-italia.local
File /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/System.img
File /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/grid19.18.0.0.230117.img
File /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/db12.1.0.2.220719_4.img
File /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/db-klone-Linux-x86-64-19000240116.50.img
File /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/u01.img
File /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/u01_1.img
Check the images listed in the physical path:
# cd /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.itougdomain.local/
# ll *.img
-rw-r--r-- 1 root root 55834574848 Jan 15 15:38 db-klone-Linux-x86-64-19000240116.50.img
-rw-r--r-- 1 root root 55834574848 Jan 15 15:10 db12.1.0.2.220719_4.img
-rw-r--r-- 1 root root 55834574848 Jan 15 15:38 grid19.18.0.0.230117.img
-rw-r----- 1 root root 108770046976 Jan 15 15:38 System.img
-rw-r--r-- 1 root root 141733920768 Jan 15 15:38 u01_1.img
-rw-r--r-- 1 root root 23622320128 Jan 15 15:38 u01.img
Usually the image to remove has its date and time set to the last activity on it, so it appears slightly older than the other files.
Identify the disk you want to remove:
It is important to note that disks created during the initial setup usually have the format:
db<release version>.YYMMDD_x.img
for example:
db19.18.0.0.230117_3.img
Homes created later, instead, usually have the format of a clone:
db-klone-Linux-x86-64-<base release version><version date YYMMDD>.<clone size>.img
for example:
db-klone-Linux-x86-64-19000240116.50.img
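In both cases the image file name on the KVM host matches the volume group name seen inside the guest (VGExaDbDisk.<image name>), so the LV path identified in STEP 13 points directly to the file to detach. A quick way to locate it on the KVM host (an illustrative check, run from Dom0):
# ls -l /EXAVMIMAGES/GuestImages/*/db12.1.0.2*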
Stop virtual machine (Optional but Recommended)
# virsh shutdown <vm_name>
or
# virsh destroy <vm_name> (if the VM does not shut down gracefully)
or:
# vm_maker --stop-domain <vm_name>
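Before detaching the image you can confirm the domain state with a standard libvirt command (the VM should appear as shut off if you stopped it):
# virsh list --all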
Remove the Disk using “vm_maker”:
# vm_maker --detach --disk-image <image_name> --domain <vm_name> [ --delete ]
or:
# vm_maker modify --vm <vm_name> --remove-disk <disk_path_or_device>
Example:
# vm_maker --detach --disk-image /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/db12.1.0.2.220719_4.img --domain ex2-itouglab01-dbadm.farm.eurobet-italia.local --delete
[INFO] Disk image /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/db12.1.0.2.220719_4.img detached from domain ex2-itouglab01-dbadm.farm.eurobet-italia.local
Verify the Disk Removal:
# virsh domblklist ex2-itouglab01-dbadm.farm.eurobet-italia.local
Target Source
-----------------------------------------------------------------------------------------------------------
sda /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/System.img
sdb /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/grid19.18.0.0.230117.img
sdc /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/db19.18.0.0.230117_3.img
sdd /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/db-klone-Linux-x86-64-19000240116.50.img
sde /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/u01.img
sdi /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/u01_1.img
Also check the physical path:
# cd /EXAVMIMAGES/GuestImages/ex2-itouglab01-dbadm.farm.eurobet-italia.local/
# ll *.img
-rw-r--r-- 1 root root 55834574848 Jan 15 15:38 db-klone-Linux-x86-64-19000240116.50.img
-rw-r--r-- 1 root root 34359738368 Jan 15 09:13 diag.img
-rw-r--r-- 1 root root 55834574848 Jan 15 15:38 grid19.18.0.0.230117.img
-rw-r----- 1 root root 108770046976 Jan 15 15:38 System.img
-rw-r--r-- 1 root root 141733920768 Jan 15 15:38 u01_1.img
-rw-r--r-- 1 root root 23622320128 Jan 15 15:38 u01.img
If the VM was previously stopped, start it again after removing the disk.
# virsh start <vm_name>
or:
# vm_maker --start-domain <vm_name>
Execute the same steps on all the physical nodes.
OEDACLI error log
In case of error, first check the log file of the latest operation under:
/EXAVMIMAGES/onecommand/linux-x64/log
less Step1_Cli_RemoveDbHome_240905_155515.out
You could find errors like the following:
2024-09-05 15:55:18,028 [FINE][ OCMDThread][ MinaSessionPool:743] Connected to ex2-itouglab01-dbadm.farm.Itoug-italia.local but failed the authentication with exception message: No more authentication methods available
In this case, make sure that SSH key authentication for root is correctly configured on all nodes.
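A quick manual test of key-based root access from the node where oedacli runs is usually enough to confirm this (host name as used in this example):
# ssh root@ex2-itouglab01-dbadm.farm.Itoug-italia.local hostname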
BIBLIOGRAPHY
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/detach-command.html