anystor::system health alert> storage disk set-led -disk anystor-04:5a.01.5 -action blink
anystor::system health alert> storage disk set-led -disk anystor-03:0b.01.5 -action blink
# SnapMirror cleanup: report all SnapMirror relationships so the ones with a lag time
# over one week and an idle status can be updated.
$mirrors = @()
Get-NcSnapmirror | ForEach-Object {
    $sm = "" | Select-Object Source, Destination, Status, State, LagTime
    $sm.Source      = $_.SourceLocation
    $sm.Destination = $_.DestinationLocation
    $sm.Status      = $_.Status
    $sm.State       = $_.MirrorState
    $sm.LagTime     = $_.LagTime
    $mirrors += $sm
}
$mirrors
Write-Output $mirrors | Export-Csv snap.csv -NoTypeInformation
# Write-Output $mirrors | Format-Table -AutoSize
# Write-Output $mirrors | Where-Object { $_.LagTime -gt 604800 -and $_.Status -eq "idle" } | Format-Table -AutoSize
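To act on the lagging relationships rather than just report them, a rough follow-up sketch like the one below emits the matching clustershell commands (assuming LagTime is reported in seconds and the Status values are lower-case; it only prints command text and executes nothing):
# Sketch: list the snapmirror update commands for relationships that are idle and
# more than one week (604800 seconds) behind. Review the output before running it.
$mirrors |
    Where-Object { $_.LagTime -gt 604800 -and $_.Status -eq "idle" } |
    ForEach-Object { "snapmirror update -destination-path $($_.Destination)" }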
From the CDOT CLI:
Change to the advanced privilege level:
set -priv adv
To download the firmware:
system firmware download -package http://web_server/all_shelf_fw.zip -node nodename
To manually update the disk shelf firmware without rebooting:
system node run -node nodename -command storage download shelf
To manually update the ACPP firmware without rebooting:
system node run -node nodename -command storage download acp
~ from cosonok's blog
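On a multi-node cluster the same three steps are repeated for each node; a quick sketch (node names are placeholders) to print the full command set per node:
# Sketch: emit the firmware download and shelf/ACP update commands for every node in the list.
$nodes = @("nodename-01", "nodename-02")
foreach ($n in $nodes) {
    "system firmware download -package http://web_server/all_shelf_fw.zip -node $n"
    "system node run -node $n -command storage download shelf"
    "system node run -node $n -command storage download acp"
}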
anystor300::*> volume delete -vserver anyvsrv001_esxh -volume T2C0_vol1320_someipms325_Export -disable-offline-check true
Error: command failed: Volume "T2C0_vol1320_someipms325_Export" in Vserver "anyvsrv001_esxh" is the source endpoint of one or more SnapMirror relationships. Before you
delete the volume, you must release the source information of the SnapMirror relationships using "snapmirror release". To display the destinations to be used in the
"snapmirror release" commands, use the "snapmirror list-destinations -source-vserver anyvsrv001_esxh -source-volume T2C0_vol1320_someipms325_Export" command.
anystor300::*> snapmirror list-destinations -source-vserver anyvsrv001_esxh -source-volume T2C0_vol1320_someipms325_Export
                                                         Progress
Source              Destination  Transfer   Last         Relationship
Path          Type  Path         Status     Progress     Updated      Id
-----------   ----  -----------  ---------  -----------  -----------  ---------------
anyvsrv001_esxh:T2C0_vol1320_someipms325_Export
              DP    somevsrv001_esxh:T2C0_lun1026_someipms325_Export_vol
                                 Idle       -            -            2a3a0416-40f0-11e5-9b65-123478563412
anystor300::*> snapmirror release -source-vserver anyvsrv001_esxh -source-volume T2C0_vol1320_someipms325_Export *
[Job 39418] Job succeeded: SnapMirror Release Succeeded
1 entry was acted on.
anystor300::*> volume delete -vserver anyvsrv001_esxh -volume T2C0_vol1320_someipms325_Export -disable-offline-check true
Warning: Are you sure you want to delete volume "T2C0_vol1320_someipms325_Export" in Vserver "anyvsrv001_esxh" ? {y|n}: y
[Job 39419] Job succeeded: Successful
As a temporary workaround to clear the false health alert reports, you can run the following command from the Cluster Shell:
::> system health alert delete -node *
From: http://www.cosonok.com/2013/12/a-difference-in-clustered-ontap-v-7.html
For Reference: To Force Delete the SnapMirror Snapshot
na81::> set diag
na81::*> snapshot delete -vserver vs1 -volume testshare -snapshot snapmirror.15991611-422d-11e3-8578-123478563412_9_2147484694.2013-11-30_142009 -ignore-owners true
Warning: This Snapshot copy is currently used as a reference Snapshot copy by one or more SnapMirror relationships. Deleting the Snapshot copy can cause future SnapMirror operations to fail. Are you sure you want to delete 'snapmirror.15991611-422d-11e3-8578-123478563412_9_2147484694.2013-11-30_142009' for volume 'testshare' in Vserver 'vs1'? YES
https://kb.netapp.com/support/index?page=content&id=1011753
If needed, an example of releasing a SnapMirror or vault relationship: snapmirror release -source-vserver anysrv001_esxh -source-volume anysrv001_srv001_Export_vol *
When modifying a Vserver's aggregate list, you must add the new aggregate and re-confirm the existing ones in the same command, because -aggr-list replaces the whole list rather than appending to it.
Example:
stor001::> vserver modify -vserver vsrv001_file -aggr-list stor001_01_aggr1,stor001_02_aggr2,stor001_03_aggr3
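A quick way to build that argument without dropping the current aggregates is to join the old and new names first; a minimal PowerShell sketch, using the example aggregate names from above:
# Sketch: re-confirm the existing aggregates and append the new one, since
# -aggr-list replaces the whole list. Names are the example values above.
$existing = @("stor001_01_aggr1", "stor001_02_aggr2")
$newAggr  = "stor001_03_aggr3"
$aggrList = ($existing + $newAggr) -join ","
"vserver modify -vserver vsrv001_file -aggr-list $aggrList"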
Example (disk show -n lists the unassigned disks):
stor312::> system node run -node stor312-01 -command disk show -n
stor312::> storage disk
stor312::storage disk> storage disk assign -disk stor312-01:1a.11.11 -owner stor312-01
stor312::storage disk> storage disk show -disk stor312-01:1a.11.11
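If a whole shelf of disks needs assigning, a small sketch like this can generate the assign commands (the stack/shelf/bay range and owner node are illustrative; only the single disk above comes from the original example):
# Sketch: print storage disk assign commands for bays 0-23 on shelf 11.
# Adjust the disk name pattern and owner to match your environment.
0..23 | ForEach-Object {
    "storage disk assign -disk stor312-01:1a.11.$_ -owner stor312-01"
}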