I have a DS20E with FCA-2354 HBAs. My server fails to boot from a NetApp Clustered ONTAP storage system.
The LUN is detected by the OpenVMS server, but the boot fails with an error message saying the disk path is no longer valid. A re-init of the system also shows a lot of $0$DGA devices, which I believe should not appear. After an init, running "show dev dga" shows the required FC LUNs, but the same command run after a "wwidmgr -show wwid" shows the improper $0$DGA devices.
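One thing worth trying at the SRM console is clearing the stored WWID and port entries and re-registering only the boot LUN, since stale stored entries are a common cause of spurious $0$DGA devices. A sketch only: the UDID value 5 is an assumption taken from the dga5 device in the log below, and some consoles require diagnostic mode before each wwidmgr command:

```
P00>>>set mode diag
P00>>>wwidmgr -clear all
P00>>>init
P00>>>set mode diag
P00>>>wwidmgr -quickset -udid 5
P00>>>init
P00>>>show dev dga
```

After the final init, only the expected paths to the boot LUN should remain visible before attempting the boot.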
The SAN Boot works on a NetApp 7-Mode system but fails with NetApp Clustered ONTAP storage system.
I have seen this issue on all OpenVMS versions.
Can anyone guide me with this?
Below are the console log messages:
P00>>>show dev dga
dga5.1001.0.8.1 $1$DGA5 NETAPP LUN C-Mode 8200
dga5.1002.0.8.1 $1$DGA5 NETAPP LUN C-Mode 8200
dga6.1001.0.8.1 $1$DGA6 NETAPP LUN C-Mode 8200
dga6.1002.0.8.1 $1$DGA6 NETAPP LUN C-Mode 8200
P00>>>boot
(boot dga5.1001.0.8.1 -flags 0,0)
block 0 of dga5.1001.0.8.1 is a valid boot block
reading 1230 blocks from dga5.1001.0.8.1
device dga5.1001.0.8.1 no longer valid
failed to read dga5.1001.0.8.1
bootstrap failure
(boot dga5.1002.0.8.1 -flags 0,0)
block 0 of dga5.1002.0.8.1 is a valid boot block
reading 1230 blocks from dga5.1002.0.8.1
device dga5.1002.0.8.1 no longer valid
failed to read dga5.1002.0.8.1
bootstrap failure
(boot dga6.1001.0.8.1 -flags 0,0)
block 0 of dga6.1001.0.8.1 is a valid boot block
reading 1230 blocks from dga6.1001.0.8.1
device dga6.1001.0.8.1 no longer valid
failed to read dga6.1001.0.8.1
bootstrap failure
(boot dga6.1002.0.8.1 -flags 0,0)
block 0 of dga6.1002.0.8.1 is a valid boot block
reading 1230 blocks from dga6.1002.0.8.1
device dga6.1002.0.8.1 no longer valid
failed to read dga6.1002.0.8.1
bootstrap failure
P00>>>init
Initializing...
1024 Meg of system memory
probing hose 1, PCI
bus 0, slot 7 -- ewa -- DE500-AA Network Controller
bus 0, slot 8 -- pga -- FCA-2354
probing hose 0, PCI
probing PCI-to-ISA bridge, bus 1
probing PCI-to-PCI bridge, hose 0 bus 2
bus 0, slot 5, function 1 -- dqa -- Cypress 82C693 IDE
bus 0, slot 5, function 2 -- dqb -- Cypress 82C693 IDE
bus 0, slot 5, function 3 -- usba -- Cypress 82C693 USB
bus 0, slot 6, function 0 -- pka -- Adaptec AIC-7895
bus 0, slot 6, function 1 -- pkb -- Adaptec AIC-7895
bus 0, slot 7 -- vga -- S3 Trio64/Trio32
bus 0, slot 8 -- pgb -- FCA-2354
bus 2, slot 4 -- eia -- DE602-AA
bus 2, slot 5 -- eib -- DE602-AA
initializing GCT/FRU at 1e6000
Testing the System
Testing the Memory
Testing the Disks (read only)
Testing ei* devices.
Testing ew* devices.
System Temperature is 30 degrees C
I had another client that had a similar issue with newer NetApp hardware. In that case, he was not able to boot from the SAN disk but was able to use the SAN disks as data disks. His solution was to use a local shadow set for the system disk and the SAN disks for data only. The problem, we believe, is on the Alpha side, which does not handle the communication properly during the boot process. Further investigation was not possible at the time due to production requirements. Note that this problem occurred only with the newer NetApp hardware and software levels; I believe the ONTAP version is critical in exposing this problem.
I was setting up a new OpenVMS environment with Clustered ONTAP and hit this issue. As you mentioned, the data disks work fine. I was even able to mount the boot disk at the DCL prompt after booting from a CD, and the files on the SAN boot disk are all readable and intact. Unfortunately, the boot itself fails.
The same disk, after a BACKUP to another local disk or to a NetApp 7-Mode disk, works fine as a boot disk.
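For reference, that copy can be done along these lines (a sketch; the device names are assumptions, and the target disk must be mounted /FOREIGN for an image backup):

```
$ MOUNT/FOREIGN DKA100:          ! target of an /IMAGE backup is mounted foreign
$ BACKUP/IMAGE $1$DGA5: DKA100:  ! image copy of the SAN boot disk to a local disk
$ DISMOUNT DKA100:
```

An image backup preserves the boot block and file structure, which is why the resulting local disk remains bootable.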
As I stated initially, this is an issue with the Alpha and ONTAP not being in sync with each other during the boot process. As far as I know, NetApp does not support this setup with the new ONTAP system and the old Alpha systems. Use either 7-Mode or a local set of disks for the boot system, and add another member on the SAN as a shadow member to keep a copy there as well.
You don't indicate the OpenVMS version. With V7.3-2 you are limited to three shadow members, so I would have two "local" members plus a copy on the SAN, totaling three copies. With V8 you can have more members, which lets you keep two copies on the SAN.
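A minimal DCL sketch of that layout (device names, volume label, and the assumption that the SHADOW_SYS_DISK/SHADOW_SYS_UNIT SYSGEN parameters are already set are all mine, not from the thread): boot from one local member, then add the second local member and the SAN LUN to the system-disk shadow set:

```
$ ! After booting from the first local member of DSA0:
$ MOUNT/SYSTEM DSA0: /SHADOW=(DKA100:) SYSDISK   ! second local copy
$ MOUNT/SYSTEM DSA0: /SHADOW=($1$DGA5:) SYSDISK  ! SAN copy on the C-Mode LUN
```

The SAN member then carries a full copy of the system disk for recovery purposes, while booting itself never depends on the C-Mode path.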