We have just undertaken a project to upgrade the Checkpoint Management server from R71.40 to R77.20. It went very smoothly, and was probably a lot easier than I first expected.
The first thing to note is that this upgrade cannot be done directly. In accordance with the supported upgrade path, you must first upgrade to R75.40.
Luckily for us, we had shiny new hardware to put the new management gateway on to. This meant we could prepare the new gateway in advance of the change implementation, and gave us an easy option for rolling back should we have needed it.
Fortunately, rolling through the different versions is pretty simple and can be done in VirtualBox, VMware, or whatever virtualisation platform you fancy. The advantage of this is you can keep the IPs the same but on a fully isolated network. Of course, if they are all connected to the same virtual network, you probably only want to bring one manager up at a time, as they'll all have the same IP.
We started by building an R71.40, an R75.40 and an R77.20 Checkpoint Management Server as virtual machines. We found we had to thick provision the hard drives here, otherwise the partition sizes would come out wrong and the process wouldn't work. Trial and error found 30GB to be a reasonable size...much smaller and the process could fail partway through.
You could probably get away without building a virtual R71.40, and just perform the export on the live box instead. We didn't want to do anything at all to the live manager though, so we opted to build a virtual replica. It turns out this will also come in handy when we upgrade the gateways in the coming weeks, because we can virtualise the entire estate. You could probably get away without the R77.20 box too...but again it was good for practice run-throughs and will be handy when we do the gateways.
We made the new virtual R71.40 a full replica of our existing manager by running a standard restore of one of our recent nightly backups. For the actual implementation we took one from as close before the change as we could get away with - this meant hand-amending as few rules as possible afterwards. It's pretty simple - make sure cpconfig has already been run to install the same products as the existing server, then drop the backup file into /var/CPbackup/backups and type "restore".
On the R75.40 VM, all that needs to be done is the first time wizard via the web GUI - again installing the same products as required.
Now, starting on the R75.40 box, log on and cd to $FWDIR/bin. You need to get the upgrade_tools folder off the box via FTP or similar - I found it convenient to archive it first:
tar -cf upgrade_tools_75.40.tar upgrade_tools/
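Before FTPing the archive anywhere, it's worth listing its contents to confirm the folder went in cleanly. The snippet below builds a stand-in folder so it runs anywhere; on the real box the upgrade_tools folder already exists under $FWDIR/bin:

```shell
# Stand-in folder so the example is self-contained; on the real
# manager the upgrade_tools folder is already in $FWDIR/bin
mkdir -p upgrade_tools && touch upgrade_tools/pre_upgrade_verifier
tar -cf upgrade_tools_75.40.tar upgrade_tools/
# Sanity check: list the archive contents before FTPing it off
tar -tf upgrade_tools_75.40.tar
```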
Once you have the file, you need to log on to the R71.40 server and FTP it across. It needs to go into the correct path, so if you archived it, you could do this:
cd $FWDIR/bin
mv upgrade_tools upgrade_tools_71.40
tar -xf upgrade_tools_75.40.tar
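The swap is just a rename plus an extract, and it's easy to rehearse with throwaway directories before touching the real box. All paths below are stand-ins, not the real $FWDIR:

```shell
# Throwaway stand-ins for the old and new tools folders
mkdir -p lab/bin/upgrade_tools && echo "R71 tools" > lab/bin/upgrade_tools/README
mkdir -p lab/new/upgrade_tools && echo "R75 tools" > lab/new/upgrade_tools/README
tar -C lab/new -cf lab/upgrade_tools_75.40.tar upgrade_tools/
# Keep the old tools aside, then drop the new set in their place
mv lab/bin/upgrade_tools lab/bin/upgrade_tools_71.40
tar -C lab/bin -xf lab/upgrade_tools_75.40.tar
cat lab/bin/upgrade_tools/README    # prints "R75 tools"
```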
Then you need to run the pre-upgrade verifier. As the name suggests, it checks the existing configuration for anything that would trip up the upgrade - it never actually failed for us, and runs really quickly. You might be able to skip it, but it's recommended and only takes a few seconds:
cd $FWDIR/bin/upgrade_tools
./pre_upgrade_verifier -p $FWDIR -c R71 -t R75.40
If you just try and run it without the options, it will explain what the options mean. If it comes back and says everything is ok, then do the actual export:

cd $FWDIR/bin/upgrade_tools
./upgrade_export /tmp/export_from_R71.tgz

I'm not 100% sure, but I think I had problems with this command if I didn't output to the /tmp directory. I might be wrong there, but I just stuck with it once I got it working. This will give you the export file which you can import into R75.40.
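One easy way to mangle a .tgz is an ASCII-mode FTP transfer, so it's worth comparing checksums on both sides before importing. A sketch, assuming md5sum is available on both boxes - the echo line is just a stand-in for the real export file so the example is self-contained:

```shell
# Stand-in for the real export file, so this runs anywhere
echo "demo export" > /tmp/export_from_R71.tgz
# On the source box: record a checksum before the transfer
md5sum /tmp/export_from_R71.tgz > /tmp/export_from_R71.tgz.md5
# ...FTP both files across in binary mode, then on the R75.40 box:
md5sum -c /tmp/export_from_R71.tgz.md5
```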
So, logging on to the R75.40 box, first FTP the export file into /tmp, then run the import (make sure you have finished the first-time wizard and installed the products first!):
cd $FWDIR/bin/upgrade_tools
./upgrade_import /tmp/export_from_R71.tgz
This will ask you if it can run a cpstop to stop all the services, then it will run through the import and ask you if it can run a cpstart to restart all of the services. Just press yes and watch. The only time I ever hit an error here was due to not allocating enough disk space in the VM, or thin provisioning it.
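Given that the only failures we hit were disk-related, it's worth a quick look at free space before saying yes - a rough sanity check rather than any official prerequisite:

```shell
# Check free space across all partitions before the import;
# on a thin-provisioned VM this can look fine until mid-import
df -h
```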
That's it...the R75.40 box now has the existing policy, licenses and SICs.
The procedure for getting this into R77.20 is pretty much the same, except that the upgrade_export and upgrade_import tools have been replaced by a single migrate tool. So, really briefly:
Get the upgrade_tools folder from the 77.20 server, and put it on the 75.40 server. Run the pre-upgrade verifier and perform the export:
./pre_upgrade_verifier -p $FWDIR -c R75.40 -t R77
./migrate export /tmp/export_from_R75.tgz
FTP it off, stick it on the R77 box, and import it:
./migrate import /tmp/export_from_R75.tgz
After this, we sat the original and the new manager side by side and went through Smart Dashboard with a fine-tooth comb to make sure all of our settings had transferred. They had - rules, objects, licenses - even SIC had transferred. We did have to manually configure the OS-level settings in the web interface - SNMP, backups, hostnames, etc. - but that was expected, as we only moved the Checkpoint database, not the operating system configuration.
On the day, the change was a simple one - shut down the switchport facing the old manager, bring up the switchport facing the new one. Test SIC, then change any rules and objects which had been modified since the backup was taken (you can pull these out of the old tracker). Verify the policy. Checkpoint seems to have got better at detecting "rule xxx hides rule xxx for services xxxxx" errors, as we had 2 errors which we didn't have on R71. These were easily fixed though. Then push the policy and drink some coffee while the testers do their testing! There was no downtime at all. It's worth noting that the gateways won't log to tracker until the policy is pushed.
That was that really. Pretty straightforward. The gateways are next - I expect those to be a lot more complex, as they run a fairly intricate BGP configuration, and that seems to have changed quite substantially between SPLAT and GAiA.