When you reach the exit step to restore the database for homogeneous system copy, exit the installer. Restore the database from a backup of the primary host.
All subsequent installation phases have already been executed on the primary database server. It is mandatory that you test failover and takeover for proper functionality with these parameter settings. Because individual configurations vary, the parameters might require adjustment. Note the following behavior that is specific to IBM Db2 with an HADR configuration on normal startup: the secondary or standby database instance must be up and running before you can start the primary database instance.
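The startup order above can be sketched with standard Db2 HADR commands. This is a minimal sketch; the instance user db2id2 and database name ID2 are example names and must be replaced with your own:

```shell
# On the standby host: start the standby database first
# (instance user db2id2 and database ID2 are example names).
su - db2id2 -c "db2 start hadr on db ID2 as standby"

# Only then, on the primary host: start the primary database.
su - db2id2 -c "db2 start hadr on db ID2 as primary"
```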
When you use Pacemaker for automatic failover in the event of a node failure, you need to configure your Db2 instances and Pacemaker accordingly. This section describes this type of configuration. Recent testing revealed situations where netcat stops responding to requests because of its backlog and its limitation of handling only one connection. The netcat resource then stops listening to the Azure Load Balancer probes, and the floating IP becomes unavailable.
For existing Pacemaker clusters, we recommended in the past that you replace netcat with socat. Currently, we recommend using the azure-lb resource agent, which is part of the resource-agents package, subject to its minimum package version requirements.
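An azure-lb primitive can be defined as sketched below. The resource name and probe port 62500 are examples only; use the probe port you configured on the Azure load balancer:

```shell
# SUSE (crmsh): define an azure-lb resource listening on the probe port.
# Resource name rsc_hadr_lb and port 62500 are example values.
crm configure primitive rsc_hadr_lb azure-lb port=62500 \
  op monitor timeout=20s interval=10

# Red Hat (pcs) equivalent:
# pcs resource create rsc_hadr_lb azure-lb port=62500
```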
Note that the change requires brief downtime. For existing Pacemaker clusters whose configuration was already changed to use socat as described in Azure Load-Balancer Detection Hardening, there is no requirement to switch immediately to the azure-lb resource agent. It's not important which node the resources are running on. You must manage the Pacemaker-clustered Db2 instance by using Pacemaker tools.
If you use db2 commands such as db2stop, Pacemaker detects the action as a resource failure. If you're performing maintenance, you can put the nodes or resources in maintenance mode. Pacemaker then suspends monitoring of the resources, and you can use normal db2 administration commands. For details, see Azure Load Balancer limitations. Select the availability set or the virtual machines hosting the IBM Db2 database that you created in the preceding step.
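Maintenance mode, as described above, can be toggled cluster-wide before and after manual db2 administration. A minimal sketch (crmsh syntax for SUSE; the commented pcs variant for Red Hat):

```shell
# Put the whole cluster into maintenance mode before manual db2 work:
crm configure property maintenance-mode=true
# Red Hat (pcs) equivalent:
# pcs property set maintenance-mode=true

# ... perform db2 administration (db2stop, db2start, ...) ...

# Resume Pacemaker monitoring afterwards:
crm configure property maintenance-mode=false
# pcs property set maintenance-mode=false
```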
Select TCP as the protocol and the probe port. Keep the Interval value set to 5, and keep the Unhealthy threshold value set to 2. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, Db2-frontend). The following changes are required: When you install primary and dialog application servers against a Db2 HADR configuration, use the virtual host name that you picked for the configuration.
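The health probe described above can also be created with the Azure CLI instead of the portal. This is a sketch; the resource group, load balancer name, probe name, and port 62500 are example values:

```shell
# Create a TCP health probe on the internal load balancer.
# MyResourceGroup, Db2-ilb, Db2-hp, and port 62500 are example names/values.
az network lb probe create \
  --resource-group MyResourceGroup \
  --lb-name Db2-ilb \
  --name Db2-hp \
  --protocol Tcp \
  --port 62500 \
  --interval 5 \
  --threshold 2
```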
If you performed the installation before you created the Db2 HADR configuration, make the changes as described in the preceding section, and as follows for SAP Java stacks. To configure Db2 log archiving for the HADR setup, we recommend that you configure both the primary and the standby database to have automatic log retrieval capability from all log archive locations. Both the primary and the standby database must be able to retrieve log archive files from all the log archive locations to which either one of the database instances might archive log files.
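The log archive locations are set with the database configuration parameters LOGARCHMETH1 and LOGARCHMETH2, run identically on both databases. A sketch, with example instance user db2id2, database ID2, and paths:

```shell
# Run on both the primary and the standby database.
# Instance user db2id2, database ID2, and paths are example values.
su - db2id2 -c "db2 update db cfg for ID2 using LOGARCHMETH1 DISK:/db2/ID2/log_archive/"

# Optional second archive location:
su - db2id2 -c "db2 update db cfg for ID2 using LOGARCHMETH2 DISK:/db2/ID2/log_archive2/"
```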
The log archiving is performed only by the primary database. If you change the HADR roles of the database servers or if a failure occurs, the new primary database is responsible for log archiving. If you've set up multiple log archive locations, your logs might be archived twice. In the event of a local or remote catch-up, you might also have to manually copy the archived logs from the old primary server to the active log location of the new primary server.
We recommend configuring a common NFS share where logs are written from both nodes. The NFS share has to be highly available. You can use existing highly available NFS shares for transports or a profile directory.
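A shared archive location can be provided by mounting the same NFS export on both nodes. This is a sketch; the NFS server name and paths are example values:

```shell
# /etc/fstab entry on both cluster nodes (server and paths are examples):
# nfs-server:/export/db2archive  /db2/ID2/log_archive  nfs4  defaults  0 0

# Mount the shared archive location:
mount /db2/ID2/log_archive
```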
Every test assumes that you are logged in as user root and that the IBM Db2 primary is running on the azibmdb01 virtual machine.
Resource migration with "crm resource migrate" creates location constraints, which should be deleted afterwards. If the location constraints are not deleted, the resource cannot fail back, or you can experience unwanted takeovers. Cluster node azibmdb01 is then rebooted. When azibmdb01 is back online, the Db2 instance moves into the role of a secondary database instance.
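The migration and the constraint cleanup described above can be performed with crmsh, for example as follows. The master/slave resource name msl_Db2_db2id2_ID2 and target node azibmdb02 are example names:

```shell
# Migrate the Db2 master resource to the other node
# (resource name msl_Db2_db2id2_ID2 and node azibmdb02 are examples).
crm resource migrate msl_Db2_db2id2_ID2 azibmdb02

# After the takeover, delete the location constraint the migration created
# (on older crmsh versions use "crm resource unmigrate" instead).
crm resource clear msl_Db2_db2id2_ID2
```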
If the Pacemaker service doesn't start automatically on the rebooted former primary node, be sure to start it manually. Pacemaker will restart the Db2 primary database instance on the same node, or it will fail over to the node that's running the secondary database instance, and an error is reported. Pacemaker will then promote the secondary instance to the primary role.
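On systemd-based distributions, starting the Pacemaker service manually on the rebooted node is typically done with:

```shell
# Start the Pacemaker cluster service on the rebooted node.
sudo systemctl start pacemaker
```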
The old primary instance moves into the secondary role after the VM and all services are fully restored following the VM reboot. In such a case, Pacemaker detects that the node that's running the primary database instance isn't responding. The next step is to check for a split-brain situation. After the surviving node has determined that the node that last ran the primary database instance is down, a failover of resources is executed.
After the failed node is back online, it starts the Db2 instance into the secondary role. The port number must be the same for both database instances.
Every test assumes that the IBM Db2 primary is running on the az-idb01 virtual machine. A user with sudo privileges must be used (using root is not recommended).
Resource migration with "pcs resource move" creates location constraints. In this case, the location constraints prevent the IBM Db2 instance from running on az-idb. If the location constraints are not deleted, the resource cannot fail back. The Db2 instance fails, and Pacemaker moves the master role to the other node and reports the resulting status. Pacemaker will restart the Db2 primary database instance on the same node, or it will fail over to the node that's running the secondary database instance, and an error is reported.
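The move and the subsequent constraint cleanup described above can be done with pcs, for example as follows. The resource name Db2_HADR_ID2-master and node az-idb02 are example names:

```shell
# Move the Db2 master resource to the other node
# (resource name Db2_HADR_ID2-master and node az-idb02 are examples).
pcs resource move Db2_HADR_ID2-master az-idb02

# Afterwards, remove the location constraint created by the move
# so that the resource can fail back later.
pcs resource clear Db2_HADR_ID2-master
```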
In such a case, Pacemaker will detect that the node that's running the primary database instance isn't responding. The next step is to check for a Split brain situation. After the surviving node has determined that the node that last ran the primary database instance is down, a failover of resources is executed.
In the event of a kernel panic, the failed node is restarted by the fencing agent. After the failed node is back online, you must start the Pacemaker cluster manually.
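On a Red Hat cluster, starting the cluster manually on the recovered node is typically done with:

```shell
# Start the Pacemaker cluster services on the local (recovered) node.
sudo pcs cluster start
```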