Before you can remove a node from a Dynatrace Managed cluster, you must first disable the node.
To disable a cluster node:
- Log in as an administrator. Use your Dynatrace Managed URL or a cluster node that you want to keep (you can't remove the node that you're currently logged in to).
- Go to the Dynatrace Managed Cluster nodes page.
- Click Disable to stop monitoring data processing on the node. Enter your Dynatrace Managed password to confirm that you want to disable the node. If you only want to temporarily exclude the node from the cluster, stop here and re-enable the node later; Dynatrace Server software will keep running on the machine, but it won't process monitoring data.
To remove a disabled cluster node:
Note: Before proceeding with node removal, it's recommended that you wait at least the duration of the transaction storage data retention period (up to 35 days). While all metrics are replicated within the cluster, the raw transaction data isn't stored redundantly, so removing a node before its transaction storage data retention period expires may impact code-level and user-session data analysis.
Note: Remove no more than one node at a time. To avoid data loss, allow 24 hours before removing any subsequent nodes; it takes hours for the long-term metrics replicas to be automatically redistributed across the remaining nodes.
- Click Remove to completely remove the node and change the cluster configuration.
- Enter your Dynatrace Managed password to confirm that you want to remove the node.
The node will then stop and be completely uninstalled from your server instance.
Removing a "dead" node
When a cluster node stops operating due to a hardware failure or other condition, you can't remove the node via the Cluster nodes page. Though the node machine may not be available, it's still registered and displayed as a member of the cluster. Dead nodes are represented with inactive tiles (see example below).
To manually remove a node from a cluster:
- Log in to the machine that hosts a healthy cluster node.
- As root, locate the ID of the broken node. cassandra-nodetool.sh status lists all nodes registered within a cluster and shows their IDs. Dead nodes are marked as DN in the first column:

```
[user@gdnvua ~]# /opt/dynatrace-managed/utils/cassandra-nodetool.sh status
Note: Ownership information does not include topology; for complete information, specify a keyspace
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns   Host ID                               Rack
UN  172.18.1.98  304.79 MB  256     48.5%  7bc033cd-20ad-46c7-ad2d-9a0a4c32f425  rack1
DN  172.18.1.99  295.89 MB  256     51.5%  912d124f-7611-46af-8d9b-98ca30c00501  rack1
```
- Note the Host ID value of the broken cluster node (you'll need it in the next step).
- Run cassandra-nodetool.sh to remove the node:
/opt/dynatrace-managed/utils/cassandra-nodetool.sh removenode <Host ID>
(<Host ID> is the value displayed by the status command.)
For example, to remove one of the hosts shown in the previous example, execute:
/opt/dynatrace-managed/utils/cassandra-nodetool.sh removenode 912d124f-7611-46af-8d9b-98ca30c00501
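The lookup-and-remove steps above can be combined into a small shell sketch. The awk filter, column position, and the DEAD_ID variable are illustrative assumptions based on the status output shown above; they are not part of the Dynatrace tooling.

```shell
#!/bin/sh
# Sketch: find the Host ID of each dead (DN) node reported by
# cassandra-nodetool.sh status and pass it to removenode.
# Assumes the default status layout shown above, where the Host ID
# is the 7th whitespace-separated field of a DN row ("Load" spans
# two fields, e.g. "295.89 MB").
NODETOOL=/opt/dynatrace-managed/utils/cassandra-nodetool.sh

"$NODETOOL" status | awk '$1 == "DN" { print $7 }' | while read -r DEAD_ID; do
    echo "Removing dead node $DEAD_ID"
    "$NODETOOL" removenode "$DEAD_ID"
done
```

Remember that this removes every node currently marked DN; per the note above, remove only one node at a time and wait before removing subsequent nodes.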
- (optional) You can install a new node on another machine.
Note: Dynatrace Server shows dead and removed nodes for 7 days. Even after a dead node is no longer displayed on the Cluster nodes page, it's still registered as a cluster node. You need to remove the node manually.