Log Insight 4.8 – Migrate removed worker node logs to new master node


Setup

VMware-vRealize-Log-Insight-8.0.0.0-14870409_OVF10.ova
3 node cluster with the VIP setup.

You removed a node from the cluster and are having trouble adding it back. Open an SR with VMware support before removing a node next time!
You want to create a new read-only archive node and migrate the old data over to it. This read-only node will NOT receive any new logs.

Note – I have seen cases where 2 of the 3 worker nodes would not work in the cluster and we ended up standing up a new 3 node cluster and using the commands below to copy the data over.

 

Commands

Deploy a new standalone LI appliance (VMware-vRealize-Log-Insight-8.0.0.0-14870409_OVF10.ova) on the same VLAN and subnet and run through the setup wizard. Make sure there is enough disk space for the data you are about to copy.
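Before deploying, it helps to size the copy: check how big the store is on the old node and how much room the new node's data volume has. A quick sketch, assuming the default /storage/core data mount used by the appliance (the fallback messages fire on any host without those paths):

```shell
# On the old node: size of the store that will be copied.
du -sh /storage/core/loginsight/cidata/store 2>/dev/null \
  || echo "store not found on this host"

# On the new node: free space on the data volume
# (falls back to listing all mounts if the path is absent).
df -h /storage/core 2>/dev/null || df -h
```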

SSH into the new node 192.168.0.245
service loginsight stop

SSH into the old node 192.168.0.243
service loginsight stop

Run the SCP command below from the console of the old node 192.168.0.243, or over SSH with the process sent to the background in case the PuTTY session dies. I have seen this take 6 hours for 450 GB even with both VMs on the same ESXi host!
Replace 192.168.0.245 with the address of your new node.

scp -r /storage/core/loginsight/cidata/store 192.168.0.245:/storage/core/loginsight/cidata
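If you run the copy over SSH rather than at the console, one way to keep it alive through a dropped session is nohup with the output sent to a log file. A sketch using the same paths and IPs as above (the log file location is an arbitrary choice):

```shell
# Kick off the copy with nohup so it survives a dropped SSH/PuTTY session;
# progress and errors go to a log file.
nohup scp -r /storage/core/loginsight/cidata/store \
  192.168.0.245:/storage/core/loginsight/cidata \
  > /tmp/scp-migrate.log 2>&1 &
echo "scp running as PID $!"
# Follow along with: tail -f /tmp/scp-migrate.log
```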

 

SSH into the new node .245 and run the command below to make the buckets from the copied data available.
for bucket in $(ls /storage/core/loginsight/cidata/store | grep -vE 'generation|buckets|strata_write.lock'); do echo y | /usr/lib/loginsight/application/sbin/bucket-index add $bucket --statuses archived; done
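One gotcha in the filter feeding the loop: the alternation generation|buckets|strata_write.lock is extended-regex syntax, so plain grep -v would treat the | characters literally and exclude nothing — grep -vE is what drops the bookkeeping entries so only bucket IDs reach bucket-index. A quick sketch with a made-up bucket ID shows what survives the filter:

```shell
# Simulate a store directory listing: one bucket ID (made-up example)
# plus the bookkeeping entries that must not be fed to bucket-index.
printf '%s\n' \
  '0f2a9c1e-1111-2222-3333-444455556666' \
  'generation' \
  'buckets' \
  'strata_write.lock' |
  grep -vE 'generation|buckets|strata_write.lock'
# prints only the bucket ID line
```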

Repeat the SCP copy and the for-loop above if you have a second old LI node that was removed from a cluster.

 

Start the LI service on .245
service loginsight start

Log in to the new .245 node. Querying the last 48 hours shows the old logs.