March 18, 2016 at 6:57 am #13204
While trying to load multiple use cases on the same cluster, we set up separate ZooKeeper and HBase instances, each with separate ports. But in the cluster config and metadata config, the node IDs were kept the same, since the namespaces were different.
I set up the earlier instance with node IDs 1, 2, and 3. When starting the service for the second use case, even though we were using a different namespace, it still gave an error saying that node 1 exists. Finally I had to change the node IDs.
Could you please let me know why we had to change them, as there was no common link between the two installations other than the node IDs and server names?
March 18, 2016 at 6:58 am #13205
Can you send me your configuration files so I can see what you’re trying to do?
March 18, 2016 at 6:59 am #13206
You need to have the following things unique for each cluster if you are going to share the same nodes, the same HBase, and the same ZooKeeper:
· HBase Schema name (in ClusterConfig.json & .properties file)
· Zookeeper base path (in ClusterConfig.json & .properties file)
· Port number
You can have the same node IDs in different clusters when you meet the above conditions, even if you are using the same nodes.
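As a sketch, two clusters sharing the same nodes, HBase, and ZooKeeper might be configured like this (the property names below are illustrative only, not the product’s actual schema):

```properties
# Hypothetical property names -- illustrative only.

# Cluster A (.properties for use case A)
hbase.schema.name=usecase_a
# Unique ZooKeeper base path per cluster
zookeeper.base.path=/clusters/usecase_a
# Unique port per cluster
server.port=9091
node.id=1

# Cluster B (.properties for use case B), on the same nodes
hbase.schema.name=usecase_b
zookeeper.base.path=/clusters/usecase_b
server.port=9092
# Same node ID is fine once the three values above differ
node.id=1
```

The same three values (HBase schema name, ZooKeeper base path, port) would also need to differ in each cluster’s ClusterConfig.json.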
March 18, 2016 at 6:59 am #13207
All of these were different.
The node ID was the only value kept the same; the rest were changed. There was still an error. Once I changed the node ID, it worked.
March 18, 2016 at 7:00 am #13209
If the ZooKeeper settings in your configurations are completely different, it suggests that we are only storing one ZooKeeper entry, which means we may not have it properly nested within the cluster object, and all clusters are drawing from that one ZooKeeper.
It also suggests we aren’t keeping node IDs separated by cluster ID. I’ll do some testing to confirm this.