Multiple nodes on the same cluster

This topic contains 4 replies, has 3 voices, and was last updated by Archived_User13 2 years, 2 months ago.

  • Author
  • #13204 Reply


    While trying to load multiple use cases on the same cluster, we made separate ZooKeeper and HBase instances, each with separate ports. But in the cluster config and metadata config, the node IDs were kept the same, as the namespaces were different.

    I created the earlier instance with node IDs 1, 2, and 3. While starting the service for the second use case, even though we were using a different namespace, it still gave an error saying that node1 exists. Finally I had to change the node IDs.

    Could you please let me know why we had to change it, as there was no common link between the two installations other than the node IDs and server names?

  • #13205 Reply


    Can you send me your configuration files so I can see what you’re trying to do?

  • #13206 Reply


    Hi Arun,

    You need to have the following things unique for each cluster if you are going to share the same nodes, the same HBase, and the same ZooKeeper:

    · HBase Schema name (in ClusterConfig.json & .properties file)

    · Zookeeper base path (in ClusterConfig.json & .properties file)

    · Port number

    You can have the same node IDs in different clusters if you meet the above conditions, even when you are using the same nodes.
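    As a rough sketch, two cluster configs sharing the same machines and node IDs would differ only in the schema name, ZooKeeper base path, and ports. The key names below are illustrative, not Kamanja's exact schema; check your actual ClusterConfig.json for the precise keys:

    ```json
    {
      "Clusters": [
        {
          "ClusterId": "usecase1",
          "SystemCatalog": { "SchemaName": "usecase1_catalog" },
          "ZooKeeperInfo": { "ZooKeeperNodeBasePath": "/kamanja/usecase1" },
          "Nodes": [ { "NodeId": "1", "NodePort": 6541 } ]
        },
        {
          "ClusterId": "usecase2",
          "SystemCatalog": { "SchemaName": "usecase2_catalog" },
          "ZooKeeperInfo": { "ZooKeeperNodeBasePath": "/kamanja/usecase2" },
          "Nodes": [ { "NodeId": "1", "NodePort": 6542 } ]
        }
      ]
    }
    ```

    Note that both clusters reuse NodeId "1"; only the schema name, ZooKeeper base path, and port change.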



  • #13207 Reply



    All of these were different.

    The node ID was the only value that stayed the same; the rest were changed. There were still errors. Once I changed the node IDs, it worked.

  • #13209 Reply


    If the ZooKeeper settings you were using in the configurations are completely different, it suggests that we are only storing one ZooKeeper entry, which means we may not have it properly nested within the cluster object, and all clusters are drawing from that one ZooKeeper.

    It also suggests we aren’t keeping node IDs separated by cluster ID. I’ll do some testing to confirm this.
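    The suspected bug above can be sketched abstractly: if node registrations are keyed by node ID alone, the second cluster's node 1 collides, whereas keying by (cluster ID, node ID) keeps them separate. This is a hypothetical illustration, not Kamanja's actual code:

    ```python
    # Hypothetical sketch of node registration keyed two different ways.
    # Not Kamanja's real implementation; it only illustrates why global
    # node IDs collide across clusters while cluster-scoped IDs do not.

    def register(registry, cluster_id, node_id, scope_by_cluster):
        """Register a node; raise if the chosen key already exists."""
        key = (cluster_id, node_id) if scope_by_cluster else node_id
        if key in registry:
            raise ValueError(f"node{node_id} exists")
        registry[key] = {"cluster": cluster_id, "node": node_id}

    # Keyed by node ID only: the second cluster's node 1 collides.
    flat = {}
    register(flat, "cluster1", "1", scope_by_cluster=False)
    try:
        register(flat, "cluster2", "1", scope_by_cluster=False)
        collided = False
    except ValueError:
        collided = True  # this is the "node1 exists" error seen above

    # Keyed by (cluster ID, node ID): both registrations succeed.
    scoped = {}
    register(scoped, "cluster1", "1", scope_by_cluster=True)
    register(scoped, "cluster2", "1", scope_by_cluster=True)
    ```

    With the scoped key, reusing node ID 1 in a second cluster is harmless, which matches the intended behavior described earlier in the thread.
    
    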
