Kamanja 1.2 Installation error

This topic contains 4 replies, has 5 voices, and was last updated by  Archived_User79 1 year, 8 months ago.

  • Author
    Posts
  • #13452 Reply

    Archived_User28
    Participant

    Team

    After the installation, while uploading metadata we are getting the error below. There is an error during the initial upload, but the upload result still reports success:

    ERROR [main] - Stacktrace:org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 7 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family serializedInfo does not exist in region fatafatpronto:config_objects,,1450352299280.eb03defc3df43a551a9bb6faf1ef079c. in table 'NameSpace:config_objects', {NAME => 'key', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'value', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:659)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:615)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:1901)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31451)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
    : 7 times,
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:227)
    at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:207)
    at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1658)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1470)
    at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1076)
    at com.ligadata.keyvaluestore.HBaseAdapter$$anonfun$put$1.apply(HBaseAdapter.scala:357)
    at com.ligadata.keyvaluestore.HBaseAdapter$$anonfun$put$1.apply(HBaseAdapter.scala:340)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at com.ligadata.keyvaluestore.HBaseAdapter.put(HBaseAdapter.scala:340)
    at com.ligadata.MetadataAPI.MetadataAPIImpl$.SaveObjectList(MetadataAPIImpl.scala:648)
    at com.ligadata.MetadataAPI.MetadataAPIImpl$.UploadConfig(MetadataAPIImpl.scala:5130)
    at com.ligadata.MetadataAPI.Utility.ConfigService$$anonfun$uploadClusterConfig$1.apply(ConfigService.scala:55)
    at com.ligadata.MetadataAPI.Utility.ConfigService$$anonfun$uploadClusterConfig$1.apply(ConfigService.scala:54)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at com.ligadata.MetadataAPI.Utility.ConfigService$.uploadClusterConfig(ConfigService.scala:54)
    at scala.com.ligadata.MetadataAPI.StartMetadataAPI$.route(StartMetadataAPI.scala:289)
    at scala.com.ligadata.MetadataAPI.StartMetadataAPI$.main(StartMetadataAPI.scala:102)
    at scala.com.ligadata.MetadataAPI.StartMetadataAPI.main(StartMetadataAPI.scala)

    Result: {
    "APIResults" : {
    "Status Code" : 0,
    "Function Name" : "UploadConfig",
    <bank specific value>
    "Result Description" : "Uploaded Config successfully"
    }
    }

    When we start the cluster, it says no node configs are available. When I check by dumping the node details, it gives an error saying no configs are available:
    gbrdsr000002264:/apps/kamanja/scripts$ kamanja dump all cfg objects
    Using default configuration /apps/kamanja/Install/config/MetadataAPIConfig.properties
    WARN [main] - DATABASE_SCHEMA remains unset
    WARN [main] - DATABASE_LOCATION remains unset
    WARN [main] - DATABASE_HOST remains unset
    WARN [main] - ADAPTER_SPECIFIC_CONFIG remains unset
    WARN [main] - SSL_PASSWD remains unset
    WARN [main] - AUDIT_PARMS remains unset
    WARN [main] - DATABASE remains unset
    log4j:WARN No appenders could be found for logger (org.apache.curator.framework.imps.CuratorFrameworkImpl).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    Result: {
    "APIResults" : {
    "Status Code" : -1,
    "Function Name" : "GetAllCfgObjects",
    "Result Data" : null,
    "Result Description" : "Failed to fetch all configs. No configs available."
    }
    }

  • #13453 Reply

    Archived_User7
    Participant

    Please send me your cluster config and metadataAPI config and I’ll see if I can reproduce.

  • #13454 Reply

    Archived_User19
    Participant

    Hi Arun,

    Please make sure that you clean up the HBase tables from the previous installation; one way to do this is sketched below.
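
    For reference, the stale tables can be dropped through the HBase Admin API so that the new installation recreates them with the expected schema. This is only a rough sketch, assuming the HBase 1.x client libraries and the table name shown in the error above; the namespace, the ZooKeeper quorum, and the list of leftover tables are assumptions you would need to adjust for your cluster. Note that deleting a table discards whatever metadata it holds, so this only makes sense for a clean reinstall rather than an in-place upgrade.

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.ConnectionFactory

    object DropStaleKamanjaTables {
      def main(args: Array[String]): Unit = {
        val conf = HBaseConfiguration.create()
        // Assumption: adjust the ZooKeeper quorum to your environment.
        conf.set("hbase.zookeeper.quorum", "localhost")

        val conn = ConnectionFactory.createConnection(conf)
        val admin = conn.getAdmin
        try {
          // Assumption: only config_objects appears in the error; add any other
          // Kamanja tables left over from the previous installation.
          val stale = Seq(TableName.valueOf("NameSpace:config_objects"))
          stale.foreach { t =>
            if (admin.tableExists(t)) {
              if (admin.isTableEnabled(t)) admin.disableTable(t)
              admin.deleteTable(t) // the adapter can then recreate it with the new schema
            }
          }
        } finally {
          admin.close()
          conn.close()
        }
      }
    }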

  • #13455 Reply

    Archived_User36
    Participant

    Arun,

    The Kamanja team is looking into this error and trying to reproduce it here. In this regard, we look forward to the WebEx session to get more details about your config and to discuss changes in the metadata schema, specifically the serializedInfo column family.

    Thanks,
    – Mahesh.

  • #13456 Reply

    Archived_User79
    Participant

    It is likely we are accessing the old table. HBaseAdapter doesn't create a new table if the table already exists, so the old schema (without the serializedInfo column family) is kept. I think it is more of an upgrade issue; a possible workaround is sketched below.
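
    If dropping the table is not acceptable (for example, to keep metadata across an upgrade), a rough alternative sketch is to add the missing column family to the existing table through the HBase Admin API. This is only an assumption based on the error above, which names the serializedInfo family; whether Kamanja 1.2 expects further schema changes beyond that family is not covered here, and the table name and ZooKeeper quorum are placeholders to adjust.

    import org.apache.hadoop.hbase.{HBaseConfiguration, HColumnDescriptor, TableName}
    import org.apache.hadoop.hbase.client.ConnectionFactory
    import org.apache.hadoop.hbase.util.Bytes

    object AddSerializedInfoFamily {
      def main(args: Array[String]): Unit = {
        val conf = HBaseConfiguration.create()
        conf.set("hbase.zookeeper.quorum", "localhost") // assumption: adjust to your quorum

        val conn = ConnectionFactory.createConnection(conf)
        val admin = conn.getAdmin
        try {
          // Table name taken from the error message; adjust the namespace if needed.
          val table = TableName.valueOf("NameSpace:config_objects")
          val desc = admin.getTableDescriptor(table)
          // Only add the family reported missing; skip if it is already present.
          if (!desc.hasFamily(Bytes.toBytes("serializedInfo"))) {
            admin.disableTable(table)
            admin.addColumn(table, new HColumnDescriptor("serializedInfo"))
            admin.enableTable(table)
          }
        } finally {
          admin.close()
          conn.close()
        }
      }
    }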
