March 18, 2016 at 7:42 am #13284
We are facing this error while uploading a big model jar to be used via the PMML API. It looks like there is a limit in place that restricts jar uploads to roughly 50 MB per file.
Can you confirm whether this is a configuration that can be changed, or whether it will require code changes in the engine? This was working in version 1.0.3 after a fix by Pokuri, but stopped working after the upgrade to the 1.1 release.
We have a planned go-live event tomorrow and would appreciate any quick help. Below is the stack trace.
ERROR [PathChildrenCache-0] - Exception while processing event from zookeeper ZNode /fatafategy/monitor/engine, reason null, message null
ERROR [PathChildrenCache-1] - Exception while processing event from zookeeper ZNode /fatafategy/monitor/metadata, reason null, message null
ERROR [PathChildrenCache-0] - Exception while processing event from zookeeper ZNode /fatafategy/monitor/engine, reason null, message null
ERROR [PathChildrenCache-1] - Exception while processing event from zookeeper ZNode /fatafategy/monitor/metadata, reason null, message null
ERROR [main] - Failed to insert/update object for : TermsToTags-assembly-1.0.jar, Reason:null, Message:KeyValue size too large
ERROR [main] - StackTrace:com.ligadata.Exceptions.InternalErrorException: Failed to Update the Jar of the object(pmml.searchtweet.000000000000000100): Failed to insert/update object for : TermsToTags-assembly-1.0.jar
  at com.ligadata.MetadataAPI.MetadataAPIImpl$.UploadJarsToDB(MetadataAPIImpl.scala:1313)
  at com.ligadata.MetadataAPI.MetadataAPIImpl$$anonfun$AddFunctions$2.apply(MetadataAPIImpl.scala:2355)
  at com.ligadata.MetadataAPI.MetadataAPIImpl$$anonfun$AddFunctions$2.apply(MetadataAPIImpl.scala:2355)
  at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
  at com.ligadata.MetadataAPI.MetadataAPIImpl$.AddFunctions(MetadataAPIImpl.scala:2355)
  at com.ligadata.MetadataAPI.TestMetadataAPI$.AddFunction(TestMetadataAPI.scala:243)
  at com.ligadata.MetadataAPI.TestMetadataAPI$$anonfun$23.apply$mcV$sp(TestMetadataAPI.scala:2488)
  at com.ligadata.MetadataAPI.TestMetadataAPI$.StartTest(TestMetadataAPI.scala:2576)
  at scala.com.ligadata.MetadataAPI.StartMetadataAPI$.main(StartMetadataAPI.scala:73)
  at scala.com.ligadata.MetadataAPI.StartMetadataAPI.main(StartMetadataAPI.scala)
March 18, 2016 at 7:43 am #13286
The max size allowed in an HBase column is controlled from Kamanja code.
src code file: KeyValueHBase.scala
src line: config.setInt("hbase.client.keyvalue.maxsize", 419430400);
Will let the engineering team weigh in on the pros and cons of making the change, testing it, and cutting an RC version.
March 18, 2016 at 7:44 am #13287
They will not all be in for a couple of hours here in Menlo Park, but it will be top priority. I will pester them :-).
March 18, 2016 at 7:44 am #13290
We should make sure it becomes a configuration value rather than a hard-coded one. While making that change, see if there are any other hard-coded constants that could be pushed to configuration.
March 18, 2016 at 7:46 am #13292
OK – I just checked the dev branch, and it looks like this part of the code has already been made configurable (sorry, I was looking at the local version earlier).
So maybe it is just a configuration change.
March 18, 2016 at 7:48 am #13294
In 1.1 (on the master branch), we have the following line (line 158 of KeyValueHbase.scala).
config.setInt("hbase.client.keyvalue.maxsize", getOptionalField("hbase_client_keyvalue_maxsize", parsed_json, adapterSpecificConfig_json, "104857600").toString.trim.toInt);
This allows files of up to 100 MB (104857600 bytes) to be uploaded by default. I have tested this using 75 MB, 100 MB, and 105 MB files (the 105 MB file failed to upload).
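For anyone curious how that line behaves, here is a minimal stand-alone sketch of the lookup-with-default pattern it uses. This is NOT Kamanja's actual getOptionalField (whose real implementation lives in the MetadataAPI sources); it is just a simple map-backed illustration:

```scala
// Minimal sketch of an optional config lookup with a default value.
// Not the real Kamanja helper; purely illustrative.
object MaxSizeDemo extends App {
  def getOptional(key: String, conf: Map[String, String], default: String): String =
    conf.getOrElse(key, default)

  // Override present: the configured 400 MB (419430400 bytes) wins.
  val overridden = Map("hbase_client_keyvalue_maxsize" -> "419430400")
  println(getOptional("hbase_client_keyvalue_maxsize", overridden, "104857600").trim.toInt) // prints 419430400

  // Override absent: the 100 MB default (104857600 bytes) applies.
  println(getOptional("hbase_client_keyvalue_maxsize", Map.empty, "104857600").trim.toInt) // prints 104857600
}
```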
If you need to upload a file larger than 100 MB, you can add the following line to your MetadataAPIConfig.properties file:
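(The property line itself was missing from the original post; the line below is reconstructed from the key name in the KeyValueHbase.scala snippet above and the 400 MB figure mentioned next, since 400 × 1024 × 1024 = 419430400 bytes. Verify it against your own sources before relying on it.)

```
hbase_client_keyvalue_maxsize=419430400
```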
This will set the max size for HBase to 400 MB. I tested this locally, and I am able to upload files as large as 400 MB.
As to why it's failing to upload a 50 MB file, I'm not sure. Version 1.1 contains the code necessary to perform this operation. Are you certain you're on 1.1? I took a look at line 1313 of MetadataAPIImpl.scala, and the exact def UploadJarsToDB isn't there; instead there is UploadJarToDB (note the missing plural). The catch statement for UploadJarsToDB (plural again) is on line 1292.