Skyhigh Security

MapDB CLI Commands

Use these commands to detokenize data, encrypt and import encrypted data, and delete old data entries from MapDB.

Delete old data from MapDB

  • As logs are processed by Cloud Connector, new username and IP address values in the logs are hashed and stored as key-value pairs in LevelDB (the mapdbcache folder in the Cloud Connector installation directory).
  • Data is also stored in reverse (ex: Key → 462768abd7d0f3d532822c9842fe84c57ecf6eb75651f5e011cb4096fcf91c09, value → UserName) in LevelDB (the deObfmapdbcache folder).
  • Key-value pairs are also stored in a human-readable format in the MapDB.txt file.
  • Data in the MapDB.txt file is also used to rebuild mapdbcache and deObfmapdbcache in case the LevelDB database is corrupted, using the command line ./shnlpcli mapdb --import MapDB.txt
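The 64-character hex keys above have the shape of a SHA-256 digest. As an illustrative sketch only (the actual hashing algorithm and salt handling are internal to Cloud Connector), a username can be turned into a token of this shape like so:

```shell
# Illustrative sketch: derive a 64-character hex token from a salt and a
# value (username or IP address), assuming salted SHA-256. The real
# algorithm and salt handling are internal to Cloud Connector.
tokenize() {
  # $1: salt value, $2: value to tokenize
  printf '%s%s' "$1" "$2" | sha256sum | cut -d' ' -f1
}
```

Usage: `tokenize "<salt>" bhaskar` prints a 64-character hex token of the same shape as the mapdbcache keys.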

EX: Loglines (BlueCoat):

AZ18GW0001 2020-05-25 14:23:30 163 bhaskar - - OBSERVED - - 200 TCP_NC_MISS GET application/atomicmail dns 25 - "Mozilla/5.0 (Windows; U; Windows NT 5.1; cs-CZ) AppleWebKit/525.28.3 (KHTML, like Gecko) Version/3.2.3 Safari/525.29);" - 300000 20000 - - - -

MapDB.txt entries:

|b1fe3ba8d1776a05f76fcd5f2bb5333a4817225df78ccb24a9758c55e76aa7c0

Use Map data

  • The Dashboard de-tokenizes user names and IP addresses by using data stored in deObfmapdbcache.
  • The Dashboard uses only the last one year's data for processing.
  • As new users and IP addresses are processed from logs, new data is added to LevelDB (both mapdbcache and deObfmapdbcache) and also to MapDB.txt.
  • Data older than one year becomes dormant, needlessly consuming disk space.
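For manual debugging, the reverse lookup can be mimicked with a sketch like the following, assuming MapDB.txt stores pipe-delimited value|token lines as in the example entry above (the Dashboard itself reads deObfmapdbcache):

```shell
# Sketch of a manual reverse (de-tokenize) lookup against MapDB.txt,
# assuming one pipe-delimited "value|token" pair per line. Debugging
# aid only; the Dashboard uses deObfmapdbcache for this.
detokenize() {
  # $1: token (64-char hex hash), $2: path to MapDB.txt
  awk -F'|' -v t="$1" '$2 == t { print $1 }' "$2"
}
```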

Encrypt MapDB.txt file

  • All MapDBEncrypted.txt files can be found in the MapDBEntries folder in the Cloud Connector installation directory.
  • Before an entry is made to MapDBEncrypted.txt, the size of MapDB.txt is checked first. If it is more than 100 MB (the default value; this is a configurable parameter), the existing file is renamed with a timestamp suffix (ex: MapDB202007130355123.txt) and new entries are written to a fresh MapDBEncrypted.txt file.
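The rotation rule above can be sketched as shell logic (an assumed sketch, not the actual Cloud Connector implementation; the 100 MB default is configurable):

```shell
# Sketch (assumed logic, not the actual implementation) of the rotation
# rule: when the file grows past the configured size limit, rename it
# with a timestamp suffix and start a fresh file.
rotate_if_large() {
  # $1: path to MapDBEncrypted.txt, $2: size limit in bytes
  [ -f "$1" ] || return 0
  size=$(wc -c < "$1")
  if [ "$size" -gt "$2" ]; then
    dir=$(dirname "$1")
    mv "$1" "$dir/MapDB$(date +%Y%m%d%H%M%S).txt"
    : > "$1"            # subsequent entries go to a fresh file
  fi
}
```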

Shnlpcli commands

To import data from an encrypted text file, use the following import command line:

  1. Import command
    1. The new mandatory argument "--encrypted true" must be passed to indicate whether the txt file being imported contains encrypted data.
    2. ./shnlpcli mapdb --import MapDBEntries/MapDBEncrypted.txt --encrypted true
      1. MapDBEncrypted.txt is the encrypted text file that contains all the data to be imported to the DB.
    3. ./shnlpcli mapdb --import MapDB.txt --encrypted false
      1. Pass "--encrypted false" to import from old txt files where the data is not encrypted.
    4. If data needs to be imported to the new TTL DB, use the argument "--importtottldb true".
      1. ./shnlpcli mapdb --import MapDBEntries/MapDBEncrypted.txt --encrypted true --importtottldb true
      2. Note: if the argument --importtottldb is passed as false or not passed at all, all data is imported only to the old LevelDB.
      3. Note: --importtottldb works only if the write to TTL DB option is set to true on the dashboard.
    5. MapDBEntries is the folder in the Cloud Connector installation directory where MapDBEncrypted.txt is present.
  2. Export command
    1. ./shnlpcli md --export MapDBEntries/NewMapDB.txt
      1. This command exports all data from both LevelDB and RocksDB to the NewMapDB.txt file in the MapDBEntries folder in the Cloud Connector installation directory.
    2. If data from only the TTL DB needs to be exported, the argument "--exportFromTTLCacheOnly true" can be passed.
      1. ex: ./shnlpcli md --export MapDBEntries/NewMapDBttl.txt --exportFromTTLCacheOnly true
      2. Note: if "--exportFromTTLCacheOnly" is passed as false or not passed at all, all data from both LevelDB and RocksDB is exported to the NewMapDB.txt file in the MapDBEntries folder in the Cloud Connector installation directory.
      3. Note: --exportFromTTLCacheOnly can export data from the TTL DB only if the write to TTL DB option is set to true on the dashboard.
  3. Decrypt command
    1. ./shnlpcli mapdb --decrypt MapDBEntries/MapDBEncrypted.txt --salt <salt value>
    2. This command decrypts the file if the salt passed as an argument matches the salt of the tenant.
    3. The command creates a new file with a .decrypt suffix in the MapDBEntries directory.


  1. On Windows servers, the TTL DB works only if Microsoft Visual C++ 2015 is installed. Therefore, before changing the write to TTL DB option on the dashboard, make sure that Microsoft Visual C++ 2015 is installed on the Windows server where Cloud Connector is installed.
  2. Salt change: Changing the salt value makes the existing MapDBEncrypted.txt file useless, as its contents are encrypted with the old salt value. Moreover, attempting to import data from the old MapDBEncrypted.txt file after changing the salt value leads to data corruption in the DB. Follow the steps below after changing the salt value:
    1. After changing the salt value, take a backup of the existing MapDBEncrypted.txt files.
    2. Export the data from LevelDB / Rocks TTL DB.
    3. Use the newly exported MapDBEncrypted.txt files for importing data in the future.
  3. Switching from TTL DB to LevelDB: Switching back from TTL DB to LevelDB for any reason makes the data stored in the TTL DB up to that point invisible to Cloud Connector, so detokenization using data stored in RocksDB would fail. Therefore, before switching back to LevelDB from TTL DB, perform the following steps:
    1. Export all data from the TTL DB.
    2. Import all the exported TTL DB data into LevelDB, and then switch the write to TTL DB option on the dashboard.
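The backup step of the salt-change procedure can be sketched as follows (the .bak-timestamp naming is an assumption for illustration; the export and import steps themselves still go through ./shnlpcli as shown earlier):

```shell
# Sketch of backing up existing MapDBEncrypted*.txt files before
# re-exporting with a new salt. The ".bak-<timestamp>" suffix is an
# assumed naming convention, not part of the product.
backup_encrypted() {
  # $1: path to the MapDBEntries directory
  for f in "$1"/MapDBEncrypted*.txt; do
    [ -f "$f" ] || continue
    cp "$f" "$f.bak-$(date +%Y%m%d%H%M%S)"
  done
}
```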

Delete old entries from MapDB.txt file

  • Since the TTL DB cache has the functionality to delete old data, thereby retaining only the latest data, all data is exported from the TTL DB to the MapDB.txt file at regular intervals. Hence, the newly exported MapDB.txt file contains the latest data up to the time of export.
  • Flags used in the scheduler:
    1. mapdbtxt.enableMapDBTxtScheduler takes a boolean value True/False as input. The scheduler is enabled only when this value is set to True. By default this flag is set to false.
    2. mapdb.ttldb (the existing flag that enables or disables use of the TTL DB) takes a boolean value True/False as input. When this value is set to True, data is written to the TTL cache. By default this flag is set to false. The scheduler runs only if both mapdb.ttldb and mapdbtxt.enableMapDBTxtScheduler are set to True.
    3. mapdbtxt.exportFromTTLOnly takes a boolean value True/False as input. When this value is set to True, data is exported from the TTL cache only. By default this flag is set to false.
    4. mapdbtxt.deleteoldentries.fixedDelaySeconds: the time interval in seconds at which the scheduler that exports data runs. The default value is 1209600 sec (2 weeks).
    5. mapdbtxt.deleteoldentries.initialDelaySeconds: the initial delay in seconds after Cloud Connector starts before the scheduler is activated. The default value is 604800 sec (1 week).
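Taken together, a configuration that enables the export scheduler with the default timings might look like this (property-file syntax assumed for illustration; the actual configuration location depends on your Cloud Connector deployment):

```properties
mapdbtxt.enableMapDBTxtScheduler=true
mapdb.ttldb=true
mapdbtxt.exportFromTTLOnly=true
mapdbtxt.deleteoldentries.fixedDelaySeconds=1209600
mapdbtxt.deleteoldentries.initialDelaySeconds=604800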

CLI commands for mapdb

Command Usage Options

  • Column index: Column index (starts with 1) that needs to be tokenized or de-tokenized. Applicable only for the token and detoken commands. Default index: 1.
  • --decrypt: Decrypts an encrypted MapDBEncrypted.txt file; can be used for manual debugging of detokenization. Creates a new file in the same location with a .decrypt suffix. The mandatory parameter --salt (salt value) must be passed to perform decryption.
  • detoken: De-tokenizes a particular column from the input file; creates a new file in the same location with a .detoken suffix.
  • --encrypted: Used when importing mappings from MapDB.txt. Pass True if the data is encrypted and False if not encrypted.
  • --export: Exports the existing mappings from the LevelDB store to a file.
  • --exportFromTTLCacheOnly: Pass true to explicitly export from the TTL DB only; pass false for the default behavior of exporting from both LevelDB and the TTL DB.
  • --import: Imports mappings from another DB using the MapDB.txt file format.
  • --importtottldb: Pass False as the parameter value if you do not want to import to the TTL DB and want to use LevelDB instead.
  • --salt: Salt value used to decrypt the file. Mandatory parameter without which decryption cannot happen.
  • token: Tokenizes a particular column from the input file; creates a new file in the same location with a .token suffix.