MapDB CLI Commands
Use these commands to detokenize data, encrypt and import the encrypted data, and delete old data entries from MapDB.
Delete old data from MapDB
- As logs are processed by Cloud Connector, new username and IP address values in the logs are hashed and stored as key-value pairs in LevelDB (the mapdbcache folder in the Cloud Connector installation directory).
- Data is also stored in reverse (e.g., key → 462768abd7d0f3d532822c9842fe84c57ecf6eb75651f5e011cb4096fcf91c09, value → UserName) in LevelDB (the deObfmapdbcache folder).
- Key-value pairs are stored in a human-readable format in the MapDB.txt file as well.
- Data in the MapDB.txt file is also used to rebuild mapdbcache and deObfmapdbcache if the LevelDB database becomes corrupted, using the command line:
./shnlpcli mapdb --import MapDB.txt
Example log line (BlueCoat):
AZ18GW0001 2020-05-25 14:23:30 163 192.168.1.5 bhaskar - - OBSERVED - - 200 TCP_NC_MISS GET application/atomicmail dns 5pmweb.com 25 https://www.5pmweb.com/application/atomicmail https://www.5pmweb.com/application/atomicmail - "Mozilla/5.0 (Windows; U; Windows NT 5.1; cs-CZ) AppleWebKit/525.28.3 (KHTML, like Gecko) Version/3.2.3 Safari/525.29);" - 300000 20000 - - - 67.227.197.144 -
MapDB.txt entries:
192.168.1.5|b1fe3ba8d1776a05f76fcd5f2bb5333a4817225df78ccb24a9758c55e76aa7c0
bhaskar|5e7f4b907e8c8f2539b953155bf495f1f2c9712013f3c03e70d833768ae319e0
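The keys are 64-character hexadecimal digests, which is the output size of SHA-256. The exact tokenization routine (including any per-tenant salt) is not documented here, so the following Linux command is only an illustration of how a value maps to a key of this shape; if a salt is involved, its output will not match the sample entries above.
printf '%s' 'bhaskar' | sha256sum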
Use Map data
- The Dashboard de-tokenizes user names and IP addresses using the data stored in deObfmapdbcache.
- The Dashboard uses only the last year's data for processing.
- As new users and IP addresses are processed from logs, new data is added to LevelDB (both mapdbcache and deObfmapdbcache) as well as to MapDB.txt.
- Data older than a year becomes dormant, needlessly consuming disk space.
Encrypt MapDB.txt file
- All MapDBEncrypted.txt files can be found in the MapDBEntries folder in the Cloud Connector installation directory.
- Before an entry is made to MapDBEncrypted.txt, the size of MapDB.txt is checked first; if it is more than 100 MB (the default value, which is a configurable parameter), the existing file is renamed with a timestamp suffix (e.g., MapDB202007130355123.txt) and new entries are written to a fresh MapDBEncrypted.txt file.
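For example, after a rotation the MapDBEntries folder could contain files like the following (names are illustrative, following the timestamp format above):
- MapDB202007130355123.txt (rotated file, frozen when the size threshold was crossed)
- MapDBEncrypted.txt (current file receiving new entries)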
Shnlpcli commands
To import data from an encrypted text file, use the following command lines:
- Import command
- A new mandatory argument "--encrypted" must be passed to indicate whether the txt file being imported contains encrypted data
- ./shnlpcli mapdb --import MapDBEntries/MapDBEncrypted.txt --encrypted true
- MapDBEncrypted.txt is the encrypted text file that contains all the data to be imported into the DB
- ./shnlpcli mapdb --import MapDB.txt --encrypted false
- Pass "--encrypted false" to import from old txt files where the data is not encrypted
- If data needs to be imported into the new TTL DB, pass the argument "--importtottldb true" (a combined example is shown at the end of this section)
- ./shnlpcli mapdb --import MapDBEntries/MapDBEncrypted.txt --encrypted true --importtottldb true
- Note: If the --importtottldb argument is passed as false or is not passed at all, all data is imported only into the old LevelDB
- Note: --importtottldb works only if the write to TTL DB option is set to true on the dashboard
- MapDBEntries is the folder in the Cloud Connector installation directory where MapDBEncrypted.txt is present
- Export command
- ./shnlpcli md --export MapDBEntries/NewMapDB.txt
- This command exports all data from both LevelDB and RocksDB to the NewMapDB.txt file in the MapDBEntries folder in the Cloud Connector installation directory
- If data from only the TTL DB needs to be exported, pass the argument "--exportFromTTLCacheOnly true"
- e.g.: ./shnlpcli md --export MapDBEntries/NewMapDBttl.txt --exportFromTTLCacheOnly true
- Note: If "--exportFromTTLCacheOnly" is passed as false or is not passed at all, all data from LevelDB and RocksDB is exported to the NewMapDB.txt file in the MapDBEntries folder in the Cloud Connector installation directory
- Note: --exportFromTTLCacheOnly can export data from the TTL DB only if the write to TTL DB option is set to true on the dashboard
- Decrypt command
- ./shnlpcli mapdb --decrypt MapDBEntries/MapDBEncrypted.txt --salt <salt value>
- This command decrypts the file if the salt passed as an argument matches the tenant's salt
- The command creates a new file with the .decrypt suffix in the MapDBEntries directory
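The flags above can be combined. For example, to load an old, unencrypted MapDB.txt into the new TTL DB (assuming the write to TTL DB option is enabled on the dashboard; this particular combination is inferred from the flag descriptions above rather than shown in them):
- ./shnlpcli mapdb --import MapDB.txt --encrypted false --importtottldb true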
Note:
- On Windows servers, the TTL DB works only if Microsoft Visual C++ 2015 is installed. Therefore, before changing the write to TTL DB option on the dashboard, please make sure that Microsoft Visual C++ 2015 is installed on the Windows server where Cloud Connector is installed.
- Salt change: Changing the salt value makes the existing MapDBEncrypted.txt file unusable, since its contents are encrypted with the old salt value. Moreover, attempting to import data from the old MapDBEncrypted.txt file after changing the salt value would lead to data corruption in the DB. Please follow the steps below after changing the salt value (see the example command sequence after these notes):
- After changing the salt value, take a backup of the existing MapDBEncrypted.txt files
- Export data from LevelDB / the RocksDB-based TTL DB
- Use the newly exported MapDBEncrypted.txt files for importing data in the future
- Switching from TTL DB to LevelDB: Switching back from TTL DB to LevelDB for any reason makes the data stored in the TTL DB up to that point invisible to Cloud Connector. Hence, detokenization using data stored in RocksDB would fail. Therefore, before switching back to LevelDB from TTL DB, perform the following steps (see the example command sequence after these notes):
- Export all data from the TTL DB
- Import all the exported data from the TTL DB into LevelDB, and then switch the write to TTL DB option on the dashboard
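As an illustration, the two procedures above map to the documented commands roughly as follows (file names are examples, the --encrypted value depends on whether the exported file is encrypted, and the dashboard change itself is a manual step):
- Salt change: back up the old encrypted files (e.g., copy MapDBEntries/MapDBEncrypted*.txt to a backup location), then export the current mappings with ./shnlpcli md --export MapDBEntries/NewMapDB.txt and use the newly exported file for any future imports.
- Switching from TTL DB to LevelDB: export the TTL DB data with ./shnlpcli md --export MapDBEntries/NewMapDBttl.txt --exportFromTTLCacheOnly true, import it into LevelDB with ./shnlpcli mapdb --import MapDBEntries/NewMapDBttl.txt --encrypted <true|false> (omit --importtottldb, or pass it as false, so the data lands in LevelDB), and only then switch the write to TTL DB option on the dashboard.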
Delete old entries from MapDB.txt file
- Since the TTL DB cache can delete old data and therefore contains only the latest data, all data from the TTL DB is exported to the MapDB.txt file at regular intervals. Hence, the newly exported MapDB.txt file contains the latest data up to the time of export.
- Flags used in the scheduler:
- mapdbtxt.enableMapDBTxtScheduler takes a boolean value (True/False) as input. The scheduler is enabled only when this value is set to True. By default this flag is set to False.
- mapdb.ttldb (existing flag that controls whether the TTL DB is used) takes a boolean value (True/False) as input. When this value is set to True, data is written to the TTL cache. By default this flag is set to False. The scheduler runs only if both mapdb.ttldb and mapdbtxt.enableMapDBTxtScheduler are set to True.
- mapdbtxt.exportFromTTLOnly takes a boolean value (True/False) as input. When this value is set to True, only data from the TTL cache is exported. By default this flag is set to False.
- mapdbtxt.deleteoldentries.fixedDelaySeconds: time interval, in seconds, at which the scheduler that exports data runs. The default value is 1209600 sec (2 weeks).
- mapdbtxt.deleteoldentries.initialDelaySeconds: initial delay, in seconds, after Cloud Connector starts before the scheduler is activated. The default value is 604800 sec (1 week).
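As an illustration, the following settings would enable the scheduler and export from the TTL cache only, while keeping the default timings. The flag names and defaults are the ones documented above; the key=value form and the name/location of the configuration file that holds them depend on your Cloud Connector deployment and are shown here only as an assumption.
mapdb.ttldb=true
mapdbtxt.enableMapDBTxtScheduler=true
mapdbtxt.exportFromTTLOnly=true
mapdbtxt.deleteoldentries.fixedDelaySeconds=1209600
mapdbtxt.deleteoldentries.initialDelaySeconds=604800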
CLI commands for mapdb
| Command | Usage | Options |
| --- | --- | --- |
| --column | Column index (starts with 1) that needs to be tokenized or de-tokenized. Applicable only for the token and detoken commands. Default index: 1. | |
| --decrypt | Decrypts an encrypted MapDBEncrypted.txt file; can be used for manual debugging of detokenization. Creates a new file in the same location with the .decrypt suffix. | Mandatory parameter: the salt value needs to be passed to perform decryption. |
| --detoken | De-tokenizes a particular column from the input file; creates a new file in the same location with the .detoken suffix. | |
| --encrypted | Used when importing mappings from MapDB.txt. Pass True if the data is encrypted and False if it is not encrypted. | |
| --export | Exports the existing mappings from the LevelDB store to a file. | |
| --exportFromTTLCacheOnly | Pass true to explicitly export from the TTL DB only; pass false (the default) to export from both LevelDB and TTL DB. | |
| --import | Imports mappings from another DB using the MapDB.txt file format. | |
| --importtottldb | Pass False as the parameter value if you do not want to import into the TTL DB and want to use LevelDB instead. | |
| --salt | Salt value to decrypt a file. Mandatory parameter without which decryption cannot happen. | |
| --token | Tokenizes a particular column from the input file; creates a new file in the same location with the .token suffix. | |
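For example, a de-tokenization run for the second column of an input file might look like the following; the argument layout and the resulting file name are assumptions based on the option descriptions above, and the input file name is arbitrary.
- ./shnlpcli mapdb --detoken tokenized_logs.txt --column 2
- This creates a new file with the .detoken suffix (e.g., tokenized_logs.txt.detoken) in the same location; the corresponding --token form tokenizes the chosen column instead, producing a .token file.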