Databases in msfconsole are used to keep track of our results. It is no mystery that during more complex machine assessments, let alone assessments of entire networks, results can quickly become difficult to keep track of.

This is where databases come into play. Msfconsole has built-in support for the PostgreSQL database system. With it, we have direct, quick, and easy access to scan results, with the added ability to import and export results in conjunction with third-party tools. Database entries can also be used to configure exploit module parameters directly from existing findings.

Setting up the Database

PostgreSQL

#Status
sudo service postgresql status

#Start PostgreSQL
sudo systemctl start postgresql

#After starting PostgreSQL, we need to create and initialize the MSF database with msfdb init.
sudo msfdb init

#If the initialization is skipped and Metasploit tells us that the database is already configured, we can recheck the status of the database.
sudo msfdb status
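Beyond init and status, the msfdb wrapper supports a handful of other subcommands for managing the database service. The set below matches recent Metasploit builds, but the exact list can vary between versions, so `msfdb --help` on your own system is the authoritative reference:

```shell
# Common msfdb subcommands (verify with `msfdb --help` on your build)
sudo msfdb start    # start the database service
sudo msfdb stop     # stop the database service
sudo msfdb reinit   # delete and reinitialize the database
sudo msfdb delete   # delete the database and its configuration
sudo msfdb run      # start the database, then launch msfconsole
```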


After the database has been initialized, we can start msfconsole and connect to the created database simultaneously.

MSF - Connect to the Initiated Database

sudo msfdb run

MSF - Reinitiate the Database

If, however, the database is already configured and we cannot change the password for the msf database user, we can reinitialize the database and copy its configuration into place with the following commands:

msfdb reinit
cp /usr/share/metasploit-framework/config/database.yml ~/.msf4/
sudo service postgresql restart
msfconsole -q

msf6 > db_status
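If msfconsole was started without `msfdb run` and `db_status` reports no connection, we can connect manually from within the console. One option, assuming the `database.yml` file copied into `~/.msf4/` above, is to point `db_connect` at that file; alternatively, explicit credentials can be supplied (the user and password placeholders here stand in for the values found in `database.yml`):

```shell
# Inside msfconsole: connect using the copied YAML configuration
msf6 > db_connect -y ~/.msf4/database.yml

# Or connect with explicit credentials
# (replace user/pass with the values from database.yml;
#  msf is the default database name)
msf6 > db_connect user:pass@127.0.0.1:5432/msf
```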

MSF - Database Options

help database

db_status
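The database-backed commands listed by `help database` let us query and store results directly from the console. A few of the most commonly used ones are sketched below; the command names match recent msf6 builds, and the target IP and filenames are placeholders for illustration:

```shell
msf6 > hosts            # hosts discovered or imported into the database
msf6 > services         # services detected on those hosts
msf6 > vulns            # vulnerabilities matched to hosts
msf6 > loot             # collected files, hashes, and other loot
msf6 > notes            # free-form notes tied to hosts

msf6 > db_nmap -sV 10.10.10.40      # run Nmap and store results (example target)
msf6 > db_import scan.xml           # import third-party scan results
msf6 > db_export -f xml backup.xml  # export the database contents
```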

Using the Database

With the help of the database, we can manage the many different categories of information about the hosts we have analyzed and interacted with using Metasploit. This information can be exported and imported, which is especially useful when we have extensive lists of hosts, loot, notes, and stored vulnerabilities for those hosts. After confirming that the database is successfully connected, we can organize our work with Workspaces.

Workspaces

We can think of Workspaces the same way we would think of folders in a project. We can segregate the different scan results, hosts, and extracted information by IP, subnet, network, or domain.
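Workspaces are managed with the `workspace` command inside msfconsole. A minimal sketch of its typical usage follows; the workspace name `Target_Network` is a hypothetical example:

```shell
msf6 > workspace                    # list workspaces; * marks the active one
msf6 > workspace -a Target_Network  # add a new workspace
msf6 > workspace Target_Network     # switch to it
msf6 > workspace -d Target_Network  # delete it
msf6 > workspace -h                 # show the full option list
```

Because each workspace keeps its own hosts, services, and loot, creating one per engagement (or per subnet/domain) keeps unrelated results from mixing.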