Google Chrome forensic tool
Forensic tool for processing, analyzing and visually presenting Google Chrome artifacts.
- Mounting of the volume with Google Chrome data while preserving integrity throughout the manipulation process
  - read-only access
  - hash checking
- Suspect profile and behavior estimations including:
  - personal information (emails, phone numbers, date of birth, gender, nationality, city, address...)
- Chrome metadata
- Accounts
- Version
- Target system metadata
- Operating system
- Display resolution
- Mobile Devices
- Browsing history URL category classification using an ML model
- Login data frequency (most used emails and credentials)
- Browsing activity during time periods (heatmap, barchart)
- Most visited websites
- Browsing history
- transition types
- visit durations
- avg. visit duration for most common sites
- Login data (including parsed metadata)
- Autofills
- estimated cities and zip codes
- estimated phone number
- other possible addresses
- geolocation API (needed to be registered to Google)
- Downloads
- default download directory
- download statistics
- Bookmarks
- Favicons (including all subdomains used for the respective favicon)
- Cache
- URLs
- content types
- payloads (images or base64)
- additional parsed metadata
- Volume
- volume structure data (visual, JSON)
- Shared database to save potential evidence found by investigators
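Much of the analysis above is parsed from Chrome's SQLite databases in the profile directory. As an illustration only (not ForensiX's actual code), the sketch below opens a copied `History` database read-only and pulls the most visited sites; it assumes Chrome's standard `urls` table and Chrome's timestamp format of microseconds since 1601-01-01:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Chrome stores timestamps as microseconds since 1601-01-01 UTC (Windows epoch).
CHROME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def chrome_time_to_datetime(microseconds: int) -> datetime:
    """Convert a Chrome timestamp to a timezone-aware datetime."""
    return CHROME_EPOCH + timedelta(microseconds=microseconds)

def most_visited(history_path: str, limit: int = 10):
    """Return (url, title, visit_count) rows for the most visited sites.

    The database is opened read-only via an SQLite URI so the copied
    evidence is never modified (mirroring the read-only guarantee above).
    """
    conn = sqlite3.connect(f"file:{history_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT url, title, visit_count FROM urls "
            "ORDER BY visit_count DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        conn.close()
    return rows
```

Opening with `mode=ro` fails loudly if anything attempts a write, which is a cheap way to keep an analysis script honest about evidence integrity.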
1. Clone the repository:

   ```
   git clone https://github.com/ChmaraX/forensix.git
   cd forensix
   ```
2. Prepare your browser data: copy your Chrome/Brave browser data to the `data` directory:

   ```
   # For Chrome (replace with your actual profile path)
   cp -r "/Users/username/Library/Application Support/Google/Chrome/Default/." ./data/

   # For Brave (replace with your actual profile path)
   cp -r "/Users/username/Library/Application Support/BraveSoftware/Brave-Browser/Profile 2/." ./data/
   ```
3. Build and start the application:

   ```
   docker-compose up --build
   ```
That's it! The Docker setup will automatically:
- Build all services from source code
- Install Node.js and Python dependencies
- Download the ML model (~700MB) for URL classification
- Start all services
Note: The first build may take several minutes due to downloading dependencies and the ML model.
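Because the copied profile in `./data` is the evidence itself, it can be worth snapshotting file hashes before the first run so any later modification is detectable. The following is a minimal sketch of that kind of hash checking; the helper names are illustrative, not part of ForensiX:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large caches are not loaded into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: str) -> dict:
    """Map each file's path (relative to data_dir) to its SHA-256 hex digest."""
    root = Path(data_dir)
    return {
        str(p.relative_to(root)): sha256_of(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def verify_manifest(data_dir: str, manifest: dict) -> list:
    """Return the relative paths whose current hash no longer matches the manifest."""
    current = build_manifest(data_dir)
    return [path for path, digest in manifest.items()
            if current.get(path) != digest]
```

Running `build_manifest("./data")` once after the copy, and `verify_manifest` after an analysis session, gives a simple before/after integrity check.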
If you prefer to run without Docker:
1. Install Python dependencies:

   ```
   pip install -r requirements.txt
   ```
2. Download the ML model:

   ```
   ./download-model.sh
   ```
3. Install and start the services manually:

   ```
   # Server
   cd server
   npm install
   npm start

   # Client (in another terminal)
   cd client
   npm install
   npm start
   ```
The running services listen on:
- ForensiX UI => http://localhost:3000
- ForensiX Server => http://localhost:3001
- MongoDB => mongodb://localhost:27017
To use HTTPS for communication on the UI or server side, place the key and certificate into a `/certificates` directory inside either the `/server` or `/client` directory.

To generate a self-signed key and certificate:

```
openssl req -nodes -new -x509 -keyout server.key -out server.cert
```
Change the `baseURL` protocol to `https` in `/client/src/axios-api.js`, then rebuild the changed image:

```
docker-compose build <client|server>
```