
Stabilized v1.1.5 #94


Merged
merged 45 commits into from Dec 17, 2024

Changes from all commits

Commits
45 commits
c436ba1
Update pyproject.toml
OSINT-TECHNOLOGIES Nov 15, 2024
a1e7f69
Bumped version to v1.1.5 rolling
OSINT-TECHNOLOGIES Nov 15, 2024
4290022
Added keywords_list none value if PS was set to N
OSINT-TECHNOLOGIES Nov 19, 2024
b53b3a3
Added proxifier.py to handle proxies gathering for GD
OSINT-TECHNOLOGIES Nov 19, 2024
31fe4a2
Moved Dorking DB related functions to db_creator.py
OSINT-TECHNOLOGIES Nov 19, 2024
bc8f647
Moved Dorking DB related functions in this module
OSINT-TECHNOLOGIES Nov 19, 2024
8f2e132
Corrected local imports names
OSINT-TECHNOLOGIES Nov 19, 2024
2c694af
Corrected local imports names
OSINT-TECHNOLOGIES Nov 19, 2024
e585b7e
Added user-agents list support for config generator
OSINT-TECHNOLOGIES Nov 21, 2024
3bc523d
Added user-agents rotation support
OSINT-TECHNOLOGIES Nov 21, 2024
8382705
Added user-agents rotation module
OSINT-TECHNOLOGIES Nov 21, 2024
86ee762
Delete dorking/proxifier.py
OSINT-TECHNOLOGIES Nov 21, 2024
e3fc3a2
Added CLI output for user-agent changing
OSINT-TECHNOLOGIES Nov 26, 2024
d09f545
HTML report cosmetical improvements (PS and SI)
OSINT-TECHNOLOGIES Nov 26, 2024
f841d11
HTML report cosmetical improvements (PS and SI)
OSINT-TECHNOLOGIES Nov 26, 2024
959508d
Code clean up, added X.com links parsing support
OSINT-TECHNOLOGIES Nov 26, 2024
0f5f685
Extended Twitter links paragraph with X.com links
OSINT-TECHNOLOGIES Nov 26, 2024
fea6d68
Added support of X.com links for HTML reports
OSINT-TECHNOLOGIES Nov 26, 2024
ffdffc7
Added new paragraph for proxies file path
OSINT-TECHNOLOGIES Nov 30, 2024
0a627c2
Added code to handle proxies usage
OSINT-TECHNOLOGIES Dec 2, 2024
ff5b3cd
Added module to handle proxies
OSINT-TECHNOLOGIES Dec 2, 2024
2fffdcf
Updated proxies handling logic
OSINT-TECHNOLOGIES Dec 2, 2024
01a71b6
Updated proxies handling logic
OSINT-TECHNOLOGIES Dec 2, 2024
7a25cc5
Added function to check on working proxies
OSINT-TECHNOLOGIES Dec 2, 2024
6ed399b
Added support of check on working proxies
OSINT-TECHNOLOGIES Dec 2, 2024
d3d48e6
Modified check on working proxies
OSINT-TECHNOLOGIES Dec 2, 2024
2cc0978
Changed usage of proxies_list on working_proxies list
OSINT-TECHNOLOGIES Dec 2, 2024
2c88e87
Modified color scheme for dorking results output
OSINT-TECHNOLOGIES Dec 2, 2024
411188e
CLI cosmetical improvements
OSINT-TECHNOLOGIES Dec 2, 2024
0225060
Red-colored error
OSINT-TECHNOLOGIES Dec 3, 2024
7f44eac
Added API scans paragraph placeholders
OSINT-TECHNOLOGIES Dec 3, 2024
f0b7495
Added support of VirusTotal API scan reporting
OSINT-TECHNOLOGIES Dec 3, 2024
0c3d143
Added support of VirusTotal API results transfering
OSINT-TECHNOLOGIES Dec 3, 2024
966b358
Added returns from VirusTotal API module
OSINT-TECHNOLOGIES Dec 3, 2024
3822d09
Added ST API values for return
OSINT-TECHNOLOGIES Dec 12, 2024
0a32158
Added reporting support for SecurityTrails API
OSINT-TECHNOLOGIES Dec 12, 2024
735ee67
Added support of ST API reporting for HTML report
OSINT-TECHNOLOGIES Dec 12, 2024
25c75c6
Added support of SecurityTrails API reporting
OSINT-TECHNOLOGIES Dec 12, 2024
4fb4a21
Fixed error when not selecting ST API scan doesn't allow to create HT…
OSINT-TECHNOLOGIES Dec 12, 2024
1ee6dc6
Fixed error when not selecting ST API scan doesn't allow to create HT…
OSINT-TECHNOLOGIES Dec 12, 2024
d0b796e
Cosmetical fixes for ST API HTML report paragraph
OSINT-TECHNOLOGIES Dec 12, 2024
573750e
Cosmetical update for N/A org name filler
OSINT-TECHNOLOGIES Dec 16, 2024
85cf5c5
Cosmetical CLI fixes for used APIs ID string in pre-scan summary
OSINT-TECHNOLOGIES Dec 16, 2024
079a8c9
CLI color scheme improvements
OSINT-TECHNOLOGIES Dec 16, 2024
ee6ab6f
Bumped version
OSINT-TECHNOLOGIES Dec 17, 2024
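The Nov 21 commits above add user-agents rotation driven by a config-supplied list. The rotation module itself is not shown in this diff, so the following is only a minimal sketch of per-request rotation; get_random_user_agent and the sample strings are illustrative assumptions, not the PR's actual API:

import random

def get_random_user_agent(user_agents):
    # Pick a different user-agent for each outgoing request
    return random.choice(user_agents)

user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (X11; Linux x86_64)',
]
headers = {'User-Agent': get_random_user_agent(user_agents)}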
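Likewise, the Dec 2 commits swap a raw proxies_list for a filtered working_proxies list before dorking. The proxy module also does not appear in this diff; a minimal sketch of that kind of liveness check, assuming requests and a plain list of proxy URLs (check_proxies and test_url are illustrative names):

import requests

def check_proxies(proxies_list, test_url='https://api.ipify.org', timeout=5):
    # Keep only the proxies that can complete a simple test request
    working_proxies = []
    for proxy in proxies_list:
        try:
            response = requests.get(test_url, proxies={'http': proxy, 'https': proxy}, timeout=timeout)
            if response.status_code == 200:
                working_proxies.append(proxy)
        except requests.RequestException:
            continue  # dead or unreachable proxy
    return working_proxies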
14 changes: 14 additions & 0 deletions apis/api_securitytrails.py
@@ -13,6 +13,12 @@ def api_securitytrails_check(domain):
api_key = str(row[1])
print(Fore.GREEN + 'Got SecurityTrails API key. Starting SecurityTrails scan...\n')

alive_subdomains = []
txt_records = []
a_records_list = []
mx_records_list = []
ns_records_list = []
soa_records_list = []
subdomains_url = f"https://api.securitytrails.com/v1/domain/{domain}/subdomains?apikey={api_key}"
response = requests.get(subdomains_url)

@@ -31,14 +37,19 @@ def api_securitytrails_check(domain):
for value in record_data.get('values', []):
if record_type == 'a':
print(Fore.GREEN + "IP: " + Fore.LIGHTCYAN_EX + f"{value['ip']} " + Fore.GREEN + "| Organization: " + Fore.LIGHTCYAN_EX + f"{value['ip_organization']}")
a_records_list.append({'ip': value.get('ip', ''), 'organization': value.get('ip_organization', '')})
elif record_type == 'mx':
print(Fore.GREEN + "Hostname: " + Fore.LIGHTCYAN_EX + f"{value['hostname']} " + Fore.GREEN + "| Priority: " + Fore.LIGHTCYAN_EX + f"{value['priority']} " + Fore.GREEN + "| Organization: " + Fore.LIGHTCYAN_EX + f"{value['hostname_organization']}")
mx_records_list.append({'mx_hostname': value.get('hostname', ''), 'mx_priority': value.get('priority', ''), 'mx_organization': value.get('hostname_organization', '')})
elif record_type == 'ns':
print(Fore.GREEN + "Nameserver: " + Fore.LIGHTCYAN_EX + f"{value['nameserver']} " + Fore.GREEN + "| Organization: " + Fore.LIGHTCYAN_EX + f"{value['nameserver_organization']}")
ns_records_list.append({'ns_nameserver': value.get('nameserver', ''), 'ns_organization': value.get('nameserver_organization', '')})
elif record_type == 'soa':
print(Fore.GREEN + "Email: " + Fore.LIGHTCYAN_EX + f"{value['email']} " + Fore.GREEN + "| TTL: " + Fore.LIGHTCYAN_EX + f"{value['ttl']}")
soa_records_list.append({'soa_email': value.get('email', ''), 'soa_ttl': value.get('ttl', '')})
elif record_type == 'txt':
print(Fore.GREEN + "Value: " + Fore.LIGHTCYAN_EX + f"{value['value']}")
txt_records.append(value['value'])

if response.status_code == 200:
data = response.json()
@@ -51,9 +62,12 @@ def api_securitytrails_check(domain):
response = requests.get(subdomain_url, timeout=5)
if response.status_code == 200:
print(Fore.GREEN + f"{i}. " + Fore.LIGHTCYAN_EX + f"{subdomain_url} " + Fore.GREEN + "is alive")
alive_subdomains.append(subdomain_url)
else:
pass
except Exception:
pass
else:
pass

return general_data['alexa_rank'], general_data['apex_domain'], general_data['hostname'], alive_subdomains, txt_records, a_records_list, mx_records_list, ns_records_list, soa_records_list
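api_securitytrails_check now returns nine values instead of three, so callers on the reporting side must unpack the new record lists as well. A sketch of such a caller, assuming the function is imported from apis.api_securitytrails (the reporting code itself is outside this hunk):

from apis.api_securitytrails import api_securitytrails_check

(alexa_rank, apex_domain, hostname,
 alive_subdomains, txt_records,
 a_records, mx_records, ns_records, soa_records) = api_securitytrails_check('example.com')

for record in a_records:
    # Each entry is a dict built by the record-parsing loop above
    print(f"{record['ip']} ({record['organization']})")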
4 changes: 3 additions & 1 deletion apis/api_virustotal.py
@@ -14,7 +14,7 @@ def check_domain(domain, api_key):
if response.status_code == 200:
return response.json()
else:
print(f"Error: {response.status_code}")
print(Fore.RED + f"Error: {response.status_code}" + Style.RESET_ALL)
return None


@@ -40,10 +40,12 @@ def api_virustotal_check(domain):
print(Fore.GREEN + f"Undetected Samples: {len(result.get('undetected_samples', []))}\n")
print(Fore.LIGHTGREEN_EX + "-------------------------------------------------\n" + Style.RESET_ALL)
conn.close()
return result.get('categories'), len(result.get('detected_urls', [])), len(result.get('detected_samples', [])), len(result.get('undetected_samples', []))
else:
print(Fore.RED + "Failed to get domain report\n")
print(Fore.LIGHTGREEN_EX + "-------------------------------------------------\n" + Style.RESET_ALL)
conn.close()
return 'Got no information from VirusTotal API', 'Got no information from VirusTotal API', 'Got no information from VirusTotal API', 'Got no information from VirusTotal API'
pass
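Both branches of api_virustotal_check now return a four-value tuple (categories, detected-URL count, detected-sample count, undetected-sample count), with placeholder strings standing in when the API call fails, so the HTML report always has values to render. A hedged sketch of a caller, assuming the module import path matches the file layout:

from apis.api_virustotal import api_virustotal_check

categories, detected_urls, detected_samples, undetected_samples = api_virustotal_check('example.com')
# On failure every field holds the 'Got no information from VirusTotal API' placeholder
print(f"Categories: {categories} | Detected URLs: {detected_urls}")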


14 changes: 10 additions & 4 deletions datagather_modules/crawl_processor.py
@@ -26,7 +26,7 @@ def whois_gather(short_domain):
logging.info('WHOIS INFO GATHERING: OK')
w = whois.whois(short_domain)
if w.org is None:
w['org'] = 'n/a'
w['org'] = 'Organization name was not extracted'
logging.info('WHOIS INFO GATHERING: OK')
return w
except Exception as e:
@@ -110,7 +110,7 @@ def sm_gather(url):
links = [a['href'] for a in soup.find_all('a', href=True)]
categorized_links = {'Facebook': [], 'Twitter': [], 'Instagram': [],
'Telegram': [], 'TikTok': [], 'LinkedIn': [],
'VKontakte': [], 'YouTube': [], 'Odnoklassniki': [], 'WeChat': []}
'VKontakte': [], 'YouTube': [], 'Odnoklassniki': [], 'WeChat': [], 'X.com': []}

for link in links:
parsed_url = urlparse(link)
@@ -135,6 +135,8 @@ def sm_gather(url):
categorized_links['WeChat'].append(urllib.parse.unquote(link))
elif hostname and (hostname == 'ok.ru' or hostname.endswith('.ok.ru')):
categorized_links['Odnoklassniki'].append(urllib.parse.unquote(link))
elif hostname and (hostname == 'x.com' or hostname.endswith('.x.com')):
categorized_links['X.com'].append(urllib.parse.unquote(link))

if not categorized_links['Odnoklassniki']:
categorized_links['Odnoklassniki'].append('Odnoklassniki links were not found')
@@ -156,6 +158,8 @@ def sm_gather(url):
categorized_links['Twitter'].append('Twitter links were not found')
if not categorized_links['Facebook']:
categorized_links['Facebook'].append('Facebook links were not found')
if not categorized_links['X.com']:
categorized_links['X.com'].append('X.com links were not found')

return categorized_links

@@ -209,7 +213,7 @@ def domains_reverse_research(subdomains, report_file_type):
subdomain_socials_grouped = list(dict(subdomain_socials_grouped).values())

sd_socials = {'Facebook': [], 'Twitter': [], 'Instagram': [], 'Telegram': [], 'TikTok': [], 'LinkedIn': [],
'VKontakte': [], 'YouTube': [], 'Odnoklassniki': [], 'WeChat': []}
'VKontakte': [], 'YouTube': [], 'Odnoklassniki': [], 'WeChat': [], 'X.com': []}

for inner_list in subdomain_socials_grouped:
for link in inner_list:
@@ -234,6 +238,8 @@ def domains_reverse_research(subdomains, report_file_type):
sd_socials['WeChat'].append(urllib.parse.unquote(link))
elif hostname and (hostname == 'ok.ru' or hostname.endswith('.ok.ru')):
sd_socials['Odnoklassniki'].append(urllib.parse.unquote(link))
elif hostname and (hostname == 'x.com' or hostname.endswith('.x.com')):
sd_socials['X.com'].append(urllib.parse.unquote(link))

sd_socials = {k: list(set(v)) for k, v in sd_socials.items()}

@@ -242,7 +248,7 @@ def domains_reverse_research(subdomains, report_file_type):
if not subdomain_ip:
subdomain_ip = ["No subdomains IP's were found"]

if report_file_type == 'pdf' or report_file_type == 'html':
if report_file_type == 'html':
return subdomain_mails, sd_socials, subdomain_ip
elif report_file_type == 'xlsx':
return subdomain_urls, subdomain_mails, subdomain_ip, sd_socials
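With the pdf branch removed, the shape of the return value now depends solely on the report type: three values for html, four for xlsx. A caller has to branch the same way; illustrative only, assuming subdomains is already populated:

if report_file_type == 'html':
    subdomain_mails, sd_socials, subdomain_ip = domains_reverse_research(subdomains, report_file_type)
elif report_file_type == 'xlsx':
    subdomain_urls, subdomain_mails, subdomain_ip, sd_socials = domains_reverse_research(subdomains, report_file_type)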