I made this for Nuclei output only, so the results can be fed into other tools to improve shell-scripting automation during research.
OPTION 1: Run the one-liner directly against the Nuclei output file on disk
awk '{for(i=1;i<=NF;i++) if ($i ~ /^https?:\/\//) {split($i,a,"/"); print a[3]} else if ($i ~ /^[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/) {print $i}}' nucleifile.txt | sort | uniq > output.txt
OPTION 2: Cat the results and pipe the output to awk
cat nucleifile.txt | awk '{for(i=1;i<=NF;i++) if ($i ~ /^https?:\/\//) {split($i,a,"/"); print a[3]} else if ($i ~ /^[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/) {print $i}}' | sort | uniq > output.txt
- Replace nucleifile.txt with your actual filename.
- for(i=1;i<=NF;i++): This for loop traverses every field in each line. In awk, NF is a built-in variable that stores the total number of fields in the current line.
- if ($i ~ /^https?:\/\//): This if statement examines each field ($i) and checks whether it matches the regular expression ^https?:\/\/, i.e. whether it starts with http:// or https://.
- {split($i,a,"/"); print a[3]}: If the if condition is met, the split function divides the current field into parts on the "/" delimiter and stores them in the array a. It then prints the third element of a, which is the domain or subdomain part of the URL.
- else if ($i ~ /^[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/): This else if checks each field against a pattern for bare domain names: alphanumeric characters, hyphens, or dots, followed by a dot and at least two alphabetic characters.
- {print $i}: If the else if condition is met, the field is printed unchanged.
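A quick way to sanity-check the field logic (the input lines here are hypothetical; any file mixing URLs and bare hostnames behaves the same):
printf 'GET https://sub.example.com/login [200]\nfound on example.org today\n' | awk '{for(i=1;i<=NF;i++) if ($i ~ /^https?:\/\//) {split($i,a,"/"); print a[3]} else if ($i ~ /^[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/) {print $i}}' | sort | uniq
# expected output:
# example.org
# sub.example.com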
This set of one-liners is aimed at quickly surfacing potential XSS vulnerabilities and entry points. Each variation targets a different level of complexity and use case.
Quickly identifies potential XSS vulnerabilities with minimal setup.
echo https://example.com/ | gau | gf xss | uro | Gxss | kxss | tee xss_output.txt
Incorporates URL validation and response filtering for better precision.
echo https://example.com/ | gau | gf xss | uro | httpx -silent -mc 200 | Gxss | kxss | tee xss_output.txt
Leverages advanced payload testing using dalfox.
echo https://example.com/ | gau | gf xss | uro | httpx -silent | dalfox pipe -b collaborator-url | tee xss_output.txt
Combines multiple tools and techniques for comprehensive scanning.
target="https://example.com/"; { echo "$target" | gau; echo "$target" | waybackurls; echo "$target" | katana; } | gf xss | uro | httpx -silent | Gxss | kxss | tee xss_output.txt
Adds parameter fuzzing for extended coverage using ffuf.
echo https://example.com/ | gau | gf xss | uro | httpx -silent | ffuf -u FUZZ -w parameters.txt -mc 200 | Gxss | tee xss_output.txt
This one-liner creates a cron job (scheduled task) that clears the bash, zsh, and fish histories every 2 minutes for users with home directories under /home/. Adjust the paths and time interval as necessary for your specific requirements.
(crontab -l 2>/dev/null; echo "*/2 * * * * find /home/ -mindepth 1 -maxdepth 1 -type d \( -exec sh -c 'echo "" > {}/.bash_history' \; -exec sh -c 'echo "" > {}/.zsh_history' \; -exec sh -c 'rm -f {}/.local/share/fish/fish_history' \; \)") | crontab -This one-liner sets up a cron job to automatically update, upgrade, and reboot your Linux device every Sunday at 3 AM
(crontab -l 2>/dev/null; echo '0 3 * * 0 bash -c '\''[ -f /etc/os-release ] && . /etc/os-release && case "${ID,,}" in ubuntu|debian|kali|parrot) apt-get update && apt-get upgrade -y ;; arch|blackarch) pacman -Syu --noconfirm ;; amzn) (command -v dnf && dnf -y upgrade || yum -y update) ;; fedora) dnf -y upgrade ;; centos|rhel) yum -y update ;; *) exit 1 ;; esac && reboot'\''') | crontab -
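To check which branch of the case statement would fire on a given machine without waiting for the job (assumes /etc/os-release exists, as the job itself does):
. /etc/os-release && echo "${ID,,}"
# e.g. prints "ubuntu" on Ubuntu, so the apt-get branch would run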
This one-liner sets up a cron job that truncates .log files in /var/log not modified in the last two days, logging output and errors under /var/log/.
Daily at 2 AM:
(crontab -l 2>/dev/null; echo "0 2 * * * find /var/log -type f -name '*.log' -mtime +2 -exec truncate -s 0 {} + > /var/log/log-cleanup.log 2>&1") | crontab -
Every 5 minutes:
( crontab -l 2>/dev/null; echo "*/5 * * * * find /var/log -type f -name '*.log' -mtime +2 -exec truncate -s 0 {} + > /var/log/log-cleanup.log 2> /var/log/log-cleanup.err" ) | crontab -
This uses mat2 to remove metadata from user files in the Documents, Desktop, and Downloads folders every hour.
(crontab -l 2>/dev/null; echo "0 * * * * find \$HOME/Documents \$HOME/Desktop \$HOME/Downloads -type f -exec mat2 --in-place {} +") | crontab -The cron schedule is defined by the first five fields in the cron expression (0 3 * * 0):
The cron schedule is defined by the first five fields in the cron expression (0 3 * * 0, from the update-and-reboot job above):
- Minute: 0 (the 0th minute)
- Hour: 3 (3 AM)
- Day of the Month: * (every day of the month)
- Month: * (every month)
- Day of the Week: 0 (Sunday, where 0 represents Sunday in cron)
Valid ranges for each field (a worked example follows this list):
- Minute: 0-59 (the minute of the hour)
- Hour: 0-23 (the hour of the day)
- Day of the Month: 1-31 (the day of the month)
- Month: 1-12 (the month of the year)
- Day of the Week: 0-7 (the day of the week, where both 0 and 7 represent Sunday)
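Reading a hypothetical expression with these rules: 30 1 * * 1-5 means minute 30, hour 1, any day of the month, any month, Monday through Friday, so the entry below would run a script at 1:30 AM on weekdays:
30 1 * * 1-5 /usr/local/bin/backup.sh
# /usr/local/bin/backup.sh is a placeholder path, not a script from this page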
Clean up rotated Wazuh logs stored under YEAR/MONTH directories.
Safe for production: active logs are not touched.
- Run as root
- Requires GNU find
- Compatible with Wazuh YEAR/MONTH rotation
- Combine time-based and size-based cleanup for best results
Directories covered:
/var/ossec/logs/wazuh/
/var/ossec/logs/alerts/
/var/ossec/logs/archives/
/var/ossec/logs/api/
/var/ossec/logs/cluster/
/var/ossec/logs/firewall/
Dry run (run in an interactive bash shell, where the braces expand):
find /var/ossec/logs/{wazuh,alerts,archives,api,cluster,firewall} -type f -mtime +30
Time-based cleanup cron (paths spelled out because cron's default /bin/sh may not expand braces):
( crontab -l 2>/dev/null; echo "0 1 * * * find /var/ossec/logs/wazuh /var/ossec/logs/alerts /var/ossec/logs/archives /var/ossec/logs/api /var/ossec/logs/cluster /var/ossec/logs/firewall -type f -mtime +30 -delete && find /var/ossec/logs/wazuh /var/ossec/logs/alerts /var/ossec/logs/archives /var/ossec/logs/api /var/ossec/logs/cluster /var/ossec/logs/firewall -type d -empty -delete" ) | crontab -
Manual time-based cleanup:
find /var/ossec/logs/{wazuh,alerts,archives,api,cluster,firewall} -type f -mtime +30 -delete && find /var/ossec/logs/{wazuh,alerts,archives,api,cluster,firewall} -type d -empty -delete
Size-based cleanup deletes the oldest rotated files first until usage is under the limit.
Dry run (oldest files first):
find /var/ossec/logs/{wazuh,alerts,archives,api,cluster,firewall} -type f -printf '%TY-%Tm-%Td %p\n' | sort | head
Add the size-based cleanup cron (the \% escapes are required because cron treats a bare % as a newline):
( crontab -l 2>/dev/null; echo "0 2 * * * MAX=50; DIRS=\"/var/ossec/logs/wazuh /var/ossec/logs/alerts /var/ossec/logs/archives /var/ossec/logs/api /var/ossec/logs/cluster /var/ossec/logs/firewall\"; while [ \"\$(du -sBG \$DIRS 2>/dev/null | awk '{s+=\$1} END{print s+0}')\" -gt \"\$MAX\" ]; do find \$DIRS -type f -printf '\\%T@ \\%p\n' 2>/dev/null | sort -n | head -1 | awk '{print \$2}' | xargs -r rm -f; done" ) | crontab -
Manual size-based cleanup:
MAX=50; DIRS="/var/ossec/logs/wazuh /var/ossec/logs/alerts /var/ossec/logs/archives /var/ossec/logs/api /var/ossec/logs/cluster /var/ossec/logs/firewall"; while [ "$(du -sBG $DIRS 2>/dev/null | awk '{s+=$1} END{print s+0}')" -gt "$MAX" ]; do find $DIRS -type f -printf '%T@ %p\n' 2>/dev/null | sort -n | head -1 | awk '{print $2}' | xargs -r rm -f; done
Time-based (30 days, daily at 01:00):
0 1 * * * find /var/ossec/logs/wazuh /var/ossec/logs/alerts /var/ossec/logs/archives /var/ossec/logs/api /var/ossec/logs/cluster /var/ossec/logs/firewall -type f -mtime +30 -delete && find /var/ossec/logs/wazuh /var/ossec/logs/alerts /var/ossec/logs/archives /var/ossec/logs/api /var/ossec/logs/cluster /var/ossec/logs/firewall -type d -empty -delete
Size-based (50 GB cap, daily at 02:00):
0 2 * * * MAX=50; DIRS="/var/ossec/logs/wazuh /var/ossec/logs/alerts /var/ossec/logs/archives /var/ossec/logs/api /var/ossec/logs/cluster /var/ossec/logs/firewall"; while [ "$(du -sBG $DIRS 2>/dev/null | awk '{s+=$1} END{print s+0}')" -gt "$MAX" ]; do find $DIRS -type f -printf '\%T@ \%p\n' 2>/dev/null | sort -n | head -1 | awk '{print $2}' | xargs -r rm -f; done
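Before picking MAX, it can help to check current usage with the same du/awk pattern the loop relies on (requires GNU du, matching the GNU find requirement above):
du -sBG /var/ossec/logs/wazuh /var/ossec/logs/alerts /var/ossec/logs/archives /var/ossec/logs/api /var/ossec/logs/cluster /var/ossec/logs/firewall 2>/dev/null | awk '{s+=$1} END{print s " GB total"}'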