If you missed my previous series, in which I go over the methodology for implementing and using VirusTotal API data combined with Splunk’s correlation power to discover malicious activity on your network by detecting traffic to recently registered domains — check it out HERE and HERE. Long story short: if you’re developing threat detection content, you NEED to provide context for your detections to speed up SOC analysis and decision making. Knowing, right up front, whether a domain associated with an alert was recently registered, is using “Let’s Encrypt,” or has other notable WHOIS attributes plays a critical part in how quickly that alert can be actioned.
Let’s get into it!
In this article I’m going to focus on PassiveTotal, or RiskIQ, data. If you’re not familiar with RiskIQ, it’s an online security intelligence platform that can be used to quickly review information related to indicators. You can set up monitoring rules (as in VirusTotal) or perform ad-hoc searches during live response or threat hunting activities. You do need a login, but there are free versions, and if you’re an enterprise client the licenses are fairly inexpensive and allow you to collaborate with team members.
Above you can see what search results are available (using SANS.org as an example). You get typical WHOIS information, resolutions, associated subdomains, hosts, DNS name servers — the list goes on. This web interface is very useful during an investigation (I’ve used it plenty of times), but the information is also rich and useful when developing detections. For example, I previously discussed a detection in which WHOIS information is used to determine a domain’s age and to alert on anyone visiting a website hosted there.
The same concept can be applied — but this time we’re thinking BIGGER! If we can pull in threat intelligence from multiple sources (the typical ones our analysts will be using), then we’re providing that context in the alert AND can use that information to better guide not just the investigation, but the detection criteria itself.
Enter — the RiskIQ Add-On for Splunk. This app provides custom commands that use your API key to query RiskIQ data via web requests. A list of the commands can be found HERE. The most useful commands (in my opinion) are rptsubdomains, rptosint, rptwhois, and rptpullindicators. I’m going to focus on the last one — rptpullindicators — because it accepts a field name rather than a string literal.
syntax = <Main search> | rptpullindicators field=<field name> type=<dataset-type>
description = Fetch details of indicators of various datasets from PassiveTotal API. Supported datasets are passivedns, whois, certificates, subdomains, trackers, components, hostpairs, osint, hashes, tags
example = | rptpullindicators field="src_ip,dst_ip" type="passivedns,whois"
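Under the hood, commands like rptwhois and rptpullindicators are essentially authenticated web requests against the PassiveTotal API. As a rough illustration only — the endpoint path and basic-auth scheme below are assumptions based on the public PassiveTotal v2 REST API, not taken from the add-on’s source — an equivalent ad-hoc query in Python might look like:

```python
import base64
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API_BASE = "https://api.passivetotal.org/v2"  # assumed v2 REST base URL


def build_whois_request(domain, username, api_key):
    """Build an authenticated GET request for WHOIS data on one domain."""
    url = f"{API_BASE}/whois?{urlencode({'query': domain})}"
    # PassiveTotal authenticates with HTTP basic auth: account email + API key
    token = base64.b64encode(f"{username}:{api_key}".encode()).decode()
    return Request(url, headers={"Authorization": f"Basic {token}"})


def fetch_whois(domain, username, api_key):
    """Send the request and parse the JSON body (live network call)."""
    with urlopen(build_whois_request(domain, username, api_key), timeout=30) as resp:
        return json.load(resp)
```

The add-on simply wraps this kind of call in a custom search command so the results land as Splunk fields.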
You can see from the output that we actually get a lot of data. The ONLY downside to this app is that, because it queries the RiskIQ database, it pulls back not just the most recent record but all the records associated with that query. In the above example for sans.org, you can see we get six records back. We only want the most recent (for detection development), so we need to work the data.
| eval domain = "sans.org"
| rptpullindicators field="domain" type="whois"
| where isnotnull(rawText)
| fields rating resolutions tag.name virustotal_score webLink whois_creation_date registrar lastLoadedAt registryUpdatedAt registered registrant domain indicator
| stats values(*) as * by domain
| eval registryUpdatedAt = mvindex(mvsort(registryUpdatedAt),-1), registrar = mvdedup(upper(registrar)), lastLoadedAt = mvindex(mvsort(lastLoadedAt),-1), daysSinceRegistered = round((now() - strptime(registered,"%Y-%m-%d")) / 60 / 60 / 24, 2), newlyRegistered = case(isnotnull(daysSinceRegistered) AND daysSinceRegistered <= 90, "yes", isnotnull(daysSinceRegistered) AND daysSinceRegistered > 90, "no", 1=1, "unknown")
Now we have one result, with a few extra fields evaluated from the data brought back from RiskIQ. Those fields, daysSinceRegistered and newlyRegistered, are what we need for detections. From here we could develop a detection looking for domains registered (or modified) within a threshold time period.
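For reference, the arithmetic behind daysSinceRegistered and newlyRegistered is easy to reason about outside of SPL. A minimal Python sketch of the same logic (90-day threshold, “unknown” when the registration date is missing):

```python
from datetime import datetime, timezone


def days_since_registered(registered, now=None):
    """Same math as the SPL: (now() - strptime(registered, "%Y-%m-%d")) / 60 / 60 / 24."""
    now = now or datetime.now(timezone.utc)
    reg = datetime.strptime(registered, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return round((now - reg).total_seconds() / 60 / 60 / 24, 2)


def newly_registered(days, threshold=90):
    """Mirror of the SPL case(): yes / no / unknown."""
    if days is None:
        return "unknown"
    return "yes" if days <= threshold else "no"
```

Tuning the threshold (90 days here) up or down is the main knob for how noisy a newly-registered-domain detection will be.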
Extra points — incorporate our VirusTotal API lookups into one search and wrap the whole thing in a macro!
| eval domain_context = coalesce('$domain$', "$domain$")
| appendpipe
    [ lookup vt_domain query as domain_context
    | eval original_file_name=sig_info_original_name, other_filenames=vt_file_names, file_certificate_authority=sig_info_signers, file_certificate_status=sig_info_verified, virustotal_scan_date=vt_scan_date, virustotal_detections=vt_scan_results, virustotal_score=last_analysis_positive . "/" . last_analysis_total, virustotal_scan_date=strftime(virustotal_scan_date, "%F %T"), virustotal_detections=mvindex(virustotal_detections,0,10), other_filenames=mvindex(other_filenames,0,10)
    | rptpullindicators field="domain_context" type="whois"
    | eval lookup_subsearch_filter = "temp"
    | fields categories cert_issuer cert_subject cert_validity_not_after cert_validity_not_before last_analysis_positive virustotal_scan_date last_analysis_total virustotal_detections last_dns_a_record popularity_alexa resolutions virustotal_score whois_creation_date registrar lastLoadedAt registryUpdatedAt registered registrant lookup_subsearch_filter domain_context indicator
    | stats values(*) as * by domain_context
    | eval resolutions2 = resolutions
    | nomv resolutions2
    | eval resolutions = if(len(resolutions2) > 1000, substr(resolutions2, 1, 1000) . " - <TRUNCATED>", resolutions)
    | fields - resolutions2
    | eval registryUpdatedAt = mvindex(mvsort(registryUpdatedAt),-1), registrar = mvdedup(upper(registrar)), lastLoadedAt = mvindex(mvsort(lastLoadedAt),-1), in_virustotal = if(isnotnull(last_analysis_total),"yes","no"), in_riskiq = if(isnotnull(registrar),"yes","no"), expired_cert = if(strptime(cert_validity_not_after,"%F %T") < now(),"yes","no"), daysSinceRegistered = round((now() - strptime(registered,"%Y-%m-%d")) / 60 / 60 / 24, 2), newlyRegistered = case(isnotnull(daysSinceRegistered) AND daysSinceRegistered <= 90, "yes", isnotnull(daysSinceRegistered) AND daysSinceRegistered > 90, "no", 1=1, "unknown") ]
| eventstats values(categories) as categories values(cert_issuer) as cert_issuer values(cert_subject) as cert_subject values(cert_validity_not_after) as cert_validity_not_after values(expired_cert) as expired_cert values(cert_validity_not_before) as cert_validity_not_before values(last_analysis_positive) as last_analysis_positive values(last_analysis_total) as last_analysis_total values(last_dns_a_record) as last_dns_a_record values(popularity_alexa) as popularity_alexa values(resolutions) as resolutions values(virustotal_score) as virustotal_score values(whois_creation_date) as whois_creation_date values(registrar) as registrar values(lastLoadedAt) as lastLoadedAt first(registrant) as registrant values(registryUpdatedAt) as registryUpdatedAt values(registered) as registered values(virustotal_scan_date) as virustotal_scan_date values(in_virustotal) as in_virustotal values(in_threatconnect) as in_threatconnect values(in_riskiq) as in_riskiq values(daysSinceRegistered) as daysSinceRegistered values(newlyRegistered) as newlyRegistered by domain_context
| where isnull(lookup_subsearch_filter)
| fields - lookup_subsearch_filter domain_context
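One detail worth calling out: the resolutions field can be enormous, so it gets flattened with nomv and capped at 1,000 characters before display. The same guard in Python (a sketch only — the SPL keeps the original multivalue field untouched when it is under the limit, whereas this always returns the flattened string):

```python
def truncate_resolutions(resolutions, limit=1000):
    """Flatten a multivalue list (like nomv) and cap its length (like substr)."""
    flat = " ".join(resolutions)
    if len(flat) > limit:
        return flat[:limit] + " - <TRUNCATED>"
    return flat
```

Without a cap like this, a long-lived domain with thousands of historical resolutions can make alert output unreadable (or blow past notable-event field size limits).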
To add this as a macro, we simply go into Settings (in Splunk) and choose “Advanced Search”, then “Search Macros”, then “New Search Macro”.
This gives us a screen like this:
Fill out the name of the macro — threat_intel_context(1). The (1) is the number of arguments the macro takes. Definition is the SPL above. Arguments is the variable ($domain$) that will be passed into the macro. Because these searches take actual fields rather than strings, we can pass along a field name.
Save the macro and set the permissions necessary per your environment and policies. Now we can simply call this macro in our detections!
I won’t get too heavy into the SPL, but in short: we do a lot of eval work inside the appendpipe to shape the enrichment data, use eventstats to copy that context onto the original events (keyed on the field we create before the appendpipe), and then drop the temporary rows the appendpipe generated — leaving only the original results, now enriched.
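That enrich-then-filter pattern is worth internalizing on its own. A rough Python analogue of the appendpipe + eventstats + where isnull(...) sequence (field names match the SPL; the merge is simplified to one context row per domain):

```python
def enrich_events(events, context_rows, key="domain_context"):
    """Copy context fields onto events sharing the same key (like eventstats ... by),
    then drop the temporary marker (like where isnull(lookup_subsearch_filter))."""
    ctx_by_key = {row[key]: row for row in context_rows}
    enriched = []
    for event in events:
        context = dict(ctx_by_key.get(event[key], {}))
        context.pop("lookup_subsearch_filter", None)  # marker never reaches results
        enriched.append({**context, **event})  # event fields win on collision
    return enriched
```

The key point in both versions is that the enrichment rows exist only as a vehicle: their values are spread onto the real events, and the rows themselves never reach the analyst.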
Happy Hunting :)