Although the recent MidnightEclipse CVE-2024-3400 PAN-OS exploitation impacted a limited number of companies, the incident serves as a reminder: observing network traffic to identify behavior and anomalies that indicate malicious activity and emerging breaches as early as possible has become a necessity for every security operations team.
In this particular incident, the actor leveraged a zero-day exploitation of a critical command injection vulnerability in the GlobalProtect feature of Palo Alto Networks PAN-OS software, enabling an unauthenticated attacker to execute arbitrary code with root privileges on the compromised firewall.
This sophisticated attack used compromised devices to gain access to internal networks and exfiltrate data, starting with a cron job that pulled commands from an external server. The actor then deployed a Python-based backdoor that enabled commands to be executed remotely on the device via network requests, facilitating further malicious activity such as lateral movement, data theft, and credential harvesting.
Cybersecurity firm Volexity first identified and reported the incident and worked with several customers, as well as Palo Alto Networks and its Unit 42 team, to respond to and remediate the intrusion. According to the Volexity Threat Research incident report and the Unit 42 Threat Brief, the actor leveraged a command injection vulnerability, now identified as CVE-2024-3400, to execute arbitrary code with root access on compromised firewalls. Highlights from the reports include:
- On April 10, 2024, Volexity identified the zero-day exploitation at one of its network security monitoring (NSM) customers and on April 11, 2024, observed an identical exploitation at another one of its customers by the same threat actor
- Based upon forensic analysis by Volexity, the earliest evidence of an attempted exploitation was observed on March 26, 2024
- The threat actor tested the vulnerability by placing zero-byte files on firewall devices to validate exploitability, and three days later was observed exploiting firewall devices to successfully deploy malicious payloads
- After successfully exploiting devices, the actor downloaded additional tooling from remote servers they controlled to facilitate access to victims’ internal networks. The actor then quickly moved laterally through the victims’ networks, extracting sensitive credentials and other files that would enable access during, and potentially after, the intrusion
- The tradecraft and speed employed by the attacker suggest a highly capable threat actor with a clear playbook of what to access to further their objectives
The full details and timeline of this particular attack sequence are available in the Volexity report and the Unit 42 Threat Brief.
What Have We Learned?
While this particular incident was limited to a few companies targeted with a specific exploit, it is worth highlighting because it illustrates the importance of observing network traffic in real time to identify similar attacks and uncover Indicators of Compromise (IoCs) sooner. Identifying an attack as it unfolds significantly reduces dwell time, as it did in this incident.
Examining other post-breach scenarios regularly highlights the value of Network Observability and of leveraging network intelligence to record all activity in detail as it happens. Network Flow Metadata (NetFlow) provides the foundational network intelligence and evidence that upstream analytics solutions use to detect abnormal activity and IoCs that can indicate a breach, and to trace the lateral movement of threat actors across the network.
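To make that concrete, here is a minimal sketch of what an enriched, unsampled flow record might look like to an upstream analytics platform. The field names are illustrative assumptions, not a specific NetFlow/IPFIX or vendor schema; the later sketches in this post reuse this hypothetical record.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """One enriched, unsampled flow record (illustrative fields only)."""
    start_time: float          # epoch seconds when the flow began
    end_time: float            # epoch seconds when the flow ended
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str              # transport, e.g. "tcp" or "udp"
    application: str           # application detected by packet inspection, e.g. "tls", "ssh", "dns"
    bytes_sent: int
    bytes_received: int
    tls_fingerprint: str = ""  # client handshake fingerprint hash, if the flow is encrypted

# An upstream SIEM/NDR/XDR consumes a stream of records like this one:
example = FlowRecord(
    start_time=1712745600.0, end_time=1712745605.2,
    src_ip="10.0.5.12", dst_ip="203.0.113.7",
    src_port=51544, dst_port=443, protocol="tcp",
    application="tls", bytes_sent=3_200, bytes_received=18_450,
    tls_fingerprint="0123456789abcdef0123456789abcdef",  # placeholder value
)
```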
Given the aggressive and progressive nature of cyberthreats, and this recent zero-day exploitation, bringing the combined power of network-derived intelligence and advanced analytics capabilities into security operations is a critical strategy for identifying and mitigating intrusions.
Observability is Your Best Defense
As we examine this recent scenario, it becomes acutely clear that visibility into all network activity is paramount. As the threat landscape continually changes, it is important to adopt a multi-layered cybersecurity approach that transcends traditional perimeter defenses to maintain robust intelligence.
Here are some critical best practices to implement to help identify exploitations and evolving attacks, such as breaches, ransomware and data exfiltration, botnets, and espionage intrusions, as they emerge.
- Establish Robust Network Visibility – Gain visibility into all traffic crossing important network points and feed that intelligence into security monitoring platforms such as SIEM, NDR, XDR, IPS/IDS, or other monitoring tools.
- Establish Visibility into Encrypted Traffic – Threat actors increasingly use encryption to evade detection. According to the WatchGuard Q4 2023 Internet Security Report, malware hiding behind encryption increased to 55% in Q4 2023, and other security threat reports cite similar statistics showing how pervasive threats hiding in encrypted traffic have become. The implication is that without visibility into encrypted traffic you will miss more than half of the attacks crossing the network. Decryption is certainly one approach to this challenge, but it is slow and expensive, and more and more traffic cannot be decrypted without the keys, leaving you blind to that traffic. Consequently, the best approach to uncovering threats hiding in encrypted traffic is to extract fingerprints and other important indicators from encrypted traffic without the need for decryption (see the fingerprint-matching sketch after this list).
- Enable Historical Forensic Analysis – Leveraging an upstream SIEM, NDR or XDR platform that supports long-term historical analysis is essential. Forensic analysis identifies anomalous behaviors that indicate the scope and scale of the attack and aids in developing a better future defense strategy. Even if you do not catch the attack early, forensic analysis, as demonstrated in the Operation MidnightEclipse incident, can retroactively identify the presence of IoCs and confirm that you have been attacked. It also enables security analysts to understand attack vectors and the behaviors and actions of an anomalous actor, helping devise a strategy to prevent and remediate future attacks more quickly and reduce dwell times.
- Establish East-West Traffic Visibility – It is vital to detect attacks as they move across a network using different IP addresses, credentials, and machines in search of sensitive data or key assets. Visibility into lateral movement ties together intelligence and data from multiple sources, revealing east-west connectivity patterns, reconnaissance anomalies, and suspicious usage of ports, protocols, applications, file sharing, and login failures at any phase of the attack. Observing east-west traffic enables the security analyst to identify the attack vectors threat actors use to move laterally across the network, such as attempts to exploit network service vulnerabilities, deploy malware, use stolen credentials, or compromise other systems, to better defend against and prevent future intrusions (see the fan-out sketch after this list).
- Visibility with Segmentation is Essential – Even if network segmentation has been implemented, east-west visibility is important. Network segmentation can help prevent pre-exploitation lateral movement, but as the Operation MidnightEclipse incident shows, that is not always the case. Post-exploitation movement is even more challenging to control because the threat actor’s activity may have become ‘trusted’, so segmentation may not completely lock down this vector. Also consider that network segmentation creates visibility blind spots that can conceal a typical attacker’s Tactics, Techniques, and Procedures (TTPs), such as SQL injection and exploratory port scans. Implementing segmentation without east-west traffic visibility may therefore degrade your security posture rather than enhance it.
- Understand and Observe for TTPs – By understanding how threat actors operate, the security team can better detect and mitigate attacks as they evolve, and understanding the various combinations of TTPs is important to improving your security posture. Many cybersecurity research bodies have published valuable insights into common attack tactics that can help your organization devise a response strategy based on proven best practices, combining automated action with human verification. One example is the MITRE ATT&CK® Matrix, which helps cybersecurity teams identify and address commonly encountered TTPs. This widely embraced matrix helps security teams continuously monitor network activity to detect abnormal behavior associated with a known TTP and stop it before it turns into a full attack. The MITRE ATT&CK Matrix is useful both for detecting active intrusions and for identifying threat actor activity that is still in the planning or reconnaissance stages of an attack. Early detection of abnormal activity and reduced dwell time are critical, as many high-profile attacks have evaded traditional security protections, moved laterally, and played out over months of dwell time.
- Identify Non-Standard Traffic Patterns – Tunneling one protocol inside another is a common attack technique that traditional network logging cannot identify. A syslog entry will show that “A” connected to “B” over a web connection, but application-level visibility into the actual network packets will reveal that “A” actually established an SSH or QUIC connection over port 443 that merely looks like SSL/TLS web traffic, along with further detail about the connection and exchange (illustrated in the port/protocol mismatch sketch after this list).
- Enabling Historical Forensic Analysis is Essential – Always leverage historical traffic for clues. In this incident the threat actor had been exploiting the zero-day PAN-OS vulnerability since at least March 26, 2024, nearly three weeks before it was first discovered. By all accounts the discovery of this particular breach was very fast compared to industry averages; dwell times of 200 days or more have been reported in many security breach reports. It is important that your monitoring solution can store historical data over long periods of time so you can understand when and how the breach unfolded and pinpoint the traffic pattern changes tied to the attack.
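As a simple illustration of the fingerprinting approach described above, the sketch below checks the client handshake fingerprints carried in flow metadata against a watchlist. It assumes the hypothetical FlowRecord fields introduced earlier; the watchlist values are placeholders, and real deployments would use a curated fingerprint feed.

```python
# Minimal sketch: flag flows whose TLS client fingerprint is on a watchlist.
# The hashes below are placeholders, not real indicators.
SUSPICIOUS_TLS_FINGERPRINTS = {
    "00000000000000000000000000000000",  # placeholder for a known-bad client fingerprint
    "11111111111111111111111111111111",
}

def flag_suspicious_tls(flows):
    """Yield flows whose extracted handshake fingerprint matches the watchlist."""
    for flow in flows:
        if flow.tls_fingerprint and flow.tls_fingerprint in SUSPICIOUS_TLS_FINGERPRINTS:
            yield flow
```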
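For east-west visibility, one basic lateral-movement signal is "fan-out": an internal host suddenly talking to far more internal peers than it normally does. The sketch below is a minimal illustration using the same hypothetical flow records; the internal ranges and threshold are assumptions you would tune to your environment and baseline.

```python
import ipaddress
from collections import defaultdict

# Replace with your organization's internal address ranges.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def detect_internal_fanout(flows, threshold=25):
    """Flag internal hosts that contact an unusually large number of distinct internal peers."""
    peers = defaultdict(set)
    for flow in flows:
        if is_internal(flow.src_ip) and is_internal(flow.dst_ip):
            peers[flow.src_ip].add(flow.dst_ip)
    return {host: len(dsts) for host, dsts in peers.items() if len(dsts) >= threshold}
```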
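And for non-standard traffic patterns, application-aware flow metadata makes the port/protocol mismatch check straightforward: compare the application identified by packet inspection against what the destination port normally carries. The mapping below is a deliberately small assumption for illustration.

```python
# Minimal sketch: flag flows whose detected application does not match the port in use.
EXPECTED_APPS_BY_PORT = {
    443: {"tls", "https", "quic"},  # QUIC over 443 may be legitimate; tune to policy
    53:  {"dns"},
    22:  {"ssh"},
}

def flag_port_protocol_mismatch(flows):
    """Yield flows where the observed application is unexpected for the destination port."""
    for flow in flows:
        expected = EXPECTED_APPS_BY_PORT.get(flow.dst_port)
        if expected and flow.application not in expected:
            yield flow  # e.g. SSH tunneled over tcp/443
```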
Why Flow Metadata is Your Best Source of Network Intelligence
The post-breach analysis of countless attacks reveals the value and importance of network observability in identifying and thwarting infiltrations. Monitoring network traffic remains one of the most valuable assets in the security analyst’s tool kit.
Observing network traffic is the gold standard, and collecting and analyzing network packets is highly valuable. However, with today’s traffic volumes, collecting, analyzing, and storing petabytes of network packets has become impractical. To keep up, SecOps and NetOps teams are leveraging unsampled Flow Metadata (NetFlow) to intelligently extract relevant network traffic metrics, optimizing collection, streamlining analysis, and extending historical retention capacity.
Unsampled metadata is an abstraction of network traffic based on full packet analysis of wire data. It identifies users, protocols, and services, and provides application context, yielding a complete and accurate summary of all network and user activity. Compared to full packet capture, Flow Metadata is highly compact and more efficient, generally representing less than 0.05% of monitored network traffic volume. The context-rich metadata retains up to 95% of the fidelity of captured packets while requiring dramatically less upstream processing and physical storage.
Flow Metadata-based network intelligence continually monitors and records all activity across the network. Maintaining this history allows security analysts to delve deeper into actions and activities, detect anomalous behaviors that indicate an intrusion, and pinpoint an emerging breach by visualizing the activities, interactions, and lateral movements of threat actors across the network, spotting anomalous traffic patterns and exposing command-and-control communications.
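One example of exposing command-and-control activity from recorded flow metadata: a backdoor that polls an external server on a schedule, as the cron-driven implant in this incident did, tends to produce outbound flows at near-constant intervals. The sketch below flags that kind of low-jitter beaconing; it reuses the hypothetical flow records from earlier and is a simplification of production beacon-detection analytics.

```python
import statistics
from collections import defaultdict

def detect_beaconing(flows, min_events=6, max_jitter_seconds=5.0):
    """
    Flag (src_ip, dst_ip) pairs whose connections recur at near-constant intervals,
    a common signature of scheduled C2 polling. Thresholds here are illustrative.
    """
    start_times = defaultdict(list)
    for flow in flows:
        start_times[(flow.src_ip, flow.dst_ip)].append(flow.start_time)

    beacons = []
    for pair, starts in start_times.items():
        if len(starts) < min_events:
            continue
        starts.sort()
        gaps = [later - earlier for earlier, later in zip(starts, starts[1:])]
        if statistics.pstdev(gaps) <= max_jitter_seconds:
            beacons.append((pair, statistics.mean(gaps)))  # pair plus its average polling interval
    return beacons
```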
Here are some examples of the value of Flow-Based Intelligence:
- Establish comprehensive visibility across the network at key traffic aggregation points to observe all communications crossing common functional boundaries to identify and pinpoint intrusive activities.
- Use unsampled 1:1 flow monitoring and avoid traditional NetFlow ‘sampling’ to ensure that you see all interactions and are able to capture and retain the full forensic value of traffic for historical analysis.
- Ensure your upstream collection and analytics capabilities are capable of ingesting and analyzing large volumes of unsampled Flow Metadata and can retain it for significant periods to support important historical forensic analysis activities.
- Regularly review your collected network intelligence to establish a ‘normal behavioral baseline’ that improves anomaly detection and keeps you ahead of emerging threats (see the baseline sketch after this list).
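As a minimal sketch of what a behavioral baseline can look like in practice: compute each host's historical norm from stored flow metadata, then score new flows against it. The statistic (outbound bytes per flow) and the z-score threshold are illustrative assumptions, not a prescribed model.

```python
import statistics
from collections import defaultdict

def baseline_outbound_bytes(historical_flows):
    """Per-host mean and standard deviation of outbound bytes per flow."""
    by_host = defaultdict(list)
    for flow in historical_flows:
        by_host[flow.src_ip].append(flow.bytes_sent)
    return {
        host: (statistics.mean(values), statistics.pstdev(values))
        for host, values in by_host.items() if len(values) >= 2
    }

def deviates_from_baseline(flow, baseline, z_threshold=4.0):
    """True if a flow's outbound volume is far above the host's historical norm."""
    if flow.src_ip not in baseline:
        return False  # no history yet; handle new hosts separately
    mean, stdev = baseline[flow.src_ip]
    return stdev > 0 and (flow.bytes_sent - mean) / stdev > z_threshold
```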
Flow-based Metadata can dramatically increase historical retention times versus packet captures given its highly efficient footprint. Flow Metadata typically represents a small fraction of the actual network traffic volume (well under 1%, as noted above) while retaining up to 95% or more of its fidelity. This means you can store historical traffic for longer while dramatically reducing your storage capacity requirements. Given today’s network speeds, most packet capture systems can only retain a few hours of traffic. In the highlighted attack scenario, nearly three weeks passed between infiltration and discovery; storing any meaningful volume of network packets for that period would have been extremely cost prohibitive.
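To make the retention trade-off concrete, here is a back-of-the-envelope calculation comparing raw packet storage with metadata storage over a three-week window. The link rate, utilization, and the ~0.05% metadata ratio cited earlier are illustrative inputs, not measurements from this incident or any specific deployment.

```python
# Back-of-the-envelope retention math (illustrative inputs only).
LINK_RATE_GBPS = 100      # monitored aggregate throughput
UTILIZATION = 0.5         # average link utilization
DAYS = 21                 # roughly the dwell time in this incident
METADATA_RATIO = 0.0005   # ~0.05% of monitored traffic volume, per the figure above

bytes_on_wire = LINK_RATE_GBPS * 1e9 / 8 * UTILIZATION * DAYS * 86_400
pcap_petabytes = bytes_on_wire / 1e15
metadata_terabytes = bytes_on_wire * METADATA_RATIO / 1e12

print(f"Full packet capture for {DAYS} days: ~{pcap_petabytes:.1f} PB")
print(f"Unsampled flow metadata for {DAYS} days: ~{metadata_terabytes:.1f} TB")
```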
How NetQuest Can Help
Network traffic intelligence plays a crucial role in modern-day security by providing valuable insights into all activities crossing the network. Real-time network intelligence provides essential insights into network behavior enabling proactive monitoring, threat detection, and incident response capabilities to detect emerging security threats and cyberattacks and identify high-risk behavior from hostile traffic that has bypassed existing controls.
NetQuest brings significant value to Network Observability by enabling cost-effective deep visibility across the entire network to mine and extract network intelligence in real-time. The NetQuest Streaming Network Sensor automates and scales access to network traffic by inspecting all network packets and translating the packets into high-value optimized unsampled metadata. This network intelligence exposes patterns, protocols, and volumes of data flowing across the network to understand how users, devices and systems communicate with each other.
The Streaming Network Sensor delivers multi-terabit, wire-speed advanced packet processing and analysis services specifically tuned for security monitoring environments that rely on accurate and reliable network packet intelligence at scale. Users can choose from thousands of data attributes to extract relevant metrics from network traffic to reveal contextual connection and user activity intelligence for every network flow and transaction with packet-level accuracy. A comprehensive range of metadata can be extracted from the monitored network traffic (a simple downstream aggregation sketch follows this list), such as:
- Network statistics and traffic patterns, such as traffic volume, flow rates, packet sizes, and response times
- Specific protocols and services such as HTTP, FTP, DNS, etc., and their associated behaviors
- Flow analysis of network traffic between hosts or networks
- Encrypted traffic handshakes and headers
- Application-level metadata
- Subscriber/user information for fixed and mobile devices
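As a small example of how a downstream consumer might use attributes like these once they land in a SIEM or data lake, the sketch below aggregates basic traffic statistics per detected application from the hypothetical flow records used throughout this post. It is not NetQuest's export format, just an illustration of the kind of summarization this metadata enables.

```python
from collections import Counter

def summarize_traffic(flows):
    """Aggregate simple traffic statistics (flow counts and byte volume) per application."""
    flow_counts = Counter()
    byte_totals = Counter()
    for flow in flows:
        flow_counts[flow.application] += 1
        byte_totals[flow.application] += flow.bytes_sent + flow.bytes_received
    return {
        app: {"flows": flow_counts[app], "bytes": byte_totals[app]}
        for app in flow_counts
    }
```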
The Streaming Network Sensor seamlessly integrates into virtually any SIEM, NDR, XDR, IDS, IPS and other security platforms to enable ultra-scale, cost-effective network observability with relevant context and consistent metrics while optimizing instrumentation costs.
We invite you to learn more about ultra-scale traffic monitoring to achieve unprecedented threat intelligence at scale, better defend against evolving threats, and support more comprehensive investigative activities.