Security With Software Crashes

Software crashes are a very common problem, but their impact from a cybersecurity perspective is often underestimated. From a functional perspective, the problem is immediately clear: unstable behavior is bad for product teams, as it damages the reliability of the product and leads to dissatisfaction for the end user. The cybersecurity concerns are typically less apparent. While it is clear that software crashes can lead to denial-of-service conditions, they often raise further security concerns as well.

Creating Unexpected Behavior

Software crashes fall under the larger umbrella of unexpected behavior. Unexpected behavior, as the name implies, is anything that falls outside of the normal, expected operation of the product. This can include error forcing, full crashes, and anything in between where the system starts producing strange output. Depending on how the system is configured, these unexpected states can end up being advantageous for malicious hackers.

Error forcing is a very common information-gathering method for hackers. When software errors are not properly handled, they can contain sensitive information that attackers can leverage to craft more targeted attacks, including details about the internal tech stack, API keys, and even information about the application's source code. This information can range from mildly helpful to enabling a complete compromise of the application; exposed API keys in particular can be extremely dangerous.
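As a minimal sketch of how to avoid this kind of leak, the hypothetical request handler below (the handler, parameter, and response format are all invented for illustration) logs the full traceback server-side while returning only a generic message to the caller:

```python
import logging
import traceback

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("app")

def handle_request(raw_value: str) -> dict:
    """Parse a numeric parameter, returning a sanitized error on failure."""
    try:
        return {"status": 200, "result": int(raw_value) * 2}
    except ValueError:
        # Full detail stays in the server-side log for developers...
        logger.error("parse failure:\n%s", traceback.format_exc())
        # ...while the client sees no stack trace, paths, or internals.
        return {"status": 400, "error": "Invalid request."}
```

A forced error (`handle_request("oops")`) now yields only `{"status": 400, "error": "Invalid request."}`, giving an attacker nothing about the tech stack or source code to work with.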

Aside from the apparent problems on the front end, crashes can also create unseen problems on the back end. While attackers may not have visibility into what these problems are, it is still dangerous to throw the back end into an unknown state. If the program cannot quickly and safely recover, a simple problem can quickly spiral out of control.

The risk behind denial-of-service conditions is clear. If users cannot access the functionality of a product, they cannot perform whatever the product's intended purpose is. In some cases this is merely annoying; in certain critical systems it can be life-threatening. Denial-of-service vulnerabilities in medical systems can be debilitating, and device manufacturers must implement proper controls to prevent them.

Software crashes can cause many other issues besides denial-of-service vulnerabilities. A crash may be merely a symptom of a much bigger problem, such as a buffer overflow. One of the first steps in identifying a buffer overflow is reliably reproducing a crash. Under the right conditions, the software crash could indicate a much larger problem, allowing an attacker to completely control the target system.

Another common issue is forcing a crash to break out of a sandboxed environment, commonly an application running in kiosk mode. If the application crashes, it may be possible to escape kiosk mode and access the underlying system, at which point attackers can do whatever they please on the compromised machine. This can lead to information being quickly exfiltrated and backdoors being set up for future access.

Preventing Unexpected Behavior

To prevent these vulnerabilities from arising, it is important to perform rigorous testing against any device and verify that unknown conditions cannot be forced into the system. While this is possible early in development by carefully implementing checks in the code, it is often easier to identify problems once a finished application is ready; without seeing how the product runs in real time, identifying vulnerabilities can be hard.
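Those in-code checks might look like the following minimal Python sketch, which validates a length-prefixed message frame before trusting its attacker-controllable length field. The frame format and the `MAX_PAYLOAD` limit are illustrative assumptions, not a real device protocol:

```python
MAX_PAYLOAD = 1024  # assumed limit for this hypothetical device protocol

def read_message(frame: bytes) -> bytes:
    """Validate a length-prefixed frame before trusting its contents."""
    if len(frame) < 2:
        raise ValueError("frame too short for length header")
    declared = int.from_bytes(frame[:2], "big")
    if declared > MAX_PAYLOAD:
        # Reject oversized claims instead of allocating/reading blindly.
        raise ValueError("declared length exceeds limit")
    payload = frame[2:]
    if len(payload) != declared:
        # A mismatch here is exactly the kind of malformed input
        # that drives unchecked parsers into unknown states.
        raise ValueError("declared length does not match payload")
    return payload
```

Every rejection path raises a well-defined error rather than letting malformed input push the parser into an unexpected state.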

Once a finished product is available, fuzz testing can be a great way to find areas of weakness in a device. This type of testing should be performed against every critical functionality and interface the system exposes. Fuzz testing throws varied chunks of malformed or malicious input at the device and observes how it handles them. If a crash is identified and reliably reproduced, the input causing the crash can be analyzed further to understand exactly what is happening.
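A fuzzing loop can be sketched in a few lines of Python. The toy `parse_header` function and its frame format are invented for illustration, and the fixed random seed makes any crash-inducing inputs reproducible, mirroring the reproduction step described above:

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser under test: expects 4-byte magic then a 1-byte version."""
    if data[:4] != b"MDEV":
        raise ValueError("bad magic")
    return data[4]  # raises IndexError on a frame with no version byte

def fuzz(iterations: int = 10_000, seed: int = 0) -> list:
    """Throw random byte strings at the parser; collect inputs that crash it."""
    rng = random.Random(seed)  # fixed seed so every run is reproducible
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.getrandbits(8) for _ in range(rng.randint(0, 8)))
        # Half the time, prepend the valid magic so we get past the first check.
        if rng.random() < 0.5:
            data = b"MDEV" + data
        try:
            parse_header(data)
        except ValueError:
            pass  # handled rejection: expected behavior, not a crash
        except Exception:
            crashes.append(data)  # unhandled exception: a "crash" to triage
    return crashes
```

Any input collected in `crashes` can then be replayed against the parser on its own to analyze exactly what is happening.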

Outside of fuzz testing, Dynamic Application Security Testing (DAST) is another good way to look for vulnerabilities. DAST looks for far more than just unexpected behavior, though it will not always exercise input with the same depth as fuzz testing. DAST does a great job of identifying a wide range of vulnerabilities in completed systems. Both fuzzing and DAST are part of comprehensive penetration testing, which is essential for testing a medical device.
