
Five Things Every Web App Developer Wished Penetration Testers Knew

Written By: Brian Shura, July 25, 2016

Ideally an application penetration tester should come from some type of software development background.  Having worn the developer hat in the past, the tester is more likely to be able to think like a developer, understand the types of security mistakes developers tend to make, and have a better sense of context when assessing the severity and validity of security vulnerabilities.  A penetration tester who is a developer can also better empathize with developers when presenting the test results and recommendations.

 

     1. Usability is important too.

 

Penetration testers often include advice that doesn't make sense from a usability standpoint. A classic example is recommending an account lockout after too many failed login attempts. Lockouts are a good fit for some applications, especially financial applications or others that require bank-level security, but for many applications the cost of locking legitimate users out outweighs the security benefit. A penetration tester who is sensitive to usability will recommend alternative ways to mitigate brute-force attacks, such as strong password complexity rules and CAPTCHAs presented after a certain number of failed login attempts.
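To make that alternative concrete, here is a minimal sketch (in TypeScript) of a login flow that starts requiring a CAPTCHA after a handful of failed attempts instead of locking the account. The threshold of five attempts is an arbitrary assumption, and checkCredentials and verifyCaptcha are hypothetical placeholders for the application's own credential check and CAPTCHA provider.

```typescript
const FAILED_ATTEMPT_THRESHOLD = 5; // arbitrary threshold; tune per application
const failedAttempts = new Map<string, number>(); // per-account counter; use a shared store (e.g. Redis) in production

// Hypothetical placeholders for the application's own logic.
async function checkCredentials(username: string, password: string): Promise<boolean> {
  return false; // stand-in for the real credential check
}

async function verifyCaptcha(captchaResponse: string): Promise<boolean> {
  return false; // stand-in for the real CAPTCHA verification call
}

async function login(
  username: string,
  password: string,
  captchaResponse?: string
): Promise<"ok" | "captcha-required" | "failed"> {
  const attempts = failedAttempts.get(username) ?? 0;

  // Past the threshold, demand a solved CAPTCHA, but never lock the account.
  if (attempts >= FAILED_ATTEMPT_THRESHOLD) {
    if (!captchaResponse || !(await verifyCaptcha(captchaResponse))) {
      return "captcha-required";
    }
  }

  if (await checkCredentials(username, password)) {
    failedAttempts.delete(username); // reset the counter on a successful login
    return "ok";
  }

  failedAttempts.set(username, attempts + 1);
  return "failed";
}
```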

The same security vs. usability trade-off shows up in other areas as well, such as the messaging an application returns for a failed login or a password reset attempt. The most secure options are not always the most user-friendly, and sometimes a balance needs to be struck between the two.
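As a concrete example of that trade-off, the security-leaning choice for a password reset endpoint is to return the same message whether or not the account exists, which prevents account enumeration at some cost to user-friendliness. The sketch below assumes hypothetical findUserByEmail and sendResetEmail helpers.

```typescript
// A minimal sketch of the security-leaning choice for password reset messaging:
// respond identically whether or not the address is registered, so the endpoint
// cannot be used to enumerate valid accounts.

interface User { email: string; }

// Hypothetical placeholders for the application's own logic.
async function findUserByEmail(email: string): Promise<User | null> {
  return null; // stand-in for the real lookup
}

async function sendResetEmail(user: User): Promise<void> {
  // stand-in for the real email delivery
}

async function requestPasswordReset(email: string): Promise<string> {
  const user = await findUserByEmail(email);
  if (user) {
    await sendResetEmail(user);
  }
  // Same message in both cases: less informative for the user, but it does not
  // confirm whether the address exists in the system.
  return "If an account exists for that address, a password reset link has been sent.";
}
```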

 

     2. Applications are often built and maintained under tight time and cost constraints.

 

Developers are often working to meet tight deadlines, with management primarily focused on getting new functionality released quickly. While this isn’t an excuse for writing insecure code or ignoring known security vulnerabilities, a penetration tester who understands the development environment has a better sense of context for working with the development team. Often it makes sense to prioritize security fixes based on risk, to roll out fixes in phases, and in some cases for a person with the right level of management responsibility to accept the risk posed by certain issues.

For example, we’ve often found weak passwords hard-coded into an application.  Switching to strong passwords is something that developers can do quickly in order to help minimize risk.  Refactoring the code to remove the passwords from the source code and pull them from an environment-specific configuration file can be done in a later remediation phase.
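A minimal sketch of that later remediation phase, assuming a Node-style application where configuration comes from environment variables; DB_PASSWORD is an assumed variable name and the connection call is only a placeholder.

```typescript
// Pull the database credential from the environment (or an environment-specific
// configuration file) instead of hard-coding it in source.
// DB_PASSWORD is an assumed variable name.

const dbPassword = process.env.DB_PASSWORD;

if (!dbPassword) {
  // Fail fast rather than silently falling back to a weak default credential.
  throw new Error("DB_PASSWORD is not set");
}

// Placeholder for the real connection call, e.g.:
// connectToDatabase({ host: dbHost, user: dbUser, password: dbPassword });
```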

 

     3. No, I am not going to redesign my whole application to fix one Low severity issue.

 

Certain development frameworks and platforms have inherent security weaknesses that are repeatedly pointed out during penetration tests. Often these are low-severity issues that are not feasible to exploit, or that would take an extremely long time to exploit. For example, maybe the framework includes built-in Cross-Site Request Forgery protection, but the implementation is not as strong as it could be: the token doesn't have quite as many bits of entropy as OWASP would recommend, or the token is sometimes passed in the URL to protect GET requests that perform sensitive, state-changing transactions. In that case, it doesn't necessarily make sense to re-architect the application onto a different language or framework. Such a drastic change could accidentally introduce other, more serious issues. Instead, it is often more practical to implement other mitigating controls, such as requiring the user to re-enter their password for the most sensitive transactions.
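On the token-entropy point specifically, the fix rarely requires switching frameworks. A token generated from a cryptographic random source, compared in constant time, and carried in a hidden form field or request header rather than the URL usually closes the gap. The sketch below uses Node's built-in crypto module and is illustrative only, not a description of any particular framework's implementation.

```typescript
import { randomBytes, timingSafeEqual } from "crypto";

// A CSRF token with ample entropy, compared in constant time. The token should
// travel in a hidden form field or request header, never in the URL, where it
// can leak through logs, browser history, or Referer headers.

function generateCsrfToken(): string {
  return randomBytes(32).toString("hex"); // 256 bits of entropy
}

function isValidCsrfToken(submitted: string, stored: string): boolean {
  const a = Buffer.from(submitted);
  const b = Buffer.from(stored);
  return a.length === b.length && timingSafeEqual(a, b);
}
```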

A common example of this is the lack of the HttpOnly attribute on a cookie, which we typically report as a Low severity finding. The HttpOnly flag prevents client-side JavaScript from accessing the cookie, which provides a small layer of protection against Cross-Site Scripting attacks aimed at stealing the user's session token. However, many modern web applications make heavy use of client-side JavaScript that needs to read the application's session cookie in order to function properly. In cases like this I would not consider the missing HttpOnly flag to be an actionable, fixable security issue, and would instead focus on ensuring that the application has other robust controls in place, such as safe, context-specific output encoding and whitelist-style input validation.
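When HttpOnly is not practical, those other controls carry the weight. The snippet below is a bare-bones illustration of HTML-context output encoding; in a real application this is normally handled by the templating engine or a vetted encoding library rather than hand-rolled code.

```typescript
// A minimal illustration of context-specific output encoding for HTML body
// content. Real applications should rely on their templating engine or a
// vetted encoding library; this just shows what the control does.

function encodeForHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#x27;");
}

// Example: reflected user input is encoded before it is placed in the page.
const userSuppliedName = '<script>alert(document.cookie)</script>';
const greeting = `<p>Hello, ${encodeForHtml(userSuppliedName)}</p>`;
console.log(greeting); // <p>Hello, &lt;script&gt;alert(document.cookie)&lt;/script&gt;</p>
```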

 

      4. PUT and DELETE are not an automatic fail.

 

Traditionally, methods such as PUT and DELETE have been flagged as insecure by vulnerability scanning tools because they could often be used to do dangerous things, such as creating or deleting a web page. However, with the rise of REST-style design, these methods are routinely and legitimately used by web applications and web service APIs. REST APIs support CRUD (Create, Read, Update, Delete) operations, in which PUT is commonly used for creating or updating resources and DELETE for removing them. Support for these methods, by itself, should not be flagged as a security vulnerability. Instead, a deeper inspection of the application is warranted to determine whether sufficient authentication and authorization are in place for these methods before writing this up as a finding.
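As an illustration of what that deeper inspection should confirm, here is a minimal sketch assuming an Express-style API; requireAuth, currentUser, loadDocument, canDelete, and deleteDocument are hypothetical placeholders for the application's own authentication, lookup, and access-control logic.

```typescript
import express, { Request, RequestHandler } from "express";

// Hypothetical placeholders for the application's own authentication,
// lookup, and access-control logic.
declare const requireAuth: RequestHandler;
declare function currentUser(req: Request): { id: string };
declare function loadDocument(id: string): Promise<{ ownerId: string } | null>;
declare function canDelete(user: { id: string }, doc: { ownerId: string }): boolean;
declare function deleteDocument(doc: { ownerId: string }): Promise<void>;

const app = express();

// DELETE is acceptable to support as long as the handler authenticates the
// caller and authorizes them for the specific resource being removed.
app.delete("/api/documents/:id", requireAuth, async (req, res) => {
  const document = await loadDocument(req.params.id);
  if (!document) {
    res.sendStatus(404);
    return;
  }
  if (!canDelete(currentUser(req), document)) {
    res.sendStatus(403); // authenticated, but not authorized for this object
    return;
  }
  await deleteDocument(document);
  res.sendStatus(204); // deleted, no content to return
});
```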

 

     5. I would like to understand real impact.

 

Rating a vulnerability High because the scanner said it is High severity is not good enough.  It’s important to explain the real impact posed by the finding.  Is the vulnerability readily exploitable or would it require a large amount of processing power for a long period of time to perform an exploit?  What type of sensitive information is exposed by the vulnerability?  Are there other known vulnerabilities or mitigating controls that affect the risk for this vulnerability?  These are all factors that a penetration tester should take into consideration when rating the severity of a finding and these factors should be explained in the report.  This helps developers understand how to prioritize remediation efforts so that the biggest risks can be addressed quickly. 

At AppSec Consulting our highest severity rating is Emergency, and we use it very sparingly. When we do find an Emergency issue, we call the customer immediately and send them a write-up with remediation advice. We recommend treating it like an emergency: developers should drop what they're working on and focus on the issue until it is remediated, or mitigated to the point where it is no longer an emergency. We always justify an Emergency rating by showing what can be done with the vulnerability. For example, an external attacker can compromise a system on the perimeter, pivot onto the internal network, and then perform a series of steps to access customer Personally Identifiable Information (PII). Because we try not to overuse the Emergency rating, when we call a developer about a finding they know it's serious. We're eager to help by explaining the risk and the remediation options, and finally by verifying that the issue is no longer exploitable.

Identifying and closing a single critical security hole requires close collaboration between the penetration tester and developer.  When this partnership is effective, the results are very rewarding.

Brian Shura

Brian Shura is the Vice President of AppSec Consulting. Brian's team of security professionals performs application and network penetration tests, mobile application security assessments, source code reviews, and a variety of other interesting security projects. Brian often teaches application security classes and has created world-class security training for developers, QA analysts, and information security analysts. Prior to his role in application security, Brian spent five years working as a developer on large Internet-facing websites. Brian is also the Project Leader for the Web Application Security Consortium's "Web Application Security Scanner Evaluation Criteria" project.
