The Requirement for Application Security in Web-Based Applications

Posted by Janani Kehelwala on April 18, 2016 · 19 mins read

Looking at the threat landscape of modern web applications, it is clear that perimeter security measures such as firewalls are not sufficient. This does not imply that perimeter security is useless and should be abandoned altogether, but that it should not be relied upon exclusively. This report analyzes the nature of the most common attacks directed at web applications and demonstrates the significance of implementing security at the application level. It proposes mitigation techniques that can be applied to overcome the vulnerabilities, and concludes that optimal security is achieved by combining perimeter and application-level security measures.

Awareness in the context of web application security

Any person with a digital device can be expected to have enforced at least one security measure on that device, if only an access control. Since this reflects growing public awareness of the need for security, it can be assumed that web application providers also implement security features to protect their assets. The very nature of web applications holds considerable appeal to attackers and organizations alike. Their ability to reach a wide audience, integrate payments and handle private information provides much-needed services to the public, making them a popular, even crucial, choice while attracting continuous attention from malicious entities. Nearly every website requests at least one of the top ten types of information exposed in breaches (Symantec, 2016, p. 53), making the security of that information a responsibility of the service provider.

Despite this, the number of web-based attacks blocked per day has increased from nearly 0.5 million to 1.1 million, a growth of over 100%. Exploit toolkits targeting clients, such as Angler, Sakura and Nuclear (Symantec, 2016, p. 21), have become a business in themselves. This shows that attackers are numerous and have plenty of motives. Credit card details are sold on the black market for prices as low as $0.50 (Symantec, 2015, p. 17), showing how freely available they are. Malware infection has become so commonplace that over 3,000 websites contained malware in 2015, an increase of over 100% compared to 2014 (Symantec, 2016, p. 9). This is hardly surprising, since if a web server is compromised, every website hosted on that server is likely to be infected. While some controls can be expected to be in place to protect web applications, these figures cast doubt on their effectiveness.

As applications that function over networks, web applications reside at the topmost application layer of the OSI (Open Systems Interconnection) model. Each layer of the OSI model needs to be secured individually. Firewalls and packet-filtering routers were designed to secure the network and transport layers (Maguire & Miller, 2010). Since web access must be available over the internet, ports 80 and 443, where the HTTP and HTTPS protocols operate, are kept open by default. Firewalls therefore offer no protection to the application layer, as access for the general public includes access for attackers as well. While a NIDS/NIPS can be deployed, it may not perform stateful and deep packet inspection at realistic traffic levels; if it did, it could easily be disabled through denial-of-service attacks. End-to-end SSL encryption only encrypts communications between two endpoints; that traffic can carry encrypted malicious code, which is decrypted and executed at the application layer of one endpoint. None of these controls protects custom-built web applications. As security defenses have emerged, the threats have evolved with them: with perimeter defenses widely deployed, attackers focus on the unprotected application layer, whose protection is often underestimated by service providers. A breakdown of the top ten attacks conducted on web applications attests to this situation.

Vulnerabilities that cause the Top 10 attacks on web applications

These top 10 attacks are adapted from the OWASP Top Ten, a triennial awareness publication listing the most common web application attack categories according to OWASP's own risk ranking (OWASP, 2013).

At the top of the list are Injection attacks, which are caused by the execution of strategically placed code passed through user inputs such as HTML form fields.
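
As an illustration, here is a minimal Python sketch of the standard defence, parameterized queries; the `users` table and its columns are hypothetical. Binding the input as a parameter keeps it as data, whereas string concatenation lets it be interpreted as SQL.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable: concatenation lets an input like "' OR '1'='1" rewrite
    # the query itself.
    # conn.execute("SELECT * FROM users WHERE name = '" + username + "'")

    # Safe: the "?" placeholder binds the value as data, never as code.
    cursor = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```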

Broken Authentication and Session Management exploits loopholes in password management (password resets, password changes and other authentication options) and the accidental exposure of confidential information during HTTP communications.
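
One relevant control is to store only salted, slowly-hashed passwords and compare them in constant time. The sketch below uses only Python's standard library; the iteration count and storage layout are illustrative assumptions, not a complete authentication scheme.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Salted PBKDF2 derivation: stolen records cannot be reversed cheaply.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Constant-time comparison avoids leaking matches through timing.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)
```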

Cross-Site Scripting (XSS) is also a product of weak input validation: attacker-supplied script is reflected or stored by the application and later executed in victims' browsers.
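
The corresponding defence is to encode untrusted data before writing it into HTML. A minimal sketch using Python's standard library follows; real applications would normally rely on an auto-escaping template engine.

```python
from html import escape

def render_comment(comment: str) -> str:
    # Encoding <, >, & and quotes turns a payload such as
    # "<script>alert(1)</script>" into inert text.
    return "<p>" + escape(comment, quote=True) + "</p>"

print(render_comment("<script>alert(1)</script>"))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```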

In Insecure Direct Object References, attackers take advantage of parameter transfers (e.g. through GET URLs and hidden input fields) that lack proper access control in order to reach application-defined objects (such as the classes and records behind the application).
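
The missing control is a server-side ownership check: resolving the identifier is not enough, the server must also verify that the requester is entitled to the object. A minimal sketch, with a hypothetical in-memory `orders` store:

```python
# Hypothetical store mapping order IDs to their owners.
orders = {1001: {"owner": "alice", "total": 49.90}}

def get_order(order_id: int, current_user: str) -> dict:
    order = orders.get(order_id)
    if order is None:
        raise KeyError("no such order")
    # Never trust the identifier supplied by the client alone.
    if order["owner"] != current_user:
        raise PermissionError("not your order")
    return order
```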

Security Misconfiguration covers exploits of insecurely configured software and firmware, and is largely self-explanatory.

Sensitive Data Exposure is caused by failing to secure sensitive data in overlooked scenarios (such as log files written with sensitive information under error conditions, which end up readable by unintended users such as IT staff).
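
One precaution is to redact obvious secrets before they reach a log file. The sketch below is illustrative only: the single card-number pattern is an assumption, and real redaction needs a vetted inventory of sensitive fields.

```python
import logging
import re

# Illustrative pattern for 13-16 digit card numbers; not exhaustive.
CARD_RE = re.compile(r"\b\d{13,16}\b")

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Rewrite the message before any handler can persist it.
        record.msg = CARD_RE.sub("[REDACTED]", str(record.msg))
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")
logger.addFilter(RedactingFilter())
logger.warning("payment failed for card 4111111111111111")
# logged as: payment failed for card [REDACTED]
```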

Missing Function Level Access Control is similar to Insecure Direct Object References, except that the unprotected references point to functions rather than objects.
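
Hiding a link in the user interface is not a defence; every sensitive function must re-check authorization on the server. A minimal sketch with a hypothetical role model:

```python
from functools import wraps

def requires_role(role: str):
    """Decorator enforcing a role check on every call, not only at login."""
    def decorator(func):
        @wraps(func)
        def wrapper(user: dict, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise PermissionError(f"{role!r} role required")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("admin")
def delete_account(user: dict, account_id: int) -> None:
    print(f"account {account_id} deleted")

delete_account({"roles": ["admin"]}, 42)   # allowed
# delete_account({"roles": []}, 42)        # raises PermissionError
```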

In Cross-Site Request Forgery (CSRF), logged-in users are tricked into sending forged HTTP requests to a victim website; because those requests carry the users' session credentials, the site executes them as legitimate actions.
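
The classic defence is a synchronizer token: a random value stored in the server-side session and echoed in every state-changing form, which a cross-site attacker cannot read. A minimal sketch, assuming a dict-like session object:

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Random per-session token, embedded in a hidden form field.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def check_csrf_token(session: dict, submitted: str) -> bool:
    # Constant-time comparison; reject missing or mismatched tokens.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(
        expected.encode(), submitted.encode()
    )
```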

Using Components with Known Vulnerabilities points to the use of frameworks, plugins and libraries chosen for their popularity despite known security flaws.

The final category is Unvalidated Redirects and Forwards, which covers redirect parameters abused to send users from a legitimate-looking link to a previously compromised website.
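
The usual mitigation is to accept only relative paths or destinations on an explicit allowlist, as in this sketch (the allowed hosts are placeholders):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com", "accounts.example.com"}  # placeholders

def safe_redirect_target(url: str, fallback: str = "/") -> str:
    parsed = urlparse(url)
    # Relative paths stay on the current site ("//evil.com" carries a
    # netloc, so the first condition rejects it).
    if not parsed.netloc and parsed.path.startswith("/"):
        return url
    if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS:
        return url
    return fallback
```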

While these attacks attest to the lack of application-layer security, the need for it should also be justified.

Current status, causes and lack of awareness of web-based vulnerabilities

Protection measures for these vulnerabilities are freely available on the internet. The National Vulnerability Database, OWASP and the annual security reports of security industry vendors are all dedicated to informing users about emerging vulnerabilities. Freely and commercially available vulnerability scanners such as Acunetix WVS, Nikto and Burp Suite help detect injection, cross-site scripting, remote code execution, improper error handling and many other types of vulnerabilities (Bairwa, et al., 2014; OWASP, 2016). Yet the Internet Security Threat Report shows that 78% of scanned websites contained vulnerabilities, 15% of which were critical (Symantec, 2016, p. 9). This condition has been more or less persistent over the past two years.

This situation is caused by the competitive nature of every service in the World Wide Web ecosystem. Dedicated software development companies and freelance developers are willing to build any website within any given budget. Organizations and customers are in a hurry to see the end product. While developers are concerned with meeting customer requirements, search engine optimization, user experience and user interface design, the security of the application is never addressed (Maguire & Miller, 2010). Once the product is delivered, investors push for deployment without due security audits and vulnerability scans. The same competition applies to web hosting providers: since each provider offers cheap domain names and server space, security controls are compromised to make any financial profit. This situation is quite unfortunate, as it depicts a huge gap in awareness while vulnerability growth has exceeded 4,000 over the past ten years (Symantec, 2015, p. 36).

The diversity of application layer components does not help its security. The application layer contains multiple components, such as the server operating system, the HTTP client tier, the application server tier and the database tier, all of which are necessary for web applications. The server tier may support many languages, each with multiple readily available development frameworks. Each of these can contain independent vulnerabilities that can be exploited in combination. For example, in a web hosting service, a simple attack on one poorly coded website could lead to server compromise through privilege escalation. With the increasing diversity and number of available options, maintaining knowledge of every vulnerability has become impractical.

However, this situation does not excuse service providers from their duty to protect their customers. It is their responsibility to conduct due diligence and mitigate the risks posed by these vulnerabilities.

Impact, damages and risks

The impact caused by successful attacks on web applications is split between users and service providers. Users are subjected to financial and identity loss in a data breach, which often misleads service providers with less capital into ignoring security controls. But the situation has changed. High-profile incidents such as the Target credit-card information breach, while not necessarily a web application attack, resulted in a USD 10 million settlement in 2015 (Hewlett Packard Enterprise, 2015). With existing and upcoming security legislation, organizations will have to account for their actions. While loss of identity causes severe damage, it can be argued that service providers have the most to lose in financial terms. Organizations will have to compensate victims financially for their lack of security controls and deal with the resulting loss of reputation. With users losing faith in online transactions and privacy protection, e-commerce and advertising will yield no return on investment (Maguire & Miller, 2010). Loss of reputation can also be caused by website defacement and denial-of-service attacks. Legitimate websites that earn revenue through advertisements can also lose reputation through malvertising, where users are tricked into clicking malicious links.

While these attacks are detectable, the possibility of undetected backdoors should also be mentioned. Even a static website that handles no information is exposed to this risk, since compromised systems hold commercial value for many interested parties. Compromised resources are used to launch serious attacks against valuable targets, or even to conduct terrorist activity, leaving the unsuspecting owners accountable, as is the case with cross-site request forgery and phishing attacks.

While reactive security is necessary, proactive security should be given the higher priority (Maguire & Miller, 2010). The industry, too, encourages proactive security implementations. Google gives SSL-secured websites higher SEO (Search Engine Optimization) scores, and browsers have implemented clear indications of whether a connection is unencrypted, DV certified or EV certified (Symantec, 2016). The “HTTPS everywhere” initiative is growing and currently has 40% participation against a target of 70%. These benefits can all be reaped by mitigating the risks posed by the aforementioned vulnerabilities.

Implementation of web-application security and respective mitigations

Mitigation takes two forms: during the development process or after it. Mitigation during development is preferred, as post-development mitigation usually results in weak, costly compensating measures that do not properly mitigate the risks (Maguire & Miller, 2010). It is, however, not always feasible, and may not even be necessary depending on the application.

Given the varying capital available to service providers and the diversity of the web, a risk analysis must be conducted before applying any mitigation techniques. This analysis could take into account statistics on the geographic distribution of attacks, the types of information exposed and the most-breached sectors, in the context of the application. The analysis should accurately forecast the threats, helping owners weigh the likelihood of their realization against the required level of investment (Maguire & Miller, 2010). For example, investing in code quality assurance for a static website with no dynamic information handling would be useless, as the site contains no exploitable code in the first place. That capital would be better spent tightening server security and properly configuring integrated services.

However, if the application involves any form of information handling, code quality assurance is particularly important. User inputs should be properly validated through variable binding and the use of libraries that handle illegal characters (OWASP, 2013). External code inputs, such as advertisements or URLs, should be validated and their providers authenticated before being presented on the website. Code reviews and code signing should be conducted, and input from a security expert should be taken into account throughout the system development life cycle (Maguire & Miller, 2010; Symantec, 2016). This measure alone can considerably mitigate the vulnerabilities behind Injection attacks, Broken Authentication and Session Management, Cross-Site Scripting, Insecure Direct Object References, Missing Function Level Access Control, Sensitive Data Exposure, and Unvalidated Redirects and Forwards.

Management must be made aware of these vulnerabilities in order to overcome resistance to investment in security. Ample resources should be allocated to developers, and periodic application security training should be mandated (Maguire & Miller, 2010).

A policy must be enforced for outsourced development products that demands a thorough vulnerability assessment by a neutral third party. Post-development and periodic vulnerability assessments should also be conducted. Where resources are limited, automated vulnerability scanners can be utilized (Bairwa, et al., 2014), but depending on the volume and confidentiality of the information being handled, expert advice may be required (Maguire & Miller, 2010). Exhaustive penetration testing covering all possible scenarios must be conducted on applications to which post-development mitigations have been applied.

If capital permits, host-based intrusion detection systems with stateful deep packet inspection can be deployed (Maguire & Miller, 2010). However, too much trust should not be placed in security products from popular vendors, as their marketing can be deceptive. A collective approach of multiple defenses offers the best possible protection (Bairwa, et al., 2014; Maguire & Miller, 2010; Symantec, 2016). Providing end-to-end encryption through SSL is also suggested (Symantec, 2016). In terms of code signing and SSL implementation, standards compliance and certification (Maguire & Miller, 2010) should be persistently pursued.

Proper maintenance of in-use plugins, libraries, frameworks, underlying software, operating systems and firmware must be assured. Regular installation of updates and patches and proper configuration of default security settings should be given top priority, as this helps mitigate the risks of Security Misconfiguration and Using Components with Known Vulnerabilities. Continuous awareness of new vulnerabilities is vital in this regard (Symantec, 2016).

Reactive measures are not to be underestimated. Regular website, server and database backups must be enforced, and an incident response process must be defined through policy (Symantec, 2016). Audiences should be encouraged to report suspicious activity (e.g. PayPal's phishing reporting) to the respective administrators to ensure fast detection and better service (Symantec, 2016).

In conclusion, mitigation is not a single measure. Mitigation responsibilities must be divided among those responsible for the OSI layers, the application development life cycle and the tiers within the application layer to ensure the best possible security.

Conclusions

The threat landscape has evolved along with defenses. Network defenses remain very important, as DoS attacks persist to this day (Symantec, 2015, p. 44). However, relying exclusively on perimeter security is not optimal, as the nature of the most popular web application attacks shows. Awareness must be raised about this misconception, and the importance of application security must be better communicated. It is not merely a cost to endure: it brings substantial benefits in standards compliance, industry support and reputation, whereas its absence can have disastrous consequences. Web application security can be implemented through numerous techniques, responsibility for which should be distributed among all authorities in the web application ecosystem. While current conditions are far from ideal, with growing industry support and various security initiatives, the hope of a healthy web infrastructure perseveres.


References

Bairwa, S., Mewara, B. & Gajrani, J., 2014. Vulnerability Scanners – A Proactive Approach to Assess Web Application Security. [Online] [Accessed 10 April 2016].

Hewlett Packard Enterprise, 2015. Cyber Risk Report, California: HPE Security Research.

Maguire, J. R. & Miller, H. G., 2010. Web-Application Security: From Reactive to Proactive. IT Professional, 12(4), pp. 7-9.

OWASP, 2013. Top Ten Most Critical Web Application Security Risks. [Online] [Accessed 9 April 2016].

OWASP, 2016. Category:Vulnerability Scanning Tools. [Online] [Accessed 15 April 2016].

Symantec, 2015. Internet Security Threat Report. [Online] [Accessed 14 April 2016].

Symantec, 2016. Internet Security Threat Report. [Online] [Accessed 12 April 2016].