High performance WAN – Are you moving in the right direction?

1st June 2018


Wide area networks (WANs) are critical to the IT infrastructure underlying all business-critical applications, whether in data centres or in the public/private cloud. But for large, distributed businesses and not for profit organisations, maintaining network links to each branch office can be costly. Many IT teams fall into the trap of thinking that WAN optimisation and acceleration is as simple as boosting the bandwidth in a slow office or dropping a single device into the network. However, at Wanstor we know this isn’t the case: for best performance, a WAN strategy requires bringing together multiple networking and security technologies.

In this blog we will cover some of the most common WAN strategy challenges that IT teams are facing, and we hope you can take away some practical advice which you can implement in your business or not for profit organisation straight away.

Can reduction and compression technologies solve insufficient bandwidth problems?

The explosion of apps built for LAN speeds has put pressure on WAN sites that do not have access to unlimited bandwidth. It has been obvious for some time that data compression is a key technology for reducing this stress. Generally, data compression works well for most data, except for real-time multimedia (e.g. video conferencing), which is already compressed and can’t benefit from simple compression techniques. WAN optimisation products implement compression in many ways, including:

Standard compression: This method takes streams of data and sends a reduced version of the content across the circuit, saving bandwidth. Standard compression in a WAN environment has many intricacies, including the choice of algorithm, how compression works across streams, and the interaction between compression and encrypted traffic.

Caching: This technique reduces data by maintaining a stored version of recently requested data objects (typically, files or email attachments) at the remote side of the connection. If a data object is requested a second (or third, or fourth) time and it is in the cache, then that copy is returned, eliminating the need to re-transmit the object from the central site to the remote site.  Caching is especially useful in environments where file sharing is done across the WAN, or where the email server (typically Exchange) is located at the central site and not the remote site.

Deduplication: This approach reduces data by detecting duplication in streams of bytes. Deduplication is a term borrowed from the world of storage and backup systems.

The details of how each of these algorithms is implemented, and whether the vendor calls it caching or deduplication, are mostly irrelevant. One important difference is that caching nearly always requires a hard disk of some sort to hold cached data, while deduplication is handled in real time without any persistent storage.
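To make the distinction concrete, here is a minimal sketch of stream compression and deduplication, assuming fixed-size chunks (commercial products typically use variable, content-defined chunking and a synchronised chunk index at both ends of the circuit):

```python
import hashlib
import zlib

CHUNK_SIZE = 4096  # real products typically use variable, content-defined chunks

def compress_stream(data: bytes) -> bytes:
    """Standard compression: shrink the byte stream before it crosses the WAN."""
    return zlib.compress(data, level=6)

class Deduplicator:
    """Deduplication: replace repeated chunks with short references.

    Both ends of the circuit keep the same chunk index, so only
    previously unseen chunks are sent in full.
    """
    def __init__(self):
        self.seen: set[bytes] = set()  # held in memory, no persistent storage

    def reduce(self, data: bytes) -> list:
        out = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).digest()
            if digest in self.seen:
                out.append(("ref", digest))  # send a 32-byte reference
            else:
                self.seen.add(digest)
                out.append(("raw", compress_stream(chunk)))  # send the chunk itself
        return out

dedup = Deduplicator()
payload = b"the same block of data " * 1000
first = dedup.reduce(payload)
second = dedup.reduce(payload)  # second pass is almost entirely references
```

On the second pass over the same payload, almost everything is sent as short references rather than data, which is exactly the saving a WAN deduplication appliance aims for.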

At Wanstor we suggest that Network Managers interested in using data compression techniques to reduce bandwidth evaluate products only by putting them into place in their own networks and comparing the results. The most important detail about data compression is that it requires two devices, one on either end of each WAN circuit or virtual connection. Compression product makers have tried to mitigate the need for deployment and management of hardware by providing compression devices as virtual machines, by offering compression software that runs directly on end-user devices, and by introducing many other WAN optimisation and acceleration techniques into their products to provide more all-in-one solutions.

Can application optimisation solve app issues?

Although compression techniques can improve performance in terms of the WAN, optimising apps to run over your organisation’s WAN offers benefits far beyond simple compression. Application optimisation can often be provided by the same hardware used for compression, but there is a key difference: application optimisation requires just one device, placed next to the app server. Because application optimisation directly affects web traffic, it benefits all app users, not just WAN users.

Examples of application optimisation and benefits:

| Application optimisation | Benefit |
| --- | --- |
| Better use of browser objects such as JavaScript | Many applications force the browser to re-download JavaScript and other browser objects, such as style sheets, each time a different page is referenced. Application optimisation tools can rewrite pages where required to make sure that these large objects are cached in the browser. Reordering objects can also make pages render faster, giving a better user experience. |
| Compression and optimisation of content and images | Web browsers support compression natively, without any add-on software, but many web servers don’t compress objects by default. Optimisation devices can compress content on the fly, which speeds access and reduces network load. |
| HyperText Transfer Protocol (HTTP) extensions and support for emerging standards such as the SPDY protocol | HTTP, the protocol that underpins the web, has always been known to be inefficient. Acceleration hardware can multiplex connections and increase access speed over high-latency, low-bandwidth network connections. SSL offload to the optimisation/acceleration device can also relieve heavily loaded app servers. |

Traditionally, application optimisation was the realm of a family of products called application delivery controllers (ADCs), formerly known as load balancers. But network product makers have migrated these techniques into other devices as well.

Traffic priority and bandwidth management

Voice and video applications require constant and predictable bandwidth among simultaneous users. Other apps, such as email and web-based programs, tend to be burstier in their bandwidth requirements.

Wanstor’s suggested techniques to provide bi-directional traffic management:

| Management technique | Result |
| --- | --- |
| Transmission Control Protocol (TCP) modification as and when required | By changing TCP window sizes and delaying TCP acknowledgements, individual applications can be better managed and controlled by IT teams. |
| Application intelligence for User Datagram Protocol (UDP) applications | UDP-based apps, such as voice and video, cannot easily be flow-controlled the way TCP apps can. By understanding more about the internals of a UDP app, WAN optimisation devices can perform call admission control. |
| Subdividing apps | Some applications mix both delay-sensitive and bulk traffic over the same connection. WAN optimisation devices may be able to break out multiple types of traffic and give different priorities to each type based on deep knowledge of the internal functions of the application. |
| App identification | Differentiating between business and recreational apps (such as collaborating via SharePoint versus video streaming from YouTube) goes deeper than looking at port numbers. By directly identifying actual applications, WAN optimisation devices can provide granular insight and then limit or guarantee bandwidth as required to meet project objectives. |
| Time-of-day awareness | Although many data centres run 24/7, many offices, shops and restaurants are only open for a proportion of the day. This provides the opportunity to use bandwidth (most organisations buy usage on a 24-hour basis) differently outside opening hours. At Wanstor we suggest maintenance activities such as log transfers, backups and software updates be pushed outside core operational hours, where they can benefit from different bandwidth management rules. By moving traffic to off-peak times, IT teams can also realise cost savings. |
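Bandwidth management of this kind is commonly built on token buckets, one per traffic class. The sketch below is a minimal illustration, not any particular vendor’s implementation; the class names and rates are hypothetical:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, one per traffic class.

    Tokens accumulate at `rate` bytes/second up to `burst`; a packet may
    be sent only if enough tokens are available, which caps the class's
    outbound bandwidth while still allowing short bursts.
    """
    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # queue or drop the packet instead

# e.g. guarantee voice ~2 Mbit/s while capping recreational traffic at ~512 kbit/s
classes = {
    "voice": TokenBucket(rate=250_000, burst=50_000),
    "recreational": TokenBucket(rate=64_000, burst=16_000),
}
```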

In a fully managed hub-and-spoke network, quality of service (QoS) mechanisms can be used to guarantee particular bandwidth and prioritisation for each application. However, as the way we shop, eat and play has changed, so have the networks which support the businesses and organisations that enable those day-to-day activities.

Multiple data centres, branch-to-branch communication and the use of generally unmanaged circuits (such as Internet, wireless and shared services) have reduced the ability of simple QoS mechanisms to guarantee acceptable app performance. WAN optimisation and acceleration projects also now require management of bandwidth between sites. Simple mechanisms, such as those found in common edge firewalls with unified threat management (UTM), are not sufficient for the complex requirements of a mix of applications and topology.

Bandwidth management can be particularly trying because true bandwidth management works well only in the outgoing direction for each site. Once the packets have come into a site, they’ve already consumed bandwidth and pushed out other apps that might have been more important.

Simply dropping packets that exceed predefined limits won’t work in most situations. WAN optimisation and acceleration vendors have come up with a variety of techniques to provide sophisticated bidirectional traffic management.

Use standards-based tools to provide better network visibility

Most WAN optimisation techniques try to improve service with limited resources by controlling use of certain resources. But a significant step toward any WAN optimisation and acceleration project depends on gaining network visibility. At Wanstor we believe it is a “must have” that the network management team can answer questions around the applications using their network such as:

  • What applications are being used?
  • Who is running them and when?
  • How much bandwidth do they use (individually and collectively)?
  • What types of errors are occurring?
  • What response times are users experiencing?
  • Which systems are the top talkers and which are the top listeners?

The old reporting categories must be modified because visibility in current WAN environments involves far more than merely tracking IP addresses and ports. True network visibility extends up the stack to identifying real people and real apps. Without strong visibility into the network, no WAN optimization and acceleration project can be successful. Control of the unknown simply leads to frustration and confusion, while good visibility into network and app use can also provide metrics to measure overall project or programme success.

Many devices (including switches, routers, firewalls, WAN optimization controllers (WOCs) and application delivery controllers) will send IPFIX and NetFlow data to a management system. Where no IPFIX data is available, both open-source and commercial hardware and software IPFIX and NetFlow exporters are available to give visibility into unencrypted network traffic.

The benefit of choosing IPFIX and NetFlow is that it represents a standard approach, which means that an organisation will be able to gain visibility into different components mixed and matched on its network.
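As a sketch of what standards-based visibility looks like at the lowest level, the fragment below receives and decodes legacy NetFlow v5 datagrams (a fixed, well-documented format; v9 and IPFIX are template-based and need a fuller parser). The listening port is an assumption, as exporters can be pointed at any port:

```python
import socket
import struct

# NetFlow v5 is a fixed binary format: a 24-byte header followed by
# `count` fixed 48-byte flow records.
HEADER = struct.Struct("!HHIIIIBBH")
RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))  # port is whatever the exporter is configured to use

while True:
    data, addr = sock.recvfrom(8192)
    version, count, *_ = HEADER.unpack_from(data, 0)
    if version != 5:
        continue  # v9/IPFIX are template-based and need a fuller parser
    for i in range(count):
        rec = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
        src, dst, packets, octets = rec[0], rec[1], rec[5], rec[6]
        sport, dport, proto = rec[9], rec[10], rec[13]
        print(f"{socket.inet_ntoa(struct.pack('!I', src))}:{sport} -> "
              f"{socket.inet_ntoa(struct.pack('!I', dst))}:{dport} "
              f"proto={proto} pkts={packets} bytes={octets}")
```

Summing octets per source address over time is enough to answer the "top talkers" question above; commercial collectors add the application- and user-level mapping on top.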

Link balancing and dynamic routing improve network reliability

Although service-level agreements (SLAs) can set expectations, network managers must prepare for the inevitable link downtime that any WAN will experience. When business-critical apps are used over the network, most organisations choose to use dual links into each of their sites to minimize blockages created by traffic peaks or network problems.

Simply having multiple links doesn’t ensure high availability, as some mechanism must be in place to use the links. If VPN tunnels are in place, some organisations use dynamic routing protocols such as the Open Shortest Path First (OSPF) protocol to make use of dual links. Having two links on at all times always prompts a return on investment question: How can we use both links and still get the most network for the pounds invested? WAN optimization and acceleration vendors have introduced a variety of techniques to balance traffic over multiple network links, with varying levels of success. Because TCP/IP networks have their own routing protocols, attempts to force traffic to take a particular route or to signal a route to upstream devices (such as Multi-Protocol Label Switching or MPLS routers) are often complicated and create brittle networks.

While the idea of using as much of both circuits as possible is attractive from a budget and a theoretical perspective, network managers should very carefully evaluate any vendor proposal to perform outbound load balancing or dynamic link selection. Experiences with this type of load balancing have not been positive for all businesses. In some cases, these technologies have required very specific network configurations for correct operation, and may end up creating more problems than they solve.

Does load balancing improve application reliability?

Although application reliability is not necessarily a WAN-specific concern, the importance of enterprise applications emphasizes the need for more sophisticated types of load balancing and high availability strategies that stretch across data centres.

Traditional load balancing uses a Layer 2 or Layer 3 device as the front-end to a series of systems offering an identical service. As requests come in to the load balancer, it makes a decision based on a predetermined algorithm and passes the request on to whichever system is selected.

The load balancer then manages state information so that further requests from the same client are all directed to the same system. The algorithm chosen can be as simple as a round-robin process or it can be more sophisticated, taking into account CPU utilization, response time and other factors. Originally, the goal of most load balancers was scalability — the ability to handle a greater load than any single server could manage. Over time, the goal has changed. Now, the low cost of server hardware has led many organisations to use load balancers simply for reliability. With two (or more) servers available, uptime can be extended and maintenance windows shortened, even if the load can reside entirely on a single server.
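A minimal sketch of the idea, assuming hypothetical server names: round-robin selection for new clients, with an affinity table so repeat requests from the same client land on the same server:

```python
import itertools

class LoadBalancer:
    """Round-robin balancing with simple client affinity.

    The first request from a client picks the next server in rotation;
    later requests from the same client are pinned to that server so
    session state stays in one place.
    """
    def __init__(self, servers):
        self.rotation = itertools.cycle(servers)
        self.affinity = {}  # client id -> chosen server

    def route(self, client_id: str) -> str:
        if client_id not in self.affinity:
            self.affinity[client_id] = next(self.rotation)
        return self.affinity[client_id]

lb = LoadBalancer(["app-01", "app-02", "app-03"])  # hypothetical server pool
assert lb.route("10.0.0.7") == lb.route("10.0.0.7")  # same client, same server
```

Production ADCs layer health checks, weighted algorithms and state expiry on top of this core, but the routing decision is essentially the same.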

Although people have been talking about global server load balancing for a decade or more, network managers should be aware of one important fact: global server load balancing is not a solved problem. Because of the way that the Internet, Domain Name System servers and web browsers work, there is no guaranteed reliable approach to providing high availability across multiple data centres.

It should be noted that several techniques have been tried, including DNS-based load balancing and Border Gateway Protocol (BGP) load balancing, but no approach works 100 percent of the time in 100 percent of the possible failure cases. Indeed, the numbers aren’t even close to 100 percent. For every global load-balancing technique discussed, there are many potential places where load balancing will not deliver the desired results. Load balancing is usually provided by dedicated software or hardware, such as a multilayer switch or a DNS server. Many experts consider the distinction between hardware and software load balancers to be no longer meaningful.

Where WAN optimization and acceleration solutions can be found:

| Technology | Most commonly found in | But also available in |
| --- | --- | --- |
| Data compression and reduction | WAN optimisation controllers (WOCs): discrete hardware appliances or software-based virtual appliances | Some functionality may be available in web security gateways, but pure data compression and reduction are not often found in other product spaces. |
| Application optimisation | Application delivery controllers (ADCs), load balancers | WOCs often include some application optimisation features. Web application firewalls are a separate niche. |
| Traffic prioritisation and bandwidth management | Quality of Service (QoS) and visibility products | WOCs often include traffic prioritisation; UTM products and next-generation firewalls generally include basic bandwidth management and prioritisation. |
| Routing and link balancing | Branch and edge firewalls or combination router–virtual private network (VPN) devices | Stand-alone edge routers generally have this capability, but the location of the router outside the firewall (which prevents it from seeing into encrypted VPN tunnels) pushes this feature into whatever device handles VPNs for the branch. |
| Security features; use and misuse controls | UTM products and next-generation firewalls | Web security gateways and proxy servers may include limited web-focused features. Standalone IPSes are rarely used in the branch when UTM or next-generation firewalls are available. |

Integration of security tools

WAN optimization is usually considered a largely technical exercise, the goal of which is to get more value out of each pound invested for connectivity. Many network managers now take a more holistic view of network use, and look to security-focused products to help them control overall use of both enterprise and Internet apps. Because most WANs already have a firewall device at the border of each remote site, these devices may be called upon to provide more than simple firewall and VPN services.

Security device manufacturers are bringing many branch management features to their edge devices, including URL and content filtering, app identification and control, bandwidth management, intrusion prevention and antimalware. At Wanstor we believe Network Managers should consider including the capabilities of branch firewall devices in their overall network optimisation plan for several reasons. First, these devices are typically already in use, so activating additional capabilities may be as simple as a few mouse clicks or a low-cost subscription add-on. Second, branch firewalls are key parts of the WAN, and changes to traffic profiles or traffic types will also affect the operation and capabilities of the firewalls.

How Wanstor can help Network Managers optimise their WAN

In this blog post we have offered some suggestions around common WAN challenges which Network Managers can use to help improve their WAN performance. Wanstor also offers a range of network optimisation solutions to help:

  • Reduce application latency to remote end-users
  • Create multiple pathways to ensure application availability
  • Centralize the network environment
  • Decrease operating and management costs
  • Maximize bandwidth utilization
  • Postpone the need to upgrade WAN bandwidth
  • Improve disaster recovery position by speeding backup and data replication over the WAN

For more information about Wanstor networking services, please click here: https://www.wanstor.com/wide-area-networking-wan-connectivity-business.htm


Network Monitoring for the Private Cloud: A brief guide

3rd May 2018


‘Cloud computing’ as a concept has been around for over 10 years. Up until about five years ago, many businesses and not for profit organisations shunned the ‘cloud’, as all they could see were problems and challenges with implementing a cloud-first policy: insufficient processor performance, enormous hardware costs and slow Internet connections making everyday use difficult.

However, today’s technology, broadband Internet connections and fast, inexpensive servers, provide the opportunity for businesses and not for profit IT teams to access only the services and storage space that are actually necessary, and adjust these to meet current needs. For many small and medium sized organisations using a virtual server, which is provided by a service provider, introduces a wide range of possibilities for cost savings, improved performance and higher data security. The goal of such cloud solutions is a consolidated IT environment that effectively absorbs fluctuation in demand and capitalizes on available resources.

The public cloud concept presents a number of challenges for a company’s IT department. Data security and the fear of ‘handing over’ control of the systems are significant issues. If an IT department is used to protecting its systems with firewalls and to monitoring the availability, performance and capacity usage of its network infrastructure with a monitoring solution, it is much more difficult to implement both measures in the cloud. Of course, all large public cloud providers claim they offer appropriate security mechanisms and control systems, but the user must rely on the provider to guarantee constant access and to maintain data security.

Because of the challenges and general nervousness around data security in public clouds, many IT teams are investigating the creation of a ‘private cloud’ as an alternative to the use of public cloud. Private clouds enable staff and applications to access IT resources as they are required, while the private computing centre or a private server in a large data centre is running in the background. All services and resources used in a private cloud are found in defined systems that are only accessible to the user and are protected from external access.

Private clouds offer many of the advantages of cloud computing and at the same time minimise the risks. As opposed to many public clouds, the quality criteria for performance and availability in a private cloud can be customised, and compliance to these criteria can be monitored to make sure they are achieved.

Before moving to a private cloud, an IT department must consider the performance demands of individual applications and usage variations. Long-term analysis, trends and peak loads can be attained via extensive network monitoring evaluations, and resource availability can be planned according to demand. This is necessary to guarantee consistent IT performance across virtualized systems. However, a private cloud will only function if a fast, highly reliable network connects the physical servers. Therefore, the entire network infrastructure must be analysed in detail before setting up a private cloud. This network must satisfy the requirements relating to transmission speed and stability, otherwise hardware or network connections must be upgraded.

Ultimately, even minor losses in transmission speed can lead to extreme drops in performance. At Wanstor we recommend IT administrators use a comprehensive network monitoring solution like PRTG Network Monitor, in the planning of the private cloud. If an application (which usually equates to multiple virtualized servers) is going to be operated over multiple host servers (“cluster”) in the private cloud, the application will need to use Storage Area Networks (SANs), which convey data over the network as a central storage solution. This makes network performance monitoring even more important.

In the terminal-based setups of the 1980s, the failure of a central computer could paralyse an entire company. The same scenario could happen if systems in the cloud fail. Current developments show that the world has gone through a phase of widely distributed computing and storage power (each workstation had a ‘full-blown’ PC) and returned to centralised IT concepts. The data is located in the cloud, and end devices are becoming more streamlined. The new cloud, therefore, echoes the old mainframe concept of centralised IT. The failure of a single VM in a highly virtualised cloud environment can quickly interrupt access to 50 or 100 central applications. Modern clustering concepts are used to try to avoid these failures, but if a system fails despite these efforts, it must be dealt with immediately. If a host server crashes and pulls a large number of virtual machines down with it, or its network connection slows or is interrupted, all virtualised services on this host are instantly affected, which, even with the best clustering concepts, often cannot be avoided.

A private cloud (like any other cloud) depends on the efficiency and dependability of the IT infrastructure. Physical or virtual server failures, connection interruptions and defective switches or routers can become expensive if they cause staff, automated production processes or online retailers to lose access to important operational IT functions.

This means a private cloud also presents new challenges to network monitoring. To make sure that users have constant access to remote business applications, the performance of the connection to the cloud must be monitored on every level and from every perspective.

At Wanstor we believe an appropriate network monitoring solution like PRTG accomplishes all of this with a central system; it notifies the IT administrator immediately in the event of possible disruptions within the private IT landscape both on location and in the private cloud, even if the private cloud is run in an external computing centre. A feature of private cloud monitoring is that external monitoring services cannot ‘look into’ the cloud, as it is private. An operator or client must therefore provide a monitoring solution within the private cloud and, as a result, the IT staff can monitor the private cloud more accurately and directly than a purchased service in the public cloud. A private cloud also enables unrestricted access when necessary. This allows the IT administrator to track the condition of all relevant systems directly with a private network monitoring solution. This encompasses monitoring of every individual virtual machine as well as the VMware host and all physical servers, firewalls, network connections, etc.

For comprehensive private cloud monitoring, the network monitoring should have the systems on the radar from user and server perspectives. If a company operates an extensive website with a web shop in a private cloud, for example, network monitoring could be set up as follows: A website operator aims to ensure that all functions are permanently available to all visitors, regardless of how this is realised technically. The following questions are especially relevant in this regard:


  • Is the website online?
  • Does the web server deliver the correct contents?
  • How fast does the site load?
  • Does the shopping cart process work?

These questions can only be answered if network monitoring takes place from outside the server in question. Ideally, network monitoring should be run outside the related computing centre, as well. It would therefore be suitable to set up a network monitoring solution on another cloud server or another computing centre.
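A minimal sketch of such an external check, written here with Python’s requests library; the URLs, expected text and threshold are placeholders for whatever matters on your site:

```python
import time
import requests  # pip install requests

# Hypothetical targets; a real deployment would check every critical page.
CHECKS = [
    ("https://www.example.com/", "Welcome"),        # is the site up and correct?
    ("https://www.example.com/shop/cart", "Cart"),  # does the shop respond?
]
MAX_SECONDS = 3.0

for url, expected_text in CHECKS:
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=10)
        elapsed = time.monotonic() - start
        ok = (resp.status_code == 200
              and expected_text in resp.text
              and elapsed <= MAX_SECONDS)
        print(f"{url}: status={resp.status_code} load={elapsed:.2f}s ok={ok}")
    except requests.RequestException as exc:
        print(f"{url}: DOWN ({exc})")  # trigger a notification here
```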

It is crucial that all locations are reliable and that a failover cluster supports monitoring, so that interruption-free monitoring is guaranteed. This remote monitoring should include:

  • Firewall, HTTP load balancer and Web server pinging
  • HTTP/HTTPS sensors
  • Monitoring loading time of the most important pages
  • Monitoring loading time of all assets of a page, including CSS, images, Flash, etc.
  • Checking whether pages contain specific words, e.g.: “Error”
  • Measuring loading time of downloads
  • HTTP transaction monitoring, for shopping process simulation
  • Sensors that monitor the remaining period of SSL certificate validity

If one of these sensors finds a problem, the network monitoring solution should send a notification to the IT administrator. Rule-based monitoring is helpful here. If a Ping sensor for the firewall times out, for example, PRTG Network Monitor offers the possibility of pausing all other sensors to avoid a flood of notifications, as in this case the connection to the private cloud has clearly been lost.
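The logic behind that kind of rule is a simple dependency tree. The sketch below is a generic illustration of the idea, not PRTG’s actual implementation; the sensor names are hypothetical:

```python
# Hypothetical sensor-dependency model: if a parent sensor is down,
# alerts from everything behind it are suppressed rather than sent.
DEPENDS_ON = {
    "http-loadbalancer": "ping-firewall",
    "web-server-1": "http-loadbalancer",
    "web-server-2": "http-loadbalancer",
}

def should_notify(sensor: str, status: dict) -> bool:
    """Notify only if no upstream dependency is already down."""
    parent = DEPENDS_ON.get(sensor)
    while parent is not None:
        if status.get(parent) == "down":
            return False  # root cause is upstream; stay quiet
        parent = DEPENDS_ON.get(parent)
    return status.get(sensor) == "down"

status = {"ping-firewall": "down", "http-loadbalancer": "down", "web-server-1": "down"}
alerts = [s for s in status if should_notify(s, status)]
print(alerts)  # only ['ping-firewall'] - one notification, not a flood
```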

Other questions crucial for monitoring the (virtual) servers operating in the private cloud include:

  • Does the virtual server run flawlessly?
  • Do the internal data replication and load balancer work?
  • How high are the CPU usage and memory consumption?
  • Is sufficient storage space available?
  • Do email and DNS servers function flawlessly?

These questions cannot be answered with external network monitoring. Monitoring software must be running on the server or the monitoring tool must offer the possibility to monitor the server using remote probes. Such probes monitor the following parameters, for example, on each (virtual) server that runs in the private cloud, as well as on the host servers:

  • CPU usage
  • Memory usage (page files, swap file, page faults, etc.)
  • Network traffic
  • Hard drive access, free disc space and read/write times during disc access
  • Low-level system parameters (e.g.: length of processor queue, context switches)
  • Web server HTTP response time

Critical processes, like SQL servers or web servers, are often monitored individually, in particular for CPU and memory usage.

In addition, the firewall condition (bandwidth use, CPU) can be monitored. If one of these measured variables lies outside of a defined range (e.g. CPU usage over 95% for more than two or five minutes), the monitoring solution will send notifications to the IT administrator.
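On a single (virtual) server, most of these parameters can be read with the cross-platform psutil library. Below is a rough sketch of a probe implementing the sustained-threshold rule described above; the limits are examples:

```python
import time
import psutil  # pip install psutil

CPU_LIMIT = 95.0   # percent, as in the example above
SUSTAINED = 120    # seconds the limit must be exceeded before alerting
breach_started = None

while True:
    cpu = psutil.cpu_percent(interval=5)   # averaged over 5 seconds
    mem = psutil.virtual_memory().percent
    disk = psutil.disk_usage("/").percent
    net = psutil.net_io_counters()
    print(f"cpu={cpu}% mem={mem}% disk={disk}% "
          f"sent={net.bytes_sent} recv={net.bytes_recv}")

    if cpu > CPU_LIMIT:
        breach_started = breach_started or time.monotonic()
        if time.monotonic() - breach_started >= SUSTAINED:
            print("ALERT: CPU above limit for a sustained period")  # notify admin
    else:
        breach_started = None
```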

Final thoughts

With the increasing use of cloud computing, IT system administrators are facing new challenges. A private cloud depends on the efficiency and dependability of the IT infrastructure. This means that the IT department must look into the capacity requirements of each application in the planning stages of the cloud in order to calculate resources to meet the demand. The connection to the cloud must be extensively monitored, as it is vital that the user has constant access to all applications during operation.

At the same time, smooth operation of all systems and connections within the private cloud must be guaranteed. A network monitoring solution should therefore monitor all services and resources from every perspective. This ensures continuous system availability.

For more information about Wanstor and PRTG network monitoring tools please visit – https://www.wanstor.com/paessler-prtg-network-monitor.htm


Endpoint Security – A state of transition

19th April 2018


Endpoint security used to be a fairly mundane topic. The normal model used to be that the IT operations team would provision PCs with an approved image and then install Anti-Virus software on each system. The IT Operations team would then make periodic security updates (vulnerability scanning, patches, signature updates, etc.), but the endpoint security foundation was generally straightforward and easy to manage.

However, in the last six months at Wanstor we have seen more and more organisations increasing their focus on endpoint security and its associated people, processes and technologies. This is largely down to mobility strategies starting to mature, BYOD becoming more common and mobile working becoming the norm for many employees. Because of these market trends, many businesses and not for profit organisations have had to increase their endpoint security budgets to cope with the changing working practices they now face.

The maturing of these market trends has also meant many endpoint security vendors have had to change their strategies to cope with a transitioning end-user workforce who want a stable office environment combined with a flexible work-from-anywhere approach.

At Wanstor we have seen the endpoint security strategy changing and predominantly being driven by the following factors in many organisations:

Cyber risks need to be addressed, especially around information security best practices – This is a clear indication that many IT security processes organisations have in place are not fit for a changing regulatory and mobile landscape.

Problems caused by the volume and diversity of devices – Addressing new risks associated with mobile endpoints should be a top endpoint security strategy requirement for all IT departments. This will only increase with the addition of more cloud, mobile and Internet-of-Things (IoT) technologies.

The need to address malware threats – Although malware has been around for a long time, many organisations are still struggling to get to grips with securing endpoints against it. At Wanstor we do not find this overly surprising, as the volume and sophistication of malware attacks has never been higher and the landscape is steadily becoming more dangerous. Additionally, the sophistication and efficiency of the cybercriminal underworld, alongside the easy access that would-be criminals have to sophisticated malware tools, are a combination organisations of all sizes need to take seriously. At Wanstor we meet with hundreds of customers on a regular basis and they are all saying the same thing: they are concerned about their ability to stop these malware threats and stay a step ahead of attackers.

While various industry research studies suggest endpoint security strategies are driven by the factors identified above, many businesses and not for profit organisations still struggle to address endpoint security vulnerabilities and threats with legacy processes and technologies as well.

Some of the most common things we see at Wanstor include:

Security teams spending too much time concentrating on attacks which are happening now and not planning for the future – As the threat landscape has evolved, so has the pressure on endpoint security staff, systems and processes. Many organisations have only one or two trained IT security professionals, which means that when an attack happens they have to spend a lot of time attending to high-priority issues and have insufficient time for process improvement or strategic planning. This challenge is something of a contradiction. Strategic improvements cannot and should not come at the expense of the security team failing to respond to high-priority issues, creating a quandary for many organisations: they know they need an endpoint security overhaul, but cannot afford to dedicate ample time at the expense of day-to-day security tactics. Effective endpoint tools must address this challenge by improving both the strategic and day-to-day position of the security team.

Organisations remain too focused/scared of regulatory compliance – At Wanstor we know it is a balance – IT security budgets vs regulatory compliance. However we have recently seen many businesses and not for profit organisations spending too much money/effort on becoming compliant within a changing regulatory landscape. Quite often this is because IT security teams have not worked with the business to properly define what the new regulations actually mean for the business and what the associated IT security spend should be. This often means IT security solutions are purchased ad-hoc and cost the organisation more money in the long run as they are purchased with a short term goal in mind rather than part of a wider security/regulatory plan.

At Wanstor we believe regulatory compliance should come as a result of strong security, and endpoint security cannot be achieved with a compliance-centric approach. For many IT teams this will mean a shift in thinking and closer working with other business departments such as the finance and legal teams.

Endpoint security has too many manual processes and controls – Endpoint security has undergone a major technical transition, but many organisations continue to rely on legacy products and processes to combat new challenges. It is often cheaper and easier for businesses and not for profit organisations to layer new products on top of legacy products as needs arise. However the trade-off is IT security teams become more and more inefficient as they have several layers of security processes and tools they have to manage which can create a security operations nightmare.

Wanstor’s Top Endpoint Security Challenges

  • Security staff spending a significant amount of time attending to high-priority issues, leaving no time for process improvement or strategic planning
  • Organisations more focused on meeting regulatory compliance requirements than on addressing endpoint security risks with strong controls
  • Endpoint security based upon too many manual processes, making it difficult for security staff to keep up to date with relevant security tasks and new technology trends
  • Organisations viewing endpoint security as a basic requirement and not giving it the time or resources it needs to protect users
  • Lack of proactive monitoring of endpoint activities, making it difficult to detect a security incident
  • Businesses and not for profit organisations lacking access to the right vulnerability scanning and/or patch management tools, leaving them always vulnerable to having an endpoint compromised by malware
  • Lack of budget to purchase the right endpoint security products, as IT teams are unsure how to develop the right business case for management teams to make decisions on

In summary, Wanstor’s research among its own customers, together with the changing mobility landscape, points to a situation where the prevailing endpoint security approach is not an adequate countermeasure for the complexity and sophistication of modern IT security threats.

Wanstor’s own customer and market research evidence strongly suggests that businesses and not for profit organisations at the moment do not view existing endpoint security strategies as viable for blocking sophisticated attacks. As a result, many organisations need to supplement their existing endpoint security products with newer and more robust technologies that offer more functionality across incident detection, response, and remediation.

As a matter of course Wanstor believes all IT teams should take action now to review their endpoint security strategies and evaluate whether or not it is fit for purpose against business requirements. As a minimum the IT team should:

Investigate and test advanced anti-malware products – Organisations of all sizes should investigate and potentially acquire advanced anti-malware solutions. This is because normal solutions are no longer “good enough” to protect an organisation on their own. Instead IT teams need to recognise that all organisations are targets to hackers. In turn this means they should seek the strongest possible endpoint security solutions in order to deal with potential threats both now and in the future.

Continuous endpoint monitoring – As the great management saying goes, “If you can’t monitor it, you can’t manage it”. The question has to be: does your IT team have the right network and security monitoring in place? If it doesn’t, how will you even know you are under attack, or which endpoint devices are most vulnerable to attack? At Wanstor we always recommend that appropriate network monitoring tools are purchased by the IT team. Quite often, network monitoring and the ability to detect abnormal traffic patterns early help to prevent many security attacks before they become business critical.

Endpoint forensics – Endpoint forensic solutions can (when focused on actual need not cost) improve efficiency and effectiveness related to incident response, and reduce the time it takes for incident detection. Additionally by integrating endpoint data with network security analytics, it gives IT teams a more comprehensive and integrated view of security activities across networks and host systems.

In conclusion, endpoint security needs to change in most organisations to meet changing user needs and demands on IT. At the present time many organisations are struggling to hire the right staff, choose the right technologies, and respond to the many challenges of modern threats. The scale and diversity of these challenges can appear overwhelming, but organisations that take the time to devise and execute solid, integrated endpoint security strategies can realise the right returns on their security investments and protect their organisations at the same time.

Wanstor believes that organisations who are seeking to overhaul their endpoint security should integrate their endpoint security technologies with their network-level and log monitoring in order to improve incident detection, prevention, and response, while also streamlining the work of their security operations team.

For more information about Wanstor’s endpoint security services, please visit – https://www.wanstor.com/managed-it-security-services-business.htm


Enterprise Mobility Management – making sure the fundamentals are right

9th April 2018


Mobility and bring-your-own device (BYOD) are transforming the way people work and the way businesses support them. At Wanstor we believe there is more to mobility than simply enabling remote access. To unlock the full potential of enterprise mobility, IT departments need to allow people the freedom to access all their apps and data from any device, seamlessly and conveniently. Mobile devices also call for the right approach to IT security to protect business information as they are used in more places, over untrusted networks, with a significant potential for loss or theft. The IT department has to maintain compliance and protect sensitive information wherever and however it’s used and stored, even when business and personal apps live side-by-side on the same device.

In this article, Wanstor’s mobility experts have developed a set of key points which the IT department needs to take notice of as an enterprise mobility strategy is developed.

Protect and manage key assets, data and information

As employees access data and apps on multiple devices (including personally owned smartphones and tablets), it can no longer be seen as realistic for IT to control and manage every aspect of the environment. At Wanstor we believe IT teams should focus on what matters most for the business across devices, data and information, and then choose the mobility management models that make the most sense for your business and your mobile use cases.

It is generally accepted that there are four models to choose from, either individually or in combination: mobile device management (MDM), mobile hypervisors and containers, mobile application management (MAM), and application and desktop virtualisation. Choosing the right mix of these four models will be intrinsically linked to your business’s success.

User experience needs to be at the centre of your thinking

Mobile devices have been a key driver of consumerisation in the enterprise, giving people powerful new ways to work with apps and information in their personal lives. This has raised expectations of IT and the services it provides, particularly around mobile devices. No longer can IT teams put strict controls on users; instead they must offer an IT experience that compares with the freedom and convenience allowed by consumer technology companies. At Wanstor we always suggest that before MDM planning gets underway, the IT team sits down with a range of users and talks about their needs and preferences, to make sure the mobility strategy which is going to be put in place gives them what they really want.

As the IT team works to deliver a superior user experience, Wanstor experts suggest that they examine ways to give people more than they expect and provide useful capabilities they might not have thought of e.g.

  • Allow employees to access their apps and data on any device they use, complete with personal settings, so they can start work immediately once they have been given their work device
  • Give people the choice of self-service provisioning for any app they need through an enterprise app store with single sign-on
  • Automate controls on data sharing and management, such as the ability to copy data between applications, so people don’t have to remember specific policies
  • Define allowed device functionality on an app-by-app basis, so people can still use functions such as printing, camera and local data storage on some of their apps even if IT needs to turn them off for other apps
  • Make it simple for people to share and sync files from any device, and to share files with external parties simply by sending a link.

By developing a mobility strategy in collaboration with users, IT teams can better meet users’ needs while gaining a valuable opportunity to set expectations. This helps to make sure employees understand IT’s own requirements to ensure compliance.

Avoid bypassing

Bypassing company controls and policies via a mobile device represents the worst-case scenario for enterprise mobility. It is surprisingly common that users who cannot find or access what they want on their mobile device will bypass IT altogether and use their own cloud services, apps and data.

Many people think it is great that employees are accessing what they want, when they need it. Actually, nothing could be further from the truth. Employees accessing unknown apps, handling sensitive data via public clouds and downloading files in ways that bypass the visibility and control policies of IT leave a business extremely vulnerable to attack. In reality, IT policies and user education can only go so far to prevent bypasses from happening; realistically, if it’s the best solution for someone’s needs and it seems unlikely that IT will find out, it’s going to happen. This makes it essential to provide people with an incentive to work with IT and use its infrastructure, especially when it comes to sensitive data and apps. The best incentive is a superior user experience, delivered proactively and designed to meet people’s needs better than the unmanaged alternative.

Embed mobility in your service delivery strategy

Mobile users rely on a variety of application types – not just custom mobile apps, but also third-party native mobile apps, Windows apps and SaaS solutions. In developing a mobility strategy, IT teams should think about the mix of apps used by the people and groups in their business, and how they should be accessed on mobile devices. It is widely accepted that there are four ways for people to access apps on mobile devices: natively, through virtualised access, through a containerized experience, or through a fully managed enterprise experience.

For most businesses, a combination of virtualised access and a containerized experience will support the full range of apps and use cases people rely on. This also makes it possible for IT to maintain visibility and control while providing a superior user experience. People can access hosted applications and native mobile apps, as well as SaaS apps such as Salesforce and NetSuite, through a unified enterprise single sign-on. When an employee leaves the business, IT can immediately disable the person’s account to remove access to all native mobile, hosted and SaaS apps used on the device.

Automation is the key to successful EMM outcomes

Automation not only simplifies life for the IT department it also helps them to deliver a better user experience. Think about the difference automation can make for addressing common mobility needs like:

  • An employee replaces a lost device or upgrades to a new one. With the click of a single URL, all of the individual’s business apps and work information are available on the new device, ready for work.
  • As an employee moves from location to location and network to network, situational and adaptive access controls reconfigure apps automatically to make sure appropriate security, with complete transparency for the user.
  • A board member arrives for a meeting, tablet in hand. All the documents for the meeting are automatically loaded onto the device, configured selectively by IT for read-only access, and restricted to a containerized app as needed. Especially sensitive documents can be set to disappear automatically from the device as soon as the member leaves the room.
  • As employees change roles in the business, the relevant apps for their current position are made available automatically, while apps that are no longer needed disappear. Third-party SaaS licenses are instantly reclaimed for reassignment.

One way to perform this type of automation is through Active Directory. First, link a specific role with a corresponding container. Anyone defined in that role will automatically inherit the container and all the apps, data, settings and privileges associated with it. On the device itself, you can use MDM to centrally set up Wi-Fi PINs and passwords, user certificates, two-factor authentication and other elements as needed to support these automated processes.
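As a loose illustration of that directory-driven approach (the role names, app lists and settings here are entirely hypothetical, and the hand-off to an MDM/MAM API is stubbed out):

```python
# Hypothetical role-to-container mapping driven by directory group membership.
ROLE_CONTAINERS = {
    "finance": {"apps": ["expenses", "erp-client"], "wifi_profile": "corp-secure"},
    "sales":   {"apps": ["crm", "quoting"],         "wifi_profile": "corp-standard"},
}

def provision(user: str, directory_groups: list) -> dict:
    """Inherit every container for the roles the directory says the user holds."""
    entitlements = {"apps": [], "wifi_profile": None}
    for group in directory_groups:
        container = ROLE_CONTAINERS.get(group)
        if container:
            entitlements["apps"] += container["apps"]
            entitlements["wifi_profile"] = container["wifi_profile"]
    return entitlements  # hand off to the MDM/MAM API to push to the device

# When someone changes role, re-running provisioning yields the new app set;
# diffing old and new results shows which apps (and SaaS licences) to revoke.
print(provision("asmith", ["sales"]))
```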

Define networking requirements

Different applications and use cases can have different networking requirements, from an intranet or Microsoft SharePoint site, to an external partner’s portal, to a sensitive app requiring mutual SSL authentication. Enforcing the highest security settings at the device level degrades the user experience unnecessarily; on the other hand, requiring people to apply different settings for each app can be even more tiresome for them.

By locking down networks to specific containers or apps, with separate settings defined for each, the IT team can make networking specific to each app without requiring extra steps from the user. People can just click on an app and get to work, while tasks such as signing in, accepting certificates or opening an app-specific VPN launch automatically by policy in the background.

Protect sensitive data

Unfortunately in many businesses, IT doesn’t know where the most sensitive data resides, and so must treat all data with the same top level of protection, an inefficient and costly approach. Mobility provides an opportunity for IT teams to protect data more selectively based on a classification model that meets unique business and security needs.

Many companies use a relatively simple model that classifies data into three categories (public, confidential and restricted) and also takes into account the device and platform used, while other businesses have a much more complex classification model that considers many more factors, such as user role and location.

The data model deployed should take into account both data classification and device type. IT teams may also want to layer additional considerations, such as device platform, location and user role, into their security policy. By configuring network access through enterprise infrastructure for confidential and restricted data, IT teams can capture complete information on how people are using information, and use it to assess the effectiveness of their data sensitivity model and mobile control policy.

Be clear about roles and ownership

Who in your business will own enterprise mobility? In most companies, mobility continues to be addressed through an ad hoc approach, often by a committee overseeing IT functions from infrastructure and networking to apps. Given the strategic role of mobility in the business, and the complex matrix of user and IT requirements to be addressed, it’s crucial to clearly define the structure, roles and processes around mobility. People should understand who is responsible for mobility and how they will manage it holistically across different IT functions. Ownership needs to be equally clear when it comes to mobile devices themselves. Your BYOD policy should address the grey area between fully managed, corporate-owned devices and user-owned devices strictly for personal use – for example:

Who is responsible for backups for a BYO device?

Who provides support and maintenance for the device, and how is it paid for?

How will discovery be handled if a subpoena seeks data or logs from a personally owned device?

What are the privacy implications for personal content when someone uses the same device for work?

Both users and IT should understand their roles and responsibilities to avoid misunderstandings.

Build compliance into the solution

Globally, businesses now face more than 300 security and privacy-related standards, regulations and laws, with more than 3,500 specific controls. It is therefore not enough simply to try to meet these requirements; businesses need to be able to document compliance and allow full auditability.

Many businesses have already solved the compliance challenge within their network. The last thing the IT department wants to do now is let enterprise mobility create a vast new problem to solve. IT departments should therefore make sure mobile devices and platforms support seamless compliance with government mandates, industry standards and corporate security policies, from policy- and classification-based access control to secure data storage. Your EMM solution should provide complete logging and reporting to help you respond to audits quickly, efficiently and successfully.

Prepare for the future

Don’t write your policies for only today! Keep in mind what enterprise mobility will look like in the next few years. Devices and users’ needs will continue to evolve and expand the potential of mobility, but they will also introduce new implications for security, compliance, manageability and user experience. IT departments need to pay attention to ongoing industry discussions about emerging technologies, and design their mobility strategy around core principles that can apply to any type of mobile device and use case. This way, they can minimise the frequent policy changes and iterations that may confuse and frustrate people.


A blog on Website Security

22nd February 2018

At Wanstor this week, we have been discussing website security. This is because of news that the Information Commissioner’s Office (ICO) had to take its website down after a warning that hackers were taking control of visitors’ computers to mine cryptocurrency.

Following this story, some of our customers have been in contact regarding website security and suggested best practices. In light of this, Wanstor’s security experts have come together to develop the following high level guide to website security.

You may not think your website has anything worth hacking, but corporate websites are compromised all the time. Despite what people think, the majority of website security breaches are not attempts to steal data or deface a website. Instead, sites are hacked so their servers can be used as an email relay for spam, or to set up a temporary web server, normally to serve files of an illegal nature. Other common ways to abuse compromised machines include using your company servers as part of a botnet, or to mine for Bitcoins. You could even be hit by ransomware. Hacking is regularly performed by automated scripts written to scour the Internet in an attempt to exploit known website security issues in software. By following the tips below, your website should be able to operate in a safer way and deter hackers and the automated tools they use.

Keep software updated

It may seem obvious, but making sure you keep all software updated is vital to keeping your site secure. This applies to both the server operating system and to any software you may be running on your website such as a CMS or forum. When holes are found in website security software, hackers are quick to attempt abuse. If you are using a managed hosting solution, then your hosting company should take care of any updates, so you do not need to worry about this – unless your hosting company contacts you to tell you to worry!

If you are using third-party software on your website such as a CMS or forum, you should make sure you are quick to apply any security patches. Most vendors have a mailing list or RSS feed detailing any website security issues.  Many developers use tools like Composer, npm, or RubyGems to manage their software dependencies, and security vulnerabilities appearing in a package you depend upon but aren’t paying any attention to is one of the easiest ways to get caught out. Make sure you keep your dependencies up to date and use relevant tools to get automatic notifications when a vulnerability is announced in one of your components.

SQL injection

SQL injection attacks occur when attackers use a web form field or URL parameter to gain access to or manipulate your database. When you use standard Transact SQL, it is easy for such individuals to insert rogue code into your query that could be used to change tables, retrieve information and delete data. You can easily prevent this by always using parameterised queries – most web languages have this feature and it is easy to implement.
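As a brief sketch of the difference (shown here with Python and SQLite, though every mainstream language and database pairing offers the same facility):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# UNSAFE: string concatenation lets the input rewrite the query
# rows = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# SAFE: a parameterised query treats the input purely as data
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] - the injection attempt matches nothing
```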


Cross-site scripting (XSS)

Cross-site scripting (XSS) attacks inject malicious JavaScript into your pages, which then runs in the browsers of your users, allowing page content to be modified or information to be stolen and transmitted to the attacker. For example, if you show comments on a page without validation, attackers might submit comments containing script tags and JavaScript, which could run in every other user’s browser and steal their login cookie, allowing the attacker to take control of accounts owned by each user who views the comment. You need to ensure that users cannot inject active JavaScript content into your pages.

The key here is to focus on how your user-generated content could escape the bounds you expect and be interpreted by the browser as something other than what you intended. This is similar to defending against SQL injection. When dynamically generating HTML, use functions which explicitly make the changes you’re looking for, or use functions in your templating tool that automatically ensure appropriate escaping, rather than concatenating strings or setting raw HTML content.
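
As a minimal sketch of the idea in Python (most templating tools perform this escaping for you automatically):

    import html

    def render_comment(comment_text):
        # html.escape turns characters like < > & into entities, so a submitted
        # <script> tag is displayed as harmless text instead of being executed
        return '<p class="comment">' + html.escape(comment_text) + "</p>"

    print(render_comment("<script>steal(document.cookie)</script>"))
    # <p class="comment">&lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>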

Another powerful tool in the XSS defender’s toolbox is Content Security Policy (CSP). CSP is a header your server can return which tells the browser to limit how and what JavaScript is executed in the page, for example disallowing any scripts not hosted on your domain and disallowing inline JavaScript. Mozilla have an excellent guide with some example configurations. This makes it harder for an attacker’s scripts to work, even if they can get them into your page.
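
As an illustrative sketch (the exact directives depend on what your pages legitimately need to load), a strict policy header might look like this:

    Content-Security-Policy: default-src 'self'; object-src 'none'

Here default-src 'self' restricts scripts, styles and images to your own domain and, because 'unsafe-inline' is not granted, blocks inline JavaScript, while object-src 'none' blocks plugin content.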

Error messages

Be careful with how much information you give away in error messages. Provide only minimal errors to your users, to make sure they do not leak secrets present on your server. Although tempting, do not provide full exception details either, as these can make complex attacks like SQL injection far easier. Keep detailed errors in your server logs, and show users only the information they need to see.
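
As a minimal sketch of this separation in Python, where process() is a hypothetical stand-in for your application logic:

    import logging

    logging.basicConfig(filename="app.log", level=logging.ERROR)

    def handle_request(request):
        try:
            return process(request)  # process() stands in for your application logic
        except Exception:
            # Full details, including the stack trace, go to the server log only
            logging.exception("Unhandled error while processing request")
            # Users see a generic message that reveals nothing about internals
            return "Sorry, something went wrong. Please try again later."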

Server side validation

Validation should always be done both in the browser and on the server. The browser can catch simple failures, like mandatory fields being left empty or text being entered into a numbers-only field. These checks can however be bypassed, so you should make sure you repeat them, along with deeper validation, on the server side. Failing to do so could allow malicious code or scripting to be inserted into the database, or could cause undesirable results on your website.
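
As a minimal sketch of server-side re-validation in Python; the form fields and limits here are hypothetical:

    import re

    def validate_order(form):
        """Re-check on the server everything the browser already validated."""
        errors = []
        email = form.get("email", "")
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
            errors.append("A valid email address is required.")
        try:
            quantity = int(form.get("quantity", ""))
            if not 1 <= quantity <= 100:
                errors.append("Quantity must be between 1 and 100.")
        except ValueError:
            errors.append("Quantity must be a whole number.")
        return errors

    print(validate_order({"email": "user@example.com", "quantity": "5"}))  # []
    print(validate_order({"email": "not-an-email", "quantity": "lots"}))   # two errors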

Passwords

Everyone knows they should use complex passwords, but that doesn’t mean they always do. It is crucial to use strong passwords for your server and website admin area, but it is equally important to insist on good password practices for your users to protect the security of their accounts. As much as users may not like it, enforcing password requirements such as a minimum of around eight characters, including an uppercase letter and a number, will help to protect their information in the long run. Passwords should always be stored as salted hashes, using a one-way hashing algorithm. Using this method means that when you are authenticating users you are only ever comparing hashed values.

In the event of someone hacking in and stealing your passwords, storing hashed passwords helps limit the damage, as they cannot be reversed. The best an attacker can do is a dictionary attack or brute force attack, essentially guessing every combination until a match is found.
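
As a minimal sketch using only Python’s standard library (PBKDF2 here; dedicated password-hashing schemes such as bcrypt or Argon2 are equally good choices, and the iteration count is illustrative):

    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000  # illustrative; use the highest count you can afford

    def hash_password(password):
        salt = os.urandom(16)  # a fresh random salt defeats precomputed tables
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest  # store both alongside the user record

    def verify_password(password, salt, stored_digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        # compare_digest avoids leaking information through timing differences
        return hmac.compare_digest(candidate, stored_digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("password123", salt, digest))                   # False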

Thankfully, many CMSs provide user management out of the box with a lot of these website security features built in, although some configuration or extra modules might be required to set a minimum password strength. If you are using .NET then it’s worth using membership providers, as they are very configurable, provide inbuilt website security and include ready-made controls for login and password reset.

File uploads

Allowing users to upload files to your website can be a significant website security risk, even if it’s simply to change their photo, background picture or avatar. The risk is that any file uploaded, however innocent it may look, could contain a script that, when executed on your server, completely opens up your website. If you have a file upload form then you need to treat all files with great suspicion. If you are allowing users to upload images, you cannot rely on the file extension or the MIME type to verify that the file is an image, as these can easily be faked. Even opening the file and reading the header, or using functions to check the image size, is not foolproof. Most image formats allow a comment section to be stored, which could contain PHP code that the server might execute.

So what can you do to prevent this? Ultimately you want to stop users from being able to execute any file they upload. By default web servers won’t attempt to execute files with image extensions, but it isn’t recommended to rely solely on checking the file extension, as a file with the name image.jpg.php has been known to get through. Some options are to rename the file on upload to ensure it has a safe file extension, or to change the file permissions so it cannot be executed.
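
As a minimal sketch of both options in Python; the upload folder and the allowed extension list are assumptions to adapt to your own site:

    import os
    import secrets

    ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}
    UPLOAD_DIR = "/srv/uploads"  # hypothetical folder outside the webroot

    def store_upload(original_filename, file_bytes):
        ext = os.path.splitext(original_filename)[1].lower()
        if ext not in ALLOWED_EXTENSIONS:
            # note: splitext("image.jpg.php") yields ".php", so it is rejected here
            raise ValueError("File type not allowed")
        # Discard the user-supplied name entirely and generate our own,
        # so a dangerous name never reaches the filesystem
        safe_name = secrets.token_hex(16) + ext
        path = os.path.join(UPLOAD_DIR, safe_name)
        with open(path, "wb") as f:
            f.write(file_bytes)
        os.chmod(path, 0o644)  # readable by the web server, never executable
        return safe_name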

In Wanstor’s opinion, the recommended solution is to prevent direct access to uploaded files altogether. This way, any files uploaded to your website are stored in a folder outside of the webroot or in the database as a blob. If your files are not directly accessible, you will need to create a script to fetch the files from the private folder (or an HTTP handler in .NET) and deliver them to the browser. Image tags support an src attribute that is not a direct URL to an image, so your src attribute can point to your file delivery script, provided you set the correct content type in the HTTP header.
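
A minimal sketch of the fetch half of such a delivery script in Python follows; the private folder is hypothetical, and a real handler would also send the returned content type in the HTTP header:

    import mimetypes
    import os

    PRIVATE_DIR = "/srv/uploads"  # hypothetical folder outside the webroot

    def fetch_upload(safe_name):
        """Look up a stored file for a delivery script to send to the browser."""
        if os.path.sep in safe_name or safe_name.startswith("."):
            raise ValueError("Invalid file name")  # block path traversal attempts
        path = os.path.join(PRIVATE_DIR, safe_name)
        content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
        with open(path, "rb") as f:
            return content_type, f.read()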

The majority of hosting providers deal with the server configuration for you, but if you are hosting your website on your own server then there are a few things you will want to check. For example, make sure you have a firewall set up and are blocking all non-essential ports.

If you are allowing files to be uploaded from the Internet, only use secure transport methods to your server, such as SFTP or SSH. Where possible, have your database running on a different server to that of your web server. Doing this means the database server cannot be accessed directly from the outside world; only your web server can access it, minimising the risk of your data being exposed. Finally, don’t forget about restricting physical access to your server.

HTTPS

HTTPS is a protocol used to provide security over the Internet. HTTPS guarantees to users that they’re communicating with the server they should be, and that nobody else can intercept or modify the content in transit. If you have anything that your users might want to remain private, it’s highly advisable to use only HTTPS to deliver it. That of course means credit card details and login pages. A login form will often set a cookie, for example, which is sent with every other request to your site that a logged-in user makes, and is used to authenticate those requests. An attacker stealing this cookie would be able to perfectly imitate a user and take over their login session. To defeat these kinds of attacks, you almost always want to use HTTPS for your entire site. It is also worth enabling HTTP Strict Transport Security (HSTS), a response header which tells browsers to only ever connect to your domain over HTTPS.
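
As a small illustration using Python’s standard http.cookies module, a session cookie can be marked Secure (only ever sent over HTTPS) and HttpOnly (invisible to JavaScript, blunting cookie theft via XSS); the token value here is hypothetical:

    from http import cookies

    c = cookies.SimpleCookie()
    c["session"] = "hypothetical-session-token"
    c["session"]["secure"] = True    # only ever sent over HTTPS connections
    c["session"]["httponly"] = True  # not readable by JavaScript in the browser
    print(c.output())  # a Set-Cookie header carrying the Secure and HttpOnly flags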

Website security tools

Once you think you have done all you can, it’s time to test your website security. The most effective way of doing this is via website security tools, a process often referred to as penetration testing, or pen testing for short. There are many commercial and free products to assist you. They work on a similar basis to the scripts hackers use, in that they test all known exploits and attempt to compromise your site using some of the previously mentioned methods, such as SQL injection.

Some free tools that are worth looking at include:

  • Netsparker (Free community edition and trial version available). Good for testing SQL injection and XSS.
  • OpenVAS claims to be the most advanced open source security scanner. Good for testing known vulnerabilities; it currently scans for over 25,000. It can however be difficult to set up and requires an OpenVAS server to be installed, which only runs on *nix. OpenVAS was a fork of Nessus before Nessus became a closed-source commercial product.
  • SecurityHeaders.io is a tool offering a free online check that quickly reports which of the security headers mentioned above (such as CSP and HSTS) a domain has enabled and correctly configured.
  • Xenotix XSS Exploit Framework is a tool from OWASP (Open Web Application Security Project) that includes a huge selection of XSS attack examples, which you can run to quickly confirm whether your site’s inputs are vulnerable in Chrome, Firefox and IE.

The results from automated tests can be daunting, as they present a wealth of potential issues. The important thing is to focus on the critical issues first. Each issue reported normally comes with a good explanation of the potential vulnerability. You will probably find that some of the issues rated low or medium in importance are not a concern for your site. If you wish to go a step further, you can manually try to compromise your site by altering POST/GET values. A debugging proxy can assist you here, as it allows you to intercept the values of an HTTP request between your browser and the server. A popular freeware application called Fiddler is a good starting point.

So what should you be trying to alter on the request? If you have pages which should only be visible to a logged-in user, try changing URL parameters such as the user id, or cookie values, in an attempt to view details of another user. Forms are another area worth testing: change the POST values to attempt to submit code that performs XSS, or to upload a server-side script.

Hopefully these tips will help keep your site and information safe. Thankfully, most Content Management Systems have inbuilt website security features, but it is still a good idea to have knowledge of the most common security exploits so you can make sure you are covered.

For more information about Wanstor’s IT security solutions, please click here – https://www.wanstor.com/managed-it-security-services-business.htm


Is your data centre under capacity and cost pressures? A co-location strategy may provide the answer

25th January 2018


For many businesses, the data centre is critical to successful day-to-day operation. But data centres are under pressure, not only from the volume of data they have to store and process for the business, but also from rising power costs, new environmental responsibilities which need to be adhered to, rapidly evolving data centre technologies, and the escalating costs of security, cooling, connectivity, management and maintenance. This means that when many businesses reach a certain capacity in their data centre, the IT department can no longer simply ask finance for the funds to build another one. Instead they need to explore other options, and usually it comes down to a choice of two things – retrofit the existing data centre or switch to a co-location provider.

At Wanstor we understand that for many businesses there are a number of ‘non-negotiables’ when it comes to the performance of their data centres.

Maintaining stable, secure power – Evolving technologies and changing service requirements affect power and cooling demands. Today’s data centre energy costs are substantial. At Wanstor we have seen some customers spending upwards of 70% of their operational costs just to keep an existing data centre operation running smoothly. Finding a way to control those costs is often a significant driver for businesses to move to hosted data centre solutions.

Redundancy and reliability – Most data centres have backup options for power in case of outages (UPS and a diesel generator). Many businesses spend a lot of time having to upgrade these assets each year to make sure they are in line with their data centres’ changing power requirements.

Keeping data safe – At Wanstor we believe data can be used in a variety of ways to transform a business, but how it is stored, managed and maintained means there is another side to it – RISK. Privacy has to be protected. Confidential information must be safeguarded. Industry compliance requirements and UK and EU regulations must be met. IT Managers need to know if their company’s data is stored on UK soil. Additionally, the constant stream of new developments in IT and physical security, driven by the continued evolution of IT security threats, means that many IT Managers are not confident their own data centres and systems are as secure as possible.

Growth vs Cost – Expand too quickly or too much and the IT Manager risks wasting resources. Limit growth and the IT Manager risks inhibiting the business’s potential. Building a brand-new data centre will give the IT Manager the flexibility to customise a build for their business. However, the advantages of a newly built data centre are usually wiped out when the finance team sees the high costs of construction involved, the difficulty of selecting the right build partner and the lack of appropriate locations. Indeed, as so much is expected of modern-day data centres, only large enterprises appear to be building them in today’s market. This is backed by Forrester Research, which estimates co-location is 37% less expensive than building your own data centre, based on costs over a 15-year period. This means for many small and medium sized companies, the only real solution available when they run out of data centre space is to outsource to a co-location provider.

Is hosting the right choice for your business?

For many small and medium sized businesses, moving to a hosted data centre model can be an effective way of offsetting the challenges associated with operating and maintaining their own data centre. At Wanstor we believe IT Managers should examine the questions below before deciding whether or not a hosting solution is the right choice for their business. Answering them should give an IT Manager a relatively quick view on insourcing versus outsourcing their data centre:

What are you looking to achieve with your data centre operations?

  • Address increasing power and cooling requirements?
  • Maximise uptime, availability and redundancy?
  • Keep technology up to date in an ever changing world?
  • Strengthen physical and data security?
  • Increase capacity whilst reducing power costs?
  • Investigate ways to optimise operational performance across systems and people?
  • Make sure IT teams are focussed on core business offerings?
  • Improve the efficiency and effectiveness of IT resource management and support?
  • Create a predictable cost model?
  • Reduce operational complexity and risk?

By defining what IT and the business want to achieve with a new data centre, IT Managers can then scope the solution their business needs. Quite often, when financial metrics are applied to these outcomes, IT Managers will conclude that outsourcing to a co-location provider is around a third cheaper than building a new data centre themselves. This means in the majority of cases the decision will be made to outsource to a co-location provider.

Once the decision to outsource data centre operations to a co-location provider has been made, it is important for the IT Manager to take the time to understand the key characteristics of a dependable co-location data centre. At Wanstor we believe when evaluating a provider’s facilities, IT Managers need to take a close look at the data centre’s capabilities, strengths and potential weaknesses.

From our extensive experience at Wanstor we believe important questions to ask a potential co-location provider include:

What tier ranking is the facility designed to meet? Does your business really need a Tier 4 facility (which you pay a significant amount more for) or will a Tier 3 data centre suffice?

What is your downtime tolerance level, and can the facility meet your business’s uptime needs? Remember downtime can affect your business – in terms of revenue, customer experience and brand image.

What security measures are in place? As hosted data centres are shared between multiple customers, advanced security features should be in place, including 24/7 x 365 on-site security, network security (intrusion detection, virtualized firewalls and load balancers), and the ability to monitor lines for traffic. At Wanstor we always recommend that IT Managers take the time to discover how much control a provider has over the network that will be delivering hosted data centre services. Additionally, it would be wise to ask about managed protection against DDoS attacks, event management and any other security services essential to your business.

Scalability – What are the options? As a business grows, it will need more data centre space and scalable capacity. Additionally any hosted data centre facility that is chosen should be able to adopt new technologies quickly. Cloud services and fully-managed virtualized environments offer many businesses an opportunity to enhance scalability and refocus key IT resources on revenue generating activities. You may not need these services today, but having your data hosted is usually a long-term decision because moving is expensive and risky. So IT Managers need to think beyond the initial contract term and make sure they have room to grow, and some allowance to meet future needs.

Auditing – When transferring data and applications to a data centre, the IT department are also transferring compliance responsibilities. Therefore the IT Manager should check that their data centre provider has the relevant compliance certifications and ask for proof of them.

Power consumption model – Service reliability will depend on a co-location provider’s ability to measure, monitor and allocate power usage. In an over-subscription power allocation model, a single reading is used for the entire data centre. Unused power from one customer can be resold to another and spikes in power demand from other customers can drain your resources. In the power reservation model, you get the total capacity you’ve paid for, whether or not you use it. You’ll always have enough energy to run your systems, and close monitoring ensures the provider can quickly detect and respond to any increases in your demand. This prevents the situation where one customer has the ability to affect another customer’s environment.

What environmental initiatives are included? Integrated sustainable energy technology is good both for operational cost savings and for the environment. Time should be taken to consider the co-location provider’s environmental track record; look for advancements such as virtualized environments, use of free cooling solutions and heat exchangers. All of these can be reliable, cost-effective alternatives to traditional technologies. Many service providers today aspire to improve their power usage effectiveness (PUE), an industry measure of energy efficiency. A service provider with a good PUE will also help keep power costs down.
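
As a rough worked example, PUE is total facility power divided by the power delivered to IT equipment: a facility drawing 1.5 MW in total to run a 1 MW IT load has a PUE of 1.5, while a perfectly efficient facility would score 1.0.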

Connectivity – The network linking a provider’s data centres is a critical component of their offering. Data centres typically process large volumes of traffic, and the network that connects the data centres to each other and to the business needs to sustain these volumes reliably and securely at all times. The physical location of data centres is also important, as providers often space out their facilities to minimise the risk of a mass disruption. But many data centre applications are sensitive to latency: the further your data needs to travel, the more likely it is that delay will become an issue. Therefore evaluating the connectivity performance and options from a co-location provider is crucial.

Beyond the characteristics of the data centre itself, the IT Manager will also want to be confident in a provider’s ability to meet business needs. Other questions the IT Manager should be asking alongside exploring the key areas above include:

What kind of network does the provider operate? How does the network cope with spikes in demand? What are the latency levels for different applications?

What kind of service-level agreements (SLAs) are offered? Are the hosting and connectivity service levels aligned and through the same provider? If they are not aligned, this could spell trouble, as one service may perform better than the other, leaving a range of performance issues.

Are professional services available to help with understanding technology options/upgrades? One size does not fit all. The right data centre provider will assess your needs, current capabilities and future plans, and will work with you to find a solution that meets your unique business goals.

Can services be scaled quickly and easily? IT needs will certainly continue to evolve, and not always in ways the IT Manager can predict. Look for power and capacity that can be scaled quickly, giving you the energy, space and bandwidth you need to grow your business.

Does the provider offer virtual hosting and cloud solutions? Dedicating a server to each application and configuring it to handle peak loads can be inefficient. Moving your applications to a virtual server farm can help keep costs low and give you the advantage of architectural flexibility. Virtual solutions also scale up quickly and easily, without requiring the IT Manager to invest in any hardware. Look for a provider equipped with the latest virtual service offerings, such as Infrastructure as a Service (IaaS), which gives IT complete control over capacity and charges only for the services used.

Does the provider invest continually in infrastructure and cloud capabilities? One of the benefits of moving to a hosted data centre model is taking advantage of new technology. A good provider will constantly invest in upgrades and advances e.g. by integrating cloud capabilities or adopting the latest innovations in physical and data security.

Are costs predictable? Working with a data centre provider will give you access to a sophisticated infrastructure without incurring significant capital costs. Make sure the monthly costs associated with the hosted service are stable and predictable, and challenge anything out of the norm, such as unforeseen maintenance requirements.

This article should help IT Managers think about co-location data centre solutions when they are reaching the limits of their own data centre infrastructure. For more information about Wanstor data centre co-location services download our brochure here.
