Big Data Management = Big Demands on your IT Infrastructure

11th May 2018

While the concept of big data management is nothing new, the tools and technology needed to exploit “big data” for commercial and organisational gain are now coming to maturity. Businesses involved in industries such as media, hospitality, retail, leisure & entertainment, and manufacturing have long been dealing with data in large volumes and unstructured formats or data that changes in near real time.

However, extracting meaning from this data has often been prohibitively difficult and expensive, requiring custom-built technology. Now, thanks to advancements in storage and analytics tools and technologies, all businesses and not-for-profit organisations can leverage big data to gain the insight needed to make their organisations more agile, innovative, and competitive.

At Wanstor, we understand there are a few important business drivers behind the growing interest in big data, which include:

  • The desire to gain a better understanding of customers
  • How to improve operational efficiency
  • The need for better risk management – Improved IT security and reduced fraud
  • The opportunity to innovate to stay ahead of the competition and/or attract/retain customers

In summary, these business drivers are largely the same goals that companies and not-for-profit organisations have had for years. But with advances in storage and analytics, they can now extract the value that lies within their existing data more quickly, easily, and cost-effectively.

At Wanstor we believe that to turn these business goals into realities, businesses and not-for-profit organisations must think about data management in different ways. Because big data is voluminous, unstructured, and ever-changing, approaches to dealing with it differ from techniques used with traditional data. To turn big data into opportunities, organisations should take the time to find technology solutions that feature the following components:

  • A versatile, scale-out storage infrastructure that is efficient and easy to manage and enables business teams to focus on getting results from data quickly and easily
  • A unified analytics platform for structured and unstructured data with a productivity layer that enables collaboration between IT teams and the wider business
  • Capabilities to be more predictive, driving actions from actual insights

With these components in place, business and not for profit organisations can build infrastructures that deliver on the promises of big data.

Despite the many benefits it delivers, big data is (for many organisations) putting undue demands on their IT teams, as it differs from traditional enterprise data in the following ways:

  • It’s voluminous – Medium and large scale organisations generate and collect large quantities of traditional data, but big data is often orders of magnitude more.
  • It’s largely unstructured – Big data includes Internet log files, scanned images, video surveillance clips, comments on a website, biometric information, and other types of digitized information. This data doesn’t fit neatly into a database. But unstructured data accounts for 80%+ of all data growth in many businesses today.
  • It’s changing – Big data often changes in real time or near real time, e.g. customer comments on a website. This data must be collected over significant periods of time in order to spot patterns and trends.

Furthermore, organisations are beginning to realise that to reap the full value of big data, they must be able to analyse and iterate on the entire range of available digital information. One-off snapshots of data do not necessarily tell the whole story or solve a particular business challenge. Efficiently collecting and storing that data for iterative analysis has a significant impact on an organisation’s storage and IT management resources. In short, IT storage professionals need to find big data solutions that fit the bill, but don’t strain already tight budgets or require significant investments in dedicated personnel.

Due to these new big data demands, as well as the importance of handling information correctly, most organisations consider managing data growth, provisioning storage, and performing fast, reliable, and iterative analytics to be top priorities. But as IT budgets have become squeezed, many data storage professionals are telling us at Wanstor that big data is placing their current IT infrastructures under extreme stress, with many looking to build scalable infrastructures within their data centres or outsource to a co-location or private cloud provider.

As you have probably guessed from the above paragraph, big data requires more capacity, scalability, and efficient accessibility without increasing resource demands. Traditionally, storage architectures were designed to scale up to accommodate growth in data. Scaling up means adding more capacity in the form of storage hardware and silos, but it doesn’t address how additional data will affect performance. In traditional storage architectures, RAID controller–based systems end up with large amounts of storage sprawl and create a siloed environment. Instead, organisations need to be able to achieve consolidation within a single, highly scalable storage infrastructure. They also need automated management, provisioning, and tiering functions to accommodate the rapid growth of big data.

At Wanstor we believe organisations of all sizes need storage architectures that are built with big data in mind and offer the following features:

  • Scalability – to accommodate large and growing data stores, including the ability to easily add additional storage resources as needed
  • High performance – to keep response times and data ingest times low and keep pace with the business
  • High efficiency – to reduce storage and related data centre costs
  • Operational simplicity – to streamline the management of a massive data environment without additional IT staff
  • Enterprise data protection – to ensure high availability for business users and business continuity in the event of a disaster
  • Interoperability – to integrate complex environments and to provide an agile infrastructure that supports a wide range of business applications and analytics platforms

Final thoughts – As the amount of unstructured data in organisations grows, companies and not-for-profit organisations of all sizes are learning that they need new approaches to managing that data. At Wanstor we believe they require an efficient and scalable storage strategy that helps them manage extreme data growth effectively. Wanstor has a range of big data experts who can work with your business to put in place the right data storage solution, one that incorporates scalability, improved performance (both I/O and throughput), and improved data availability. Scalable storage solutions, paired with powerful analytics tools that can derive valuable insight from large amounts of content, can help organisations of all sizes reap the benefits of “big data”. The only question you have to answer now is – is your infrastructure ready?

For more information about Wanstor’s data storage solutions click here – https://www.wanstor.com/data-centre-storage-business.htm .

Network Monitoring for the Private Cloud: A brief guide

3rd May 2018

Private Cloud Computing

‘Cloud computing’ as a concept has been around for over 10 years. Up until about 5 years ago, many businesses and not-for-profit organisations shunned the “cloud” because all they could see were problems and challenges with implementing a cloud-first policy, such as insufficient processor performance, enormous hardware costs and slow Internet connections making everyday use difficult.

However, today’s technology – broadband Internet connections and fast, inexpensive servers – provides the opportunity for business and not-for-profit IT teams to access only the services and storage space that are actually necessary, and to adjust these to meet current needs. For many small and medium-sized organisations, using a virtual server provided by a service provider introduces a wide range of possibilities for cost savings, improved performance and higher data security. The goal of such cloud solutions is a consolidated IT environment that effectively absorbs fluctuations in demand and capitalises on available resources.

The public cloud concept presents a number of challenges for a company’s IT department. Data security and the fear of ‘handing over’ control of the systems are significant issues. If an IT department is used to protecting its systems with firewalls and to monitoring the availability, performance and capacity usage of its network infrastructure with a monitoring solution, it is much more difficult to implement both measures in the cloud. Of course, all large public cloud providers claim they offer appropriate security mechanisms and control systems, but the user must rely on the provider to guarantee constant access and to maintain data security.

Because of the challenges and general nervousness around data security in public clouds, many IT teams are investigating the creation of a ‘private cloud’ as an alternative to the use of public cloud. Private clouds enable staff and applications to access IT resources as they are required, while the private computing centre or a private server in a large data centre is running in the background. All services and resources used in a private cloud are found in defined systems that are only accessible to the user and are protected from external access.

Private clouds offer many of the advantages of cloud computing and at the same time minimise the risks. As opposed to many public clouds, the quality criteria for performance and availability in a private cloud can be customised, and compliance with these criteria can be monitored to make sure they are achieved.

Before moving to a private cloud, an IT department must consider the performance demands of individual applications and usage variations. Long-term analysis, trends and peak loads can be identified through extensive network monitoring evaluations, and resource availability can be planned according to demand. This is necessary to guarantee consistent IT performance across virtualized systems. However, a private cloud will only function if a fast, highly reliable network connects the physical servers. Therefore, the entire network infrastructure must be analysed in detail before setting up a private cloud. This network must satisfy the requirements relating to transmission speed and stability, otherwise hardware or network connections must be upgraded.

Ultimately, even minor losses in transmission speed can lead to extreme drops in performance. At Wanstor we recommend IT administrators use a comprehensive network monitoring solution like PRTG Network Monitor when planning the private cloud. If an application (which usually equates to multiple virtualized servers) is going to be operated over multiple host servers (a “cluster”) in the private cloud, the application will need to use Storage Area Networks (SANs), which convey data over the network as a central storage solution. This makes network performance monitoring even more important.

In the terminal setups of the 1980s, the failure of a central computer could paralyse an entire company. The same scenario could happen if systems in the cloud fail. Current developments show that the world has gone through a phase of widely distributed computing and storage power (each workstation had a ‘full-blown’ PC) and returned to centralized IT concepts. The data is located in the cloud, and end devices are becoming more streamlined. The new cloud, therefore, complies with the old mainframe concept of centralized IT. The failure of a single VM in a highly-virtualized cloud environment can quickly interrupt access to 50 or 100 central applications. Modern clustering concepts are used to try to avoid these failures, but if a system fails despite these efforts, it must be dealt with immediately. If a host server crashes and pulls a large number of virtual machines down with it, or its network connection slows down or is interrupted, all virtualized services on this host are instantly affected, which, even with the best clustering concepts, often cannot be avoided.

A private cloud (like any other cloud) depends on the efficiency and dependability of the IT infrastructure. Physical or virtual server failures, connection interruptions and defective switches or routers can become expensive if they cause staff, automated production processes or online retailers to lose access to important operational IT functions.

This means a private cloud also presents new challenges to network monitoring. To make sure that users have constant access to remote business applications, the performance of the connection to the cloud must be monitored on every level and from every perspective.

At Wanstor we believe an appropriate network monitoring solution like PRTG accomplishes all of this with a central system; it notifies the IT administrator immediately in the event of possible disruptions within the private IT landscape both on location and in the private cloud, even if the private cloud is run in an external computing centre. A feature of private cloud monitoring is that external monitoring services cannot ‘look into’ the cloud, as it is private. An operator or client must therefore provide a monitoring solution within the private cloud and, as a result, the IT staff can monitor the private cloud more accurately and directly than a purchased service in the public cloud. A private cloud also enables unrestricted access when necessary. This allows the IT administrator to track the condition of all relevant systems directly with a private network monitoring solution. This encompasses monitoring of every individual virtual machine as well as the VMware host and all physical servers, firewalls, network connections, etc.

For comprehensive private cloud monitoring, the network monitoring should have the systems on the radar from user and server perspectives. If a company operates an extensive website with a web shop in a private cloud, for example, network monitoring could be set up as follows: A website operator aims to ensure that all functions are permanently available to all visitors, regardless of how this is realised technically. The following questions are especially relevant in this regard:

  • Is the website online?
  • Does the web server deliver the correct contents?
  • How fast does the site load?
  • Does the shopping cart process work?

These questions can only be answered if network monitoring takes place from outside the server in question. Ideally, network monitoring should be run outside the related computing centre as well. It would therefore be suitable to set up a network monitoring solution on another cloud server or in another computing centre.

It is crucial that all locations are reliable and that a failover cluster supports monitoring, so that interruption-free monitoring is guaranteed. This remote monitoring should include the following (a simple sketch of such checks follows the list):

  • Firewall, HTTP load balancer and Web server pinging
  • HTTP/HTTPS sensors
  • Monitoring loading time of the most important pages
  • Monitoring loading time of all assets of a page, including CSS, images, Flash, etc.
  • Checking whether pages contain specific words, e.g.: “Error”
  • Measuring loading time of downloads
  • HTTP transaction monitoring, for shopping process simulation
  • Sensors that monitor the remaining period of SSL certificate validity

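To make this more concrete, below is a minimal sketch of how a few of these external checks might be scripted, assuming Python with the third-party requests library; the shop URL, the expected page text and the thresholds are all illustrative, and in practice a tool such as PRTG would run these checks (and many more) on a schedule for you:

```python
import ssl
import socket
from datetime import datetime, timezone

import requests  # third-party: pip install requests

SITE = "https://shop.example.com"   # hypothetical web shop URL
HOSTNAME = "shop.example.com"

def check_page(url, must_contain, max_seconds=2.0):
    """Fetch a page and verify status, content and approximate loading time."""
    response = requests.get(url, timeout=10)
    problems = []
    if response.status_code != 200:
        problems.append(f"HTTP status {response.status_code}")
    if must_contain not in response.text:
        problems.append(f"expected text '{must_contain}' not found")
    if "Error" in response.text:
        problems.append("page contains the word 'Error'")
    # elapsed measures time to first response, a rough proxy for loading time
    if response.elapsed.total_seconds() > max_seconds:
        problems.append(f"slow load: {response.elapsed.total_seconds():.2f}s")
    return problems

def days_until_cert_expiry(hostname, port=443):
    """Return the number of days before the SSL certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    issues = check_page(SITE, must_contain="Add to basket")
    cert_days = days_until_cert_expiry(HOSTNAME)
    if cert_days < 14:
        issues.append(f"SSL certificate expires in {cert_days} days")
    # In a real deployment these findings would trigger notifications
    # to the IT administrator rather than just being printed.
    print(issues or "all external checks passed")
```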
If one of these sensors finds a problem, the network monitoring solution should send a notification to the IT administrator. Rule-based monitoring is helpful here: if a Ping sensor for the firewall times out, for example, PRTG Network Monitor offers the possibility to pause all other sensors to avoid a flood of notifications, as in this case the connection to the private cloud has clearly been lost.

Other questions that are crucial for monitoring the (virtual) servers operating in the private cloud include:

  • Does the virtual server run flawlessly?
  • Do the internal data replication and load balancer work?
  • How high are the CPU usage and memory consumption?
  • Is sufficient storage space available?
  • Do email and DNS servers function flawlessly?

These questions cannot be answered with external network monitoring. Monitoring software must be running on the server or the monitoring tool must offer the possibility to monitor the server using remote probes. Such probes monitor the following parameters, for example, on each (virtual) server that runs in the private cloud, as well as on the host servers:

  • CPU usage
  • Memory usage (page files, swap file, page faults, etc.)
  • Network traffic
  • Hard drive access, free disc space and read/write times during disc access
  • Low-level system parameters (e.g.: length of processor queue, context switches)
  • Web server’s HTTP response time

Critical processes, like SQL servers or Web servers, are often monitored individually, in particular for CPU and memory usage.

In addition, the firewall condition (bandwidth use, CPU) can be monitored. If one of these measured variables lies outside of a defined range (e.g. CPU usage over 95% for more than two or five minutes), the monitoring solution will send notifications to the IT administrator.
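As a rough illustration of the kind of threshold logic described above, the following sketch uses the third-party psutil library to gather some of the probe parameters listed earlier and flag sustained CPU overload and low disk space; the thresholds are examples only, and a dedicated tool such as PRTG handles the scheduling, history and notifications that a script like this lacks:

```python
import time

import psutil  # third-party: pip install psutil

CPU_LIMIT_PERCENT = 95      # threshold mentioned above
SUSTAINED_SECONDS = 120     # e.g. two minutes
DISK_FREE_MIN_GB = 20       # illustrative free-space floor

def sustained_cpu_overload(limit, duration, interval=5):
    """Return True if CPU usage stays above `limit` for `duration` seconds."""
    start = time.monotonic()
    while time.monotonic() - start < duration:
        if psutil.cpu_percent(interval=interval) < limit:
            return False   # usage dipped below the limit, no alert
    return True

def collect_metrics():
    """Gather the kind of parameters a remote probe would typically report."""
    disk = psutil.disk_usage("/")
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "swap_percent": psutil.swap_memory().percent,
        "disk_free_gb": disk.free / 1024**3,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

if __name__ == "__main__":
    metrics = collect_metrics()
    alerts = []
    if metrics["disk_free_gb"] < DISK_FREE_MIN_GB:
        alerts.append(f"low disk space: {metrics['disk_free_gb']:.1f} GB free")
    if sustained_cpu_overload(CPU_LIMIT_PERCENT, SUSTAINED_SECONDS):
        alerts.append(f"CPU above {CPU_LIMIT_PERCENT}% for {SUSTAINED_SECONDS}s")
    # A real monitoring tool would send these alerts as notifications
    # (email, SMS, push) to the IT administrator.
    print(alerts or metrics)
```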

Final thoughts

With the increasing use of cloud computing, IT system administrators are facing new challenges. A private cloud depends on the efficiency and dependability of the IT infrastructure. This means that the IT department must look into the capacity requirements of each application in the planning stages of the cloud in order to calculate resources to meet the demand. The connection to the cloud must be extensively monitored, as it is vital that the user has constant access to all applications during operation.

At the same time, smooth operation of all systems and connections within the private cloud must be guaranteed. A network monitoring solution should therefore monitor all services and resources from every perspective. This ensures continuous system availability.

For more information about Wanstor and PRTG network monitoring tools please visit – https://www.wanstor.com/paessler-prtg-network-monitor.htm

Overcoming Active Directory Administrator Challenges

23rd February 2018

The central role of Active Directory in business environments

Deployment of and reliance upon Active Directory in the enterprise continue to grow at a rapid pace, and Active Directory is increasingly becoming the central data store for sensitive user data as well as the gateway to critical business information. It provides businesses with a consolidated, integrated and distributed directory service, and enables the business to better manage user and administrative access to business applications and services.

Over the past 10+ years, Wanstor has seen Active Directory’s role in the enterprise drastically expand, as has the need to secure the data it both stores and enables access to. Unfortunately, native Active Directory administration tools provide little control over user and administrative permissions and access. The lack of control makes the secure administration of Active Directory a challenging task for IT administrators. In addition to limited control over what users and administrators can do within Active Directory, the database has limited ability to report on activities performed within it. This makes it very difficult to meet audit requirements, and to secure Active Directory. As a result, many businesses need assistance in creating repeatable, enforceable processes that will reduce their administrative overhead, whilst helping increase the availability and security of their systems.

Because Active Directory is an essential part of the IT infrastructure, IT teams must manage it both thoughtfully and diligently – controlling it, securing it and auditing it. Not surprisingly, with an application of this importance there are challenges to confront and resolve in reducing risk, whilst deriving maximum value for the business. This blog will examine some of the most challenging administrative tasks related to Active Directory.

Compliance Auditing and Reporting

To satisfy audit requirements, businesses must demonstrate control over the security of sensitive and business-critical data. However, without additional tools, demonstrating regulatory compliance with Active Directory is time-consuming, tedious and complex.

Auditors and stakeholders require detailed information about privileged-user activity. This level of granular information allows interested parties to troubleshoot problems and also provides information necessary to improve the performance and availability of Active Directory.

Auditing and reporting on Active Directory has always been a challenge. To more easily achieve, demonstrate and maintain compliance, businesses should employ a solution that provides robust, custom reporting and auditing capabilities. Reporting should provide information on what, when and where changes happen, and who made the changes.

Reporting capabilities should be flexible enough to provide graphical trend information for business stakeholders, while also providing granular detail necessary for administrators to improve their Active Directory deployment. Solutions should also securely store audit events for as long as necessary to meet data retention requirements and enable the easy search of these events.

Group Policy Management

Microsoft recommends that Group Policy be a cornerstone of Active Directory security. Leveraging the powerful capabilities of Group Policy, IT teams can manage and configure user and asset settings, applications and operating systems from a central console. It is an indispensable resource for managing user access, permissions and security settings in the Windows environment.

However, maintaining a large number of Group Policy Objects (GPOs), which store policy settings, can be a challenging task. For example, administrators should take special care in large IT environments with many system administrators, because making changes to GPOs can affect every computer or user in a domain in real time. However, Group Policy lacks true change-management and version-control capabilities. Due to the limited native controls available, accomplishing something as simple as deploying a shortcut requires writing a script. Custom scripts are often complex to create and difficult to debug and test. If the script fails or causes disruption in the live environment, there is no way to roll back to the last known setting or configuration. Malicious or unintended changes to Group Policy can have devastating and permanent effects on an IT environment and a business.

To prevent Group Policy changes that can negatively impact the business, IT teams often restrict administrative privilege to a few highly-skilled administrators. As a result, these staff members are overburdened with administering Group Policy rather than supporting the greater goals of the business. To leverage the powerful capabilities of Group Policy, it is necessary to have a solution in place that provides a secure offline repository to model and predict the impact of Group Policy changes before they go live. The ability to plan, control and troubleshoot Group Policy changes, with an approved change and release-management process, enables IT teams to improve the security and compliance of their Windows environment without making business-crippling administrative errors.

Businesses should also employ a solution for managing Group Policy that enables easy and flexible reporting to demonstrate that they’ve met audit requirements.

User Provisioning, Re-provisioning and De-provisioning

Most employees require access to several systems and applications, and each programme has its own account and login information. Even with today’s more advanced processes and systems, employees often find themselves waiting for days for access to the systems they need. This can cost businesses thousands of pounds in lost productivity and employee downtime.

To minimise workloads and expedite the provisioning process, many businesses treat Active Directory as the authoritative data store for managing user account information and access rights to IT resources and assets. Provisioning, re-provisioning and de-provisioning access via Active Directory is often a manual process. In a large business, maintaining appropriate user permissions and access can be a time-consuming activity, especially when the business has significant personnel turnover. Systems administrators often spend hours creating, modifying and removing credentials. In a large, complex business, manual provisioning can take days. There are no automation or policy enforcement capabilities native to Active Directory. With little control in place, there is no way to make sure that users will receive the access they need when they need it.

Additionally, there is no system of checks and balances. Administrative errors can easily result in elevated user privileges that can lead to security breaches, malicious activity or unintended errors that can expose the business to significant risk. Businesses should look for an automated solution to execute provisioning activities. Implementing an automated solution with approval capabilities greatly reduces the burden on administrators, improves adherence to security policies, improves standards and decreases the time a user must wait for access. It also speeds up the removal of user access, which minimizes the ability of a user with malicious intent to access sensitive data.
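As an illustration of what automated provisioning against Active Directory can look like, here is a minimal sketch using Python and the ldap3 library; the server name, OU path, naming convention and initial password are hypothetical, and a production solution would add approval workflows, secure credential handling, error handling and a full audit trail:

```python
from ldap3 import Server, Connection, NTLM, MODIFY_REPLACE

# Illustrative values - replace with your own domain details.
AD_SERVER = "dc01.example.local"
ADMIN_USER = "EXAMPLE\\svc-provisioning"
ADMIN_PASSWORD = "change-me"
PEOPLE_OU = "OU=Staff,DC=example,DC=local"

def provision_user(conn, first, last, department):
    """Create a user object and enable it with a standard set of attributes."""
    username = f"{first[0]}{last}".lower()
    user_dn = f"CN={first} {last},{PEOPLE_OU}"
    conn.add(
        user_dn,
        object_class=["top", "person", "organizationalPerson", "user"],
        attributes={
            "sAMAccountName": username,
            "userPrincipalName": f"{username}@example.local",
            "givenName": first,
            "sn": last,
            "displayName": f"{first} {last}",
            "department": department,  # consistent values help data integrity
        },
    )
    # Set an initial password (requires an LDAPS connection in practice),
    # then enable the account (userAccountControl 512 = normal, enabled).
    conn.extend.microsoft.modify_password(user_dn, "TempPassw0rd!")
    conn.modify(user_dn, {"userAccountControl": [(MODIFY_REPLACE, [512])]})
    return user_dn

if __name__ == "__main__":
    server = Server(AD_SERVER, use_ssl=True)
    conn = Connection(server, user=ADMIN_USER, password=ADMIN_PASSWORD,
                      authentication=NTLM, auto_bind=True)
    print(provision_user(conn, "Jane", "Smith", "Accounting"))
    conn.unbind()
```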

Secure Delegation of User Privilege

Reducing the number of users with elevated administrative privileges is a constant challenge for the owners of Active Directory. Many user and helpdesk requests require interaction with Active Directory, but these common interactions often result in elevated access for users who do not need it to perform their jobs. Because there are only two levels of administrative access in Active Directory (Domain Administrator or Enterprise Administrator), it is very difficult to control what users can see and do once they gain administrative privileges.

Once a user has access to powerful administrative capabilities, they can easily access sensitive business and user information, elevate their privileges and even make changes within Active Directory. Elevated administrative privileges, especially when in the hands of someone with malicious intent, dramatically increase the risk exposure of Active Directory and the applications, users and systems that rely upon it. At Wanstor we have found through our years of experience of dealing with Active Directory that it is not uncommon for a business to discover that thousands of users have elevated administrative privileges. Each user with unauthorized administrative privileges presents a unique threat to the security of the IT infrastructure and business. Coupled with Active Directory’s latent vulnerabilities, it is easy for someone to make business-crippling administrative changes. When this occurs, troubleshooting becomes difficult, as auditing and reporting limitations make it nearly impossible to quickly gather a clear picture of the problem.

To reduce the risk associated with elevated user privilege and make sure that users only have access to the information they require, businesses should seek a solution that can securely delegate entitlements. This is a requirement to meet separation-of-duties mandates, as well as a way to share the administrative load by securely delegating privileges to subordinates.

Change Auditing and Monitoring

To achieve and maintain a secure and compliant IT environment, IT administrators must control change and monitor for unauthorized changes that may negatively impact their business. Active Directory change auditing is an important procedure for identifying and limiting errors and unauthorized changes to Active Directory configuration. One single change can put a business at risk, introducing security breaches and compliance issues.

Native Active Directory tools fail to proactively track, audit, report and alert administrators about vital configuration changes. Additionally, native real-time auditing and reporting on configuration changes, day-to-day operational changes and critical group changes do not exist. This exposes the business to risk, as the IT team’s ability to correct and limit damage is dependent on their ability to detect and troubleshoot a change once it has occurred.

A change that goes undetected can have a drastic impact on a business. For example, someone who elevated their privileges and changed their identity to that of a senior member of the finance department could potentially access company funds, resulting in theft, fraudulent wire transfers and so forth. To reduce risk and help prevent security breaches, businesses should employ a solution that provides comprehensive change monitoring. This solution should include real-time change detection, intelligent notification, human-readable events, central auditing and detailed reporting. Employing a solution that encompasses all of these elements will enable IT teams to quickly and easily identify unauthorized changes, pinpoint their source, and resolve issues before they negatively impact the business.
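As a simple illustration of change detection, the sketch below polls Active Directory for objects modified in the last few minutes using Python and the ldap3 library; this is periodic polling rather than the real-time detection described above, the server and credentials are hypothetical, and a real auditing product would also capture who made each change and raise alerts for unauthorised ones:

```python
from datetime import datetime, timedelta, timezone

from ldap3 import Server, Connection, NTLM, SUBTREE

AD_SERVER = "dc01.example.local"          # illustrative values
BASE_DN = "DC=example,DC=local"
ADMIN_USER = "EXAMPLE\\svc-audit"
ADMIN_PASSWORD = "change-me"

def recently_changed_objects(conn, minutes=15):
    """Return directory objects modified within the last `minutes` minutes."""
    since = datetime.now(timezone.utc) - timedelta(minutes=minutes)
    timestamp = since.strftime("%Y%m%d%H%M%S.0Z")   # AD generalized time format
    # Note: a domain-wide search like this can return many entries;
    # a real tool would scope, page and baseline the results.
    conn.search(
        BASE_DN,
        f"(&(objectClass=*)(whenChanged>={timestamp}))",
        search_scope=SUBTREE,
        attributes=["distinguishedName", "whenChanged", "objectClass"],
    )
    return [(str(e.whenChanged), str(e.distinguishedName)) for e in conn.entries]

if __name__ == "__main__":
    server = Server(AD_SERVER, use_ssl=True)
    conn = Connection(server, user=ADMIN_USER, password=ADMIN_PASSWORD,
                      authentication=NTLM, auto_bind=True)
    for changed, dn in recently_changed_objects(conn):
        print(f"{changed}  {dn}")
    conn.unbind()
```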

Maintaining Data Integrity

It is important for businesses of all sizes to make sure that the data housed within Active Directory supports the needs of the business, especially as other applications rely on Active Directory for content and information.

Data integrity involves both the consistency of data and the completeness of information. For example, there are multiple ways to enter a phone number. Entering data in inconsistent formats creates data pollution. Data pollution inhibits the business from efficiently organizing and accessing important information. Another example of data inconsistency is the ability to abbreviate a department name. Think of the various ways to abbreviate “Accounting.” If there are inconsistencies in Active Directory’s data, there is no way to make sure that an administrator can group all the members of accounting together, which is necessary for payroll, communications, systems access and so on.

Another vital aspect of data integrity when working with Active Directory is the completeness of information. Active Directory provides no control over content that is entered natively. If no controls are in place, administrators can enter information in any format they wish and leave fields that the business relies upon blank.

To support and provide trustworthy information to all aspects of the business that rely on Active Directory, businesses should employ a solution that controls both the format and completeness of data entered in Active Directory. By putting these controls in place, IT teams can drastically reduce data pollution and significantly improve the uniformity and completeness of the content in Active Directory.
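As a small illustration of the kind of input control described above, the following sketch normalises phone numbers and department names before they would be written to the directory; the formats and the department mappings are assumptions, and each business would define its own canonical values:

```python
import re

# Illustrative mappings - each business would define its own canonical forms.
DEPARTMENT_ALIASES = {
    "acct": "Accounting",
    "accts": "Accounting",
    "accounting": "Accounting",
    "fin": "Finance",
}

def normalise_uk_phone(raw):
    """Reduce the many ways of writing a UK number to one canonical format."""
    digits = re.sub(r"[^\d+]", "", raw)          # strip spaces, dashes, brackets
    if digits.startswith("+44"):
        digits = "0" + digits[3:]                # +44 20 ... -> 020 ...
    if not re.fullmatch(r"0\d{9,10}", digits):
        raise ValueError(f"'{raw}' is not a recognisable UK phone number")
    return digits

def normalise_department(raw):
    """Map free-text department names onto a single agreed spelling."""
    key = raw.strip().lower()
    if key not in DEPARTMENT_ALIASES:
        raise ValueError(f"unknown department '{raw}' - entry rejected")
    return DEPARTMENT_ALIASES[key]

if __name__ == "__main__":
    # These values would be validated before being written to Active Directory.
    print(normalise_uk_phone("+44 20 7123 4567"))   # -> 02071234567
    print(normalise_department("Accts"))            # -> Accounting
```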

Self-Service Administration

Most requests made by the business or by users require access to and administration of Active Directory. This is often manual work and there are few controls in place to prevent administrative errors. Active Directory’s inherent complexity makes these errors common, and just one mistake could do damage to the entire security infrastructure. With the lack of controls, the business cannot have just anyone administering Active Directory.

While it may be practical to employ engineers and consultants to install and maintain Active Directory, businesses cannot afford to have their highly-skilled and valuable employees spending the majority of their time responding to relatively trivial user requests. Self-service administration and automation are logical solutions for businesses looking to streamline operations, become more efficient and improve compliance. This is achieved by placing controls around common administrative tasks and enabling the system to perform user requests without tasking highly skilled administrators.

Businesses should identify processes that are routine yet hands-on, and consider solutions that provide user self-service and automation of the process. Automation of these processes reduces the workload on highly skilled administrators; it also improves compliance with policies, since automation does not allow users to skip steps in the process. Businesses should also look for self-service and automation solutions that allow for approval and provide a comprehensive audit trail of events to help demonstrate policy compliance.

Final thoughts

Active Directory has found its home as a mission-critical component of the IT infrastructure. As businesses continue to leverage it for its powerful capabilities as a commanding repository, Active Directory is a vital part of enterprise security. Therefore, administrators must be able to control, monitor, administer and protect it with the same degree of discipline currently applied to other high-profile information such as credit card data, customer data and so forth. Because native tools do not enable or support the secure and disciplined administration of Active Directory, businesses must look for solutions that enable its controlled and efficient administration. These solutions help make sure the business information housed in Active Directory is both secure and appropriately serving the needs of the business.

Why flash storage is so important to the success of hybrid IT infrastructure

9th February 2018

IT leaders are facing critical decisions on how to best deploy data centre and cloud resources to enable digital transformation. The advantages of cloud models have been written about by many IT industry commentators, experts and opinion makers. Understandably, cloud computing is fundamental to delivering the agility, cost efficiencies and simplified operations necessary for modern IT workloads and applications at scale. However the truth is, even in today’s cloud era, IT leaders still need their own IT infrastructure and data centres to make IT work for their business.

At Wanstor, we believe that today and tomorrow’s data centres must support new models for resource pooling, self-service delivery, metering, elastic scalability and automatic chargebacks. They must deliver performance and agility that the business needs. No longer is it good enough to blame legacy IT equipment for standing in the way of business progress. IT departments must make sure they reduce complexity by leveraging technologies and architectures that are simple to deploy and manage. They must achieve levels of automation, orchestration and scalability that are not possible within data centres that operate on their own.

At Wanstor we have been thinking about the future of the data centre. We believe many IT departments are missing the fundamental question when reviewing their existing infrastructure plans, and that is:

How does the data storage strategy integrate within existing and future company owned IT infrastructure and public cloud infrastructures?

At Wanstor we believe the answer to the “storage strategy” question can be found in a storage strategy that encompasses all flash and no longer relies on cumbersome disks and tapes. All-flash storage is the single most important change an IT Manager will need to make to successfully build their future hybrid infrastructure model. Without a flexible and scalable all-flash storage architecture the future data centre and hybrid cloud model actually fails. The performance, cost efficiencies, simplicity, agility and scalability the modern IT department will need to successfully serve their business cannot be achieved without all-flash storage as the infrastructure foundation.

So how do IT Managers leverage the benefits of all-flash storage to build a service-centric data storage infrastructure required for their business? What are some of the innovations in pricing models and all-flash storage architectures that will help them create a cost-efficient, scalable, resilient and reliable hybrid IT infrastructure?

The first thing IT Managers need to recognise is that moving to all-flash storage for a truly hybrid IT infrastructure is not simply a matter of taking an extra step and buying some more kit, nor is it a case of ripping everything out and starting all over again. Instead it is an iterative process that will take place over a period of time, depending on how mature a business’s IT infrastructure model is at the moment and what needs to be delivered by IT for business success in the future.

Migrating applications onto all flash storage

If you are an IT decision maker, you realise that your business has probably spent quite a bit of budget and a significant amount of effort to make sure business-critical applications are supported by an underlying IT infrastructure that is reliable, robust and resilient. Indeed, you are probably beginning to experience performance challenges with a range of applications, particularly those that require high levels of IOPS. But applications and workloads that might see incremental improvements through faster, more responsive storage are unlikely to be the first place where IT will deploy all-flash systems. Instead, the IT Manager is likely to have specific applications and workloads where the performance challenges of spinning disk storage are difficult to overcome and the underlying storage infrastructure needs to be modernised to avoid putting the business at risk. Typical applications and workloads at this stage include databases supporting online transaction processing solutions for e-commerce, infrastructures supporting DevOps teams, and applications that are specific to a particular industry, which require levels of performance that traditional disk storage simply cannot deliver.

To understand which applications should be moved to all-flash storage first, it is important to do three things:

Understand the business’s own requirements for data storage, applications and budget considerations, and identify those workloads that are causing the most pain or providing the best opportunity to use all-flash storage to drive measurable business improvements.

Evaluate the benefits of all-flash storage solutions and how they can be applied to enhance and strengthen particular applications and workloads.

Compare leading all-flash solutions and determine which features, functions and pricing models will maximize the IT department’s ability to modernise workloads and begin a journey to an IT infrastructure model based around flash storage.

When evaluating the benefits of all flash storage, Wanstor believes IT Managers should consider the following critical factors:

Performance – All-flash storage will deliver performance that is at least 10 times greater than that of traditional disks. When thinking about performance, do not focus solely on IOPS; it is also about consistent performance at low latency. Make sure an all flash architecture is deployed that delivers consistent performance across all workloads and I/O sizes, particularly if starting with multiple workloads.

Total Cost of Ownership – The price of flash storage has come down dramatically in the past 12 months. If the IT and finance teams looked at flash several years ago and were scared off by the price, it is time to explore flash storage again. In fact some all flash storage providers have prices as low as £1k per TB of data.

Smaller storage footprint – This will happen through inline de-duplication and compression, along with thin provisioning, space-efficient snapshots and clones. In some cases the storage footprint can be reduced by a ratio of 5:1, depending upon the application and workload (a rough worked example of the cost impact follows this list).

Lower operational overheads – Through faster, simpler deployments, provisioning and scaling, and cost savings as less manual maintenance is required.

Availability and resiliency – All-flash arrays utilise a stateless controller architecture that separates the I/O processing plane from the persistent data storage plane. This architecture provides high availability (greater than 99.999%) and non-disruptive operations. The IT Manager can update hardware and software and expand capacity without reconfiguring applications, hosts or I/O networks, and without disrupting applications or sacrificing hardware performance.

Simpler IT operations – Many all-flash arrays are now plug and play, so simple that they can be installed in less than an hour in many cases. Additionally, storage administrators do not have to worry about configuration tuning and tweaking, saving hours or days of effort and associated expenses.
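Taking the figures quoted above, roughly £1,000 per TB of flash and a 5:1 data-reduction ratio, a back-of-the-envelope calculation of the effective cost and footprint might look like the following; the numbers are illustrative only, and real reduction ratios vary by application and workload:

```python
# A rough worked example using the figures quoted above - treat as illustrative.
price_per_raw_tb = 1000          # £ per TB of flash, as quoted above
data_reduction_ratio = 5         # 5:1 from de-duplication, compression, thin provisioning
dataset_tb = 100                 # hypothetical amount of data to store

raw_tb_needed = dataset_tb / data_reduction_ratio
effective_price_per_tb = price_per_raw_tb / data_reduction_ratio

print(f"Raw flash capacity required: {raw_tb_needed:.0f} TB")          # 20 TB
print(f"Effective cost per usable TB: £{effective_price_per_tb:.0f}")  # £200
print(f"Total flash spend: £{raw_tb_needed * price_per_raw_tb:,.0f}")  # £20,000
```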

Consolidation – The next stage of moving more applications to flash storage

Once you have put your first applications on an all-flash storage array, the improvements in performance should be enough for the IT and finance teams to decide to invest further in the technology and really accelerate their journey to a flash storage based IT infrastructure.

Most IT leaders will want to expand the benefits they will have seen from the initial deployment of flash storage to additional applications and workloads across the data centre. As the all-flash storage solution expands to additional applications, IT Managers will find that TCO benefits increase substantially. Because all-flash storage supports mixed workloads, IT Managers will be able to consolidate more applications on fewer devices, thus reducing IT infrastructure capital expenditure. By consolidating, IT Managers will also be able to maximise many of the cost savings mentioned earlier in this article (lower energy consumption, less floor space use, reduced software licensing fees etc).

In dense mixed workload applications, the TCO of using a flash storage solution will typically be 50% to 70% lower than a comparably configured traditional disk solution. Beyond the specific cost savings, the performance gains across more applications will drive significant business improvements and new opportunities, resulting in a more agile IT infrastructure.

Additionally, the right all-flash storage architecture will help future-proof storage infrastructure, so that the investments being made today will continue to provide value as all flash storage usage is expanded across the business.

Building a business ready cloud on all flash storage

What do IT departments want and need from their cloud infrastructures? How can they leverage the cost savings and agility of the public cloud model, and link it to the control, security, data protection and peace of mind which can be achieved with an on-premises cloud infrastructure?

From Wanstor’s recent experiences, many IT Managers want it all when it comes to cloud computing. They want to be able to provide all the features, functions and flexibility available from the leading public cloud service providers within their own IT infrastructure constraints. For many IT departments, deploying cloud models similar to the big 3 cloud providers in a private cloud environment is simply unrealistic, as the big 3 public cloud operators have far more cash, resources and availability in terms of their infrastructure platforms.

If the IT department is unable to provide a better alternative to a public cloud solution, it is highly likely users within a business will feel the need to go to the public cloud. This creates a fertile ground for shadow IT initiatives that can cause security problems and other risks.

Beyond delivering public cloud-like features and functionality for an IT infrastructure solution, the IT department may also want to improve in areas where the public cloud may fall short. Performance is an example of this – if you want to use cloud services to support high-performance computing, big data analytics or some of the other important next-generation IT initiatives, it is likely the IT team will have to pay a premium to a public cloud service provider to match the business’s requirements.

Security is another critical area where building your own cloud infrastructure will give the IT department much greater control and peace of mind, particularly as they begin thinking about supporting the most important business applications and data in the cloud. As the IT department moves from the first all-flash applications through consolidation and toward the all flash cloud, an important step will be to bridge the virtualization gap between servers and the rest of the IT infrastructure, namely storage and networking.

To deliver a basic cloud-type service based on a flash storage platform, IT’s list of wants must include:

Shared resources through automated processes – Users should be able to go straight to an on-premises cloud and choose the storage capacity and performance they need, for as long as they need it.

Automated metering and charging – Once users have chosen the resources they want, the cloud infrastructure should be able to meter their usage and create an automated chargeback mechanism so they pay for what they actually used (a simple chargeback calculation is sketched after this list).

Scalability – Once resources are used, they go back into the pool and become available to other users and departments. As storage capacity and performance requirements grow, the storage platform should be simple to upgrade, update and scale. With virtualization across servers, storage and networking, an all-flash storage array becomes the foundation for a cloud infrastructure.
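As a simple illustration of automated metering and chargeback, the sketch below aggregates metered usage into a monthly bill per department; the tariff rates and usage figures are invented, and a real private cloud platform would meter consumption automatically and feed it into the finance system:

```python
from dataclasses import dataclass

# Illustrative tariff - each business would set its own internal rates.
RATE_PER_TB_MONTH = 25.0        # £ per provisioned TB per month
RATE_PER_VCPU_HOUR = 0.02       # £ per vCPU-hour consumed

@dataclass
class UsageRecord:
    department: str
    provisioned_tb: float
    vcpu_hours: float

def monthly_chargeback(records):
    """Aggregate metered usage into a per-department chargeback figure."""
    bills = {}
    for r in records:
        cost = (r.provisioned_tb * RATE_PER_TB_MONTH
                + r.vcpu_hours * RATE_PER_VCPU_HOUR)
        bills[r.department] = bills.get(r.department, 0.0) + cost
    return bills

if __name__ == "__main__":
    usage = [
        UsageRecord("Finance", provisioned_tb=4.0, vcpu_hours=1500),
        UsageRecord("Marketing", provisioned_tb=1.5, vcpu_hours=300),
    ]
    for dept, amount in monthly_chargeback(usage).items():
        print(f"{dept}: £{amount:,.2f}")   # Finance: £130.00, Marketing: £43.50
```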

In this article we have discussed all-flash storage and the foundation it provides for a truly hybrid IT infrastructure. Without the benefits of all-flash storage, businesses will not be able to modernise their infrastructures to deliver cloud services. It is no coincidence that the largest cloud providers rely on all-flash storage solutions as their storage foundation. As discussed, you can take the journey in stages, starting small with a single application or two, and then adding more applications through consolidation and virtualization. You can also implement multiple stages at once. Or you can do everything at once with all-flash storage solutions.

At Wanstor we believe the time for flash storage is now. The technology is great and at a price point where most businesses will see a return on their storage investments within 12 months due to the improved performance they receive across their business operations.

For more information about flash storage and how Wanstor can help your business with its IT infrastructure strategy and storage platforms, please visit https://www.wanstor.com/data-centre-storage-business.htm

Is your private cloud strategy really working? What is your framework for success?

22nd December 2017

Whether you want to take your IT operations to the public cloud, keep them on-premise, host off-premise using a private cloud model, or indeed choose to invest in a hybrid configuration, the IT Manager must start with a clear understanding of what they are trying to achieve from an IT and business perspective before embarking on their cloud journey.

This may seem like stating the obvious, but at Wanstor we have seen several cases recently where businesses have invested in cloud computing models without thinking about the outcomes they want from a cloud computing strategy.

It can be tempting to get caught up in debates and discussions about “cloud technology”; after all, it is a major IT trend which lots of IT and business leaders are talking about in various online and offline publications. However, just because something is a hot topic doesn’t mean the fundamental questions of business need should go unaddressed:

  • What are the key drivers for change?
  • Do we need to change?
  • Are we trying to reduce operational costs?
  • What do we need to do to improve the IT infrastructure environment to better support the business?
  • How can we make staff more productive through IT?
  • What is the right approach for achieving IT objectives over the next 12 months?

Obviously these are not simple questions with simple answers. As Wanstor has learned from our experience of working with hundreds of businesses across the UK on private cloud migration projects, the unique challenges of cloud computing require new ways of thinking, planning and cross-business collaboration to achieve common IT and business goals.

We’ve also seen that success can happen early in a cloud computing engagement by those IT leaders who are able to frame a realistic strategy at the beginning, which has definition and appreciation for the capabilities and limitations of the businesses they lead.

At Wanstor we say business decision makers need to have a “cloud frame of mind.” We believe a “cloud frame of mind” should be used to tackle the various strategic considerations required in a private cloud deployment project.

So let’s start at the beginning, what are you trying to do with your private cloud project?

Generally, private clouds are invested in for one of three major business reasons:

Agility

  • Reduce time to market: Implement new business solutions quickly to accelerate revenue growth.
  • Better enable the solution development life cycle: Speed up business solutions through better development and test, and a fast path to production.
  • Be more responsive to business change: Deliver quickly on new requirements for existing business solutions.

Cost

  • Reduce operational costs: Optimize daily operational costs like people, power, and space.
  • Reduce capital costs or move to annuity-based operational costs: Benefit from reduced IT physical assets and more pay-per-use services.
  • Make IT costs transparent: Service consumers better understand what they are paying for.

Quality

  • Consistently deliver to better defined service levels: Better service leads to increased customer satisfaction.
  • Ensure continuity of service: Minimise service interruption.
  • Ensure regulatory compliance: Manage the compliance requirements that may increase in complexity with online services.

Where businesses locate their needs amongst these primary drivers and define their objectives as they consider their cloud computing options is a basic starting point in the process. For many in IT the private cloud is proving especially attractive, mainly for what it offers in terms of control over matters of security, data access, and regulatory compliance. Their primary interest in a private cloud architecture revolves around the pressures to cut costs without sacrificing control over essential data, core applications, or business-critical processes. The main secondary interests around private cloud computing are more to do with business growth and the possibilities it offers in terms of scaling workloads at different times of the year. This shows that IT leaders are beginning to think seriously about cloud computing as a way to turn IT into a business enabler rather than being seen as a costly department by other business unit leaders.

As identified above, there are several drivers IT leaders are investigating as reasons to move workloads to a private cloud model. Once the IT leader has identified business needs and objectives, they should take the time to understand the capabilities, limitations and complexities of their current IT environment, which starts by performing an analysis of technical and organisational maturity against the different capabilities of cloud computing. The next step is then to determine where you want to take your IT team and the business it is serving, and to assess the prerequisites for the desired objectives.

Many of the businesses we work with start at a basic stage along their cloud optimisation journey. Usually they have already managed to consolidate infrastructure resources for better cost efficiencies through virtualization. If your business fits this profile, an acceptable outcome might be to advance to the next stage by implementing more sophisticated infrastructure-level resource pooling, which would achieve still greater cost savings as well as a measure of improved time to market. Similarly, your current business capabilities may put you somewhere in the middle of the cloud maturity model, with a relatively high degree of sophistication in business areas you consider your top priorities, such as being able to respond to seasonal shifts in demand.

While your ultimate goal might be to bring in platform as a service (PaaS) and software as a service (SaaS) architectures so you can leverage a larger set of hybrid cloud capabilities, such as anytime, anywhere access for your customers built on a unified set of compute, network and storage resources, your near-term focus in the context of an infrastructure as a service (IaaS) model may simply be moving the dial on automated provisioning and de-provisioning of resources. It is in this approach, making deliberate, incremental progress in the service of a longer-term strategy, that real IT transformation occurs on a private cloud model.

The way forward is to recognise that changing to a functional private cloud model is an evolutionary process, where the investments you make in technology solutions must be evenly matched at each step by the maturity of your business in managing them. Your strategy must be carefully applied in those areas where your business is likely to benefit most. Indeed, not all capabilities of a private cloud need to be, or should be exploited.

The real task lies in balancing the potential benefits of a private cloud solution against actual business needs, understanding your capabilities and limitations at each stage of the process, and putting a plan in place that charts a realistic, achievable course of action for getting it done.

The objectives you choose for your private cloud will raise a number of questions about the various technical and organisational implications of implementing your solution. Below are some examples of the kinds of questions IT Managers need to be able to ask in order to frame a comprehensive and realistic strategy for achieving private cloud objectives.

Self-service – Do you want to allow your users to provision the resources they need on demand without human intervention? How much control should you relinquish? What are the potential consequences of offering a self-service model for common tasks? Will cloud resources be left unchecked and unused if individual users can select their own licences and usage limits, and if so, how much will unused accounts cost the business?

Usage-based – Pay-per-service, or “chargeback,” is one of the hallmarks of cloud computing, and if your private cloud strategy includes driving greater transparency of costs for better resource planning, you need to know the incentives you are trying to drive. Are you trying to reward good behaviour and punish bad? Do you wish to push more round-the-clock workloads to night time operations for power savings that support your company’s environmental initiatives?

Elasticity – Being able to respond efficiently to fluctuations in resource usage can represent a major selling point for cloud computing. It is important to consider first whether you really need a sophisticated system of automated provisioning and de-provisioning of servers to deal with fluctuations in demand. If demand is significant and relatively unpredictable, then this capability may be appropriate. If the need is regular and predictable, straightforward automation may be sufficient for your purposes. Other questions you need to ask: which applications are priorities, and which can be deprioritised?

Pooled resources – Consolidating resources to save on infrastructure, platform, and/or software costs is a common goal for large-scale IT operations. If you’re in a medium/large business with several independent departments potentially with their own IT operations, you are likely to encounter critical questions of process: E.g. Will independent groups deal with the inherent limitations of shared infrastructure and services? Will standardised configurations come at the cost of the optimised systems to which they’ve grown accustomed? As you move forward in the process of pooling your resources to get the benefits, you need to be aware of the likely trade-offs in putting everyone on a standard set of services. It may well be worth the cost to the business as a whole, but it may not seem that way to those who lose capabilities or levels of service to which they’ve been accustomed.

Comprehensive network access – As you move out from behind the business firewall and away from tightly controlled client configurations and network access scenarios, there are several important considerations that will need to inform your strategy beyond the obvious concerns over security, such as the nature and extent of supportability: What kinds of personal devices will you support, and to what degree? How will mobile clients (smartphones and tablets, across different operating systems) access network resources, and will you have the right levels of bandwidth to service them? What forms of authentication will you support?

Whatever objectives you are aiming to achieve, the important point to note is that building a private cloud is a process for which there are numerous tactical and strategic considerations. A successful private cloud implementation relies on the ability to think through all facets of the undertaking, clearly understanding the dependencies, trade-offs, limitations, and opportunities of any particular strategy. The reality for most businesses is that an incremental private cloud strategy is the only realistic path, given the technical and organisational complexity of existing IT operations, which have been built up through significant investment over many years.

Reconciling expectations with the realities of cloud computing in a business IT context can be a challenge. Many IT leaders understand why an incremental approach is needed, but those outside IT are often less clear about the real implications of implementing a cloud solution. The right strategy for achieving private cloud objectives must therefore include an appropriate communications strategy for setting and managing expectations across the business as a whole. With the whole business informed, from the board room to the front office, the hard work of defining and executing your private cloud strategy is far more likely to achieve its objectives and set your business on the path to long-term success in the cloud.

For more information about Wanstor’s private cloud services click here.


Is your digital transformation working? Putting the basics in place

3rd November 2017
|

Digital Transformation

In the current business environment, it’s not enough to automate processes and increase efficiency. To succeed, companies need to be unique and truly differentiate themselves from the competition. Your customers are demanding a more personalised service, and their expectations about the service they receive from your business continue to rise every day. To meet those rising expectations and stay competitive, companies need to move to a relationship- and value-based interactive model with their customers. This increasingly means starting any business project, initiative or budgetary decision with the customer impact in mind. This is where digital strategies start and digital transformation can happen. Many businesses have started ‘digital’ programmes of work, but have not yet seen the rewards of their efforts.

At Wanstor we believe there are 4 things businesses should do before embarking on a digital transformation strategy. Under no circumstances is it good enough to dip a toe into digital transformation: business leaders should either commit fully to a digital transformation programme of work or decide when they will. In summary, undertaking a digital transformation programme to execute a digital strategy is not an easy task, and half-hearted approaches simply won’t work.

So what are the 4 things all business leaders should do if they want to successfully execute ‘digital’?

Take the time to develop a strategy

The strategy phase of the digital transformation process should help a business define and understand the problems it wants to solve and how it is going to solve them. The old way of working is to start with existing problems and requirements and then develop a solution. This approach still has value, but it only deals with problems that exist today, rather than anticipating potential problems and pitfalls in the future. At Wanstor we recommend that, when building a digital transformation strategy, businesses instead focus on outcomes and end goals. Ask questions such as: What does success look like? What customer experience do we want to create? What story do we want to tell to the business and customers?

Think about the key themes of your transformation and the experience you want to deliver. For example, a restaurant owner may want to personalise the dining experience further. Having captured a vision of what they want to do, the restaurateur now requires a programme of work to help achieve it. This is where digital comes into play. The restaurateur wants to create an actionable strategic vision that wraps around business objectives. To do this, they first need to identify gaps across people, processes, technology and offerings, and then create a roadmap to success. As well as having a clear plan, it is important that any digital initiative is completed at speed, both to stay ahead of the competition and to shorten the time-to-benefit of projects that will affect the business and improve the customer experience.

Design with the customer experience in mind

Designing any solution to a problem in a digital world should always start with the customer in mind. This means thinking about how customers and staff will interact with technology to improve, for example, the dining experience. First, focus on the experiences you want to create for your end-users, not the requirements of the solution. Also consider how you can change the way employees engage and collaborate and the way customers interact with your business. Your goal here should be to build the right experience, one that allows your staff and ultimately your customers to reach their end goals, e.g. a more efficient front of house operation resulting in a better customer dining experience.

Put the right pieces in place

Having a strategy and a design is a great start to your digital transformation. But if you can’t assemble the right pieces – people, propositions, processes and technology – you haven’t actually got anything apart from random parts. At this stage it’s time to start unifying the team and the processes and ultimately start shaping the experience. E.g. a restaurant wants to make online bookings easier on its website. To accomplish this, it needs to connect the different points of the customer journey with the booking system. What does the customer do when they land on the restaurant’s website? How easy is it to find the booking application? How is the booking data relayed to the restaurant they want to book a table in? Do staff at the restaurant understand the booking system and the customer’s requirements when they book?

It doesn’t matter how many systems need to be involved; it should all be seamless and easy for the customer, who should feel like they are accessing one single system. At Wanstor we usually find that for processes like ‘restaurant booking’, most restaurant businesses already have the right pieces of technology and parts of the process, but joining them together is quite often the problem. The key to success is leveraging all the disparate systems, services and existing technologies to power elements of the digital ecosystem. Quite often a simple gap analysis of where you are now vs where you want to get to highlights areas which need to be joined up or require integration work. By putting the disparate pieces together, ‘digital’ can actually start to become a reality.
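To illustrate what “joining the pieces together” can look like in practice, the sketch below shows a thin orchestration layer passing one booking through three existing systems in sequence. Every system and function name here is a hypothetical stand-in for whatever website form, table-management system and front-of-house tooling a restaurant already runs.

    # Illustrative orchestration layer joining three existing, disparate systems
    # so the customer experiences a single seamless booking. All names are
    # hypothetical stand-ins for a restaurant's real systems.

    def capture_web_booking() -> dict:
        # Stands in for the booking form on the restaurant's website.
        return {"name": "A. Customer", "party": 4, "time": "19:30", "site": "Borough"}

    def reserve_table(booking: dict) -> str:
        # Stands in for the existing table-management / reservations system.
        return f"Table for {booking['party']} held at {booking['time']} ({booking['site']})"

    def notify_front_of_house(booking: dict, confirmation: str) -> None:
        # Stands in for whatever the branch team already uses (email, EPOS, app).
        print(f"Front of house ({booking['site']}): {confirmation} for {booking['name']}")

    booking = capture_web_booking()
    confirmation = reserve_table(booking)
    notify_front_of_house(booking, confirmation)

The customer only ever sees the website; the orchestration layer is what makes the existing systems behave as one.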

Get ready for success

The final piece of the digital transformation puzzle is getting and keeping everything running smoothly. Regardless of your deployment method, you will want to implement a plan for continuous management and support. This starts with a dedicated digital transformation team who can help implement governance and a plan to keep your ‘digital’ roadmap and architecture up to date at all times. IT teams should consider adding a shared support structure, along the lines of a shared services centre, with skills across a variety of disciplines, such as change management, process optimisation and agile management, so they can build repeatable processes that are supported by a dedicated group of experts. If you don’t have these skills in-house, find a managed service partner who can supplement the team with them.

In summary, at Wanstor we usually see digital transformation programmes fail, or not deliver the benefits they promise, when teams, people, processes and technologies are disconnected. By following the 4 steps above you should by now have grasped that digital transformation is not just about technology but about business change. Those businesses which put the right strategy, design and processes in place will ultimately achieve their digital transformation goals.

At Wanstor we believe ‘digital’ can bridge many business and technology gaps. By bringing together a top-down business approach with bottom-up operational experience, ‘digital transformation’ adds customer, employee and operational value by leveraging disparate products, services and existing technologies to create, build and manage digital ecosystems.

By using digital transformation programmes to innovate and improve, businesses can create a long-term competitive advantage – one that improves customer loyalty, increases customer spend and reduces business operating costs.
