Big Data Management = Big Demands on your IT Infrastructure

11th May 2018

While the concept of big data management is nothing new, the tools and technology needed to exploit “big data” for commercial and organisational gain are now coming to maturity. Businesses in industries such as media, hospitality, retail, leisure & entertainment, and manufacturing have long dealt with data that arrives in large volumes, in unstructured formats, or that changes in near real time.

However, extracting meaning from this data has often been prohibitively expensive, requiring custom-built technology. Now, thanks to advances in storage and analytics tools and technologies, businesses and not for profit organisations of all kinds can leverage big data to gain the insight needed to make their organisations more agile, innovative, and competitive.

At Wanstor, we understand there are a few important business drivers behind the growing interest in big data, which include:

  • The desire to gain a better understanding of customers
  • How to improve operational efficiency
  • The need for better risk management – improved IT security and reduced fraud
  • The opportunity to innovate to stay ahead of the competition and to attract and retain customers

In summary, these business drivers are largely the same goals that companies and not for profit organisations have had for years. But with advances in storage and analytics, they can now extract the value that lies within their existing data more quickly, easily, and cost-effectively.

At Wanstor we believe that to turn these business goals into reality, businesses and not for profit organisations must think about data management in different ways. Because big data is voluminous, unstructured, and ever-changing, approaches to dealing with it differ from the techniques used with traditional data. To turn big data into opportunities, organisations should take the time to find technology solutions that feature the following components:

  • A versatile, scale-out storage infrastructure that is efficient and easy to manage and enables business teams to focus on getting results from data quickly and easily
  • A unified analytics platform for structured and unstructured data with a productivity layer that enables collaboration between IT teams and the wider business
  • Capabilities to be more predictive, driving actions from actual insights

With these components in place, businesses and not for profit organisations can build infrastructures that deliver on the promises of big data.

Despite the many benefits it delivers, big data is (for many organisations) putting undue demands on their IT teams, as it differs from traditional enterprise data in the following ways:

  • It’s voluminous – Medium and large organisations generate and collect large quantities of traditional data, but big data is often orders of magnitude larger.
  • It’s largely unstructured – Big data includes Internet log files, scanned images, video surveillance clips, comments on a website, biometric information, and other types of digitized information. This data doesn’t fit neatly into a database, yet unstructured data accounts for more than 80% of all data growth in many businesses today.
  • It’s changing – Big data often changes in real time or near real time, e.g. customer comments on a website, and must be collected over significant periods of time in order to spot patterns and trends.

Furthermore, organisations are beginning to realize that to reap the full value of big data, they must be able to analyse and iterate on the entire range of available digital information. One-off snapshots of data do not necessarily tell the whole story or solve a particular business challenge. Efficiently collecting and storing that data for iterative analysis has a significant impact on an organisation’s storage and IT management resources. In short, IT storage professionals need to find big data solutions that fit the bill but don’t strain already tight budgets or require significant investments in dedicated personnel.

Due to these new big data demands, as well as the importance of handling information correctly, most organisations consider managing data growth, provisioning storage, and performing fast, reliable, iterative analytics to be top priorities. But as IT budgets have been squeezed, many data storage professionals tell us at Wanstor that big data is placing their current IT infrastructures under extreme stress, with many looking to build scalable infrastructures within their data centres or to outsource to a co-location or private cloud provider.

As the above suggests, big data requires more capacity, scalability, and efficient accessibility without increasing resource demands. Traditionally, storage architectures were designed to scale up to accommodate growth in data. Scaling up means adding more capacity in the form of storage hardware and silos, but it doesn’t address how additional data will affect performance. In traditional storage architectures, RAID controller–based systems end up with large amounts of storage sprawl and create a siloed environment. Instead, organisations need to be able to achieve consolidation within a single, highly scalable storage infrastructure. They also need automated management, provisioning, and tiering functions to accommodate the rapid growth of big data.

At Wanstor we believe organisations of all sizes need storage architectures that are built with big data in mind and offer the following features:

  • Scalability – to accommodate large and growing data stores, including the ability to easily add additional storage resources as needed
  • High performance – to keep response times and data ingest times low, so storage keeps pace with the business
  • High efficiency – to reduce storage and related data centre costs
  • Operational simplicity – to streamline the management of a massive data environment without additional IT staff
  • Enterprise data protection – to ensure high availability for business users and business continuity in the event of a disaster
  • Interoperability – to integrate complex environments and to provide an agile infrastructure that supports a wide range of business applications and analytics platforms

Final thoughts – As the amount of unstructured data in organisations grows, companies and not for profit organisations of all sizes are learning they need new approaches to managing that data. At Wanstor we believe they require an efficient and scalable storage strategy that helps them manage extreme data growth effectively. Wanstor has a range of big data experts who can work with your business to put in place the right data storage solution, one that incorporates scalability, improved performance (both I/O and throughput), and improved data availability. Scalable storage solutions, paired with powerful analytics tools that can derive valuable insight from large amounts of content, can help organisations of all sizes reap the benefits of “big data”. The only question you have to answer now is: is your infrastructure ready?

For more information about Wanstor’s data storage solutions, visit https://www.wanstor.com/data-centre-storage-business.htm


Network Monitoring for the Private Cloud: A brief guide

3rd May 2018

Private Cloud Computing

‘Cloud computing’ as a concept has been around for over 10 years. Until about 5 years ago, many businesses and not for profit organisations shunned the “cloud” because all they could see were problems and challenges with implementing a cloud-first policy: insufficient processor performance, enormous hardware costs, and slow Internet connections that made everyday use difficult.

However, today’s technology, with broadband Internet connections and fast, inexpensive servers, gives business and not for profit IT teams the opportunity to access only the services and storage space that are actually necessary, and to adjust these to meet current needs. For many small and medium sized organisations, using a virtual server provided by a service provider introduces a wide range of possibilities for cost savings, improved performance and higher data security. The goal of such cloud solutions is a consolidated IT environment that effectively absorbs fluctuations in demand and capitalizes on available resources.

The public cloud concept presents a number of challenges for a company’s IT department. Data security and the fear of ‘handing over’ control of the systems are significant issues. If an IT department is used to protecting its systems with firewalls and to monitoring the availability, performance and capacity usage of its network infrastructure with a monitoring solution, it is much more difficult to implement both measures in the cloud. Of course, all large public cloud providers claim they offer appropriate security mechanisms and control systems, but the user must rely on the provider to guarantee constant access and to maintain data security.

Because of the challenges and general nervousness around data security in public clouds, many IT teams are investigating the creation of a ‘private cloud’ as an alternative to the use of public cloud. Private clouds enable staff and applications to access IT resources as they are required, while the private computing centre or a private server in a large data centre is running in the background. All services and resources used in a private cloud are found in defined systems that are only accessible to the user and are protected from external access.

Private clouds offer many of the advantages of cloud computing while minimising the risks. As opposed to many public clouds, the quality criteria for performance and availability in a private cloud can be customised, and compliance with these criteria can be monitored to make sure they are achieved.

Before moving to a private cloud, an IT department must consider the performance demands of individual applications and variations in usage. Long-term analyses, trends and peak loads can be identified via extensive network monitoring evaluations, and resource availability can be planned according to demand. This is necessary to guarantee consistent IT performance across virtualized systems. However, a private cloud will only function if a fast, highly reliable network connects the physical servers. Therefore, the entire network infrastructure must be analysed in detail before setting up a private cloud. The network must satisfy the requirements for transmission speed and stability; otherwise, hardware or network connections must be upgraded.

Ultimately, even minor losses in transmission speed can lead to extreme drops in performance. At Wanstor we recommend IT administrators use a comprehensive network monitoring solution, such as PRTG Network Monitor, when planning the private cloud. If an application (which usually equates to multiple virtualized servers) is going to be operated across multiple host servers (a cluster) in the private cloud, the application will need to use Storage Area Networks (SANs), which convey data over the network as a central storage solution. This makes network performance monitoring even more important.

In the terminal set-ups of the 1980s, the breakdown of a central computer could paralyze an entire company. The same scenario could happen if systems in the cloud fail. Current developments show that the world has gone through a phase of widely distributed computing and storage power (each workstation had a ‘full-blown’ PC) and returned to centralized IT concepts. The data is located in the cloud, and end devices are becoming more streamlined. The new cloud, therefore, echoes the old mainframe concept of centralized IT. The failure of a single VM in a highly virtualized cloud environment can quickly interrupt access to 50 or 100 central applications. Modern clustering concepts are used to try to avoid these failures, but if a system fails despite these efforts, it must be dealt with immediately. If a host server crashes and pulls a large number of virtual machines down with it, or if its network connection slows down or is interrupted, all virtualized services on that host are instantly affected, something that often cannot be avoided even with the best clustering concepts.

A private cloud (like any other cloud) depends on the efficiency and dependability of the IT infrastructure. Physical or virtual server failures, connection interruptions and defective switches or routers can become expensive if they cause staff, automated production processes or online retailers to lose access to important operational IT functions.

This means a private cloud also presents new challenges to network monitoring. To make sure that users have constant access to remote business applications, the performance of the connection to the cloud must be monitored on every level and from every perspective.

At Wanstor we believe an appropriate network monitoring solution like PRTG accomplishes all of this with a central system; it notifies the IT administrator immediately in the event of possible disruptions within the private IT landscape, both on location and in the private cloud, even if the private cloud is run in an external computing centre.

A feature of private cloud monitoring is that external monitoring services cannot ‘look into’ the cloud, as it is private. An operator or client must therefore provide a monitoring solution within the private cloud; as a result, IT staff can monitor the private cloud more accurately and directly than a purchased service in the public cloud. A private cloud also enables unrestricted access when necessary, allowing the IT administrator to track the condition of all relevant systems directly with a private network monitoring solution. This encompasses monitoring of every individual virtual machine as well as the VMware host and all physical servers, firewalls, network connections, etc.

For comprehensive private cloud monitoring, the network monitoring solution should keep the systems on the radar from both the user and the server perspective. If a company operates an extensive website with a web shop in a private cloud, for example, network monitoring could be set up as follows: the website operator aims to ensure that all functions are permanently available to all visitors, regardless of how this is realised technically. The following questions are especially relevant in this regard:


  • Is the website online?
  • Does the web server deliver the correct contents?
  • How fast does the site load?
  • Does the shopping cart process work?

These questions can only be answered if network monitoring takes place from outside the server in question. Ideally, network monitoring should also run outside the related computing centre. It would therefore be sensible to set up a network monitoring solution on another cloud server or in another computing centre.
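
As a simple illustration of this outside-in perspective, the following Python sketch checks availability, content correctness and loading time from an external vantage point. It is a minimal sketch only, assuming the third-party requests library; the URL, expected text and load-time threshold are placeholder assumptions, not PRTG configuration.

    # External availability check - a minimal illustration, not PRTG code.
    # The URL, expected text and load-time threshold are assumptions.
    import time
    import requests

    SHOP_URL = "https://www.example-shop.com/"  # hypothetical web shop

    def check_site(url, expected_text, max_seconds=3.0):
        """Fetch the page from outside the hosting environment and verify
        that it is online, serves the right content and loads quickly."""
        start = time.monotonic()
        response = requests.get(url, timeout=10)
        elapsed = time.monotonic() - start
        return {
            "online": response.status_code == 200,
            "correct_content": expected_text in response.text,
            "fast_enough": elapsed < max_seconds,
        }, elapsed

    if __name__ == "__main__":
        results, seconds = check_site(SHOP_URL, "Add to basket")
        print(f"Loaded in {seconds:.2f}s - {results}")

The remaining question, whether the shopping cart process works, needs more than a single request: it is answered by scripting the whole checkout sequence, which is what the HTTP transaction monitoring mentioned below provides.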

It is crucial that all locations are reliable and that a failover cluster supports the monitoring itself, so that interruption-free monitoring is guaranteed. This remote monitoring should include the following (two of these checks are sketched in code after the list):

  • Firewall, HTTP load balancer and Web server pinging
  • HTTP/HTTPS sensors
  • Monitoring loading time of the most important pages
  • Monitoring loading time of all assets of a page, including CSS, images, Flash, etc.
  • Checking whether pages contain specific words, e.g.: “Error”
  • Measuring loading time of downloads
  • HTTP transaction monitoring, for shopping process simulation
  • Sensors that monitor the remaining period of SSL certificate validity

If one of these sensors finds a problem, the network monitoring solution should send a notification to the IT administrator. Rule-based monitoring is helpful here. If a Ping sensor for the firewall, for example, times out, PRTG Network Monitor offers the possibility to pause all other sensors to avoid a flood of notifications, as, in this case, the connection to the private cloud is clearly completely disconnected.
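
The dependency idea behind this rule-based behaviour can be expressed in a few lines. The sketch below is purely illustrative, with hypothetical sensor names: when the parent firewall ping fails, the child sensors are paused rather than each raising its own alarm, which is the behaviour PRTG implements natively.

    # Rule-based alert suppression - an illustrative sketch, not PRTG code.
    def evaluate_sensors(firewall_ping_ok, child_sensors):
        """Suppress child alarms when the parent connection is already down."""
        if not firewall_ping_ok:
            # The link to the private cloud is gone; one alert is enough.
            return ["ALERT: firewall unreachable - child sensors paused"]
        return [f"ALERT: {name} failed"
                for name, ok in child_sensors.items() if not ok]

    if __name__ == "__main__":
        sensors = {"web server HTTP": False, "load balancer ping": False}
        for message in evaluate_sensors(False, sensors):
            print(message)  # one notification instead of a flood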

Other questions crucial for monitoring the (virtual) servers operating in the private cloud include:

  • Does the virtual server run flawlessly?
  • Do the internal data replication and load balancer work?
  • How high are the CPU usage and memory consumption?
  • Is sufficient storage space available?
  • Do email and DNS servers function flawlessly?

These questions cannot be answered with external network monitoring. Monitoring software must be running on the server itself, or the monitoring tool must offer remote probes. Such probes monitor, for example, the following parameters on each (virtual) server that runs in the private cloud, as well as on the host servers:

  • CPU usage
  • Memory usage (page files, swap file, page faults, etc.)
  • Network traffic
  • Hard drive access, free disc space and read/write times during disc access
  • Low-level system parameters (e.g.: length of processor queue, context switches)
  • Web server HTTP response time

Critical processes, like SQL servers or Web servers, are often monitored individually, in particular for CPU and memory usage.
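
As a rough sketch of what such a probe gathers, the following Python example samples the parameters above using the third-party psutil library. This is an assumption for illustration only; PRTG's own remote probes collect the equivalent values through mechanisms such as WMI and SNMP.

    # Server-side metric collection - an illustrative probe sketch using
    # psutil (pip install psutil); not how PRTG probes are implemented.
    import psutil

    def collect_metrics():
        """Sample the core health parameters listed above."""
        disk = psutil.disk_usage("/")
        net = psutil.net_io_counters()
        return {
            "cpu_percent": psutil.cpu_percent(interval=1),
            "memory_percent": psutil.virtual_memory().percent,
            "swap_percent": psutil.swap_memory().percent,
            "disk_used_percent": disk.percent,
            "bytes_sent": net.bytes_sent,
            "bytes_received": net.bytes_recv,
        }

    if __name__ == "__main__":
        for name, value in collect_metrics().items():
            print(f"{name}: {value}")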

In addition, the firewall condition (bandwidth use, CPU) can be monitored. If one of these measured variables lies outside a defined range (e.g. CPU usage over 95% for more than two or five minutes), the monitoring solution will send a notification to the IT administrator.
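
The logic behind such a threshold, alerting only when the breach is sustained rather than on a single spike, might look like the minimal sketch below. The five-second sampling interval and two-minute window are assumptions chosen to mirror the example above.

    # Sustained-threshold alerting - a minimal sketch; interval and window
    # sizes are assumptions, and psutil is used only for illustration.
    import time
    import psutil

    THRESHOLD = 95.0      # percent CPU usage
    WINDOW_SECONDS = 120  # alert after two minutes above the threshold

    def watch_cpu():
        breach_started = None
        while True:
            usage = psutil.cpu_percent(interval=5)  # sample every 5 seconds
            if usage > THRESHOLD:
                breach_started = breach_started or time.monotonic()
                if time.monotonic() - breach_started >= WINDOW_SECONDS:
                    print(f"ALERT: CPU at {usage:.0f}% for over {WINDOW_SECONDS}s")
                    breach_started = None  # reset after notifying
            else:
                breach_started = None  # a single spike does not alert

    if __name__ == "__main__":
        watch_cpu()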

Final thoughts

With the increasing use of cloud computing, IT system administrators are facing new challenges. A private cloud depends on the efficiency and dependability of the IT infrastructure. This means that the IT department must look into the capacity requirements of each application in the planning stages of the cloud in order to calculate resources to meet the demand. The connection to the cloud must be extensively monitored, as it is vital that the user has constant access to all applications during operation.

At the same time, smooth operation of all systems and connections within the private cloud must be guaranteed. A network monitoring solution should therefore monitor all services and resources from every perspective. This ensures continuous system availability.

For more information about Wanstor and PRTG network monitoring tools please visit – https://www.wanstor.com/paessler-prtg-network-monitor.htm
