High performance WAN – Are you moving in the right direction?

1st June 2018

Wide area networks (WANs) are critical to the IT infrastructure underlying all business-critical applications, whether they run in data centres or in the public/private cloud. But for large, distributed businesses and not for profit organisations, maintaining network links to each branch office can be costly. Many IT teams fall into the trap of thinking that WAN optimisation and acceleration is as simple as boosting the bandwidth in a slow office or dropping a single device into the network. However, at Wanstor we know this isn't the case: for best performance, a WAN strategy requires bringing together multiple networking and security technologies.

In this blog we will cover some of the most common WAN strategy challenges that IT teams are facing, and offer practical advice which you can implement in your business or not for profit organisation straight away.

Can reduction and compression technologies solve insufficient bandwidth problems?

The explosion of apps built for LAN speeds has put pressure on WAN sites that do not have access to unlimited bandwidth. It has been obvious for some time that data compression is a key technology for reducing this stress. Generally, data compression works well for most data, except for real-time multimedia (e.g. video conferencing), which is already compressed and cannot benefit from simple compression techniques. WAN optimisation products implement compression in many ways, including:

Standard compression: This method takes streams of data and sends a reduced version of the content across the circuit, saving bandwidth. Standard compression in a WAN environment has many intricacies, including the choice of algorithm, how compression works across streams, and the interaction between compression and encrypted traffic.

Caching: This technique reduces data by maintaining a stored version of recently requested data objects (typically, files or email attachments) at the remote side of the connection. If a data object is requested a second (or third, or fourth) time and it is in the cache, then that copy is returned, eliminating the need to re-transmit the object from the central site to the remote site.  Caching is especially useful in environments where file sharing is done across the WAN, or where the email server (typically Exchange) is located at the central site and not the remote site.

Deduplication: This approach reduces data by detecting duplication in streams of bytes. Deduplication is a term borrowed from the world of storage and backup systems, where the same technique is used to avoid storing identical blocks twice.
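
To make the distinction concrete, here is a minimal Python sketch of stream deduplication (our illustration of the general technique, not any vendor's implementation): the sender chunks the byte stream, fingerprints each chunk, and sends a short reference whenever the peer already holds that chunk.

```python
import hashlib
import os

CHUNK_SIZE = 4096  # fixed-size chunks; real products use variable, content-defined chunking

def dedupe_stream(data: bytes, seen: dict) -> list:
    """Replace chunks the peer has already seen with short references.

    `seen` maps chunk fingerprints to chunk data, simulating the synchronised
    dictionaries both WAN optimisation devices keep on either end of the circuit.
    """
    wire = []  # what would actually cross the WAN
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).digest()
        if fp in seen:
            wire.append(("ref", fp))      # 32-byte reference instead of a 4 KB chunk
        else:
            seen[fp] = chunk
            wire.append(("raw", chunk))   # first sighting travels in full
    return wire

seen = {}
payload = os.urandom(200_000)             # stand-in for a file transfer
first = dedupe_stream(payload, seen)
second = dedupe_stream(payload, seen)     # the same file sent again
print(sum(len(c) for kind, c in first if kind == "raw"))   # 200000: full payload
print(sum(len(c) for kind, c in second if kind == "raw"))  # 0: only references sent
```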

The actual details of how each of these algorithms is implemented, and whether the vendor calls it caching or deduplication, are mostly irrelevant. One important difference is that caching nearly always requires a hard disk of some sort to hold cached data, while deduplication is handled in real time without any persistent storage.

At Wanstor we suggest that Network Managers interested in using data compression techniques to reduce bandwidth evaluate products by putting them into place in their own networks and comparing the results. The most important practical detail about data compression is that it requires two devices, one on either end of each WAN circuit or virtual connection. Compression product makers have tried to mitigate the deployment and management burden by providing compression devices as virtual machines, by offering compression software that runs directly on end-user devices, and by folding many other WAN optimisation and acceleration techniques into their products to provide more all-in-one solutions.

Can application optimisation solve app issues?

Although compression techniques can improve WAN performance, optimising apps to run over your organisation's WAN offers benefits far beyond simple compression. Application optimisation can often be provided by the same hardware used for compression, but there is a key difference: application optimisation requires just one device, placed next to the app server. And because application optimisation acts directly on web traffic, it benefits all app users, not just WAN users.

Examples of application optimisation and benefits:

  • Better use of browser objects such as JavaScript – Many application developers have the browser re-download JavaScript and other browser objects, such as style sheets, each time a different page is referenced. Application optimisation tools can re-write pages where required to make sure these large objects are cached in the browser. Reordering objects can also make pages render faster, giving a better user experience.
  • Compression and optimisation of content and images – Web browsers support compression natively, without any add-on software, yet most web servers don't bother to compress objects. Optimisation devices can compress content and images on the fly, which speeds up access and reduces network load.
  • HyperText Transfer Protocol (HTTP) extensions and support for emerging standards such as the SPDY protocol – HTTP, the protocol used to carry web content, has always been known to be inefficient. Acceleration hardware can multiplex connections and increase the speed of access over high-latency, low-bandwidth network connections. SSL offload to the optimisation-acceleration device can also speed up loaded app servers.
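
As a rough illustration of the compression point above, the snippet below gzip-compresses a repetitive HTML page using only the Python standard library and compares the bytes that would cross the WAN; the page content and the saving shown are illustrative, not benchmarks.

```python
import gzip

# A text-heavy page: HTML, CSS references and repeated markup compress very well
html = ("<html><head><link rel='stylesheet' href='app.css'></head>"
        "<body><p>Example product page content.</p></body></html>") * 200

compressed = gzip.compress(html.encode("utf-8"))

print(f"uncompressed: {len(html):,} bytes")
print(f"gzip:         {len(compressed):,} bytes")
# Already-compressed media (JPEG, video) would see little or no benefit,
# which is why real devices apply compression selectively.
```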

Traditionally, application optimisation was the realm of a family of products called application delivery controllers, or ADCs (formerly known as load balancers), but network product makers have since migrated these techniques into other devices as well.

Traffic priority and bandwidth management

Voice and video applications require constant, predictable bandwidth among simultaneous users. Other apps, such as email and web-based programs, tend to be far burstier in their bandwidth requirements.

Wanstor’s suggested techniques to provide bi-directional traffic management:

  • Transmission Control Protocol (TCP) modification as and when required – By changing TCP window sizes and delaying TCP acknowledgements, individual applications can be better managed and controlled by IT teams.
  • Application intelligence for User Datagram Protocol (UDP) applications – UDP-based apps, such as voice and video, are not easily flow-controlled the way TCP apps can be. By understanding more about the internals of a UDP app, WAN optimisation devices can perform call admission control.
  • Subdividing apps – Some applications mix both delay-sensitive and bulk traffic over the same connection. WAN optimisation devices may be able to break out the different types of traffic and give each a different priority, based on deep knowledge of the internal functions of the application.
  • App identification – Differentiating between business and recreational apps (such as collaborating via SharePoint versus video streaming from YouTube) goes deeper than looking at port numbers. By directly identifying actual applications, WAN optimisation devices can provide granular insight and then limit or guarantee bandwidth as required to meet project objectives.
  • Time-of-day awareness – Although many data centres run 24/7, 365 days a year, many offices, shops and restaurants are only open for part of the day. This provides the opportunity to use bandwidth differently outside opening hours. At Wanstor we suggest that maintenance activities such as log transfers, backups and software updates are pushed outside core operational hours and given different bandwidth management rules. By moving this traffic to off-peak times, IT teams can also realise cost savings.
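
To give a flavour of the machinery behind such policies, here is a minimal token-bucket rate limiter in Python, a textbook sketch rather than any vendor's implementation; the per-app rates are invented for illustration, and a scheduler could swap them for out-of-hours values to implement time-of-day awareness.

```python
import time

class TokenBucket:
    """Classic token bucket: sustained `rate` bytes/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last packet
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over the limit: queue, delay or drop according to policy

# Invented per-app policy: guarantee voice headroom, cap recreational streaming
policies = {
    "voice": TokenBucket(rate=512_000, capacity=64_000),
    "video_streaming": TokenBucket(rate=128_000, capacity=32_000),
}

def admit(app: str, nbytes: int) -> bool:
    """Decide whether a packet of `nbytes` for `app` may be sent now."""
    return policies[app].allow(nbytes)
```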

In a fully managed hub-and-spoke network, quality of service (QoS) mechanisms can be used to guarantee particular bandwidth and prioritisation for each application. However, as the way we shop, eat and play has changed, so have the networks that support the businesses and organisations enabling those day-to-day activities.

Multiple data centres, branch-to-branch communication and the use of generally unmanaged circuits (such as Internet, wireless and shared services) have reduced the ability of simple QoS mechanisms to guarantee acceptable app performance. WAN optimisation and acceleration projects also now require management of bandwidth between sites. Simple mechanisms, such as those found in common edge firewalls with unified threat management (UTM), are not sufficient for the complex requirements of a mix of applications and topology.

Bandwidth management can be particularly trying because true bandwidth management works well only in the outgoing direction from each site. Once packets have arrived at a site, they have already consumed bandwidth and crowded out other apps that might have been more important.

Simply dropping packets that exceed predefined limits won't work in most situations. WAN optimisation and acceleration vendors have come up with a variety of techniques to provide sophisticated bidirectional traffic management.

Use standards-based tools to provide better network visibility

Most WAN optimisation techniques try to improve service with limited resources by controlling the use of certain resources. But a significant part of any WAN optimisation and acceleration project depends on gaining network visibility. At Wanstor we believe it is a "must have" that the network management team can answer questions about the applications using their network, such as:

  • What applications are being used?
  • Who is running them and when?
  • How much bandwidth do they use (individually and collectively)?
  • What types of errors are occurring?
  • What response times are users experiencing?
  • Which systems are the top talkers and which are the top listeners?

The old reporting categories must be modified because visibility in current WAN environments involves far more than merely tracking IP addresses and ports. True network visibility extends up the stack to identifying real people and real apps. Without strong visibility into the network, no WAN optimization and acceleration project can be successful. Control of the unknown simply leads to frustration and confusion, while good visibility into network and app use can also provide metrics to measure overall project or programme success.

Many devices (including switches, routers, firewalls, WAN optimization controllers (WOCs) and application delivery controllers) will send IPFIX and NetFlow data to a management system. Where no IPFIX data is available, both open-source and commercial hardware and software IPFIX and NetFlow exporters are available to give visibility into unencrypted network traffic.

The benefit of choosing IPFIX and NetFlow is that they represent a standards-based approach, which means an organisation can gain visibility into different components mixed and matched on its network.
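
As a small illustration of the kind of question flow data answers, the sketch below aggregates simplified flow records into top talkers and top listeners; the record shape is a deliberately simplified stand-in for a real NetFlow/IPFIX export.

```python
from collections import Counter

# Deliberately simplified flow records: (source IP, destination IP, app, bytes)
flows = [
    ("10.0.1.15", "10.0.0.5",   "smtp",  120_000),
    ("10.0.1.22", "172.16.0.9", "https", 2_400_000),
    ("10.0.1.15", "172.16.0.9", "https", 900_000),
    ("10.0.1.40", "10.0.0.5",   "smtp",  45_000),
]

talkers, listeners, apps = Counter(), Counter(), Counter()
for src, dst, app, nbytes in flows:
    talkers[src] += nbytes     # who sends the most
    listeners[dst] += nbytes   # who receives the most
    apps[app] += nbytes        # which applications consume the bandwidth

print("Top talkers:  ", talkers.most_common(3))
print("Top listeners:", listeners.most_common(3))
print("By application:", apps.most_common())
```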

Link balancing and dynamic routing improve network reliability

Although service-level agreements (SLAs) can set expectations, network managers must prepare for the inevitable link downtime that any WAN will experience. When business-critical apps are used over the network, most organisations choose to use dual links into each of their sites to minimize blockages created by traffic peaks or network problems.

Simply having multiple links doesn't ensure high availability; some mechanism must be in place to make use of them. If VPN tunnels are in place, some organisations use dynamic routing protocols such as Open Shortest Path First (OSPF) to make use of dual links. Having two links live at all times always prompts a return on investment question: how can we use both links and still get the most network for the pounds invested? WAN optimisation and acceleration vendors have introduced a variety of techniques to balance traffic over multiple network links, with varying levels of success. Because TCP/IP networks have their own routing protocols, attempts to force traffic to take a particular route, or to signal a route to upstream devices (such as Multi-Protocol Label Switching, or MPLS, routers), are often complicated and create brittle networks.

While the idea of using as much of both circuits as possible is attractive from budget and theoretical perspectives, network managers should very carefully evaluate any vendor proposal to perform outbound load balancing or dynamic link selection. Experiences with this type of load balancing have not been positive for all businesses. In some cases, these technologies have required very specific network configurations for correct operation, and may end up creating more problems than they solve.

Does load balancing improve application reliability?

Although application reliability is not necessarily a WAN-specific concern, the importance of enterprise applications emphasizes the need for more sophisticated types of load balancing and high availability strategies that stretch across data centres.

Traditional load balancing uses a Layer 2 or Layer 3 device as the front-end to a series of systems offering an identical service. As requests come in to the load balancer, it makes a decision based on a predetermined algorithm and passes the request on to whichever system is selected.

The load balancer then manages state information so that further requests from the same client are all directed to the same system. The algorithm chosen can be as simple as a round-robin process or it can be more sophisticated, taking into account CPU utilization, response time and other factors. Originally, the goal of most load balancers was scalability — the ability to handle a greater load than any single server could manage. Over time, the goal has changed. Now, the low cost of server hardware has led many organisations to use load balancers simply for reliability. With two (or more) servers available, uptime can be extended and maintenance windows shortened, even if the load can reside entirely on a single server.
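
A minimal sketch of the two ideas just described, round-robin selection plus session stickiness, is shown below; it is illustrative only, as production load balancers also track server health, load and much more.

```python
import itertools

class LoadBalancer:
    """Round-robin selection with simple session affinity ('stickiness')."""

    def __init__(self, servers):
        self._ring = itertools.cycle(servers)  # rotate through the pool
        self._sticky = {}                      # client id -> pinned server

    def route(self, client_id: str) -> str:
        # First request from a client picks the next server in rotation;
        # later requests stay pinned to that server (session affinity).
        if client_id not in self._sticky:
            self._sticky[client_id] = next(self._ring)
        return self._sticky[client_id]

lb = LoadBalancer(["app-01", "app-02", "app-03"])
print(lb.route("client-a"))  # app-01
print(lb.route("client-b"))  # app-02
print(lb.route("client-a"))  # app-01 again: the session sticks
```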

Although people have been talking about global server load balancing for a decade or more, network managers should be aware of one important fact: global server load balancing is not a solved problem. Because of the way the Internet, Domain Name System (DNS) servers and web browsers work, there is no guaranteed, reliable approach to providing high availability across multiple data centres.

It should be noted that several techniques have been tried, including DNS-based load balancing and Border Gateway Protocol (BGP) load balancing, but no approach works 100 percent of the time in 100 percent of the possible failure cases. Indeed, the numbers aren’t even close to 100 percent. For every global load-balancing technique discussed, there are many potential places where load balancing will not deliver the desired results. Load balancing is usually provided by dedicated software or hardware, such as a multilayer switch or a DNS server. Many experts consider the distinction between hardware and software load balancers to be no longer meaningful.

Where WAN optimization and acceleration solutions can be found:

  • Data compression and reduction – Most commonly found in: WAN optimisation controllers (WOCs), as discrete hardware appliances or software-based virtual appliances. Also available in: some web security gateways, but pure data compression and reduction are not often found in other product spaces.
  • Application optimisation – Most commonly found in: application delivery controllers (ADCs) and load balancers. Also available in: WOCs, which often include some application optimisation features; web application firewalls are a separate niche.
  • Traffic prioritisation and bandwidth management – Most commonly found in: quality of service (QoS) and visibility products. Also available in: WOCs, which often include traffic prioritisation; UTM products and next-generation firewalls generally include basic bandwidth management and prioritisation.
  • Routing and link balancing – Most commonly found in: branch and edge firewalls or combination router–virtual private network (VPN) devices. Also available in: stand-alone edge routers, which generally have this capability, although the location of the router outside the firewall (which prevents it from seeing into encrypted VPN tunnels) pushes this feature into whatever device handles VPNs for the branch.
  • Security features; use and misuse controls – Most commonly found in: UTM products and next-generation firewalls. Also available in: web security gateways and proxy servers, which may include limited web-focused features; standalone IPSes are rarely used in the branch when UTM or next-generation firewalls are available.

Integration of security tools

WAN optimization is usually considered a largely technical exercise, the goal of which is to get more value out of each pound invested for connectivity. Many network managers now take a more holistic view of network use, and look to security-focused products to help them control overall use of both enterprise and Internet apps. Because most WANs already have a firewall device at the border of each remote site, these devices may be called upon to provide more than simple firewall and VPN services.

Security device manufacturers are bringing many branch management features to their edge devices, including URL and content filtering, app identification and control, bandwidth management, intrusion prevention and antimalware. At Wanstor we believe Network Managers should consider including the capabilities of branch firewall devices in their overall network optimisation plan for several reasons. First, these devices are typically already in use, so activating additional capabilities may be as simple as a few mouse clicks or a low-cost subscription add-on. Second, branch firewalls are key parts of the WAN, and changes to traffic profiles or traffic types will also affect their operation and capabilities.

How Wanstor can help Network Managers optimise their WAN

In this blog post we have offered some suggestions around common WAN challenges which Network Managers can use to help improve their WAN performance. Wanstor also offers a range of network optimisation solutions to help:

  • Reduce application latency to remote end-users
  • Create multiple pathways to ensure application availability
  • Centralize the network environment
  • Decrease operating and management costs
  • Maximize bandwidth utilization
  • Postpone the need to upgrade WAN bandwidth
  • Improve disaster recovery position by speeding backup and data replication over the WAN

For more information about Wanstor networking services, please click here: https://www.wanstor.com/wide-area-networking-wan-connectivity-business.htm


Big Data Management = Big Demands on your IT Infrastructure

11th May 2018

While the concept of big data management is nothing new, the tools and technology needed to exploit “big data” for commercial and organisational gain are now coming to maturity. Businesses involved in industries such as media, hospitality, retail, leisure & entertainment, and manufacturing have long been dealing with data in large volumes and unstructured formats or data that changes in near real time.

However, extracting meaning from this data has often been prohibitively difficult, requiring custom-built, expensive technology. Now, thanks to advances in storage and analytics tools and technologies, all businesses and not for profit organisations can leverage big data to gain the insight needed to make their organisations more agile, innovative, and competitive.

At Wanstor, we understand there are a few important business drivers behind the growing interest in big data, which include:

  • The desire to gain a better understanding of customers
  • How to improve operational efficiency
  • The need for better risk management – Improved IT security and reduced fraud
  • The opportunity to innovate to stay ahead of the competition and/or attract/retain customers

In summary, these business drivers are largely the same goals that companies and not for profit organisations have had for years. But with advances in storage and analytics, they can now extract the value that lies within their existing data more quickly, easily and cost-effectively.

At Wanstor we believe to turn these business goals into realities, business and not for profit organisations must think about data management in different ways. Because big data is voluminous, unstructured, and ever-changing, approaches to dealing with it differ from techniques used with traditional data. To turn big data into opportunities, organisations should take the time to find technology solutions that feature the following components:

  • A versatile, scale-out storage infrastructure that is efficient and easy to manage and enables business teams to focus on getting results from data quickly and easily
  • A unified analytics platform for structured and unstructured data with a productivity layer that enables collaboration between IT teams and the wider business
  • Capabilities to be more predictive, driving actions from actual insights

With these components in place, business and not for profit organisations can build infrastructures that deliver on the promises of big data.

Despite the many benefits it delivers, big data is (for many organisations) putting undue demands on their IT teams, as it differs from traditional enterprise data in the following ways:

  • It’s voluminous – Medium and large organisations generate and collect large quantities of traditional data, but big data is often orders of magnitude bigger.
  • It’s largely unstructured – Big data includes Internet log files, scanned images, video surveillance clips, comments on a website, biometric information, and other types of digitised information. This data doesn’t fit neatly into a database, yet unstructured data accounts for 80%+ of all data growth in many businesses today.
  • It’s changing – Big data often changes in real time or near real time, e.g. customer comments on a website, and must be collected over significant periods of time in order to spot patterns and trends.

Furthermore, organisations are beginning to realise that to reap the full value of big data, they must be able to analyse and iterate on the entire range of available digital information. One-off snapshots of data do not necessarily tell the whole story or solve a particular business challenge. Efficiently collecting and storing that data for iterative analysis has a significant impact on an organisation's storage and IT management resources. In short, IT storage professionals need to find big data solutions that fit the bill but don't strain already tight budgets or require significant investments in dedicated personnel.

Due to these new big data demands, as well as the importance of handling information correctly, most organisations consider managing data growth, provisioning storage, and performing fast, reliable, iterative analytics to be top priorities. But as IT budgets have been squeezed, many data storage professionals tell us at Wanstor that big data is placing their current IT infrastructures under extreme stress, with many looking to build scalable infrastructures within their own data centres or to outsource to a co-location or private cloud provider.

As you have probably guessed from the paragraph above, big data requires more capacity, scalability and efficient accessibility without increasing resource demands. Traditionally, storage architectures were designed to scale up to accommodate growth in data. Scaling up means adding more capacity in the form of storage hardware and silos, but it doesn't address how additional data will affect performance. In traditional storage architectures, RAID controller–based systems end up with large amounts of storage sprawl and create a siloed environment. Instead, organisations need to be able to achieve consolidation within a single, highly scalable storage infrastructure. They also need automated management, provisioning and tiering functions to accommodate the rapid growth of big data.

At Wanstor we believe organisations of all sizes need storage architectures that are built with big data in mind and offer the following features:

  • Scalability – to accommodate large and growing data stores, including the ability to easily add additional storage resources as needed
  • High performance – to keep response times and data ingest times low, so storage keeps pace with the business
  • High efficiency – to reduce storage and related data centre costs
  • Operational simplicity – to streamline the management of a massive data environment without additional IT staff
  • Enterprise data protection – to ensure high availability for business users and business continuity in the event of a disaster
  • Interoperability – to integrate complex environments and to provide an agile infrastructure that supports a wide range of business applications and analytics platforms

Final thoughts – As the amount of unstructured data grows, companies and not for profit organisations of all sizes are learning they need new approaches to managing that data. At Wanstor we believe they require an efficient and scalable storage strategy that helps them manage extreme data growth effectively. Wanstor has a range of big data experts who can work with your business to put in place the right data storage solution, incorporating scalability, improved performance (both I/O and throughput) and improved data availability. Scalable storage solutions, paired with powerful analytics tools that can derive valuable insight from large amounts of content, can help organisations of all sizes reap the benefits of “big data”. The only question you have to answer now is: is your infrastructure ready?

For more information about Wanstor’s data storage solutions, please click here: https://www.wanstor.com/data-centre-storage-business.htm


Network Monitoring for the Private Cloud: A brief guide

3rd May 2018

‘Cloud computing’ as a concept has been around for over 10 years. Until about five years ago, many business and not for profit organisations shunned the “cloud” because all they could see were problems and challenges with implementing a cloud-first policy: insufficient processor performance, enormous hardware costs and slow Internet connections making everyday use difficult.

However, today’s technology – broadband Internet connections and fast, inexpensive servers – gives business and not for profit IT teams the opportunity to access only the services and storage space that are actually necessary, and to adjust these to meet current needs. For many small and medium sized organisations, using a virtual server provided by a service provider introduces a wide range of possibilities for cost savings, improved performance and higher data security. The goal of such cloud solutions is a consolidated IT environment that effectively absorbs fluctuations in demand and capitalises on available resources.

The public cloud concept presents a number of challenges for a company’s IT department. Data security and the fear of ‘handing over’ control of systems are significant issues. An IT department used to protecting its systems with firewalls, and to monitoring the availability, performance and capacity usage of its network infrastructure with a monitoring solution, will find both measures much more difficult to implement in the cloud. Of course, all large public cloud providers claim they offer appropriate security mechanisms and control systems, but the user must rely on the provider to guarantee constant access and to maintain data security.

Because of the challenges and general nervousness around data security in public clouds, many IT teams are investigating the creation of a ‘private cloud’ as an alternative to the use of public cloud. Private clouds enable staff and applications to access IT resources as they are required, while the private computing centre or a private server in a large data centre is running in the background. All services and resources used in a private cloud are found in defined systems that are only accessible to the user and are protected from external access.

Private clouds offer many of the advantages of cloud computing and at the same time minimise the risks. As opposed to many public clouds, the quality criteria for performance and availability in a private cloud can be customised, and compliance with these criteria can be monitored to make sure they are achieved.

Before moving to a private cloud, an IT department must consider the performance demands of individual applications and usage variations. Long-term analysis, trends and peak loads can be attained via extensive network monitoring evaluations, and resource availability can be planned according to demand. This is necessary to guarantee consistent IT performance across virtualized systems. However, a private cloud will only function if a fast, highly reliable network connects the physical servers. Therefore, the entire network infrastructure must be analysed in detail before setting up a private cloud. This network must satisfy the requirements relating to transmission speed and stability, otherwise hardware or network connections must be upgraded.

Ultimately, even minor losses in transmission speed can lead to extreme drops in performance. At Wanstor we recommend IT administrators use a comprehensive network monitoring solution like PRTG Network Monitor, in the planning of the private cloud. If an application (which usually equates to multiple virtualized servers) is going to be operated over multiple host servers (“cluster”) in the private cloud, the application will need to use Storage Area Networks (SANs), which convey data over the network as a central storage solution. This makes network performance monitoring even more important.

In the terminal set-ups of the 1980s, a central computer that broke down could paralyse an entire company. The same scenario can happen if systems in the cloud fail. Current developments show that the world has gone through a phase of widely distributed computing and storage power (each workstation had a ‘full-blown’ PC) and returned to centralised IT concepts: the data is located in the cloud, and end devices are becoming more streamlined. The new cloud, therefore, echoes the old mainframe concept of centralised IT.

The failure of a single VM in a highly virtualised cloud environment can quickly interrupt access to 50 or 100 central applications. Modern clustering concepts try to avoid these failures, but if a system fails despite these efforts, it must be dealt with immediately. If a host server crashes and pulls a large number of virtual machines down with it, or its network connection slows or is interrupted, all virtualised services on that host are instantly affected, which, even with the best clustering concepts, often cannot be avoided.

A private cloud (like any other cloud) depends on the efficiency and dependability of the IT infrastructure. Physical or virtual server failures, connection interruptions and defective switches or routers can become expensive if they cause staff, automated production processes or online retailers to lose access to important operational IT functions.

This means a private cloud also presents new challenges to network monitoring. To make sure that users have constant access to remote business applications, the performance of the connection to the cloud must be monitored on every level and from every perspective.

At Wanstor we believe an appropriate network monitoring solution like PRTG accomplishes all of this with a central system; it notifies the IT administrator immediately in the event of possible disruptions within the private IT landscape both on location and in the private cloud, even if the private cloud is run in an external computing centre. A feature of private cloud monitoring is that external monitoring services cannot ‘look into’ the cloud, as it is private. An operator or client must therefore provide a monitoring solution within the private cloud and, as a result, the IT staff can monitor the private cloud more accurately and directly than a purchased service in the public cloud. A private cloud also enables unrestricted access when necessary. This allows the IT administrator to track the condition of all relevant systems directly with a private network monitoring solution. This encompasses monitoring of every individual virtual machine as well as the VMware host and all physical servers, firewalls, network connections, etc.

For comprehensive private cloud monitoring, the network monitoring should have the systems on the radar from user and server perspectives. If a company operates an extensive website with a web shop in a private cloud, for example, network monitoring could be set up as follows: A website operator aims to ensure that all functions are permanently available to all visitors, regardless of how this is realised technically. The following questions are especially relevant in this regard:

  • Is the website online?
  • Does the web server deliver the correct contents?
  • How fast does the site load?
  • Does the shopping cart process work?

These questions can only be answered if network monitoring takes place from outside the server in question. Ideally, network monitoring should be run outside the related computing centre, as well. It would therefore be suitable to set up a network monitoring solution on another cloud server or another computing centre.

It is crucial that all locations are reliable and that a failover cluster supports the monitoring, so that interruption-free monitoring is guaranteed. This remote monitoring should include:

  • Firewall, HTTP load balancer and Web server pinging
  • HTTP/HTTPS sensors
  • Monitoring loading time of the most important pages
  • Monitoring loading time of all assets of a page, including CSS, images, Flash, etc.
  • Checking whether pages contain specific words, e.g.: “Error”
  • Measuring loading time of downloads
  • HTTP transaction monitoring, for shopping process simulation
  • Sensors that monitor the remaining period of SSL certificate validity
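
A couple of these checks can be sketched with the Python standard library; the example below is a rough illustration of the idea (URLs and thresholds are placeholders) rather than a substitute for a monitoring product such as PRTG.

```python
import ssl
import socket
import time
import urllib.request

def check_page(url: str, keyword: str = "Error") -> dict:
    """Measure page loading time and flag pages containing an error keyword."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        status = resp.status
        body = resp.read().decode("utf-8", errors="replace")
    return {
        "load_seconds": round(time.monotonic() - start, 3),
        "status": status,
        "keyword_found": keyword in body,
    }

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    """Report the remaining validity of a server's SSL certificate, in days."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return int((ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) // 86400)

print(check_page("https://www.example.com/"))    # placeholder URL
print(days_until_cert_expiry("www.example.com"))
```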

If one of these sensors finds a problem, the network monitoring solution should send a notification to the IT administrator. Rule-based monitoring is helpful here: if a ping sensor for the firewall times out, for example, PRTG Network Monitor can pause all other sensors to avoid a flood of notifications, as in this case the connection to the private cloud has clearly been lost altogether.
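
The rule-based behaviour described above amounts to simple dependency logic; a toy version (our illustration with hypothetical sensor names, not PRTG's actual mechanism) might look like this:

```python
# Toy dependency model: child sensors are suppressed when their parent is down
dependencies = {
    "firewall-ping": ["web-http", "db-port", "mail-smtp"],  # hypothetical sensors
}

def sensors_to_alert(failed: set) -> set:
    """Alert only on failures whose parent sensor is still up."""
    suppressed = set()
    for parent, children in dependencies.items():
        if parent in failed:
            suppressed.update(children)
    return failed - suppressed

# Firewall down: one meaningful notification instead of four
print(sensors_to_alert({"firewall-ping", "web-http", "db-port"}))
# -> {'firewall-ping'}
```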

Other questions crucial for monitoring the (virtual) servers operating in the private cloud include:

  • Does the virtual server run flawlessly?
  • Do the internal data replication and load balancer work?
  • How high are the CPU usage and memory consumption?
  • Is sufficient storage space available?
  • Do email and DNS servers function flawlessly?

These questions cannot be answered with external network monitoring. Monitoring software must be running on the server itself, or the monitoring tool must offer the possibility of monitoring the server using remote probes. Such probes monitor parameters like the following on each (virtual) server that runs in the private cloud, as well as on the host servers:

  • CPU usage
  • Memory usage (page files, swap file, page faults, etc.)
  • Network traffic
  • Hard drive access, free disc space and read/write times during disc access
  • Low-level system parameters (e.g.: length of processor queue, context switches)
  • Web server HTTP response time

Critical processes, like SQL servers or web servers, are often monitored individually, in particular for CPU and memory usage.

In addition, the firewall condition (bandwidth use, CPU) can be monitored. If one of these measured variables lies outside of a defined range (e.g. CPU usage over 95% for more than two or five minutes), the monitoring solution will send notifications to the IT administrator.
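
As an illustration of host-side threshold monitoring, the sketch below uses the third-party psutil library to sample the parameters listed above and apply a sustained-CPU rule; the thresholds and the alert hand-off are placeholder assumptions.

```python
import time
import psutil  # third-party library: pip install psutil

CPU_LIMIT = 95.0          # percent
SUSTAINED_SECONDS = 120   # only alert if the breach lasts two minutes

def cpu_sustained_above(limit: float, duration: int) -> bool:
    """True if CPU usage stays above `limit` for `duration` seconds."""
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        if psutil.cpu_percent(interval=5) <= limit:
            return False  # usage dipped below the threshold: no alert
    return True

# One-off snapshot of the parameters listed above
snapshot = {
    "cpu_percent": psutil.cpu_percent(interval=1),
    "memory_percent": psutil.virtual_memory().percent,
    "disk_free_gb": psutil.disk_usage("/").free / 1e9,
    "net_bytes_sent": psutil.net_io_counters().bytes_sent,
}
print(snapshot)

if cpu_sustained_above(CPU_LIMIT, SUSTAINED_SECONDS):
    print("ALERT: CPU above 95% for two minutes")  # hand off to a real notifier
```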

Final thoughts

With the increasing use of cloud computing, IT system administrators are facing new challenges. A private cloud depends on the efficiency and dependability of the IT infrastructure. This means that the IT department must look into the capacity requirements of each application in the planning stages of the cloud in order to calculate resources to meet the demand. The connection to the cloud must be extensively monitored, as it is vital that the user has constant access to all applications during operation.

At the same time, smooth operation of all systems and connections within the private cloud must be guaranteed. A network monitoring solution should therefore monitor all services and resources from every perspective. This ensures continuous system availability.

For more information about Wanstor and PRTG network monitoring tools please visit – https://www.wanstor.com/paessler-prtg-network-monitor.htm


Endpoint Security – A state of transition

19th April 2018

Endpoint security used to be a fairly mundane topic. The normal model was that the IT operations team would provision PCs with an approved image and then install anti-virus software on each system. The team would then make periodic security updates (vulnerability scanning, patches, signature updates, etc.), but the endpoint security foundation was generally straightforward and easy to manage.

However, in the last six months at Wanstor we have seen more and more organisations increasing their focus on endpoint security and its associated people, processes and technologies. This is largely because mobility strategies are starting to mature, BYOD is becoming more common and mobile working is the norm for many employees. Because of these market trends, many businesses and not for profit organisations have had to increase their endpoint security budgets to cope with the changing working practices they now face.

The maturing of these market trends has also meant many endpoint security vendors have had to change their strategies to cope with an end user workforce that wants a stable office environment combined with a flexible work-from-anywhere approach.

At Wanstor we have seen the endpoint security strategy changing and predominantly being driven by the following factors in many organisations:

Cyber risks need to be addressed, especially around information security best practices – This is a clear indication that many of the IT security processes organisations have in place are not fit for a changing regulatory and mobile landscape.

Problems caused by the volume and diversity of devices – Addressing new risks associated with mobile endpoints should be a top endpoint security strategy requirement for all IT departments. This will only increase with the addition of more cloud, mobile, and Internet-of-Things (IoT) technologies.

The need to address malware threats – Although malware has been around for a long time, many organisations are still struggling to get to grips with securing endpoints against it. At Wanstor we do not find this surprising: the volume and sophistication of malware attacks has never been higher, and the landscape is steadily becoming more dangerous. The efficiency of the cybercriminal underworld, alongside the easy access would-be criminals have to sophisticated malware tools, is a combination organisations of all sizes need to take seriously. At Wanstor we meet hundreds of customers on a regular basis and they are all saying the same thing: we are concerned about our ability to stop these malware threats and stay a step ahead of attackers.

While various industry research studies suggest endpoint security strategies are driven by the factors identified above, many businesses and not for profit organisations still struggle to address endpoint security vulnerabilities and threats because they rely on legacy processes and technologies.

Some of the most common things we see at Wanstor include:

Security teams spending too much time concentrating on attacks happening now, rather than planning for the future – As the threat landscape has evolved, so has the pressure on endpoint security staff, systems and processes. Many organisations have only one or two trained IT security professionals, which means that when an attack happens they must spend a lot of time attending to high-priority issues, leaving insufficient time for process improvement or strategic planning. This challenge is something of a contradiction. Strategic improvements cannot and should not come at the expense of the security team failing to respond to high-priority issues, creating a quandary for many organisations: they know they need an endpoint security overhaul, but cannot dedicate ample time to it at the expense of day-to-day security tactics. Effective endpoint tools must address this challenge by improving both the strategic and the day-to-day position of the security team.

Organisations remain too focused on (or scared of) regulatory compliance – At Wanstor we know it is a balance: IT security budgets versus regulatory compliance. However, we have recently seen many businesses and not for profit organisations spending too much money and effort on becoming compliant within a changing regulatory landscape. Quite often this is because IT security teams have not worked with the business to properly define what the new regulations actually mean for it and what the associated IT security spend should be. The result is that IT security solutions are purchased ad hoc and cost the organisation more money in the long run, because they are bought with a short-term goal in mind rather than as part of a wider security and regulatory plan.

At Wanstor we believe regulatory compliance should come as a result of strong security, and endpoint security cannot be achieved with a compliance-centric approach. For many IT teams this will mean a shift in thinking and closer working with other business departments such as the finance and legal teams.

Endpoint security has too many manual processes and controls – Endpoint security has undergone a major technical transition, but many organisations continue to rely on legacy products and processes to combat new challenges. It is often cheaper and easier for businesses and not for profit organisations to layer new products on top of legacy products as needs arise. The trade-off is that IT security teams become more and more inefficient, as they have several layers of security processes and tools to manage, which can create a security operations nightmare.

Wanstor’s Top Endpoint Security Challenges

  • Security staff spend a significant amount of time attending to high-priority issues, leaving no time for process improvement or strategic planning
  • Organisations are more focused on meeting regulatory compliance requirements than on addressing endpoint security risks with strong controls
  • Endpoint security is based on too many manual processes, making it difficult for security staff to keep up with relevant security tasks and new technology trends
  • Organisations view endpoint security as a basic requirement and do not give it the time or resources it needs to protect users
  • Endpoint activities are not monitored proactively, so it can be difficult to detect a security incident
  • Businesses and not for profit organisations lack the right vulnerability scanning and/or patch management tools, leaving them permanently vulnerable to having an endpoint compromised by malware
  • Budget to purchase the right endpoint security products is lacking, because IT teams are unsure how to build the right business case for management teams to make decisions on

In summary, Wanstor’s research with its own customers, together with the changing mobility landscape, identifies a situation where the prevailing endpoint security approach is not an adequate countermeasure for the complexity and sophistication of modern IT security threats.

Wanstor’s own customer and market research strongly suggests that businesses and not for profit organisations do not currently view existing endpoint security strategies as viable for blocking sophisticated attacks. As a result, many organisations need to supplement their existing endpoint security products with newer and more robust technologies that offer more functionality across incident detection, response, and remediation.

As a matter of course, Wanstor believes all IT teams should act now to review their endpoint security strategies and evaluate whether or not they are fit for purpose against business requirements. As a minimum the IT team should:

Investigate and test advanced anti-malware products – Organisations of all sizes should investigate and potentially acquire advanced anti-malware solutions, because traditional solutions are no longer “good enough” to protect an organisation on their own. IT teams need to recognise that all organisations are targets for hackers, and should therefore seek the strongest possible endpoint security solutions to deal with potential threats both now and in the future.

Continuous endpoint monitoring – As the management saying (adapted) goes: if you can’t monitor it, you can’t manage it. The question has to be: does your IT team have the right network and security monitoring in place? If it doesn’t, how will you even know you are under attack, or which endpoint devices are most vulnerable? At Wanstor we always recommend that appropriate network monitoring tools are purchased by the IT team. Quite often, network monitoring and the ability to detect abnormal traffic patterns early help to prevent security attacks before they become business critical.

Endpoint forensics – Endpoint forensic solutions can, when focused on actual need rather than cost, improve the efficiency and effectiveness of incident response and reduce the time it takes to detect incidents. Additionally, integrating endpoint data with network security analytics gives IT teams a more comprehensive and integrated view of security activity across networks and host systems.

In conclusion, endpoint security needs to change in most organisations to meet changing user needs and demands on IT. At the present time many organisations are struggling to hire the right staff, choose the right technologies, and respond to the many challenges of modern threats. The scale and diversity of these challenges can appear overwhelming, but organisations that take the time to devise and execute solid, integrated endpoint security strategies can get the right returns on their security investments and protect their organisations at the same time.

Wanstor believes that organisations who are seeking to overhaul their endpoint security should integrate their endpoint security technologies with their network-level and log monitoring in order to improve incident detection, prevention, and response, while also streamlining the work of their security operations team.

For more information about Wanstor’s endpoint security services, please visit – https://www.wanstor.com/managed-it-security-services-business.htm


Enterprise Mobility Management – making sure the fundamentals are right

9th April 2018

Mobility and bring-your-own device (BYOD) are transforming the way people work and the way businesses support them. At Wanstor we believe there is more to mobility than simply enabling remote access. To unlock the full potential of enterprise mobility, IT departments need to allow people the freedom to access all their apps and data from any device, seamlessly and conveniently. Mobile devices also call for the right approach to IT security to protect business information as they are used in more places, over untrusted networks, with a significant potential for loss or theft. The IT department has to maintain compliance and protect sensitive information wherever and however it’s used and stored, even when business and personal apps live side-by-side on the same device.

In this article Wanstor’s mobility experts have developed a set of key points which the IT department needs to take notice of as an enterprise mobility strategy is developed.

Protect and manage key assets, data and information

As employees access data and apps on multiple devices (including personally-owned smartphones and tablets), it can no longer be seen as realistic for IT to control and manage every aspect of the environment. At Wanstor we believe IT teams should focus on what matters most for the business across devices, data and information, then choose the mobility management models that make the most sense for your business and your mobile use cases.

Generally it is accepted there are four models to choose from, either individually or in combination: mobile device management (MDM), mobile hypervisors and containers, mobile application management (MAM), and application and desktop virtualization. Choosing the right mix of these four models will be intrinsically linked to your business’s success.

User experience needs to be at the centre of your thinking

Mobile devices have been a key driver of consumerisation in the enterprise, giving people powerful new ways to work with apps and information in their personal lives. This has raised expectations of IT and the services it provides, particularly around mobile devices. No longer can IT teams put strict controls on users; instead they must offer an IT experience that compares with the freedom and convenience offered by consumer technology companies. At Wanstor we always suggest that before MDM planning gets underway, the IT team sits down with a range of users to talk about their needs and preferences, to make sure the mobility strategy that is put in place gives them what they really want.

As the IT team works to deliver a superior user experience, Wanstor’s experts suggest they examine ways to give people more than they expect and provide useful capabilities they might not have thought of, e.g.:

  • Allow employees to access their apps and data on any device they use, complete with personal settings, so they can start work immediately once they have been given their work device
  • Give people the choice of self-service provisioning for any app they need through an enterprise app store with single sign-on
  • Automate controls on data sharing and management, such as the ability to copy data between applications, so people don’t have to remember specific policies
  • Define allowed device functionality on an app-by-app basis, so people can still use functions such as printing, camera and local data storage on some of their apps even if IT needs to turn them off for other apps
  • Make it simple for people to share and sync files from any device, and to share files with external parties simply by sending a link.

By developing a mobility strategy in collaboration with users, IT teams can better meet users’ needs while gaining a valuable opportunity to set expectations. This helps make sure employees understand IT’s own requirements for compliance.

Avoid bypassing

Bypassing company controls and policies via a mobile device represents the worst-case scenario for enterprise mobility. It is surprisingly common: many users who cannot find or access what they want on their mobile device will bypass IT altogether and use their own cloud services, apps and data.

Many people think it is great that employees are accessing what they want, when they need it. Actually, nothing could be further from the truth. Employees accessing unknown apps, handling sensitive data via public clouds and downloading files in ways that bypass IT’s visibility and control policies leave a business extremely vulnerable to attack. In reality, IT policies and user education can only go so far to prevent bypasses from happening; realistically, if it’s the best solution for someone’s needs and it seems unlikely that IT will find out, it’s going to happen. This makes it essential to provide people with an incentive to work with IT and use its infrastructure, especially when it comes to sensitive data and apps. The best incentive is a superior user experience, delivered proactively and designed to meet people’s needs better than the unmanaged alternative.

Embed mobility in your service delivery strategy

Mobile users rely on a variety of application types – not just custom mobile apps, but also third-party native mobile apps, Windows apps and SaaS solutions. In developing a mobility strategy, IT teams should think about the mix of apps used by the people and groups in their business, and how those apps should be accessed on mobile devices. It is widely accepted that there are four ways for people to access apps on mobile devices: natively, through virtualized access, through a containerized experience, and through a fully managed enterprise experience.

For most businesses, a combination of virtualized access and a containerized experience will support the full range of apps and use cases people rely on. This also makes it possible for IT to maintain visibility and control while providing a superior user experience. People can access hosted applications and native mobile apps, as well as SaaS apps such as Salesforce and NetSuite, through a unified enterprise single sign-on. When an employee leaves the business, IT can immediately disable the person’s account to remove access to all native mobile, hosted and SaaS apps used on the device.

Automation is the key to successful EMM outcomes

Automation not only simplifies life for the IT department, it also helps deliver a better user experience. Think about the difference automation can make for common mobility needs like these:

  • An employee replaces a lost device or upgrades to a new one. With the click of a single URL, all of the individual’s business apps and work information are available on the new device, ready for work.
  • As an employee moves from location to location and network to network, situational and adaptive access controls reconfigure apps automatically to make sure appropriate security, with complete transparency for the user.
  • A board member arrives for a meeting, tablet in hand. All the documents for the meeting are automatically loaded onto the device, configured selectively by IT for read-only access, and restricted to a containerized app as needed. Especially sensitive documents can be set to disappear automatically from the device as soon as the member leaves the room.
  • As employees change roles in the business, the relevant apps for their current position are made available automatically, while apps that are no longer needed disappear. Third-party SaaS licenses are instantly reclaimed for reassignment.

One way to implement this type of automation is through Active Directory. First, link a specific role to a corresponding container; anyone defined in that role will automatically inherit the container and all the apps, data, settings and privileges associated with it. On the device itself, you can use MDM to centrally set up Wi-Fi PINs and passwords, user certificates, two-factor authentication and other elements as needed to support these automated processes.
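
Conceptually, the role-to-container mapping is a simple lookup; the sketch below uses hypothetical group and app names, and a real deployment would query Active Directory and drive the EMM platform's own APIs rather than a Python dictionary.

```python
# Hypothetical mapping of AD security groups to app containers
ROLE_CONTAINERS = {
    "Sales":   ["crm-app", "quote-tool", "shared-files"],
    "Finance": ["erp-app", "expenses", "shared-files"],
}

def entitlements(user_groups: list) -> set:
    """Union of all apps granted by a user's group memberships."""
    apps = set()
    for group in user_groups:
        apps.update(ROLE_CONTAINERS.get(group, []))
    return apps

# Moving a user between roles automatically adds and removes apps
print(entitlements(["Sales"]))    # {'crm-app', 'quote-tool', 'shared-files'}
print(entitlements(["Finance"]))  # {'erp-app', 'expenses', 'shared-files'}
```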

Define networking requirements

Different applications and use cases have different networking requirements, from an intranet or Microsoft SharePoint site, to an external partner’s portal, to a sensitive app requiring mutual SSL authentication. Enforcing the highest security settings at the device level degrades the user experience unnecessarily; on the other hand, requiring people to apply different settings for each app can be even more tiresome for them.

By locking down networks to specific containers or apps, with separate settings defined for each, the IT team can make networking specific to each app without requiring extra steps from the user. People can just click on an app and get to work, while tasks such as signing in, accepting certificates or opening an app-specific VPN launch automatically by policy in the background.
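
Conceptually, per-app networking is a policy table consulted at launch time. The sketch below illustrates the idea with invented app names and settings; a real EMM platform would enforce these at the OS or container level rather than in application code.

```python
# Invented per-app network policy, consulted when an app launches
APP_NETWORK_POLICY = {
    "intranet":       {"vpn": "corp-vpn", "mutual_tls": False},
    "partner-portal": {"vpn": None,       "mutual_tls": False},
    "payroll":        {"vpn": "corp-vpn", "mutual_tls": True},
}

def launch(app: str) -> None:
    # Default posture: unknown apps get no tunnel and no client certificate
    policy = APP_NETWORK_POLICY.get(app, {"vpn": None, "mutual_tls": False})
    if policy["vpn"]:
        print(f"{app}: opening per-app tunnel via {policy['vpn']}")
    if policy["mutual_tls"]:
        print(f"{app}: presenting client certificate for mutual SSL")
    print(f"{app}: ready, no extra steps required from the user")

launch("payroll")
```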

Protect sensitive data

Unfortunately, in many businesses IT doesn’t know where the most sensitive data resides, and so must treat all data with the same top level of protection – an inefficient and costly approach. Mobility provides an opportunity for IT teams to protect data more selectively, based on a classification model that meets unique business and security needs.

Many companies use a relatively simple model that classifies data into three categories (public, confidential and restricted) and also takes into account the device and platform used. Other businesses have a much more complex classification model that takes into account many more factors, such as user role and location.

The data model deployed should take into account both data classification and device type. IT teams may also want to layer additional considerations, such as device platform, location and user role, into their security policy. By routing network access for confidential and restricted data through enterprise infrastructure, IT teams can capture complete information on how people are using information, and use it to assess the effectiveness of their data sensitivity model and mobile control policy.
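
A simple way to picture such a policy is as a decision matrix. The Python sketch below combines an illustrative three-category classification with device type and a location factor; the categories and rules are examples, not a recommended policy.

```python
# Minimal sketch of an access decision combining data classification with
# device type; the matrix below is illustrative, not a product's policy.
RULES = {
    # (classification, device_type) -> allowed actions
    ("public",       "byod"):      {"view", "edit", "share"},
    ("public",       "corporate"): {"view", "edit", "share"},
    ("confidential", "byod"):      {"view"},       # read-only in the container
    ("confidential", "corporate"): {"view", "edit"},
    ("restricted",   "byod"):      set(),          # no access on unmanaged devices
    ("restricted",   "corporate"): {"view"},
}

def allowed_actions(classification: str, device_type: str,
                    location_trusted: bool = True) -> set:
    """Layer an extra factor (location) on top of the base matrix."""
    actions = RULES.get((classification, device_type), set())
    if not location_trusted:
        actions = actions - {"edit", "share"}  # tighten off trusted networks
    return actions

print(allowed_actions("confidential", "byod"))               # {'view'}
print(allowed_actions("confidential", "corporate", False))   # {'view'}
```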

Be clear about roles and ownership

Who in your business will own enterprise mobility? In most companies, mobility continues to be addressed through an ad hoc approach, often by a committee overseeing IT functions from infrastructure and networking to apps. Given the strategic role of mobility in the business, and the complex matrix of user and IT requirements to be addressed, it’s crucial to clearly define the structure, roles and processes around mobility. People should understand who is responsible for mobility and how they will manage it holistically across different IT functions. Ownership needs to be equally clear when it comes to mobile devices themselves. Your BYOD policy should address the grey area between fully managed, corporate-owned devices and user-owned devices strictly for personal use – for example:

Who is responsible for backups for a BYO device?

Who provides support and maintenance for the device, and how is it paid for?

How will discovery be handled if a subpoena seeks data or logs from a personally owned device?

What are the privacy implications for personal content when someone uses the same device for work?

Both users and IT should understand their roles and responsibilities to avoid misunderstandings.

Build compliance into the solution

Globally, businesses now face more than 300 security and privacy-related standards, regulations and laws, with more than 3,500 specific controls. It is therefore not enough to simply try to meet these requirements; businesses need to be able to document compliance and support full auditability.

Many businesses have already solved the compliance challenge within their network. The last thing the IT department wants to do now is let enterprise mobility create a vast new problem to solve. IT departments should therefore make sure mobile devices and platforms support seamless compliance with government mandates, industry standards and corporate security policies, from policy- and classification-based access control to secure data storage. Your EMM solution should provide complete logging and reporting to help you respond to audits quickly, efficiently and successfully.

Prepare for the future

Don’t write your policies for only today! Keep in mind what enterprise mobility will look like in the next few years. Devices and users’ needs will continue to evolve and expand the potential of mobility, but they will also introduce new implications for security, compliance, manageability and user experience. IT departments need to pay attention to ongoing industry discussions about emerging technologies, and design their mobility strategy around core principles that can apply to any type of mobile device and use case. This way, they can minimize the frequent policy changes and iterations that may confuse and frustrate people.


Overcoming Active Directory Administrator Challenges

23rd February 2018

The central role of Active Directory in business environments

Deployment of, and reliance upon, Active Directory in the enterprise continues to grow at a rapid pace, and Active Directory is increasingly the central storage point for sensitive user data as well as the gateway to critical business information. It provides businesses with a consolidated, integrated and distributed directory service, and enables them to better manage user and administrative access to business applications and services.

Over the past 10+ years, Wanstor has seen Active Directory’s role in the enterprise expand drastically, as has the need to secure the data it both stores and enables access to. Unfortunately, native Active Directory administration tools provide little control over user and administrative permissions and access. This lack of control makes the secure administration of Active Directory a challenging task for IT administrators. In addition to limited control over what users and administrators can do within Active Directory, the directory offers only limited reporting on the activities performed within it. This makes it very difficult to meet audit requirements and to secure Active Directory. As a result, many businesses need assistance in creating repeatable, enforceable processes that will reduce their administrative overhead, whilst helping to increase the availability and security of their systems.

Because Active Directory is an essential part of the IT infrastructure, IT teams must manage it both thoughtfully and diligently – controlling it, securing it and auditing it. Not surprisingly, with an application of this importance there are challenges to confront and resolve in reducing risk, whilst deriving maximum value for the business. This blog will examine some of the most challenging administrative tasks related to Active Directory.

Compliance Auditing and Reporting

To satisfy audit requirements, businesses must demonstrate control over the security of sensitive and business-critical data. However, without additional tools, demonstrating regulatory compliance with Active Directory is time-consuming, tedious and complex.

Auditors and stakeholders require detailed information about privileged-user activity. This level of granular information allows interested parties to troubleshoot problems and also provides information necessary to improve the performance and availability of Active Directory.

Auditing and reporting on Active Directory has always been a challenge. To more easily achieve, demonstrate and maintain compliance, businesses should employ a solution that provides robust, custom reporting and auditing capabilities. Reporting should provide information on what, when and where changes happen, and who made the changes.

Reporting capabilities should be flexible enough to provide graphical trend information for business stakeholders, while also providing granular detail necessary for administrators to improve their Active Directory deployment. Solutions should also securely store audit events for as long as necessary to meet data retention requirements and enable the easy search of these events.
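
As a simple illustration of this kind of reporting, the sketch below summarises “who, what, when and where” from an exported audit trail and flags out-of-hours activity. The CSV file name and column names are assumptions about a hypothetical export, not a specific product’s format.

```python
# Minimal sketch: summarise "who, what, when, where" from an exported audit
# trail. The CSV column names are assumptions about a hypothetical export.
import csv
from collections import Counter

def summarise(audit_csv_path: str) -> None:
    """Print per-administrator change counts and flag out-of-hours changes."""
    by_admin = Counter()
    out_of_hours = []
    with open(audit_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumed columns: who, what, when (ISO 8601), where
            by_admin[row["who"]] += 1
            hour = int(row["when"][11:13])
            if hour < 7 or hour > 19:
                out_of_hours.append(row)
    for admin, count in by_admin.most_common():
        print(f"{admin}: {count} changes")
    print(f"{len(out_of_hours)} changes made outside business hours")

summarise("ad_audit_export.csv")
```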

Group Policy Management

Microsoft recommends that Group Policy be a cornerstone of Active Directory security. Leveraging the powerful capabilities of Group Policy, IT teams can manage and configure user and asset settings, applications and operating systems from a central console. It is an indispensable resource for managing user access, permissions and security settings in the Windows environment.

However, maintaining a large number of Group Policy Objects (GPOs), which store policy settings, can be a challenging task. For example, administrators should take special care in large IT environments with many system administrators, because making changes to GPOs can affect every computer or user in a domain in real time. Yet Group Policy lacks true change-management and version-control capabilities. Due to the limited native controls available, accomplishing something as simple as deploying a shortcut requires writing a script. Custom scripts are often complex to create and difficult to debug and test. If a script fails or causes disruption in the live environment, there is no way to roll back to the last known setting or configuration. Malicious or unintended changes to Group Policy can have devastating and permanent effects on an IT environment and a business.

To prevent Group Policy changes that can negatively impact the business, IT teams often restrict administrative privilege to a few highly-skilled administrators. As a result, these staff members are overburdened with administering Group Policy rather than supporting the greater goals of the business. To leverage the powerful capabilities of Group Policy, it is necessary to have a solution in place that provides a secure offline repository to model and predict the impact of Group Policy changes before they go live. The ability to plan, control and troubleshoot Group Policy changes, with an approved change and release-management process, enables IT teams to improve the security and compliance of their Windows environment without making business-crippling administrative errors.
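
To illustrate the offline-repository idea, here is a minimal Python sketch that keeps timestamped snapshots of GPO backup folders (such as those produced by Microsoft’s Backup-GPO cmdlet) so an administrator always has a last known good version to restore. The folder layout is an assumption for the example.

```python
# Minimal sketch of an offline GPO repository with rollback, assuming GPO
# backups land in one folder per GPO; the paths are illustrative.
import shutil
import time
from pathlib import Path

REPO = Path("gpo-repository")       # versioned, offline copies
LIVE_BACKUPS = Path("gpo-backups")  # latest backup of each live GPO

def snapshot(gpo_name: str) -> Path:
    """Copy the current backup of a GPO into a timestamped version folder."""
    version = REPO / gpo_name / time.strftime("%Y%m%dT%H%M%S")
    shutil.copytree(LIVE_BACKUPS / gpo_name, version)
    return version

def rollback_source(gpo_name: str) -> Path:
    """Return the most recent snapshot to restore (e.g. via Import-GPO)."""
    versions = sorted((REPO / gpo_name).iterdir())
    return versions[-1]

snapshot("Default Domain Policy")
print("restore from:", rollback_source("Default Domain Policy"))
```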

Businesses should also employ a solution for managing Group Policy that enables easy and flexible reporting to demonstrate that they’ve met audit requirements.

User Provisioning, Re-provisioning and De-provisioning

Most employees require access to several systems and applications, and each application has its own account and login information. Even with today’s more advanced processes and systems, employees often find themselves waiting for days for access to the systems they need. This can cost businesses thousands of pounds in lost productivity and employee downtime.

To minimize workloads and expedite the provisioning process, many businesses treat Active Directory as the authoritative data store for managing user account information and access rights to IT resources and assets. However, provisioning, re-provisioning and de-provisioning access via Active Directory is often a manual process. In a large business, maintaining appropriate user permissions and access can be a time-consuming activity, especially when the business has significant personnel turnover. Systems administrators often spend hours creating, modifying and removing credentials, and in a large, complex business, manual provisioning can take days. There are no automation or policy enforcement capabilities native to Active Directory, and with so little control in place, there is no way to make sure that users will receive the access they need when they need it.

Additionally, there is no system of checks and balances. Administrative errors can easily result in elevated user privileges that can lead to security breaches, malicious activity or unintended errors that can expose the business to significant risk. Businesses should look for an automated solution to execute provisioning activities. Implementing an automated solution with approval capabilities greatly reduces the burden on administrators, improves adherence to security policies, improves standards and decreases the time a user must wait for access. It also speeds up the removal of user access, which minimizes the ability of a user with malicious intent to access sensitive data.
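
As an illustration of approval-gated provisioning, the Python sketch below (using the ldap3 library) creates a user account only once a request has been approved. The server, OU, attribute values and the stubbed approval check are all assumptions; a real implementation would integrate with a ticketing or approval system.

```python
# Minimal sketch of approval-gated provisioning with ldap3; server, OU and
# attribute values are illustrative, and the approval check is a stub.
from ldap3 import Server, Connection

def is_approved(request_id: str) -> bool:
    # Stub: in practice this would query a ticketing/approval system.
    return True

def provision_user(conn: Connection, request_id: str,
                   first: str, last: str, department: str) -> None:
    """Create a user only after the request has been approved."""
    if not is_approved(request_id):
        raise PermissionError(f"request {request_id} not approved")
    dn = f"CN={first} {last},OU=Staff,DC=example,DC=com"
    conn.add(dn, ["user"], {
        "givenName": first,
        "sn": last,
        "department": department,
        "sAMAccountName": f"{first[0].lower()}{last.lower()}",
    })
    print("created" if conn.result["result"] == 0 else conn.result["description"])

server = Server("dc01.example.com")
with Connection(server, user="EXAMPLE\\svc_prov", password="...", auto_bind=True) as conn:
    provision_user(conn, "REQ-1042", "Jane", "Doe", "Accounting")
```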

Secure Delegation of User Privilege

Reducing the number of users with elevated administrative privileges is a constant challenge for the owners of Active Directory. Many user and helpdesk requests require interaction with Active Directory, but these common interactions often result in elevated access for users who do not need it to perform their jobs. Because there are only two levels of administrative access in Active Directory (Domain Administrator or Enterprise Administrator), it is very difficult to control what users can see and do once they gain administrative privileges.

Once a user has access to powerful administrative capabilities, they can easily access sensitive business and user information, elevate their privileges and even make changes within Active Directory. Elevated administrative privileges, especially when in the hands of someone with malicious intent, dramatically increase the risk exposure of Active Directory and the applications, users and systems that rely upon it.

Through years of experience dealing with Active Directory, Wanstor has found that it is not uncommon for a business to discover that thousands of users have elevated administrative privileges. Each user with unauthorized administrative privileges presents a unique threat to the security of the IT infrastructure and the business. Coupled with Active Directory’s latent vulnerabilities, it is easy for someone to make business-crippling administrative changes. When this occurs, troubleshooting becomes difficult, as auditing and reporting limitations make it nearly impossible to quickly build a clear picture of the problem.

To reduce the risk associated with elevated user privilege and make sure that users only have access to the information they require, businesses should seek a solution that can securely delegate entitlements. This is a requirement to meet separation-of-duties mandates, as well as a way to share the administrative load by securely delegating privileges to subordinates.
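
A useful first step is simply discovering who holds elevated privileges today. The Python sketch below (using ldap3) expands the nested membership of privileged groups via Active Directory’s LDAP_MATCHING_RULE_IN_CHAIN matching rule; the group names and base DN are illustrative.

```python
# Minimal sketch: enumerate accounts with elevated privileges by expanding
# nested membership of privileged groups. Names and base DN are illustrative;
# the OID in the filter is AD's LDAP_MATCHING_RULE_IN_CHAIN (nested groups).
from ldap3 import Server, Connection, ALL

PRIVILEGED_GROUPS = [
    "CN=Domain Admins,CN=Users,DC=example,DC=com",
    "CN=Enterprise Admins,CN=Users,DC=example,DC=com",
]

server = Server("dc01.example.com", get_info=ALL)
with Connection(server, user="EXAMPLE\\svc_audit", password="...", auto_bind=True) as conn:
    for group in PRIVILEGED_GROUPS:
        conn.search("DC=example,DC=com",
                    f"(&(objectClass=user)(memberOf:1.2.840.113556.1.4.1941:={group}))",
                    attributes=["sAMAccountName"])
        print(group, "->", [str(e.sAMAccountName) for e in conn.entries])
```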

Change Auditing and Monitoring

To achieve and maintain a secure and compliant IT environment, IT administrators must control change and monitor for unauthorized changes that may negatively impact their business. Active Directory change auditing is an important procedure for identifying and limiting errors and unauthorized changes to Active Directory configuration. One single change can put a business at risk, introducing security breaches and compliance issues.

Native Active Directory tools fail to proactively track, audit, report and alert administrators about vital configuration changes. Additionally, native real-time auditing and reporting on configuration changes, day-to-day operational changes and critical group changes do not exist. This exposes the business to risk, as the IT team’s ability to correct and limit damage is dependent on their ability to detect and troubleshoot a change once it has occurred.

A change that goes undetected can have a drastic impact on a business. For example, someone who elevated their privileges and changed their identity to that of a senior member of the finance department could potentially access company funds, resulting in theft, fraudulent wire transfers and so forth. To reduce risk and help prevent security breaches, businesses should employ a solution that provides comprehensive change monitoring. This solution should include real-time change detection, intelligent notification, human-readable events, central auditing and detailed reporting. Employing a solution that encompasses all of these elements will enable IT teams to quickly and easily identify unauthorized changes, pinpoint their source, and resolve issues before they negatively impact the business.
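
To make the change-detection idea concrete, the sketch below polls Active Directory’s uSNChanged attribute, a counter the directory increments on every object update, and reports objects changed since the last pass. The server details are illustrative, and a production tool would use a change-notification mechanism rather than a polling loop like this one.

```python
# Minimal sketch of change detection by polling uSNChanged; server and base
# DN are illustrative. Real tools use DirSync/notifications, not polling.
import time
from ldap3 import Server, Connection

BASE_DN = "DC=example,DC=com"

server = Server("dc01.example.com")
with Connection(server, user="EXAMPLE\\svc_mon", password="...", auto_bind=True) as conn:
    # Establish a baseline so we only alert on changes from now on.
    conn.search(BASE_DN, "(objectClass=*)", attributes=["uSNChanged"])
    highest_seen = max(int(e.uSNChanged.value) for e in conn.entries)
    while True:
        time.sleep(30)
        conn.search(BASE_DN, f"(uSNChanged>={highest_seen + 1})",
                    attributes=["distinguishedName", "whenChanged", "uSNChanged"])
        for entry in conn.entries:
            print(f"changed: {entry.distinguishedName} at {entry.whenChanged}")
            highest_seen = max(highest_seen, int(entry.uSNChanged.value))
```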

Maintaining Data Integrity

It is important for businesses of all sizes to make sure that the data housed within Active Directory supports the needs of the business, especially as other applications rely on Active Directory for content and information.

Data integrity involves both the consistency of data and the completeness of information. For example, there are multiple ways to enter a phone number. Entering data in inconsistent formats creates data pollution, which inhibits the business from efficiently organizing and accessing important information. Another example of data inconsistency is the ability to abbreviate a department name. Think of the various ways to abbreviate “Accounting.” If there are inconsistencies in Active Directory’s data, there is no way to make sure that an administrator can group all the members of accounting together, which is necessary for payroll, communications, systems access and so on.

Another vital aspect of data integrity when working with Active Directory is the completeness of information. Active Directory provides no native control over the content that is entered. If no controls are in place, administrators can enter information in any format they wish and leave fields that the business relies upon blank.

To support and provide trustworthy information to all aspects of the business that rely on Active Directory, businesses should employ a solution that controls both the format and completeness of data entered in Active Directory. By putting these controls in place, IT teams can drastically reduce data pollution and significantly improve the uniformity and completeness of the content in Active Directory.
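
As a minimal illustration, the sketch below enforces both completeness (required fields) and consistency (an agreed phone format and a fixed department list) before an entry is written to the directory. The rules themselves are examples, not a recommended standard.

```python
# Minimal sketch of enforcing format and completeness before data reaches
# Active Directory; the rules and department list are illustrative.
import re

REQUIRED_FIELDS = ["givenName", "sn", "department", "telephoneNumber"]
DEPARTMENTS = {"Accounting", "Engineering", "Human Resources"}  # no abbreviations
PHONE_FORMAT = re.compile(r"^\+44 \d{2,4} \d{3} \d{3,4}$")      # one agreed format

def validate(entry: dict) -> list:
    """Return a list of problems; an empty list means the entry may be written."""
    problems = [f for f in REQUIRED_FIELDS if not entry.get(f)]
    if entry.get("department") and entry["department"] not in DEPARTMENTS:
        problems.append(f"unknown department '{entry['department']}'")
    if entry.get("telephoneNumber") and not PHONE_FORMAT.match(entry["telephoneNumber"]):
        problems.append("phone number not in the agreed format")
    return problems

print(validate({"givenName": "Jane", "sn": "Doe",
                "department": "Acctg", "telephoneNumber": "020 7946 0123"}))
# ["unknown department 'Acctg'", 'phone number not in the agreed format']
```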

Self-Service Administration

Most requests made by the business or by users require access to, and administration of, Active Directory. This is often manual work, and there are few controls in place to prevent administrative errors. Active Directory’s inherent complexity makes these errors common, and just one mistake could damage the entire security infrastructure. Given this lack of controls, the business cannot have just anyone administering Active Directory.

While it may be practical to employ engineers and consultants to install and maintain Active Directory, businesses cannot afford to have their highly-skilled and valuable employees spending the majority of their time responding to relatively trivial user requests. Self-service administration and automation are logical solutions for businesses looking to streamline operations, become more efficient and improve compliance. This is achieved by placing controls around common administrative tasks and enabling the system to perform user requests without tasking highly skilled administrators.

Businesses should identify processes that are routine yet hands-on, and consider solutions that provide user self-service and automation of the process. Automating these processes reduces the workload on highly-skilled administrators, and it also improves compliance with policies, since automation does not allow users to skip steps in the process. Businesses should also look for self-service and automation solutions that allow for approval and provide a comprehensive audit trail of events to help demonstrate policy compliance.
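
The sketch below illustrates the shape of such a workflow: a routine request is executed only when approved, and every step is written to an audit trail. The request type, approver logic and logging format are assumptions for the example.

```python
# Minimal sketch of a self-service request with an approval gate and audit
# trail; the request type and approver logic are illustrative.
import json
import time

AUDIT_LOG = "selfservice_audit.jsonl"

def audit(event: dict) -> None:
    """Append every step to an audit log (here, a simple JSONL file)."""
    event["timestamp"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def request_group_membership(user: str, group: str, approver_ok: bool) -> bool:
    """Grant a routine request only when approved; log the outcome either way."""
    audit({"action": "request", "user": user, "group": group})
    if not approver_ok:
        audit({"action": "denied", "user": user, "group": group})
        return False
    # In practice the grant would be performed against AD by the system,
    # not by the requesting user, so no privileged credentials are exposed.
    audit({"action": "granted", "user": user, "group": group})
    return True

request_group_membership("jdoe", "Sales-Reports", approver_ok=True)
```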

Final thoughts

Active Directory has found its home as a mission-critical component of the IT infrastructure. As businesses continue to leverage its powerful capabilities as an authoritative repository, Active Directory is a vital part of enterprise security. Therefore, administrators must be able to control, monitor, administer and protect it with the same degree of discipline currently applied to other high-profile information such as credit card data and customer data. Because native tools do not enable or support the secure and disciplined administration of Active Directory, businesses must look for solutions that enable its controlled and efficient administration. These solutions help make sure the business information housed in Active Directory is both secure and appropriately serving the needs of the business.
