Not everybody chooses the cloud as the first option for backing up data. Despite the advantages of practically limitless storage capacity, pay-as-you-go pricing and resilience, the cloud's weak point is the network speed for uploading or downloading all those gigabytes (terabytes, petabytes…). The alternative for organisations is to put their own solution in place, something that will let them blast large amounts of data backwards and forwards at high speed. In the old days of IT, an in-house team would have been tasked with assembling the requisite components and tweaking them to work properly together. Now, however, IT vendors have spotted the need and produced the PBBA (purpose-built backup appliance), a solution whose popularity is growing steadily.
Did you know that in six years’ time each individual on the planet will correspond to over 5,000 gigabytes of stored data? That’s the estimate from market research company IDC and digital storage enterprise EMC, who see worldwide data holdings doubling about every two years to reach 40,000 exabytes (40 trillion gigabytes) by 2020. Right now in 2014, that means making moves to extend and enhance data storage solutions appropriately, and to update those disaster recovery plans too. To store and manage all the data forecast to arrive, new techniques and technologies are available to blend with revamps of existing ones.
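The headline figures above can be sanity-checked with some quick arithmetic. A minimal sketch, assuming a 2020 world population of roughly 7.8 billion (a figure not stated in the text):

```python
# Sanity check of the IDC/EMC projection cited above.
# Assumption (not in the original text): 2020 world population ~7.8 billion.

TOTAL_EB = 40_000        # projected worldwide data by 2020, in exabytes
GB_PER_EB = 10**9        # 1 exabyte = one billion gigabytes
POPULATION = 7.8e9       # assumed 2020 world population

total_gb = TOTAL_EB * GB_PER_EB        # 4e13 GB, i.e. 40 trillion gigabytes
per_person_gb = total_gb / POPULATION  # comes out a little over 5,000 GB

print(f"{per_person_gb:,.0f} GB per person")
```

On these assumptions the model lands close to the "over 5,000 gigabytes per person" headline, and the doubling-every-two-years rate implies roughly an eightfold increase between 2014 and 2020.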
Despite the publicity given to Big Data and (to a lesser extent) the Internet of Things, their practical value has yet to become clear. It’s difficult to think of them in terms of business continuity when they don’t yet influence the fortunes of an enterprise – unless you count the negative impact of money spent investigating them. A few companies cite gains in marketing effectiveness, for example by analysing huge amounts of online data from customer interactions, but Big Data is not mainstream – or not yet. Similarly, the Internet of Things, in which phones, PCs, cars, fridges and more are all web-enabled, is a conversation starter rather than a reality. Things would change if either one acquired a killer app.
There is no question that technology today forms the core of business. Company systems and networks facilitate transactions and store sensitive data, belonging both to the company’s staff and to its clients, and they are increasingly under siege. This makes data at once the corporation’s most precious asset and its most vulnerable. Losing it may cause irreparable damage to a business’s reputation, and with it the trust of shareholders. Logically, then, network security should be a key focal point in the disaster recovery plan of any business that wishes to stay afloat.
How many passwords do you have? How many can you remember – and what do you do about the others? Business and consumer life is controlled to a significant degree by passwords. It’s a balancing act between making them memorable (for their rightful owners) without opening the door to password abuse or theft. The business continuity challenges that organisations face include weeding out passwords like ‘secret’, ‘1234’ or even just ‘password’, restricting password knowledge to only those who should know, and dealing with passwords that have been forgotten.
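The weeding-out of passwords like ‘secret’, ‘1234’ or ‘password’ is often automated as a simple blacklist-plus-length check. A minimal sketch, where the blacklist entries and the eight-character minimum are illustrative assumptions rather than policy from the text:

```python
# Illustrative weak-password check. The blacklist contents and the
# 8-character minimum are assumptions for this sketch, not a standard.

WEAK_PASSWORDS = {"password", "secret", "1234", "123456", "qwerty"}
MIN_LENGTH = 8

def is_weak(password: str) -> bool:
    """Flag passwords that appear on the blacklist or are too short."""
    return password.lower() in WEAK_PASSWORDS or len(password) < MIN_LENGTH

# The examples called out in the text are all caught:
for pw in ("secret", "1234", "password"):
    assert is_weak(pw)
```

In practice the blacklist would be far larger (lists of commonly breached passwords run to millions of entries), but the shape of the check stays the same.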
Good business continuity training helps managers and enterprises prepare business continuity plans. However, they’ll also need to deal with a further factor: human error. This element is a cause of anything from small business failures to nuclear power plant meltdowns. A little information on the subject can help make business continuity that much more robust. Although sophisticated analytical techniques exist for assessing human reliability, in the first instance we’ll take a common-sense approach. This also makes it easier to apply error-prevention measures in your organisation and boost your business continuity still further. Compare these observations with the theory and principles of business continuity from your training classes, and with the exercises you do to test BC plans.
Ask people where the next surprise will be in disaster recovery and they may well point to technology, the weather or legislation. While all of these areas should be taken into consideration, there’s another one that is vital to good DR management: people. Perhaps because it’s so obvious, disaster recovery plans sometimes gloss over the human resources factor. ‘Get everybody back to work’ is frequently all that’s said, after a detailed discussion of phased computer and network recovery. However, it may take more than snapping your fingers to bring productivity back in a timely way.
It started with IT server virtualisation and then continued with cloud computing. Instead of physical machines running a company’s own software applications, we now simply have interfaces to virtual instances of these things. Computing resources are no longer located in a specific piece of equipment on a company’s premises. They are ‘somewhere’ in the cluster of virtualised servers, or on the network, or in the cloud. Software as a Service (SaaS) takes it all a step further: now not only are businesses relieved of the need to buy and run their own hardware, but there’s someone else to look after the software too. The potential advantages of budget flexibility, resilience and scalability are clear. But that doesn’t change the need to continually verify solid business continuity management, from one end right through to the other.
At the start of each year, there’s always a long list of IT offerings vying for attention. With many solutions still looking for a problem, it pays to take a moment to consider the business impact rather than being seduced by the high-tech glitter. Here’s a quick rundown of what might affect business continuity in 2014.
The data breach at Target Corp, the US retail chain, was a shock for many. The personal information of at least 70 million customers was stolen by hackers who intercepted it as shoppers used credit and debit cards at the company’s points of sale. The reputational damage seems to have quickly spilled over into an impact on the bottom line: Target cut its profit forecast for the fourth quarter of 2013 by about 20 percent. However, this high-profile case (Target is the third biggest US retailer) may just be a taste of the problems in store for other enterprises using the same kind of point of sale (PoS) systems.
The world turns, things change and new security risks continue to appear on the scene. Some organisations bury their head in the sand or cross their fingers. ‘It wouldn’t happen to us’ is their motto. Others make plans using different approaches, some better than others. Then they leave the plan untouched without updating it and expect it to hold good. Is such a policy ever justified? Do new threats mean that traditional security principles should be revised? And where should you start if you want to improve your own security risk management?
People are often cited as an organisation’s most valuable resource. The more capable and better trained an employee is, the more an enterprise stands to profit – up to a point. Difficulties may begin when a person becomes indispensable because of unique expertise that is essential to the smooth running of the company. Those difficulties are compounded if the expert tries to force the company to remain dependent on that expertise, perhaps for fear of being pushed to one side or even made redundant. A situation like this runs counter to everything business continuity is about. What is the best way to handle it?
150 years ago the Great Blondin, the world-famous tightrope walker, performed incredible feats of balance and daring in his aerial ambulation above Niagara Falls. While today’s Chief Information Officer doesn’t always hold crowds breathless with excitement in quite the same way, he or she has a balancing act to get right too. How much detail should CIOs know about the technology the company is using? How much should they get involved in managing IT projects, and how much should they concentrate instead on strengthening relationships with fellow directors? The way CIOs handle these questions can affect both the business continuity and the disaster recovery capabilities of an organisation.
Be honest – do you currently have a malicious software reporting policy? Just relying on the existence of anti-virus software and firewalls may be too optimistic nowadays. The potential damage to information assets and productivity, let alone identity or bank account theft, suggests that a malware reporting policy should be in place in any organisation. Even Google is asking users to contribute to tightening up security by reporting any nefarious activity from websites listed in its results pages. And as an additional source of concern, it seems malware infections are also being caused by some of the very entities that are supposed to be protecting us.
Stick to core competence and competitive advantage, and outsource the rest: such has been the mantra of businesses for decades now. The logic is simple. By using external partners specialised in non-core activities – accounting, logistics and payroll, for example – an enterprise can benefit from the partner’s economies of scale and superior expertise. Profits go up and business continuity is reinforced. Yet outsourcing still gives rise to disappointment and animosity. It turns out that while a watertight contractual agreement is a prerequisite for dependable outsourcing, it isn’t sufficient. Organisations need more.