If you are familiar with IT security testing for organisations, you have probably heard of the concept of a kill chain. This is a route by which an attacker can achieve a given goal (steal data or sabotage an IT installation, for instance). Kill chains, as their name suggests, are composed of several links or stages through which an attacker moves to home in on the target result. Since efficiency as well as effectiveness is part of business continuity, why reinvent the wheel? The kill chain could provide insights here as well.
The Ebola crisis, an epidemic that spread across several countries, has hit the nation of Sierra Leone the hardest. National and international health teams have worked round the clock to contain the disease and prevent new outbreaks. Pharmaceuticals companies have ramped up efforts to develop new vaccines. Sierra Leone counts almost 12,000 people infected, with growth in both city and travelling populations among the major contributing factors. Recently, the Ebola response team in Sierra Leone tried a new tactic that was in stark contrast with previous measures. The tactic could be summed up in one word: don't!
IT security managers and IT teams can install the latest antivirus software and firewall appliances to protect their computers and networks. However, there are other warning signs to look out for that software and hardware products are not always smart enough to see. Human beings, on the other hand, are naturally gifted at spotting strange behaviour. When patterns change or get disrupted, we notice. Here's a checklist of 'indicators of compromise' to look out for, where changes might indicate an IT security attack in progress.
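The 'humans notice when patterns change' idea can also be made concrete in software. Below is a minimal, hypothetical sketch that flags a metric drifting far from its recent baseline; the function name, threshold and sample traffic figures are illustrative assumptions, not part of any particular product.

```python
# Hypothetical sketch: flag a value that deviates sharply from its baseline.
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) > threshold * sigma

# Illustrative daily outbound traffic figures, in megabytes.
outbound_mb = [12, 15, 11, 14, 13, 12, 16, 14]

print(is_anomalous(outbound_mb, 480))  # an exfiltration-sized spike
print(is_anomalous(outbound_mb, 14))   # an unremarkable day
```

Real monitoring tools use far more sophisticated baselines, but the principle is the same: define 'normal' first, then alert on the disruption.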
There is an old joke in sales that things would be great if it wasn’t for the customers. Of course, it is the customers that buy and that keep salespeople in a job. More generally, people accomplish tasks, do projects, have ideas and help to run businesses. Business continuity is inextricably bound up with people. They may be unpredictable as individuals, but display rather more predictable behaviour when grouped together. Predictive analytics has already been growing as a method of forecasting market conditions, economic trends and environmental developments. Increasingly, these techniques are also being applied in cases where people have a direct impact on business continuity.
Information technology has certain features that make it possible to calculate probable dates of demise. It's all digital, with a finite number of bits and bytes, and calculable error rates. As disk storage capacities increase, technologies viable today may run out of steam tomorrow. They cannot scale forever. Unlike vinyl records in the music industry or Polaroid cameras (a bit of a cheat) that were written off but then experienced a resurgence in their markets, when a disk drive is dead, it's dead. Here is the thinking behind the disturbingly precise estimate that by 2019, RAID 6 drives should no longer be part of the IT landscape or the disaster recovery scene.
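To illustrate the kind of arithmetic behind such estimates, the sketch below works out the chance of hitting at least one unrecoverable read error (URE) while rebuilding a failed drive in an array. The drive capacity, drive count and URE rate are hypothetical round numbers chosen for illustration, not figures from the estimate itself.

```python
# Illustrative calculation: probability of at least one unrecoverable read
# error (URE) during an array rebuild. All figures are hypothetical.

URE_RATE = 1e-15          # unrecoverable read errors per bit read
DRIVE_TB = 10             # capacity of each surviving drive, in terabytes
SURVIVING_DRIVES = 7      # drives that must be read in full to rebuild

bits_to_read = SURVIVING_DRIVES * DRIVE_TB * 1e12 * 8  # total bits read

# Chance that at least one of those bits suffers a URE.
p_failure = 1 - (1 - URE_RATE) ** bits_to_read
print(f"Chance of a URE during rebuild: {p_failure:.1%}")
```

As capacities grow, `bits_to_read` grows with them, and the rebuild failure probability creeps towards certainty; this is the scaling wall the paragraph above alludes to.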
Concepts and fashions in business come and go. And sometimes they come back again with a new look or a different name. The origin of the DevOps name is simple to guess. It's a combination of development and operations. The advantages cited for using a DevOps approach include a lower failure rate of software releases, a faster time to fix, and a faster time to recover if a new release crashes your server. DevOps is currently a buzzword in IT circles, but despite an inception date of 2008, just how new is it?
If the title of this post makes you go cross-eyed, don’t worry. All will become clear. Let’s explain. Active/active IT configurations consist of computer servers that are connected in a network and that share a common database. The ‘active/active’ part refers to the capability to handle server failure. First, if one server fails, it does not affect the other servers. Second, users on a server that fails are then rapidly switched to another server that works. The database that the servers use is also replicated so that there is always one copy available. Now for the other two acronyms: HA stands for high availability; DR (of course) for disaster recovery. It is DR that is more affected in this case.
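The active/active behaviour described above can be sketched in a few lines of code. This is a minimal, hypothetical model (the class and function names are invented for illustration), ignoring real-world concerns such as load balancing, session state and database replication lag.

```python
# Minimal sketch of active/active failover: users on a failed server are
# rapidly re-homed on a working one. Class/function names are hypothetical.

class Server:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.users = set()

def assign(user, servers):
    """Route a user to the first healthy server (load balancing omitted)."""
    for s in servers:
        if s.healthy:
            s.users.add(user)
            return s
    raise RuntimeError("no healthy servers available")

def fail_over(failed, servers):
    """On failure, move the failed server's users to the remaining servers."""
    failed.healthy = False
    displaced, failed.users = failed.users, set()
    for user in displaced:
        assign(user, servers)

servers = [Server("a"), Server("b")]
assign("alice", servers)           # alice lands on server a
fail_over(servers[0], servers)     # server a fails; alice moves to server b
print(servers[1].users)
```

The point of the sketch is the second property mentioned above: failure of one server triggers a switch, not an outage, which is why the HA side of active/active tends to look after itself while DR needs separate attention.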
First there was the dedicated, physical server. Then came virtualisation to help organisations mix and match over different servers on their sites. After that came cloud computing with more virtualisation (and multi-tenancy thrown in). However, organisations typically still did their virtualisation between machines in close physical proximity, even if they were using cloud services. Now the next step is to see how well virtual machines and their data can be transferred between racks of machines not just separated by a few feet, but by hundreds of miles – or at least far enough to be out of range of the next tsunami.
How often have you heard the expression ‘no pain, no gain’? These four words sum up the idea that if you are to receive benefits, then you must suffer (or at least make an effort). Alternatively, you could take it to mean that if you don’t make an effort, you can’t expect benefits. An example in the domain of disaster recovery might be ‘if you skip regular data backups (no effort), you’ll fail when your hard disk crashes (no benefit)’. The problem comes when people use chop logic to infer from ‘no pain, no gain’ that ‘if pain, then gain’ is true as well.
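The chop logic above can be checked mechanically. The short truth-table sketch below confirms that 'no pain, no gain' (not pain implies not gain) does not entail 'if pain, then gain': there is an assignment that satisfies the first and violates the second.

```python
# Truth-table check: (not pain -> not gain) does NOT entail (pain -> gain).
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

counterexamples = [
    (pain, gain)
    for pain, gain in product([False, True], repeat=2)
    if implies(not pain, not gain) and not implies(pain, gain)
]
print(counterexamples)
```

The single counterexample is pain without gain: effort that yields nothing, which is exactly the case the faulty inference overlooks, and a familiar one in disaster recovery budgeting.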
Think you know it all when it comes to business continuity? That's great. Think you can store all that knowledge? Think again. The way most information technology has developed, it's great for storing information (bunches of related data), but not so hot for knowledge (insights and deeper relationships). There is no shortage of information to define business continuity, list its component parts, describe planning methodologies and offer case studies. You can access that information, transfer it and store it on your PC or mobile computing device. The problem is in storing your understanding of that material, and the model you develop to see it all as a connected whole.
Tape data storage just keeps on going. It's almost like the steampunk of IT, a branch off into a different universe where everybody reads by bigger candles instead of electric light bulbs. But it works. In fact, it works well enough for the largest IT vendors to continue pushing the envelope on tape storage density, as well as on storage and recovery speeds. However, tape is not disk. You cannot 'dip into' tape in the same way you can randomly access a hard drive. And so, for backup and recovery in particular, the virtual tape library was invented to offer the advantages of both tape and disk. Nevertheless, there are both pros and cons to consider.
Where are the weak points in your organisation and its operations? Where could disasters or criminals do the most damage? Vulnerability testing, as its name suggests, is done to find out where the soft underbelly is. Then protection and security can be suitably reinforced. In a general sense, it can cover everything: from freak weather conditions to power outages, supplier failure and IT disasters. Indeed, the latter category of IT is where vulnerability testing is most often performed. This is partly because of the critical role of IT throughout many organisations, and partly because IT vulnerability testing is relatively easy to automate. However, even systematic automated testing can't do it all. So what's the solution?
As a business continuity manager, CIO or company risk officer, you've probably already done numerous risk value calculations. In order to make a table to compare risks and their impacts, you might assign percentages or relative scores to risks, and monetary values or relative scores again to impacts. The risk value in each case is then simply "risk × impact". You get a simple table that allows you to rank risks in order of their risk value and set your priorities accordingly. However, what may be forgotten is that risk calculations can be positive as well as negative.
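The table-building exercise above can be sketched in a few lines. The risks, probabilities and impact figures below are hypothetical examples, there only to show the ranking mechanics.

```python
# Illustrative sketch: ranking risks by risk value (probability x impact).
# All risks, probabilities and impact figures below are hypothetical.

risks = {
    "Server room flood":    (0.05, 500_000),  # (probability, impact)
    "Ransomware attack":    (0.20, 250_000),
    "Key supplier failure": (0.10, 100_000),
}

# Risk value = probability x impact, as described above.
risk_values = {name: p * impact for name, (p, impact) in risks.items()}

# Rank from highest to lowest risk value to set priorities.
ranked = sorted(risk_values.items(), key=lambda kv: kv[1], reverse=True)
for name, value in ranked:
    print(f"{name}: {value:,.0f}")
```

Note how a modest-impact but likely event can outrank a catastrophic but rare one; the same arithmetic works unchanged if you plug in positive (opportunity) values instead of losses.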
Disaster recovery planning for your IT installations may use automated procedures for a number of situations. Virtual machines can often be switched or re-started in case of server failure, and network communications can be rerouted without human intervention. For other requirements, people will be involved in getting IT systems up and running properly after an incident. But people do not switch into auto-run modes like a machine. They can be affected by the surprise factor of an IT disaster and by the pressure to bring things back to normal. Five aspects of usability may need to be designed into your DR planning if you want the best chances of a satisfactory recovery.