Knowledge is a tricky thing to handle. Sometimes you can’t get it to stick, for instance when you’re trying to get people to use a purpose-built sales forecasting system instead of Excel files. Sometimes you wish you could magically unstick it, in the same way you can erase a memory stick. Employees leaving the business are a case in point. In their heads they hold information about products, plans, customers, system logons and more. As the erasable employee is not yet a reality, you’ll have to face the fact that leavers will walk out with confidential data in their heads. What can you do about it?
Information security often conjures up notions of complex anti-virus software, hardware firewalls and perhaps a high-security data centre with biometric access checks. All of this is possible and often used to good effect. However, it would be a mistake to think that security stops there. Like the Maginot Line in France at the beginning of World War Two, it’s no good being bullet-proof at the front if the enemy sneaks in round the back. And attackers in cyberspace know that it is often faster and easier to get the access information they need by exploiting human laxity rather than by technical hacking. So what should you be looking out for?
Which disaster recovery measurements do you really need? The answer is the ones that are effective in helping you to plan and execute good DR. So your choice will naturally depend on your IT operations. The two ‘classics’, recovery time objective (RTO) and recovery point objective (RPO), are so fundamental that they apply to practically all situations. But suppose your organisation is running a service-oriented IT architecture, with business applications like ERP using resources supplied by other servers. If some of those servers cannot be recovered satisfactorily, there may be a secondary impact elsewhere. How can you measure this situation and define a minimum acceptable level of recovery?
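One way to reason about that secondary impact is to propagate recovery times through the dependency graph: an application is only really back when everything it depends on is back. The sketch below illustrates the idea with made-up service names and RTO figures; real values would come from your own business impact analysis, and this is an illustration rather than a standard metric.

```python
def effective_rto(service, rto, depends_on):
    """Effective RTO of a service: the maximum of its own RTO and the
    effective RTOs of everything it depends on, because the service is
    not usable until its supporting resources are recovered too."""
    deps = depends_on.get(service, [])
    return max([rto[service]] + [effective_rto(d, rto, depends_on) for d in deps])


# Illustrative figures only (hours) -- not from any real DR plan.
rto = {"ERP": 4, "DB": 8, "Auth": 2}
depends_on = {"ERP": ["DB", "Auth"]}

# The ERP application itself could be restored in 4 hours, but its
# database needs 8, so the business actually waits 8 hours for ERP.
print(effective_rto("ERP", rto, depends_on))  # 8
```

The gap between a service’s own RTO and its effective RTO is one candidate measure of the ‘secondary impact’ the paragraph describes.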
A recent announcement explained that cyber-security ‘big names’ McAfee and Symantec have agreed to share their threat data. It’s a development that should benefit customers of both vendors. Historically, IT vendors have swung back and forth between the multi-vendor approach (“we’ll handle the other vendor’s stuff for you”) and so-called coopetition, where two or more providers join forces, for instance by agreeing to operate to a common standard. The McAfee-Symantec pact ranges from sharing malware signatures to exchanging information on real-time attacks. Who else might follow this apparently enlightened example?
When should you bring in new technology? When it does a better job at meeting your needs, of course. It’s the same for business continuity management. Migrating from in-house physical servers to cloud computing services should be properly justified by lower costs, higher reliability and better performance, for instance, and without sacrificing data confidentiality, control or conformance. While cloud computing makes sense for many organisations, there are cases where it doesn’t (for example, cloud computing isn’t always cheaper). Looking at the following business criteria and then analysing what new-generation technology has to offer may be the smarter way to do things.
It’s an unfortunate truth. The holes in your IT security are most likely to be where you neither see them nor expect them. That means they’ll be outside the basic security arrangements that most organisations make. Firewalls, up-to-date software versions and strong user passwords are all necessary, but not sufficient. Really testing security is akin to an exercise in lateral thinking or even method acting. You have to look at your systems and network from the outside to see how a hacker or cybercriminal might try to get through or round the mechanisms you’ve put in place. And there’s more still to this outside-in approach to protecting your organisation.
“The Buck Stops Here”, said US President Truman. And he made it doubly clear by having that statement inscribed on a thirteen-inch sign on his Oval Office desk. But what would he have made of the cloud? There, IT engineers, managers and employees can all upload data, and trying to pin down one person in charge of data security is often a challenge, to say the least. The cloud is great news for organisations looking for reliable pay-as-you-go storage and processing power. However, lack of control over the sensitive data being stored and processed there could be a problem waiting to happen. It could affect almost half of all cloud-using organisations, according to a report issued after the Infosecurity Europe 2014 conference. How can you tackle this problem?
Clouds by definition are nebulous and vague. Their use in IT models and discussions goes back decades, long before the current cloud computing models. A ‘cloud’ was convenient shorthand for showing a link between a system on one side and a terminal or another system on the other. Today, however, the concept has evolved. Not only do such clouds link computers, but increasingly they are the computer. Aspects of on-site IT security therefore apply to cloud computing too. For that reason alone, it’s time to firm up definitions of the types of computing that go on in the cloud, and the IT security approaches suited to each one.
Technology helps organisations to get more done in less time. However, technology alone cannot guarantee business continuity. Solid business processes also contribute to resilience, but there’s another kind of ‘glue’ that can make the difference between enterprises that stand or fall when the going gets tough. It’s organisational culture, or “the way we do things round here”. This is an element that business continuity managers must factor into their planning, for at least two reasons. Firstly, as we’ve just said, because it is important, indeed essential, to BC. Secondly, because someone whose support the BC manager must win is also likely to make organisational culture a top priority.
What is the scarcest IT resource today? Processor power, main memory and disk space all seem to grow unabated. Network bandwidth, on the other hand, is still comparatively expensive. Consequently, enterprises tend to have less of it, which in turn leaves them more exposed to possible outages. Luckily, other technology means that bandwidth can be made to do more, even if it’s not reasonable to have more of it. Routing voice and data over the same links is a prime example. This simplifies recovery and can also minimise outages. What’s missing in the equation is a simple explanation of the terms involved. Here are a few to help you mix and match for the configuration that suits you.
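To see why converged voice and data links need careful bandwidth planning, a standard back-of-envelope calculation helps. The sketch below works through the commonly cited figures for one G.711 voice call carried over RTP on Ethernet (20 ms packetisation, no silence suppression or compression); other codecs and transports change the numbers, so treat this as an illustration of the method rather than a sizing rule.

```python
# One G.711 call over RTP: how the 64 kbps codec becomes ~87 kbps on the wire.
PAYLOAD_BITRATE = 64_000      # G.711 codec output, bits per second
PACKET_INTERVAL = 0.020       # one RTP packet every 20 ms
HEADERS = 12 + 8 + 20 + 18    # RTP + UDP + IPv4 + Ethernet headers, in bytes

packets_per_second = 1 / PACKET_INTERVAL                 # 50 packets/s
payload_bytes = PAYLOAD_BITRATE * PACKET_INTERVAL / 8    # 160 bytes of voice
frame_bytes = payload_bytes + HEADERS                    # 218 bytes per frame

bandwidth_kbps = frame_bytes * packets_per_second * 8 / 1000
print(f"{bandwidth_kbps:.1f} kbps per call")  # 87.2 kbps per call
```

Multiplying by the expected number of concurrent calls gives the slice of a shared link that voice will claim, which is the figure to protect with quality-of-service settings.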
The theory and principles of good business continuity exist. As the world changes, they may change too. However, organisations have always had, and will always have, a comprehensive body of knowledge available to help them continue to operate normally even in the face of adversity. So the question “Does it work?” is not meant in the general sense, but in the specific one, as in “Does your particular business continuity planning and management work for you?” It’s a question that one company in the food sector found to be a longstanding frustration, until it put in place a way of answering it.
Commercial enterprises know that the best way to maintain market leadership is to attack yourself. It’s the same in IT security if you want to maximise your resistance to hackers. A niche industry has grown up around penetration testing – or ‘pentesting’ for short. Providers in this sector offer automated or manual tests to see if they can ethically hack your computer systems and network. Business self-preservation is a strong motivation for pentesting. Such tests may also be a necessary part of a certification process for being allowed to handle confidential customer or financial data, for example. Some practitioners divide test operations into white-box and black-box testing. But is it really that clear cut?
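To make the black-box idea concrete: the tester starts with no inside knowledge and simply probes what the target exposes, the way an outsider would. The fragment below is a minimal sketch of that stance, a plain TCP connect check written with Python’s standard library; real pentesters use full toolkits, and the function name here is our own. Only ever run probes like this against systems you are authorised to test.

```python
import socket


def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Black-box style probe: report whether a TCP connection to
    host:port succeeds, with no knowledge of what runs behind it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

White-box testing, by contrast, starts from source code, configurations and credentials, which is why the two approaches find different classes of weakness and why the dividing line blurs in practice.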
The literature buffs among you should recognise this paraphrase of Samuel Taylor Coleridge’s epic poem, ‘The Rime of the Ancient Mariner’. Besides having to put up with an albatross hung round his neck, the Ancient Mariner despaired of a lack of drinking water while becalmed at sea (“Water, water, everywhere…”). Given today’s oceans of data, CIOs might feel much the same way. They have to battle to fulfil legal requirements and assist business continuity by enabling management to pick out single data objects from terabytes of storage. AHIMA (the American Health Information Management Association) produced a model for the healthcare sector to tackle the problem. It’s a model that might be adapted for other industries too.
As you bring virtualisation into your IT infrastructure, you may have noticed a few security-related aspects that weren’t present in a purely physical ‘one-app-one-server’ environment. Firstly, the virtual administrator (you or whoever) and the system hypervisor have significant new power over your population of servers. Secondly, ‘things’ exist at the virtualisation level that conventional monitoring at the physical level cannot detect. Thirdly, files can skip blithely from one machine to another. In fact, the machines themselves have, logically speaking, become files. These things are reasons for implementing virtualisation in the first place – but they are also security weaknesses.
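The third point is worth dwelling on: if a virtual machine is just a file, a copied disk image is a stolen server, and a silently modified one is a tampered server. One modest countermeasure is to baseline the hashes of your image files and re-check them, so unexpected changes stand out. The sketch below illustrates the idea with Python’s standard library; the `.vmdk` extension and directory layout are assumptions for illustration, not taken from any specific product, and a production version would hash large images in chunks rather than in one read.

```python
import hashlib
from pathlib import Path


def baseline(image_dir: str) -> dict[str, str]:
    """Map each VM disk image in image_dir to its SHA-256 digest.
    (Assumes VMDK files; adjust the glob for other formats.)"""
    return {
        path.name: hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(Path(image_dir).glob("*.vmdk"))
    }


def changed(image_dir: str, previous: dict[str, str]) -> list[str]:
    """Names of images that are new, missing or modified since baseline."""
    current = baseline(image_dir)
    return sorted(
        name for name in set(previous) | set(current)
        if previous.get(name) != current.get(name)
    )
```

A check like this catches tampering with images at rest, but note that it does nothing against the first two weaknesses, hypervisor-level power and invisibility to physical-level monitoring, which need controls of their own.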