On March 1st OpenSSL published security advisory CVE-2016-0800, known as “DROWN”. This is described as a cross-protocol attack on TLS using SSLv2 and is classified as High Severity. The advice given by OpenSSL is:
“We strongly advise against the use of SSLv2 due not only to the issues described below, but to the other known deficiencies in the protocol as described at https://tools.ietf.org/html/rfc6176”
This vulnerability illustrates how vigilant organizations need to be over the specific versions of software that they use. However, this is easier said than done. Many organizations have a website or application that was built by a third party. The development may have been done some time ago using the then-current versions of readily available Open Source components, and the developers may or may not have a contract to keep the package they developed up to date.
The application or website may be hosted on premise or externally; wherever it is hosted, the infrastructure upon which it runs also needs to be properly managed and kept up to date. OpenSSL is part of the infrastructure upon which the website runs. While there may be some compatibility reasons for continuing to support SSLv2, there is no excuse for reusing SSL private keys between websites; it goes against all security best practice.
It may be difficult to believe, but I have heard auditors report that when they ask “what does that server do?” they get the response “I don’t know – it’s always been here and we never touch it”. The same can be true of VMs in the cloud, which get created, used and then forgotten (except by the cloud provider, who keeps on charging for them).
So as vulnerabilities are discovered, there may be no process in place to remediate the operational package. Cyber criminals love this: they can set up automated external scans to find where known vulnerabilities remain unpatched and exploit the results at their leisure.
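Defenders can run the same kind of scan themselves before the criminals do. Below is a minimal sketch in Python (standard library only; the hostname is a placeholder) that reports which TLS protocol version a server negotiates with modern client defaults. Note that current Python builds have had SSLv2 support removed entirely, so a dedicated scanning tool is needed to test for DROWN itself:

```python
import socket
import ssl

def probe_tls(host: str, port: int = 443) -> str:
    """Report the TLS protocol version a server negotiates with modern defaults."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'

if __name__ == "__main__":
    # Placeholder host: point this at servers in your own estate.
    print(probe_tls("example.com"))
```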
There are two basic lessons from this:
- Most code contains exploitable errors, and its evolution generally leads to a deterioration in quality over time unless there are very stringent controls over change. It is attractive to add functionality, but the resulting increase in size and complexity leads to more vulnerabilities. Sometimes it is useful to go back to first principles and recode using a stringent approach.
I provided an example of this in my blog AWS Security and Compliance Update. AWS has created a replacement for OpenSSL TLS – the S2N Open Source implementation of TLS. S2N replaces the 500,000 lines of code in OpenSSL with approximately 6,000 lines of audited code. This code has been contributed to Open Source and is available from the S2N GitHub Repository.
- Organizations need to demand maintenance as part of the development of code by third parties. This avoids the need to maintain out-of-date infrastructure components for compatibility.
The infrastructure, whether on premise or hosted, should be kept up to date. This will require change management processes to ensure that changes do not impact on operation. This should be supported by regular vulnerability scanning of operational IT systems using one of the many tools available together with remediation of the vulnerabilities detected.
IT systems need to have a managed lifecycle. It is not good enough to develop, deploy and forget.
On November 30th, 2015 the final version of the standard ISO/IEC 27017 was published. This standard provides guidelines for information security controls applicable to the provision and use of cloud services. This standard has been some time in gestation and was first released as a draft in spring 2015. Has the wait been worth it? In my opinion yes.
The gold standard for information security management is ISO/IEC 27001 together with the guidance given in ISO/IEC 27002. These standards remain the foundation, but the guidelines are largely written on the assumption that an organization processes its own information. The increasing adoption of managed IT and cloud services, where responsibility for security is shared, is challenging this assumption. This is not to say that these standards and guidelines are not applicable to the cloud; rather, they need interpretation in a situation where the information is being processed externally. The ISO/IEC 27017 and ISO/IEC 27018 standards provide guidance to deal with this.
ISO/IEC 27018, which was published in 2014, establishes controls and guidelines for measures to protect Personally Identifiable Information in the public cloud computing environment. The guidelines are based on those specified in ISO/IEC 27002, with control objectives extended to include the requirements needed to satisfy the privacy principles in ISO/IEC 29100. These are easily mapped onto the existing EU privacy principles. This standard is extremely useful in helping an organization assure compliance when using a public cloud service to process personally identifiable information. Under these circumstances the cloud customer is the Data Controller and, under current EU laws, remains responsible for processing breaches by the Data Processor. To provide this level of assurance some cloud service providers have obtained independent certification of their compliance with this standard.
The new ISO/IEC 27017 provides guidance that is much more widely applicable to the use of cloud services. Specific guidance is provided for 37 of the existing ISO/IEC 27002 controls; separate but complementary guidance is given for the cloud service customer and the cloud service provider. This emphasizes the shared responsibility for security of cloud services. This includes the need for the cloud customer to have policies for the use of cloud services and for the cloud service provider to provide information to the customer.
For example, as regards restricting access (ISO 27001 control A.9.4.1) the guidance is:
- The cloud service customer should ensure that access to information in the cloud service can be restricted in accordance with its access control policy and that such restrictions are realized.
- The cloud service provider should provide access controls that allow the cloud service customer to restrict access to its cloud services, its cloud service functions and the cloud service customer data maintained in the service.
In addition, the standard includes 7 additional controls that are relevant to cloud services. These new controls are numbered to fit with the relevant existing ISO/IEC 27002 controls; these extended controls cover:
- Shared roles and responsibilities within a cloud computing environment
- Removal and return of cloud service customer assets
- Segregation in virtual computing environments
- Virtual machine hardening
- Administrator's operational security
- Monitoring of cloud services
- Alignment of security management for virtual and physical networks
In summary, ISO/IEC 27017 provides very useful guidance, and KuppingerCole recommends that this guidance be followed by both cloud customers and cloud service providers. While it is helpful for cloud service providers to have independent certification that they comply with this standard, this does not remove the responsibility from the customer for ensuring that they also follow the guidance.
KuppingerCole has conducted extensive research into cloud service security and compliance and into cloud service providers, as well as engaging with cloud service customers. This research has led to a deep understanding of the real risks around the use of cloud services and how to approach these risks to safely gain the potential benefits. We have created services, workshops and tools designed to help organizations manage their adoption of cloud services in a secure and compliant manner while preserving the advantages that these kinds of IT service bring.
MetricStream, a US company that supplies Governance, Risk and Compliance applications, held their GRC Summit in London on November 11th and 12th. Governance is important to organizations because of the increasing burden of regulations and laws upon their operations. It is specifically relevant to IT security because these regulations touch upon the data held in the IT systems. It is also highly relevant because of the wide range of IT service delivery models in use today.
Organizations using IT services provided by a third party (for example a cloud service provider) no longer have control over the details of how that service is delivered. This control has been delegated to the service provider. However, the organization remains responsible for ensuring that the data being processed and held is handled in a compliant way. This is the challenge that governance can address and why governance of IT service provision is becoming so important.
The distinction between governance and management is clearly defined in COBIT 5. Governance ensures that business needs are clearly defined and agreed and that they are satisfied in an appropriate way. Governance sets priorities and the way in which decisions are made; it monitors performance and compliance against agreed objectives. Governance is distinct from management in that management plans, builds, runs and monitors activities in alignment with the direction set by the governance body to achieve the objectives.
Governance provides an approach to IT security that can be applied consistently across the many different IT service delivery models. By focusing on the business objectives and monitoring outcomes it decouples the activities involved in providing the service from those concerned with its consumption. Most large organizations have a complex mix of IT services provided in different ways: on premise managed internally, on premise managed by a third party, hosted services and cloud services. Governance provides a way for organizations to ensure that IT security and compliance can be directed, measured and compared across this range of delivery models in a consistent way.
Since this specification and measurement process can involve large amounts of data from a wide variety of sources it helps to use a common governance framework (such as COBIT 5) and technology platform such as the MetricStream GRC Platform. This platform provides centralized storage of and access to risk and compliance data, and a set of applications that allow this data to be consumed from a wide variety of sources and the results shared through a consistent user interface available on different devices.
The need for this common platform and integrated approach was described at the event by Isabel Smith, Director of Corporate Internal Audit at Johnson & Johnson. Ms Smith explained that an integrated approach is particularly important because Johnson & Johnson has more than 265 operating companies located in 60 countries around the world, with more than 125,000 employees. These operating companies have a wide degree of autonomy to allow them to meet local needs. However, the global organization must comply with regulations ranging from financial, such as Sarbanes-Oxley, to those relating to health care and therapeutic products. Using the common platform enabled Johnson & Johnson to achieve a number of benefits, including getting people across the organization to use a common language around compliance and risk, streamlining and standardizing policy and controls, and obtaining an integrated view of control test results.
In conclusion, organizations need to take a governance-led approach to IT security across the heterogeneous IT service delivery models in use today. Many of these are outside the direct control of the customer organization, and their use places control of the service and infrastructure in the hands of a third party. A governance-based approach allows trust in the service to be assured through a combination of internal processes, standards and independent assessments. Adopting a common governance framework and technology platform are important enablers for this.
Security is a common concern of organizations adopting cloud services and so it was interesting to hear from end users at the AWS Summit in London on November 17th how some organizations have addressed these concerns.
Financial services is a highly regulated industry with a strong focus on information security. At the event Allan Brearley, Head of Transformation Services at Tesco Bank, described the challenges they faced exploiting cloud services to innovate and reduce cost, while ensuring security and compliance. The approach that Tesco Bank took, which is the one recommended in KuppingerCole Advisory Note: Selecting your Cloud Provider, is to identify and engage with the key stakeholders. According to Mr Brearley it is important to adopt a culture of satisfying all of the stakeholders’ needs all of the time.
In the UK the government has a cloud first strategy. Government agencies using cloud services must follow the Cloud Security Principles, first issued by the UK Communications-Electronics Security Group (CESG) in 2014. These describe the need to take a risk-based approach to ensure suitability for purpose. Rob Hart of the UK DVSA (Driver & Vehicle Standards Agency), which is responsible for road safety in the UK, described the DVSA’s journey to the adoption of AWS cloud services. Mr Hart explained that the information being migrated to the cloud was classified according to UK government guidelines as “OFFICIAL”, which is equivalent to commercially sensitive or Personally Identifiable Information. The key to success, according to Mr Hart, was to involve the Information Security Architects from the very beginning. This was helped by these architects being in the same office as the DVSA cloud migration team.
AWS has always been very open that the responsibility for security is shared between AWS and the customer. AWS publish their “Shared Responsibility Model” which distinguishes between the aspects of security that AWS are responsible for, and those for which the customer is responsible.
Over the past months AWS has made several important announcements around the security and compliance aspects of their services. There are too many to cover here, so I have chosen three around compliance and three around security. First, the announcements around compliance include:
- ISO/IEC 27018:2014 – AWS has published a certificate of compliance with this ISO standard which provides a code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors.
- UK CESG Cloud Security Principles. In April 2015 AWS published a whitepaper to assist organisations using AWS for United Kingdom (UK) OFFICIAL classified workloads in alignment with CESG Cloud Security Principles.
- Security by Design – In October 2015 AWS published a whitepaper describing a four-phase approach for security and compliance at scale across multiple industries. This points to the resources available to AWS customers to implement security into the AWS environment, and describes how to validate that controls are operating.
Several new security services were also announced at AWS re:Invent in October. The functionality provided by these services is not unique; however, it is tightly integrated with AWS services and infrastructure. These services therefore provide extra benefits to a customer that is prepared to accept the risk of added lock-in. Three of these are:
- Amazon Inspector – this service, which is in preview, scans applications running on EC2 for a wide range of known vulnerabilities. It includes a knowledge base of rules mapped to common security compliance standards (e.g. PCI DSS) as well as up to date known vulnerabilities.
- AWS WAF Web Application Firewall – this is a Web Application Firewall that can detect suspicious network traffic. It helps to protect web applications from attack by blocking common web exploits like SQL injection and cross-site scripting.
- S2N Open Source implementation of TLS – This is a replacement created by AWS for the commonly used OpenSSL (which contained the “Heartbleed” vulnerability). S2N replaces the 500,000 lines of code in OpenSSL with approximately 6,000 lines of audited code. This code has been contributed to Open Source and is available from the S2N GitHub Repository.
AWS has taken serious steps to help customers using its cloud services to do so in a secure manner and to assure that they remain compliant with laws and industry regulations. The customer experiences presented at the event confirm that AWS’s claims around security and compliance are supported in real life. KuppingerCole recommends that customers using AWS services should make full use of the security and compliance functions and services provided by AWS.
According to GCHQ, the number of cyber-attacks threatening UK national security has doubled in the past 12 months. How can organizations protect themselves against this growing threat, especially when statistics show that most data breaches are only discovered some time after the attack took place? One important approach is to create a Cyber Defence Centre to implement and co-ordinate the activities needed to protect against, detect and respond to cyber-attacks.
The Cyber Defence Centre has evolved from the SOC (Security Operation Centre). It supports the processes for enterprise security monitoring, defence, detection and response to cyber-based threats. It exploits Real Time Security Intelligence (RTSI) to detect these threats in real time or near real time, enabling action to be taken before damage is done. It uses techniques taken from big data and business intelligence to reduce the massive volume of security event data collected by SIEM to a small number of actionable alarms where there is high confidence that a real threat exists.
A Cyber Defence Centre is not cheap or easy to implement, so most organizations need help with this from an organization with real experience in this area. At a recent briefing IBM described how they have evolved a set of best practice rules based on their analysis of over 300 SOCs. These best practices include the following.
The first and most important of these rules is to understand the business perspective of what is at risk. It has often been the case that the SOC would focus on arcane technical issues rather than the business risk. The key objective of the Cyber Defence Centre is to protect the organization’s business critical assets. It is vital that what is business-critical is defined by the organization’s business leaders rather than the IT security group.
Many SOCs have evolved from NOCs (Network Operation Centres) – however the NOC is not a good model for cyber-defence. The NOC is organized to detect, manage and remediate what are mostly technical failures or natural disasters rather than targeted attacks. Its objective is to improve service uptime and to restore service promptly after a failure. On the other hand, the Cyber Defence Centre has to deal with the evolving tactics, tools and techniques of intelligent attackers. Its objective is to detect these attacks while at the same time protecting the assets and capturing evidence. The Cyber Defence Centre should assume that the organizational network has already been breached. It should include processes to proactively seek attacks in progress rather than passively wait for an alarm to be raised.
The Cyber Defence Centre must adopt a systematized and industrialized operating model. An approach that depends upon individual skills is neither predictable nor scalable. The rules and processes should be designed using the same practices as for software, with proper versioning and change control. The response to a class of problem needs to be worked out together with the rules on how to detect it; when the problem occurs is not a good time to figure out what to do. Measurement is critical: you can only manage what you can measure, and measurement allows you to demonstrate the changing levels of threats and the effectiveness of the cyber defence.
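To illustrate the point about versioning and change control, here is a hypothetical sketch (the rule format and field names are invented, not taken from any product) of a detection rule captured as code, so that it carries a version number, can be reviewed like any other software change, and bundles the response agreed in advance:

```python
# A hypothetical detection rule captured as code so it can be versioned,
# reviewed and change-controlled like any other software artifact.
RULE = {
    "id": "CDC-0042",
    "version": "1.3.0",  # bumped through the normal change process
    "description": "Many failed logons followed by a success on one account",
    "condition": lambda window: (
        sum(e["event"] == "logon_failure" for e in window) >= 10
        and any(e["event"] == "logon_success" for e in window)
    ),
    # The response is worked out when the rule is written, not during the incident.
    "response": "Lock the account, preserve host evidence, open an incident",
}

# Example evaluation over a window of recent events
window = [{"event": "logon_failure"}] * 10 + [{"event": "logon_success"}]
if RULE["condition"](window):
    print(RULE["id"], RULE["version"], "->", RULE["response"])
```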
Finally, as explained by Martin Kuppinger in his blog: Your future Security Operations Center (SOC): Not only run by yourself, it is not necessary or even practical to operate all of the cyber defence activities yourself. Enabling this sharing of activities needs a clear model of how the Cyber Defence Centre will be operated. This should cover the organization and the processes as well as the technologies employed. This is essential to decide what to retain internally and to define what is outsourced in an effective manner. Once again, an organization will benefit from help to define and build this operational model.
At the current state of the art for Cyber Defence, Managed Services are an essential component. This is because of the rapid evolution of threats, which makes it almost impossible for a single organization to keep up to date, and the complexity of the analysis required to distinguish these threats. This up-to-date knowledge needs to be delivered as part of the Cyber Defence Centre solution.
KuppingerCole Advisory Note: Real Time Security Intelligence provides an in-depth look at this subject.
Organizations depend upon their IT systems and the information that they provide to operate and grow. However, the information that these systems contain and the infrastructure upon which they depend are under attack. Statistics show that most data breaches are detected by agents outside of the organization rather than by internal security tools. Real Time Security Intelligence (RTSI) seeks to remedy this.
Unfortunately, many organizations fail to take simple measures to protect against known weaknesses in infrastructure and applications. However, even those organizations that have taken these measures are subject to attack. The preferred technique of attackers is increasingly one of stealth; the attacker wants to gain access to the target organization’s systems and data without being noticed. The more time the attacker has for undetected access, the greater the opportunity to steal data or cause damage.
Traditional perimeter security devices like firewalls, IDS (Intrusion Detection Systems) and IPS (Intrusion Prevention Systems) are widely deployed. These tools are effective at removing certain kinds of weaknesses. They also generate alerts when suspicious events occur; however, the volume of events is such that it is almost impossible to investigate each one as it occurs. While these devices remain an essential part of the defence, for the agile business using cloud services, with mobile users and direct connections to customers and partners, there is no perimeter and they are not sufficient.
SIEM (Security Information and Event Management) was promoted as a solution to these problems. However, in reality SIEM is a set of tools that can be configured and used to analyse event data after the fact and to produce reports for auditing and compliance purposes. While it is a core security technology, it has not been successful at providing actionable security intelligence in real time.
This has led to the emergence of a new technology: Real Time Security Intelligence (RTSI). This is intended to detect threats in real time or near real time, enabling action to be taken before damage is done. It uses techniques taken from big data and business intelligence to reduce the massive volume of security event data collected by SIEM to a small number of actionable alarms where there is high confidence that a real threat exists.
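To make the idea concrete, here is a toy sketch (assuming events are already parsed and a per-type baseline of mean and standard deviation has been learned from history; real RTSI products use far richer analytics) of how statistical scoring can reduce raw event counts to a few high-confidence alarms:

```python
from collections import Counter

def score_events(events, baseline):
    """Flag event types whose observed count deviates strongly from the
    learned (mean, stddev) baseline for that type."""
    counts = Counter(e["type"] for e in events)
    alarms = []
    for etype, n in counts.items():
        mean, std = baseline.get(etype, (0.0, 1.0))
        z = (n - mean) / max(std, 1e-6)
        if z > 3.0:  # more than 3 sigma above normal: raise an alarm
            alarms.append({"type": etype, "count": n, "z": round(z, 1)})
    return alarms

# Example: a burst of failed logons stands out against a quiet baseline
events = [{"type": "logon_failure"}] * 50 + [{"type": "dns_query"}] * 100
baseline = {"logon_failure": (5.0, 3.0), "dns_query": (120.0, 30.0)}
print(score_events(events, baseline))  # only the logon burst is flagged
```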
At the current state of the art for RTSI, Managed Services are an essential component. This is because of the rapid evolution of threats, which makes it almost impossible for a single organization to keep up to date, and the complexity of the analysis required to distinguish these threats. This up-to-date knowledge needs to be delivered as part of the RTSI solution.
The volume of threats to IT systems, their potential impact and the difficulty of detecting them are the reasons why real time security intelligence has become important. However, RTSI technology is at an early stage and the problem of calibrating normal activity still requires considerable skill. It is important to look for a solution that can easily build on the knowledge and experience of the IT security community, vendors and service providers. End user organizations should always opt for solutions that include managed services and pre-configured analytics, rather than just tools.
KuppingerCole Advisory Note: Real Time Security Intelligence - 71033 provides an in depth look at this subject.
On Friday morning (October 23rd) I was preparing for my lecture on software vulnerabilities to the final year degree students at the University of Salford when I heard the news of the TalkTalk data breach.
Now this is not about that breach in particular – it is important to wait until the detailed investigation is complete before drawing conclusions. However, that breach provided me with an example of the high level of responsibility now borne by the CISO. Using the story as an example, I asked the students how they would like to explain to the press and 4 million customers that their organization had suffered a data breach – especially if it was, in the words of the old proverb, “all for the want of a nail”.
So what does this proverb mean in this context? Well the evidence from the many data breach surveys is that the majority of breaches occur because of vulnerabilities that could easily have been avoided. In my lecture I cover many of these: in particular the OWASP Top Ten project and the CWE/SANS 25 most dangerous software errors. Both of these identify SQL Injection as a highly dangerous but easily avoidable vulnerability.
So what is SQL Injection? When a web-based application allows users of the web interface to perform a query using a text field, it is vital that the application checks the input to that field.
The need for this check can be explained using an example – imagine that the field allows the user to input the brand name of the products they wish to see. If the application simply includes the text that the user inputs directly into the SQL query there is a danger: it allows a hacker to input text which is not a brand name but is actually a fragment of SQL that is always logically true. In this case the SQL query would return every record in the database.
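A minimal sketch using Python’s built-in sqlite3 module (the table and data are invented for illustration) shows how pasting user input straight into the query text hands control to the attacker:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (brand TEXT, name TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("Acme", "Widget"), ("Other", "Secret")])

def search_vulnerable(brand: str):
    # DANGEROUS: user input is pasted straight into the SQL text
    query = "SELECT name FROM products WHERE brand = '%s'" % brand
    return conn.execute(query).fetchall()

print(search_vulnerable("Acme"))          # [('Widget',)]
print(search_vulnerable("x' OR '1'='1"))  # returns every record in the table
```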
Encrypting the database does not help with SQL Injection because the data must have already been decrypted, in the expectation that the system is being used in a legitimate way, in order to perform the query and to provide the results to the application.
The programming effort needed to avoid this kind of vulnerability is very low. All that is usually needed is for the application to scan the content for certain character patterns. Furthermore there is a wide range of tools available that will scan code and exercise the application to detect this as well as other vulnerabilities. So this check is the equivalent of the nail in the old proverb.
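Beyond scanning input for suspicious character patterns, the standard remedy (supported by virtually every database library, and continuing the sketch above) is a parameterized query, which passes user input as data rather than as SQL text:

```python
def search_safe(brand: str):
    # The ? placeholder sends the input as data, never as SQL text
    return conn.execute(
        "SELECT name FROM products WHERE brand = ?", (brand,)
    ).fetchall()

print(search_safe("x' OR '1'='1"))  # [] - the payload is treated as a brand name
```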
The consequences of a data breach extend well beyond the organization holding the data. If an organization loses its own money that organization and its shareholders bear the consequences. However if the personal details of its customers fall into the wrong hands they will be the ones to suffer. When a family’s payment card is refused in the supermarket on a Friday evening or their life savings are stolen from their bank account this is a personal tragedy not just a business risk.
So the CISO is responsible not only for the security of the organization but also for the stewardship of the data that the organization holds about its customers, partners and suppliers. Taking the simple steps needed to avoid well-known vulnerabilities is the equivalent of the nail in the proverb. Failing to take these can lead to much wider consequences. It will be difficult for a CISO to explain to everyone touched by a data breach why the organization’s stewardship of their data was lacking for the want of a nail.
Many organizations are concerned about the use of cloud services; the challenge is to securely enable the use of these services without negating the benefits that they bring. To meet this challenge it is essential to move from IT Management to IT Governance.
Cloud services are outside the direct control of the customer’s organization, and their use places control of the service and infrastructure in the hands of the Cloud Service Provider (CSP). The customer cannot directly ensure the service and its security; the customer can only assure the service through a governance process. A governance-based approach allows trust in the CSP to be assured through a combination of internal processes, standards and independent assessments.
Governance is distinct from management in that management plans, builds, runs and monitors the IT service in alignment with the direction set by the governance body to achieve the objectives. This distinction is clearly defined in COBIT 5. Governance ensures that business needs are clearly defined and agreed and that they are satisfied in an appropriate way. Governance sets priorities and the way in which decisions are made; it monitors performance and compliance against the agreed objectives.
The starting point for a governance based approach is to define the organizational objectives for using cloud services; everything else follows from these. Then set the constraints on the use of cloud services in line with the organization’s objectives and risk appetite. There are risks involved with all IT service delivery models; assessing these risks in a common way is fundamental to understanding the additional risk (if any) involved in the use of a cloud service. Finally there are many concrete steps that an organization can take to manage the risks associated with their use of cloud services. These include:
- Common governance approach – the cloud is only one way in which IT services are delivered in most organizations. Adopt a common approach to governance and risk management that covers all forms of IT service delivery.
- Discover Cloud Use – find out what cloud services are actually being used by the organization. There is now a growing market in tools to help with this. Assess the services that you have discovered are being used against the organization’s objectives and risk appetite.
- Govern Cloud Access – govern access to cloud services with the same rigour as if they were on premise. There should be no need to use a separate IAM system for this – identity federation standards like SAML 2.0 are well defined and the service should support them. The service should also support the authentication methods, provide the granular access controls and monitor individuals’ use of the services as your organization requires.
- Identify who is responsible for each risk relating to the cloud service – the CSP or your organization. Make sure that you take care of your responsibilities and assure that the CSP meets their obligations.
- Require independent certification – an important way to assure that a cloud service provides what it claims is through independent certification. Demand that the CSP provide independent certifications and attestations for the aspects of the service that matter to your organization.
- Use standards – standards provide the best way of avoiding technical lock-in to a proprietary service. Understand what standards are relevant and require the service to support these standards.
- Encrypt your data – there are many ways in which data can be leaked or lost from a cloud service. The safest way to protect your data against this risk is to encrypt it. Make sure that you retain control over the encryption keys (a minimal sketch of this approach follows this list).
- Read the Contract – make sure you read and understand the contract. Most cloud service contracts are offered by the CSP on a take it or leave it basis. Make sure that what is offered is acceptable to your organization.
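As a minimal sketch of the “encrypt your data” point above (using the third-party Python cryptography package; in practice key storage and rotation need far more care), data can be encrypted client-side before upload so that only ciphertext ever reaches the CSP and the key stays under the customer’s control:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # keep this key on premise, never with the CSP
f = Fernet(key)

ciphertext = f.encrypt(b"customer record")          # upload only the ciphertext
assert f.decrypt(ciphertext) == b"customer record"  # decrypt locally after download
```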
KuppingerCole has extensive experience of guiding organizations through their adoption of cloud services, as well as many published research notes. Remember that the cloud is only one way of obtaining an IT service – have a common governance process for all. If a cloud service meets your organization’s needs then the simple motto is “trust your Cloud Provider but verify everything they claim”.
This article originally appeared in the KuppingerCole Analysts’ View newsletter.
Together with many others I received an offer from Microsoft to upgrade my Windows 7 desktop and Windows 8.1 laptop to Windows 10. Here is my initial reaction to successfully performing this upgrade with a specific focus on the areas of privacy and security.
As always when considering security, the first and most important step is to understand what your requirements are. In my case, I have several computers and I mainly use them with Microsoft Office, for internet research and to store personal photos. My main requirements are consistency and synchronization across these systems together with security and reliability. The critical dimensions that I considered are privacy, confidentiality, integrity and availability. Let’s start with availability:
- Make sure you back up your files before you start the upgrade! My files were preserved without problems but it is better to be safe than sorry. It is also a good idea to understand how you could roll back if there is a catastrophic failure during the upgrade. One really big improvement over Windows 8 is the ability to restore files from a Windows 7 backup.
- Check that your computer is compatible with the upgrade. The Microsoft upgrade tool checks your computer for compatibility, and some manufacturers provide information on which systems they have tested. The Dell support site informed me that my new laptop was tested but my old desktop wasn’t. However, both upgraded without problems, though I did need to re-install some software for my HP printer.
- Consider whether you want new features as soon as they are available (with the risk that they may cause problems). The default setting for updates is for these to be automatically installed. You can change this through the advanced setting menu by checking the box to defer upgrades. You will still receive security fixes but new features will be delayed.
- Windows 10 has a number of recovery options – you can roll back to your previous OS for up to 30 days after the upgrade as well as performing a complete reset.
- Windows 10 automatically includes Windows Defender for protection – make sure this is activated. If you prefer another anti-malware product you will need to install this yourself.
- If you already use OneDrive then you will notice some changes. Previous versions of the OneDrive App supported a placeholder function that allowed File Explorer to display files that were held online but not sync’d onto your PC. This is no longer available; any directories that are not sync’d are not visible through File Explorer. I experienced sync problems with files that were previously held online only. I was able to resolve this using the OneDrive Settings menu – first uncheck the folder(s) and save the settings. The folders and files are then erased on your device (scary!). Then repeat the process but this time check the folders for sync in the menu. When you save these settings the files in the folders are re-synced from the cloud.
- The user accounts are copied from your previous OS – if these were all local accounts then they remain so. If you have a Microsoft account then you can link this with one of these local accounts. Doing this allows you to use a PIN instead of a password to log in.
- If you are using Office 365 you will already have a Microsoft Account; you can also set up a free account, which provides some free OneDrive space. However, if you use the Microsoft account it is a good idea to understand and manage your privacy settings.
- The files in OneDrive are all held in the Microsoft cloud, and you need to accept the risk that this poses, bearing in mind that most breaches result from weak user credentials.
- If you are using BitLocker to encrypt your files then the encryption key will also be held on your OneDrive unless you opt out.
- You should review the privacy settings from the Express setup and decide what to change.
A future blog will provide more detailed advice on what these mean and how best to set things up. My short advice is to go through these settings carefully and choose which Apps you are happy to allow to access the various functions. In particular I would disable the App Connector, since this gives access to unknown apps. I would also not allow Apps to access my name, picture and other info – but then I’m just paranoid.
- You also need to consider the privacy settings for the new Edge browser. These are to be found under “Advanced Settings”. Consider whether you really need Flash enabled, since this has been a frequent target for attacks. Also consider enabling the “Do Not Track” requests option.
- If you decide to use Cortana – this may involve setting the region and language and downloading a language pack – make sure you check out the privacy agreement.
My personal experience with this upgrade has been very positive. The upgrades went smoothly and the performance, especially the boot-up time for my old desktop, is much faster than with Windows 7. The settings are now much more understandable and accessible, but you need to take the time to review the defaults to achieve your objectives for privacy and confidentiality. KuppingerCole plans a series of future blogs that will give more detailed guidance on how to do this.
Industry 4.0 is the German government’s strategy to promote the computerization of the manufacturing industry. This strategy foresees that industrial production in the future will be based on highly flexible mass production processes that allow rich customization of products. This future will also include the extensive integration of customers and business partners to provide business and value-added processes. It will link production with high-quality services to create so-called “hybrid products”.
At the same time, in the US, the Smart Manufacturing Leadership Coalition is working on their vision for “Smart Manufacturing”. In 2013 the UK’s Institute for Advanced Manufacturing, which is part of the University of Nottingham, received a grant of £4.6M for a study on Technologies for Future Smart Factories.
This vision depends upon the manufacturing machinery and tools containing embedded computer systems that will communicate with each other inside the enterprise, and with partners and suppliers across the internet. This computerization and communication will enable optimization within the organizations, as well as improving the complete value adding chain in near real time through the use of intelligent monitoring and autonomous decision making processes. This is expected to lead to the development of completely new business models as well as exploiting the considerable potential for optimization in the fields of production and logistics.
However there are risks, and organizations adopting this technology need to be aware of and manage these risks. Compromising the manufacturing processes could have far reaching consequences. These consequences include the creation of flawed or dangerous end products as well as disruption of the supply chain. Even when manufacturing processes based on computerized machinery are physically isolated they can still be compromised through maladministration, inappropriate changes and infected media. Connecting these machines to the internet will only increase the potential threats and the risks involved.
Here are some key points to securely exploiting this vision:
- Take a Holistic Approach: the need for security is no longer confined to the business systems of record; it needs to extend to cover everywhere that data is created, transmitted or exploited. Take a holistic approach and avoid creating another silo.
- Take a Risk-based Approach: the security technology and controls should be determined by balancing risk against reward, based on the business requirements, the assets at risk, the needs for compliance and the organizational risk appetite. This approach should seek to remove identifiable vulnerabilities and put in place appropriate controls to manage the risks.
- Trusted Devices: This is the most immediate concern since many devices that are being deployed today are likely to be in use, and hence at risk, for long periods into the future. These devices must be designed and manufactured to be trustworthy. They need an appropriate level of physical protection as well as logical protection against illicit access and administration. It is highly likely that these devices will become a target for cyber criminals who will seek to exploit any weaknesses through malware. Make sure that they contain protection that can be updated to accommodate evolving threats.
- Trusted Data: the organization needs to be able to trust the data from these devices. It must be possible to confirm the device from which the data originated, and that this data has not been tampered with or intercepted. Existing low-power secure technology and standards have been developed for mobile communications and banking, and these should be appropriately adopted or adapted to secure the devices (a minimal sketch of signed device data follows this list).
- Identity and Access Management: to be able to trust the devices and the data they provide means being able to trust their identities and control access. There are a number of technical challenges in this area; some solutions have been developed for specific kinds of device, but there is no general panacea. Hence it is likely that more device-specific solutions will emerge, adding to the complexity of the management challenge.
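As a simple sketch of the trusted-data point above (a per-device symmetric key with an HMAC is just one option; the key handling and field names here are invented for illustration), device readings can be signed at source so that the receiver can verify both their origin and their integrity:

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device secret provisioned at manufacture"  # hypothetical

def sign_reading(device_id: str, payload: dict) -> dict:
    """Attach an HMAC so origin and integrity can be checked on receipt."""
    msg = json.dumps({"device": device_id, "data": payload}, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return {"device": device_id, "data": payload, "mac": tag}

def verify_reading(record: dict) -> bool:
    msg = json.dumps({"device": record["device"], "data": record["data"]},
                     sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["mac"])

reading = sign_reading("press-017", {"temp_c": 64.2})
print(verify_reading(reading))  # True; any tampering makes this False
```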
More information on this subject can be found in Advisory Note: Security and the Internet of Everything and Everyone - 71152 - KuppingerCole