As with most other contracts, be it for a large purchase or an insurance policy, you should read the (standard) contracts with your cloud provider very carefully. Chances are good that you will detect some points that border on insolence. There are certainly good reasons for using the cloud in businesses of any size, among them cost reduction and the ability to concentrate on the core business. By enabling the rapid adoption of new services, the cloud also makes quick innovation possible. But since your whole business will be influenced by the services delivered, they might sooner or later become disruptive to your daily workflow if not properly implemented.
"Uneven" relationship Clearly, the relationship between cloud service provider (CSP) and tenant is "uneven" from the beginning. The latter first of all has to pay for all extras, frequently called Managed Services - even for those that should be naturally included in any cloud contract. This way the customer has to pay more for letting the provider take over more of his normal responsibility. Delivering those kinds of "value-added service" only for much and more money can't be the unique selling point. I wonder what the provider's legal department says to those offers. The providers should be liable for breakdowns in service and data breach or loss. Most if not all deny that responsibility.
Reading the contract carefully can help you avoid the most obvious pitfalls. Make it a game: find the aspects that could become a challenge for your daily business. There are some, trust me. Begin with the parts of the contract dealing with end of service, changes, or availability. Don't be surprised if there is a clause that gives your CSP the freedom to terminate the business relationship with you at any time. The CSP can also change services flexibly - mind you, in the cloud, flexibility should be on your side, not the provider's - without having to announce it long in advance. Some CSPs think they don't need to announce it at all, even if the change means that an important application won't run any longer.
Feature changes can pose problems

Feature changes can evolve into a massive problem when employees can't find some data anymore or are confronted with a completely altered user interface. This will lead to an increase in help desk calls and costs. Or imagine your customer relied on a certain feature that suddenly doesn't exist anymore. Just because the CSP thinks a feature is useless doesn't mean that you do too.
Another issue concerns availability: surely it is not always the CSP's fault if a service is not accessible. But where it is, availability guarantees amount to nothing if they are not connected to penalties. CSPs regularly exclude liability in their contracts for damages on the tenant's side as a consequence of a longer outage - which is understandable. However, this makes such guarantees relatively worthless. It should be added in this context that if you really need high availability, you'll probably get it a lot cheaper in the cloud than with your internal IT. The cloud idea is not bad in itself.
Customization and the cloud

API (Application Programming Interface) changes might affect the integration between different cloud services or between a cloud service and on-premise applications. They might as well affect customized solutions. Customized solutions? Isn't cloud computing all about standards? Aren't the greatest benefits to be found in areas where customization doesn't mean a competitive advantage? Yes, maybe. But most business solutions - CRM, ERP, HR etc. - don't exist in isolation from other applications. They need to be integrated to work optimally. Last but not least, APIs have to remain backward compatible. If they change or features are retired, the CSP's clients have to be informed long in advance so they can prepare for it and tell their own customers in time.
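The compatibility point can be made concrete with a small sketch of defensive API-version handling on the tenant side. The version numbers and the negotiation scheme are invented for illustration and do not reflect any specific CSP's API:

```python
# Sketch: a tenant-side integration that pins a *range* of API versions
# instead of a single one keeps working when the CSP adds versions, and
# fails loudly - with time to react - when its whole range is dropped.
def negotiate_version(client_supported, server_supported):
    """Pick the highest API version both sides support."""
    common = set(client_supported) & set(server_supported)
    if not common:
        # This is exactly the breaking change a CSP must announce long
        # in advance: the integration stops working entirely.
        raise RuntimeError("no common API version")
    return max(common)
```

Calling `negotiate_version([1, 2, 3], [2, 3, 4])` would settle on version 3; an empty intersection is the case the contract should force the CSP to announce well ahead of time.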
How to find a good CSP

So, how do you recognize a good CSP for your business? First of all, the provider should see the cloud benefits from your perspective, not only from its own. For this, the provider has to understand your main issues and challenges. Customers, on the other side, should always be prepared that things might not run as expected. Therefore, there should always be an exit strategy fixed in the contract. This also helps to avoid the problem of vendor lock-in, which often is the result of long-term initial contracts. If a contract ends, the user should get all data back immediately, without any further costs.
Naturally, not everybody running a business understands the concept of the cloud and how it works. It suffices to know how to find a good CSP and which elements a contract should contain to be beneficial to the customer.
This article originally appeared in the KuppingerCole Analysts' View newsletter.
Many organizations are concerned about the use of cloud services; the challenge is to securely enable the use of these services without negating the benefits that they bring. To meet this challenge it is essential to move from IT management to IT governance.
Cloud services are outside the direct control of the customer’s organization, and their use places control of the service and infrastructure in the hands of the Cloud Service Provider (CSP). The customer cannot ensure the service and its security – the customer can only assure the service through a governance process. A governance-based approach allows trust in the CSP to be assured through a combination of internal processes, standards and independent assessments.
Governance is distinct from management in that management plans, builds, runs and monitors the IT service in alignment with the direction set by the governance body to achieve the objectives. This distinction is clearly defined in COBIT 5. Governance ensures that business needs are clearly defined and agreed and that they are satisfied in an appropriate way. Governance sets priorities and the way in which decisions are made; it monitors performance and compliance against the agreed objectives.
The starting point for a governance based approach is to define the organizational objectives for using cloud services; everything else follows from these. Then set the constraints on the use of cloud services in line with the organization’s objectives and risk appetite. There are risks involved with all IT service delivery models; assessing these risks in a common way is fundamental to understanding the additional risk (if any) involved in the use of a cloud service. Finally there are many concrete steps that an organization can take to manage the risks associated with their use of cloud services. These include:
- Common governance approach – the cloud is only one way in which IT services are delivered in most organizations. Adopt a common approach to governance and risk management that covers all forms of IT service delivery.
- Discover Cloud Use – find out what cloud services are actually being used by the organization. There is now a growing market in tools to help with this. Assess the discovered services against the organization’s objectives and risk appetite.
- Govern Cloud Access – govern access to cloud services with the same rigour as if they were on premise. There should be no need to use a separate IAM system for this – identity federation standards like SAML 2.0 are well defined, and the service should support them. The service should also support the authentication methods, provide the granular access controls, and monitor individuals’ use of the services as your organization requires.
- Identify who is responsible for each risk relating to the cloud service – the CSP or your organization. Make sure that you take care of your responsibilities and assure that the CSP meets their obligations.
- Require Independent certification – an important way to assure that a cloud service provides what it claims is through independent certification. Demand the CSP provides independent certification and attestations for the aspects of the service that matter to your organization.
- Use standards – standards provide the best way of avoiding technical lock-in to a proprietary service. Understand what standards are relevant and require the service to support these standards.
- Encrypt your data – there are many ways in which data can be leaked or lost from a cloud service. The safest way to protect your data against this risk is to encrypt it. Make sure that you retain control over the encryption keys.
- Read the Contract – make sure you read and understand the contract. Most cloud service contracts are offered by the CSP on a take it or leave it basis. Make sure that what is offered is acceptable to your organization.
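Several items on this checklist lend themselves to a simple triage sketch for discovered services. The attributes and weights below are assumptions for illustration, not an established methodology:

```python
# Illustrative-only scoring of discovered cloud services against the
# governance checklist above; weights are invented for this sketch.
CHECKS = {
    "independent_certification": 3,  # e.g. ISO 27001, SOC 2 attestations
    "supports_saml_federation": 2,   # govern access with the existing IAM
    "supports_open_standards": 2,    # avoid technical lock-in
    "customer_managed_keys": 3,      # encrypt data, retain the keys
    "acceptable_contract": 2,        # contract reviewed and accepted

}

def governance_score(service_attrs):
    """Return (score, max_score) for one discovered service."""
    score = sum(w for check, w in CHECKS.items() if service_attrs.get(check))
    return score, sum(CHECKS.values())

# A formally procured service versus a shadow-IT discovery:
approved = {check: True for check in CHECKS}
shadow_it = {"supports_open_standards": True}
```

A service missing customer-managed keys or independent certification surfaces immediately in such a triage and can be flagged for a closer governance review.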
KuppingerCole has extensive experience of guiding organizations through their adoption of cloud services as well as many published research notes. Remember that the cloud is only one way of obtaining an IT service – have a common governance process for all. If a cloud service meets your organization’s needs, then the simple motto is “trust your Cloud Provider, but verify everything they claim”.
Running an application or a service implies covering a large set of diverse responsibilities. Many requirements have to be fulfilled: the actual infrastructure has to be provided, which means that bare metal (computers, network and storage devices) has to be installed and made available as the foundation layer. On the next logical level, operating systems have to be installed and appropriately maintained, including patches and updates. Appropriate mechanisms for virtualization have to be implemented.
Every layer of the provided infrastructure has to be implemented in an adequately scalable, stable, available and accessible way, at an at least sufficient level of performance. Service level agreements have to be defined and met, which involves responsibilities for availability, accessibility and, again, scalability. This also requires the provision of appropriate administrative or user services, e.g. implementing help desks and/or a self-service infrastructure.
Security is of utmost importance for every application, service or infrastructure. This includes, for example, platform security, the reliable and robust management of users and privileged accounts and their individual roles, fine-grained access control, and network security including intrusion detection. In a shared, virtualized environment this also imposes strong requirements on the separation of individual, concurrently operated platforms and the isolation of software, processes and data across the network, storage and computing environments. The provisioning of appropriate management interfaces, the implementation of change processes, and the maintenance of stable, reliable and auditable systems operation procedures are key responsibilities within an application system or infrastructure environment.
The aspect of overall application security defines another set of responsibilities, focusing on logical and functional aspects and on the business processes implemented. Ensuring all required aspects of the IT security of an application or infrastructure system - including the confidentiality, integrity and availability of the computer system and a proper implementation of the underlying business processes - is an important challenge, no matter which deployment scenario is chosen.
Whenever an application or a service is running on premises, determining who is responsible for which aspects of the infrastructure typically is a straightforward task. All vital building blocks, ranging from the infrastructure to the operating system and from the application modules to the stored data and the underlying business processes, are the responsibility of the organization itself, i.e. the internal customer. Many organizations assign individual responsibilities and tasks along the lines of the ITIL service management processes, with typical roles like the "application owner" or the "system owner" reflecting different functional aspects and responsibilities within the organization.
Moving services into the cloud or creating new services within the cloud changes the picture substantially and introduces the Cloud Service Provider (CSP) as a new stakeholder in the network of functional roles already established. Cloud services are characterized by the level of services provided. Individual services in the cloud are organized as layers building upon each other. Although the terms are not used consistently across different CSPs, cloud service offerings are often characterized as, e.g., "Infrastructure as a Service" (IaaS) or "Platform as a Service" (PaaS). Depending on which parts of the services are provided by the CSP on behalf of the customer and which parts are implemented by the tenant on top of the provided service layers, the responsibilities are assigned to either the CSP or the tenant.
The following image gives a rough overview of which responsibilities are assigned to which partner within a cloud service provisioning contract in each cloud service model. While an "Infrastructure as a Service" (IaaS) scenario puts only the responsibility for the infrastructure on the Cloud Service Provider (CSP), the only responsibility left to the tenant in a "Software as a Service" (SaaS) scenario is the responsibility for the actual business data. This is obvious, as data ownership within an organisation is an inalienable responsibility and thus cannot be delegated to anybody outside the actual organisation.
Shared responsibilities between the Cloud Service Provider (CSP) and the tenant are a key characteristic of every deployment scenario of cloud services. The above image gives a first idea of this new type of shared responsibility between service providers and their customers. For every real-life cloud service scenario, all identified responsibilities have to be clearly and individually assigned to the adequate stakeholder. This might differ drastically between scenarios where only infrastructure is provided, for example the provisioning of plain storage or computing services, and scenarios where complete "Software as a Service" (SaaS, e.g. Office 365) or even "Business Process as a Service" (BPaaS) is provided in the cloud, for example an instance of Salesforce CRM. A robust and complete identification of which responsibilities are assigned to which contract partner within a cloud service scenario is the prerequisite for an appropriate service contract between the Cloud Service Provider (CSP) and the tenant.
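The layer split described above can be sketched as a simple lookup. The layer names are illustrative and, as noted, terminology varies between CSPs:

```python
# Sketch of the IaaS/PaaS/SaaS responsibility split: whatever the CSP
# does not manage remains the tenant's responsibility. In every model
# the data layer stays with the tenant - data ownership cannot be
# delegated outside the organisation.
LAYERS = ["infrastructure", "operating_system", "platform",
          "application", "data"]

CSP_MANAGED = {
    "IaaS": {"infrastructure"},
    "PaaS": {"infrastructure", "operating_system", "platform"},
    "SaaS": {"infrastructure", "operating_system", "platform",
             "application"},
}

def tenant_responsibilities(model):
    """Layers the tenant remains responsible for in a given service model."""
    return [layer for layer in LAYERS if layer not in CSP_MANAGED[model]]
```

Enumerating the tenant's side per model in this way is a useful starting point for the contract review: every listed layer needs an owner, an SLA and a security concept on the tenant side.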
When I first read about the newly discovered kind of OS X and iOS malware called XcodeGhost, quite frankly, the first thing that came to my mind was: “That’s the Albanian virus!” In case you don’t remember the original reference, here’s what it looks like:
I can vividly imagine a conversation among hackers, which would go like this:
- Why do we have to spend so much effort on planting our malware on user devices? Wouldn’t it be great if someone would do it for us?
- Ha-ha, do you mean the Albanian virus? Wait a second, I’ve got an idea!
Unfortunately, it turns out that the situation isn’t quite that funny and in fact poses a few far-reaching questions regarding the current state of iOS security.
What is XcodeGhost anyway? In short, it’s Xcode, Apple’s official developer platform for creating OS X and iOS software, repackaged by as yet unknown hackers to include malicious code. Any developer who downloads this installer and uses it to compile an iOS app automatically includes this code in their app, which is then submitted to the App Store and distributed to all users as a usual update. According to Palo Alto Networks, which has published a series of reports on XcodeGhost, this malware is able to collect information from mobile devices and send it to a command-and-control server. It also tries to phish for users’ credentials or steal their passwords from the clipboard.
Still, the most remarkable thing is that quite a few legitimate and popular iOS apps from well-known developers (mostly based in China) became infected and were successfully published in the App Store. Although it baffles me why a seasoned developer would download Xcode from a file-sharing site instead of getting it for free directly from Apple, the list of victims includes Tencent, creators of the hugely popular app WeChat, which has over 600 million users. In total, around 40 apps in the App Store have been found to contain the malicious code. Update: another report by FireEye identifies over 4000 affected apps.
Unfortunately, there is practically nothing that iOS users can do at the moment to prevent this kind of attack. Surely, they should uninstall any of the apps that are known to contain this malicious code, but how many have not yet been discovered? We can also safely assume that other hackers will follow with their own implementations of this new concept or concentrate on attacking other components of the development chain.
Apple’s position on antivirus apps for iOS has been consistent for years: they are unnecessary and create a wrong impression. In fact, none of the apps remaining in the App Store under the name “Antivirus” is actually capable of detecting malware: there are no interfaces in iOS that would allow them to function. In this regard, users’ safety is entirely in Apple’s hands. Even if Apple upgrades the App Store to include better malware detection in submitted apps and incorporates stronger integrity checks into Xcode, can we be sure that there will be no new outbreaks of this kind of malware? After several major security bugs like Heartbleed or Poodle discovered recently in core infrastructures (and yes, I do consider the App Store a critical infrastructure, too), how many more times does the industry have to fall on its face to finally start thinking “security first”?
With the announcement of the IBM Cloud Security Enforcer, IBM continues its journey towards integrated solutions. What had started a while ago in the IBM Security division with integrating identity and analytical capabilities, both from the former IBM Tivoli division and the CrossIdeas acquisition, as well as from the Q1 Labs acquisition, now reaches a new level with the IBM Cloud Security Enforcer.
IBM combines capabilities such as mobile security management, identity and access management, behavioral analytics, and threat intelligence (X-Force) to build a comprehensive cloud security solution that raises the bar in this market.
Running as a cloud solution, IBM Cloud Security Enforcer can sit between the users and their devices on the one hand and the ever-increasing number of cloud applications in use on the other hand. It integrates with Microsoft Active Directory and other on-premise services for user management. While access by enterprise users can be controlled via common edge components that route traffic to the cloud service, mobile users can access a mobile proxy (World Wide Mobile Cloud Proxy), which includes support for VPN connections.
The IBM Cloud Security Enforcer then provides services such as application management, a launchpad and an application catalog, entitlement management and policy enforcement, and a variety of analytical capabilities that focus on risks and current threats. It then can federate out to the cloud services.
Cloud security services are nothing new. There are cloud security gateways; there is Cloud IAM and Cloud SSO; there is increasing support for mobile security in that context; and there are Threat Intelligence solutions. IBM’s approach differs in integrating a variety of capabilities. When looking at the initial release (IBM plans to provide regular updates and extensions in short intervals) of IBM Cloud Security Enforcer, there are several vendors which are stronger in single areas, but IBM’s integrated approach is among the leading-edge solutions. Thus we recommend evaluating that solution when looking at improving cloud security for employees.
Imagine you have well thought-out processes for IAM (Identity and Access Management) that ensure that identities are managed correctly and all the challenges in particular of mover and leaver processes are handled well. Imagine you also have a well-working recertification approach implemented and rolled out to your organization. Are you done? Unfortunately not.
Even when you succeed in implementing the core IAM and IAG (Identity and Access Governance) processes including recertification – and not everyone does so – you still are far from the end of your journey.
Why? Because at best you will know that entitlements are assigned correctly and that you meet the “need to know” principle. Unless your joiner, mover, and leaver processes are really well implemented, you still might be in a situation where users have excessive entitlements for, say, 11 months and 29 days, based on a yearly recertification. Yes, you might shorten that period, but that will not solve the problem – it might then be 5 months and 29 days at most, but the basic problem remains. That is a good reason to fix the cause (implementing good IAM processes) instead of the symptoms (recertifying).
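The exposure window can be made concrete with a small date calculation (dates and cycle lengths are illustrative):

```python
from datetime import date, timedelta

def worst_case_exposure(event: date, recert_cycle_days: int,
                        last_recert: date) -> int:
    """Days an excessive entitlement can survive if it appears right
    after a recertification and is only caught by the next one."""
    next_recert = last_recert + timedelta(days=recert_cycle_days)
    return (next_recert - event).days

# A mover event the day after a yearly recertification: the stale
# entitlement survives for 364 days - roughly 11 months and 29 days.
last = date(2015, 1, 1)
yearly = worst_case_exposure(last + timedelta(days=1), 365, last)
# Halving the cycle only halves the window; it does not close it.
half_yearly = worst_case_exposure(last + timedelta(days=1), 182, last)
```

This is the arithmetic behind the argument: shortening the recertification cycle shrinks the window, but only well-implemented mover and leaver processes eliminate it.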
Furthermore, you still don’t know whether correctly assigned entitlements are abused. What if your backup operator (who must be entitled to perform backups) does two backups instead of one – one for the business, one to take home or somewhere else? What if your front-office worker accesses all the customer records he has access to within a short period of time, with all the data ending up on a USB stick? What if a privileged account is hijacked by an attacker who runs privileged actions?
Knowing that the state is correct is no longer sufficient. We need to understand whether entitlements are used correctly. There is no technology in traditional, static access management – creating accounts, assigning them to groups or roles and thus entitling them – that also limits or audits the use of these entitlements. Logging and SIEM provide only limited insight.
However, what we really need are more sophisticated approaches. User Activity Monitoring (the perspective of monitoring and logging) and User Behavior Analytics (the perspective of analyzing the collected data) must move to the center of our attention. We need to become able to identify anomalies in user behavior. We need to set up processes to deal with suspicious incidents properly – not blocking the business from what it needs to do, not violating workers’ rights, but mitigating risks.
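As a minimal sketch of the analytics side, consider flagging users whose activity is far above the population baseline. Real user behavior analytics products model many more dimensions (time of day, resource sensitivity, peer groups), so this illustrates only the principle, and the numbers are invented:

```python
import statistics

def flag_anomalies(access_counts, factor=10):
    """Flag users whose activity is far above the population baseline.

    The median is used as the baseline because, unlike the mean, it is
    not dragged upward by the very outlier we are trying to catch.
    """
    baseline = statistics.median(access_counts.values())
    return sorted(u for u, c in access_counts.items()
                  if c > factor * baseline)

# Front-office workers each read a handful of customer records per day -
# except one account, which bulk-reads the entire customer base.
daily_reads = {"alice": 14, "bob": 11, "carol": 9, "dave": 13, "eve": 2100}
```

Here `flag_anomalies(daily_reads)` would single out `eve` – exactly the front-office scenario above, where correctly assigned entitlements are used in a way the business never intended.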
The technology is there, from privileged threat analytics to user behavior analytics and, beyond identities, Real-Time Security Intelligence. Such technology can be implemented in a compliant way, even in countries with a strong emphasis on privacy and mighty workers’ councils.
When we really want to mitigate access risks, we have to go beyond traditional approaches and even beyond Access Intelligence. We must become able to identify anomalies in user behavior, not only of administrators but also of business users (and yes, fraud management solutions are already available for that – so we are not talking about something entirely new). It is time to move to the next level of IAM: from preventing (setting correct entitlements) and detecting (recertification and Access Intelligence) to responding, based on better detection and well thought-out, planned incident handling.
Attribute-based Access Control (ABAC) has been with us for many years; it embodies a wide range of systems that control access to protected resources based on attributes of the requesting party. As the field has developed, four characteristics have emerged as most desirable in an ABAC system:
- it should externalise decision making i.e. not require applications to maintain their own access control logic
- it should be adaptive i.e. decisions are made in real-time
- it should be policy-based i.e. access permissions should be determined by evaluation of policies
- it should be more than just control i.e. it should also “manage” users’ access, not merely enforce it.
Most access control environments today are role-based. Users are granted access to applications based on their position within an organisation. For instance, department managers within a company might get access to the HR system for their department. When a new department manager joins the organisation they can be automatically provisioned to the HR system based on their role. Most organisations use Active Directory groups to manage roles within an organisation. If you’re in the “Fire Warden” group you get access to the fire alarm system. One of the problems with role-based systems is that access control decisions are coarse-grained: you’re either a department manager or you’re not. RBAC systems are also quite static; group memberships will typically be updated once a day or, worse still, require manual intervention to add and remove members. Whenever access control depends upon a person making an entry in a control list, inefficiencies result and errors occur.
Attribute-based systems have several advantages: decisions are externalised to dedicated infrastructure that performs the policy evaluation. Decisions are more fine-grained: if a user is a department manager, an ABAC system can also check the user’s department code and so decide, for instance, whether or not to give them access to the Financial Management system. It can check whether or not they are using their registered smartphone; it can determine the time of day; all in order to make decisions that reduce the risk associated with an access request. Such systems are usually managed via a set of policies that allow business units to determine, for instance, whether or not they want to allow access from a smartphone, and if they do, to elevate the authorisation level by using a two-factor mechanism. The benefits are obvious: no longer are we dependent upon someone in IT to update an Active Directory group, and more sophisticated decisions are possible. ABAC systems are also real-time. As soon as HR updates a person’s position, their permissions are modified. The very next access request will be evaluated against the same policy set, but the new attributes will return a different decision.
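The evaluation described here can be sketched as a tiny deny-overrides decision function; the attribute names and policies are invented for illustration and are far simpler than a real policy language such as XACML:

```python
# Sketch of an attribute-based policy decision point (PDP). Each policy
# is a pair of predicates: when does it apply, and does it permit?
# Deny-overrides: every applicable policy must permit the request.
def evaluate(request, policies):
    for applies, permit in policies:
        if applies(request) and not permit(request):
            return "Deny"
    return "Permit"

policies = [
    # Financial Management access requires the finance department code...
    (lambda r: r["resource"] == "financial_mgmt",
     lambda r: r["department_code"] == "FIN"),
    # ...and access from an unregistered device requires two-factor auth.
    (lambda r: not r["registered_device"],
     lambda r: r["mfa"]),
]

request = {"resource": "financial_mgmt", "department_code": "FIN",
           "registered_device": True, "mfa": False}
```

Note how the real-time property falls out naturally: when HR changes the `department_code` attribute, the very next request is evaluated against the same policies but yields a different decision, with no group membership to update.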
So what’s holding us back from deploying ABAC systems? Firstly, there’s the “if it’s not broken don’t fix it” syndrome that encourages us to put up with less than optimal systems. Another obstacle is the requirement for a mature identity management system, since access to attributes is needed. There is also a need to manage policies, but business groups are often unwilling to take on the policy management task.
It’s incumbent on C-level management to grapple with these issues. They must set the strategy and implement the requisite change management. If they do, not only will they be reducing the risk profile associated with their access control system, they’ll open up new opportunities. It will be possible to more easily extend business system access to their business partners, and customers, for whom it is unsustainable to populate Active Directory groups.
ABAC has much to offer; all we need is a willingness to embrace it.
Identity Management and Access Management are on their way into the first line of defence when it comes to enterprise security. With changing architectural paradigms, and with the identities of people, things and services at the core of upcoming security concepts, identity and Access Governance is increasingly becoming a key discipline of IT security. This is true for traditional Access Governance within the enterprise, and it will become even more true for the digital business and the identities of customers, consumers, partners and devices.
Many organizations have already established Access Governance processes as a toolset for achieving compliance with regulatory requirements and for mitigating access-related risks on a regular basis. Identity and Access Management (IAM) processes accompany every identity through its complete life cycle within an organisation: the management of corporate identities and their access to resources is the combination of both IAM technology and the application of well-defined processes and policies. Traditional ways of adding Access Governance to these processes include the implementation of well-defined access request and approval workflows, the scheduled execution of recertification programs, and the analysis of assigned access rights for violations of Segregation of Duties (SoD) requirements.
While the initial cause for creating such a program is typically the need to comply with regulatory requirements, mature organisations realize that fulfilling such requirements is also a business need and a fundamental general benefit. The design and implementation of a well-thought-out, dynamic, efficient, flexible and swift identity and access management is the foundation layer for an efficient and proactive Access Governance system.
This requires appropriate concepts for both management processes and entitlement concepts: lean and efficient roles lead to simplified assignment rules. Intelligent approval processes, including pre-approvals as the default for many entitlements, reduce manual approval work and allow for easier certification. Embedding business know-how in the actual entitlement definitions allows more and more processes to be specified in a way that no longer requires any administrative or business interaction.
Aiming at automatable access assignment and revocation processes in fact reduces the need for various Access Governance processes. Once the processes are designed in a manner that prevents the assignment of undesirable entitlements to identities and makes sure that entitlements no longer needed are revoked, they render many checks and controls obsolete. On the other hand, the immediate and automated assignment of entitlements whenever required fulfils business requirements by making people effective and efficient from day one. Subsequent business process changes, and thus changes in job descriptions and their required access rights, can be propagated automatically without further manual steps.
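The automated assignment and revocation on a mover event can be sketched as a set difference over role-based entitlements; the role and entitlement names are invented for illustration:

```python
# Sketch of an automated mover process: when a job description changes,
# compute which entitlements to assign and which to revoke, so that no
# stale access survives the move - removing the need for a later
# recertification to catch it.
ROLE_ENTITLEMENTS = {
    "accountant": {"erp_read", "erp_post", "expense_approve"},
    "hr_partner": {"hr_read", "hr_update"},
}

def mover_delta(old_roles, new_roles):
    """Return (to_assign, to_revoke) for a role change."""
    old = set().union(*(ROLE_ENTITLEMENTS[r] for r in old_roles)) if old_roles else set()
    new = set().union(*(ROLE_ENTITLEMENTS[r] for r in new_roles)) if new_roles else set()
    return new - old, old - new
```

A leaver is simply the special case of an empty new role set: everything is revoked immediately rather than lingering until the next recertification cycle.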
Applying risk assessments to each individual entitlement is a crucial prerequisite when it comes to analysing assigned access. Once the criticality of all access is understood, a risk-oriented approach to recertification can be chosen (i.e. high-risk entitlements are recertified more often and faster), and time-based assignment of critical entitlements can be enforced by default.
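A risk-oriented recertification schedule can be sketched as follows; the intervals are assumptions for illustration, not a recommendation:

```python
from datetime import date, timedelta

# Illustrative risk-oriented recertification intervals: high-risk
# entitlements are recertified more often than low-risk ones.
RECERT_INTERVAL = {
    "low": timedelta(days=365),
    "medium": timedelta(days=180),
    "high": timedelta(days=90),
}

def recert_due(granted: date, risk: str, today: date) -> bool:
    """True if the entitlement must be recertified by now."""
    return today >= granted + RECERT_INTERVAL[risk]
```

For a grant made on the same day, a high-risk entitlement comes up for review after three months while a low-risk one waits a year, which concentrates the (costly, tedious) recertification effort where the risk actually is.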
Well-defined access management and identity management life cycle processes can help to ease the burden of the actual Access Governance exercises. Before looking into further, often costly and tedious measures, redesigning and rethinking assignment and revocation processes in an intelligent manner, within a lean entitlement model, might help improve efficiency and gain security.
Offering Windows 10 as a free upgrade was definitely a smart marketing decision for Microsoft. Everyone is talking about the new Windows and everyone is eager to try it. Many of my friends and colleagues have already installed it, so I didn’t hesitate long myself and upgraded my desktop and laptop at the first opportunity.
Overall, the upgrade experience has been quite smooth. I’m still not sure whether I find all the visual changes in Windows 10 positive, but hey, nothing beats free beer! I also realize that much more has been changed “under the hood”, including numerous security features Microsoft has promised to deliver in their new operating system. Some of those features (like built-in Information Rights Management functions or support for FIDO Alliance specifications for strong authentication) many consumers will probably not notice for a long time, if ever, so that’s a topic for another blog post. There are several things, however, which everyone will face immediately after upgrading, and not everyone will be happy with the way they are.
The most prominent consumer-facing security change in Windows 10 is probably Microsoft’s new browser – Microsoft Edge. Developed as a replacement for the aging Internet Explorer, it brings several new productivity features but also eliminates quite a few legacy technologies (such as ActiveX, browser toolbars and VBScript) that were a constant source of vulnerabilities. Just by switching from Internet Explorer to Edge, users are automatically protected from several major malware vectors. Edge does, however, include built-in PDF and Flash plugins, so it is potentially still exposed to two of the biggest known web security risks. It is possible to disable Flash Player under “Advanced settings” in the Edge app, which I would definitely recommend. Unfortunately, the upgrade also changes your default browser to Edge, so make sure you change it back to your favorite one, like Chrome or Firefox.
Another major change that in theory should greatly improve Windows security is the new Update service. In Windows 10, users can no longer choose which updates to download – everything is installed automatically. Although this greatly reduces the window of opportunity for an attacker to exploit a known vulnerability, an unfortunate side effect is that your computer will sometimes reboot automatically while you’re away from it. To prevent this, choose “Notify to schedule restart” under advanced update options – this way you’ll at least be able to pick a more convenient time for the reboot. Another potential problem is traffic charges: if you connect to the Internet over a mobile hotspot, updates can quickly eat away your monthly traffic allowance. To prevent this, mark that connection as “metered” under “Advanced options” in the network settings.
Windows Defender, the antivirus program already built into earlier Windows versions, has been updated in a similar spirit: in Windows 10, users can no longer disable it through the standard controls – after 15 minutes of inactivity, protection is re-enabled automatically. Naturally, this greatly improves anti-malware protection for users without a third-party antivirus product, but quite a few users are unhappy with this kind of “totalitarianism”, so the Internet is full of recipes for blocking the program completely. Needless to say, this is not recommended for most users; the only proper way of disabling Windows Defender is to install a third-party product that provides better anti-malware protection. The popular site AV-Comparatives maintains a list of security products compatible with Windows 10.
Since most anti-malware products hook into various low-level OS interfaces to operate securely, they are known to be affected the most by the Windows upgrade procedure. Some are silently uninstalled during the upgrade, others simply stop working; sometimes an active antivirus may even block the upgrade process or cause cryptic error messages. It is therefore important to uninstall anti-malware products before the upgrade and reinstall them afterwards (provided, of course, that they are known to be compatible with the new Windows; otherwise, now would be a great time to update or switch your antivirus). This ensures that the upgrade is smooth and doesn’t leave your computer unprotected.
Windows 10 comes with the promise of changing computing from the ground up. While much of this may be marketing speak, it is true for one central aspect of daily computing life: secure user authentication – for the operating system, but also for websites and services.
Microsoft is going beyond the traditional username-and-password paradigm and moving towards strong authentication mechanisms. Traditionally this was only possible with costly additional hardware, infrastructure and processes, e.g. smartcards; Microsoft now does it differently.
So, although the comparison might be difficult for some readers: improving security by implementing all the necessary mechanisms within the underlying system is quite similar to what Apple did when it introduced secure fingerprint authentication in recent models of the iPhone and iPad, beginning with the iPhone 5S (in contrast to the ridiculously inadequate implementations in several Android phones that were made public just recently).
The mechanism, called "Windows Hello", supports various authentication scenarios. With Windows 10 designed to run across a variety of devices, Microsoft is going for multi-factor authentication beyond passwords on mobile phones, tablets, mobile computers, the traditional desktop and more flavors of devices. One factor can be the device itself, which is enrolled (by associating an asymmetric key pair with it) to become part of the user authentication process.
The account settings dialog offers new, additional mechanisms for identifying valid users: authentication with user name and password can be augmented by alternative scenarios using PINs or gestures.
While passwords are typically used globally across all devices, PINs and gestures are specific to a single device and cannot be used in any other scenario.
Picture authentication records three gestures executed with any pointing device (e.g. stylus, finger or mouse) on an image of your choice (preferably of cats, as this is the internet). Reproducing them accurately logs you into that specific Windows 10 system without the need to type a password.
Actually, the combination of your device (something you have) plus a PIN or gesture (something you know) can be considered two-factor authentication for access to your data, e.g. in the OneDrive cloud service.
Other factors deployed for authentication include biometrics, such as a fingerprint scan wherever a fingerprint sensor is available, or an iris scan where a capable camera is available. Finally, "Windows Hello" adds facial recognition to the login process – although some users may find it scary to have a camera scanning the room while the login screen is active (which of course is nothing new for Xbox users with a Kinect having their living room scanned all day). The requirement for cameras that can detect whether they are looking at a real person in 3D rather than just a picture prevents simple spoofing attempts.
Once authenticated, a user can access a variety of resources through the Microsoft Passport mechanism, which uses asymmetric keys to access services and websites securely. A user who has successfully authenticated to Microsoft Passport through Windows Hello can then access information securely via applications acting on his behalf through the necessary APIs. This brings asymmetric key cryptography to all types of end users, from business users to home users and mobile phone users alike. Depending on the deployment scenario, the user data is stored within the organisation's Microsoft Active Directory infrastructure, within Microsoft Azure Active Directory for cloud deployments, or – for the home user – within the associated Microsoft Live account, e.g. at Outlook.com.
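The underlying idea – a device-held private key signs a challenge issued by the service, which verifies it against the stored public key – can be sketched with textbook RSA. This is a deliberately insecure toy (tiny primes, no padding, none of the actual protocol details of Passport or FIDO), purely to illustrate the asymmetric challenge–response pattern:

```python
import hashlib
import secrets

# Toy key pair: real keys are 2048+ bits and generated/stored in hardware.
p, q = 1000003, 1000033            # small primes, for illustration only
n, e = p * q, 65537                # public key (n, e) is shared with the service
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent stays on the device

def sign(challenge: bytes) -> int:
    """Device side: sign the service's challenge with the private key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Service side: check the signature using only the public key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

nonce = secrets.token_bytes(16)    # fresh, unpredictable challenge
assert verify(nonce, sign(nonce))
```

The crucial property: the service only ever stores a public key, so a breach of the service database reveals nothing that lets an attacker log in – quite unlike a leaked password database.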
Microsoft has been contributing to the standardisation of the FIDO (Fast IDentity Online) protocols for quite some time, and Windows 10 finally claims to support the current versions of the final protocol specifications. This will allow Windows 10 users to connect securely and reliably to Internet sites that provide services based on the FIDO standards, especially to prevent man-in-the-middle attacks and phishing. Until now, FIDO implementations have relied on support from browsers like Firefox or Chrome. Support for the FIDO standards built into the Windows 10 operating system itself might give them an enormous boost and create a win-win situation for security and the OS.
Windows 10 is now in its early weeks of deployment in the field. It will be interesting to see whether the new authentication mechanisms will be broadly understood as a real game changer for securing identity information and providing stronger authentication. Any appropriately secure way of getting rid of password authentication is a chance to improve overall user security and to protect identity data and every connected transaction. So each and every Windows 10 user should be encouraged to try the new authentication mechanisms, from biometrics to PINs and gestures to the FIDO standards via the Microsoft Passport framework. Why not, for once, use Windows and be a forerunner in security and privacy?