
KuppingerCole Blog

Redesigning access controls for IAM deployments?

Apr 20, 2015 by Martin Kuppinger

A few weeks ago I read an article in Network World, entitled “A common theme in identity and access management failure: lack of Active Directory optimization”. Essentially, it is about the fact that Microsoft Active Directory (AD) commonly needs some redesign when starting an IAM (Identity and Access Management) project. Maybe yes, and maybe no.

In fact, it is common that immature, chaotic, or even “too mature” (e.g. many years of administrative work leaving their traces with no one cleaning up) access control approaches in target systems impose a challenge when connecting them to an IAM (and Governance) system. However, there are two points to consider:

  1. This is not restricted to AD, it applies to any target system.
  2. It must not be allowed to lead to failures in your IAM deployment.

I have frequently seen this issue with SAP environments, unless they have already undergone a restructuring, e.g. when implementing SAP Access Control (formerly SAP GRC Access Control). In fact, the more complex and the older the target system, the more likely it is that the structure of access controls – be they roles, groups, or whatever – is anything but perfect.

There is no doubt that a redesign of the security model is a must in such situations. The question is just “when” this should happen (as Jonathan Sander, the author of the article mentioned above, also states). In fact, if we waited for all these security models to be redesigned, we would probably never see an IAM program succeed. Some of these redesign projects take years – and some (think of mainframe environments) will probably never take place. Redesigning the security model of an AD or SAP environment is a complex project in itself, despite all the tools supporting it.

Thus, organizations typically have to decide on the order of projects. Should they push their IAM initiative or do the groundwork first? There is no single correct answer to that question. Frequently, IAM projects are under so much pressure that they have to run first.

However, this must not end in the nightmare of a failing project. The main success factor in dealing with these situations is a well thought-out interface between the target systems and the IAM infrastructure for exposing entitlements from the target systems to IAM. At the IAM level, there must be a concept of roles (or at least a well thought-out concept for grouping entitlements), and there must be a clear definition of what is exposed from target systems to the IAM system. That is quite easy for well-structured target systems, where, for instance, only global groups from AD or business roles from SAP might be exposed, becoming the smallest unit of entitlement within IAM. These might appear as “system roles” or “system-level roles” (or whatever term you choose) in IAM.
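
To make that interface tangible, here is a minimal sketch in Python of how entitlements exposed from target systems might be represented as system roles in an IAM repository. The class and the sample group and role names are purely illustrative assumptions, not taken from any particular product.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SystemRole:
        """Smallest unit of entitlement exposed from a target system to IAM."""
        target_system: str   # e.g. "AD" or "SAP"
        native_id: str       # e.g. an AD global group DN or an SAP business role name
        display_name: str

    # Only coarse-grained entitlements cross the interface; fine-grained ACLs,
    # local groups or SAP single roles stay inside the target system.
    exposed_entitlements = [
        SystemRole("AD", "CN=GG-Finance-Readers,OU=Groups,DC=example,DC=com",
                   "Finance reports (read)"),
        SystemRole("SAP", "Z_BR_ACCOUNTS_PAYABLE", "Accounts payable clerk"),
    ]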

Without that ideal security model in the target systems, there might not be a single level of entitlements that becomes exposed to the IAM environment (and I’m talking about requests, not about the detailed analysis as part of Entitlement & Access Governance, which might include lower levels of entitlements in the target systems). There are two ways to solve that issue:

  1. Just define these entitlements, i.e. global groups, SAP business roles, etc. first as an additional element in the target system, map them to IAM, and then start the redesign of the underlying infrastructure later on.
  2. Or accept the current structure and invest more in mapping system roles (or whatever term you use) to the higher levels of entitlements such as IT-functional roles and business roles (not to be confused with SAP business roles) in your IAM environment.
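
The second option essentially means maintaining an explicit mapping between IAM-level roles and whatever exists in the target systems today. A minimal sketch of such a mapping, again with purely hypothetical role names, could look like this:

    # Hypothetical mapping: an IAM business role resolves, via IT-functional
    # roles, to the system roles that implement it in each target system.
    business_role_mapping = {
        "Accountant": {
            "Financial Reporting (IT-functional role)": [
                ("AD", "CN=GG-Finance-Readers,OU=Groups,DC=example,DC=com"),
                ("SAP", "Z_BR_ACCOUNTS_PAYABLE"),
            ],
        },
    }

    def entitlements_for(business_role):
        """Resolve a business role to the target-system entitlements to provision."""
        result = []
        for system_roles in business_role_mapping.get(business_role, {}).values():
            result.extend(system_roles)
        return result

    print(entitlements_for("Accountant"))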

Both approaches work and, from my experience, if you understand the challenge and put your focus on the interface, you will quickly be able to identify the best way to run your IAM program while still redesigning the security model of the target systems later on. In both cases, you will need a good understanding of the IAM-level security model (roles etc.) and you need to enforce this model rigidly – no exceptions here.



The New Meaning of “Hacking your TV”

Apr 13, 2015 by Alexei Balaganski

After a long list of high-profile security breaches that culminated in the widely publicized Sony Pictures Entertainment hack last November, everyone has gradually become used to this type of news. If anything, it only confirms what security experts have known for years: the struggle between hackers and corporate security teams is fundamentally asymmetrical. Regardless of its size and budget, no company is safe from such attacks, simply because a security team has to cover all possible attack vectors, while a hacker needs just a single overlooked one.

Another important factor is the ongoing trend in the IT industry towards rapidly growing interconnectivity and gradual erosion of network perimeters, driven by the adoption of cloud and mobile services, with trends such as “Industry 4.0” (connected manufacturing) and the IoT with its billions of connected devices adding to this erosion. All this makes protecting sensitive corporate data increasingly difficult, which is why the focus of information security is now shifting from protecting the perimeter towards real-time security intelligence and early detection of insider threats within corporate networks. Firewalls still play a useful role in enterprise security infrastructures, but, to put it bluntly, the perimeter is dead.

With that in mind, the latest news regarding last Wednesday’s hack of the French television network TV5Monde looks even more remarkable. Apparently, not only were their web site and social media accounts taken over by hackers calling themselves “Cybercaliphate” and claiming allegiance to the Islamic State; they also managed to disrupt the TV broadcasting equipment for several hours. Political implications of the hack aside, the first thing in the article linked above that attracted my attention was the statement of the network’s director Yves Bigot: “At the moment, we’re trying to analyse what happened: how this very powerful cyber-attack could happen when we have extremely powerful and certified firewalls.”

Now, we all know that analyzing and attributing a cyber-attack is a very difficult and time-consuming process, so it’s still too early to judge whether the attack was indeed carried out by a group of uneducated jihadists from a war-torn Middle-Eastern region or was the job of a hired professional team, but one thing that’s immediately clear is that it has nothing to do with firewalls. The technical details of the attack are still quite sparse, but according to this French-language publication, the hackers utilized a piece of malware written in Visual Basic to carry out their attack. In fact, it’s a variation of a known malware that is detected by many antivirus products, and its most probable delivery vector was an unpatched Java vulnerability or even an infected email message. Surely, the hackers probably needed quite a long time to prepare their attack, but they are obviously not highly skilled technical specialists and were not even good at hiding their tracks.

In fact, it would be completely safe to say that the only people to blame for the catastrophic results of the hack are TV5Monde’s own employees. After deploying their “extremely powerful firewalls” they seemingly didn’t pay much attention to protecting their networks from insider threats. According to this report, they went so far as to put sticky notes with passwords on walls and expose them on live TV!

We can also assume with some confidence that their other security practices were equally lax. For example, the fact that all their social media accounts were compromised simultaneously probably indicates that the same credentials were used for all of them (or at least that the segregation of duties principle isn’t part of their security strategy). And, of course, the complete disruption of their TV service is a clear indication that their broadcasting infrastructure simply wasn’t properly isolated from their corporate network.

We will, of course, be waiting for additional details and new developments to be published, but it is already clear that the Sony hack apparently wasn’t as educational for TV5Monde as security experts had probably hoped. Well, some people just need to learn from their own mistakes. You, however, don't have to.

The first thing every organization’s security team has to realize is that the days of perimeter security are over. The number of possible attack vectors on corporate infrastructure and data has increased dramatically, and the most critical ones (like compromised privileged accounts) actually operate from within the network. Combined with much stricter compliance regulations, this means that not having a solid information security strategy can have dramatic financial and legal consequences.

For a quick overview of the top 10 security mistakes with potentially grave consequences, I recommend having a look at the appropriately titled KuppingerCole Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid, published just a few days ago. And of course, you’ll find much more information on our website in the form of research documents, blog posts and webinar recordings.



AWS Announces Machine Learning Service

Apr 10, 2015 by Mike Small

AWS has recently announced the Amazon Machine Learning service – what is this and what does it mean for customers? 

Organizations now hold enormous quantities of data, and more data in a wide variety of forms is rapidly being generated.  Research has shown that organizations that base their decision making and processes on data are more successful than those that do not.  However, interpretation and analysis are needed to transform this data into useful information.  Data analysis and interpretation is not easy, and there are many tools on the market that help transform raw data into valuable information.

The challenge that most organizations face is that special skills are needed to analyze their data, and these skills are not widely available.  In addition, to make use of the data, the analysis and results need to be tightly integrated with the existing data sources and applications.  However, in general, software developers do not have the required data analysis skills.  AWS believes that its newly launched Amazon Machine Learning service will overcome these two challenges.

AWS leveraged the data analysis tools and techniques that were developed for the Amazon.com retail business when designing and building the ML service.  These are the underlying tools that try to anticipate the interests of buyers so as to direct them to the item they want and hence make a purchase more likely.  Given the success of Amazon.com, these tools and techniques ought to be very useful to organizations wanting to get closer to their retail customers.

In addition, according to AWS, the service can be used without expertise in the area of data analytics.  The service provides features that software developers can use to build a model based on imperfect data, to validate that the predictions from the model are accurate, and then to deploy that model in a way that can easily be integrated without change to existing applications.  AWS shared an anecdotal example in which their service was able to create a model in 20 minutes with the same accuracy as a model that took two software developers a month to create manually.

As you would expect, the new service is tightly integrated with AWS data sources such as Amazon S3, Amazon Redshift and Amazon RDS. It can be invoked to provide predictions in real time; for example, to enable an application to detect fraudulent transactions as they come in.
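
As a rough sketch of how such a real-time prediction might be invoked from application code, the following uses the boto3 SDK's machinelearning client; the model ID, endpoint URL and record fields are placeholders, and parameter names should be verified against the current Amazon Machine Learning documentation.

    import boto3

    # Placeholder IDs and values -- replace with your own model and endpoint.
    ml = boto3.client("machinelearning", region_name="us-east-1")

    response = ml.predict(
        MLModelId="ml-exampleModelId",
        Record={"amount": "1299.00", "country": "DE", "hour": "03"},
        PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
    )

    # The response contains the predicted label or score, e.g. for flagging
    # a potentially fraudulent transaction before it is processed.
    print(response["Prediction"])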

However, there are security and governance aspects to the use of this kind of tool.  The recent KuppingerCole Newsletter on Data Analytics discussed the problem of how to draw the line between improving service to customers and invading their privacy.  At what point does the aggregation and analysis of data become a threat rather than a benefit?  These are difficult questions to answer, and regulations and the law provide little help.

However, from the point of view of an organization that wants to get closer to its customers, provide better products, and become more competitive, data analytics is a powerful tool.  In the past the limiting factor has been the skills involved in the analysis, and machine learning is a way to overcome this limitation.

Using this form of analytics does have some risks.  Firstly, it is important to be sure of the accuracy of the data.  This is especially true if the data comes from a source outside of your control.  Secondly, can you understand the model and the conclusions from the analytics process?  An explanation would be nice; if there is none, be careful before you bet the farm on the results.  Correlations and associations are not cause and effect – make sure the results are valid.  Finally, are you sure that you have permission to use the data at all, and in that particular way?  Privacy rules can limit the use you can make of personal data.

Overall, Amazon Machine Learning provides an attractive solution to enable an organization to become more data driven.  However, it is important to set the business objectives for the use of this approach, to define the policies for its governance, and to determine the appetite for risks relating to its use.



Data Security Intelligence – better understanding where your risks are

Apr 08, 2015 by Martin Kuppinger

Informatica, a leader in data management solutions, introduced a new solution to the market today. The product, named Secure@Source, also marks Informatica’s move into the Data (and, in consequence, Database) Security market. Informatica already has data masking solutions in place, which are one element of data security. However, data masking is only a small step, and it requires awareness of the data that needs protection.

In contrast to traditional approaches to data security, Informatica – which talks about “data-centric security” – does not focus on technical approaches alone, e.g. encrypting databases or analyzing queries. As the name of the approach implies, the focus is on protecting the data itself.

The new solution builds on two pillars. One is Data Security Intelligence, which is about discovery, classification, proliferation analysis, and risk assessment for data held in a variety of data sources (and not only in a particular database). The other is Data Security Controls, which as of now includes persistent and dynamic masking of data plus validation and auditing capabilities.

The target is reducing the risk of leakage, attacks, and other data-related incidents for structured data held in databases and big data stores. The approach is to understand where the data resides and to apply adequate controls, particularly masking of sensitive data. This is all policy-based and includes e.g. alerting capabilities. The solution can also integrate data security events from other sources and work together with external classification engines.
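
Informatica’s masking engine is policy-driven and product-specific, but the basic idea of persistently masking classified fields while leaving the rest of a record usable can be illustrated generically; the following Python sketch uses invented field names and a deliberately simple masking rule, and is not based on the Secure@Source API.

    import hashlib

    # Hypothetical result of discovery/classification: which fields are sensitive.
    SENSITIVE_FIELDS = {"ssn", "credit_card", "email"}

    def mask_value(value):
        """Replace a sensitive value with a consistent, non-reversible token."""
        return "MASK-" + hashlib.sha256(value.encode()).hexdigest()[:10]

    def mask_record(record):
        """Persistently mask sensitive fields; leave everything else untouched."""
        return {key: mask_value(val) if key in SENSITIVE_FIELDS else val
                for key, val in record.items()}

    print(mask_record({"name": "Alice", "ssn": "123-45-6789", "country": "DE"}))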

These interfaces will also allow third parties to attach tokenization and encryption capabilities, along with other features. It will also support advanced correlation, for instance by integrating with an IAM solution, and thus adding user context, or by integrating with DLP solutions to secure the endpoint.

Informatica’s entry into the Information Security market is, in our view, a logical consequence of where the company is already positioned. While the solution provides deep insight into where sensitive data resides - in other words its source - and a number of integrations, we’d like to see a growing number of out-of-the-box integrations, for instance with security analytics, IAM, or encryption solutions. There are integration points for partners, but relying too much on partners in all these areas might make the solution quite complex for customers. While partner-based integration makes sense for IAM, security analytics such as SIEM or RTSI (Real-Time Security Intelligence) and other areas such as encryption might become integral parts of future releases of Informatica Secure@Source.

Anyway, Informatica is taking the right path for Information Security by focusing on security at the data level instead of focusing on devices, networks, etc. – the best protection is always at the source.



Really! Stop Your Employees Using Smart Phones!

Mar 23, 2015 by Amar Singh

Why not just switch off every electronic device and live in a cave?

I have gone on record on several occasions in support of the UK government’s cyber initiatives, including the Ten Steps to Cyber Security (Ten Steps) and the more recent Cyber Essentials. So I was a bit surprised when a business owner asked if he should backtrack on his recent “smart phone for all” bonus for his employees. When I asked him why, he mentioned an article he had just read in the Telegraph, titled “Spooks tell business: Consider stripping staff of smart phones to avoid cyber attacks”.

Cliches, oh cliches.

The same article then trots out the typical “your staff are the weakest link” cliché. Oh, and let’s not forget the bit about being blackmailed by spies! What better way to draw attention to an article than to use an attention-grabbing headline – even when it’s not quite accurate and somewhat misleading. What’s even more displeasing is the way the article tries to impress the reader by implying that this information has been “seen”, instead of mentioning that the Ten Steps is publicly available and accessible to every business. In fact, what the article is referring to is merely an updated and revised version of the UK Government’s advice that was first issued in 2012. So ditching the phone stops cyber attacks, right? Put simply: No.

Why? You may ask.

  • Most people are not going to ditch their smart phones. I know I will not.
  • In fact, most people now carry multiple smart devices, including a tablet, a phone and, more recently, smart wearables like watches.
  • Any organisation with a forward-thinking, revenue-generating strategy will already have adopted a mobile-first strategy.
  • Just a few days ago the much loved and sometimes loathed Uber was named the most valuable transport company in the world, even though it does not own any vehicles of its own. Could it be because it has a mobile-first strategy?
  • Cyber attackers will simply find some other way to attack a business. They could even consider reverting to the good old ways of targeting your laptops and desktop computers!

To be fair to the government, they appear to have taken a sensible and, I would argue, risk-based approach. Below is an excerpt of what they say: “Consider the balance between system usability and security.” Yes, there is the bit about external drives like USB sticks, which have been the cause of many a hack and of sleepless nights for security teams. I discuss the approach to this headache further down.

Next, Humans, you guessed it, will be Humans!

It’s getting very tiring, borderline exhausting, having to hear that staff, who happen to be mostly human for now, are to blame for all cyber security woes. This needs to stop. Stop declaring the human the primary problem. Yes, you and I, us humans that is, are part of the problem, but being flippant about it is not the way to solve it.

Again, the government have taken a balanced approach and do not bang on with “it’s your staff’s fault” pronouncements. At least that’s how I have read it. Here is what one paragraph from the Ten Steps document set says: “Without exception, all users should be trained on the secure use of their mobile device for the locations they will be working in.” To me that sounds more like: “You businesses out there – spend some money and educate and train all your users.” I concur.

Yes, Mobile is Insecure, but…

Mobile working is insecure. But any device, including your new TV and your old laptop, is insecure as long as it is switched on! Mobile working has several benefits that both employees and organisations recognise. So accept the facts and have a plan to prevent, detect and respond.

The Ten Steps document contains some good advice that I would encourage all to read and understand. In the meantime, I strongly recommend that every business owner:

  • Stop blaming the employee for all your cyber security problems.
  • Support the employee with the necessary technology to ensure that ‘mistakes’ cannot happen easily.
  • Yes, there is sufficient technology available today that can help prevent and detect cyber attacks.
  • Some technologies to consider are automatic VPN connectors, micro-virtualisation technologies and encryption technologies. Please engage KC for more information on how we can help you.
  • What the government is actually saying is: be pragmatic, understand the risks, and educate the users.
  • Last, but not least, accept the facts, review the threats specific to your company, understand the risks and have a plan to prevent, detect and respond.

Finally! Cut the Government a Break! Seriously.

To be fair to the government, it is quite hard to produce a document that fits every organisation’s risk profile. The analogy of “one size fits all” comes to mind. In my own customer dealings I have had more senior board members and business owners ask me about cyber security as a result of the UK government’s efforts to make cyber security a board issue. Finally, please take a risk-based approach and spend some time understanding the threats and the attackers that would want to target your company. Cyber or not, this is common-sense threat and risk management. There is no point spending on technology and preparing for spies monitoring your employees if you, for example, produce regular cleaning products. In such a company it would make more sense to spend effort and time on preventing insiders from leaking financial or human resources data. That’s what I recommend, and that’s actually what the government is trying to say.

You can read about the UK Government’s Ten Steps to Cyber Security here.



Just say it! User Experience Trumps Security!

Mar 23, 2015 by Amar Singh

I was about to file The Register’s mobile security article under “just another article on mobiles and security” when I noticed what I believe to be a half-witted quote.

So, for context: The Register published an article titled “Banks defend integrity of passcode-less TouchID login”. The banks behind the quote in question are the Royal Bank of Scotland (RBS) and NatWest.

What’s the half-witted quote then?

I will address the first two statements in this blog piece.

“We do everything we can to make banking secure for our customers and we've tested this to make sure it was safe before launch. Other banking institutions across the world are also using this technology with their customers.”

Where is the proof that the above statement is true? The banks could have chosen to obtain the BSI Kitemark for Secure Digital Transactions. Barclays appears to be the only bank that has some of its products approved by the BSI (you can check this on the BSI site).

“API spoofing and access to data held in the secure keychain is only possible on a jail-broken iPhone. We strongly advise customers against tampering with the security of their phone.”

Really! Blaming it on jail-broken iPhones and users. Most non-technical customers would not, in my opinion, know whether their iPhone is jail-broken or not. In addition, the banks appear to acknowledge that there is a problem by admitting that jail-broken phones are susceptible. So why not configure their app to check for and block installation on jail-broken iPhones?

Also, maybe the banks and their outsourcer should have read the recent Mobile Threat Assessment report from FireEye, which discusses the increasing ease with which hackers can bypass Apple’s strict review process and invoke risky private APIs. This on non-jail-broken iPhones! (The report is titled OUT OF POCKET: A Comprehensive Mobile Threat Assessment of 7 Million iOS and Android Apps.)

Be Nice to the Banks… Come on. Surely they know what they are doing, right?

Let’s give the banks the benefit of the doubt for a minute. They value their customers, right? During their countless requirements workshops, user experience would have been at the forefront of all their requirements. Right?

“What would our users want?” may have been one of their primary questions during their multiple brainstorming sessions. Surely security would have come up during these discussions, right?

So, what about security?

Now, I know banks, like most organisations, have to balance security against cost. Banks have a risk appetite and tolerance and must make trade-offs when it comes to security versus usability. The 4-digit PIN is a great example. I get that view and in many cases agree with that approach.

I am guessing there must have been some trade-off with this Touch ID based app too. They must have assumed that there will be those who will hack and abuse the system for monetary gain. However, I am guessing that, with their compute and brain power, they calculated the likelihood and the financial impact to be negligible. The risk: acceptable and within their appetite.

So why not come out with one of the first Touch ID only banking apps!

On the other hand, it could just be that no one actually thought about security! Maybe because they wrongly assumed that Apple products are super secure, or maybe they simply forgot about it altogether.

What’s truly disappointing is that the bank had an opportunity to get both user experience and security right without necessarily sacrificing either. Sadly, it seems, security was again a second thought.



De-Mail: Now with End-to-end Encryption?

Mar 10, 2015 by Alexei Balaganski

In case you don’t know (and unless you live in Germany, you most probably don’t), De-Mail is an electronic communications service maintained by several German providers in accordance with the German e-government initiative and the De-Mail law, which declares it a secure form of communication. The purpose of the service is to complement traditional postal mail for the exchange of legal documents between citizens, businesses and government organizations.

Ever since its original introduction in 2012, De-Mail has been struggling to gain acceptance among the German public. According to the latest report, only around 1 million private citizens have registered for the service, which is way below the original plans and by far not enough to reach the “critical mass”. That is actually quite understandable, since for a private person the service doesn’t offer much in comparison with postal mail (in fact, it even makes certain things, such as legally declining to receive a letter, no longer possible). Major points of criticism include incompatibility with regular e-mail and other legal electronic communications services, privacy concerns regarding the personal information collected during the identification process, as well as an insufficient level of security.

Now the German government is attempting once more to address the latter problem by introducing end-to-end encryption. The plan is to rely on the OpenPGP standard, which will be introduced by all cooperating providers (Deutsche Telekom, Mentana-Claimsoft and United Internet, known for its consumer brands GMX and Web.de) in May. According to Thomas de Maizière, Germany’s Federal Minister of the Interior, adding PGP support will provide an easy and user-friendly way of increasing the security of the De-Mail service. The reaction from security experts and the public, however, wasn’t particularly enthusiastic.

Apparently, to enable this new functionality, users will have to install a browser plugin. The solution is based on an open-source JavaScript OpenPGP implementation and is currently available for Chrome and Firefox browsers only. According to publicly available statistics, this leaves over 60% of all German internet users out of luck, since their browsers are not supported. An even bigger problem is the lack of support for mobile apps or desktop mail clients.

Unfortunately, no integration of the plugin with the De-Mail user directory is offered, which means that users are supposed to tackle the biggest challenge of any end-to-end encryption solution – secure and convenient key exchange – completely on their own. In this regard, De-Mail looks no better than any other conventional email service, since PGP encryption is already supported by many mail applications in a completely provider-agnostic manner.
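
In practice that means the sender must already hold a verified copy of the recipient’s public key before anything can be encrypted. A minimal sketch using the python-gnupg wrapper (assuming a local GnuPG installation and an already imported, independently verified recipient key) makes that dependency obvious:

    import gnupg

    # Assumes GnuPG is installed and the recipient's public key has already been
    # obtained, verified out of band and imported -- exactly the step that
    # De-Mail leaves entirely to the user.
    gpg = gnupg.GPG()

    encrypted = gpg.encrypt("Vertragsunterlagen im Anhang.",
                            recipients=["recipient@example.de"])
    if not encrypted.ok:
        # Typical failure: no (trusted) public key found for the recipient.
        raise RuntimeError(encrypted.status)

    print(str(encrypted))  # ASCII-armored ciphertext, ready to be sent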

Another issue is the supposed ease of use of the new encryption solution. In fact, De-Mail has already been offering encryption based on S/MIME, but it couldn’t get enough traction because “it was too complicated”. However, considering the effort necessary for a secure PGP key exchange, PGP can hardly be called an easier alternative.

Finally, there is a fundamental question with many possible legal consequences: how does one combine end-to-end encryption with the requirement for a third party (the state) to be able to verify the communication’s legitimacy? In fact, the very same de Maizière is known for opposing encryption and advocating the necessity for intelligence agencies to monitor all communications.

In any case, De-Mail is here to stay, at least as long as it is actively supported by the government. However, I have serious doubts that attempts like this will have any noticeable impact on its popularity. Legal issues aside, the only proper way of implementing end-to-end communications security is not to slap another layer on top of the aging e-mail infrastructure, but to implement new protocols designed with security in mind from the very beginning. And the most reasonable way to do that is not to try to reinvent the wheel on your own, but to look at existing developments like, for example, the Dark Mail Technical Alliance. What the industry needs is a cooperatively developed standard for encrypted communications, similar to what the FIDO Alliance has managed to achieve for strong authentication.

Reconciling conflicting views on encryption within the government would also help a lot. Pushing for NSA-like mass surveillance of all internet communications and advocating the use of backdoors and exploits by the same people that now promise increased security and privacy of government services isn’t going to convince either security experts or the general public.



Migrating IT Infrastructure to the Cloud

Mar 10, 2015 by Mike Small

Much has been written about “DevOps”, but there are other ways for organizations to benefit from the cloud. Moving all or part of their existing IT infrastructure and applications to the cloud could provide capital savings and, in many cases, increase security.

The cloud has provided an enormous opportunity for organizations to create new markets, to experiment and develop new applications without the need for upfront investment in hardware, and to create disposable applications for marketing campaigns. This approach is generally known as DevOps: the application is developed and deployed into operation in an iterative manner, made possible by an easily expandable cloud infrastructure.

While DevOps has produced some remarkable results, it doesn’t help with the organization’s existing IT infrastructure. There are many reasons why an organization could benefit from moving some of its existing IT systems to the cloud. Cost is one, but there are others, including the need to constantly update hardware and to maintain a data centre. Many small organizations are limited to operating in premises that are not suitable as a datacentre, for example in offices over a shopping mall.  Although the organization may be wholly dependent upon its IT systems, it may have no control over sprinkler systems, power, telecommunications, or even guaranteed 24x7 access to the building. It may be at risk of theft as well as fire and other incidents outside of its control. These are all factors which are well taken care of by cloud service providers (CSPs) hosted in Tier III data centres.

However, moving existing IT systems and applications to the cloud is not that simple. These legacy applications may depend on very specific characteristics of the existing infrastructure, such as IP address ranges or a particular technology stack, which may be difficult to reproduce in standard cloud environments. It is also important for customers to understand the sensitivity of the systems and data that they are moving to the cloud and the risks that these may be exposed to. Performing a cloud readiness risk assessment is an essential prerequisite for an organization planning to use cloud services. Many of the issues around this relate to regulation and compliance and are described in KuppingerCole Analysts' View on Compliance Risks for Multinationals.

Against this background, it was interesting to hear of a US-based CSP, dinCloud, that is focussing on this market. dinCloud first brought a hosted virtual desktop to the market. They have now expanded their offering to include servers, applications and IT infrastructure. dinCloud claims that its “Business Provisioning” service can help organizations quickly and easily migrate all or part of their existing infrastructure to the cloud.

This is a laudable aim; dinCloud claims some successes in the US and intends to expand worldwide. However, some of the challenges that it will face in Europe are the same as those currently faced by all US-based CSPs – a lack of trust. Some of this has arisen through the Snowden revelations, and the ongoing court case in which Microsoft in Ireland is being required to hand over emails to the US authorities is fanning these flames. On top of this, the EU privacy regulations, which are already strict, face being strengthened, and in some countries certain kinds of data must remain within the country. These challenges are discussed in Martin Kuppinger’s blog Can EU customers rely on US Cloud Providers?

This is an interesting initiative, but to succeed in Europe dinCloud will need to win the trust of its potential customers. This will mean expanding its datacentre footprint into the EU/EEA and providing independent evidence of its security and compliance. When using a cloud service, a cloud customer has to trust the CSP; independent certification, balanced contracts taking the specifics of local regulations and requirements into account, and independent risk assessments are the best ways of allowing the customer to verify that trust.



Facebook profile of the German Federal Government thwarts efforts to improve data protection

Mar 05, 2015 by Martin Kuppinger

There is a certain irony in the fact that the Federal Government has launched a profile on Facebook almost simultaneously with the change of the social network’s terms of use. While the Federal Minister of Justice, Heiko Maas, is backing consumer organizations in their warnings about Facebook, the Federal Government has taken the first step in setting up its own Facebook profile.

With the changes in the terms of use, Facebook has massively expanded its ability to analyze the data of its users. Data that users leave behind on pages outside of Facebook is also stored, for use in targeted advertising and possibly other purposes. On the other hand, the user now has the possibility of better managing the personal settings for his/her own privacy. The bottom line: Facebook is collecting even more data, in a manner that is hard to control.

As Federal Minister of Justice Maas says, "Users do not know which data is being collected or how it is being used."

For this reason alone, it is difficult to understand why the Federal Government is taking this step right at this moment. After all, it has been able to do its work so far without Facebook.

With its Facebook profile, the Federal Government is ensuring that Facebook is, for example, indirectly receiving information on the political interests and preferences of the user. Since it is not clear just how this information could be used today or in the future, it is a questionable step.

If one considers the Facebook business model, this step can also have an immediate negative impact. Facebook's main source of income is targeted advertising based on the information that the company has collected about its users. With the additional information that will be available via the Federal Government's Facebook profile, interest groups can, in the future, selectively advertise on Facebook to pursue their goals.

Here it is apparent, as with many businesses, that the implications of commercial Facebook profiles are frequently not understood. On the one hand, there is the networking with interested Facebook users. Their value is often overrated – these are not customers, not leads and NOT voters, but at best people with a more or less vague interest. On the other hand, there is the information that a company, a government, a party or anyone else with a Facebook profile discloses to Facebook: who is interested in my products, my political opinions (and which ones), or my other statements on Facebook?

The Facebook business model is exactly that – to monetize this information – today more than ever before with the new business terms. For a company, this means that the information is also available to the competition. You could also say that Facebook is the best way of informing the competition about a company's (more or less interested) followers. In marketing, but also in politics, one should understand this correlation and weigh whether the added value is worth the implicit price paid in the form of data that is interesting to competitors.

Facebook may be "in" - but it is in no way worth it for every company, every government, every party or other organization.

End users have to look closely at the new privacy settings and limit them as much as possible if they intend to stay on Facebook. In the meantime, a lot of the communication has moved to other services like WhatsApp, so now is definitely the time to reconsider the added value of Facebook. And sometimes, reducing the amount of communication and information that reaches one is also added value.

The Federal Government should in any case be advised to consider the actual benefits of its Facebook presence. 50,000 followers are not 50,000 voters by any means - the importance of this number is often massively overrated. The Federal Government has to be clear about the contradiction between its claim to strong data protection rules and its actions. To go to Facebook now is not even fashionable any more - it is plainly the wrong step at the wrong time.

According to KuppingerCole, marketing managers in companies should also analyze exactly what price they are paying for the anticipated added value of a Facebook profile – one often pays more while the actual benefits are much less. Or has the number of customers increased accordingly in the last fiscal year because of 100,000 followers? A Facebook profile can definitely have its uses. But you should always check carefully whether there is truly added value.



Organization, Security and Compliance for the IoT

Mar 03, 2015 by Mike Small

The Internet of Things (IoT) provides opportunities for organizations to get closer to their customers and to provide products and services that are more closely aligned to their needs. It provides the potential to enhance the quality of life for individuals through better access to information and more control over their environment. It makes possible more efficient use of infrastructure through more precise control based on detailed and up-to-date information. It will change the way goods are manufactured by integrating manufacturing machinery, customers and partners, allowing greater product customization as well as optimizing costs, processes and logistics.

However, the IoT comes with risks. The US Federal Trade Commission recently published a report on a workshop it held on this subject. This report, which is limited in its scope to IoT devices sold to or used by consumers, identifies three major risks: enabling unauthorised access and misuse of personal information, facilitating attacks on other systems, and creating risks to personal safety. In KuppingerCole’s view, the wider risks are summarized in the following figure:

Organizations adopting this technology need to be aware of and manage these risks. As with most new technologies there is often a belief that there is a need to create a new organizational structure. In fact it is more important to ensure that the existing organization understands and addresses the potential risks as well as the potential rewards.

Organizations should take a well-governed approach to the IoT by clearly defining the business objectives for its use and by setting constraints. The IoT technology used should be built to be trustworthy and should be used in a way that is compliant with privacy laws and regulations. Finally, the organization should be able to audit and assure its use of the IoT.

The benefits from the IoT come from the vast amount of data that can be collected, analysed and exploited. Hence the challenges of Big Data governance, security and management are inextricably linked with the IoT. The data needs to be trustworthy, and it should be possible to confirm both its source and integrity. The infrastructure used for the acquisition, storage and analysis of this data needs to be secured; yet the IoT is being built using many existing protocols and technologies that are weak and vulnerable.
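
One lightweight, widely used way of letting a backend confirm both the source and the integrity of a reading is a keyed hash (HMAC) appended by the device itself; the sketch below uses made-up field names and a single pre-shared key purely for illustration.

    import hashlib
    import hmac
    import json

    # Hypothetical pre-shared key provisioned into the device; real deployments
    # would prefer per-device keys held in secure hardware.
    DEVICE_KEY = b"per-device-secret"

    def sign_reading(reading):
        """Device side: attach an HMAC over the canonicalized reading."""
        payload = json.dumps(reading, sort_keys=True).encode()
        reading["mac"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return reading

    def verify_reading(reading):
        """Backend side: recompute the HMAC and compare in constant time."""
        received_mac = reading.pop("mac", "")
        payload = json.dumps(reading, sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(received_mac, expected)

    message = sign_reading({"device_id": "sensor-042", "temp_c": 21.5})
    print(verify_reading(message))  # True unless the payload or key was tampered with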

The devices which form part of the IoT must be designed, manufactured, installed and configured to be trustworthy. The security built into these devices for the risks identified today needs to be extensible to be proof against future threats, since many of these devices will have lives measured in decades. There are existing low-power secure technologies and standards that have been developed for mobile communications and banking, and these should be appropriately adopted, adapted and improved to secure the devices.

Trust in the devices is based on trust in their identities and so these identities need to be properly managed. There are a number of challenges relating to this area but there is no general solution.

Organizations exploiting data from the IoT should do this in a way that complies with laws and regulations. For personal information, particular care should be given to aspects such as ensuring informed consent, data minimisation and information stewardship. There is a specific challenge in ensuring that users understand and accept that ownership of the device does not imply complete “ownership” of the data. It is important that the lifecycle of data from the IoT is properly managed, from creation or acquisition to disposal. An organization should have a clear policy which identifies which data needs to be kept, why it needs to be kept and for how long. There should also be a clear policy for the deletion of data that is not retained for compliance or regulatory reasons.

This article has originally appeared in the KuppingerCole Analysts' View newsletter.



