Amid escalating scandal, Uber publishes new data privacy statement

The scandal over Uber’s threats to target the personal lives of journalists continues to grow. Following BuzzFeed’s initial report that a senior Uber executive had described a $1 million campaign to smear and discredit Pando’s Sarah Lacy, more journalists have spoken out about threats from the company.

San Francisco Magazine senior editor Ellen Cushing revealed earlier today that sources had warned her that Uber executives might try to use her rider account data to attack her. That threat is supported by this paragraph in BuzzFeed’s report:

“The general manager of Uber NYC accessed the profile of a BuzzFeed News reporter, Johana Bhuiyan, to make points in the course of a discussion of Uber policies. At no point in the email exchanges did she give him permission to do so.”

As David Holmes reported earlier, that NY general manager, Josh Mohrer, is showing very little remorse: he tweeted a photo of Uber NY staff dancing to Taylor Swift’s “Shake It Off” with the caption “#Hatersgonnahate.”

Still, the company has clearly realized that other people do care that Uber is accessing passenger data in order to smear or unnerve them. Earlier today, Uber published a new data protection statement on its company blog:

We wanted to take a moment to make very clear our policy on data privacy, which is fundamental to our commitment to both riders and drivers. Uber has a strict policy prohibiting all employees at every level from accessing a rider or driver’s data. The only exception to this policy is for a limited set of legitimate business purposes. Our policy has been communicated to all employees and contractors.

Examples of legitimate business purposes for select members of the team include:

  • Supporting riders and drivers in order to solve problems brought to their attention by the Uber community.

  • Facilitating payment transactions for drivers.

  • Monitoring driver and rider accounts for fraudulent activity, including terminating fake accounts and following up on stolen credit card reports.

  • Reviewing specific rider or driver accounts in order to troubleshoot bugs.

The policy is also clear that access to rider and driver accounts is being closely monitored and audited by data security specialists on an ongoing basis, and any violations of the policy will result in disciplinary action, including the possibility of termination and legal action.

Uber’s business depends on the trust of the riders and drivers that use our technology and platform. The trip history of our riders is confidential information, and Uber protects this data from internal and external unauthorized access. As the company continues to grow, we will continue to be transparent about our policy and ensure that it is properly understood by our employees.

Apple publishes huge accountability report on privacy and security. But does it go far enough?

In the wake of criticism related to Labor Day Weekend’s celebrity photo leak, Apple has published a detailed look at its built-in security tools, the government’s requests for customer data, and its thinking on how much information should be shared with other parties.

The information contained within is much more detailed than what was offered during Tim Cook’s interview with Charlie Rose. The company’s efforts to raise awareness of security issues and of governmental limitations on so-called “transparency reports” are commendable, but Apple still seems to be offering consumers a rose-tinted view of the efficacy of its security measures.

Consider the company’s claim that it “doesn’t scan your communications, and we wouldn’t be able to comply with a wiretap order even if we wanted to.” That would be great… if its operating system developed specifically for vehicles didn’t include a handy feature that determines where you might want to travel based on your personal information. As I wrote when that vehicle-specific software was first announced in March:

Apple is driving deeper into Surveillance Valley. The company yesterday announced CarPlay, a tool that allows drivers to interact with its mapping, messaging, and music services via their car’s built-in controls. The tool’s flagship feature is its ability to “predict where you most likely want to go using addresses from your email, text messages, contacts, and calendars.”

Basically, this means that Apple will sift through your digital communications and personal data to save you the agony of entering an address. In doing so, the company might just create a service that will allow it to escape from cartographic purgatory — all it has to do is convince you that your privacy is less important than your laziness.

So it’s clear that Apple is either lying about its ability to scan and use its customers’ data, or it was lying about one of the features of its vehicle platform. It can’t have it both ways.

There are also concerns about the encryption Apple uses to secure iMessages. Though the company says it couldn’t hand over its customers’ messages even if it wanted to, researchers have cast doubt on the encryption methods used to secure those communications, as the Guardian first reported in October 2013. The company has disputed those claims, but given that encrypting messages properly is much harder than it seems, some caution is warranted.

The company says that it’s addressed those problems by ensuring it doesn’t have the key used to decrypt its customers’ personal information, but it can still provide the government with data backed up to its iCloud service. When the option of backing data up to iCloud is among the first things people see when they purchase an iPhone or update their devices to iOS 8, that small loophole suddenly seems much bigger.

Apple also trumpets its fingerprint scanner, which has been expanded for use in everything from its App Store to individual applications and the upcoming Apple Pay service, as an extra layer of security between would-be hackers and a consumer’s iPhone. That may be true now, but it might not be seen as such a boon in the future, as David Sirota explained at NSFWCORP:

Think about it in practical terms. Whereas in today’s password-based system you can protect yourself after a security breach with a simple password change, in tomorrow’s biometric-based system, you have far fewer – if any – ways to protect yourself after a security breach. That’s because you cannot so easily change your fingers, your eyes or your face. They are basically permanent. Yes, it’s true – security-wise, those biological characteristics may (and I stress “may”) be less vulnerable to a hack than a password. But if and when they are hacked in a society reorganized around biometric security systems, those systems allow for far less damage control than does a password-based system. In effect, your physical identity is stolen – and you can’t get it back.

In light of this, there’s a simple question: is the time it takes to punch in a code worth the hassle? I’d say two seconds and some finger flicks isn’t actually a hassle, whether it’s punching in an iPhone code, typing in a login password or entering an ATM pin number. More importantly, it sure isn’t a very high price to prevent your body from becoming someone else’s permanent skeleton key to your whole life.

It’s heartening to see Apple take such a comprehensive approach to informing its customers about its privacy policies and security practices. The new section of its website offers more information than the transparency reports most companies have relied on, and Cook has taken a strong public stance against the government’s attempts to gather so much personal data. The updates to iOS 8, which are supposed to make it next to impossible for data to be compromised, are also welcome, even though Apple’s track record with basic security features is mixed.

But this is still an incomplete look at what Apple does to protect its customers’ data. Until these issues are clarified, and until the company learns to address fears about its iffy security tools before making its biggest product announcement since it revealed the original iPad in 2010, it deserves continued questioning and criticism. This is our data, and until we take an active role in holding companies responsible for their actions, we’re never going to be secure.

[illustration by Brad Jonas for Pando]
